Modes of Uncertainty

Modes of Uncertainty Anthropological Cases

LIMOR SAMIMIAN-DARASH
PAUL RABINOW

The University of Chicago Press Chicago and London

Limor Samimian-Darash is assistant professor at the Federman School of Public Policy and Government at the Hebrew University of Jerusalem. Paul Rabinow is professor of anthropology at the University of California, Berkeley. He is the author or coauthor of many books, including, most recently, Designs on the Contemporary, Demands of the Day, and Designing Human Practices, all published by the University of Chicago Press.

© 2015 by The University of Chicago
All rights reserved. Published 2015.
Printed in the United States of America

24 23 22 21 20 19 18 17 16 15

1 2 3 4 5

ISBN-13: 978-0-226-25707-5 (cloth)
ISBN-13: 978-0-226-25710-5 (paper)
ISBN-13: 978-0-226-25724-2 (e-book)
DOI: 10.7208/chicago/9780226257242.001.0001

Library of Congress Cataloging-in-Publication Data
Modes of uncertainty : anthropological cases / [edited by] Limor Samimian-Darash and Paul Rabinow.
pages cm
Includes bibliographical references and index.
ISBN 978-0-226-25707-5 (cloth : alk. paper) — ISBN 978-0-226-25710-5 (pbk. : alk. paper) — ISBN 978-0-226-25724-2 (e-book)
1. Risk—Sociological aspects. 2. Uncertainty—Social aspects. 3. Uncertainty—Economic aspects. I. Samimian-Darash, Limor. II. Rabinow, Paul.
HM1101.M635 2015
361.1—dc23
2014040552

∞ This paper meets the requirements of ANSI/NISO Z39.48-1992 (Permanence of Paper).

CONTENTS

Introduction by Limor Samimian-Darash and Paul Rabinow / 1

ECONOMICS AND ENTREPRENEURIALISM

PAT O'MALLEY
ONE / Uncertainty Makes Us Free: Insurance and Liberal Rationality / 13

EITAN WILF
TWO / The "Cool" Organization Man: Incorporating Uncertainty from Jazz Music into the Business World / 29

NATASHA DOW SCHÜLL
THREE / The Gaming of Chance: Online Poker Software and the Potentialization of Uncertainty / 46

SECURITY AND HUMANITARIANISM

MEG STALCUP
FOUR / Policing Uncertainty: On Suspicious Activity Reporting / 69

REBECCA LEMOV
FIVE / Guantánamo's Catch-22: The Uncertain Interrogation Subject / 88

CAROL A. KIDRON
SIX / Global Humanitarian Interventions: Managing Uncertain Trajectories of Cambodian Mental Health / 105

GAYMON BENNETT
SEVEN / The Malicious and the Uncertain: Biosecurity, Self-Justification, and the Arts of Living / 123

ENVIRONMENT AND HEALTH

ADRIANA PETRYNA
EIGHT / What Is a Horizon? Navigating Thresholds in Climate Change Uncertainty / 147

FRÉDÉRIC KECK
NINE / Sentinel Devices: Managing Uncertainty in Species Barrier Zones / 165

AUSTIN ZEIDERMAN
TEN / Spaces of Uncertainty: Governing Urban Environmental Hazards / 182

Afterword by Paul Rabinow and Limor Samimian-Darash / 201
Notes / 209
References / 221
List of Contributors / 239
Index / 243

Introduction
LIMOR SAMIMIAN-DARASH AND PAUL RABINOW

Why Uncertainty?

In this book, we identify uncertainty as a central problem in contemporary anthropological thought and practice. The studies brought together in this collection explore how uncertainty emerges in various realms of human activity and is shaped in knowledge domains, governmental technologies, and forms of subjectivity. The book lays out a distinctive approach to the phenomena of risk and uncertainty, bringing an enriched conceptual framework to bear on the analysis of how these phenomena are experienced and managed.

We argue that it is vital today to distinguish among danger, risk, and uncertainty, both analytically and anthropologically. To that end, we present a series of concepts and cases that clarify emergent problem spaces in a variety of domains as well as the way these problems are currently addressed—or not addressed—by relevant scholarship and those charged with managing risk. In our view, the scholarly fields that have historically focused on risk assessment and management are inadequate in the face of many contemporary problems, in part because the world is increasingly being populated by forms, practices, and events of uncertainty that cannot be reduced to risk.

We make the case that scholars should not focus solely on the appearance of new risks and dangers in the world, which no doubt abound, but should also treat uncertainty itself as a problem and examine the forms of governing and experience that are emerging in relation to it. The studies in this book, with contributions from various fields—finance and markets, security and humanitarianism, and health and environment—enable consideration of the forms of knowledge and technologies as well as differing modes of subjectivity that have developed beyond risk assessment.


What Is the Anthropology of Uncertainty?

Danger and Certainty

The cultural approach to danger and risk, elaborated by Mary Douglas and Aaron Wildavsky (1982) in Risk and Culture, proposes that each society selects specific dangers for attention on the basis of particular cultural criteria. From this perspective, danger is an ontological hazard that is culturally selected and evaluated. Additionally, in Purity and Danger, Douglas (1966) describes a link between risk and taboo—how taboo (as a socially constructed norm) preserves social boundaries. Similarly, in Risk and Blame, Douglas (1994) argues that danger and blame are linked such that danger is perceived as resulting from undesirable behavior. Most important from our perspective, rather than investigating how cultures cope with risks, these studies demonstrate how they attempt to create certainty.

In this book, we depart from this tradition and construct a framework in which danger, risk, and uncertainty are distinct concepts. The reconceptualization of these related issues informs the studies in the book and contributes to a better understanding of the contemporary problem of uncertainty and of the governing mechanisms it elicits.

Risk and Modernity

The risk society approach, classically laid out in Ulrich Beck's Risk Society (1992) and Reflexive Modernization (Beck, Giddens, and Lash 1994), proposes distinguishing between danger, recognized in traditional societies, and risk, created by reflexive modernization. Anthony Giddens expands this notion, asserting that "traditional cultures didn't have a concept of risk because they didn't need one" (2000a, 40). In Giddens's contestable view of these societies, hazards were associated with the past and the loss of faith, whereas risk is linked to modernization and the desire to control the future. "Risk is the mobilizing dynamic of a society bent on change, that wants to determine its own future rather than leaving it to religion, tradition, or the vagaries of nature" (42).

The risk society approach, then, mainly deals with the production and transformation of "real" risks in "society" (understood as a totality and as an agent) and with society's attempts to control the future, which render it more "dangerous." Pat Caplan's edited collection Risk Revisited (2000) criticizes the universality of risk in social theory and asks why risk has become such a central issue not only in the world but also in the social sciences, particularly sociology. The alternative that collection offers is a set of relatively distinctive explanations of risk drawn from a variety of anthropological cases.

In Modes of Uncertainty, our concern is not merely with the diverse perceptions, or cultural selection, of risk presented in various anthropological contexts. Drawing on Niklas Luhmann's work, we address risk and uncertainty as concepts rather than as things in the world, through which certain knowledge and practices circulate and are made available (both to "societies" and to social theorists). In Risk: A Sociological Theory (1993) and Observations on Modernity (1998), Luhmann treats risk not as an object in a first-order observation but as a concept in a second-order observation. Risk is thus defined not as the obverse of security or as a synonym for insecurity but, rather, is distinguished from the concept of danger. Whereas danger is external to the system, risk is dependent on the decisions of the system:

To do justice to both levels of observation, we will give the concept of risk another form with the help of the distinction of risk and danger. The distinction presupposes . . . that uncertainty exists in relation to future loss. There are then two possibilities. The potential loss is either regarded as a consequence of the decision, that is to say, it is attributed to the decision. We then speak of risk—to be more exact, of the risk of decision. Or the possible loss is considered to have been caused externally, that is to say, it is attributed to the environment. In this case we speak of danger. (Luhmann 1993, 21–22)

In this approach, the focus is not on the quality of new dangers in the world (universal or not, calculable or incalculable) but on the mode of observing risk as conceptually inherent to modern systems and on how each decision or abstention from decision concerning the future determines risk. In this book, we propose an additional distinction with regard to the concept of uncertainty. We ask how observations about uncertainty come to circulate in the contemporary world, constituting a new problematic field for which certain policies emerge as solutions.

From Risk to Uncertainty

Other scholarly work approaches uncertainty as comparable to risk or insecurity. That is, the greater the uncertainty, the greater the risk and lack of security, and vice versa. In some studies, however, uncertainty is distinguished from risk and presented as something that exists in situations of incalculable risks (as François Ewald [2002] proposed in his use of the term nonrisk). Yet even then, uncertainty is presented as an object that expresses a change in the quality of risks in the world and not as a different conceptualization associated with a distinct form of governing and subjectivation. Similarly, in his later work World at Risk, Beck (2009) points to the emergence of a new problem in the risk society, referring to hazards "which nowadays often cannot be overcome by more knowledge but are instead a result of more knowledge" (2009, 5). Though he takes steps toward defining a new problem, Beck still uses the term risk to conceptualize it.

The essays in this book call for conceptualizing uncertainty to better confront contemporary problems. Moreover, they take a problematization approach (following Michel Foucault), and, instead of trying to provide or identify ultimate solutions for coping with risk and uncertainty (e.g., as policy makers do), they take technologies and experiences as objects of research and analysis and ask how they emerge in response to the problem of uncertainty: What kinds of truth claims are advanced about the future, what interventions are considered appropriate, and what modes of subjectivity are produced within this problematization?

Governmentality and Risk Assessment

In company with Mitchell Dean (1999) and Pat O'Malley (2004b), the contributors to this book move beyond the general narrative of an ontological shift—the transition from calculable to global uncontrolled risks. Dean and O'Malley draw on Michel Foucault's work on "the art of government" and tend to focus on a central governmental technology—risk technology—that turns something into risk to make it governable. That is, risk technology converts uncertainty into possibilities, accidents (assessable risks) over which management and control are possible. This technology has been applied to many areas of research, such as insurance, old age, psychiatry, pregnancy, AIDS, crime prevention, and drug use.
Michael Power expands on this theme in his Risk Management of Everything (2004) and Organized Uncertainty (2007), discussing the spread of risk-assessment technology in social and economic institutions. Risk management not only renders organizations extremely accountable and "preoccupied with managing their own risks"—in fact, obsessed with risk—but also results in the emergence of secondary risks. Thus "this trend is resulting in a dangerous flight from judgement and a culture of defensiveness that create their own risks for organisations in preparing for, and responding to, a future they cannot know" (Power 2004, 14).

Science and technology studies scholars see as problematic the technoscientific assumption that more knowledge about future risk, by itself, allows for prevention or control of that risk. Moreover, some studies, such as Sheila Jasanoff's The Fifth Branch: Science Advisers as Policymakers (1990) and Brian Wynne's Risk Management and Hazardous Waste (1987), ask whether the terms risk and risk assessment and control are still adequate descriptors of the "post-normal science" problem. Challenging the basic discourse of modern science and technology, which assumes that predictive control can (at least, theoretically) be fully attained, they argue that the world has become more uncertain and, thus, that a better strategic approach to future eventualities must be developed.

Modes of Uncertainty contributes to the discussion of risk as a technology and a form of governmentality by moving the focus from the control of risk to the management of uncertainty. Thus, it joins other recent studies inquiring into technologies that emerge in relation to a new problem space that we have defined as uncertainty rather than risk. The essays in Embracing Risk (Baker and Simon 2002) and in Biosecurity Interventions (Lakoff and Collier 2008) exemplify such studies; using the term nonrisk, they present precaution and preparedness as new forms of governing uncertainty that supersede the rationale of prevention, which was adequate for risk.

The Management of Uncertainty

A distinction between risk technologies and technologies of uncertainty was initially suggested during the course of Samimian-Darash's anthropological study of Israeli preparedness for future biothreats, notably, pandemic flu (Samimian-Darash 2013). In that case, uncertainty infused a situation in which not only had a pandemic not yet taken place but also, relying on past information, experts could not predict what it would look like or even what virus strain would cause it. This scenario raises the question of what kinds of preparedness practices to put forward before the nature of the actual event is known.
In this case, three main technologies were proposed to deal with the problem of uncertainty: the purchasing and stockpiling of antiviral drugs, the enactment of attribution scenarios, and the operation of a syndromic surveillance system. Each of these solutions represented a distinctive approach to the problem and thus the application of a distinctive technology.

Antiviral Drugs—Risk Technology

One attempt to prepare for future pandemic influenza involved purchasing and stockpiling antiviral drugs. In preparing for a virtual disease event, it is hard to create an effective vaccine. In other words, production of a vaccine based on information available at a particular time cannot take into account the possibility that a new strain may appear. Hence, the decision was made in the Israeli case to use antiviral drugs that were not aimed at the unknown virus but, rather, at the symptoms of the disease it was expected to produce. Use of antiviral drugs that can treat a broad range of possible influenza events (based on information about known viruses) assumes that a new event will be similar to previous ones and that antiviral drugs will be effective.

Therefore, although antiviral drugs are able to act on a wide range of possible events, they still only constitute a risk-technology type of solution to uncertainty, knowledge of which is based on similar past possibilities. However, regardless of how comprehensive the drugs are, new actual events (a new strain or different morbidity patterns) may occur that have not been (and cannot be) taken into account and against which these drugs would be ineffective.

Attribution Scenarios—Preparedness Technology

The idea behind the attribution scenario is to create a possible event before one has come about and then treat the proxy event as if it were real and needed to be prepared for. The attribution scenario does not try to predict the future uncertain event or exchange its uncertainty for known possibilities, but to provide specific realizations of it in the present to enable the system to exercise its reaction before the event becomes catastrophic. The attribution scenarios in the Israeli case were constructed on the basis of available knowledge regarding past incidents of pandemic influenza, current-day avian influenza, and the morbidity patterns of seasonal influenza. Predictions regarding the event were also based on knowledge of morbidity and mortality rates during previous influenza pandemics.

What is particularly interesting about the type of preparedness technology just outlined is the number of different scenarios it generates. Scenarios change in relation to new knowledge. That is, expanding knowledge leads to the creation of new scenarios; each scenario creates a new possibility that is treated as real and therefore should be prepared for. Thus, when any new information appears, the scenario is updated to better represent what is still uncertain. However, because the future uncertainty could be actualized in multiple events, which are not known in advance, no information in the present can portray the real event before it takes place.

Syndromic Surveillance System—Event Technology

The Israeli Center for Disease Control operates a syndromic surveillance system, which it called into play during preparations for the pandemic flu event. The system has three main aims: to monitor "exceptional morbidity" (before a specific disease has been diagnosed), to detect "new events" as early as possible, and to provide "reliable information" following the occurrence of an event to help in its ongoing management. The system's challenge is to develop a technology that can deal with an emergency situation at two main stages: before and after the diagnosis of an exceptional event.

The syndromic surveillance system is distinguished from the "traditional" response system on the grounds that it deals with the surveillance of information before a diagnosis has been made (regarding a given disease). Because the system does not try to identify the virus or diagnose the event, its success is measured in terms of the speed with which it discovers that there has been an event. Thus, a distinction is made between "early detection" and "first detection." That is, the actual event has to take place, and then the issue becomes its early detection. Through this practice, uncertainty is recognized as such. It is not converted into past possibilities to be managed, nor is any use made of the realization of future possibilities before they have taken place. Instead, through the detection of syndromes and the emergence of exceptional morbidity patterns, this system produces new events, a multiplicity of occurrences that can signify actual events.

The case of preparedness for pandemic flu clearly pointed to new ways of perceiving and responding to uncertainty. We continue to explore that new terrain in this book. Instead of seeing uncertainty as a unique, new problem or as an object in the world, the essays in the book show how uncertainty, as a concept, reflects a way of observing the future and how it facilitates forms of governing, as manifested in policies and experiences in diverse fields of research.

Anthropological Cases

Modes of Uncertainty offers an understanding of the problem of uncertainty in three major contemporary domains: economics and entrepreneurialism, security and humanitarianism, and environment and health. Each chapter addresses the governing of uncertainty either directly or by careful attention to modes of subjectivity and knowledge production in which uncertainty-based conceptualization plays a crucial role. More broadly, the volume shows how anthropology can productively analyze contemporary problems by providing a conceptual framework underpinned by empirical studies such as those presented in the book's individual chapters.


Part 1. Economics and Entrepreneurialism

Uncertainty has become a vital concern in the context of sustained volatility in global financial markets. The contributors to Part 1 of Modes of Uncertainty discuss how the focus of financial and economic logic has changed from risk to uncertainty and how policies and practices aimed at controlling risk—in a variety of related fields—have shifted to managing uncertainty. Some contributors also address how particular fields have embraced uncertainty as a mode of subjectivity.

Pat O'Malley reviews how insurance was conceptualized and implemented at different times in the twentieth century and finds that uncertainty has historically taken on a diversity of meanings, with significant implications for the way freedom is envisioned.

Eitan Wilf discusses how ideas of jazz music as a flexible, risk-taking form have infused organizational policy and management and led to the embrace rather than the avoidance of uncertainty.

Natasha Dow Schüll analyzes online poker players who use self-tracking software to improve their play and finds that, instead of working against or despite the uncertainty of the game, this technology requires the player to make an ally of it.

Part 2. Security and Humanitarianism

The contributors to Part 2 of Modes of Uncertainty examine uncertainty as both a means and an end of security technologies. They discuss how security and humanitarian mechanisms not only face the problem of policing and governing uncertain subjects but also how these subjects are transformed through new technologies of uncertainty.

Meg Stalcup analyzes the Suspicious Activity Reporting Initiative in the United States and tracks how police officers observe and document incidents of unspecified suspicious behavior under the assumption that doing so enables management of uncertain events.
Rebecca Lemov looks at the history of "coercive interrogation" as a domain of policy and experiment in the United States, examining the constitution of "the uncertain subject" through distinct risk and uncertainty technologies as part of security policy and practice.

Carol A. Kidron focuses on the encounter between global policies of humanitarian interventions aimed at governing potential "trauma outbreak" and the alternative discourses and life experiences with which Cambodian trauma descendants confront the imposed cultural risk prevention approach.

Gaymon Bennett explores how the uncertainty generated by contemporary biological research invokes new security concerns. Through the institutionalization of malice, the moral transformation of biosecurity is justified, a process played out vividly in the case of the engineering of the H5N1 avian virus to be transmissible in humans.

Part 3. Environment and Health

The current proliferation of doomsday scenarios is symptomatic of how uncertainty has pervaded global environmental and public health issues. The contributors to Part 3 of Modes of Uncertainty discuss spaces and horizons of uncertainty relating to such concerns and describe new modalities of management brought to bear on them.

Adriana Petryna shows how "extreme events" like Superstorm Sandy have opened new political horizons of uncertainty. She examines the notion of ecosystemic "tipping points" and looks at how policy makers and climate experts have turned to nonparametric ways of reasoning to assess climate change.

Frédéric Keck discusses the management of uncertainty at the human-animal interface in species barrier zones, focusing on how virus hunters develop devices to capture potential catastrophes that they perceive as actual. This space of uncertainty produces a new articulation of microbiological research and public health policy.

Austin Zeiderman tracks the governmental techniques and political rationalities being assembled in response to uncertainty in Colombia, where a literally unstable landscape combines with poverty and political inequality to produce urban "zones of high risk." These zones become spaces of uncertainty that need not be measured to be governed.

The volume concludes with an afterword by coeditors Rabinow and Samimian-Darash, who offer reflections on modes of uncertainty as a realm of contemporary anthropological research as well as a form of inquiry.

PART ONE

Economics and Entrepreneurialism

CHAPTER ONE

Uncertainty Makes Us Free: Insurance and Liberal Rationality¹
PAT O'MALLEY

"Uncertainty makes us free" is the message of Peter L. Bernstein's (1998) best-selling book Against the Gods: The Remarkable Story of Risk. In Bernstein's view, risk—understood as statistical prediction of social and economic phenomena—is problematic for freedom. Probabilistic prediction of the future creates a "prison" that consigns us to an endless repetition of past statistical patterns over which we have no control. Bernstein suggests that if we adopt risk-based governance, "nothing we can do, no judgement that we make, no response to our animal spirits, is going to have the slightest influence on the final result" (Bernstein 1998, 229). In his eyes, risk is a deeply troubling technology because, to the extent that it does render the future calculable, it renders us unfree.

Bernstein therefore celebrates uncertainty. He quotes with approval John Maynard Keynes, who rejected the possibility of using statistical methods to forecast events such as war or stock market prices. These are matters subject to "uncertain knowledge." "About these matters," said Keynes, "there is no scientific basis on which to form any calculable probability whatsoever. We simply do not know!" (quoted in Bernstein 1998, 229). Bernstein concludes that "a tremendous idea lies buried in the conclusion that we simply do not know. Rather than frightening us, Keynes' words bring great news: we are not prisoners of an inevitable future. Uncertainty makes us free" (Bernstein 1998, 229).

It is possible from this statement to understand how Bernstein's book on a topic as obscure as the history of probability could so surprisingly have become a runaway best seller in the late 1990s. Its message appeals to those whose policy preferences run strongly against the "managed economy," and for whom risk taking in the free market is close to a religious rite.
Although the global financial crisis (GFC) would temper some of this zeal, for close to twenty years leading up to the publication of Against the Gods and much of the decade following, speculative investment appeared to many as the new Messiah. Triumphing over the planned economies of the Soviet bloc, it seemed that the uncertainties of the "free" market were foundational to a "free" society. At the same time, government was being urged to become more entrepreneurial and less dominated by experts (Osborne and Gaebler 1993). By the turn of the new century, even the US secretary of defense was urging that

we must transform not only our armed forces but also the Department that serves them by encouraging a culture of creativity and intelligent risk-taking. We must promote a more entrepreneurial approach to developing military capabilities, one that encourages people, all people, to be proactive and not reactive, to behave somewhat less like bureaucrats and more like venture capitalists. (Rumsfeld 2002)

In the years before the GFC, risk taking and uncertainty appeared to be the touchstones of truth for public policy, the foundational principals of good government in all its branches. It would seem that Bernstein’s analysis of the history of statistical prediction was successful precisely because it cornered the beast in its lair. Against the Gods showed how it was that statistical probability, the nemesis of speculative uncertainty, was a contingent human invention rather than something given in the nature of the world. In short, we need not be governed thus. Hence the triumphal moment at the end of the book when the author wrests the truth from Maynard Keynes, the Dark Angel of the planned economy: uncertainty makes us free. Perhaps surprisingly, these are not at all novel concerns. For many years liberal political culture has regarded statistical probability and government through predictive techniques as compromising freedom. In the nineteenth century, concerns were voiced about the implications of statistical prediction for the sanctity of free will. As Theodore Porter (1995, 164–165) argues, after the publication of Henry Thomas Buckle’s quantitative History of Civilisation in 1847, debates on this issue became at least as prominent and urgent as those generated by Charles Darwin’s The Origin of Species. Porter quotes an outraged commentator from 1860 protesting against this “modern superstition of arithmetic” that threatened mankind with a “worse blight than any it has yet suffered— not so much a fixed destiny, as a fate falling upon us, not personally, but in averages.” The very use of the term “superstition” to refer to statistics gives more than a hint that this was not a technical debate, but a struggle over fundamental values and beliefs. Risk and uncertainty may be different technical operations in the “taming of chance” but they are also

Uncertainty Makes Us Free / 15

cultural artefacts (Douglas 1992), and they play a central role in the culture of liberal freedom. Fear of the “overly” calculable future is also something that profoundly concerned Max Weber, both as a liberal politician and sociologist. For Weber, the development of modernity was characterized by increasing rationalization. Through scientific, legal, bureaucratic, and economic changes, the process of rendering the future more rationally calculable diminished freedom and consigned us to an “iron cage.” For Weber, modernity appears to confront liberalism’s core visions of freedom, even while liberalism acts as one of its principal promoters and beneficiaries. And of course, this founding father of modern sociology was writing in the shadow of Karl Marx, setting out a sociology that was in no small measure a political intervention on behalf of liberal freedom. More recently, a new wave of liberal sociologists have renewed such concerns. Consider James Scott’s (1998) Seeing Like a State, which identifies key problems in the present with the “high modernism” that arises when state and expertise are aligned against a public that is not mobilized in its own interests. Or again, Anthony Giddens’s (2000b) eulogies of the Third Way in which entrepreneurs of society and community as well as economy create new life. They are to invent the future through techniques of uncertainty. An invented future disrupts risk by cutting across the assumption that future patterns of life will repeat those of the past. Likewise in the neoliberal and “new management” literature, enterprise is equated with uncertainty and is eulogized. Best- selling author Tom Peters, of Thriving on Chaos fame, pays tribute to the entrepreneur and promotes a new market- based liberalism. He sees the interventionist state and the planned economy as defeated by a resurgent liberal- capitalism driven forward to prosperity by enterprise. 
Peters quotes George Gilder:

Entrepreneurs sustain the world. In their careers there is little of the optimizing calculation, nothing of the delicate balance of markets. . . . The prevailing theory of capitalism suffers from one central and disabling flaw: a profound distrust and incomprehension of capitalists. With its circular flows of purchasing power, its invisible-handed markets, its intricate plays of goods and moneys, all modern economics, in fact, resembles a vast mathematical drama, on an elaborate stage of theory, without a protagonist to animate the play. (Peters 1987, 245)

16 / Pat O’Malley

In place of a scientifically calculated future epitomized by risk, uncertainty provides the creative art of the possible that will drive prosperity and innovation. This neoliberal vision of uncertainty involves techniques of flexibility and adaptability, and requires a certain kind of “vision” explicated at great length by other gurus such as Osborne and Gaebler in their iconic Reinventing Government. Dispensing with technocracy, they promote “anticipatory government” and “governing with foresight” (Osborne and Gaebler 1993, 229). They also promote “communities” over experts in the governance of local problems. They suggest that scientists do not govern as well as communities, but have created a kind of learned helplessness that must be overcome by ordinary people taking back the reins of power from experts. As such writings make clear, there is to be a resubordination of technocracy to the uncertain direction of enterprise and popular preferences. “Uncertainty makes us free” thus emerges not simply as an easy exercise in sloganeering coined for an airport best seller. It is an expression of a set of cultural and philosophical beliefs that are integral to political liberalism. However, they exist alongside a faith in modernization, with its emphasis on rendering the future calculable, that has also been embraced by liberalism. While there is no shortage of evidence of a profound suspicion of modernism, as outlined earlier, liberalism has of course from its inception provided a favorable environment for the development of scientific knowledge, technological growth, the independence of the technocratic professions, and so on. Scientific and technological expertise are understood to be essential to good governance. For liberal policy makers this tension has been present from the start, an integral dilemma of liberal mentalities of government. For the modernist, technocratic strand in liberalism, uncertainty appears as a problem to be overcome. It needs to be rendered predictable.
Set against this is a mind-set for which freedom and uncertainty are two sides of the same coin, in which the fundamental problem for liberal freedom is the strain toward rendering it too calculable. This chapter will explore how the struggles between these two philosophical and cultural currents in liberal mentalities have been translated into practice, with specific reference to the nature and place of insurance in British social and economic policy. This is not simply because insurance is one of the central institutionalizations for governing uncertainty in liberal polities, although this fact renders it an obvious focus. Equally it is because of its corollary: insurance has been an institutional site through which British liberalism itself has been shaped. Changes in insurance since the early nineteenth century are closely bound up with changing assessments of the place and nature of risk and uncertainty, and of the relationship between the two. In turn, these changes are imbricated with understandings of what it is
to be free, particularly with respect to changing assumptions about whether or not uncertainty is what makes us free.

Freedom of Contract, Independence, and Uncertainty

In Britain, insurance for the working classes emerged during the late eighteenth century with the activities of the Friendly Societies—fraternal and benevolent insurance arrangements formed among skilled artisans. Despite some liberal suspicions of the Friendly Societies as a form of combination in restraint of market relations, during the early part of the nineteenth century successive political administrations legislated to encourage the societies’ role in providing life, burial, and sickness insurance for the working class. This was regarded not only as fostering self-help and industry but also as alleviating the financial costs to taxpayers of supporting the destitute. Fraternal societies were characterized by “intentionally organising themselves around notions of ‘friendship, brotherly-love, charity’” in which “any self-understanding in terms of ‘risk’ or ‘insurance’ [was] largely absent” (Doran 1994, 134). It was clear at the time that the frequent failures of these funds followed from an inability of fund managers to predict liabilities and to balance these against contributions and funds in hand. The reason for this lay in the benevolent principles of the early societies, which distributed payment of benefits to members according to their need rather than in proportion to their premiums or levels of risk. In order to promote values and practices of thrift and self-help, legislation “encouraged” the societies to replace their traditional emphasis on fraternalism and benevolence with actuarially based principles of fund management. From 1819 onward statutes required that the data tables and distribution guidelines of societies applying for registration be approved by two persons at least, known to be professional actuaries or persons skilled in calculation.
The uneven contest between the representatives of these competing principles—the workers on the one hand and the government and the actuaries on the other—resulted in the displacement of a horizontal, fraternally based, and essentially amateur organization by a hierarchical, actuarial, and managerial form of insurance that distanced the rank-and-file members from the professionals who operated the funds. The monthly meeting in the local inn, which was simultaneously a convivial social gathering and a business committee, had largely disappeared by the mid-nineteenth century (Gosden 1973, 23). In its place there emerged the domination of this field by large-scale fraternal orders, which operated out of centralized offices and
held annual meetings at which the membership was rarely able to wield effective power. More than that, the principles of actuarial methods that were set in place further eroded benevolent ideas and introduced a “disciplinary element into membership” (Doran 1994, 175). In particular, graduated contributions were imposed, and members became divided and ordered according to their levels of contribution and risk categorizations. Likewise, the help in kind that had characterized early mutual societies—for example, through the provision of food or work on allotments by fellow members—disappeared. The form of solidarity of fraternal orders of old was thus fragmented and transformed. While the resulting insurance arrangements were still collective, this collectivity was increasingly abstract, individuated, fiscal, and mediated by third parties. By the early part of the twentieth century, such “industrial life insurance” had become the principal institution for governing working-class thrift, and few households were not enlisted in this regime (O’Malley 2002). It is easy to read this as the triumph of risk over uncertainty, and in limited respects this is true. As Doran (1994) and others have shown, many in the working men’s associations were aware of the financial benefits of actuarialism, but valued the political and social solidarities associated with the uncertainties of their form of insurance. Their little fraternities of uncertainty gradually dwindled and died. However, it may be more important to note that risk was valued and promoted by governments not just because it worked, but because this rendered insurance a technology that promoted thrift and foresight among the working classes. By reducing the “wasted thrift” and the assumed demoralizing experience that went with the failure of each little “uncertain” insurance fund, risk-based insurance was imagined by policy makers to encourage the practices of foresight and prudence among working people.
Liberals of the time did not find the “triumph” of risk problematic for freedom, because it fostered and defended a kind of freedom particularly valorized in the nineteenth century: economic independence. Yet it should be recognized that the transformation in insurance was part of another, peculiarly liberal, freedom: the triumph of freedom of contract. With the rise of the insurance companies, formal insurance contracts and contractual arrangements displaced informal mutuality. Contract was very much about governing uncertainty by quite different means than statistical calculations of risk. Indeed, in order to understand just why risk-based insurance was so acceptable to the Victorian liberal imagination, we need to recognize that actuarial insurance was a commodity provided through a contractual relationship freely entered into by both parties. In this sense, actuarialism in insurance was no more problematic to freedom than the fact
that engineering deployed statistics to increase the reliability of machinery. Actuarial insurance in this form was a commodity purchased by those with sufficient foresight to protect their financial independence. And foresight, as Bernstein and his allies would doubtlessly concur, is also the nonstatistical, entrepreneurial mode of dealing with uncertainty. During the nineteenth century, contract law came to embody specific expectations about what “foresight” means. Previously, while its importance to liberal subjectivities was well understood, it had been formulated only in abstract ways by such liberal philosophers as Jeremy Bentham. In contract law, foresight was developed primarily in relation to the requirement that subjects take into account the foreseeable impact of a breach of contract on contractual partners. Conversely, none should be accountable for outcomes that were not “reasonably” foreseeable. The courts elaborated on exactly what “reasonable foresight” entailed: What kinds of eventuality should be foreseen? How unlikely should a possible event appear in order that it can be ignored? What should parties tell each other so that each can make “reasonable” forecasts of the future? What should count as the level of prudence and foresight that can be expected of “reasonable” people? (O’Malley 2000b). As contract became a model for all “free” relations in the Victorian era, such standards of foresight were to be applied with respect to “accidents.” In tort law, negligence emerged during the nineteenth century as a key requirement for attributing responsibility and liability in relation to accidental harm. People should be held responsible for events they could foresee, but are responsible only for these events.2 Thus an injured worker would only be able to sue the employer for compensation if he or she could prove that the employer had been negligent. 
However, if the employer could prove that the worker’s own negligence had contributed to the accident, then the worker’s “contributory negligence” would limit or remove the right to compensation. Likewise, if it could be shown that the accident resulted from the negligence of another employee, then an action would lie against the fellow worker, who of course was usually too poor to be worth pursuing. Rather than imposing what policy regarded as a “paternalistic” relationship between employer and employee, this framework assumed that the worker, or the fellow worker, was a responsible adult who acted as a free subject. He or she thus should have exercised foresight and prevented the accident. Even in work known to be dangerous, recovery of compensation for injured workers was often difficult. Following the same assumptions about foresight, it was assumed that the worker should have checked to see if the work was dangerous, and if so should have negotiated a higher wage to reflect this risk. Having made this “voluntary assumption of risk,” and having
been paid a higher wage for it (which of course rarely happened in reality), then the worker could not double dip by claiming compensation if injured. All of this appeared “reasonable” because it embodied a specific vision of freedom through foresight and responsibility in a free and thus uncertain world. In this sense, contract was a characteristic invention of the classical liberal era: a way of rendering the future more calculable, but only through the exercise of freedom’s instrument of foresight. Nowhere is the place of foresight as a technique of liberal freedom made more clear than in the writings of Jeremy Bentham. For Bentham “security”—the protection of person and property—was the primary object of law precisely because it “necessarily embraces the future” (Bentham 1962, 302). Security appeared to him as the condition of existence upon which rests rational calculation of the future and all that follows from this foundational attribute of liberal subjects:

Security would be provided through the guarantees of property law and contracts, and through control of predation through crime. Equally it would be provided by minimizing the drain on resources created by relief to the poor. The principal solution to both crime and poverty was to train these problematic segments of the population in the ways of independence or “self-reliance”—a theme that became almost the defining characteristic of classical liberal problematics. In Bentham’s view a vital task was to impart to all a particular kind of rationality: the “disposition to look forward.” Discipline was not only (and perhaps not even) intended to create habits of blind obedience. Even more important, it was to “accustom men to submit to the yoke of foresight, at first painful to be borne, but afterwards agreeable and mild: it alone could encourage them in labour—superfluous at present, and
[the benefits of] which they are not to enjoy till the future” (Bentham 1962, 307; emphasis added). “Encouraging the spirit of economy and foresight among the inferior classes of society” (Bentham 1962, 316) was important because, when coupled with security, this would produce abundance. It is through the exercise of foresight that entrepreneurs and workers alike could seek to create and take advantage of opportunities and thereby create wealth. Prudence, thrift, and enterprise were all cut from the same cloth. Each involved common-sense calculations of future possibilities. They required no arcane knowledge or scientific expertise to perform.3 They were arrangements entered into voluntarily (at least in theory). In this way uncertainty came to represent to liberals not simply the “incalculable” but instead a specific set of techniques for being free. They were techniques that would guarantee nothing, but that operated through the common-sense, “reasonable” calculation of possibilities. By implication, it meant also that through bringing about these possibilities and taking advantage of them, liberal subjects constantly and competitively invented the future anew.

Making Freedom Social

Various commentators on the development of social insurance have focused on its status as a social technology: that is, as a technology that operates at the level of the entire society or, more precisely, of the nation-state. One effect of this kind of analysis is that it overlooks important continuities with the liberalism of the nineteenth century, via insurance. It was not at all the case that social insurance, even compulsory social insurance, was alien. Bentham, to take a foundational argument, had complained that in the early 1800s private insurance was flawed:

This remedy is imperfect in itself because it is always necessary to pay the premium, which is a certain loss, in order to guarantee one’s self against an uncertain loss. In this point of view it is to be desired that all unforeseen losses that can fall upon individuals without their fault, were covered at the public expense. The greater the number of contributors, the less sensible is the loss to each one. (Bentham 1962, 579; emphasis added)

For Bentham it was clear that this insurance would be “founded on the calculation of probabilities” (Bentham 1962, 579). Risk was no problem for such liberals in this context; quite the reverse, as has been seen. But
Bentham added the proviso that insurance compensation would have to be subject to foresight: there could be no compensation for negligent actions, for that surely would be an invitation to accidents. Fault-linked insurance, in other words, retained the subordination of risk to foresight as the key technology of uncertainty.4 What was emerging began to cut clean across this. The development of workers’ compensation insurance, for example, largely displaced the array of tort doctrines that denied workers compensation under the rules of contributory negligence, the fellow servant rule, or voluntary assumption of risk. In some measure this displacement of tort was based on the sense of injustice associated with the denials of compensation that the legal category of negligence inflicted on injured workers. But in at least equal measure it was based on the observation that industries had constant rates of accidents year after year. In practice, it appeared that the focus on fault merely assigned harms to specific individuals, while the overall distribution of harms in an industry remained constant from year to year (O’Malley 2004b). Likewise, in fields such as joblessness, the focus on distinguishing the culpable and feckless “idle poor” from the deserving poor came under attack from those whose examination of statistics suggested to them that unemployment was not a characteristic of individuals but a property of another entity: the “economy” (Walters 2000). In both instances, scientific investigation had discovered (or invented) meta-individual entities such as “industries,” “economies,” “societies,” and “populations,” which appeared to obey their own quasi-natural laws of motion. The social sciences, in their turn, were to render these observations scientifically real. From one point of view, this interpretation of the social world depended on the growth of statistics and is integrally associated with risk.
Yet the invention of these entities was not in any straightforward sense a “statistical” discovery, for statistics do not speak for themselves. With reference to Germany, for example, Eghigian (2000, 43–44) argues that elementary statistics had long been compiled by the absolutist states, as part of their project of the omniscient Police, without this leading to the discovery of any causal entity understandable as “society.” Thus early nineteenth-century governments “remained individualistic, psychological, and prescriptive in their approaches” (Eghigian 2000, 43). Statistics, from their viewpoint, revealed the patterns of individual behavior, mapping the activities and distributions of ideas and behaviors of individuals. However, the statistics and probability theories applied to social life in nineteenth-century Europe were of a profoundly different character.

Advanced by individuals and groups from government, industry and science, the new statistics was part of a self-consciously social science of social motion. . . . It was offered as an eminently empirical, quantitative method for discerning the laws of a changing society. Yet equally important were its political implications. Statisticians of the early nineteenth century saw their science as an attempt to bring a measure of expertise to social questions, to replace the contradictory preconceptions of the interested parties by the certainty of careful empirical observation. They believed that the confusion of politics could be replaced by an orderly reign of facts. (Eghigian 2000, 44)

What we are seeing here are the foundations of a political struggle between two liberal rationalities of security. One is a sociotechnical, modernist rationality in which society and economy are to be managed efficiently through scientific knowledge of entities that operate according to quasi-natural laws “revealed” in probability and risk. The other is a volitional rationality in which security and freedom are founded in techniques of individual foresight and uncertainty. The struggle was registered in many sites and was to persist more or less unabated throughout the twentieth century. Most significantly, the struggle was etched into the foundations of social insurance itself. Private insurance’s contractual form provided a legal right of benefit to the insured party in the event of specified harms occurring—whether or not these were actuarially calculated. A relation of mutual obligation was established as a legally enforceable right rather than a relationship of dependence. In most of these respects there was a marked contrast with the operation of poor laws and charitable relief. Private insurance had nestled in a prudential diagram of freedom, risk, and security that took the form of “freedom of contract” and prudential “independence.” It is no coincidence that the social insurances introduced after the early 1880s were contributory in nature: the members of the scheme paid regular, if compulsory, premiums from their wages. It was what Beveridge (1942) was later to refer to as “compulsory thrift.” This was a key reason that social insurance appealed to so many liberals and appeared consistent with liberal principles. It was true, significantly, for Bismarck’s pioneering Sickness Insurance Law of 1884–85. The National Insurance Act of 1911 in England, which was partially modeled on Bismarck’s scheme, introduced compulsory contributory insurance relating to health and unemployment.
As Ogus put it, social insurance

maintained, in a somewhat modified form, the exchange or reciprocal basis to social welfare: it was based on past performance in employment and on financial contributions from the individual himself; benefit could thus be

justified as having been “earned.” In legal terms it gave rise to something akin to a contractual right. In moral, cultural terms, it incorporated the traditional puritan, capitalist virtues of thrift and foresight. (1982, 183)

Writing more than a quarter century later, Beveridge was still adamant that his blueprint for the postwar welfare state carried through the central principle of social insurance. It should be “benefit in return for contributions rather than free allowances from the state.” He was personally opposed to the means-tested forms of unemployment assistance, introduced in the 1920s and 1930s as a benefit for those whose unemployment insurance coverage had been exhausted. In his eyes such increasingly prevalent noncontributory schemes created asymmetrical relations of dependence, and penalized what people had “come to regard as the duty and pleasure of thrift, of putting pennies away for a rainy day” (Beveridge 1942, 182–185). Beveridge the liberal here surely is giving voice to Bentham: social insurance is not to displace foresight, but only to augment it.

Writing more than a quarter century later Beveridge was still adamant that his blueprint for the postwar welfare state carried through the central principle of social insurance. It should be “benefit in return for contributions rather than free allowances from the state.” He was personally opposed to the means- tested forms of unemployment assistance, introduced in the 1920s and 1930s as a benefit for those whose unemployment insurance coverage had been exhausted. In his eyes such increasingly prevalent noncontributory schemes created asymmetrical relations of dependence, and penalized those who had “come to regard as the duty and pleasure of thrift, of putting pennies away for a rainy day” (Beveridge 1942, 182–185). Beveridge the liberal here surely is giving voice to Bentham: social insurance is not to displace foresight, but only to augment it. Set against this was the more “systematic” modernist or technocratic imaginary in which problems generated at the level of the social or the economic were to be addressed at that level. Individual contributions were of secondary importance where a single process of funding, such as taxation, could do away with so many complications created by the vagaries of markets and the vicissitudes of life that, after all, were understood to be generated primarily by systematic forces rather than individual wills. What mattered was not individual thrift and diligence, but membership of a social collectivity or distribution whose motions were comprehended only by expertise. Risk came to dominate uncertainty. The result was often a patchwork of uneasy compromises, of contributory and noncontributory schemes, of “earned” versus universal benefits where the latter followed simply from membership of the social distribution. Universal benefits were to become the principal target of neoliberals.

Reasserting Uncertainty: Freedom of Choice and Neoliberalism

This chapter opened with Bernstein’s view that freedom lies in uncertainty, and argued briefly that this was a perspective that had found its time in the present. Nevertheless, the championing of uncertainty by Tom Peters and Osborne and Gaebler, and Giddens’s vision of the enterprise society, clearly do not simply resurrect the liberalism of the nineteenth century. As many have pointed out, there are multiple and significant differences that
impinge on the ways that uncertainty now is mobilized. To begin with, the market has ceased to be an economic domain that should be left alone by the state. Rather, the market is a technique for governing a multitude of problems and processes regardless of whether they are within or outside of the state. States themselves have to become enterprising (Considine 2001); even professions come under the sway of markets when “freedom of choice” is used to insist, for example, that formerly deviant health regimes, from chiropractic to aromatherapy, become available competitors to professional medicine. This new rationality of freedom has reshaped areas of insurance. In the name of increasing the autonomy of subjects, and of expanding their “freedom of choice,” defined-benefit life insurance policies have been challenged by market-based policies. In these, benefits will depend upon the performance of the individual investment portfolio. This insurance is not about “the taming of chance,” the term that has come to be so familiar in risk-minimizing or risk-spreading readings of insurance. Rather, market risk and speculation—key techniques of uncertainty—are to be given free rein. Insurance itself has started to be transformed in this process. Distinctions between insurance and other forms of “financial product,” such as gambling and financial speculation, are blurring. For example, as Kreitner (2000) argues, it is now legal to buy the life insurance policies of HIV/AIDS sufferers in the United States. This can be seen as a gamble placed on when the victim will die. For the purchaser, who must take over the premiums, the sooner the person dies the better. Or again, an insurance policy based on stock market performance readily appears as a speculative investment that is scarcely distinct from shares themselves. It is a long way from the careful collection of pennies set aside as insurance against a rainy day.

To this extent uncertainty has become not merely the resort of insurers under pressure, where actuarial data are not effective or available. Increasingly, with the blessing of neoliberal policy, uncertainty is becoming a front-rank technology of preference for the industry and its consumers. The neoliberal claim is simply that expertise creates dependency and usurps decision making. Consumers take charge of their money and exercise a choice for speculative investment governed by the uncertain skills of investment rather than by expert-calculated probabilities alone. The market, that enduring but adaptable technology of uncertainty, is to give another freedom—freedom of choice—to which experts and risk itself are subordinated. This emerges as part of a much broader liberal critique—one with a considerable lineage—that involves the reassertion of uncertainty over risk.

26 / Pat O’Malley

Conclusions: Freedoms, Liberalisms, Policies

By classifying people and problems (and particular people as policy problems) policies actively create new categories of individuals, such as “citizens” and “ratepayers,” “asylum seekers” and “economic migrants.” . . . The point here is not that policy dictates the behaviour of its target population but rather that it imposes an ideal type of what a “normal” citizen should be. Individuals of a population must contend with, measure up to, subvert, or simply internalize these ideal types as their own identity. In short, modern power largely functions not by brute imposition of a state’s agenda but by using policy to limit the range of reasonable choices that one can make and to “normalize” particular kinds of action or behaviour. (Wedel et al. 2005, 37–38)

In Against the Gods, Bernstein did not condemn risk and probability. Quite to the contrary: while arguing that they are contingent inventions, he suggested they could not be dispensed with in the everyday lives of his readership. Various forms of investment deploy probabilistic modeling. Investors should look to probabilities in making decisions about their portfolios and glean statistical data about the companies in which they put their financial futures. There is nothing wrong with probability as such, but rather with the uses to which it is put, which by implication restrict rather than enhance freedom. So who are the subjects whom uncertainty has made free, and in what does their freedom consist? The short answer is given by the fact that after the opening chapters, much of Bernstein’s book describes how risk and uncertainty relate to finance. This is not surprising once it is recognized that Bernstein is a practicing investment consultant and CEO of his own business. Among other things, Against the Gods is a background briefing for investment entrepreneurs who would accept his proposition that “the capacity to manage risk, and with it the appetite to take risk and make forward-looking choices, are key elements of the energy that drives the economic system forward” (Bernstein 1998, 229). The subjects of the neoliberal era, the intended readers of Against the Gods, are imagined as entrepreneurs, just as Donald Rumsfeld’s military personnel were to become venture capitalists in uniform. But these are not Jeremy Bentham’s subjects, even though they are to exercise the same technique of foresight. And it is not quite Bentham’s freedom. The vast majority of Bentham’s imagined subjects were poor people who needed to become systematic risk avoiders. Through the exercise of diligence, self-denial, and thrift
they could weather life’s foreseeable vicissitudes. Books such as Samuel Smiles’s Self-Help: With Illustrations of Conduct and Perseverance (1913) and Thrift: A Book of Domestic Counsel (1907)—originally published in 1859 and 1875 respectively—were textbooks on the arts of self-denial, savings, and diligence. These were the nineteenth-century equivalents of Against the Gods, Thriving on Chaos, and a host of publications on how to succeed in business and investment. The classical liberal subject was a risk avoider, prudent and cautious. The neoliberal subject embraces risk (or more precisely uncertainty) and seeks wealth through business enterprise and investment in the stock market. A knowledge of probabilistic risk management is an essential tool in these life courses, but like fire it is a dangerous master. Probability must remain a servant to free choice, for it is uncertainty that makes us not only free, but wealthy. Insurance now appears not as the institutionalization of statistical probability, a politically neutral and stable technology simply applied by policy makers in pursuit of this or that specific governmental program. Rather, its place and its form may be seen to vary quite radically with shifts in the broad political visions of liberalism itself. It moved early from being an uncertain technique of prudence and mutual assistance among workers to a risk-based commodity to aid in thrift. This development of actuarial insurance during the nineteenth century ironically made possible the emergence of the welfare state that was intended to make liberal subjects more free by removing the arbitrary vicissitudes of life in general and of the free market in particular. To some at the time, this appeared to erode the virtues of personal risk avoidance and attack the freedom of independence.
To many others, however, it appeared as progress: the outcome of a linear development in social and economic policy in which science and technology contributed to political, social, and economic freedom just as they were contributing to freedom from sickness and disease. New freedoms were to be achieved through this transformation of insurance, by rendering the future more calculable.

Few, if any, commentators in the early 1970s foresaw the unraveling of this policy framework. Talk was far more about an unholy corporatist order emerging from alliances between the regulatory and welfare state, monopoly capital, and giant trade unions. Competitive capitalism appeared, as Marx predicted, to be guttering out. The resubordination of risk to uncertainty from the mid-1970s forward, carried out in the name of restoring freedoms, was not simply the exercise of liberalism's familiar concerns about "when have we governed too much?" Certainly policy apparatuses were subjected to demands for deregulation. Mantras of "small government" and "privatization" abounded, and the development of "stakeholder politics" was intended to decrease.

28 / Pat O'Malley

Some of this may have involved a shift in governance away from the state and into the private sector. But it was obvious from the start that states were to become more involved in governing many aspects of life, and perhaps nowhere more than in the economy. Regulation increased competition in the business sector and reduced professional domination in favor of market models; interventions eroded the power of trade unions, changed employment relations, and promoted enterprise through lifelong learning; and policy developments monitored, mobilized, and retrained the unemployed in their own interests. In addition, state bureaucracies themselves were to be changed so that they looked more like the new ideal of market-competitive organizations.

If all this has become familiar, it may have been misunderstood. The question was not simply "when have we governed too much?" as is often assumed; it was "when have we made life too certain, too calculable?" The key changes associated with neoliberalism were focused on how to reintroduce and valorize uncertainty. Uncertainty would create corporate and personal wealth, efficiency, and well-being through competition. For the new subjects of liberalism—not the subjects of this or that policy, but all liberal subjects—uncertainty was not to be feared or reduced, but to be celebrated and embraced.

Liberalism's volatile relationships with risk and uncertainty took a dramatic turn after the mid-1970s. The key question for us now emerges not simply as "does uncertainty make us free?" but rather, "what kind of uncertainty makes us what kind of free?"

CHAPTER TWO

The "Cool" Organization Man: Incorporating Uncertainty from Jazz Music into the Business World

EITAN WILF

At the 1995 Academy of Management National Conference in Vancouver, British Columbia, Canada, a symposium took place entitled "Jazz as a Metaphor for Organizing in the 21st Century." The symposium consisted of a series of scholarly presentations, "a demonstration and discussion of jazz improvisation by panelists who were professional jazz musicians, followed by a concert and social event during which these musicians regaled the audience with superb jazz" (Meyer, Frost, and Weick 1998, 540). The presentations, together with additional articles on this topic, were eventually published in the top-tier journal Organization Science. The authors explained that the symposium had been organized in response to the significant changes in the nature of the challenges that organizations would have to cope with in the twenty-first century. As one author put it (Barrett 1998), to come up with organizational models that would be adequate to this changing environment,

we need a model of a group of diverse specialists living in a chaotic, turbulent environment; making fast, irreversible decisions; highly interdependent on one another to interpret equivocal information; dedicated to innovation and the creation of novelty. Jazz players do what managers find themselves doing: fabricating and inventing novel responses without a prescripted plan and without certainty of outcomes; discovering the future that their action creates as it unfolds. (605)

The mechanistic, bureaucratic model for organizing—in which people do routine, repetitive tasks, in which rules and procedures are devised to handle contingencies, and in which managers are responsible for planning, monitoring and creating command and control systems to guarantee compliance—is no longer adequate. Managers will face more rather than less interactive complexity and uncertainty. This suggests that jazz improvisation is a useful metaphor for understanding organizations interested in learning and innovation. To be innovative, managers—like jazz musicians—must interpret vague cues, face unstructured tasks, process incomplete knowledge, and yet they must take action anyway. (Ibid., 620; see also Weick 1998)

My purpose in this chapter is to analyze this specific strand of research in the field of organizational studies, which has approached jazz improvisation as a source of inspiration for the design of organizational models that would allow organizations to better function with respect to the increasing uncertainty that is likely to characterize their institutional environment in the twenty-first century.1 I argue, first, that one of the key advantages of the incorporation of the jazz organizational template highlighted by these theorists is the fact that it is productive of an organizational structure that is not only flexible enough to cope with unexpected events but is also capable of producing unexpected events that can be further developed into and function as innovations in fields in which to remain stagnant is to perish (Akgun et al. 2007; Dyba 2000; Kamoche and Cunha 2001; Mantere, Sillince, and Hamalainen 2007; Moorman and Miner 1998). In this respect, the jazz organizational template "does not operate by converting the future into calculated, known risks . . . nor does it imagine an unknown future and then realize it"; rather, it generates "multiple actual events" that are capable of being developed into viable innovations. In doing so, it "retains uncertainty through its action" as a form of "potential uncertainty" (Samimian-Darash 2013, 2).

Second, I argue that in addition to an attempt to develop an organizational technology that incorporates uncertainty into its very logic of operation, the search for new organizational templates, which draws inspiration from the field of art in general and jazz in particular, represents a mode of legitimation and naturalization of a new work-related form of uncertainty with specific experiential dimensions that beg clarification.
Whereas anthropologists have tended to focus on the experiential dimensions of contemporary work-related forms of uncertainty mostly with respect to people found either at the lowest or at the highest ends of the occupational ladder (Muehlebach and Shoshan 2012; Ho 2009),2 I shift the analytic focus to midlevel workers in contemporary knowledge-based companies and the specific form of uncertainty they are asked to inhabit as a key dimension of their work, namely, creative uncertainty. If, as some scholars have argued,
post-industrial modernity is characterized by "the rise of the creative class" (Florida 2003)—that is, a coterie of professionals defined by their creative production and manipulation of knowledge and information—then there is little surprise in the fact that models and metaphors of creative agency have become the focus of management theories whose goal is to harness creative uncertainty and potentiality as a resource in boosting productivity and creating value. These transformations, in turn, have significant experiential implications that have hitherto remained undertheorized. The metaphors of "creativity" in general and "jazz" in particular normalize and legitimize these implications.

Before unpacking these points, I provide an overview of the history of organizational studies throughout the twentieth century, focusing on different schools and the specific managerial theories they formulated in response to specific economic conditions. One crucial transformation in this intellectual history was the gradual acceptance by organizational theorists of uncertainty as a key feature of organizational reality, which necessitates the design of new organizational structures.

From Designing Predictability to Incorporating Uncertainty

Although organizational studies did not exist as an institutionalized scholarly field until the late 1940s, by that time it already had important precursors in the form of administrative and management theorists such as Frederick Taylor who, from the end of the nineteenth century and throughout the first half of the twentieth century, attempted to formulate managerial principles and thus to rationalize and standardize production. Within this perspective, organizations were understood to be instruments designed to attain specific and predetermined goals in the most efficient and rational way that itself is amenable to clear formulation. Taylor's scientific management of production was "the culmination of a series of developments occurring in the United States between 1880 and 1920 in which engineers took the lead in endeavoring to rationalize industrial organizations" (Scott 2003, 38; see also Shenhav 1999). These engineers' vision was governed by the image of the organization as a well-oiled machine in which different parts work in precise and reliable coordination with each other and nothing is left to chance. The restructuring of the workplace encompassed not only every task performed by workers but also the design of the workspace in a way that would facilitate such efficient and reliable coordination. Ultimately, it also involved the restructuring of the principles of managerial decision making. Taylor wrote the following:

Under scientific management arbitrary power, arbitrary dictation, ceases; and every single subject, large and small, becomes the question for scientific investigation, for reduction to law . . . The man at the head of the business under scientific management is governed by rules and laws which have been developed through hundreds of experiments just as much as the workman is, and the standards which have been developed are equitable. (quoted in Scott 2003, 39)

At stake is the restructuring of the organization such that uncertainty would be reduced; indeed, it would cease to be a factor to begin with, or, if it should arise, could be immediately resolved by the application of predetermined, rational rules and procedures.

If this rational system perspective was endorsed by mechanical engineers who viewed the organization as a system designed to pursue clear and specific goals by rational and well-formulated means, subsequent approaches to organizations were developed by industrial psychologists who viewed the organization as a much more complex entity defined by goal complexity and informal structure. Specifically, they highlighted the discrepancies between stated and actually pursued organizational goals, and between the ideal of formal structure and the reality of informal structure. Key among these industrial psychologists was Elton Mayo, who demonstrated through a series of studies that individuals do not always function as atomistic, rational, and economic agents, but rather follow a complex set of motivations that involve feelings and sentiments based in group solidarity (Scott 2003, 62). Mayo's findings produced a heightened focus on managerial leadership as a mechanism of influencing the behavior of individual subordinates. Managers were encouraged to be more sensitive to workers' psychological and social needs. This perspective highlighted emotional control, anger management, empathy, and strong interpersonal skills as key managerial resources (Illouz 2008). It subsequently led to managerial notions such as job enrichment, participation in decision making, and work satisfaction. In contrast to the rational system approach, this framework acknowledged uncertainty as a component of organizational reality that arises due to the nature of human beings as complex entities governed by diverse and often conflicting sets of motivations.
However, similar to the rational system approach, its goal was to train managers and restructure the work environment so that this uncertainty would not arise because workers would be satisfied and, if it should arise, could be immediately resolved by managers with the necessary emotional constitution and skills. By contrast, the third dominant approach within organizational studies,
which emerged after World War II, conceptualized organizations as entities whose logic encompasses uncertainty and flexibility not as undesirable features but as natural components that, indeed, are a crucial resource for their survival (Scott 2003, 82–101). Inspired by cybernetics and information theory, this approach was based in the distinction between different systems and their degrees of complexity. It argued that in less complex systems, such as certain kinds of machines, the interdependence between parts is rigid and the behavior of each part is highly constrained. These systems are nonreactive to their environment. Hence they function well in stable environments and are suitable for the completion of predetermined, unchanging tasks. However, in more complex systems such as social systems and organizations, the interdependence between parts is less constrained, making these systems loosely coupled and flexible. They are reactive and open to the environment, dependent on informational flows, and are able to self-maintain and make adaptations based on informational input. This approach to organizations as loosely coupled entities viewed uncertainty as one of their natural features rather than as an anomaly that must be eliminated as quickly as possible. Instead of emphasizing formal structure and predetermined tasks performed by specific means, it highlighted complex systems' success in coping with and mobilizing uncertainty based in their ability to process informational input of various kinds—both internally and externally derived—and thus to change their means for the attainment of specific goals, and to change their goals according to shifting contextual conditions.
A significant share of organizational theory has subsequently focused on incorporating this understanding of uncertainty as an organizational resource by determining proper work flows, control systems, and information-processing templates in relation to the potentialities and limitations of human individuals.3 Karl Weick, who has been a key figure in the rise of the jazz metaphor for organizing, has argued that "the basic raw materials on which organizations operate are informational inputs that are ambiguous, uncertain, equivocal," and hence the goal of organizing should be to establish "a workable level of certainty" (Weick 1969, 40; see also Scott 2003, 98) in the context of which human individuals could function well. On the one hand, theorists pointed out the limitations of human individuals as information processors in terms of "low channel capacity, lack of reliability, and poor computational ability." On the other hand, they pointed out the advantageous features of "the human element," such as "its large memory capacity, its large repertory of responses, its flexibility in relating these responses to information inputs, and its ability to react creatively when the
unexpected is encountered" (Haberstroh 1965, 1176; see also Scott 2003, 95). Hence the task of systems design became clear:

The challenge facing the systems designers is how to create structures that will overcome the limitations and exploit the strengths of each system component, including the individual participants . . . Since individual participants are limited in their capacity to process complex information, organization designers endeavor to construct structures capable of assisting participants to deal with these shortcomings. (Scott 2003, 95)

The strand of research within organizational studies that turned to jazz improvisation in search of an organizational model that would be adequate to the needs of the twenty-first-century organization emerged directly out of this concern of systems designers to create and find structures that would cultivate and tap into the strong points of "the human element," such as "flexibility" of response in relation to "information inputs" and the "ability to react creatively when the unexpected is encountered." Jazz improvisation was understood to represent precisely such a structure.

Incorporating Jazz Improvisation

The term "jazz music" encompasses a wide range of stylistic genres, and is thus a fuzzy category with porous boundaries. Almost all of the organizational theorists who have turned to jazz in search of organizational models focus on straight-ahead jazz, in which a group of musicians improvise on a given ("standard") tune that consists of a melody and a harmonic sequence (a string of chords). Players improvise on these minimal structures by using a stock of conventional building blocks (such as short phrases and modes of articulation), which they combine in inventive ways and in response to the real-time contribution of their bandmates. The real-time, improvised nature of this art form means that it is inherently an emergent phenomenon, that is, it results in meaningful structures that, to a great extent, cannot be anticipated in advance. Although it is not creation ex nihilo, because mature improvisers must master various stylistic conventions and have a thorough knowledge of the canon, the actual outcome of group improvisation remains uncertain and unpredictable and its creativity resides precisely in these features.

Organizational theorists who have turned to the jazz metaphor usually rely on ethnographies of jazz improvisation or their own experience as semi-professional jazz musicians to emphasize a number of features of
jazz related to the uncertainty that pervades it, not as something that needs to be eliminated but as a feature that is constitutive of the very creativity of this art form.4 For example, Barrett (1998, 609–612) enumerates a number of “characteristics of jazz improvisation,” followed by concrete advice on how these characteristics can be applied to systems design. A number of those characteristics are directly related to uncertainty. First, Barrett argues that jazz musicians often intentionally disrupt their habituated playing patterns and put themselves in unfamiliar musical situations that are likely to produce errors and unexpected outcomes. They keep pushing themselves beyond their own comfort zone and thus ensure that their playing does not become stagnant and predictable. Second, musicians use the unexpected outcomes and errors that result from this emphasis as resources and musical opportunities to redefine the context: something that at one point seems like an error subsequently becomes coherent within this new context. In this way, new meaningful structures are continuously being generated and developed. Third, musicians use minimal structures of communication and planning that foster flexibility and indeterminacy. Thus a jazz player has only a tune’s basic harmonic structure and melody to improvise on, as well as the ongoing contribution of his fellow musicians. These minimal structures foster uncertainty of information. Fourth, the jazz band is structured around distributed task negotiation and synchronization between band members, which means that information constantly flows in all directions rather than hierarchically. With respect to each of these characteristics, Barrett makes concrete suggestions for organizing. First, organizational leaders must encourage and require their employees to abandon habituated modes of doing things and take risks. 
Second, they must change their modes of evaluating their employees by treating the latter’s errors as an inseparable part of learning rather than as punishable events. Significantly, this recommendation, which first treats errors as an inevitable outcome of learning, ends up embracing them as a resource and encouraging their production. Thus by creating “organizational climates that value errors as a source of learning . . . organizational leaders can create an aesthetic of imperfection and an aesthetic of forgiveness that construes errors as a source of learning that might open new lines of inquiry” (Barrett 1998, 619; emphasis added).5 Third, organizational leaders must develop the equivalent of minimal structures that will allow and enable maximum flexibility and maintain ambiguity while providing sufficient orientation. Such equivalent structures might be “credos, stories, myths, visions, slogans, mission statements, trademarks” (612). Fourth, organizational leaders must cultivate a work environment characterized by
“distributed, multiple leadership in which people take turns leading various projects as their expertise is needed” (618). These and similar recommendations, which are prevalent in this literature, focus on the incorporation into organizational policies of those features of jazz improvisation that are productive of uncertainty and whose very logic harnesses and promotes unpredictability of outcomes. The reasons for this orientation are twofold. First, these recommendations are related to the paradigmatic shift in organizational studies that culminated in the realization that contingency and uncertainty have become part and parcel of the environments within which many organizations must function today. A flexible organizational structure has a better chance of coping with contingencies than an inflexible one. However, I suggest that a more interesting reason is based in the realization that some organizations must assume organizational structures that harness uncertainty into their very logic of operation, not only because they may help cope with external contingencies but also because such structures are productive of contingencies that are essential for organizational survival in specific economic sectors and fields of operation. Organizational theorists have realized that “different environments place different requirements on organizations . . . Environments characterized by uncertainty and rapid rates of change in market conditions or technologies present different challenges . . . to organizations than do placid and stable environments” (Scott 2003, 96). A software company must function within an environment that is completely different than that of a standardized container manufacturing company. 
To succeed, the former must not only follow external changes in the industry but also continuously produce them.6 Hence many programmatic calls to adopt organizational models from jazz improvisation into organizations have been motivated by the hope that such models can foster a culture of new product development. If the creativity of jazz musicians consists, in part, of their ability to continuously produce unexpected events and then to elaborate some of them into meaningful structures, then, hopefully, an organization that adopts this organizational model might be able to continuously produce unexpected ideas that could then be further developed into and function as viable innovations in fields in which to remain stagnant is to perish. It is for this reason that organizational models inspired by jazz improvisation have been discussed predominantly in the context of “new product development in turbulent environments” (Akgun et al. 2007; Moorman and Miner 1998), “product innovation” (Kamoche and Cunha 2001), and the functioning of small software organizations (Dyba 2000),
rather than in the context of organizational change in general (Mantere et al. 2007). I suggest that this organizational strategy is a form of producing "potential uncertainty," to be distinguished from "possible uncertainty" (Samimian-Darash 2013). Based on her analysis of Israeli governmental strategies of preparedness against biothreats such as the pandemic flu, Samimian-Darash has articulated the distinction between these two kinds of uncertainty as follows:

Possible uncertainty . . . is dependent on past knowledge, calculation, and evaluation (the chances of a particular risk being realized). I claim that this uncertainty is comparable to "risk" and that various risk technologies are available to act on it. Potential uncertainty, by contrast, does not derive from the question of whether one future possibility or another will be realized (as in the case of possible uncertainty) but from a virtual domain with the capacity to generate a broad variety of actualizations. (2013, 4)

These "actualizations" capable of being generated by this "virtual domain" may have never taken place before and hence are not known and cannot be known in advance. Similarly, the organizational strategies I have discussed thus far are motivated by the idea that the jazz band functions as a "virtual domain" capable of generating a wide variety of musical events that can be developed into structures whose full meaning becomes apparent only retrospectively because of their emergent and relatively unexpected nature. Consider how Barrett (1998) conceptualizes the emergent nature of jazz improvisation:

The improviser can begin by playing a virtual random series of notes, with little or no intention as to how it will unfold. These notes become the material to be shaped and worked out, like pieces of a puzzle. The improviser begins to enter into a dialogue with her material: prior selections begin to fashion subsequent ones as these are aligned and reframed in relation to prior patterns. (615; emphasis added)

To be sure, jazz improvisation (particularly in straight-ahead jazz) takes place in the context of a specific harmonic progression known in advance, and a jazz musician typically draws from a common stock of conventionalized or stylistic building blocks (such as phrasing, timbre, rhythmic and melodic formulae, forms of musical interaction between players, a specific
repertoire) to construct his or her improvisation in the course of performance (Berliner 1994; Monson 1996). In this respect, jazz improvisation actually passes back and forth across the possible and the potential. At the same time, it is an emergent communicative event typically characterized by higher unpredictability than, for example, the execution of a piece of Western classical music. Hence it is conducive to the production of a domain of virtuality and real-time creativity that organizational theorists have found appealing in light of their belief that in turbulent environments that require organizations to incessantly develop new products, services, and strategies, there is limited use in technologies that engage with uncertainty only in the form of the calculation of the chances of the realization of a particular possibility already formulated or imagined in advance. Although managers certainly make conjectures about what might be "the next big thing" and are engaged in calculating the probability that this or that next big thing would actually materialize, new product development and innovation must incorporate technologies grounded in potential uncertainty: they must be able to constantly generate new events that were not hitherto thought of and then develop some of them into viable products to really gain competitive advantage in the marketplace. At the same time, these events cannot be entirely unrelated to a company's present context of existing products, services, technologies, and organizational structure.
Hence it is precisely the fact that jazz improvisation passes back and forth across the possible and the potential that makes it valuable as an organizational model for the production of contextually meaningful uncertainty and innovation (Wilf 2013a).7

The organizational need to foster this specific configuration of uncertainty also explains why many organizational theorists have turned to the creative arts in general in search of new organizational models. As I have explored elsewhere in detail (Wilf 2011), modern normative ideals of creative agency are closely associated with the idea of self-expression, originally articulated by Romantic thinkers in the context of the arts. The ideal of self-expression stands in contrast to the ideal of sincerity. Whereas sincerity entails the clear articulation of inner states previously known in advance, that is, making sure that my words and public actions align with my already existing private states, self-expression is concerned with open-ended situations that are characterized by a lack of knowledge, multiple possibilities, and potential creativity. It is concerned with the future fulfillment of a potential that is not yet known to the subject . . . this ideal stipulates that the subject can find himself or herself only
through articulation and expression. Interiority is revealed only as it is articulated because it is a potentiality. (Wilf 2011, 471–472)

One discovers one's nature by generating different, as yet unknown scenarios and then attending to one's feelings vis-à-vis these scenarios in an attempt to see what fits. Self-expression is thus contingent on the generation of scenarios "whose value is precisely in the fact that they represent future possibilities, not existing states. These fictive scenarios may or may not align with the subject's interiority" (Wilf 2011, 472).

Note that the difference between sincerity and self-expression mirrors the difference between possible and potential uncertainty. Whereas sincerity is concerned with the success of publicly articulating an already known private state, self-expression is concerned with generating numerous scenarios that are unknown in advance, in an attempt to discover what scenario actually resonates with "who one really is." There are numerous technologies for generating such potential scenarios, some of which I have analyzed during fieldwork in various ethnographic contexts of socialization into the creative arts (Wilf 2010, 2011, 2013b). The organizational strategies I have discussed thus far—the goal of which is to find organizational templates capable of generating numerous potential scenarios that are tested against the practitioners' expertise and knowledge of the market with the hope that some of them may eventually culminate in new products and innovations—align with this model of self-expression developed in the sphere of the creative arts. This alignment explains the appeal of the arts in general and of jazz in particular to organizational theorists at the present moment. However, the resonance of such organizational strategies with key modern normative ideals of creative agency has implications that transcend organizational functionality.
As I argue in the next section, in addition to the competitive advantage such organizational strategies promise to deliver in the context of highly volatile economic and institutional environments, these strategies also naturalize and normalize an increasingly prevalent form of work-related uncertainty and its experiential dimensions, which have hitherto remained undertheorized.

Naturalizing a New Form of Work-Related Uncertainty

A growing number of anthropologists have been studying contemporary work-related modes of uncertainty. Many of them have done so by attending to these modes' experiential dimensions. However, their focus has been

40 / Eitan Wilf

either on those “melancholic subjects . . . often found at the bottom of the social ladder” for whom precariousness is “draining— a half-death,” or on the privileged few found at the top of the ladder who experience the extreme insecurity of their work as a source of self-worth because it is wedded to “extraordinary privilege” (Muehlebach and Shoshan 2012, 336, 337). The former include people such as factory and blue-collar workers who find themselves in the detritus of Fordism where job security does not exist and who experience “their precarious lives as anxiety-filled states of exception” (ibid.). The second group includes the Wall Street bankers studied by Karen Ho (2009); that is, “some of the most privileged actors in the global economy today.” They, too, can be fired at any given moment but in their context “precariousness is called flexibility and . . . flexibility is coveted as part of a personal skill-set indispensable to the portfolio of bankers” (Muehlebach and Shoshan 2012, 336). However, precariousness and increased uncertainty are fast becoming the lot of many workers found between these two extremes. These people are neither factory workers nor investment bankers. They are midlevel workers in various contemporary knowledge-based companies who are expected to inhabit creative uncertainty. I suggest that the transformations in the organizational theories I have discussed thus far have significant experiential implications for these workers, who belong to “the creative class” (Florida 2003). Because this chapter does not rely on an ethnography of work environments restructured according to these organizational strategies, ascertaining these strategies’ concrete implications must be pursued elsewhere. However, that these strategies might have such implications becomes evident in light of the relevant organizational theories.
If past organizational theorists such as Elton Mayo were interested in developing ways to increase work satisfaction by emphasizing leadership skills that involve anger management, empathy, and emotional intelligence, contemporary organizational theorists are interested in the emotional and cognitive skills that might make people better improvisers and better able to inhabit creative uncertainty. Drawing on the example of jazz, a number of theorists have argued that improvisers “benefit most from high levels of anxiety and confidence” (Mirvis 1998, 588). Most telling are the practical recommendations they have formulated following this realization, namely, to simulate and stimulate anxiety in the workplace: Many companies, for example, run crisis simulations to prepare people and work units to handle computing system breakdowns, accidents, or environmental emergencies. Why not also simulate a corporate takeover, the loss of a

star performer, or the launch of a marketing campaign by a competitor? Even if the rehearsed situation never arises, the emotional and operational lessons learned about improvising can be generalized. Want more real-time risk and consequences? In corporate training, Outward Bound trainees tip over into the water when their design of a river raft proves faulty. (Mirvis 1998, 588) To simultaneously boost both anxiety and confidence, companies are setting “Olympic” performance targets and using senior leaders’ success stories to inspire everyday employees. The dilemma here is that these goals are not reachable and the consequences of failure are still dire. The senior management’s hope, inspired and desperate, is that anxious and pumped-up people will “figure it out” and deliver; or at the very least fail forward. (Ibid., 589; emphasis added)

Note the ways in which the new organizational transformations I have discussed thus far are productive of increased uncertainty with its attendant experiential dimensions. This is not only a by-product of the concrete recommendations given by organizational theorists about how a contemporary organization should be restructured (e.g., abandoning routinized ways of doing things, embracing errors, developing minimal structures that will encourage maximum flexibility and ambiguity, and cultivating task distribution rather than clear roles). Such theorists also explicitly and deliberately endorse initiatives whose goal is to increase workers’ anxiety up to a certain “optimum level,” thus presumably training them in real-time improvisatory problem solving and creativity. An anthropology of contemporary modes of uncertainty must train its analytical lens on such midlevel workers who are expected to inhabit creative uncertainty and its experiential dimensions as a condition of possibility for being productive. Furthermore, in such environments workers are expected to understand these expectations and the experiential dimensions of which these expectations are productive as conditions of possibility for their own self-expression as individuals. The turn to the creative arts in search of organizational models that are adequate for the requirements of the twenty-first-century competitive and knowledge-based marketplace thus entails a new mode of naturalization and normalization of this kind of uncertainty. Consider again the vignette with which I opened this chapter (Barrett 1998, 620), which succinctly articulates the difference between organizational policies of the first half of the twentieth century that conceptualized the organization as an efficient, predictable, and reliable machine, and contemporary organizational policies that conceptualize the organization as a flexible, nimble, and spontaneous jazz band (Hatch 1999a; Morgan 1986). To be sure, both kinds


of organizational policies recruit key metaphors in American culture— the industrial machine on the one hand, and jazz music on the other— in response to specific economic environments and challenges. Both metaphors serve as tropes that naturalize specific configurations: The first naturalizes predictability and the second naturalizes uncertainty. However, in the first kind of organizational policy the worker is conceptualized as a cog in a machine whereas in the second the employee is approached as someone who is able to be as creative and expressive on her feet as a jazz musician.8 Although both kinds of policies represent modes of naturalization of social formations, the second is much more powerful because on a general level, not necessarily related to jazz, this mode recruits specific modern normative ideals of creative agency such that working in the organization appears to be a means of realizing one’s unique creative nature and even a condition of possibility for such realization. Elsewhere I have described in some detail the idea, prevalent today in the high-technology sector, that the employee is a unique creative potential whose realization can be nurtured by the design of the workspace according to specific principles (Wilf 2013b, 143–145). Such a workspace often resembles a playground populated by objects with which the employee can exercise creativity at any time: toys, games, drawing boards and crayons, an abundance of food and drink, “hip” furniture, and walls painted with unusual colors. In this way, work becomes the place where one can realize one’s creative potential and exercise expressive individualism. On the more concrete level of the jazz metaphor, we must take into account its dense web of associations in American culture and elsewhere, which, when recruited by organizational theorists, support the normalization of creative uncertainty and its attendant work-related configurations and experiential dimensions.
Because of its improvisatory and emergent nature, a number of key figures in different branches of twentieth-century artistic modernism such as abstract expressionism and the Beat Generation embraced jazz as a model of spontaneity, radical freedom from tradition, and willingness to embrace the unknown and its multiple creative potentialities (Belgrad 1998).9 In part, these associations, which received their clearest expression in the mid-twentieth century, were motivated by the intrinsic features of jazz as an improvisatory art form productive of emergent meanings.10 However, they were also motivated by the image of jazz as an anti-institutional art form associated with lower-class African American communities and transgressive modes of behavior such as uninhibited sexuality, use of drugs and alcohol, unusual dress codes, and nonstandard modes of speech (Monson 1995; Ogren 1992; Ake 2002). Many of these associations were overdetermined by racialized discourses that appropriated blackness as a locus of authenticity and primitivity free from institutional reins (Gioia 1988; Torgovnick 1990). Important for our discussion, young white middle-class Americans in the mid-twentieth century were fascinated by jazz mostly because of their hope that it could provide them with an alternative to the confining postwar mainstream corporate culture as portrayed in a number of popular sociological essays, the most famous of which was William Whyte’s The Organization Man (2002 [1956]). These essays focused on the malaise and alienation produced by the growing suburbanization and corporatization of the United States that symbolized for many young Americans standardization of life, conformity, and blind obedience. Jazz was considered to be a potential anti-institutional antidote to this organizational world, a countercultural space in which people could take refuge. Ironically, today jazz is recruited by organizational theorists to salvage the corporate and organizational world itself from conformity, rigidity, and lack of flexibility. In other words, it is recruited to produce the “cool” organization man. In this way, a specific and increasingly prevalent form of work-related uncertainty is naturalized, normalized, and legitimized. Indeed, it becomes a key dimension of “cool.” In the process, the organizational theorists who play an instrumental role in such naturalization are themselves legitimized. Some of these theorists explicitly revel in the contradiction of “incorporating jazz” but only to legitimize their own position as forward-looking and innovative. In doing so, they also perpetuate the racialized discourses that have supported the image of jazz as an inherently anti-institutional art form associated with transgressive modes of behavior. Consider the following vignette that appears in another contribution to the Organization Science special issue:

What is surprising is the turn to jazz as a metaphor for business today.
It is artsy, performed disproportionately by people of color, still has an undercurrent of booze and drugs surrounding it, and frankly doesn’t sell that well to a broad base of customers. In short, it’s the antithesis of much of what we think about when we think about business. At the same time, hearing it, playing it, and even pondering it is good for the head, heart, and soul. It’s plain enough that many ‘scientific’ breakthroughs emerge from paradoxical insights where the discoverer is passionately caught up in the investigation and becomes ‘one’ with the phenomena— head, heart, and soul . . . Not long ago, [one scholar], commenting on the discovery of paradox in organization life, advised researchers to go places they have never been, think thoughts that never occurred to them, and use research methods with which they were not familiar. That this jazz thing would be enacted at the Academy of Management

and appear in a mainstream journal follows right along. The improvisations behind it smack of planned serendipity and my sense is that the product reflects collective individuality. (Mirvis 1998, 591)

A couple of points immediately stand out. First, note the ways in which Mirvis reproduces the link between jazz and a number of essentialized markers of anti-institutionalism, associating jazz with “people of color . . . booze and drugs,” which make it “the antithesis of much of what we think about when we think about business.” In this way jazz is celebrated for its anti-institutional properties even as it is recruited by organizational theorists to buttress contemporary mainstream institutions. Second, the associations the author attributes to jazz have very little to do with the music’s actual conditions of existence. From the late 1970s, jazz has been played by a growing number of white middle-class Americans, as well as learned and performed mostly in the hundreds of academic jazz programs that grant higher degrees in jazz performance, in large part because of the gradual disappearance of commercially viable jazz scenes and performance venues from almost all major American cities (Wilf 2010, 2012, 2014). Oblivious to these transformations, Mirvis reproduces a number of stereotypes that situate jazz on the intuitive, emotional, and unmediated side of culture, and hence as an art form that has a recuperative potential for a culture (organizational or not) focused too much on abstract theory and cognition: “hearing it, playing it, and even pondering it is good for the head, heart, and soul.” Third, most important, Mirvis suggests that the paradox generated by jazz’s anti-institutional “nature” and the fact that organizational theorists turn to jazz to improve the functionality of organizations and businesses is a testimony to the creativity of the organizational theorists themselves, that is, to the fact that they do what they preach: “go places they have never been.” The result is the suggestion that behind every “cool” organization man stands a “cool” organizational theorist.
Both the creative class and the organizational theorists responsible for theorizing the ways in which this class can be made to productively inhabit creative uncertainty are part of the story of an increasingly prevalent mode of uncertainty in the contemporary world.

Conclusion

My purpose in this chapter has been to analyze the turn to jazz in organizational studies as a way to tease out a number of insights for the anthropology of modes of uncertainty in the contemporary world. I have argued that this turn to jazz has been made in response to shifts in the nature of


the twenty-first-century organization and the challenges it must cope with, which entail increased levels of uncertainty. I have argued that we must analyze the organizational strategies that draw from jazz improvisation on two levels: their organizational efficacy and their ideological and experiential ramifications. On the one hand, designing the organization in the model of the jazz band is an attempt to give it a competitive edge in an increasingly uncertain marketplace. It entails harnessing “potential uncertainty” into the organization’s very logic of operation. On the other hand, designing the organization in the model of the jazz band and infusing it with metaphors of creative agency are also instrumental in normalizing an increasingly prevalent form of work-related uncertainty in knowledge-based companies— namely, creative uncertainty— and its experiential dimensions, as well as in legitimizing the organizational theorists who are instrumental in developing and institutionalizing this managerial framework. Of course, the body of theoretical knowledge I have examined is not homogeneous. Specifically, in tandem with the waves of enthusiasm about the jazz metaphor, and beyond the cursory token lists of its limitations provided by some of its most ardent supporters (Barrett 1998, 620; Weick 1998, 552–553), a number of scholars within organizational studies have criticized the use of the jazz metaphor on various grounds: for being incomplete in its description of jazz and in its account of the nature of twenty-first-century organizations, for its political implications, and for failing to take into account the implications for productivity of the increased anxiety that employees are likely to experience as a result of designing organizations in the model of the jazz band (Cunha et al. 1999, 332; Kamoche et al. 2003b; Hatch and Weick 1998).
Such criticism notwithstanding, this strand of scholarly research and practice is indicative of major transformations in contemporary modes of work-related uncertainty that beg further clarification.

CHAPTER THREE

The Gaming of Chance: Online Poker Software and the Potentialization of Uncertainty

NATASHA DOW SCHÜLL

A man sits before a large desktop monitor station, the double screen divided into twenty-four rectangles of equal size, each containing the green oval of a poker table with positions for nine players. The man is virtually “seated” at all twenty-four tables, along with other players from around the world. He quickly navigates his mouse across the screen, settling for moments at a time on flashing windows where his input is needed to advance play at a given table. His rapid-fire responses are enabled by boxed panels of colored numbers and letters that float above opponents’ names; the letters are acronyms for behavioral tendencies relevant to poker play and the numbers are statistical scores identifying where each player falls in a range for those tendencies. Taken together, the letters and numbers supply the man with enough information to act strategically at a rate of hundreds of hands per hour. Post-session, the man opens his play-tracking database to make sure the software has successfully imported the few thousand hands he has just played. After quickly scrolling through to ensure they are all there, he recalls some particularly challenging hands he would like to review and checks a number of filters to reveal for further analysis only hands that match these criteria. While replaying the hands forward in simulations to see how different actions might have played out, he performs a statistical calculation to determine whether his actual win rate for the session met the win rate that would have been expected from the cards he was dealt and how much this had to do with skill or luck. He consults a graph of his “aggression factor” to convince himself he hasn’t been playing as timidly as he used to and, finally, makes some notes in an Excel spreadsheet on minor behavioral adjustments to apply during his next session.
Satisfied that he has taken adequate inventory of his performance that day, the player closes the program without once checking to see how much he won; now is not the time to be misled by short-term data.
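The “skill or luck” calculation in the vignette can be approximated with all-in equity: compare what a player actually won in the pots where the money went in with what his hand’s winning probability said he should win on average. The record layout and the numbers below are hypothetical illustrations of this idea, not a description of any tracker’s internals, though commercial tools report an “EV adjusted” line along broadly similar lines:

```python
def ev_adjusted(hands):
    """Compare actual to luck-neutral winnings across a session.

    Each record is (won, pot, invested, equity): whether the player won the
    pot, the total pot size, the amount he put in, and his probability of
    winning when the money went in. All fields are illustrative.
    """
    actual = sum((pot - invested) if won else -invested
                 for won, pot, invested, equity in hands)
    expected = sum(equity * pot - invested
                   for won, pot, invested, equity in hands)
    # A positive gap means the player "ran hot"; a negative gap, that the
    # cards, not his decisions, cost him money.
    return actual, expected, actual - expected

# Two identical all-ins as an 80 percent favorite: one won, one lost.
print(ev_adjusted([(True, 200, 100, 0.8), (False, 200, 100, 0.8)]))
# → (0, 120.0, -120.0): broke even in practice, 120 units below expectation.
```

This is the sense in which the player can separate the quality of his decisions from the outcome of any single session.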


At first examination, software-assisted online poker would appear to be a contemporary instance of what Ian Hacking (1990) called “the taming of chance.” In his book of that title, Hacking extended his earlier work (1975) on the seventeenth-century emergence of probability, a mode of thought inspired by experiments with games of chance and distinguished by the novel recognition that the past does not determine what will happen next. The notion of pure randomness was tempered by the rise of statistics in the nineteenth century and worldly phenomena came to be understood as governed by statistical laws; chance was “tamed” not in the sense that it could be controlled but in the sense that it could be subjected to calculation. This historical development is often cited as an antecedent for the kinds of risk-management technologies and practices that have flourished in capitalist economies since the 1980s, the argument being that they extend the taming process Hacking had identified, seeking to convert the chaotic uncertainties of chance into calculable, governable risks (Ewald 2002, 286). Yet what other modes of relating to uncertainty are afoot in the contemporary world? Might an excess of scholarly attention to the chance-taming projects of so-called risk society obscure these other modes? Recently, scholars have explored domains of modern risk-taking such as contract law (O’Malley 2000b), financial derivatives (Arnoldi 2004; Appadurai 2011, 2012), day trading (Martin 2002), and futures trading (Zaloom 2006) to argue that uncertainty can figure as a resource to invite, cultivate, and exploit rather than a liability to reduce, mitigate, or control. “Financial speculation is an active, voluntary engagement with risk,” writes Zaloom; “to work with risk is to engage fate and to play with the uncertainties of the future” (93).
Competitive gambling, in which action depends on uncertainty (without it, there simply would be no game), is an example of what O’Malley has called “enterprise uncertainty”: gamblers approach uncertainty as a field of potential profit. But what are we to make of their use of online poker software as described above? Is it yet another instance of the mobilization of probabilistic expertise against uncertainty or, alternatively, evidence of the generative, untamed aspects of uncertainty? Or perhaps an indication that actuarial and speculative treatments of uncertainty can intermingle in hybrid forms such as O’Malley’s (2000b, 465) “enterprising prudentialism”? As this chapter will show, although some poker software features serve to reduce uncertainties by turning them into statistically calculable risks, the preponderance serve to help gamblers abide and strategically engage with uncertainties that simply cannot be converted into known risks and to actively foster and play with new uncertainties. On the whole, poker software


is better regarded as a tool for “gaming chance” than for taming chance, in the sense that it works to potentialize rather than to minimize uncertainty. Drawing on interviews with gamblers, observations of online poker play, and discussion threads from poker forum archives, I examine how data tracking and analysis software configures the field of uncertainty and enables players to act in response to that field. The uncertainties that arise in the course of play are multiple, each unfolding from the next in an ever-complicating cascade: What cards are others holding? How might they play those cards? What cards do they suspect you of having and how do they believe you are likely to play them? Are they tracking you as you are tracking them? If so, how will the actions you take affect their statistical models of your behavior? These uncertainties, it should be noted before proceeding, occur within the context of a rule-bound game and in this sense are of a more finite nature than those involved in the cases typically considered in the scholarly literature on risk and uncertainty: global public health, biosecurity, nuclear threat, global financial catastrophe, and the like. Yet online poker and its associated technologies and practices offer a window onto more general forms and dynamics of contemporary subjectivity and the key role that uncertainty plays in those forms and dynamics. Gamblers who use poker-tracking software, I argue, are experimenting with modes of decision making and self-governance oriented toward the open-ended indeterminacy of uncertainty rather than the limiting, definitional project of risk calculation. As we will see, these modes value performance over outcome, multiple data points over single events, virtual over real time, and potentialization over actualization of the self.

The Rise of Online Gambling

The first real-money online poker game was dealt on New Year’s Day in 1998; ten years later, annual revenue from online poker had grown to $6 billion. Despite heavy legal restrictions on the activity in the United States,1 more Americans play than any other national group: some 10 million in 2010 (Skolnik 2011, 117). At the close of 2011 the US Department of Justice reversed its stance on the legality of Internet gambling, permitting individual states to institute online gambling. Since then the gambling industry has quickly mobilized, with Nevada, New Jersey, and Delaware in the lead. Restrictions on online gambling are likely to be further rolled back as all levels of government look for new consumer activities to regulate and tax (see Schüll 2012; Skolnik 2011). Online poker sites commonly offer Texas hold ’em, Omaha, seven-card stud, and other popular versions of the game. Since the game of poker pits


gamblers against one another rather than against the house, the house makes its money by collecting a “rake” (or percentage commission) on each cash game played or from entrance fees for tournaments. Online purveyors stand to collect far more rake than their land-based casino counterparts because players can gamble at multiple tables simultaneously when online— an activity called “multitabling.” Skilled players also stand to make more money when multitabling, for instead of the twenty to thirty hands they might play in an hour of live poker, they play as many as two thousand— a rate at which they can increase their exposure to hands worth betting on. A poker site explains: “Playing at more tables simultaneously can significantly increase your hourly wage.” As the reference to “hourly wage” indicates, an increasing number of online poker players approach the activity as a form of work— not wealth by sudden, singular windfall but by rapidly executed increments of labor.2 In 1939 the philosopher and cultural critic Walter Benjamin drew an extended analogy between gambling and the repetitive, speeded-up process of industrial machine labor. “Gambling,” he wrote, “even contains the workman’s gesture that is produced by the automatic operation, for there can be no game without the quick movement of the hand by which the stake is put down or a card is picked up” (1968, 179). In the case of online poker, the “workman’s gesture” has been reduced to the click of a mouse. In its speediest form, when players are gambling at the maximum number of tables permitted (twenty-four), play is referred to as “grinding.”3 Although grinders exponentially increase their exposure to risk, they do so in a way that reduces overall volatility.
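The claim that volume reduces volatility is, statistically, the law of large numbers. A back-of-envelope simulation makes it concrete; the 52 percent per-hand win probability and unit stakes are hypothetical numbers chosen for illustration, not figures from the chapter:

```python
import random
import statistics

def session_winnings(n_hands, rng):
    """Net result of one session: win 1 unit with probability 0.52, else lose 1."""
    return sum(1 if rng.random() < 0.52 else -1 for _ in range(n_hands))

def win_rate_spread(n_hands, n_sessions=300, seed=7):
    """Standard deviation of the per-hand win rate across many simulated sessions."""
    rng = random.Random(seed)
    rates = [session_winnings(n_hands, rng) / n_hands for _ in range(n_sessions)]
    return statistics.stdev(rates)

# The spread shrinks roughly as 1 / sqrt(n_hands): over a 30-hand live session
# the realized rate is dominated by variance, while a 2,000-hand online grind
# clusters near the 0.04-units-per-hand expectation.
print(win_rate_spread(30), win_rate_spread(2000))
```

Under this toy model the grinder’s realized rate sits close to its expectation, which is what makes the “hourly wage” framing quoted below plausible.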
“In theory,” says Emil, a twenty-six-year-old biostatistician and former recreational poker player, “the more hands you play the more the variance will even out and you’ll reach your optimal expected wage.” “I try to approach it very rationally,” says Justin, a professional online poker player who participates in games with a $25,000 minimum buy-in, “to optimize my income.” Phenomenologically speaking, the experience of multitabling is significantly different from live poker— in which a gambler sits at one table and attends to a single event stream, sometimes playing his cards but more often folding and waiting. Online, a player is “present,” virtually speaking, at many tables at once, his attention distributed across a vast portfolio of games and events; there is no waiting, just constant action. Given the quickened pace of play, the time he can devote to each game decision is reduced. Monetary stakes, like time and attention, are spread across multiple games, thinning a sense of investment in the unfolding action narrative of any one table. Winnings, too, are diluted— for while profits go up overall when multitabling, “with each additional table that you play, your winnings per table will drop,” a poker website explains. This is due to missing turns at one table while taking action at another and to bad decisions made in haste. To ensure the highest possible “return on investment” (or hourly wage), multitablers must determine the maximum number of tables at which they can play well enough. “When you’re playing in real life, you’re playing every hand the best you can,” says Winslow, a theoretical computer scientist and specialist in algorithmic problem solving currently working toward his doctorate at the Massachusetts Institute of Technology. “Online, you’re weighing optimal play per hand against the optimal number of hands you can play in time.” In all these respects— temporal, attentional, financial— online poker would appear to be a “shallow” rather than a “deep” form of play, in contradistinction to the anthropologist Clifford Geertz’s (1973) famous description of gambling as a profoundly meaningful encounter between subjects in which social status and players’ very existence are at stake.4 Erving Goffman’s (1967) sociological account similarly depicted gambling as a focused, existentially freighted affair in which card-playing heroes engaged in “character contests” that allowed them to demonstrate courage, integrity, and composure in the face of contingency. Online multitablers, methodically clicking their way through thousands of hands per session while consulting statistical indices to guide their actions, are decidedly unheroic figures. Yet no matter how multiple the tables, how micro the stakes, and how fleeting each moment of play, online players cannot avoid the linear temporality of decision making: they must, ultimately, act from a single position in time without knowing what the outcome will be; uncertainty cannot, in the moment of action, be circumvented.
Niklas Luhmann (1993) defines risk as the problem of making decisions at the limit of knowledge, on the border between present and future. Risk, adds Randy Martin (2002, 106), “presents not only the limit of what can be known in the present but also the burden of acting as if one could know.” Poker-tracking software and its evolving array of features and functions alleviate this burden by enabling players to act confidently yet without pretending to know what will happen next. In this sense, the technology equips them to abide— and, potentially, to profit from— uncertainty.5

Poker Technics: The Multitabler’s Equipment

Software such as PokerTracker and Hold’em Manager6 depends on constant tracking and recording of play-by-play game information: what cards the player was holding, what plays he made, what plays his opponents made, and, if the information gets revealed, what cards they were holding. This data is collected from the “chat log” that appears below every table. Putatively there to give otherwise anonymous players a space to socialize as they might during live play, the log also automatically records all game events as they occur (see figure 3.1).

Figure 3.1. Play-by-play event data in an online poker chat log (created by author).

Tracking software draws this information into a database of “hand histories” that becomes the raw material for a number of analytic features. In what follows I examine these features, moving from in-game tools designed to facilitate rapid decision making to retrospective tools designed to prepare players for future sessions.

Acting in Real Time: The Heads-up Display

During a game session, the heads-up display (HUD) is the most important poker software feature at a player’s disposal.7 The HUD continuously queries a player’s database to provide up-to-date information on opponents’ behavioral patterns, presented in panels of letters and numbers that hover over the players’ names (see figure 3.2). The figures on display, which may shift as real-time actions and events are fed into the database of hand histories, can be read as virtual “tells”; instead of looking at one’s opponent across the table and trying to sense him out in real time from behind sunglasses as in live poker, an online player consults the HUD’s summary of historical data with a quick glance. “If I see that a player typically never raises after he checks and is deviating from that behavior,” explains Justin, “I can make certain deductions about how strong his cards might be.”


“You can create profiles of people in a way you could never do offline,” says Emil. “In live poker you have to sit and watch and try to remember what a person does to get a sense of how they play; you have to keep track of everything in your head. Online, you don’t have to waste your energy remembering things— you have all these statistics overlaid on the screen.” Justin comments, “I don’t know of anyone who can actually remember ‘This player has been at the table for exactly 87 hands and has raised preflop exactly 11 times’; it’s more intuitive, like ‘This player has been raising a lot in the last few hours.’” When betting at multiple tables online, memory becomes even less reliable than in live poker, and intuition less available. The software works as “an external memory,” as Justin puts it. “You trust the information more than your own memory and you feel more comfortable taking action, and doing it faster,” says Emil. “The numbers make the whole decision-making process easier, less agonizing . . . it becomes much more of a binary, yes/no process.” HUD numbers may help a player to feel more confident in his decision-making process yet they do not pretend to pin down an opponent’s behavior or predict what he will do next; they do not, in other words, eliminate uncertainty. Rather, they draw on a database of continuously accruing historical events to indicate emergent behavioral tendencies; they serve as a means for what Luhmann called “provisional foresight,” allowing actors to adjust their responses to real-time conditions (1998, 69–70).
“The numbers in the display tell you, This player has certain tendencies,” says Winslow, “and you can take that information into account right before you make a decision about a hand.” HUD-facilitated decisions in poker thus remain interpretations rather than calculations, speculations rather than predictions.8 The latest versions of poker tracking software allow players to customize their HUD windows to show whatever mix of behavioral statistics they wish. Always included up front, however, is a set of numbers thought to capture the core style of any player, summed up by the shorthand VPIP/PFR/AF. This triptych reveals the percentage of hands an opponent chooses to play (Voluntary Put in Pot), the frequency of his betting during the first of four rounds of a hand (Pre-Flop Raise), and how likely he is to keep betting during the latter three rounds of a game (Aggression Factor). A consensus has formed around the optimal ranges for this so-called Holy Trinity of game statistics; values falling outside of these ranges “imply predictability” and therefore “can be exploited by observant players,” a poker website explains. Such players can glance at an opponent represented as “64/29/3” or “19/14/1.7” (as in figure 3.2) and instantly know whether they are up against a seasoned professional

The Gaming of Chance / 53

Figure 3.2. Heads-up display for an opponent in which the first three numbers designate VPIP/PFR/AF and the rest indicate statistical scores for a variety of other behavioral tendencies (created by author).

or an inexperienced newcomer, whether he is a pushover or a heavy bluffer, and where he falls on the timid-to-aggressive spectrum. The majority of players rely on a standard array of ten to twenty statistics in their HUD displays, swapping suggested configurations on message boards and trying out modifications in simulations before bringing them into live play. The most dedicated of players tinker with the software until they arrive at a personalized set of filters. “I use over 150 stats,” says Justin; “I select whatever outputs I want to see on the screen, and filter by them.” His current display, for example, shows 40 figures in a specific order. While a player might theoretically benefit from knowing how an opponent plays along a hundred different dimensions, HUD windows showing that many numerical values would be cognitively draining if not unassimilable, and would potentially overwhelm the aesthetic experience of play itself—especially with multiple tables open on the screen. A secondary, more granular set of statistics pops up when a player hovers his mouse over any given figure in the primary HUD. “Behind every stat is another set of stats,” says Justin. Consulting these deeper statistics takes time; it is done strategically. PokerUtilities.com, a website dedicated to discussion of emerging tools for online poker, recommends:

It can be very useful to commit one afternoon to customizing these pop-up screens until they show the information you want them to show. Make sure that only the helpful information per statistic is shown. Especially when playing numerous tables it can be very important to quickly find the information you are looking for . . . you will need to invest some time to optimize the pop-ups to make them more efficient and save yourself time when having to make a decision.
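The triptych can be made concrete with a short sketch. The code below is illustrative only, not drawn from any actual tracking program; the hand-record field names are invented for the example.

```python
# Hypothetical sketch of the VPIP/PFR/AF computation described above.
# The hand-record fields are invented; real trackers derive equivalent
# values from parsed hand histories.

def hud_triptych(hands):
    """Return (VPIP %, PFR %, AF) for one opponent's hand records."""
    n = len(hands)
    # VPIP: share of hands in which the player voluntarily put money in preflop.
    vpip = 100 * sum(h["put_money_in_preflop"] for h in hands) / n
    # PFR: share of hands in which the player raised before the flop.
    pfr = 100 * sum(h["raised_preflop"] for h in hands) / n
    # AF: postflop bets and raises divided by postflop calls.
    bets = sum(h["postflop_bets_and_raises"] for h in hands)
    calls = sum(h["postflop_calls"] for h in hands)
    af = bets / calls if calls else float("inf")
    return round(vpip), round(pfr), round(af, 1)
```

The division of labor mirrors the prose above: two preflop percentages computed over all recorded hands, plus a postflop ratio of bets and raises to calls, so that three aggregates summarize hundreds of recorded decisions.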


To further ease the decision-making process, poker players can configure the software to change the color of a given indicator when it passes certain statistical ranges. Not only do color changes break up the monotony of a wall of numbers, they also alert players, via intuitive visual triggers, to opponents’ exploitable behavioral patterns as they emerge. While basic values like AF (aggression factor) are readily legible to a moderately skilled player without color, more complex behavioral values—especially those composed of numerous different statistics—are hard to detect without color even for a player of Justin’s caliber. His advanced statistical dashboard is coded to provide him with color cues in such cases. “Certain stats are indicators of what to do in certain situations,” explains Justin. “So if I look at the HUD and see that they’re all green, I know I should play aggressively.” Software developers are constantly expanding the orbit of potentially significant data that can be automatically tracked and legibly displayed in the HUD. The capacity to take “notes” on particular scenarios or game occurrences, for instance, was recently added to the HUD’s repertoire. Formerly, players were urged to keep Excel spreadsheets open during a play session, record memorable moments as they happened, and review them periodically to find patterns. Such a system left it up to players to decide, in real time, that something noteworthy had happened, and to take the time to note it. Automated note taking, programmed to detect and record the incidence of prespecified behaviors or “note definitions” (such as how many seconds an opponent takes to make a decision, which might be correlated with bluffing), releases players from this task and frees up time for more game play. Note definitions are “fully customizable and there are millions upon millions of combinations,” reports an online review of a note-taking program.
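As a concrete illustration of how such a definition might work, here is a hedged sketch; the event fields, the eight-second threshold, and the bet-size condition are invented, not taken from any shipping product.

```python
# Hypothetical "note definition": a predicate run against each recorded
# game event; when it matches, the note is attached to that opponent.
# Field names and thresholds are invented for illustration.

def long_tank_big_raise(event):
    """Note opponents who take unusually long before a large raise,
    a timing pattern some players believe correlates with bluffing."""
    return (event["action"] == "raise"
            and event["seconds_to_act"] > 8
            and event["bet_size"] > event["pot"])

event = {"action": "raise", "seconds_to_act": 12, "bet_size": 150, "pot": 100}
print(long_tank_big_raise(event))  # True
```

A tracker evaluating predicates of this kind against every logged event is what lets the note “flash” without the player having to notice anything himself.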
Once a note definition has been created, that note will flash in the HUD whenever an opponent fits the definition in question. It is important to reiterate that the HUD is not an actuarial instrument that serves to predict outcomes but a reserve of tendential indicators—clues to the directions events could potentially take. Tendency, writes Brian Massumi (2002, 30), can be understood as “pastness opening directly onto a future”; it pertains to “the intermediate space between what has occurred and what is about to occur,” as Samimian-Darash (2013, 3) has defined the field of “potential uncertainty.” The HUD provides players with a compass to navigate this field—that is, to more quickly detect what might be happening in any given moment and where they might gain an edge. It is no surprise that they spend so much time calibrating, recalibrating, and tuning this instrument of detection. “I put quite a lot of effort into configuring how


I use the software, knowing what data to use and to combine, and what you can extract from it,” says Justin.9 Yet the HUD’s statistical scores, color-coded ranges, note definitions, and flash alerts add up to more than a detection apparatus, for they do not merely register events as they emerge but actively shape them. As much as the HUD indicates action it cues action and in this sense “affects the actualization of events before they take place” (Samimian-Darash 2013, 20). Following Deleuze’s notion of the virtual as “the intensive multiplicity out of which the actual emerges,” the HUD could be said to “virtualize” events (Arnoldi 2004, 33; see also Galloway 2012). Extending this idea, it could be said that the HUD virtualizes players as well as events—not simply in the representational sense but also in the sense that their unquantifiable potential for action is immanent in every figure of behavioral tendency displayed on the screen. It is not just other players who are virtualized in this way; so too is the acting player, for in addition to statistically sussing out his opponents using the HUD, he can use the technology to see how he appears to them. “It’s also important to keep an eye on your own stats, as tracking software has become so popular that it’s likely other winning players at your table will be using it and looking to exploit you in the same way,” an online tutorial suggests to novices. Justin notes:

You never know for sure if they are tracking you, so before assuming that, I try to gauge what information they might have on me. I do this by looking at their behavior toward me and also at the speed of their play against me. Based on that, I can guess how aware they are of how I typically behave, and can adjust my behavior accordingly.

One way Justin adjusts his behavior is to frequently change his play style—for example, to alternately loosen and tighten his range of starting hands when playing against the same opponent. He thus uses HUD technology not only to compose a statistical profile of his opponents that can help him decide how to act in relation to them, but also to figure out what kind of profile they might be composing of him and how he might scramble the data he generates so as to keep them guessing about his play style. The best profile is one that gives off no signals or “tells” that could be exploited by discerning opponents; such a profile is ideal precisely because it remains in the sphere of uncertainty. While the HUD could be said to serve as a tool of uncertainty reduction when used to gauge the potential behavior


of others, when used reflexively it serves as a tool of uncertainty cultivation. The key is to methodically extinguish all signs of passion—desire, weakness, or intention—from one’s data stream, so as to seem as truly random and unpredictable as possible.

Retrospection: Post-Session Analytics

While the HUD helps players dial down their human passions in the heat of the game, a different set of poker software tools helps them prepare for dispassionate play through retrospective exercises. In between game sessions, when players are not caught up in the rapid-fire stream of decisions that online play demands, they are invited to turn to their hand-history database and attempt to discern what patterns and habits might be revealed there. A range of queries can be put to the data: Am I overvaluing or badly playing certain hand combinations? Am I playing too many hands from a certain position? Do I become aggressive or timid in certain situations? The point is to reflect upon past action so as to shore up “leaks” in one’s game. A player can revisit the game scenarios he suspects he played suboptimally—perhaps all hands in which he held an Ace or in which he was the first to act—and “replay” them in the form of simulations showing “how they could have gone differently,” as Winslow puts it.
By keeping the known information constant (i.e., the cards in one’s hand and those shown on the table) while varying the unknown information (i.e., the cards held by one’s opponents), “you can logically try to reason out the other lines you could have taken,” says Justin; “you can see what you would have won on the preflop, and on the flop, and on the turn and on the river [different stages in a round of betting]—what the chances of winning would have been if you had made any number of different choices.” In effect, simulations convert actual events back into a virtual field of potential actualities, training players to more easily “see through” the singularity of any given decision moment and recognize the multiple futures it carries. “In the moment, the right decision is not clear,” says Emil, “but in the aggregate you can see how it makes sense to act; certain things come up over and over again and start to make sense.” Unlike the risk-management scenarios that Lakoff (2008) has described in the domains of bio-security and public health, which are designed to arm actors with the tools and response set to cope with a specified array of possible futures, these “reverse scenarios”10 help players cope with the necessarily uncertain future of any hand by returning them to a point in the past and confronting them with the branching diversity of outcomes that might have emerged from it. Such


a vantage does not reduce uncertainty but accustoms players to it, diminishing the consequential load of individual game decisions and facilitating the decisive, speedy flow of multitabling. The subjective stance sought is one of equanimity in the face of uncertainty and outcome variance.

Another post-session analytic tool that helps players cultivate such a stance is the All-in Expected Value (AIEV) calculator. Looking back on a session, the calculator assesses the odds a player had of winning those hands in which he went “all in” against another player. (While all-in bets are relatively rare in live play, they occur often in online multitabling due to the sheer volume of hands players encounter.) “I can look back and say, Today I got into ten 50-50s and five 20-80s and four 40-60s and six 70-30s,” reports Winslow. In other words, he made ten all-in bets with a 50 percent chance of winning, five with a 20 percent chance, and so on. Also called pot equity, AIEV calculates what a player theoretically “owned” of a pot. “Basically, if you have a 40 percent chance of winning, you can think of that in the long run as owning 40 percent of it,” Justin explains, “because if you played the hand out an infinite number of times, that’s how it would work out. So that’s your expectation.” In actuality, a tie notwithstanding, one player will walk away with the entire pot and the other with nothing. Thus, the AIEV calculator cannot be described as a predictive technology, even in the retroactive sense, for it is not concerned with how a specific hand will turn out but rather with what a player can statistically expect from it. “Your expectation is based on the long term and that’s what should tell you how to act in the short term,” says Emil. The point is to base one’s expectations and one’s actions in an infinite rather than a finite register.
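The arithmetic behind pot equity is simple enough to sketch. The code below illustrates the idea only, not any particular calculator; the $100 pot sizes are invented, while the win-probability counts follow Winslow’s hypothetical tally above.

```python
# Illustrative sketch of all-in expected value (pot equity): each all-in
# "owns" a share of the pot equal to the probability of winning it,
# whatever the actual outcome turned out to be. Pot sizes are invented.

def session_aiev(all_ins):
    """Sum of pot * win-probability over a session's all-in bets."""
    return sum(pot * p_win for pot, p_win in all_ins)

# Ten 50-50s, five 20-80s, four 40-60s, and six 70-30s, each for a $100 pot:
session = ([(100, 0.5)] * 10 + [(100, 0.2)] * 5
           + [(100, 0.4)] * 4 + [(100, 0.7)] * 6)
print(round(session_aiev(session)))  # 1180
```

The $1,180 figure is what the session was theoretically “worth”; actual winnings in those twenty-five showdowns could land far above or below it, which is precisely why players are told to consult the former rather than the latter.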
To that end, poker players are emphatically encouraged to disregard their actual all-in winnings—for they may have won every all-in wager they made during a session of play, but only out of luck. Instead of calling up winnings after a session of play, they should call up their AIEV scores—and only after a statistically significant number of sessions have been played, since only a large number can be trusted to render an honest assessment of their performance. “Once we have played enough hands to make our sample size meaningful, the data will be more honest than our own impressions of how we stack up,” writes a player on an Internet poker forum. If players find their scores to be in the negative range, they know they have been playing too loose (e.g., betting on too many 20-80s and not enough 80-20s); if they find their scores favorable, then they should feel good about their performance—regardless of actual game outcomes. “If you’re playing well,” says Emil, “you should feel just as good whether you’re losing or winning.”11 Justin emphasizes this point:

I never look at what I won; I just rate my performance. I don’t care how much money I made—it’s totally irrelevant, there’s almost no value to it . . . I guess knowing that might influence my happiness in the moment but that itself is ridiculous since I should be happy or not based on how well I played. I don’t want that to be an emotional trigger; I don’t want any emotions connected with losing or winning money because it’s totally useless. Some days I win, some days I lose.

While losing players in a live game of poker might take small comfort in the knowledge that they “played correctly” (that is, according to statistical laws), in the context of online multitabling where they play tens of thousands of hands every month, such knowledge grants a sense of ontological security. The ontology at stake is not that of a self whose value is determined in moments of winning or losing but, rather, a self whose value accretes through many, tiny actions over time. In order to optimize his value potential, such a self must respect the law of large numbers at every decision point. In keeping with this respect, skilled online players resist the temptation to retrospectively query or consult their tracked data too frequently. Winslow explains: “A lot of novice players get impatient and make the mistake of overvaluing their data—they get biased by short-term information and ultimately make poorer decisions. You have to have a lot of data points for anything you detect to be statistically significant—otherwise you can’t confidently conclude that a pattern is real.” He depicts himself as a dynamic database whose “real” value is emergent and impossible to evaluate without sufficient temporal resolution. Justin echoes his point: “It’s important not to look at the data too often, because you need to have a fairly large number of hands not to be fooled by randomness. You have to safeguard yourself against that.”
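The players’ point about sample size can be illustrated with a toy simulation (the 2 percent edge and the session lengths are invented numbers, chosen only to make the statistics visible): a player with a genuine long-run edge still loses a large share of short sessions, which is exactly why short-term data misleads.

```python
# Toy simulation of being "fooled by randomness": a player wins each
# unit bet with probability 0.51 (a real but small edge), yet many
# 100-hand sessions end in the red, while a long run reflects the edge.

import random

random.seed(1)

def session_result(n_hands, p_win=0.51):
    """Net units won over n_hands independent unit bets."""
    return sum(1 if random.random() < p_win else -1 for _ in range(n_hands))

losing_sessions = sum(session_result(100) < 0 for _ in range(1000))
print(losing_sessions)              # roughly 400 of 1,000 short sessions lose
print(session_result(100_000) > 0)  # the long run comes out ahead
```

Nothing in the short sessions distinguishes this winning player from a losing one; only the aggregate does, which is the intuition behind waiting for “a fairly large number of hands.”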

Tilt Management: Regulating the Passions

Each of the software tools I have considered thus far—whether in-game or retrospective—is designed to help online poker players act in linear, worldly time yet from the vantage of an infinite temporal field in which probabilistic values can be trusted to bear out. HUD numbers, reverse-scenario simulations, and the AIEV calculator assist players in the project of abiding outcome variance in the short term, arming them against the dreaded state of “tilt.” In tilt, a given event or set of events triggers emotional reactions, loss of perspective, and a compromised ability to make decisions wisely; players inflate the significance of events as they happen and lose sight of the long-term horizon.


“I wish I was a robot,” the much admired live poker player Jennifer Harman once confessed to a journalist, explaining how hard it was to act, in any given moment, according to the statistical laws that she knew, rationally speaking, she should trust. The likelihood of tilting increases online, as do its costs: if a player tilts in a live game, she can sit out a couple of hands to clear her head without great consequence; but if she tilts online, the effects quickly bleed over to other tables, linking them in a dangerous cascade of emotional reactivity.12 The challenge multitablers face—to act in worldly time without being affected by event outcomes—is akin to the challenge that online financial traders face as they move in and out of trades in a matter of seconds, striving all the while to “treat each trade as if it has no effect on the next” and to “ignore a sense of continuity” between past, present, and future trades (Zaloom 2006, 133–134; see also Knorr Cetina and Bruegger 2000, 2002; Zwick 2005, 2013). Some gamblers use software add-ons specifically designed to protect against tilt. Tiltbreaker, for instance, offers take-a-break reminders; “automated lockdowns” triggered by big wins, a certain number of hands played, or a certain amount of time played; and a Rage Quit button for moments of “super tilt.” Others emphasize the development of self-awareness and inner strength. On a poker-forum thread entitled “managing tilt,” one member posted a long message advising his peers on how they might track, manage, and ultimately avoid tilt.
He began by distinguishing between the main forms of tilt: angry tilt, in which losses despite statistically correct play tip players into overly loose and aggressive play; frustrated tilt, in which mounting exasperation at being dealt bad cards and having to fold for an extended period triggers impulsive, sloppy play in games that players should exit; fearful tilt, in which the trauma of past losses results in overly tight and passive play; and, finally, despondent tilt, in which others’ luck leaves players feeling they are bound to lose, a form of resignation that negatively affects their play and threatens to become a self-fulfilling prophecy. “Beware of your really ‘giddy or euphoric’ feelings too!” warned the post. “The strong emotions aroused by winning can be just as mind-clouding as any form of poker despair.” The author went on to urge his fellow players to “set up a tilt management plan” with ready-at-hand techniques for identifying and combating tilt in its various guises. He recommended they perform “self-checks” every thirty minutes by taking inventory of any feelings of frustration, revenge, anger, or despondency that might be creeping into their game, rating the severity of those feelings, and applying counteractive measures. One might “walk away from the computer immediately,” for instance, and stay away for


ten minutes, if sufficient to “un-tilt” oneself—or for twenty-four hours, if necessary. The important thing is to “ensure that you stay away long enough to rationalize the cause(s) of your tilt.” The work of “rationalizing” the causes of a tilt episode could involve “spending some time re-tooling your game” by way of retrospective investigation (“I recommend reviewing hands after each session, unless you are on tilt or too tired—then save it for the next day”), self-education on blogs or from poker-strategy books and websites (“Thou Shalt Understand Probability and Variance,” reads an article on rules to avoid tilt), or posting data from one’s tilted session on poker forums and message boards so as to receive feedback and advice. To keep themselves from tilting in the first place and to mitigate tilt when it does occur, players not only make use of software tools but also create custom routines of self-discipline. In one online discussion a gambler describes how he writes down every “automatic negative thought” that crosses his mind during a play session and afterward writes out a “rational response” to each of these in an effort to banish them from future sessions. His method recalls the early Christian practice of writing down thoughts and actions as a safeguard against sinning; he depicts himself as if at a similar moral crossroads, yet instead of being pulled between God and Lucifer, he is pulled between rationality and tilt. Justin has developed a particularly elaborate system of self-regulation to manage his reactions to in-game events and protect himself against tilt. Directly before a session of play he consults his “warm-up checklist” (see figure 3.3), a document he regularly revises. Simple items—such as making sure his desk is clutter-free, that he has a glass of water, that he has eaten enough food to sustain him through a session of play—are accompanied by larger goals, notes on how to raise motivation (e.g., do some pushups, study poker), and categories such as “mental focus points.” The latter includes the only entry he has underlined: “Take the time for decisions. Count out loud.” Directly beneath this line is a sublist of “REASONS TO TAKE TIME BEFORE CLICKING / MAKING A DECISION,” the first of which reads: “I click less from emotion.” Justin reflects:

You’re making so many decisions that a lot of them will just happen intuitively. In most cases that’s fine, but when I enter that grey area where it’s not certain what I should do, I want to make sure I don’t rely only on my own intuitions. What I do is pause every time I’m facing a difficult decision. I try to count down in my head, three, two, one . . . I breathe in and out and try to override my intuition. Recently I ordered a metronome to see if it might help with

TO DO:
- Office clean-up
- Good music / good software
- Skype off
- Sufficient food / drink

GOALS:
- 100% focus on poker
- Optimize mental game
- Improve game (Make notes during play)

GAME CHOICE:
- 4 to 9 tables (sometimes 12, but watch out)
- If less than 6 people at tables, don’t play

LOW MOTIVATION:
- View motivation points
- Do some pushups
- Take short breaks
- Study poker

MENTAL FOCUS POINTS:
- Only focus on poker
- Take time for decisions. Count out loud.

REASONS TO TAKE TIME BEFORE CLICKING / MAKING A DECISION:
- I click less from emotion
- I have time to change my mind, change my bet sizes, etc.
- So far this has always worked well
- I do not give off time-related tells to opponents
- It is annoying for opponents
- I win more money
- I feel more professional
- I play more often when I do this
- It’s frustrating when I don’t do it
- It forces me to think about every situation rather than mindless clicking
- It helps me with one of my “mental” goals this year: making more mindful decisions

Figure 3.3. Justin’s “warm-up” checklist. Given to author by anonymous gambler.

that process and prevent me from making decisions too quickly. My thinking is that if I have a metronome, it will give me some sort of external rhythm. I plan to experiment with that.

While the HUD serves as an “external memory” for Justin, a metronome, he hopes, could function as an “external rhythm” to bring him out of the affective intensity of uncertain moments and restore him to the realm of rational reflection, presence, and equanimity. After every session of poker, Justin consults his “cool-down checklist” (see figure 3.4), recording the time of day he played (morning, midday, evening), the amount of time elapsed, the total number of hands played, and scores for focus and technique based on the rating criteria he has developed, which range from “mega-tilted” to “maximal game time spent focused.” Finally, he records comments on areas for self-improvement. One entry reads: “Evening, 120 minutes, 1,305 hands played, Focus 7, Technique 7. Think it went ok. Next time: better focus, tighter play, fold preflop when in doubt.” “I use


Figure 3.4. Justin’s “cool-down” checklist. Given to author by anonymous gambler.

the information to try to adjust my behavior in the next session,” he says. “I have a whole working document with a long list of things I could adjust. I am constantly revising it.” Justin’s tilt-prevention checklists are not unlike the self-scrutinizing, self-doubting diaries of the Puritans, in which they took rigorous inventory of their passions in an effort to renounce them (Paden 1988; Weber 1958). His checklists also evoke the Jesuits’ systematic method for recording sins and sinful thoughts, a practice designed to help rid them of passion so that they could remain indifferent in the face of worldly events (Quattrone 2004). Yet recently Justin has made a small but significant revision to his approach, inspired by the realization that to act optimally in moments of uncertainty he must leave himself open—just a little bit—to signals of an affective, qualitative, intuitive nature. He explains how his new orientation departs from his former discounting of all emotion as illusory and in need of taming:

If you imagine a scale from negative 5 to plus 5, I would say that I want to be at a +1. For a very long time I thought the best state to be in was zero—I operated that way for years. Operating at 0, you’re acting like a perfect robot. But the risk in that for me was that I almost didn’t listen to any emotional signals, because I was trying to rationalize everything. But now I try to let in a signal so I can then decide if I should take that signal into account in my decision-making process or not.

To get himself into the target state of +1, Justin takes simple measures: “One of the things in my warm-up used to be not drinking coffee—but now I


always drink one cup of coffee or espresso before a session, it has become a ritual.” Music is also important: “Basically what I do is configure my playlist to get me in that emotional state of +1—so some days I choose mellow music, because maybe I’m already at a 3 and I need to bring myself down, and other days I choose more activating music to bring myself up.” Justin’s affective reorientation from zero to +1 can be understood as a reorientation from risk to uncertainty, from taming to gaming. His experiments in quantified self-regulation have led him to conclude that too tightly bracketing his emotions closes him off from the potential that lies in the uncertainty of the game and stifles his ability to respond decisively to that potential:

I’ve come to understand that if I use a rational model for everything and become more robotic then I feel disconnected from the world and not really sure of what I want to do . . . That’s why I try to open the interval to +1. Before, I tried to ignore or discount my gut feeling because I thought it was never to be trusted; I didn’t know what I could do with it. Now, I try to use it as a signal in those grey areas where things are uncertain.

In a sense, the interval of +1 marks the interval of uncertainty that Justin recognizes he can’t do away with—and, indeed, should not—if he wants to optimally game chance. In that interval, the task is not to statistically assess but to intuitively apprehend. As Appadurai (2011, 525) writes of contemporary financial actors, Justin uses “intuitions, experiences, and sense of the moment to outplay other players who might be excessively dominated by their tools for handling risk alone.”13

Lessons for Life: “Create Your Own Justice”

The Dutch historian Johan Huizinga wrote in the late 1930s that play involves “stepping out of ‘real life’ into a temporary sphere of activity with a disposition all of its own” (1950 [1938], 8). Two decades later Erving Goffman proposed a less divided relationship between play and real life, characterizing games of chance as “world-building activities” that rehearse life “by immersing us in a demonstration of its possibilities” (Goffman 1961, 27, 34). For online multitablers, many of whom make money (and even a living) from poker, the game is neither a radical break from nor a rehearsal for life. It comes closer to anthropologist Thomas Malaby’s (2003, 147) description of gambling as “a semibounded refraction of the precarious nature of everyday experience, a kind of distillation of a chanceful life into a seemingly


more apprehensible form.” Online poker and its suite of software tools, I argue, provide a kind of testing ground for experiments in navigating the uncertain terrain of a world that, as Niklas Luhmann observes, “has come to be regarded as more fluctuating, more contingent. Each instant has a vaster, and thus more unpredictable, future. Contingency, risk and indeterminacy have become predominant” (Luhmann 1998, 94–95). How does software-assisted, online poker help players meaningfully orient to such a world? As I have shown, the technological mediation neither tames nor provides refuge from perceived contingency; rather, it helps them to develop a subjective “readiness” for living with uncertainty. This readiness is characterized by the capacity to be simultaneously uncertain and decisive, speedy and cool-headed, and to maintain a temporally discontinuous view of outcomes. Players recount how the stoical stance they cultivate toward events-in-time carries over from online play to life offline. Winslow reflects:

You’re tougher when things don’t go your way in life because you’re used to making the right decisions and not having things go your way in poker. When you play a lot online, at multiple tables, you can very visibly see the swings—you learn that in the short term there will be lots of variance, even if you’re making all the right decisions. You get a very good sense of the degree to which luck is at work, how much it matters. And you realize that it’s no different in life: sometimes you do the interview very well and you still don’t get the job. Thinking this way helps you stop connecting particular outcomes to your performance. This type of mentality really helps me when I fail at something in life and by the same token, when I succeed—because even if you win it could have been due to luck, not because you made the optimal decision at every turn. You can kind of see through a bad or a good outcome to all the other ways it could have gone.

Life events, the game of poker trains its players to see, are meaningful only as part of a pattern, and that pattern is revealed only over time. In a sense, this kind of thinking “de-actualizes” an event by placing it back into the field of potential even as it occurs—just one among other potential events that could have come to pass. I asked Winslow how this attitude did not lead him down a nihilist road. Why act at all? “Because in the long run if you make right decisions—the statistically correct decisions—you’re likely to come out ahead,” he responds. “What you care about is the long haul, and you learn to rise above the moment. It doesn’t make you want to give up—it makes you want to play the game better, which means playing to reach your optimal statistical potential.” What is controllable, or rather gameable, is the way in which one approaches, makes,

The Gaming of Chance / 65

and reacts (or better, does not react) to decisions made in real time under conditions of uncertainty. The object of the game is to not to master chance but to master indifference to the outcomes it deals in real time and, in this way, act more gracefully and profitably in relation to it. As the Puritan lives under God’s mercy, the poker player lives under the mercy of time; divine providence is replaced by the providence of probability, election by luck. The analogy comes across in a quote from a software developer who designs programs to help players resist the tendency to become tilted by the “injustice of the game.” In his blog post “How to Avoid Tilt” appears Rule #9, entitled “The Poker Gods Knoweth No Justice”: There really is no justice to this game, at least not until the very, very long run of things, but it’s really just a microcosm of life isn’t it? You will have horrible, gut- wrenching downswings where nothing goes right and nothing is fair; but you must persevere. Create your own justice; continuously push forward until the numbers inevitably yield in your favor.

Salvation, here, will come if one abides short-term variance; time, not God, is the protective, just force in which one must place one’s faith. As the Puritan has no way to intercede in God’s decisions about who will be saved and who will not and can only be humble and self-vigilant, the poker player has no way to influence chance and can only play as much, as fast, and as well as he can. “The new religion of the market,” writes Appadurai (2011, 528), “treats the market as the source of certainty, as the reward for disciplined focus on its messages and rhythms, and as the all-powerful power that rewards its own elect, so long as they obey its ethical demands.”14 The injunction to “create your own justice” can be read as a response to the ethical demands of the market. Evoking the contours of a broader speculative habitus (Lee and LiPuma 2012, 293), the poker player strives to make the best decisions he can in moments of uncertainty so as to ever more closely approach his optimal yield. “We are taught to focus on the quality of our decisions, and if we make enough of them, we will win in the long run,” writes a participant in an online poker forum. As long-term participants in volatile financial markets, these subjects have learned to cope with erratic downturns in the near term; they accept that they must dwell in uncertainty for the foreseeable future; they have faith that variance will yield to smooth gains in time, as long as they tend to leaks in their game and “persevere.” They work to self-potentialize rather than to self-actualize; they expose themselves to uncertainty rather than avoid it; they seek to game chance, not tame it.

66 / Natasha Dow Schüll

Coda: Tragedy of the Commons

The use of poker “bots” (or robots) that pose as players online is shunned by those committed to the game. Bots are shunned not because they can beat humans; indeed, while the more aggressive of the bots can beat most amateurs fairly quickly, they are not a threat to skilled players. Instead, they are shunned because they can be set to multitable around the clock, collecting vast quantities of data on real players; other players can then purchase this data and pull up detailed informational profiles on opponents they are encountering for the first time. This is considered “cheating” in no uncertain terms—a shameful violation of the rules of the game that compromises the potential for players to “create their own justice.”

Alongside the denouncement of poker bots’ infiltration into the game, there is a creeping concern among players that their own use of tracking tools, now a universally accepted aspect of online poker, might become so advanced and so rampant that the very existence of the game will be endangered. The worry is that as more players adopt a statistically “winning” strategy, a point will be reached where no uncertainty remains—or rather, where uncertainty will no longer serve as a resource for gaming chance. “If everyone uses these stats and uses them correctly,” says Emil, “then there will be no room left to have an edge—because everyone will have the same information, like we’re all bots playing each other, and the game will be ruined for everyone.” “If everybody uses the technology,” echoes Winslow, “it’ll be a tragedy of the commons.”

PART TWO

Security and Humanitarianism

CHAPTER FOUR

Policing Uncertainty: On Suspicious Activity Reporting

MEG STALCUP

Introduction

Several of the men who would become the 9/11 hijackers were stopped for minor traffic violations.1 Mohamed Atta was cited for driving without a license in Florida near the end of April 2001. When he failed to appear in court, a warrant was issued for his arrest. The warrant, however, seems not to have been flagged properly, since nothing happened when Atta was pulled over again, for speeding. In the government inquiries that followed the events of September 11, 2001, and in the press, these brushes with the law were missed opportunities. But for many police officers in the United States,2 they were moments of professional revelation and were also personally fraught. “It is always a local cop who saw something,” said the deputy director of an intelligence fusion center.3 He replayed for me how the incidents of contact with the men had unfolded, and the uncertainty of every encounter, whether a traffic stop or someone taking photos of a landmark.

Shortly after 9/11, major professional organizations for US law enforcement mobilized a series of working groups. Funded by the Department of Justice, these brought together leading city- and state-level law enforcement from around the country, and representatives from federal agencies. The groups worked on designing policies to include police officers in national intelligence,4 producing detailed recommendations and plans. Among these was what would eventually come to be the Suspicious Activity Reporting Initiative. Through its operation, police officers, as well as members of the public and industry, could submit tips and incidents of note from the ground. In turn, the federal government would communicate to participants timely information on security threats. While state, local, tribal, and federal governments; citizens; and those in the private sector were all included in
the initiative, the network was organized around fusion centers and the work of police, who centrally designed both. The idea was to capitalize on what patrol officers already did when dealing with the general public. An officer who sees someone behaving suspiciously will observe and decide whether or not to write up the incident. This routine documentation of suspicious activity was systematized into steps for gathering, evaluating, and sharing information. An incident report is sent from the police department to dedicated intelligence specialists for evaluation. They may decide that an incident was innocuous or ordinary crime, in which case the submission is referred back to the local police or to an apposite task force. However, if an incident seems, in official language, to also be “reasonably indicative of preoperational planning related to terrorism or other criminal activity” (US DOJ 2010a, 1), a Suspicious Activity Report (SAR) is created and uploaded through portals to the shared network spaces of a number of government agencies and fusion centers participating in the initiative. An FBI team of local officers and federal agents from various departments called a Joint Terrorism Task Force may be part of the decision process or notified subsequently, in order to take operational action if warranted.5

The design of the initiative was distinctively anticipatory. Suspicious behaviors were taken as the precursors of a threat that was still in virtual form. Officers and intelligence analysts already cultivate a relationship to uncertainty as part of their expertise and their very subjectivity. The problem they faced in this undertaking was not how to collect ever more information (others were engaged in that task). Nor was it how to associate disparate pieces of a plot in order to predict or preempt the future (this is a later step). Instead, their task was to detect a potential event by discerning elements of the pre-event (Samimian-Darash 2009).
The uncertainty inherent in the public activities they watched and analyzed had to be parsed, but without overdetermining a situation whose potential, as the traffic stops of the hijackers had shown, could exceed known or speculated possibilities. Events can be prevented, preempted, and anticipated, among other approaches, and in this chapter I look at these three as technologies of security in US counterterrorism, focusing on their differing modes of uncertainty. Prevention arises from the roots of risk (Ewald 2002), where uncertainty is a function of lack of knowledge. The same knowledge used to define a risk also supports efforts to address causes, to stop the threat from manifesting. Yet prevention has temporal limits (it cannot address an imminent threat), as well as epistemological ones (uncertainty comes not only from what one does not know, but from the potential inherent in the virtual for something
else to happen). Preemption deals with the virtual by instigating the manifestation of threats to better manage the actual form taken (Massumi 2007). To these two technologies, already well-described in the literature,6 I add anticipation. Through empirical analysis of the Suspicious Activity Reporting Initiative, I suggest that anticipation deals with potential uncertainty not by creating possibilities but by detecting precursor events as they actualize. The initiative relies on the capacity of officers and analysts to discern suspicion, which entails a distinct mode of uncertainty, particularly in the subjectivation of police officers and intelligence analysts. Finally, I place “policing uncertainty” within a broader national intelligence counterterrorism assemblage and the governance of security.

Technologies of Security: Preemption, Prevention, and Anticipation

Experts of war traditionally separated preemption and prevention in relation to uncertainty, which had a significant temporal dimension (Macedo 2008).7 A threat could be both certain and imminent, as when an invasion was mobilized and on the border of a country. Under such circumstances, preemption was understood as akin to throwing the first punch against an already circling opponent (Doyle 2008, 81). Preemption in this sense had a narrow window, and moreover was necessarily unusual, because complex real-world scenarios rarely offer certainty, and determined attackers (those very ones about whom one is likely to be certain) do not want to publicize their imminent action. When the threat of the enemy was admittedly possible, yet uncertain or at some remove in time, then one was understood to act “preventatively,” in what was viewed as an act of aggression. Prevention could be a desire “to fight sooner rather than later” (Mueller et al. 2006, 9), or to keep enemies from acquiring “threatening capabilities” (Doyle 2008, 20), such as nuclear weapons. But with the threat neither certain nor imminent, prevention as an act of war was not justified under dominant traditions of international law.

Preemption

Following the events of 9/11, the Bush administration relabeled prevention as preemption.8 In the Bush Doctrine, a preemptive blow was one taken first, but unlike in previous war doctrine, the condition of certain, imminent threat was removed. One attacked sooner, rather than later. The justification for this shift in the meaning of preemption was an ontological change in the
nature of threat. “At the perilous crossroads of radicalism and technology,” President George W. Bush said in a June 2002 speech, “even weak states and small groups could attain a catastrophic power to strike great nations.” Therefore, he added, “if we wait for threats to fully materialize we will have waited too long” (Bush 2002). Waiting for certainty, which is to say, waiting to be sure of the threat, could have potentially catastrophic consequences; both certainty and the period of inaction were therefore refused. Preemption now not only did not require certainty, it required uncertainty. It was precisely because of what could happen (if preemptive action wasn’t taken), linked to the potential for devastating consequences, that acting was justified sooner rather than later.9

What the redefined Bush Doctrine of preemption tackled was the problem of virtuality. The United States’ preemptive attack on Iraq in 2003 was launched because Saddam Hussein might have had weapons of mass destruction, and the potential (both for the weapons’ existence and their destructive capacity) provided the grounds for action (Massumi 2007). Although the weapons were not found, in claiming potential uncertainty as the grounds for taking action, the decision to act could not be wrong. Hussein always “could have restarted his weapons projects at any moment” (ibid.). With a virtual cause, no single actualization can exhaust potential. Preemption addresses the problem that the virtual poses for security by precipitating its actualization, making potential take “a shape to which it hopes it can respond” (Massumi 2007). Guided by a desire not to limit action to a plan for what may be the incorrect future, potential uncertainty is brought directly into the sphere of governance.
Louise Amoore and Marieke de Goede describe, for example, how transactions data, such as personal remittances sent abroad or the purchase of fertilizer, are quietly collected, archived, and used “to identify a suspicious body in movement and, most importantly, to verify or deny access in advance” (2008, 173). As with the Bush Doctrine’s preemptive war, action—stopping someone from boarding an airplane, for example—is not taken on the basis of knowledge, the mass of collected data about the person, but on “an absence, on what is not known, on the very basis of uncertainty” (Amoore 2011, 27). The potential for threat in this uncertainty leads to preemptively denying entrance or access. Although the algorithms do not produce causal patterns, and they do continuously adapt by incorporating their own results and new data, they are nonetheless “an already encoded set of possibilities” (Amoore 2011, 34). Potential is preemptively turned into possible futures that emerge from the knowledge, imagination, and judgment of the software developers who create and refine the algorithms. These are assigned to travelers as they catch
trains or wait at borders: losses of liberty, sometimes small, sometimes great, that constitute the “banal face of the preemptive strike” of the war on terror (Amoore and de Goede 2008, 173).

Prevention

Next in the trio of governmental technologies discussed here, “prevention” traces genealogically from a nineteenth-century approach to insecurity (rather than war doctrine’s concern with what counts as just war) that was “tied to a scientific utopia ever more capable of controlling risks” (Ewald 2002, 282). Risks in this specific sense are those threats for which probability can be calculated from knowledge about the past. François Ewald noted that the identification of risk presupposes “science, technical control, the idea of possible understanding, and objective measurement.” One way of dealing with risk is insurance, which shares risk among a broader social body and compensates individuals for harm suffered. The same knowledge that grounds insurance, with its calculation of damage and compensation, also provides the basis for prevention. Instead of compensating for someone’s losses, expertise in prevention seeks to reduce “the probability of their occurrence” (ibid.); the effort is focused on root causes, so that a threat is not realized.

Preventative practices assume that a targetable threat exists prior to intervention (Massumi 2007). The preventative approach to “homegrown terrorism” in the United States, for example, takes such acts to result from “radicalization to violence” inspired but not directed by a foreign terrorist organization. The terms and terrain of action pertain to arenas of expertise that have defined the threat: specialists in radicalization. In the Executive Office of the President’s official Countering Violent Extremism (CVE) strategy (2011), homegrown terrorism is indexed to and dealt with by “mental health experts, juvenile justice officials, and law enforcement” (4), especially those with counter-gang experience.
Ewald explicates (2002, 297): “The logics of fault and prevention presuppose that, in the spheres they govern, it is always possible to articulate a standard of conduct that everyone must observe.” For the purposes of the CVE strategy, the causes of radicalization are divided into categories: poor mental health, lack of opportunity, social pressures. The strategy targets sets of behaviors through which radicalization could be sparked and fed, such as associating in person or online with other extremists, reading extremist literature, and making speech acts.10 Ewald describes prevention as “a rational approach to an evil that science can objectify and measure” (2002, 293). Such techniques create the need
for more and more information. With enough of the right information, this approach assumes, violent extremists can be assessed empirically to identify what caused their radicalization. The “predictable, linear course from cause to effect” (Massumi 2007, 5) that these preventative efforts posit neither needs nor accounts for potential uncertainty.11

Anticipation

Prevention and preemption, although but two approaches to security, are a useful dyad against which to triangulate anticipation and distinguish it as a distinct approach.12 Prevention centers on causes that hold a direct relationship to possible and knowable effects. In a preventative mode, uncertainty is the result of having insufficient information, and can be addressed by collecting data, calculating probabilities, or identifying at-risk populations. Preemption instead confronts a virtual threat by creating possible futures and bringing them into existence. A preemptive mode is one in which future potential uncertainty is made real in the present. Anticipation also deals with potential uncertainty, but instead of instigating or delimiting events, it configures a time and space of waiting, poised and vigilant, to detect the moments when precursor incidents appear. This targets an intermediary stage of event emergence, in which prevention has been unsuccessful but the threat has not yet taken a definitive form. In this “in between,” the anticipatory technology amplifies inconspicuous components of the interval in which an event is coming into existence, its becoming period. Like other “event technologies” (Samimian-Darash 2013), this means that anticipation does not deal with future potential by predicting the future. Instead, anticipation aims to make the pre-event itself governable.

One meaning of anticipation is to predict what will happen and to take action in order to be prepared; exercises, scenarios, and other imaginative methods of taking up the future present abound in government practice.
“To anticipate,” however, is also to act as a forerunner or precursor, and this is how suspicious activities are understood to anticipate the event. The SAR technology seeks to capture minor incidents that could build into a major attack; it attends not to the threat itself but to its precursors. These building blocks could be assembled into any number of final shapes in the future, but even without clairvoyance they can also be knocked out of place if identified, or serve as clues to underlying plans. If one considers the full “time of the event,” the past and future which inhere in time and divide each present infinitely (Deleuze 1990, 5), the
incidents occur in the becoming period. When instances of the would-be terrorist’s “preoperational planning” are identified as suspicious activities, there has not yet been a terrorist event, and the intent is that there never will be. Anticipatory technology configures this pre-event period rather than the event itself, because a virtual event could always exceed the knowable possibilities or prepared-for contingencies. However, while incidents that police officers observe may be forerunners to terrorist events, except for increasing capacity for intervention, the event itself remains undetermined by and external to the anticipatory technology.

Michel Foucault suggested that disciplines crossed a kind of “‘technological’ threshold” when “they attained a level at which the formation of knowledge and the increase of power regularly reinforce one another in a circular process” (1977, 224). Building on Foucault’s conceptualization, Mitchell Dean (1996) argued that “practices of government” could attain a technological orientation “when their assemblage evinces a certain kind of strategic rationality. The general form of this strategic rationality is one that brings governmental requirements of conduct into a kind of perpetual loop with technical requirements of performance” (61). Officers and analysts are constituted as “subjects of performance” in enacting the technology (ibid.). If preemptive detection via algorithms relies on decisions about uncertainty made in the design or application of the formulas, suspicious activity reporting counts on subjects of performance working in a mode of uncertainty. Marking the significance of suspicious behaviors requires cultivating discernment, a capacity understood to develop through experience and training. The following sections examine how older police knowledge was adapted into a post-9/11 technology of anticipation, and the subjectivational demands made by these shifts.

Assembling an Anticipatory Technology

Suspicious activity reporting developed at least in part because loci of experience and invention outside of federal government bureaucracies were mobilized. The initiative came out of not only the usual suspects among defense contractors, the FBI, and others in the intelligence community,13 but a “sub-state” array of venues that shape, govern, and manage daily social life—state and local government, police and legal professional organizations, civil rights and liberties activist groups, and religious advocacy organizations. The deputy director at an intelligence fusion center named what was, for him, the most significant factor: “Basically—and this was a big part—the state did not trust the federal government. They felt that there
was such a lapse that created September 11th that we needed to do our own project and track down these terrorists.”

The history of the formative role of sub-state organizations is passed over on federal government websites,14 which emphasize the authorizing legal instruments from Congress and Executive Orders.15 In fact, the top-level pronouncements generally specified what should happen—for example, state and local law enforcement should be included in national intelligence efforts—but offered little or nothing in terms of specifying how. The backstory of the two most important initiatives for the development of suspicious activity reporting, the Nationwide SAR Initiative and the National Network of State and Major Urban Area Fusion Centers, points, however, to the many effects of this shift away from federal venues and subjects.

One month after September 11, 2001, the International Association of Chiefs of Police (IACP) announced that an “intelligence sharing summit” would be held in the spring. Recommendations from that summit proposed the core elements of what would become fusion centers (IACP 2002, iv) and suspicious activity reporting (then subsumed under “intelligence-led policing”), as well as the formation of a Global Justice Information Sharing Initiative Intelligence Working Group. A fusion center director who participated in the working group, a thirty-year career police officer before he entered management, recounted their thinking:

There are a lot of cops in America. County and municipal and state and private cops, 850,000–870,000 of them . . . post-9/11, we decided, if we train state and local cops to understand pre-terrorism indicators, if we train them to be more curious, and to question more what they see, and got them into a system where they could actually get that information to somebody where it matters . . .
[we would] get cops to understand how important their role is, how they are really the first line of defense, the eyes and ears on the ground.

The working group issued the first National Criminal Intelligence Sharing Plan in 2003. A revised 2005 plan described an initiative by the Regional Information Sharing System centers (of which there are six), with the Department of Homeland Security, for liaising with local and state law enforcement agencies within the jurisdictional areas surrounding critical infrastructure facilities in order to support “reporting of suspicious activities and incidents.” That same year, a report was issued titled Intelligence-Led Policing: The New Intelligence Architecture. Also developed by police chiefs, management, officers, and other public safety officials acting as representatives of police
professional associations16 and funded by the Bureau of Justice Assistance at the Department of Justice, the report mentioned fusion centers by name, and reiterated the core idea of reporting suspicious behaviors (Peterson 2005, 11):

Patrol officers are the eyes and ears of the police effort, and they must be encouraged and trained to look and listen intelligently. Information from field interviews, interactions with business people, and other activities and observations must be captured and forwarded to intelligence staff members who can analyze the data, arrive at appropriate courses of action, and send information back to the beat officers.

Intelligence fusion centers are physical locations for receiving, analyzing, and sharing the kind of “threat-related information” that is produced by police officers reporting suspicious behaviors. Early centers grew out of existing criminal intelligence and analysis units, a pathway observed and advocated as early as the 2002 summit (IACP 2002, iii):

A variety of law enforcement and protective services organizations already engage in substantial intelligence sharing. Rather than replicating such efforts, the [proposed] Council should seek to connect them, strengthen them, support their expansion as necessary and then fill other gaps in the current system.

By the time of the 2005 publication of the Criminal Intelligence Sharing Plan and the Intelligence-Led Policing report, twenty-five fusion centers had been established, and with the 2007 National Strategy for Information Sharing, the federal government was politically and logistically on board.

Counterdrug expertise would have a formative influence on the ways that law enforcement took up its new counterterrorism tasks. Before terrorism captured lawmakers’ attention, counternarcotics had been a funding favorite, with grants that supported the establishment of criminal intelligence and “High Intensity Drug Trafficking Area” (HIDTA) programs. HIDTAs therefore were positioned to offer infrastructural support, and to serve as one of the models for conceptualizing fusion centers. At the same time, the increased importance given to terrorism meant that formerly untouchable “war on drugs” funding was at risk, and co-location offered a way to sustain the older drug programs. The counternarcotics approach to criminal intelligence in particular, its techniques, strategies, and institutional memory,
would come to be influential. Management and analysts were drawn from the ranks of former narcotics officers, among other law enforcement subgroups. They were officers who knew how to get wiretaps on mobile phones, run informants, identify gang leaders, and map gang followers. They had experience “following the money,” and spotting money laundering. All of this was viewed as applicable to counterterrorism.

Fusion center staff also came out of the military, which, engaged in simultaneous wars in Iraq and Afghanistan, was producing a pool of veteran intelligence analysts. Those who were active in the National Guard had the added advantage, from the point of view of data fusion and intelligence sharing, of providing a conduit between the military and state and local governments. Defense contractors both advised and competed to provide data management and analysis tools to the new centers. As these came to be recognized and approved by the federal government, the Homeland Security Grant Funding Program was initiated,17 and institutionalization intensified. By the time of a (highly critical) 2012 Senate review, an estimated $289 million to $1.4 billion had been spent to found some seventy centers.

Suspicious Incidents

The police officer documents the suspicious behavior of an individual, staking a claim to an element of significance in otherwise undifferentiated uncertainty. In the words of a sheriff’s deputy: “Anytime you meet someone out in the field and you think you might run into them again, you would write up an FI card—a Field Incident or Field Encounter card.” The impetus for such interactions is something that the person does to call attention to himself or herself. This “behavioral” profiling was developed in order to avoid racial or ethnic or religious profiling (all of which are illegal). Such biases raise concerns both for the operation of anticipation and for social justice (see also Amoore and de Goede 2008). However, whether it is feasible to perceive only behavior and context, or whether officers really attempt to engage in strictly behavioral profiling, is less the point here than the presuppositions of the process itself. The officers’ capacity for discernment—marking a distinction between the ordinary and the suspicious—is assumed to allow them to detect incidents in the course of duty, in public settings, during traffic stops, or on service calls.

“Creating an incident [card] would be the next level, and a report after that,” explained the deputy. “An FI card would state that you had contact with so-and-so, when and how, if cooperative or not. If you’re being written up on an FI card, you are already suspect.” No causal pattern is applied,
however, or future possibility projected, beyond what is required to make sense of the immediate behavior. The deputy brought up the situation of an old man down in a gully with a young boy. It could be a grandfather with grandson, but “if anything’s a little odd, I document it. If the kid makes a complaint three years from now, you now have enough for a case.” The details are registered because of their potential. By discerning and documenting them, instead of delimiting the future, potential uncertainty increases, as do unknowable futures.

In another example, the deputy recounted how he engaged a parolee who failed to identify himself as such. Recognizing him, and knowing that he was on parole but not the specifics of his case, the deputy said, “I want to check what his prior was for. It’s more of a moral judgment.” His assessment already includes an aspect of this “moral judgment”: the knowledge that the man had previously been incarcerated makes his failure to identify himself suspect. The Field Incident card, in its confined capture of the time, date, location, and mechanics of the encounter, is documentation of this suspicion.

Front-line officers were meant for exactly such work, said one trainer in Florida, a former criminal investigator. The intellectual history of discernment itself is largely one of how to recognize the “signs of the times.”18 This was coupled with the question of who one had to be in order to discern such signs. Despite the shift in scale, and to the secular, these questions remained central for the trainer, and to the Suspicious Activity Reporting Initiative. The discerning officer was someone who could recognize signs by drawing on intuition that was less instinct than an experiential knowledge internalized under the intensity of adverse conditions. The internalization occurred partially in training but mostly on the job.
"When you are fighting sleep, walking up and down the back alleys," said the trainer, "you come to know what does and doesn't belong. Certain things belong, certain things don't. You can tell if a car is parked in the wrong location." Police, he said, "just need to be encouraged to do their normal jobs but maybe with better tools: better indicators, items to be on the lookout for." He had doubts about cops making the connection to terrorism. "The thing about checking a box about terrorism is, is the officer on the street going to know it is about terrorism? Or is it just a peculiar thing?" But for him, this incapacity was a strength. The officers would discern the pre-event indications of terrorism without imagining them. Their task was to engage their capacity to differentiate the ordinary from the suspicious.

80 / Meg Stalcup

Functional Standards for coding suspicious behaviors were nonetheless developed to aid officers (ISE 2008; ISE 2009). Issued in 2008 and then revised in 2009, these provided a list of behaviors, criminal and noncriminal. Some agencies had officers apply the codes themselves, while in other places codes might be added to the officers' reports later, by a specialized unit or at a fusion center. Behaviors defined as criminal and with a potential nexus to terrorism were straightforward and uncontroversial; these included attempted intrusion and damaging or threatening to damage infrastructure. Those behaviors on the "Potential Criminal or Noncriminal Activity" list, however, fell on much less solid ground. One example was: "demonstrating unusual interest in facilities, buildings, or infrastructure beyond mere casual or professional (e.g., engineers) interest such that a reasonable person would consider the activity suspicious." Observation through binoculars, taking notes, and attempting to measure distances were listed as possible suspicious activities of this sort. The officer would need to intuit that something was wrong and articulate what it was in relation to the circumstances. An action could not be suspicious outside of context. And the officers were not supposed to look for those behaviors; rather, if a behavior was observed as suspicious, the codes would be used to label it. A former police officer said:

You may have an officer who says someone is taking a photo of a bridge, and asks, "what is this person doing?" then digs a little and finds it is probably a tourist. A person is taking lots of pictures of a bridge—it may be explainable. The idea of an SAR is taking behaviors that are outside the norm, and being able to share that.

Officers have repeatedly stopped photographers, however, at times leading to the erasure of legally protected images, at other times leading to arrest (Simon 2012). The nuanced shift from discerning suspicion to discerning that which is suspicious of terrorism was harder to put into practice than it seemed in theory.

Institutionalization

From October 2008 through the end of September 2009, twelve major urban and state fusion centers participated in a test run of the initiative, called the Evaluation Environment. Each site developed a set of protocols that integrated local procedures for reporting suspicious incidents into the system that had been developed in order to share information with each other and with different entities in the federal government. At the end of the Evaluation Environment period, the final report presented a summary from each of the twelve sites on human workflow, implementation and harmonization with existing technical systems, what training and outreach to the public had taken place, and recommendations.

The diversity of local approaches to handling information was one of the challenges of the initiative and its efforts to produce pre-event indicators. The working groups that had begun meeting in 2002 guided the intelligence processes that most major state and city departments put in place after 9/11, many of which had representatives in the groups. Local procedures were developed by individual departments, however, and were shaped by and around already existing city, regional, or state intelligence centers and communication infrastructure, the quality of relationships with the FBI, and localized political pressure on privacy and security concerns. The Suspicious Activity Reporting Initiative was launched in order to link these efforts into a nationwide system.

In the Miami-Dade region of Florida, the police department had been shocked by the knowledge that, as one local newspaper put it, "the 9/11 terrorists lived, trained and did business" in their county.19 Prior to participating in the Evaluation Environment, the department had a directive on "Handling of Criminal Intelligence." They issued another directive, on reporting, before joining the pilot project in late June 2008. For the official Evaluation Environment period, they decided that those two directives were sufficient and created no general or special order as part of their participation in testing the Suspicious Activity Reporting Initiative, although they did refine their vetting process. The procedures were as follows (abbreviated and lightly edited; US DOJ 2010b, 128–129):

Prior to the Evaluation Environment, Miami-Dade police officers' reports were submitted in hard copy to the department. If an officer determined that the report included suspicious activity, the report was forwarded to Miami-Dade Fusion Center (MDFC), which served as the collection point for all SARs from the department. Officers were also encouraged to call the fusion center to inform the center of the suspicious activity noted in their reports. During the ISE-SAR Evaluation Environment, the center developed a multilayer review and vetting process to identify SARs. Once the initial report is submitted by an officer, a field supervisor within the police department reviews the report to ensure accuracy and appropriateness. Sent to MDFC, it is immediately reviewed by an analyst and investigative personnel to determine its relationship to terrorism. If the SAR is credible, a detective will deploy to the scene for follow-up. Once the review is complete and analytical value added, the SAR is then reviewed and approved by an MDFC supervisor before entry into the ISE-SAR Shared Spaces.


If an SAR is deemed to be credible, feedback is provided to the original submitter of the SAR and, depending on the validity of the information, commendations can be issued.

Central to the initial reporting is the police officer as the discerning subject. The technology of anticipation relies on the officer's management of uncertainty to produce the determination of suspicious activity. Likewise, once an encounter or incident report arrives at the fusion center, there is a set of skills applied to turning it into an official Suspicious Activity Report (SAR).

Suspicious Activity Reports

The ISE-SAR Initiative idealizes suspicious activities as self-realizing actualization: the terrorist's preoperational plans are unfolding, and the officer spots them. Yet such micro-incidents do not count as suspicious activity until they are identified as such, and, taken as a complete process, this is a matter of discernment rather than technical detection. Efforts to discern the suspicious purely in behavior and context are confounded by the reality that these agents cannot perceive with discernment free of their lived experience and shaping. The training offered to officers and analysts is a particularly important venue for making explicit, or at least lessening, the power of any faulty internalized guides, although often it does the opposite (cf. Stalcup and Craze 2011). Whether the capacity for discernment is shaped in counterterrorism training or everyday policing—also with well-documented biases—it is necessarily deployed and thus "directs" the actualization of potential uncertainty. Although this has many ramifications for the ways that security shapes life, what is pertinent here is how the system handles this inherent tendency to reproduce known or imaginable possibilities.

The Evaluation Environment final report optimistically sets out a four-part "integration/consolidation" process, which involves reviewing the activities to vet them for "a potential nexus to terrorism" after collection (US DOJ 2010b, 17). In practice, procedures for this vary from one fusion center to another. A concerted effort is made to specify any ties to terrorism, in a process that privileges the decision of the analyst, like the officer, in dealing with what is in hand. The lead analyst at one fusion center explained:

I don't want a checklist. If I have an incident report, I have to look at it. Then maybe I will check three things that catch my attention. What I check is not always the first thing on the list. Let's say I look through one week's submitted reports—that [collection of reports] will tell me something, not the checklist. That will give me a sense of what is going on. Furthermore, it's intrusive to check the criminal database for someone, and I want to have a reason.

What is assessed, what counts as significant, and the analyst's sense of ethical comportment are brought to the fore. After the analyst has done a preliminary check, the incident report would be reviewed at the fusion center's weekly meeting with analytic staff and representatives from federal agencies not located in the fusion center but one floor up, which might include the FBI, ICE, and US Marshals. The procedures would be recognizable to analysts elsewhere, even if not identical. Each incident being considered for submission to the networked "Shared Spaces" or federal intelligence databases would be presented by an analyst, together with supplementary information she had collected. At the meeting, the representatives would discuss the incident in relation to "what was happening" locally and according to intelligence sent from the federal level. Like the officers on the street, the analysts are supposed to become familiar with the local state of affairs such that they have "a sense" of what something indicates. The intelligence team leader recounted how one analyst had been assigned an incident, and "started running the name down" through federal database systems:

There was a link to criminal activity in the past, specifically, drug smuggling. We went back to the local police department, and asked for old intakes. There, we found linkages with drug criminal history. So, this just doesn't pass the smell test for terrorism. It seems like a smuggling ring. That was the question we asked at the meeting, "which vat does it go into?" What we can say is, based on our findings this is what we think it is. It should go to the police department, maybe DEA, or the gang unit.

For the officers and analysts, interaction with the uncertainty that intelligence gathering addresses is part of the subjectivity they must cultivate. Formal criteria tend to be eschewed by both. At the most basic level, the officer aims to distinguish between the ordinary and the suspicious, marking this distinction in the potential of undifferentiated moments of human life observed on the street or in an encounter. They practice discerning what "does and does not belong," and through that practice the virtual event becomes the suspicious incident in the register of observations and encounters. The analysts take this documentation, as well as calls and tips that make up their raw materials, and, in a sense, repeat the process as they decide what is to count. None of it is meaningful per se; by the very norms of the analytic process the data are wiped clean of the significance attached by those who provided them—repotentialized—to be assessed. This is not to say that sources' authority and their prior assessment are unimportant. Who provided the information and why they did so are crucial points, not in and of themselves, but as criteria used by the analyst in the process of differentiating what is significant (suspicious) and what is not.

After Anticipation

An anticipatory technology, although it will multiply incidents, does not produce events. Incidents are, by design, meant to be detected in direct proportion to their independent manifestation. In relation to the bulk of those human activities that the police handle, or even criminal activities, they will be relatively few. A functional anticipatory technology will find mostly crime (unless there is much more terrorist activity in the United States than any evidence suggests). This explains the findings of a 2012 Senate committee that reviewed the fusion centers and their input into national intelligence. Attempts to assess the initiative's output by how much terrorism it found, as Congress hoped to do, were frustrated: "Most reporting was not about terrorists or possible terrorist plots, but about criminal activity, largely arrest reports pertaining to drug, cash or human smuggling" (HSGAC 2012, 2). The Senate committee also found:

Nearly a third of all reports—188 out of 610—were never published for use within DHS and by other members of the intelligence community, often because they lacked any useful information, or potentially violated department guidelines meant to protect Americans' civil liberties or Privacy Act protections.

The committee was angry and also flummoxed by these results, although dusty archives are presumably where reports should go that do not seem "reasonably indicative of pre-terrorism activity," if they are kept at all. The documentation of incidents serves not to find a specific event but to hold the "time" of potential and extend the period of the pre-event. Each traffic stop is an encounter with an unknown and unknowable future; the uncertainty, the virtual richness, of the interaction is preserved as it is turned into an official SAR, much as it had been preserved in the archives and databases of police departments. The potential is made available in the system as data that can be repurposed, but no event past that point is necessarily suggested or diagnosed; that is left to other technologies to take on. So while this basic level of national intelligence work retains and even increases potentiality, this phase does not extend into indefinite or infinite uncertainty. Suspicious activity reporting, in discerning the micro-instances that anticipate the event, is restrained and is generative as part of the broader intelligence assemblage.

Evidently the process is not politically neutral or unconstrained, residing as it does in government databases for discerning threat. The official SAR channels incidents onto the path of "terrorism," albeit not in relation to a specific event. The intermediary role of suspicious activity reporting (its expansion of the "becoming period" prior to an event) means that it can equally feed technologies of prevention and preemption. Suspicious behaviors have long been documented at the street level, but once archived, these tended to be reactivated only in the event of a crime later on. The advent of intelligence-led predictive policing increased the routine use of suspicious encounters, but retained a tight linkage to crime incidents. SARs, however, can be decoupled from criminal matters, and thereby lend themselves more easily to the speculative (or fraudulent) construction of a preemptive case against someone. This may be on the basis of a different (noncriminal) kind of profile. Surveillance of constitutionally protected activities has been found in the fusion centers of several states, suggesting that rights, and also the logic of anticipation (in which the goal is not simply more information but significant information), are challenged by the conflation of suspicion with behaviors.
Alternatively, still in a preemptive mode, if suspicious activities are linked to a subject, an agent provocateur can instigate what the FBI calls a sting operation, the "ultimate goal [of which] is to catch a suspect committing an overt criminal act such as pulling the proverbial trigger but on a dud weapon" (Bjelopera 2013). One informant described it this way:

The only way to prosecute these guys is to give them every option to trap themselves. Yes, most of the time you're putting weapons in their hands. Otherwise all you can prove is that they had intent. If you don't provide capability, what can you do? If you do, you can get them in the act. An FBI agent can't go to a mosque to convert a group to radical views. But if you get a call from a parent, and you get turned onto someone that way, you can hear him say stuff. You say, "I'm going to see how far this guy is willing to go." I am going to ask the kid a bunch of times, "are you sure . . . ?" but I've got to find out if he's going to act if he has the weapons. We are either in front of the event, and people will say "entrapment," or we are after. If we're after, the attack may have been successful.

Through the actions of the FBI, in nearly all cases since 9/11, an ensemble of incidents was preemptively provoked into a (highly controlled) event, subsequently charged as attempted terrorism. The Congressional Research Service (CRS) estimates that there have been 63 homegrown violent jihadist plots or attacks in the United States since September 11, 2001 (Bjelopera 2013).20 Although an official count of terrorist sting operations is not publicly available, the FBI has said that of the terrorist plots disrupted between 9/11 and the September 2009 Zazi plot to bomb the New York City subway, two plotters were “prepared to move ahead with their plots without the benefit or knowledge of government informants or US officials” (Bjelopera 2013, 21).21

Conclusions

Among the "Lessons Learned" in the official report on the Evaluation Environment, two were particularly notable (US DOJ 2010b, 42). The first was that there was no clear agreement on what constituted a terrorism-related suspicious activity. The second was that none of the participants were sure how suspicious something needed to be to get classified as an SAR. The heart of this confusion was not the need for better guidelines or more details in any specific case, but the fact that the initiative hoped to bring forth a discerning subject who would be able to differentiate a special kind of suspicion.

The architects of the initiative reasoned from their early insight about traffic stops with the soon-to-be 9/11 hijackers that officers had always collected and documented suspicious activities. Suspicious activity reporting was conceived of as an extension of everyday policing and criminal intelligence; the difference lay in altering "suspicious" from a primary differentiation of potential uncertainty to instead indicate a category of possibilities related to terrorism. The common legal standard of "reasonable suspicion" required for questioning, stopping, and searching someone was swapped out for "reasonably indicative of terrorism-related activity." Counterterrorism training was intended to enable whatever expansion of capacities was needed to discern that which might be a precursor to terrorism. Officers would keep themselves attuned for certain signs while they went about their routine patrol work. By accumulating suspicious incidents, located in the intermediate period that is the pre-event, the potential with which the counterterrorism intelligence apparatus overall could work was increased (although those caught up in it were not necessarily terrorists).

The initiative was thus based on the demand to learn to recognize a specific category of behaviors (possible terrorism precursors) without this compromising the capacity to discern that which is simply outside the norm. In the face of this paradox, standards of officer and analyst performance had been set, but they had not been internalized by the subjects of performance, nor were they stable. As the trainer in Florida pointed out, instructing officers to engage differently with uncertainty by looking for terrorism may distort their very capacity for discerning the suspicious. Nonetheless, police departments hoped to learn from their local fusion center what to look out for, such as timely information on a new fundraising scheme or plans to case a local transportation site, and therefore they were willing to participate.

Security, as a form of governance, is of course redesigned in an ongoing manner. Diverse logics undergird governmental practices that draw on and adapt to differently imagined futures, perceptions of crisis, scandals that impact public opinion, and shifts in jurisprudence. Discerning suspicion is only one way of handling uncertainty. Anticipation is only one event technology among many in use in security governance, and may yet be refashioned by more powerful technologies. Should capacities and practices cultivated in policing to deal with terrorism rather than uncertainty succeed in taking root, however, these may well continue, unmoored from the technologies themselves.

CHAPTER FIVE

Guantánamo's Catch-22: The Uncertain Interrogation Subject

REBECCA LEMOV

"Why are they going to disappear him?"
"I don't know."
"It doesn't make sense. It isn't even good grammar."
—Joseph Heller, Catch-22

The idea that we would still maintain forever a group of individuals who have not been tried, that is contrary to who we are, contrary to our interests, and it needs to stop.
—President Barack Obama, Press Conference, April 30, 2013 (emphasis added)

At the heart of US President Obama's public paralysis over the promise to close Guantánamo Bay Detention Camp, in the face of his administration's evident inability to do so, is a set of questions about uncertainty. The one hundred forty-nine detainees, most held for twelve years without being charged or tried, constitute a sum of human materials in limbo: it is not known what to do with them because it is not known what they might do.1 Calling the camp a "sad chapter in American history" on the campaign trail, the president on January 23, 2009, signed an executive order stating that Guantánamo Bay Detention Camp would be closed within the year. Five and a half years later, it had yet to be. Between the election of 2008 and late 2014, governmental stasis persisted despite activist intentions to close the camp and free the unjustly interned. Even as each man accrued a dossier filled with increasing amounts of information, his future (the actions he might take whether or not granted the prospect of release one day) remained unknown.

Guantánamo's Catch-22 / 89

The problem was more complex than simply the fact that, should the detainees be released, they would constitute a risk to US security. It was more the case that, at the level of juridical and military-psychological operations, these subjects constituted human versions of Donald Rumsfeld's famous "unknown unknowns"—or, to put it in another register, of non-knowledge.2 The detainees do not simply represent risks, calculated or otherwise; rather, they embody an impossible-to-calculate potential reality. Subject to the routinized application of intensively structured interrogation techniques for years, over and over, they are akin to long-term experimental subjects in an experiment that is unacknowledged by its authors. The more knowledge about them and from them that piles up as a result of systematic interrogation and information extraction, the less, in a sense, can they surely and reliably be known. Much as they live in no-man's-land—Cuban territory occupied by the United States and subject, somehow, not quite to international law—they constitute a sort of no-man's persons.3 Their status as living beings is so much in question—at stake, in fact—that they are not even allowed, as the result of their widespread hunger strikes and attempts to hang themselves in the spring and summer of 2013, to take their own lives.4

This paradox of ever-increasing knowledge balanced against ever-increasing uncertainty is captured in the dynamics of agnotology, as recently characterized by Robert Proctor and Londa Schiebinger (2008, vii). The two seek to understand "how and why various forms of knowing have 'not come to be,' or have disappeared, or have been delayed or long neglected." Ignorance or not-knowing, by their account, is not simply a void to be filled, something to be corrected, or a steadily retreating frontier. Instead, they argue, it is structurally produced, it has diverse causes and conformations, and it relates in different ways to neglect, forgetfulness, myopia, extinction, secrecy, and suppression. It is produced as an "active construct."5 Look at arenas such as climate-change debates, cigarette-industry-sponsored research, or personal-genome coding and one can see this process at work. Such areas are actively made and protected. Likewise, the detention camp of Guantánamo is a site where agnotology and epistemology—the not-seen and the seen, the unknown and the known—are in complex interrelationship. Increasing amounts of knowledge do not vanquish a lack of knowledge, but often increase it. Along these lines, legal scholar Fleur Johns makes a powerful argument that it is a mistake to count Guantánamo as a case of a legal "black hole," in which international and US national laws fail to apply. Rather, she argues, it is an instance of the hyperbolic extension of the law, of "elaborate regulatory effects by a range of legal authorities," sites of classification and legal representation, where the building up of knowledge is accompanied by the whittling away of awareness (Johns 2005, 614; Johns 2013; cf. Aradau 2007).6

90 / Rebecca Lemov

There is also the concomitant process of defining areas of ignorance. Guantánamo has often been characterized as a "limbo" outside national and international law (Butler 2004), "another animal" (Ginsburg 2004), or an anomalous zone of exception (Agamben and Ulrich 2004), yet such zones can be, and usually are, in complex interrelationship with areas of intensive knowledge production. While a great deal of scholarship on Guantánamo examines its disciplinary, juridical, and penal dimensions,7 the Catch-22 that binds its subjects, as sources both of vigorously active knowledge creation and of intensively crafted areas of uncertainty, has not been examined.

Of the total detainees Guantánamo still holds—at its height, the prison had six to eight hundred "enemy combatants"—eighty-six have been cleared of terrorist charges by all assessments, including that of the US military. Since 2009 they have been okayed for release or transfer, but they have not received release or transfer orders. In fact, the State Department office set up to arrange their transfers was closed in 2013, even as—adding a further layer of logical bind—a spokesman declared this closure did not signal they were giving up on securing transfers. This set of detainees, not enemy combatants but labeled as such for more than a decade, remains difficult to release for a number of reasons, most directly because their release is blocked by Congress and draconian post-9/11 laws. In addition, it is possible that, because they have been imprisoned so long and have been maltreated, they might begin to identify with terrorist aims, having been painted with the brush of terrorism's allies. Not a single one of these exonerated prisoners moved toward release in what became in many cases more than a decade of detention (resulting in hunger strikes in the summer of 2013, the subsequent release of a handful of prisoners, and continuing but unpublicized hunger strikes into the fall of 2014). And of course it would not be surprising to find someone imprisoned indefinitely without cause turning to struggle against the power that unjustly held him, so the fear of what such prisoners may have become, as the result of their imprisonment and interrogation, is realistic to some degree. What was discovered to be false becomes potentially true, for who could not imagine scenarios in which men might contemplate anti-American action when freed after years of unjust imprisonment?

Transfer is also a near impossibility, as it would require US funds to build a Super-Max prison somewhere on the mainland—Fort Leavenworth, Kansas, and Standish, Michigan, have been under consideration as sites.8 However, the 2011 Defense Authorization Bill (signed by the president, though under protest regarding these clauses) additionally prohibits "the use of funds to modify or construct facilities in the United States to house detainees transferred from United States Naval Station, Guantánamo Bay, Cuba" (Defense Authorization Act 2011).9 Unable to meet even the goal of ordering all exonerated men transferred, the administration accelerates the spread of despair among detainees faced with a logic along the lines of the following: We cannot release you because, although you were once innocent, through your detention, we have decided, you risk potentially becoming guilty of unknown acts. In this potentiality there is a kind of projected culpability of an unclear but undeniable sort. These men, each accompanied by a massive dossier of information drawn from or about him,10 have nonetheless become unknown unknowns.

Even after two detainees gained their release to Sudan in late 2013 and one to Algeria in March 2014,11 hunger striking against indefinite detention has continued, though its extent has become harder to gauge. The most recent administrative response has been to rephrase procedural descriptions, replacing the words "hunger strike" in the most current Standard Operating Procedures guidelines with "long-term nonreligious fast." In the new procedures, "Medical Management of Detainees with Weight Loss" replaces "Medical Management of Detainees on Hunger Strike," and the word "hunger" appears only once. An unnamed US military official told Reuters, "It's not our job to feed into their political protests. It is our job to protect their health and keep them safe. And the new [Standard Operating Procedure] gives us a much better focus on which detainees are actually at medical risk" (Stewart 2014). In response to the extensive and attention-getting hunger protests in 2013, the US military decided simply to stop detailing how many of its detainees at Guantánamo Bay were refusing to eat.
By 2014, a nurse who refused to continue force-feeding prisoners, citing a conscientious objection, was sent back to the mainland, where he faces (as of this writing) disciplinary proceedings. Footage of force-feeding sessions, intended to demonstrate the practice's brutality, has itself been forcibly blocked from release to detainees' lawyers.

Meanwhile, a second set of detainees has earned an even more complex status, deemed in the Obama administration's 2010 Guantánamo Final Report too dangerous to release or transfer "but not feasible for prosecution" (US Dept of Justice 2010).12 These forty-eight prisoners, according to US policy for the remotely foreseeable future, are being held indefinitely. Even more tangibly than the first group, they have been found guilty of something—it is called "danger," but, as I will argue, that something is closer to uncertainty—even as they are ineligible to be tried for it. Why can these prisoners not be prosecuted? Why is it not legally "feasible" to prosecute them?

92 / Rebecca Lemov

It is because they have been tortured, according to a 566-page report issued by a bipartisan task force.13 And so, just as the "final report" is not final, the incarceration has no end either. This situation amounts to an inescapable Catch-22 of illogical logic: the US military cannot justly try this second set of detainees because it has already violated their human and legal rights by torturing them.

Building from Limor Samimian-Darash's example of how such procedural legalisms operate to produce new kinds of uncertainty, there is an absurdist Alice-in-Wonderland quality at play. When Alice stands before a fork in the road offering two paths, she is unsure which way to go. But in order to take a path, the chooser must already know where she wants to go, for, as the Cheshire cat smilingly informs Alice, "If you do not know where you want to go, it doesn't matter which path you take." Neither path really exists unless or until it is actually taken, and thus uncertainty is multiplied. In such a state of uncertainty, there is no direction, and opposite paths lead to equally unknowable places. Making a choice is pointless. Alice contemplates a meaningless alternative, which results in paralysis (Samimian-Darash 2013). Likewise, there seems to be no reliable path to take in deciding the future course for the detainees. There is no preexisting answer, only the various layers of uncertainty, exigency, and conditions (political, legal, personal, historical) that combine to render a decisive solution impossible. The way forward is not simply unclear but, in light of the potent mix of public attitudes, legal proscriptions, US Congressional laws, and international outrage the situation occasions, almost impossible to imagine.14

1. From Coercive Interrogation to Enhanced Interrogation

The origins of coercive interrogation policies adopted during the Bush administration's War on Terror and continued in the Obama administration's struggle lie in the Cold War. Although the history of torture is of course much longer than half a century, this span encompasses the history of scientific torture (McCoy 2006). What follows is a capsule account of the growth of interrogation techniques from the mid-twentieth century to the early twenty-first and of the radical change in how subjects of interrogation are constituted through targeted techniques.

In 1949, an internationally broadcast crisis over the treatment of a detainee precipitated sudden changes in interrogation strategies within the United States. The detainee was Cardinal József Mindszenty, and the occasion was his world-stage debut after being held captive by Hungarian police forces. As the newsreel played, Mindszenty made his entrance in a stuttering and pathetic condition. The World War II hero confessed to a variety of crimes against the state and church. For resisting communism, he had been subjected for thirty-nine days to sleep deprivation and humiliation, alternating with long hours of interrogation, by Russian-trained Hungarian police. His staged confession riveted the US Central Intelligence Agency, which theorized in a security memorandum that Soviet-trained experts were controlling Mindszenty by "some unknown force" (Marks 1991). Surely he had been tortured—but not in the medieval sense; rather, it was apparent he had been subjected to a powerful transformative technique devised by communist forces, for he was evidently a different kind of human being. Mindszenty himself later confirmed the transformation in his memoir, recalling, "Without knowing what had happened to me, I had become another person" (Mindszenty 1974). The result would soon, among officers in the State Department and CIA, be referred to as brainwashing. This alarum in turn would call forth rounds of academic study and strategic knowledge production about a new topic: the newly powerful regime of interrogation practices that came to be labeled "forceful interrogation" and "coercive interrogation" in US social scientific and military circles (Nebraska Symposium on Motivation 1953–1954; Farber, Harlow, and West 1957; Schein 1961). Not long after the Mindszenty newsreels, other footage circulated showing US airmen confessing to having dropped germ weapons over North Korea. These POWs seemed to display the same range of behaviors and alarming confessions the cardinal had, with a similar affect of wooden disconnectedness and apparent lack of emotion.
Soon the emergence of twenty-one American POWs refusing repatriation and opting to remain in China after the UN-mandated exchange of prisoners precipitated the spectacle of the "Turncoat GIs," and this caused an even more rapid build-up of public alarm and behind-closed-doors strategizing (Pasley 1955; Lutz 1997; Robin 2003). As Richard Helms, former director of the CIA, recalled of the program his predecessor, Allen Dulles, had instigated and that he continued to run: "There was deep concern over the issue of brainwashing. We felt that it was our responsibility not to lag behind the Russians or the Chinese in this field" (quoted in Morris 2009). Paradoxically, the lag did not really exist, and in intending to catch up to the Russians and Chinese, Americans preemptively invented a newly potent set of techniques for altering a subject. The military promptly engaged a range of available social scientists, especially those with expertise in the realm of psychiatry, or what Lutz labels the militarization of a "specifically 'psychological' subjectivity" and Rose "the psy-sciences" (Lutz 1997; Rose 1989, 2008).15 Operation Little Switch and Operation Big Switch, which mandated the exchange of first a small and later a large group of POWs, became a de facto research site for investigating the failures of US servicemen to resist communist interrogation. Accompanying them home, a psychiatrist observed odd behavior:

When observed stepping down from the Chinese trucks at the American reception center, during the first moments of repatriation, most of the returning prisoners appeared to be a little confused, and surprisingly unenthusiastic about being back. During psychiatric interviews at Inchon just a few hours later, they presented striking consistencies in their clinical pictures. The average repatriate was dazed, lacked spontaneity, spoke in a dull, monotonous tone with markedly diminished affectivity. At the same time he was tense, restless, clearly suspicious of his new surroundings. He had difficulty dealing with his feelings, was particularly defensive in discussing his prison camp behavior and attitudes. (Lifton 1954, 734–735)

What had happened to their fighting spirit, their sense of honor, and even their basic desire to return home? Stripped of such qualities, each soldier constituted an unknown. Soon there emerged a massive, covertly funded experimental program called MK-ULTRA, a "Manhattan project of the mind" (McCoy 2006), which along with related programs explored and funded a great range of "human ecology," conditioning, influence, persuasion, and coercive techniques (Marks 1991). One result was the abridgment of subjects' rights (an important story, not to be recounted here). Another was the establishment of training centers and a set of techniques listed within the 1963 KUBARK Counterintelligence Interrogation manual (US Central Intelligence Agency 1963). This unprecedented sourcebook's name, KUBARK, was an acronym with no alphabetic referents (although it is a kind of code representing the CIA itself). Thus it is an oddly appropriate moniker for a training manual that cataloged ways of engaging in secret and likely illegal activities—under the topic of "coercive . . . interrogation of resistant sources"—that easily strayed into the domain of torture. In this way, "psy-ops"—meaning "psychological operations," the official term for military tactics that take the human psyche as their base of operations and attempt manipulation or control—was institutionalized (figure 5.1).

Figure 5.1. Notorious photograph of intake at Guantánamo with behavioral modification techniques in use—detainees in orange jumpsuits are kneeling with eye and ear protection, policed by dogs, waiting to be installed in Camp X-Ray, January 11, 2002. The 2014 public execution of a captured US journalist, James Foley, by the Islamic State of Iraq and Syria included footage of Foley dressed in an orange jumpsuit reminiscent of the ones in this photograph. Photo by Petty Officer 1st Class Shane T. McCoy, US Navy. Courtesy Department of Defense (Corbis).

The spate of brainwashing research that followed the 1949 Mindszenty affair and the 1951 Korean POW emergency found US experts reaching a near-unanimous conclusion: the spectacle of brainwashing was nothing new or magical but resulted from the combined effects of intensive behavioral conditioning and environmental engineering—"milieu control," as psychiatrist Lifton (1961) put it, the first on his list of eight necessary conditions for thought reform to take place. As US Army psychologist and future founder of corporate motivational services Edgar Schein (1961) concluded, "brainwashing" was less accurate than "coercive persuasion" as a name for the phenomenon at hand "because basically what happened to the prisoners was that they were subjected to unusually intense and prolonged persuasion in a situation from which they could not escape; that is, they were coerced into allowing themselves to be persuaded" (18). Assorted psychological, physiological, psychiatric, and sociological theories could contribute to building a generalizable theoretical model for this process—and this model would apply not only to prison camps north of the 38th Parallel but in all kinds of "social influence situations," some in American society itself.

During the 1960s Americans applied these coercive interrogation techniques in experiments, in quasi-experimental situations (simulations), and in "operational" field trials. Investigations culminating in US Senate hearings in 1977 under Edward M. Kennedy called into question the legality and ethics of the classified program (US Senate Select Committee on Intelligence and Subcommittee on Health and Scientific Research 1977). In the 1970s and 1980s, the KUBARK manual became the standard text for training counterinsurgency forces in fields of struggle not on US soil. In Panama and Guatemala, as well as East Asia, operatives trained by these techniques carried out stabilization efforts. The School of the Americas provided technical training (Gill 2004). A second institution emerging from Cold War experiences was the SERE program, or Survival Evasion Resistance Escape. Designed by psychiatrist Dr. Louis Jolyon West during the Korean War years, it provided training intended to protect US soldiers from destructive interrogation by subjecting them to stressful and sometimes debilitating mock interrogations. However, this prophylactic training also served to educate soldiers in how, in turn, to "break" the will of an enemy prisoner through interrogation.16 The fact that these mock interrogations constituted traumatizing experiences in themselves is relevant.
Glenn Peterson, a navy enlistee in 1966 who flew missions over Vietnam as a radar controller in an E-1B reconnaissance and surveillance aircraft, recalled how the experience of "survival school" left marks that shaped his views of current interrogation practices and prisoner treatment debates: "What has remained indelible for me are the beatings and torture in the mock POW camp part of it (indeed, the whole POW camp experience) rather than the escape and evasion portions."17 American troops were told at the start of their SERE sojourn that they were being trained in survival school because of the shoddy performance of US soldiers captured during the Korean War. Peterson gave an account of the training procedure to which he and his group were subjected—extremes of heat and cold, imposed stress positions, brutal interrogations (within a faux Eastern European prison camp), all of which ended in his being sealed in a black metal box in the fetal position. "I find I was used as a guinea pig," Peterson concluded after years of painful reflection and research. His reflections recall Allen Dulles's words about communist brainwashing capabilities in his 1953 Princeton address: "We in the West are somewhat handicapped in getting all the details. There are few survivors, and we have no human guinea pigs to try these extraordinary techniques" (Marks 1991, 139 [emphasis added]). Prisoners and training personnel became in effect experimental subjects. Following Dr. West's invention of the program during the Korean War and the experiences of men like Peterson in the Vietnam War, a range of SERE camps arose—different levels of intensity, different camps for different security clearance levels, and specialized camps for different branches of the military and related operatives in the CIA and elsewhere—all of which still operate vigorously today.18

To sum up Cold War–era developments in interrogation methods: a strand of research extending back to the early twentieth century into the varieties of behavioral influence and behavioral modification techniques (including "classic" behaviorism proper, neo-behaviorism and learning theory, as well as operant conditioning) formed a source for extracting and creating technical resources both for protecting US personnel and for modifying enemy personnel. Results were mixed, but experts could feel that they had succeeded in demoralizing prisoners, and in changing them (perhaps not seamlessly or in the sci-fi manner sometimes fantasized about, but with definite effects). Over the course of the massive seedbed of MK-ULTRA, investigators tended to swing toward small incremental changes that were possible to institute over long periods of time, rather than the short-term dramatic changes in subjects that had initiated the great alarm over brainwashing. In the end, a routinized and systematized set of institutions and practices arose. While this approach broadly remained in use through the conflicts of the 1990s, it was increasingly "outsourced" to foreign fields of engagement and not directly applied by US personnel (with the exception of intensive and growing SERE training for all branches of the military).
However, when the events of September 11, 2001, introduced a new set of exigencies, the United States drew on its already-existing program and employed many SERE-trained interrogators and psychological experts.19 Its higher-ups, military and civilian, polished the list of techniques, updated them, and pulled them into service, with (again) mixed results. "By the fall of 2002 . . . the trademark techniques of the SERE program soon popped up in Guantánamo and other US military prisons holding suspects from the war. Hooding, stress positions, sleep deprivation, temperature extremes, and psychological ploys designed to induce humiliation and fear suddenly seemed legion" (Mayer 2009). Even after 2005, borrowing from Cold War documents continued unchastened. Guantánamo interrogators deployed a "Chart of Coercion" directly derived from a 1957 paper investigating the sociopsychological dimension of Chinese communist brainwashing methods. More than fifty years before, the paper's author, Albert Biderman, worked for Lackland Air Force Base's "Office for Social Science Programs," where SERE's inventor, Dr. West, was his colleague (Biderman 1957; Shane 2008).

From the War on Terror's beginning, debate occurred over whether and how to follow the Geneva Conventions regarding the treatment of detainees. At the outset of Guantánamo's present purpose—formerly it had been used for Haitian refugees awaiting decisions over admission to the United States—Brigadier General (now Major General) Michael Lehnert, the first superintendent of Guantánamo, and the lead legal counsel for SOUTHCOM, Supervielle, recommended that Article 5 of the Geneva Conventions be followed, thus granting each individual an inquiry to ensure against mistaken confinement. In addition to assuaging doubt, this procedure would allow a determination to be made over whether detainees were combatants at all or whether "actually it just turns out he was a person the other person hates, just had a family feud or something. . . . You should be a little careful about things like that," said William Taft, State Department Legal Advisor (Constitution Project Task Force 2013, 53). No such hearings occurred, the Supervielle recommendations did not hold sway, and the subsequent administration of Guantánamo oversaw many years of detention and interrogation outside the reach of the Geneva Conventions and (perhaps more pertinently) US law, which under the Reagan administration had adhered to the conventions' guidelines.20 In the summer of 2002, an "enhanced" interrogation regime took shape under the next commander, General Michael Dunlavey, who abruptly reversed course and requested that SOUTHCOM authorize use of category I, II, and III techniques. Such techniques came directly from the 1963 KUBARK manual, although they were numbered and grouped differently.
Then-Secretary of Defense Donald Rumsfeld signed off on the use of category I and II techniques, as well as one category III technique—"noninjurious personal contact." When Rumsfeld signed off on the list, he notoriously added the following penciled note: "However, I stand for 8–10 hours a day. Why is standing limited to 4 hours?" (Danner 2004, 181–182). In this moment, Rumsfeld was evidently imagining himself in the position of the detainee, projecting his own reaction to the imposition of these techniques. Of course, there is a difference in emotional tenor between voluntarily standing at work all day (at his now famous stand-up desk) and involuntarily being forced to hold agonizing positions, including the crucifix-like posture shortly to be immortalized in Abu Ghraib photographs. Yet the Rumsfeld pencil note does indicate a process of identifying on some physiological level with the prisoner, followed by a rapid removal of that identification. This brings us to a key point: in describing this final transfer of Cold War–forged technique to a new conflict, the question of the subject was raised in a new way. The search for data was now the core impetus, rather than the search for ideological control. Accordingly, a military attorney argued that the United States would not be violating the torture statute of the Geneva Conventions "because there is a legitimate governmental objective in obtaining the information necessary" (Constitution Project Task Force 2013, 55). With this shift from the earlier ecological approach to an information-based economy of interrogation, we can now turn to the question of subjecthood.

2. Questions of Subjectivity: From Engineering Ideology to Storing Up Information

Brainwashing—the engineering of subjective states—both literally and figuratively forms an origin point for current practices and the growth of interrogation techniques. An overblown term evoking a cartoonishly improbable process, brainwashing may seem insufficient to describe the often understated public reaction to events in the United States' questionably legal detention centers. Nor does brainwashing initially seem illuminating as a historical connection. As Andreas Killen (2011) points out, however, brainwashing crystallized many concerns about modern conditions of statehood and subjecthood: "The term also became shorthand for a multitude of forms of control, persuasion, and influence—for political but also other purposes—experienced by many people in the aftermath of World War II. If many of these practices remained intangible, viscerally felt but difficult to comprehend, the notion of brainwashing seemed to crystallize them in a powerful way, to name them as a specific pathology not just of totalitarian society but of modernity as such" (44). Knowledge about such states, pursued by experts in controlled and quasi-controlled experiments, was operationalized almost immediately and put into use in a wide range of situations involving captives and intelligence gathering. From the Cold War's "coercive interrogation" system through the War on Terror's "enhanced interrogation" regime, there has been a significant alteration in the subject on whom interrogation techniques are practiced. The argument here is not so much about a continuity of technique—which there demonstrably is—but about a discontinuity in the subjectivity that interrogators targeted and with which they worked.
The interrogation subject in the twenty-first century (the subject of "enhanced interrogation") is different from the Cold War interrogation subject (that of "coercive interrogation"), although, as mentioned, the technical corpus of behavioral modification used to act on them is essentially the same.


In the Cold War, the goal was to "break" a human subject in order to rebuild him with a new ideological orientation, constituting a sort of feat of ideological engineering. Subjectivity was in this regard a substrate that could be "dialed up" or reduced so as to be worked with. It was targeted.21 The goal (sometimes a fantasy, but one that could be acted on) was to reduce and then rewire a human subject, to rewrite the self. This punitive mode recalls Kafka's prisoner in the 1914 short story "In the Penal Colony," whose crime is etched not simply on his skin but in it, as it is written by means of a programmable loom so that he will know the law he has broken even as he is dying. The whole process takes twelve hours. The workings of the "machine" by which his etching-execution is carried out are gradually revealed, and the story is told from the point of view of der Forschungsreisende, or "Research-Traveller," an appellation that captures the experimentalist leanings of the onlooker, who grows more and more disgusted with the now out-of-date machine and the religious fervor it has inspired in some of its operators. The method, in short, needs updating. At McGill University's Allan Memorial Institute during these years (1952–1964), eminent psychiatrist Ewen Cameron designed experimental protocols that took a head-on approach to behavioral modification and "milieu control," and pushed it to extremes by using psychiatric patients as experimental subjects. For Cameron, the psyche was not an entity to be deeply psychoanalyzed. His method, called "psychic driving," explicitly forbade any but the most superficial analysis. Indeed, for Cameron the psyche was not an entity at all. Rather, it was a series of complexly interacting patterned mechanisms and feedback functions that a skilled researcher could disassemble. An even more skilled researcher could later reassemble these mechanisms and functions according to design specs.
(That Cameron, quite successful in the former, did not always succeed in the Humpty-Dumpty task of putting patients back together again speaks to the gap between ambition and reality.) As Cameron claimed of the psychic-driving "dynamic implant" he installed via feedback-looped tapes fed into the subject's waking and sleeping life, "We have shown that it is possible to activate the learning process and thereby produce new behavioral patterns" (Lemov 2011, 72).

Thus one can observe not a simple shift from one regime to another, from a fully rounded "self" to a flattened, information-based no-self, but instead a change in emphasis. Certain tendencies that already existed to view the self as the product of a complex web of relations and reactions are reinforced. No neat transition occurs. There is a combination and recombination of old and new techniques within new conditions. Thus despite an already-existing tendency to flatten the stimulus-response modeling of human action, the thrust of these programs, as they were operationalized and (especially) discussed in the public sphere, was still to reinforce the single, unitary subject who struggles against the interrogator. It was an environmental subject, to be modified through environmental management—namely, the lists of techniques such as stress positions, temperature control, and sound bombardment. Yet it was still a bounded subject, and great alarm was expressed over "our boys" who appeared to be susceptible to new and nefarious manipulations, reinvented as "turncoat GIs."

In the years since 9/11, some vocal critics have assailed US intelligence for a lack of imagination, and in some sense the Cold War operations were far more phantasmagorical and "imaginative" in what they conceived of as possible. The 9/11 Commission's award-winning analysis, which named a failure to imagine properly the bombing of the twin towers as the source of the terrorists' success, has not been accompanied by a surge in imaginative constructions of what the interrogation subject might be subjected to.22 Rather, the regime has taken existing techniques and foreshortened and flattened them. Their role has been strictly reduced. The goal is no longer to rewire or remake a subject in dramatic fashion; although this may happen, it is never by design, and is a mere side effect of extremely disorienting techniques or combinations of techniques.23 In the War on Terror, the goal has been different in important ways. The aim is no longer to rewrite the self and to work with subjectivity as if it were a material substrate, but to sidestep subjectivity and get direct access to information stores. Nor is honor, whether of one's own side or the other side, seemingly at the imaginative core of programs of rendering, rendition, and questioning. Instead, what was the end is now the means.
Recently, Welch (2009) characterized the transformation process to which Guantánamo detainees are subjected as one "that is acutely utilitarian" (7). An unlawful combatant, moving through the current series of six Guantánamo camps,24 each corresponding to a different degree of docility, is effectively engineered to emerge as a cooperative and useful subject for the purposes of gathering intelligence. Although some Cold War continuities exist—techniques do still target subjects' personal traits, medical histories, and cultural sensitivities, and interrogations are often designed by "behavioral science" consultation teams (BSCTs) that use anthropological and sociological knowledge to construct powerful interrogation scenarios and combinations of techniques—the subject is no longer seen as a "whole" to be broken down. Here again, the end goal is not the demoralization or transformation of self. The Arab detainee jihadist is not pushed to become an adherent of liberal democratic social forms. There is no interest in this. The self becomes a patterned arena within which one can trigger information access. The self becomes useful. To extract "actionable intelligence" has been the goal since 2001. The prisoner-detainee becomes a data mine. While much attention has gone to big-headline violations (such as the public-sphere obsession with waterboarding), it is rather the routine and repetitive "management-style" techniques that mostly have held sway during the estimated three to four thousand interrogations that have taken place since 2002. And the history of the coming into being of these techniques is itself a story of an effort at managing potential uncertainty about the subjecthood of the men held in Guantánamo's cells. For while knowledge about and from the men piles up, they are less and less known.

This chapter has so far pinpointed areas of discontinuity in what it means to be the subject of interrogation, and the rise of the subject as a site of radical uncertainty rather than one of radical re-making. This argument does not mean, however, that there are no continuities in the traumatic response to situations of torture and the infliction of pain. Just as imposed techniques and technologies have been remarkably constant over the fifty years since the program of "forceful interrogation" was pioneered—loosely speaking, from KUBARK's lists to Standard Operating Procedures—likewise much evidence suggests that the response of those subjected to such techniques has not become less subjective, flattened, or "post-human" in any meaningful sense. Rather, there is a great deal of similarity in the range of responses to torture, psychological and physical, "hard" and "soft." (One difference still may be the extent to which responses are documented in paperwork and digitally stored as files.) Elaine Scarry's pioneering The Body in Pain (1985) described how physical pain inflicted through torture results in the "unmaking" of language and the world.
The subject feels herself rendered as an object. Scarry identifies a reciprocal "remaking" that can result from receptivity to the world of made or manufactured objects as sources of care and potential sentience (see esp. pp. 307–326). This dynamic may resonate with the responses of recent recipients of enhanced interrogation, as firsthand accounts of a detainee (Slahi 2013), a chaplain (Yee 2005), and a translator (Kahn 2009) testify. Another form of "unmaking," traumatic disruption to the linearity of time (Edkins 2003, 1–15), occurs in survivors of the 9/11 attacks as well as in subjects of interrogation, the innocent and the guilty alike. To be clear: in describing a historical shift in the constitution of the interrogation subject, the aim of this chapter is to extricate what is newly emerging from scenarios—on a basic level, the deliberate and systematic imposition of pain by one person on another—that are, in some ways, very old (Peters 1996).25 However, the goal here is not to grasp the phenomenology of the interrogated subject but rather to consider changes in the constitution of subjecthood from the point of view of states, knowledge makers, and political entities. This newly data-driven approach explains why the "ticking time bomb" scenario did not exist as a thought experiment during the Cold War: it was not essential to that era's project.26 Interrogators did not imagine 24-hour scenarios in which action would need to be taken against terrorist threats and a terrorist would need to be "broken" in order to save a calculated one hundred or 100,000 people. The specter against which they fought was ideological, not informational or algorithmic. Today, therefore, debate often centers on retrospective accounting: did such actions in the end prevent terrorist attacks? How many such attacks were prevented by the extraction of vital information?27 In this regard, it is interesting that President Obama defends the PRISM program of data mining by counting the number of attacks it has prevented: twenty. In the much-discussed television show 24 and the controversial film Zero Dark Thirty, the successful torturing of terrorists is depicted as yielding actionable intelligence.28 Whether or not this claim is true—in fact, CIA director John Brennan declared it "unknown and unknowable" at a late 2014 press conference—its truth-value cannot be addressed here, where the aim is to explore the formation of discourse, policy, and technique around such debates.

3. Engineering the Future

How is one to think about this profound and complex but largely unremarked shift in the subject of interrogation, in which an additive process of behavioral and ideological change is replaced by an information-based extractive operation? One starting point is to observe that knowledge is now in many ways the problem, not the solution. Knowledge in many forms—most pressingly, knowledge that is named "information," even more so "intelligence," and even more than that "actionable intelligence"—is the sine qua non of such programs. After all, there are few subjects who are so "known" as these detainees. Their dossiers grow apace, even if what they tell interrogators is not necessarily reliable, gained as it was under conditions of force and manipulation. Even as the files grow, and knowledge expands, knowledge-based paralysis sets in. Just so, Ulrich Beck refers to hazards "which nowadays often cannot be overcome by more knowledge but are instead a result of more knowledge" (Beck 2009, 5). In this way, ever-increasing amounts of knowledge do not counteract ever-increasing areas of ignorance or uncertainty.


The transition in subjecthood described here holds not just for accused spies, enemy combatants, and political operatives, but also for a larger swathe of "post-human" individuals. In other words, this shift, observable through the narrow view afforded by interrogation's arena, has wider purview. A flattening of self can be observed in, for example, the voluntaristic self-as-data movement and self-tracking practices. Information is constitutive, not simply indicative of a volumetric core holding "self" or soul. Michel Foucault's prediction that the figure of "Man" would be washed away was in a sense borne out. Meanwhile, the president is said to be "almost anguished," and as little continues to happen, prisoners push their protest to the extent possible. Further uncertainty lies at the end of the Guantánamo road in almost any scenario. A distinctive feature of the uncertainty of the subjects is that, at least during the interim of 2008 through 2014, it is technically and legally unresolvable. In this way Guantánamo constitutes a perpetual intermediate space. It holds two groups of detainees, both of which have been subject over the years of their detention to a regime of coercive interrogation amounting (as of 2005) to more than 2,600 separate interrogation sessions. One group has been deemed not a threat, and yet cannot be released. The other has been deemed a threat of an unknown and therefore especially threatening kind, and also cannot be either tried or released. For both sets, one truth holds: even though they are "known" intimately, they are potentially uncertain—in the public's view and in the calculations of national security experts—and also too compromised to release. This potential uncertainty does not have to do with the range of terrorist acts such people might commit if released to Yemen or other home countries. Rather, it has to do with the status of a realigned self-as-object that emerges out of these late-modern apparatuses of torture.

CHAPTER SIX

Global Humanitarian Interventions: Managing Uncertain Trajectories of Cambodian Mental Health

CAROL A. KIDRON

Building upon the “Holocaust model” of post-traumatic stress disorder (PTSD; Danieli 1998) and the intergenerational transmission of that disorder’s effects (Kellerman 2001; Scharf 2007), the illness construct of transmitted effects of trauma (TET, or intergenerationally transmitted trauma, ITT) holds that descendants of genocide, mass violence, terrorism, and natural disasters—as well as descendants of historical trauma, such as the descendants of slaves and of Native American/First Nations peoples—will continue to be at risk of pathology decades or even centuries after the foundational traumatizing event. Trauma brokers warn that if left untreated, TET (ITT) may take on epidemic proportions (Kaitz 2008). However, trauma theory and transmitted-trauma theory have been the target of extensive scholarly critique in recent decades (Young 1995; Summerfield 1998; Fassin 2008; Kidron 2009, 2012). Critical psychiatrists caution that resilient victims of potentially traumatizing events may be needlessly pathologized (Summerfield 1998; Sagi-Schwarz et al. 2003), while transcultural practitioners (Kirmayer et al. 2011; Gone and Kirmayer 2010) voice concern regarding the cultural competency of trauma-related illness constructs and treatment protocols that may be at odds with culturally particular experiences of well-being, distress, and healing. Shifting their gaze from the trauma clinic to macro-institutional and global policy and practice, medical and psychological anthropologists have deconstructed humanitarian intervention in mental health care, asserting that, as a “universalizing semiotics of suffering” (Fassin 2008; Kidron 2012), trauma theory has facilitated postcolonial governance in situations of warfare and in postconflict societies, imposing a particularly Euro-Western sociopolitical “moral vision and impaired insight” across the globe (Hayden 2007).

In response to this critique, mental health practitioners have adopted a number of paradigm shifts in trauma theory and policy. Resilience theory (Bonanno 2004) has triggered scholarly interest and clinical treatment regimes seeking to tap into and foster the innate hardiness and emotional malleability of the trauma victim. Trauma-related illness is no longer perceived as the linear outcome of a traumatizing event to be cured or even prevented but has been reconceptualized as part of a complex network of postconflict human “ecological adversity” (Kirmayer et al. 2011) in which socioeconomic and political contexts dialectically impact mental health. Ethnographic findings regarding particular idioms of distress have likewise been enlisted to establish culturally competent global health care, entailing hybrid forms of indigenous healing and Euro-Western trauma treatments (Argenti-Pillen 2000; Gone 2008; Foxen 2010; Panter-Brick and Eggerman 2010).

Despite the above critique and salutogenic paradigm shift, a closer look discloses the emergence of new, more subtle forms of pathologization and global psychiatric governance. Resilience theory (Bonanno 2004) has generated countless quantitative models designed to assess populations at risk of trauma-related pathology. These models measure and extrapolate the psychosocial outcome of the sum total of culturally particular stressors (e.g., cumulative trauma, poverty, ideological extremism) and protective factors (e.g., community support, family-system stability, empowering religious beliefs) that constitute resilient or vulnerable populations. They thus facilitate the profiling of resilient and vulnerable trauma victims and trauma descendants (see Agaibi and Wilson 2005).
Tracing “potentially uncertain actualities” of otherwise only “possible” profiles of resilience (Samimian-Darash 2011), such profiles enable psychological policy makers and practitioners to develop technologies designed to promote protective buffers and minimize exacerbating criteria (Kirmayer et al. 2011). In this way the “engineering” of resilience emerges as a new form of mental health governance. At the same time, advocates of global mental health intervention have called for a return to a person-centered and trauma-centered approach, asserting that extreme individual trauma remains the root cause of the failure of sociopolitical and economic rehabilitation in postconflict societies, as trauma blocks normal pathways of resilience (Schauer and Schauer 2010). A new wave of scholarship has proposed that intergenerationally transmitted trauma (ITT) and historical trauma worldwide have temporally extended the potential uncertainty of traumatic suffering to countless generations of survivor offspring (Rubin and Rhodes 2005; Scharf 2007; Goodman and West-Olatunji 2008; Barel et al. 2010). Building on the epigenetic turn in biomedicine, the threat of the mass propagation of genetically and hormonally maladaptive survivor offspring has added new urgency to the accumulation of trauma-related knowledge, prevention trajectories, early detection, containment, and management of survivor PTSD (Schauer and Schauer 2010). As outlined earlier, it has also triggered a new wave of studies calculating and tracking the potential uncertainty of trauma-descendant well-being and resilience (Agaibi and Wilson 2005). Despite attempts at detection and management, advocates of intervention warn of future “trauma pandemics” (Kaitz 2008; Kinseth 2009; Schauer and Schauer 2010), as survivor mothers (potentially bearing, in utero, psychosocially maladaptive offspring) put at risk not only global mental health but also macro processes of economic rehabilitation, peace, and coexistence, making the management of uncertainty all the more essential.

A critical examination of culturally competent health care also discloses subtle forms of hegemonic governance. Culturally competent hybrid technologies for the treatment of PTSD and transmitted PTSD may often be products of complex postcolonial politicized interrelationships between governmental agents, Euro-Western-trained indigenous and foreign nongovernmental organization (NGO) brokers, and global institutional philanthropists (Pupavec 2004; Fassin 2008). Within this entangled discursive field, truth regimes pertaining to the risk and potential uncertainty of trauma pandemics sustain emergent pathologizing policy assemblages (Wedel et al. 2005) that do not always ensure effective translations of culturally particular conceptions of mental health and well-being. The following questions may therefore be posed: What are the truth claims about the future of survivor and descendant mental health governed by global and local intervention in traumatized populations?
What are the technologies utilized to accumulate knowledge pertaining to the potential uncertainty of a trauma epidemic, and how do local policy assemblages intervene to manage this uncertainty? In what ways are these interventionist policies and technologies translated into—or, alternatively, at odds with—local conceptions of illness and wellness? As truth regimes travel globally, do emergent subjectivities resist trauma profiles and concomitant conceptions of risk and uncertainty, or are they compliant hybrid products of a new “glocal” (Robertson 1994) discursive reality? In order to grapple with these questions, a case study of Cambodian genocide survivor and descendant trauma-related mental health care will be examined. Content analysis of the promotional material of a Cambodian mental health NGO was undertaken, as well as ethnographic interviews with practitioners, victims, and descendants. The study examines the technologies that accumulate knowledge pertaining to the potential uncertainty of trauma-related mental health and the local policy assemblages intervening to manage uncertainty. It considers the ways in which interventionist policies and technologies are translated into—or, alternatively, are at odds with—local Khmer conceptions of illness and wellness.

Transcultural Psychosocial Organization (TPO): Governing the “Uncertain Actuality” of Cambodian Mental Illness

In 1975, the Cambodian Communist Khmer Rouge regime led by Pol Pot instigated the forced evacuation of urban populations to the countryside to work as farmers. Beyond the brutal mass roundups and executions of intellectuals, bureaucrats, businessmen, educated Cambodians, and Buddhist monks, hundreds of thousands died of starvation and disease. The total death toll between 1975 and 1979 has been estimated at between two and three million (Hinton 2004). According to Sonis et al. (2009), in 2001, 28.4 percent of the Khmer population was suffering from PTSD. Highlighting the severity of psychosocial suffering, Hinton (2008) alleges that “there is no doubt that many Cambodians . . . suffer from trauma and related mental health disorders, which are exacerbated by problems like poverty, chronic uncertainty, alcoholism, domestic abuse, and the lack of social and/or therapeutic support.” With the restoration of global support in the 1990s, Cambodia’s decimated mental health sector began to rebuild itself. Working with international NGOs, the Ministry of Health trained twenty-six psychiatrists and psychiatric nurses, establishing small clinics in Cambodia’s provinces. The Transcultural Psychosocial Organization (TPO)—a branch of the Dutch-based NGO TPO International—also opened several provincial clinics. According to its website (http://www.TPO.org), TPO Cambodia is the leading NGO in Cambodia providing mental health and psychosocial interventions, its mission being “to support those who are unable to care for themselves due to mental illness, poverty and lack of support by developing programmes that directly benefit people at the grassroots level, by improving their mental health and thereby increas[ing] their ability to care for themselves and their families” (http://tpocambodia.org/index.php?id=6).

TPO Policies of Assessment and Treatment

According to TPO’s website, annual reports, and published research, organizational policies include the assessment and monitoring of the prevalence and severity of Cambodian mental illness and psychosocial ills, as well as their treatment. Throughout its promotional material, mental illness is defined interchangeably as PTSD and stress-related anxiety and depression. Psychosocial ills are described as covering an extremely diverse spectrum of communal and familial violence, domestic abuse, alcoholism, poverty, and/or unsustainable development. The continued threat of communal and national strife between Khmer Rouge perpetrators and victims, and impasses in transitional justice and reconciliation processes, are also attributed to the failed personal and communal working through of genocide-related trauma and the resultant anger and desire for revenge. In keeping with the “ecological perspective” (Kirmayer et al. 2011) regarding trauma, TPO’s mission statement thus integrates the political goals of transitional justice and reconciliation with policy agendas that ambitiously seek to bridge intrapsychic/personal and political/collective domains (Wedel et al. 2005). TPO’s intervention programs present a holistic array of treatment protocols, ranging from grassroots assessment and the delivery of medication in cases of severe PTSD to individual and communal talk therapy to a number of more innovative, culturally competent forms of Buddhist meditation, mindfulness, and ritual purification. According to TPO’s executive director, Dr. Sotheara Chhim, however, successful governance and management of the potentially uncertain scenarios of mental illness and related societal adversity is extremely complex in the Cambodian context. In annual reports and organization-funded research publications, Chhim (2012) explains that traditional Khmer cultural beliefs potentially hinder the delivery of mental health care.
Mental illness is still stigmatized by the majority of the population, who remain unaware of the causal relationship between traumatizing events such as the genocide and their long-term impact on stress levels, anger management, addiction, family violence, communal cohesion, and national prosperity. While symptoms such as sleep disturbance, fear, and hopelessness are commonly experienced, illness constructs such as depression and anxiety are lost in cultural translation. Without creative “technologies” of intervention—such as the translation of PTSD into local idioms of distress and intensive psycho-education regarding the long-term trajectories of trauma—TPO advocates and NGO partners warn that the effective government and management of mental health will be seriously compromised.

Technologies of Mental Health Intervention

Baksbat: Culturally Translating PTSD and Creating a More Effective Tool of Assessment

In numerous recent publications, Dr. Chhim has sought to explain the discrepancy between research findings showing that a large proportion of Cambodians suffer from PTSD (Field and Chhim 2008, 353) and the considerably lower levels of PTSD found in clinical tests (Chhim 2012, 642). According to Chhim, rather than implying low rates of PTSD in Cambodia, the disparity between research findings and clinical tests derives from flawed clinical assessment measures and the challenge of effectively “capturing,” monitoring, and treating trauma-related distress in the Cambodian clinical setting. Prominent among these causes are the failure to correctly translate PTSD into Khmer cultural terms and a “conspiracy of silence and . . . avoidance . . . between Cambodian therapists and their patients” (Chhim 2012, 642). Western researchers may misinterpret Khmer religiocultural perceptions of silence and the karmic acceptance of one’s fate (see Thompson 2006; Hinton 2004) as signifying “wellness.” In response to these “failed translations,” and in search of a more effective clinical tool of assessment that might better represent Khmer trauma-related distress, Chhim interviewed trauma victims, mental health practitioners, and Buddhist monks, exploring with his respondents the way in which their psychosocial experiences resonated with the concept of baksbat. Described by sufferers as “the fear that follows a distressing or life-threatening situation” (Chhim 2013, 161), baksbat is characterized by submissiveness and “psychological weakness,” its symptoms including khlach ro-arh (being fearful) and bor-veas-chea-chgnay (wishing the traumatic event would go away). The syndrome also entails a “loss of courage,” passivity, and an inability to confront others.
According to Chhim’s findings, baksbat better represents the broad spectrum of phenomenological experiences of Khmer distress than PTSD does (2012, 2013), as many Cambodians suffering from it are “afraid to disclose their identity to anyone or talk to others about what they had experienced” and are “unable to speak about their fears.” They find it difficult to trust others, give in or accept defeat easily, feel and act as though cowardly, and cannot “stand up for themselves” (2013, 165). Unpacking the above cultural idiom of distress, one might note that the syndrome’s symptoms might account for the failure of clinicians to work with those suffering from baksbat and to empirically amass evidence of the very disorder that allegedly silences them. When comparing PTSD with baksbat, Chhim found “some overlap” between baksbat symptoms and PTSD-related depression, anxiety, and dissociative symptoms. Other parallels include withdrawal, sleep disturbances, and nightmares about the past. Because its symptomology is presented as broader than that of PTSD, baksbat complements rather than displaces the latter in the Cambodian context, according to Chhim (2013, 170), its explanatory power stemming from its ability to partially account for Cambodian psychosocial responses to historical adversity. Chhim also proposes that baksbat might account for the lack of resistance to the Khmer Rouge atrocities (2013, 164). According to Chhim, because passivity is at its core, baksbat may be a major stumbling block to social development and prosperity: “When people dare not support what is just and fair, they are at risk of being exploited across generations” (2013, 164). Chhim reports that his respondents trace culturally particular Cambodian scenarios and traits such as adversity and passivity back to the fall of the Angkor period in the fifteenth century, also claiming that the syndrome has been intergenerationally transmitted (2013, 166). These lay assertions resonate with the construct and generational trajectory of historical trauma, recalling Schauer and Schauer’s (2010) warning of a potential self-propagating (in utero) epidemic of future traumatized and structurally disadvantaged generations. The TPO website in fact lists “intergenerational transmission” as a recent research initiative. Although no findings in this regard have yet been published, it will be of great interest to discover whether baksbat in future descendant generations is diagnosed as emulating survivor baksbat or whether psychosocial and psychiatric intervention might intervene in the trajectory of the “potential actuality” of the transmission process.
In the hope of more effectively assessing the broader psychosocial scope (and perhaps severity) of trauma-related distress amongst Southeast Asians, Chhim recommends the adoption of a newly designed hybrid trauma inventory addressing the symptomology clusters of both PTSD and baksbat.

Psycho-education: Training Lay Trauma “Workers” and Victims

TPO practitioners propose that, without raised awareness and understanding of the causal trajectory between genocide, PTSD, and everyday personal, familial, and communal experiences of distress, intervention and treatment are likely to fail. Two separate psycho-educational programs have been implemented, the first designed to provide lay nonpractitioners with the psychological tools necessary for identifying and, in some cases, counseling vulnerable populations. As part of a “Participatory Rural Appraisal” (PRA) scheme, community leaders—including village chiefs, village governmental commune leaders, Buddhist monks, nuns, and healers—have been enlisted as partners in TPO’s mental health intervention. These nonprofessional community authorities undergo a brief psychological training program to familiarize themselves with psychosocial symptomology and disorders such as trauma, depression, and anxiety. These “mental health resource workers are taught mental health first aid and provided with opportunities to facilitate self-help groups under the temporary guidance and supervision of TPO staff, until they are confident and able to take on their roles with greater autonomy” (http://tpocambodia.org/index.php?id=33). As posted on the website of the Cambodian Genocide Documentation Center, or “DC-Cam,” TPO practitioners also train DC-Cam personnel responsible for interviewing Khmer Rouge survivors and documenting their testimonies for the center’s archive. The goal of this program is to train mental health “resource workers” capable of identifying psychosocially vulnerable survivors and their families and filling out their psychological inventories. Graduates of the training program subsequently conduct these new forms of testimony/psychological assessment interviews with Khmer Rouge survivor witnesses appearing before the Extraordinary Chambers in the Courts of Cambodia (ECCC) truth and reconciliation tribunal. Constituting an innovative form of intervention technology, this hybrid testimony/psychological assessment interview transforms or expands the phenomenological and epistemological boundaries of witness testimonials on the one hand and psychological testing on the other, thereby blurring the boundaries between health/welfare services and the civic-moral-juridical act of testifying.
The hybrid interview and hybrid trauma “resource worker” conflate and discursively ally (Foucault 2002 [1969]) the truth claims made by the therapeutic discourse relating to intrapsychic vulnerability and healing with neoliberal truth claims regarding free speech, justice, and redemptive reconciliation. In this way, personal trauma is discursively reconceptualized as “healed” via public testimony, and the wounded individual citizen is enlisted, “nationalized,” and redeemed once again by the act of performing his or her suffering for the nation. Recalling Wedel et al.’s (2005) analysis of the way in which assemblages of policy makers and brokers traverse multiple domains, TPO and DC-Cam technologies of governance merge, conflating disparate domains (health/memory/history/law) while simultaneously crossing personal, collective, and national spheres to semiotically generate personal and national healing (Kidron 2003; Feldman 2008).

The second form of psycho-education seeks to provide the general Khmer population with a basic conceptual understanding of trauma-related symptomology, thereby enabling people to identify and trace the impact of genocide trauma (in the case of survivors) and the psychosocial transmission of parental genocide trauma (in the case of descendants), and the resultant symptoms, on their everyday life. According to the TPO website, this form of psychosocial education:

helps to provide families and individuals at the community level with an understanding of psychosocial problems, stress and trauma and enables them to see the root cause of their present mental health issues. People learn how mental health problems can impact the community, and how the community has a great impact on the mental health of individuals and families, with a responsibility to enhance well being for all in their community. Through this education and the training provided, communities are encouraged to discuss and promote mental health awareness. This information and understanding enables the community to engage in mental health promotion.

TPO practitioners and newly trained communal leaders or “mental health resource workers” conduct short-term training workshops in which concepts such as “stress” and “avoidance” are culturally translated into more familiar psychosocial ills, such as anger, fear, sadness, silence, family conflict, abuse, and alcoholism. As villagers do not regard these ills as a form of “mental illness,” they are more receptive to cooperating with practitioners when filling out assessment inventories and receiving treatment. Colorful posters are distributed to villagers depicting PTSD symptomology such as “flashbacks”—a survivor recalling scenes of torture, hunger, and suffering during the Khmer Rouge era, for example—or present-day symptoms of “emotional arousal” or “acting out,” illustrated by angry interactions between family members, violence and tears, alcoholism and vandalism (http://tpocambodia.org/index.php?id=informationmaterialleaflets). Others exemplify “depression” by showing sad, lonely, despondent farmers who, without any energy to work their land, let their families go hungry. These posters graphically illustrate the connection between psychosocial distress and past traumatic experiences. Although employing a straightforward narrative, the posters, as a technology of “psy”-pathologizing subjectification (Rose 1998), may be unpacked as emplotting the multiple potentially uncertain actualities (Samimian-Darash 2013) leading from past genocide suffering to present psychological and social distress and disadvantage. The emplotted actualities may then “make up” (Hacking 1986) the vulnerable traumatized individual and disadvantaged national and global citizen. Once victims have been subjectified, mental health care may potentially—and pastorally—manage, track, and govern the otherwise infinite potentialities of scope and severity of personal trauma that evolve into familial and communal distress, strife, poverty, and disadvantage.

Technologies of Treatment

TPO provides a variety of conventional and innovative treatment technologies. Euro-Western technologies include face-to-face psychological counseling with psychiatrists and psychologists, cognitive behavioral therapy, and medical “first aid” with the help of psychiatric medication. However, in the spirit of TPO’s hybrid Euro-Western and Khmer perspective, these are frequently complemented by traditional Buddhist practices of meditation and mindfulness (see Nickerson and Hinton 2011).

Innovative Treatment Technologies

Relief and Justice: Translating Talk Therapy into Culture-Specific Testimony Therapy

As noted earlier, witnesses appearing before the ECCC tribunal are frequently assessed by TPO practitioners and DC-Cam interviewers/resource workers. In addition to documenting their testimony for the DC-Cam archive and undergoing a psychological trauma-based interview, victim/witnesses are invited to attend a newly established commemorative ceremony followed by a Buddhist ritual at a local temple (Wat). In the practice termed “testimony therapy,” victims recount their traumatic experiences to a counselor/resource worker, who records the ceremony in a personal folder. Formed into groups, these witnesses are then taken to the Killing Fields, where they are filmed as they recount their testimony. The films are distributed on various NGO websites that deal with mental health and/or genocide. The survivors then visit the collective commemorative monument (stupa) for genocide victims at the site of the Killing Fields, lighting incense in memory of the dead as a way of giving and receiving merit. Traditionally, veneration of the familial dead at stupa monuments serves to cosmologically liberate dead relatives so they may continue on the path to redemptive rebirth. The final stage of the testimony therapy ritual takes place in a Buddhist temple, where TPO and DC-Cam counselors/resource workers read the victims’ testimonies aloud to the gathered audience of monks, priests, witnesses, NGO advocates, and TPO staff. The survivors then receive their individual testimony folders as the counselor dramatically declares that “the survivor suffered and tried to forget but now his heart is open again to tell his story.” Kneeling before a monk, survivors exchange their folders for purification, blessing, and merit. The monk, who accepts the testimony, places a red band on the survivor’s arm to ward off the evil spirits of the restless dead. Survivors are then purified with holy water and released from their suffering. The TPO website describes this testimony therapy as a “rights-based, culturally adapted trauma treatment approach,” explaining its semiotic and therapeutic power as taking into account the cultural and human rights dimensions of mental health in Cambodia. Khmer Rouge survivors are invited to talk about their traumatic experiences. In cooperation with a counselor they can restore their painful memories and convert them into a written document: a testimony. The testimony is read aloud and delivered to the survivors during a Buddhist ceremony in the presence of other survivors and/or community members. This practice allows victims to express and process traumatic experiences, to honor the spirits of the dead, and to document human rights violations.

Echoing the hybrid technologies outlined earlier, testimony therapy both syncretizes psychological talk therapy with Euro-Western truth claims pertaining to testimony/commemoration and semiotically allies psychological discourse/practice with Buddhist traditional cosmological meanings and practices pertaining to death, karma, family ties, and merit exchange. The new hybridizing ritual triggers multiple semiotic “passages” between very different, culturally particular “cosmo-logics” of personhood, vulnerability, and memory/time—and even individual-collective roles—by “spiritually domesticating” the Euro-Western therapeutic discourse and practice of talk therapy. “Purifying” the stigma of psychological narcissistic self-management by enlisting memories of the genocide in order to fulfill the traditional Buddhist filial obligation of freeing the spirits of the dead, it also re-empowers monks and healers while reinstating them as key cultural mediators of communal well-being alongside otherwise “foreign” psy brokers/practitioners. Significantly, transcultural psychiatrists and their NGO partners are now introducing testimony therapy throughout Southeast Asia, adapting it to local spiritual and symbolic practices (see Agger et al. 2012). Testimony therapy also represents additional global discursive alliances between psychology, humanitarian intervention, and transitional justice. As the TPO website notes:

In a society where so many Cambodians continue to struggle with the aftermath of their country’s violent history, facing the past is crucial to recovery from mass atrocity. The Extraordinary Chambers in the Courts of Cambodia (ECCC) offer a unique opportunity for survivors of the Khmer Rouge to seek truth and justice. For the first time in an international criminal tribunal, survivors of mass atrocities have been included in the trial process as Civil Parties, rather than as simple witnesses, permitting them to play a more active role in the legal proceedings with full procedural rights. Active participation in a court can have a highly empowering impact on survivors of mass atrocities. For many Civil Parties, the new Civil Party mechanism offers enhanced opportunities for healing, in as much as it gives survivors the space to recount and receive some acknowledgment for their past abuses. However, the involvement of survivors in criminal proceedings also carries the risk of adversely affecting the psychological well-being of survivors. Moreover, TPO believes that trauma recovery and reconciliation are long-term processes that require more than just retributive justice and should be carried out in tandem with psychological support services and social reconstruction efforts. Thus, TPO initiated a comprehensive psychosocial program to support witnesses and civil parties, as well as to raise awareness of mental health issues amongst the general population. Working in close cooperation with the Witness and Expert Support Unit (WESU) and the Victims Support Services (VSS) of the ECCC, TPO provides a variety of psychosocial services through its Cambodian mental health experts. These range from on-site psychological support during and after the ECCC proceedings to awareness-raising activities on mental health and community-based truth-telling and memorialization initiatives.

As the TPO website reports, public testimony without psychosocial intervention and support risks traumatization rather than recovery. Testimony may trigger the resurfacing of traumatic memories and feelings of anger and revenge, thereby ultimately blocking the processes of healing and reconciliation that form TPO’s and the ECCC’s psychological and humanitarian goals. As intimated by the project’s name—“Relief and Justice: Translating Talk Therapy into Culture-Specific Testimony Therapy”—however, TPO promotional reports indicate that witnesses enjoying the support and psycho-education of TPO hybrid technologies (orchestrated by TPO and DC-Cam) experience both relief and justice. In “the only hybrid court where local legal officials and foreign jurists work together,” witnesses are literally “relieved” of their psychological and civic burden and receive justice. TPO discourse and practice may also successfully track and help prevent the potential uncertainty of new strife between victims and perpetrators. In keeping with ecological perspectives on trauma, psychosocial treatment thus serves the global humanitarian cause of postconflict justice and reconciliation. With respect to TPO, this step is all the more important for being achieved via traditional, culture-specific rituals and participatory grassroots self-help resources. The organization also boasts the discursive and technological feat of constituting and managing culturally competent grassroots psychosocial self-help that can be sustained from within the policy assemblage of NGO partnerships governing global civil society.

The Victim/Former Khmer Rouge Dialogue Pilot Project

Echoing the discursive alliance between juridical truth claims and psychological truth claims, the final technology, known as “The victim/former Khmer Rouge dialogue project,” facilitates collaboration between TPO and Kdei Karuna, an NGO promoting reconciliation between former perpetrators and victims (www.kdei-karuna.org/program/vfkr/). According to the TPO website:

This project aimed to rebuild and understand the fragmented relationship between victims and their direct perpetrators. While calls have been made for reconciliation from the Cambodian government, victims often wish to receive acknowledgement and an apology from their direct perpetrators. For perpetrators, overcoming cultural obstacles that hinder even the acknowledgement of crimes makes an apology difficult to achieve. With the fundamental belief that a grassroots participatory approach is necessary to realize these objectives, TPO and ICFC encouraged project participants to define their needs, expectations, and roles within the process to fully engage participants and the surrounding communities in addressing local needs for justice and healing.

Here, the principles of cultural competency ally psychological discourse with Khmer Buddhist civic-spiritual precepts relating to the public confession of regret and forgiveness. Legal scholars claim that these Buddhist precepts, yet to resurface in postgenocide Cambodia, are inconsistent with the core logic of human rights precepts of retributive justice (Hancock 2008–2009). Providing further evidence of the cultural incompetency of globally imposed transitional justice, this project also represents another assemblage of NGOs (TPO and ICFC) joining forces to promote their different yet now discursively and ritually allied agendas of healing and justice. It also bypasses the government and the ECCC, claiming that the public collective ECCC “showcase” of key perpetrators against the entire Khmer people is not commensurate with individual tales and scars, which require a more “communal” context for “individual” confession and apology with the potential of facilitating forgiveness processes.


Implications for the Governance of Uncertain Mental Health and the Subjectivization of the Psychosocially Vulnerable

Building upon Rabinow’s (2007) and Samimian-Darash’s (2013) perspectives on uncertainty and second-order observations relating to the technologies and embedded truth claims of governance, the content analysis presented in this chapter is not concerned with the “objective first-order” outcomes of successful or failed prevention of harm or even the prevention of uncertainty. The data has rather prompted us to focus on the emergence of new policies for assessing multiple trajectories of mental health and psychosocial uncertainty and the creation of technologies for managing and governing them. These second-order observations also embed local and global discursive truth claims about subjectivity and its role in mundane and spiritual experience. In order to examine the resultant subjectivity of vulnerable victims, it is expedient to briefly review how the TPO website depicts the psychosocial and mental health outcomes of their technologies. First and foremost, all the assessment and treatment technologies are portrayed as sufficiently culturally competent and thus effective in addressing the social welfare and psychological needs of their audience. The website also posts quantitative measures of posttherapy well-being and annual reports, adducing ameliorations in self-perceived well-being and a decline in psychologically identified trauma-related symptomology. Perhaps most interestingly from an ethnographic perspective, video clips posted on the website with English subtitles depict Khmer villagers’ accounts of the transformative and liberating impact of psycho-education, self-help, and testimony therapy. Despite their brevity, the videos allow viewers to evaluate the impact of these technologies upon victims’ self-perceived subjectivity before and after the intervention of TPO treatment.
Victims extensively refer to an improvement in family life, a new calm and productive lifestyle, the importance of remembering and sharing the past, rather than forgetting the genocide, for gaining a sense of freedom, and the joy of speaking of and sharing one’s feelings. These accounts suggest that the truth claims embedded in the newly implemented psychological technologies temporarily— albeit perhaps superficially— constitute a new form of subjectivity, one resembling that of “therapeutic selves” (Rose 1998). Nevertheless, questions remain with respect to the success of the policies and technologies of uncertainty. Do they mask resistance to these changes? Do the outcomes reflect the identity politics so frequently characteristic of postcolonial hybridized or glocal contexts (Kidron 2012)?


Lay Khmer Descendant Resistance to the Rhythm and Truth Claims of Uncertain Trajectories of Mental Health Discourse, Policy, and Practice

Ethnographic interviews with second-generation descendants of Khmer Rouge survivors in Canada (Kidron 2012) and Cambodia were undertaken to examine Khmer descendants’ self-perceived sense of wellness and their perceptions of parental wellness/illness. Descendants in both countries insisted that they and their parents were not suffering from a mental illness, nor were they burdened by debilitating psychosocial distress. Only two of the twenty-five respondents in Canada and one respondent in Cambodia had heard of the illness constructs of PTSD and ITT or of their treatments. When the symptomology of PTSD and ITT was outlined in language similar to that of the TPO website, respondents resisted the discourse and practice of trauma-related mental health care. The overwhelming majority of descendants maintained that the sole psychosocial outcome of what they refer to as the “Pol Pot time” was occasional “sadness,” occurring when they “thought too much” about the past. The vast majority also regarded Buddhist practices such as meditation and ancestor veneration to be a source of resilience and healing for both survivors and their children. Descendants living in Cambodia in particular were adamant that meditation at a temple had allowed their survivor parents to “release” any remaining anger or sadness that could potentially cause illness. A number of descendants went as far as to assert that Western biomedical treatments would be “poisonous” or might have dangerous side effects, Buddhist practices in contrast being wholly salutogenic.
When asked whether Western forms of treatment should supplement Buddhist practices, only two Canadian Khmer descendants and one Khmer descendant living in Cambodia indicated that Western medication might be necessary in extreme cases of “shock” and “sadness.” Only one descendant reported any contact with mental health services (not TPO) and the technologies of trauma therapy, having taken her mother to a Khmer psychologist at the local mental health clinic after repeated incidents of what she described as “sadness” and withdrawal from familial activities. Even in this case of possible exposure to Euro-Western therapy, the descendant claimed to have been encouraged to stop her mother from “thinking too much” (Keut Chran, a local idiom of distress) about the past and told that her sadness would dissipate if her family assisted her in sustaining a present-oriented perspective. The descendant asserted that her mother was able to follow the psychologist’s advice and felt she had “recovered.”


The ethnopsychological diagnosis, treatment, and outcome embedded in this account appear to deviate from the truth claims noted earlier regarding baksbat-related Khmer fearful silence (“not wanting to hear about the past” or “wishing it would go away”) and the “testimony method” as proposed treatment. Resistance to “thinking too much” is also antithetical to Euro-Western understandings of “avoidance,” which is understood from a psychotherapeutic perspective as impeding the all-important pathway to voice, testimony, and healing. In response to a more direct question regarding their own and their parents’ interest in speaking about the difficult past, respondents unanimously resisted the Euro-Western therapeutic truth claim regarding public articulation and verbal working through of painful experiences. The concept of working through the past to manage and heal (and thereby govern) an array of potential emotional scenarios in a “disordered” future was particularly inconceivable to descendants. Many respondents asserted that according to the rules of karma, which dictate that past and present actions dialectically determine the future, the future is “naturally” and dynamically uncertain— infinitely diverse and out of our control. At the same time, however, they also suggested that a present orientation focusing on one’s moral behavior, rather than a past orientation (and dwelling on the past), would shape— if not guarantee— a better future.
Khmer notions of personhood (and Buddhist conceptions of karma in particular), present- and (short-term) future-oriented perspectives, and fate and the inevitability of suffering appeared to be inconsistent with what I had presented to them as trauma theory and the trajectory of managing uncertain mental health.1 Finally, in keeping with Fassin’s (2008) and Pupavac’s (2004) critique of trauma theory and related interventions, descendants who were members of or advocates for local NGOs (6 out of 25 respondents in Canada and 7 out of 25 in Cambodia) were concerned that discourse regarding and funding of mental health interventions would both needlessly pathologize and disempower Cambodian survivors by “making them sick by telling them to ‘think too much’” about the genocide and depoliticize the critical and difficult process of solving social inequality— the latter not being solely an artefact of genocide trauma. In their view, present adversity was no less a product of postcolonial politics and deep-seated culture-specific socioeconomic hierarchies and corruption in Cambodia.


Implications for a “Future” of Managed Mental Health Uncertainties

As Agger et al. (2012) argue, the contemporary globalized world can no longer be reduced to local and particular moral and spiritual understandings or parochial political organizational arrangements. Global realities require creative solutions to local problems that take into account the synergistic meeting of local and global discourse, practice, and the resultant ontological and phenomenological experiences of subjectivity, wellness, social order, and disorder. In this glocal reality, hybridized mental health organizations appear to harness, rather than control or prevent, the logic and technology of uncertainty to promote and sustain a new glocal hybrid health protocol that simultaneously pathologizes vulnerable victims and paradoxically heals them (according to Euro-Western and Khmer concepts) in ways that “speak” to (what entrepreneurs define as) familiar cosmo-logics. According to the new hybrid Khmer/Euro-Western entrepreneurs of health care, these protocols may be sustained within and by a glocal governmental/nongovernmental and national civil society that serves individuals and communities. These protocols are undoubtedly “making up” (Rose 1998) new hybrid, culturally particular psychological subjects of a psychological pastoral regime that at once emerges from above and below. In the process, new truth claims are formed pertaining to individual and communal well-being and civic neoliberal Khmer rights dependent upon and interwoven with the scars (be they disseminated as PTSD or baksbat) of the past and what is now termed “for them” and “by them” mental health and healing. Individual juridical roles, personal memory, voice, transmission, familial relations, and relations with the dead, which are key cosmological tenets guiding the way the world “culturally works,” are also changing.
Perhaps most intriguing of all are the developments in the way the past and future are narratively constructed and emplotted. In contrast to the traditional Buddhist conception of an “unmanageable” and uncertain future, the past is conceived as something co-present, to be managed, tracked, and observed, and as perpetually impacting and interwoven with Euro-Western concepts of well-being. In keeping with Edkins’s (2003) insights, it would appear that these protocols are part and parcel of “the politics of trauma,” where trauma brokers coopt and govern memories of genocide, imposing a linear frame on what would otherwise be a nonlinear conception of the past. The “cosmopolitan,” utilitarian, and neoliberal reading that acknowledges the “benefits” of postcolonial hybrid mental health care technologies and truth claims must nonetheless come to terms— theoretically and


ethically— with the very different voice of descendants who resist Khmer mental health care (Kidron 2012). These voices problematize the “inevitability” of managing uncertainty— in particular in cultural contexts where even well-intended cultural translators, civil global entrepreneurs, health practitioners, or scholars cannot easily translate “suffering” and human rational management of infinite scenarios of collective suffering, generational transmission, and individual mortal and emotional fate. Further ethnographic interviews and observations are called for in order to observe not only how Khmer survivors, descendants, and trauma victims make cultural sense of newly managed scenarios of mental health uncertainty, but also how they translate to themselves and their loved ones the moments of contact and syncretization between their cultural meaning world and that of mental health care. The texts presented herein clearly demonstrate that glocal scholars, practitioners, and NGO-based entrepreneurs have also experienced these moments of translation, creating their own newly managed cosmopolitan scenarios that may no longer be, for them and for many of their glocal disciples, incommensurate with what they now perceive to be adapted (yet no less “authentic”) traditional cultural lifeworlds. Ethnographers existing on these liminal cultural boundaries and specializing in such moments of contact, continuity, and change may be well positioned to observe local tacit and vocal resistance to glocal culture loss institutionally orchestrated in the name of managing the infinite trajectories of uncertain actual futures.

CHAPTER SEVEN

The Malicious and the Uncertain: Biosecurity, Self-Justification, and the Arts of Living

GAYMON BENNETT

The human race is inquisitive about other people’s lives, but people are negligent to correct their own.
—Saint Augustine, The Confessions

Viral Engineering

In September 2011, at a resort in Malta, the European Working Group on Influenza held their triennial conference to discuss the state of play in influenza research, control, prevention, and treatment. The Malta meeting was distinct in that it was also expected to facilitate a postmortem on Europe’s response to the 2009 H1N1 “swine flu” pandemic. On the second day of the conference, Dutch virologist Ron Fouchier gave a talk that would prove the most consequential of the proceedings. Fouchier was, at the time, a relatively minor player in influenza research. He was an assistant professor at Erasmus Medical Center in Rotterdam, where he was overshadowed by his director Ab Osterhaus, a figure of considerable standing in global health, deeply connected to the World Health Organization (WHO), the National Institutes of Health (NIH) influenza centers, and the pharmaceutical industry. Fouchier was slated to report on his lab’s efforts to identify biological factors that might help predict how and when influenza viruses become transmissible by air between mammals, that is, when they might be able to “go pandemic” (ESWI 2011). In a paper published shortly before the Malta conference, Fouchier’s lab described the problem: “Repeated transmission of animal influenza viruses to humans has prompted investigation of the viral, host, and environmental factors responsible for transmission via aerosols or respiratory droplets. How do we determine— out of thousands of influenza virus isolates collected in animal surveillance studies each year— which viruses have the potential to become ‘airborne,’ and hence pose a pandemic threat?” (Sorrell et al. 2011).

Despite use of the term “prediction,” Fouchier’s research was oriented by a slack relation between risk, uncertainty, and pandemic preparedness. The question was: What can we know about possible future pandemics by understanding the present biological conditions that might (or might not) contribute to their emergence? And assuming that we probably cannot know enough to prevent new outbreaks, can we at least learn enough to make decisions about which sequences to watch for, which vaccines to stockpile, and which culling plans to put in place? Fouchier’s lab proposed to identify the minimal set of genetic mechanisms that any influenza strain would need before it could become transmissible between mammals via aerosols or airborne droplets. Public health officials could then surveil for influenza strains containing that set as a means of anticipating where pandemics might emerge. The relation of science to the future implied here is an old one: the nature of things in the present is more or less what it was yesterday and will be tomorrow. The wrinkle is the evolutionary malleability of living systems. Biology is historically underdetermined. Knowing what a living organism is today may not, in fact, tell you what it will be tomorrow. Influenza, which evolves quickly and easily, is notorious in this regard. This means that the particular form flu takes is context dependent. This also means that influenza activates the familiar political difficulties of infectious disease: disease is a social fact, and the norms and forms of cultural practice, from city planning to economic circulation, function as vectors for the transformation and transmission of disease (Delaporte 1989; Rabinow 1995).
The story of avian influenza, which is at the center of Fouchier’s work, illustrates the point. Avian flu appeared in China and Southeast Asia following a massive consolidation of poultry farming made economically feasible by changes in global shipping. These changes were in turn linked to shifts in eating habits around the world correlated to evolving norms of wealth, social status, and consumption. It would seem to follow that any sufficient public health response to avian flu requires direct intervention into the practices and institutions that drive its evolution. There seems to be a biological way out of such costly intervention, however. If the specific form flu takes is historically and contextually underdetermined in another dimension (e.g., a functional one),


the range of possible forms is limited. There are a limited number of proteins on the surface of any virus that can be involved in transmissibility and virulence, and a limited number of underlying genetic configurations that can support these proteins. The set of genetic combinations that allow for potential pandemics, in other words, is finite and susceptible to specification. From the perspective of biological function, Fouchier’s project to determine the genetic bases of transmissibility would seem to be apposite. Equipped with a determined set of sequences, public health workers might be able to ignore the uncertainties generated by the cultural and economic contingencies that otherwise drive viral evolution. Microbiology could be made to power the machinery of global public health without officials having to interfere in the economic and cultural patterns that make up our contemporary lives. By the time of the Malta conference, Fouchier’s work had not produced definitive results. His strategy had been to trace through the phylogenetic history of flu strains associated with twentieth- century pandemics. The hope was that a viral genealogy of existing samples would reveal the targeted genetic sequences. When this approach proved more or less unsuccessful, the lab decided to try things the other way around. Rather than find the sequences they decided to make them. Fouchier did not recap any of this in Malta. Instead, he reported that cases of H5N1 avian flu were on the rise in Indonesia and that the virus was evolving beyond the prophylactic aid of existing vaccines. He asserted that the reason for this rise was a growing unwillingness on the part of public health officials to pay the economic and political costs of stamping out the virus through vigilant surveillance and the culling of poultry farms. The reason, he claimed, is that these officials had become convinced that because H5N1 had not become pandemic, it could not become pandemic. 
Fouchier then dropped the proverbial bomb. He announced that his research group had successfully engineered H5N1 to make it transmissible, by air, between mammals. Their process had been simple, if laborious. Beginning with the wild-type strain, they directly engineered three mutations thought to be associated with transmission. They then cultured the virus in a nasal rinse and washed it through the sinuses and upper respiratory tracts of several ferrets. After letting the virus incubate, they rinsed the sinuses of the infected ferrets, collected the rinse, and repeated the process. Seven passages and two further mutations later, otherwise healthy ferrets in adjoining cages became infected via airborne droplets. Fouchier suggested that in gaining new functions the virus had not thereby lost its native virulence, adding that “this was very bad news indeed.” With an uncomfortable laugh, he concluded that the experiment was “really, really stupid” (ESWI 2011). In retrospect, it is unclear whether his admission referred to a relative lack of scientific sophistication or political seriousness.

Uncertainty: Modern Risks, Contemporary Existence

In his essay on the logic of risk, “Describing the Future,” Niklas Luhmann proposes that we moderns find ourselves in a “completely different situation” (Luhmann 1998).1 That difference is defined by the experience of a radical break in the continuity of past and present. That break generates a relationship of radical contingency between the present and the future. The novelty of our present situation shows itself in a double diacritic. First, we can no longer be certain that anything we have experienced in the past will be the same in the future. Certainty lies in the fact of uncertainty. Second, we are faced with the proposition that much of what will be true in the future depends on the decisions we make today. Discontinuity, contingency, and uncertainty collude to leave us with the felt insecurity of a future dependent on decisions. This redounds to a relation with the future summarized by risk. The relation is defined in factual, social, and temporal dimensions. In its factual dimension, risk turns on a lack of fixity. Language and information processing have become systems of interferentiality, operating less on teleology and more on autopoiesis. In this dimension one simply hopes that things operate well, making technical adjustments according to preference. In its social dimension, risk is defined by a shift from authority to the politics of understanding. Authority creates the future by establishing the terms according to which it will be imagined as an elaboration of the present. The politics of understanding consist in provisos that can only be relied upon for a given time. The value of expertise in this case lies in fixing reference points that parameterize what can be talked about. This leaves time as the real problem. The future takes form in the present as uncertainty: a perpetual and proliferating series of probabilities understood in the simple sense of things that are more or less likely to happen.
Systems of calculation are needed to manage this uncertainty, systems that calculate the future as that which can always be otherwise. The value of calculation lies in its endless flexibility. If calculations turn out to be wrong they will still be right: the future is uncertain, after all. All of this adds up to a situation in which uncertainty abounds and decisions must be made. Expertise becomes communication and the only limit


point— the point at which probability calculations and expert opinions break down— is catastrophe. Hence the experience of the future as insecurity and the risk of deciding. And hence Luhmann’s famous definition of risk as a function of decisions: possible but not yet determined losses that may or may not result from decisions (Luhmann 2005). We need descriptions of the future in the present which will help us decide. Fulfilling this need constitutes an ever-receding horizon (Samimian-Darash 2013). Modernity in this view is tragic and Sisyphean. It is tragic because it is not at all clear whether our decisions even matter amid the world’s tangled complexities. It is Sisyphean because, faced with uncertainty, we are fated to decide. Even observers of modernity cannot escape being modern. Hence Luhmann’s orienting presumption— one he shares with other critics of “risk society”: we moderns are completely different. It seems to follow that it is both possible and necessary to identify prior modes of being in the world, and our prior ways of facing unknown futures. The task in a risk society is to clarify how these prior modes are no longer sufficient so we can move beyond them. Luhmann reminds us that we are no longer Scholastics for whom the world is a cosmological synthesis in which the future is regulated, integrated, and trustworthy. Our uncertainties have become substantial and not merely accidental. And we are no longer citizens of the Enlightenment for whom the growing complexity of modern structures generated a relationship to the future predicated on projection and planning. This pocket genealogy of prior modes of being is taken as self-evident, made obvious by the stark experience of insecurity in late modernity. Indeed, reference to the dissolution of the cosmological synthesis and the failures of Enlightenment planning has become commonplace in analyses of risk society, neatly circumscribing the boundaries of our late-modern being (cf.
Giddens 1991; Beck 2002). Yet such serialization, however pedagogically tidy, is problematically epochal. It cannot be squared with the plague of contingency that Luhmann insists now ails us. If we cannot be sure that tomorrow things will be like they were yesterday, then we also cannot be sure they will be different. It seems worth considering whether late modernity is, in fact, so completely different. This question is particularly important in light of events like the H5N1 affair, in which claims of radical novelty and risk play a structuring role despite the fact that the politics of the event remain blocked in part by long- standing difficulties. In this light, it seems worth asking whether our being as late moderns can be defined less as an epoch and more as an ethos (Foucault 2001). Being


modern— or enlightened or scholastic— might then be considered in terms of possible modes of existence, and not only defined historical chapters. This approach has the virtue— to introduce a technical distinction— of testing the possibility that today we might be contemporary and not only modern (Rabinow 2007). The term “contemporary” names an ethos and a task. As an ethos, the contemporary names a situation in which old and new elements are being brought together and given distinctive forms (Rabinow 2007). Unlike a modern ethos in which significance is indexed to the new over the old, in the contemporary significance is indexed to the recombination of tradition and innovation in response to specific problems. As a task, one which is as ethical as it is scientific, the contemporary consists in specifying how it is that old and new modes of existence are being brought together in the world today— how, for example, do elements of Enlightenment planning or scholastic cosmologies remain present today and partially determinative of moral formation in the space of laboratory practice? And how do we determine what these combinations of old and new are actually doing and how they might need to be reformed? Unlike risk society, the operative assumption of the contemporary is that the future is not only uncertain but also underdetermined. It can be determined differently, and the range of possibilities is shaped, in part, by how we embody our relationship to the past as a means of facing the future. The contemporary thus facilitates a dispositional and modal shift in our relation to the future. The problem is no longer (primarily) one of describing the future in order to risk deciding in the face of situations marked by uncertainty. The problem is also to conduct inquiry into the determinants of our historical being in order to undertake a work of self-formation in the face of situations marked by underdetermination.
In the case of H5N1, the anthropological question is this: What would it mean to think of biosecurity and uncertainty as contemporary and not only modern problems?

From Health to Security: H5N1’s Moral Determinations

Initially, few people outside the world of influenza research took notice of Fouchier’s announcement. The few nonresearchers who did take note (two science bloggers) basically reproduced his story: that his work ended debates over whether or not H5N1 could ever become pandemic, and that it provided the sequence information needed to prepare countermeasures in the form of genetic surveillance, vaccines, and antivirals.


The relative lack of attention is surprising. Fouchier, like others, was funded to work on H5N1 because of its extraordinary case fatality rates (i.e., the number of people who die per infection). The WHO estimates those rates to be 30 to 60 percent. Even if that number is high by a factor of two, it still means that Fouchier’s engineered strain represented potentially the most deadly airborne virus in the world. Fouchier underscored this point (MacKenzie 2011). He and others suggested that the number of people who might die from a possible H5N1 pandemic was a measure of the number of lives his work might save. This salvational estimation seemed vindicated by the fact that all five of the mutations he identified were already known to exist in bird populations, though not yet together. Fouchier took this to mean that an H5N1 pandemic was a certainty, only a matter of time. Shortly after Malta, Fouchier submitted his work to the journal Science. At roughly the same time another research group, led by Tokyo- and Wisconsin-based virologist Yoshihiro Kawaoka, submitted parallel findings to the journal Nature. Editors at both journals expressed concern about publishing the full results— both journals had previously been embroiled in controversies concerning the security implications of publishing the sequences of deadly viruses. Editors at the journals ultimately forwarded the papers to US security officials, who sent them to the NIH, which had funded the work. In early October, the papers were given to the National Science Advisory Board for Biosecurity (NSABB), which was asked to determine whether the studies constituted “dual use research of concern,” and whether they should be communicated in full.2 The NSABB’s task reactivated a long-standing shift of rhetorical frames and ontological imaginaries: biology as a matter of security and not only health.
The NSABB convened a working group, which spent dozens of hours in intense and pained deliberation; intense because there was a strong sense that the papers “could be directly misapplied with a sufficiently broad scope to affect national or global security,” and pained because most members were loath to interfere with the circulation of scientific publication (Berns et al. 2012). The working group framed the question of security in terms of instrumentality and intentions: “Could this knowledge, in the hands of malicious individuals, organizations or governments, allow construction of a genetically altered influenza virus capable of causing a pandemic with mortality exceeding that of the ‘Spanish flu’ epidemic of 1918?” On the surface this framing fit the Luhmannian risk formulation wherein a question of uncertainty is linked to human decisions and potential losses. But the technical

130 / Gaymon Bennett

question was not posed in terms of probabilities, but in terms of the relative ease with which the virus might be reproduced. More important, the principal affect was not insecurity generated by uncertainty, but rather certainty about the moral topography of global health and global security. In a manner not dissimilar to Fouchier’s ethical self-stylization, the NSABB proceeded as though the benefits of “gain of function” research (i.e., research whose purpose is to enhance a virus’s transmissibility or virulence) could be taken for granted. “With [new technical capacities] has come unprecedented potential for better control of infectious disease and significant societal benefit. However, there is also a growing concern that the same science will be deliberately misused and that the consequences could be catastrophic” (2012). “Misused” is the tell. It reveals the moral character of their framing terminology; that is, the notion of “dual use.” Dual use seems conceptually uncomplicated and self-evident as “research can be used for good or bad purposes.” But troubling complications lie with the fact that the term dual use operationalizes and seems to make plausible a strategy of dealing with the dangers of contemporary biology by cantoning off the world into “bad guys” (terrorists or psychopaths) and “good guys” (scientists and funders), whose intentions—and hence research—can be assumed to be beneficent (Rabinow and Bennett 2012).

The Modern Threshold: Uncertainty, Security, and Subjectivity

Standard accounts of “risk society” feature the idea that before the nineteenth century future uncertainties were not so existentially problematic (cf. Luhmann 1998). No one suggests that life was predictable and unthreatening. From Aristotle to the medievalists the question of de futuris contingentibus—contingent futures—was ubiquitous. The suggestion, rather, is that insofar as people believed in a cosmos, things visible and invisible could ultimately be presumed to add up. A certain confidence in the practice of virtue was possible. One could prepare oneself for unknown futures by cultivating virtues like “steadiness” or “tranquility” because one could be certain that these virtues were consistent with the nature of reality. Life might be troubled by unexpected events, but at least one knew which virtues were needed to face the future with confidence. “The harmonia mundi,” in short, “was beyond question” (Luhmann 1998).

This pocket history is, of course, a setup in the strict sense. It allows analysts to drive home the point that previous views of the world and its future cannot be maintained in modernity. The world is too complex and
the future too unfixed. One might still say that the harmony of the world is beyond question, but one would mean something entirely different.

Such a view of life before modernity may be perfunctory and familiar, but it precipitously closes the question of whether and in what ways previous understandings of the future might remain pertinent. Even if we agree that previous understandings of the future are no longer sufficient, previous ways of experiencing, problematizing, and preparing to face the future are actually still with us, however obliquely. Moreover, if we think of the situation of H5N1 as contemporary and not only modern, then the question is not how to move beyond the cosmological presumption or Enlightenment planning, but rather, how to reconstruct those histories in order to determine their present significance. In this way we find that our premodern predecessors are not only still with us, but have something to say about our contemporary predicament.

This contemporary reconstruction begins with recognizing that although few premodern thinkers named the cosmological synthesis as their primary concern, theologians, philosophers, and church officials spent vast amounts of energy maintaining the presumption of an integrated reality. This is especially evident in eleventh- and twelfth-century Europe. Catholic Europe was rife with popular religious movements. The centralized power of the church—which was never as absolute as later discourse might suggest—had become increasingly tenuous. In response, four times in the twelfth century the Catholic Church held ecumenical councils aimed at reintegrating the church. Among other things, these councils deauthorized the preaching and sacramental authority of nonclergy, clarified the separation of spiritual and temporal powers, and reasserted the power of the pope (Foucault 2009). Crucially, though seemingly less consequential, the councils also created a new monastic order—the Dominicans.
Popular religious movements had been animated by a sense that too many priests were living in sin and that this perverted the salvational efficacy of their preaching and sacraments. To address this, the councils called for the creation of an order of preachers capable of speaking the truth in such a way as to effect salvation in the hearer regardless of the state of the preacher’s own soul (Ozment 1980). The Dominicans were faced with the challenge of imagining and giving form to a fundamentally different relation among truth, subjectivity, and the future. As Michel Foucault puts it, whereas previously it was postulated that in order to have access to the truth the subject must undergo transformations at the level of the subject’s own being, the Dominicans now needed a relation that was predicated on the idea that “such as he is the subject has right of access to the truth” (Foucault 2005).


Within a half century, Europe’s universities had been turned to the task of producing a model and mode of doctrinal theology whose salvational outcomes could be more or less taken for granted. Alongside new institutional norms, new theological forms were created. These aimed at the systematic, self-consistent, and self-evident presentation of Christian thought. Thomas Aquinas’s celebrated Summa Theologica constituted a definitive attempt. If the cosmological synthesis was previously assumed, with Aquinas it was given systematic form.

The veridictional disposition exemplified by work on cosmological synthesis constitutes a decisive moment in the history of thought, one that remains significant for our contemporary mode of being. This moment can be glossed as the modern threshold in the relation of truth and subjectivity: “We can say that we enter the modern age when it is assumed that what gives access to the truth . . . is knowledge and knowledge alone” (Foucault 2005). This idea that knowledge alone gives access to the truth became programmatic for the Enlightenment scientists: “The scientist can recognize the truth and have access to it in himself and solely through his activity of knowing, without anything else being demanded of him and without him having to change or alter his being as a subject” (Foucault 2005). But if science is this idea’s most prominent exemplar, it is worth keeping in mind that this disposition was first formulated by scholastics in response to specific institutional breakdowns. It is worth keeping in mind because it orients inquiry toward a different problem. That problem, which links such seemingly distant practices as scholastic theology and experimental biology, is the problem of salvation. The term “salvation” in this case does not refer only or primarily to Christian ideas of heaven and hell, God, or celestial choirs.
Rather, salvation (soteria), the problem of how to save (sozein), is a much older one, central to the history of European philosophy as well as theology. And it is a problem that needs to be reexamined as part of contemporary science, as the H5N1 affair shows. It is the question of how to live as individuals and citizens so as to have a good life, as well as the place of thinking in achieving such a life (Foucault 2005). The salvational problem has sometimes been cast as primarily political and instrumental, but it is more fundamentally subjectivational and formational: How does one become an ethical subject capable of living well in the face of unknown futures? The question of salvation is a problem because the world is rife with unexpected turns and turbulent forces and because we are often incapable of facing those forces in a manner we judge to be good. For antique philosophy and medieval theology the two sides of the problem—
unexpected futures and subjectivity—were brought together through the tekhnē tou biou, the arts of living, or what we might simply call virtue ethics; that is, the reflective and painstaking work of self-formation. The significance of the modern threshold is that it marks an event in the history of thought at which point it begins to be assumed that salvational goods can follow directly from knowledge itself and the application of knowledge, without reference to prior or ongoing ethical labor. Luhmann and others may be right that the harmonia mundi was ethically indispensable prior to this threshold. They may be right that, in this sense, it was beyond question. But what they cover over is that there was in fact a problem, and it is a problem that remains pertinent today: the salvational problem of preparing oneself for the future.

The Micropolitics of Blame

In November 2011, news of the NSABB’s review ignited interest in the research, complicating the narrative and politics. In the United States, Thomas Inglesby at the Center for Health Security and Laurie Garrett at the journal Foreign Policy set the talking points (Garrett 2011; Inglesby et al. 2011). They argued that all gain of function research that expands a deadly virus’s host range should simply be ruled out of bounds. For Inglesby this meant redacted publication. For Garrett it meant resigned inevitability, in that too many eyes had seen the manuscript.

In December the NIH released the NSABB recommendations: redacted publication, with the possibility of establishing a secure network on which public health workers could access the full manuscripts (NIH 2011). The tone of the release was regretful. It indicated that the risk of equipping nefarious actors was too high to warrant full publication. But the NIH remained steadfast in its commitment to the value of gain of function research.

A few days later, the politics of the situation intensified. Anthony Fauci, head of the National Institute of Allergy and Infectious Diseases (NIAID), co-authored an op-ed piece for The Washington Post with NIH director Francis Collins and NIH executive Gary Nabel, strongly supporting full publication (Fauci, Nabel, and Collins 2011). The editorial asserted that an H5N1 pandemic was now known to be virtually inevitable and that knowing the genetic sequence in advance was vital for public health. Pointedly, it did not mention the NSABB. It did suggest that the primary threat was not dual use but rather inaction in the face of the imperative of global health preparedness.

The editorial had immediate effects. A week after it ran, the New York Times editorial board condemned the research, praising the NSABB and
(echoing Garrett) referring to the Fouchier strain as a “doomsday virus.” Vincent Racaniello, the editor of the widely read “Virology Blog,” castigated the editorial board, suggesting that they should leave judgments to the experts. Those who disagreed with the research were cast as antiscientific and security-obsessed. Perhaps more important, the editorial had a chilling effect on the willingness of biologists to enter the debate. This left security experts the only voices publicly critical of full publication.

Three months later, at a National Academy of Sciences meeting in Washington, DC, Nobel Laureate Richard Roberts confronted Fauci, saying that he and several of his colleagues had taken his editorial as the NIH party line. They had stayed quiet for fear of losing funding. Fauci blithely replied that he had nothing to do with the review of proposals.

In mid-January, ScienceInsider ran an interview with Fouchier in which he struck a tone of disbelief that so few people grasped the medical worth of his work. He subsequently agreed, with Osterhaus and Sander Herfst, the lead author on his lab’s paper, to publish a systematic justification of the research (Fouchier et al. 2012). That justification deflected the question of security by focusing on safety. It stressed Erasmus Medical Center’s compliance with regulations, and the biosafety qualities of EMC’s facilities—filters, locks, and backup generators. It repeated earlier statements about health preparedness, especially the fact that, equipped with their engineered sequence, public health officials could better surveil for pandemic H5N1 and stockpile vaccines.

Science asked epidemiologist Michael Osterholm to respond (Osterholm and Henderson 2012). Osterholm was a long-time member of the NSABB and had become its de facto spokesperson. He was also a director of one of the NIH flu centers and hence could position himself as a critical insider. Osterholm struck only pragmatic notes in support of redaction.
He pointed out that disease surveillance and vaccine production remained slow and fraught with political conflict, limiting the ability of public health workers to take full advantage of the findings. Given this, why make a potentially pandemic virus available to potentially nefarious actors? Osterholm’s response disappointed some scientific critics. Other concerned biologists were arguing (almost entirely off the record) that neither Fouchier nor Kawaoka had discovered the sequence by way of which H5N1 becomes transmissible between mammals, but only one possible sequence. The only way to know how many other combinations of mutations might produce the same effect would be to keep evolving the virus, thereby exacerbating security concerns.


Concerned biologists were also not sure what to make of the fact that all five of Fouchier’s mutations already existed in infected birds. A virus engineered in the laboratory could never tell us how likely it is that those five mutations might end up in the same bird at the same time (Relman and Brent 2013).

Kawaoka remained almost entirely out of the public eye. He published one short defense of his work in Nature (2012). In it he distanced himself from Fouchier, explaining that he had deliberately attenuated his strain so that it would no longer be as virulent as the wild type. He stated that, unlike Fouchier’s, his was not deadly.

A Genealogy of Malice

The modern threshold of truth and subjectivity obviously entails methodological ramifications. Less obviously, it also transformed the ethical stakes of thinking. In the wake of Aquinas’s Summa, scholastic theology began to provide systematic justifications for ecclesial claims to authority, including the Roman Church’s monopoly claims over the means of salvation. Throughout the fifteenth and sixteenth centuries these claims, and the practices connected to them, proliferated, indirectly testifying to the fact that the church had never really achieved the institutional consolidation it had sought.

The sixteenth-century reformers were ruthless on this point (Bradstock 2006). Martin Luther and John Calvin, among others, launched theological attacks on scholasticism as a proxy war on the corruptions and overreach of ecclesial authority. Luther famously attacked the Church’s salvational presumptions by way of scathing denunciations of Thomistic and Aristotelian confidence in the human ability to control the terms of salvation. Luther appealed to Saint Augustine’s concept of the fall and original sin. He cast Aquinas’s systematic appeals to reason as counter to the reality of endemic perversions in the human spirit. The critique was polemic and arguably infelicitous. Aquinas, after all, had sought to remake Aristotelian ethics in an Augustinian mold. More important—and in the spirit of the epigraph at the beginning of this chapter—Aquinas gave considerable weight to what might be called Augustine’s ethical realism: the idea that practices of self-examination and self-formation are good and necessary even if the results are limited (Aquinas 2013). By contrast, and amid polemics, Luther rejected out of hand the value of spiritual exercise (Luther 2008). And as Foucault has wryly pointed out, it’s a historical irony that Luther, a resolute Augustinian, established the
terms for a view of ethics largely at odds with Augustine’s realism (Foucault 2005). Luther proposed that the basic problem is that humans are “curved in on themselves”—incurvatus in se. They are self-centered to the point of moral helplessness and can only be saved by an outside force, whether divine grace or the sword of the prince. This also means that any attempt at self-improvement is as likely as not to create as many problems as it solves. Many of Luther’s interpreters took his views on sin to mean that in the end, ethics could not really be about improving one’s character—and certainly not about salvation. And from the notion of an overdetermining fall, it’s a small step to the remarkable transformation of ethical thinking across much of modern Europe according to which ethics is thought of more as a matter of will, intentions, and selflessness, than habits, dispositions, and practice (MacIntyre 2007).

The problem is that if ethics is not about work on one’s own being, “correcting our own lives,” as Augustine put it, all that’s really left is (a) the state of one’s intentions or (b) the state of the world. More problematic still, as philosopher François Flahaut has incisively pointed out, a moral posture that assumes that ethics is only about intentions and the world tends to slip toward self-idealization (2003). In its most pathological forms, self-idealization tumbles to the presumption that it is really only someone else that needs to be worried about. This posture of self-idealization—malice—has a long history in the annals of virtue ethics. Malice shows itself in proclamations of good intentions, the drawing of clear and distinct lines between good and bad. Malice has traditionally been catalogued as one of the minor vices.
But as Flahaut reminds us, over the last century it has proven especially nefarious, and today warrants reexamination: “The question [of malice] is important because it not only points to self-illusion but equally supports belief in illusions widely shared. These illusions are catastrophic . . . whenever they guide the political ideal of entire societies” (2003). Flahaut is referring to regimes of the right and the left in the twentieth century whose self-proclaimed good intentions directed at the construction of a new and better society resulted in the effort to eliminate individuals and groups classed as dangerous (Rabinow and Bennett 2012). Furthermore—and this is the relevant point for the H5N1 affair—in situations of breakdown, the well-intentioned of the world obfuscate the need to address situations of moral complexity through dedicated self-clarification.

Flahaut summarizes two figures of evil. On one side is the Augustinian figure of original sin. On the other is the figure of Enlightenment optimism, whose apotheosis is found in nineteenth- and twentieth-century positivism.


Where for Luhmann the Enlightenment figure is one which confronts the future in a mode of projection, progressive amelioration, and the emerging problem of risk, for Flahaut the figure is defined by an overriding moral sensibility. This sensibility counterposes an affirmation of the inherent goodness of humanity to the Augustinian view of “the fall of mankind.” Because humanity is taken to be inherently good, problematic situations only require better technologies and technocratic solutions. Instrumental amelioration of self and of society is set forward as a path to this-worldly salvation. The Enlightenment figure externalizes evil. By externalizing evil, a vast potential for work in this world—of a certain type—can be mobilized. But once evil is externalized, when the hope for progress falters, no other resources are at hand for understanding or addressing the issue (Rabinow and Bennett 2012). And risk calculation is no way forward—and not only because, as Luhmann has proposed, the uncertainty of the world exceeds technologies of control. It is no way forward because it leaves the externalization of evil firmly in place and turns a blind eye to institutionalized self-idealization.

Health, Security, Health: Externalizing Malice

In mid-February 2012, the WHO announced that it was holding a closed-door meeting to reopen the question of whether the Fouchier and Kawaoka papers should be published in full (WHO 2012). The meeting was arranged in part because the WHO was under pressure from Indonesia and other signatories of the Pandemic Influenza Preparedness (PIP) framework, which stipulates that countries that provide influenza samples to European and American researchers have guaranteed access to public health benefits deriving from research with those samples. Signatories argued that redacted publication risked violating the framework. The meeting was also arranged at the behest of Fauci’s office. Fauci believed the NSABB had put the NIAID in an untenable position. Since the early 2000s, Fauci had pushed for steep increases in dedicated funding for gain of function research on infectious diseases. That push was premised on a seemingly self-evident fit between the microbiological study of viruses and gains in public health.

Twenty participants were invited, including members of the NSABB, Fauci, the researchers themselves, editors of the journals, representatives of the WHO and the PIP nations, and one bioethicist. After a day and a half of deliberations, the WHO announced that a majority had voted to recommend full publication: new information had come to light concerning the direct public health benefits of the research. One attendee stressed that the “real” threat lay in the possibility of a naturally occurring pandemic, not the “imagined” threat of a terrorist attack. Several cast biosecurity as a distinctively American obsession. Fauci later quipped: “Nature is the real terrorist” (Fauci 2012).

A week later, the American Society for Microbiology held an ad hoc session on the H5N1 affair at their annual biosecurity meeting. Fouchier led the session, speaking to a packed house. He began by chastising the media for misrepresenting his research and the NSABB for contributing to misrepresentation by delaying publication. He put up a PowerPoint slide stating that the engineered strain of H5N1 was not as contagious or virulent as everyone seemed to think. The slide reported that none of the ferrets infected via air had actually died. Following Fouchier, Fauci told a now-stunned audience that clarifications made at the WHO meeting diminished the sense of immediate danger associated with the research. He announced that he was asking the researchers to rewrite their papers to include statements explaining why engineering H5N1 is vital to public health. He said he would be asking the NSABB to reconvene to consider the rewritten manuscripts.

ScienceInsider solicited reactions from NSABB members (Cohen and Malakoff 2012). Most expressed confusion. Having interviewed Fouchier and Kawaoka during their initial deliberations, they had believed that the engineered strains were deadly. They also expressed wariness about the rewriting: clarification of the putative benefits—even in light of a revised assessment of virulence—would not change the fact that the researchers had expanded the virus’s host range. Three weeks later, having read the revised manuscripts, the NSABB reconvened.
Following the meeting, the NSABB announced that members unanimously recommended full publication of the Kawaoka paper and that a majority also recommended full publication of the Fouchier paper (NSABB 2012). Without elaboration, the announcement explained that the research did not “immediately enable misuse of the research” and that knowledge of these specific sequences may “improve international surveillance.”

It is striking that for everyone involved, evil—whether framed as potential pandemics or potential nefarious actors—was resolutely in the world. All sides stressed their support for gain of function research. The only question was whether publication equips catastrophe or helps prevent it. Neither the institutional conditions of scientific practice nor their formative effects were formally considered as part of the problem. No one asked publicly
when it might be safe to remove the armed guard that had been posted outside Fouchier’s lab.

US Biosecurity Policy: From Dual Use to De-Idealization

In 2004, in the swirl of post-9/11 US politics, the National Academy of Sciences published a report titled “Biotechnology Research in an Age of Terrorism” and referred to as “the Fink Report” (National Research Council 2004). Arguably, the Fink Report’s most consequential legacy was that it placed the notion of dual use at the center of thinking about biotechnology and its contemporary significance. It also introduced a shift in the moral sensibility attached to the term. Dual use had been coined in military settings in the early twentieth century to describe the movement of technologies from military to civilian settings. After World War II, the term was connected to the legacy of nuclear capabilities and given a distinctive moral cast: nuclear technologies, and the physics that made them possible, could be used for “peaceful” purposes. The term was linked to notions of good and evil in order to persuade the world that technologies designed to kill could be used to foster life.

The Fink Report signaled a semantic, political, and ethical reversal. In the report, dual use named the underdetermined character of any biotechnology in its instrumental dimensions. The question was what to make of that underdetermination in a post-9/11 world indelibly marked by terrorism. Increases in biological capacity, in short, were figured as increases in nefarious actors’ ability to act: technologies of life might be used to kill. The additional twist was that the potential benefits of scientific research were procedurally taken for granted. It was the nefarious “misuses” that needed to be given policy consideration. The Fink Report thus installed a double moralism at the heart of US biosecurity: research used for its intended purposes could be treated as good, and dangerous outcomes could be indexed to purposes that researchers did not intend; that is, the intentions of bad actors.
On the day the NSABB announced its revised assessment of the H5N1 papers, the US government quietly released new guidelines for the “Oversight of Life Sciences Dual Use Research of Concern.” The guidelines connected the logic of dual use with a program of prior review originally recommended in the Fink Report. “The purpose of the policy is to establish regular review of United States Government funded or conducted research with certain high-consequence pathogens and toxins for its potential to be dual use research of concern (DURC)” (US Government 2012).


The NIH and other agencies were instructed to review all current research and to recommend, on a case-by-case basis, any mitigation—from restriction of publication to defunding—that they deemed prudent. The new policy did not require evaluation of any claims about the benefits of research (though this was later recommended). In that sense it did not require any serious change in posture of scientists or policy makers toward the potential futures opened up by biological research. The policy, to put it differently, further consolidated and operationalized the moral simplifications of malice. In this sense, it introduced another pernicious element to an already fraught situation. By enshrining the presumption and adequacy of good intentions it operationally ruled out ethical complexity and fundamentally ruled out reality.

In this light, US biosecurity policy actually makes things more dangerous. Any possible improvement in the situation requires posing the question of how the creation of new technical capacities generates situations whose ethical contours are not obvious. It thus raises the question of how scientists and others closely involved might need to work on themselves. Such work cannot be carried out without “de-idealization” (Rabinow and Bennett 2012). De-idealization is the first step toward an ethical pragmatism leading to growth in ethical capacity. Such pragmatism entails testing, evaluating, and understanding the limits of one’s findings as well as one’s self. It is a vital component of any scientific ethos worthy of the name, and it is a component that remains out of reach in situations formed by the figure of dual use (Rabinow and Bennett 2012). It bears noting that over the course of the H5N1 affair two openings to ethical pragmatism briefly showed themselves.
First, for some members of the NSABB a second question beyond dual use remained pertinent: the extent to which scientists “as members of the general public” have a “primary responsibility ‘to do no harm’ as well as to act prudently and with some humility” in view of the power of the life sciences (Berns et al. 2012). Second, in conversations concerning guidelines for future NIH funding of H5N1 research, government advisors asked whether researchers should have to justify claims regarding the public health benefits of their work. The former was ultimately washed out by polemics. The latter ultimately failed to take de-idealization seriously.

Risk, Uncertainty, and Underdetermination

Despite talk of risk on all sides, what was really in play in the H5N1 affair were competing attributions of danger and the modes of intervention appropriate to them (or not). To this extent, Mary Douglas’s diagnosis of risk and taboo remains apt (1966): dangers get coupled with blame as a means of generating certainty—certainty concerning which class of problems we should be most worried about. Added to the politics of blame is a figure of the biologist as technical expert who not only seems to provide uniquely powerful descriptions of the future (“equipped with these sequences we’ll know what to surveil for”), but whose good intentions operationally introduce a measure of certainty into an uncertain problem space.

The politics of blame elides both the question of ethical pragmatism and future uncertainty. It thereby excludes serious examination of the ways in which the actors involved (scientists, funders, policy makers, ethicists, and others) are actively contributing to the substance and form of the very futures under consideration. Loose talk of risk (or of danger or of uncertainty) never gets ethically problematized. That task is operationally curtailed by the presumption of the instrumental worth of viral engineering. Certainty about the relation between good and bad actors on the one side, and certainty about the finite set of genetic possibilities on the other, unmoors consideration of what might be called ethical uncertainty from talk of risk, leaving it to the side while connecting risk to fights over good intentions and government interference.

US biosecurity policy contributes to a strange ethical amalgam, one in which a post-Reformation mode of subjectivation reduces ethics to intentions, and a positivist one reduces evil to perpetual amelioration. Because neither can really be questioned within current frameworks, “intentions,” “uses,” and “risks” persist as the only diacritics of danger.
One step forward, indicated by Samimian-Darash and Rabinow, is to think more seriously about the difference between risk and uncertainty, and the various problems and modes attendant to each.3 Consonant with that step, Luhmann clarifies that uncertainty is a question of relative likelihoods, whereas risk is a question of relative losses. The trouble in the case of the H5N1 affair is that uncertainty and risk have been bound together and made part of the same problem, technologies, and mode of subjectivation. With regard to the problem, they have been linked together through the imperative of instrumentality. Use appears as the threshold of an ever-receding horizon across which risk might be convertible into loss and uncertainty into certainty. Risk and uncertainty get pulled into a single ratio that operationally proceeds as if probabilities will eventually be zeroed out either through prevention or catastrophe, at which point things become certain and losses are simply added up.

142 / Gaymon Bennett

With regard to technologies, risk and uncertainty get linked to older regimes of calculation and control. This has taken the form of trying to institute regimes of prior review and post hoc redaction—technologies of prevention that rationalize decision making. The difficulty, as Luhmann (and others) points out, is that potential dangers can rarely be calculated and are never fully controlled. In the case of H5N1, the publication of Fouchier's genetic sequences makes the reproduction of his work significantly easier (Osterholm et al. 2012). Risk in the end is a moving target. Hence, as Samimian-Darash (2013) has explained, the appearance of technologies for the management of uncertainty. For Luhmann all of this is a function of the fact that the world is becoming more complex and more contingent. In the case of biosecurity, it is also a function of the fact that ethical complications are ruled out in advance—a disconcerting effect of policy given that those complications otherwise lie in plain sight. Most problematic, risk and uncertainty have been collapsed into a single mode of subjectivation, one in which technical expertise (including technical criticism) is the only one that counts. This mode cannot put itself in question except by calling for more and better technical expertise. Technical expertise is thus buffered against malice: actors with good intentions can calculate and decide as well as those with bad ones as long as their training is sound. In an effort to reintroduce the question of truth and ethics into the problem of biosecurity, Paul Rabinow, Anthony Stavrianakis, and I have proposed a third distinction alongside risk and uncertainty: underdetermination (Rabinow and Bennett 2012; Rabinow and Stavrianakis 2013). The term is derived from John Dewey's notion of indeterminacy (Dewey 2005). Indeterminacy describes a situation of breakdown that occasions thinking—a breakdown of signification, meaning, and coherence.
In scientific settings, indeterminacy has ontological ramifications: when signification, meaning, and coherence become indeterminate, scientists cannot reflect on the status of the objects and capacities they bring into the world. Indeterminacy requires moving, through inquiry, from lesser to greater determination, determining enough about a problematic situation to diagnose the problem, gain conceptual clarification, and make warranted judgments about whether and how new capacities should be cultivated. Our approach, at times, has been deliberately unfaithful to Dewey. Dewey usually refers to situations of ethical breakdown in terms of discordancy—the challenge of moving from greater to lesser discord. But in scientific situations, the movement from lesser to greater determination is itself ethical.


Any attempt to bring new objects into the world requires thinking through the circumstances and capacities involved in their creation. In this spirit, our question is: What capacities, including ethical capacities, do engineers of biology actually cultivate as a means of bringing new objects into the world? Such objects include things like newly engineered strains of avian flu. They also include narratives about the salvational potential of science and the institutionalization of new bioethical imaginaries. One warrant for a conceptual shift from risk and uncertainty to underdetermination is the need for moving beyond decision making as the primary locus of political and ethical concern, which, as Luhmann describes, it has become under the sign of risk. In situations where the development of new capacities is in play, focus on decision making alone (e.g., deciding which technologies pose the risk of dual use) constitutes a bureaucratic reduction. The NSABB’s reduction of the question of security to the question of whether or not to publish exemplifies both the exigency and limitations of decision making. The reduction of ethics to decisions once again screens out the formational question of truth and subjectivity. Beginning with underdetermination, we have experimented with an alternative approach to what has been called technologies of preparedness. Unlike technologies that control risk, and in a fashion consistent with the management of uncertainty, technologies of preparedness are predicated on the idea that dangerous futures are too singular for probabilistic calculation (Lakoff and Collier 2008; Samimian-Darash 2013). Preparedness escapes some of the traps of the risk formula. When prognostications fail, experts can no longer simply adjust their talking points by appealing to the banal fact that the future can always be otherwise. They cannot simply adjust, because their own capacities are made part of the problem. 
Talk of preparedness has become ubiquitous in bureaucratic settings. In these settings, technical expertise again tends to be the only mode and form of subjectivity that counts. This tendency covers over the fact that preparedness has a deeper history in philosophy and theology (Foucault 2005). For much of that history, preparedness has been directly linked to how one works on one's own character in the face of uncertain futures. The Greek term for preparedness—paraskeuē—can be translated equally well with the English word "equipment": the conceptual and pragmatic tools needed for carrying out the ethical work of capacity building (Rabinow and Bennett 2010). The work of designing contemporary equipment is blocked in the current situation of biosecurity. It is blocked because risk, communication, and malice remain dominant. Flahaut reminds us that historically, self-idealization has led to self-righteousness, self-justification, and associated complacency. It operates on the presumption that evil is elsewhere. In the situation of H5N1, dual use trains scientists to recognize malice in the world and denounce it in good conscience. Self-idealization is institutionalized and works against self-improvement. It is only the other who needs policing or improvement. The task and challenge is to undertake the inquiry needed to clarify these institutional dynamics, show their limitations, and design the ethical equipment needed to open up new contemporary possibilities.

PART THREE

Environment and Health

CHAPTER EIGHT

What Is a Horizon? Navigating Thresholds in Climate Change Uncertainty

ADRIANA PETRYNA

Problem of Blindsidedness

The influence of the French philosopher Gaston Bachelard on Georges Canguilhem's and Michel Foucault's critical concept work in modernity is widely acknowledged, but his notion of the phenomenotechnique as it applies to observing the birth of new sciences has had a somewhat mixed reception (Rheinberger 2010).1 Almost fifty years after the phenomenotechnique's first appearance in a short article on microphysics in 1931 (Castelão-Lawless 1995), it found its way into Laboratory Life (Latour and Woolgar 1979), helping to illustrate how "merely technical" objects such as laboratory graphs and diagrams became materialized theories, and highlighting the role of technical activity not just in describing biochemical entities, but in bringing them about. Because the phenomenotechnique is "part thing and part theorem" (Rheinberger 2010, 27), a single or unique product is not always guaranteed.2 The phenomenon of combustion, Bachelard would note, is very different for a pyrotechnician than it is for an astronomer who studies star formation, or for a poet for that matter, whose "image of fire could clog knowledge of electricity" (1964). Each employs a different science or mode of conjuring related but fundamentally different objects. This is no mere eye-of-the-beholder view of fire, but an observation about the eccentric "phenomenotechnics" of the phenomenotechnique: differences themselves wreck prior assumptions and can accumulate an array of destabilizing unknowns. Knowledge of fire, for example, is continually undermined by what stands between the certainties of the pyrotechnician, the astronomer, and the poet. The phenomenotechnique of fire, then, entails the troubling implication that we live alongside some fourth way of knowing, in which certain phenomenotechniques might escape perception;
and some fifth, in which they do not or cannot credibly exist. How shall we imagine ourselves in this retreating horizon of sorts—are we being pushed toward some edge of perception, or of extinction? Or might we begin to imagine existence as mimicking, say, life in a lake? The founder of aquatic ecosystem science, American ecologist Stephen Forbes (1887), once mused this way on the properties of lake life: "The animals of such a body of water are, as a whole, remarkably isolated—closely related among themselves in all their interests, but so far independent of the land about them that if every terrestrial animal were suddenly annihilated, it would doubtless be long before the general multitude of the inhabitants of the lake would feel the effects of this event in any important way." Forbes highlights an uncertain time trajectory from a sense of total isolation to total surprise. In contemporary ecology, this period is called a regime shift or a "rapid modification of ecosystem organization and dynamics, with prolonged consequences" (Carpenter 2003). In this chapter, I consider this uncertain time trajectory in the context of the sciences of climate change. I will suggest that there is no cumulative scientific strategy or overarching "techno-epistemic product" (Rheinberger 2010, 27) that can reliably attune contemporary knowledge of climate change with its potential magnitudes of peril. Current assemblies of models and tools are not robust enough on their own to explain apparently idiosyncratic ecosystemic behaviors or to forecast regime shifts (NAS 2013). The technophenomena and the phenomenotechnique of climate change are not always in sync; this nonconcurrence generates doubt about the generalizability of empirical findings or of the relevance of findings that certain theoretical models select for. Each can play catch-up with the other.
But they can also break each other down, one through a heavy-handed technocentrism unwilling to engage seemingly chaotic patterns and the other through a complexity that overwhelms expert reason. To give an example of this breakdown: I lost hearing in my left ear following a ministroke in 2009. Doctors told me that I would never recover it as the auditory nerve, once damaged, does not repair itself. Two months later, to the surprise of every specialist I knew, my ability to hear returned; my medical problem apparently resolved itself, by itself, that is, "idiosyncratically." When I asked a respected ear doctor how this happened, he quipped, "If I knew, I could build a better hearing aid and sell it to you!" Gaps between knowledge and practice abound in medicine such that some medical situations will remain, for better or worse, inscrutable. Indeed, there is a kind of dark matter existing between the phenomenotechnique and the technophenomenon of biomedicine: actionable knowledge that must be there, but doesn't really exist until someone can summon it into existence by an appropriate way of knowing it. Tools are required to challenge conventional ways of looking at bodies and ecosystems and to make new attempts at resolving their apparent physical incoherence. Such tools cannot exist, however, without knowledge of the mechanism of the thing they seek to address: in the case of hearing loss, better knowledge of the brain's circuitry and plasticity could lead to better tools, or, at the very least, to selling me something. A mysterious recovery marks an "empty" space that really is not empty at all. Rather, it operates without expert intervention for a very long period of time. What are the dimensions of this time? What measures can be introduced to turn this very long time into an operative time or an actionable borrowed time? Such are the questions that Forbes's blindsided fish and I have for expertise. Similar questions can be applied to broader ecological scales and with respect to the temporality of climate change. A 2013 report by the National Academy of Sciences, for example, called for massive strategic investments into a new science of "critical transitions" that could identify signs thought to portend potentially catastrophic changes in a variety of vulnerable ecosystems. While much progress has been made in predicting sea level rise and tracking the paths of hurricanes, the potential for surprising regime shifts is now recognized as very real.
According to the US National Oceanic and Atmospheric Administration (NOAA), which is charged with tracking atmospheric and oceanic changes, some of which are linked to climate change, 2011 was the costliest year on record ($44 billion) in terms of losses linked to droughts, wildfires, hurricanes, and tornadoes, and "other natural disasters were more severe, longer, more frequent and less predictable than in the past."3 Environmental scientist and marine ecologist Jane Lubchenco noted they were "harbinger[s] of things to come for at least a subset of those extreme events that we are tallying."4 The message from scientists and policy makers alike is that we need a radically new approach to risk: nature's old adaptive resiliencies (in the form of barrier islands, for example) are no longer reliable in the face of pressure (Lubchenco and Hayes 2012; Hughes et al. 2012), and a radically new map of critical transitions and potential imminent threats is required. Yet deploying more monitoring (of atmospheric greenhouse gas emissions, for example) or satellites is not enough. According to the report's principal author, geoscientist James White, scientists must address "areas of observation where we are largely blind. . . . As a scientist, my hope is that we can study the planet well enough, monitor it well enough, understand it well enough, that we're not going to be blindsided. As a realist, I'm pretty sure we're going to be blindsided." The likelihood of blindsidedness, White admits, is now a structural feature of the sciences of climate change.5 Anthropologically speaking, blindsidedness is a state in which sudden and undeniable realities such as those cited here strike individuals or groups who have no time to craft or recover tools for guaranteeing business as usual or survival. Blindsidedness is a manifestation of extreme isolation and lost time. In this chapter I will aim to address such senses of scientific isolation through a concept of the horizon and interrogate the place of (borrowed) time in the politics of the environment amid global climate change. I explore emerging scientific and policy practices meant to monitor increasingly unpredictable ecosystemic behaviors and to inform prospective thinking about how natural systems and societal infrastructures might adapt to imminent dangers and so-called tipping points (Scheffer et al. 2001). I show how horizons become tools of optimization and potential knowledge transfer within apparently physically incoherent earth systems. The notion of horizon I propose here is a kind of contemporary equipment for monitoring, managing, and sometimes mitigating complex futures that may arise (Rabinow 2009). The horizon also sets a new stage for contemplating relations between nonparametric systems and a social science of survival. I hope that such thinking serves as an antidote to a public increasingly enamored with magic-bullet solutions to the problem of climate change. In the conclusion I explore frameworks of prevention and biomanipulation that can be recovered to counteract pressures to geoengineer.

Sympathizing with Fire

At the core of Bachelard's phenomenotechnique concept is a powerful sympathy with the image of fire (1964), perhaps made even more powerful by the fact that urban conflagrations were once a "central fact of urban life" and arguably the climate change of the nineteenth century (Bankoff, Lubken, and Sand 2012, 5). While the fires that once raged across European and American cities had, for the most part, been successfully reduced to "occasional and isolated threats" (ibid.), today's long-term drought conditions have revived fire as a singular deadly threat, particularly in the American West and Southwest, where wildfires have become rampant and confront frontline emergency service workers with behaviors that fall outside the range of normal experience.6 In the context of escalating climate threats, bridging knowledge and practice becomes ever more demanding. Public policy expert Howard Kunreuther and colleagues underscore this point: "The scientific understanding of climate change and its impacts has increased dramatically in recent years, but several interacting sources of uncertainty mean that future climate change and its impacts will not be known with precision for the foreseeable future" (2013, 1). When predictive models increasingly lack robustness (O'Reilly, Oreskes, and Oppenheimer 2012), instruments of policy making that quantify risk and uncertainty become crude and unreliable; averting catastrophic risk becomes a matter of luck. But the chances of such luck are diminishing fast. The possibility of a doubling of carbon dioxide levels in the next few decades is already causing significant disruption. According to physicist-engineer Robert Socolow, we know that "the risks are now and the risks get greater the longer we don't act." But scientists are unsure of at least two things: "where the actual line is that we must not cross" and "how the whole system is going to react to excess CO2 in the atmosphere" (2010). Where is the line? And how do we know we have crossed it? In the wake of Hurricane Sandy in October 2012, Marcia McNutt, the former director of the US Geological Survey, characterized the slide into exponential uncertainty like this:

It is not the gradual rise of sea level that is going to get anyone. It is the combination of extreme events superimposed on that gradual rise. And those extreme events are destroying our natural protection and offering much less protection for future storms. So we have already crossed a threshold even though sea level has not risen a lot between last October and now. Superstorm Sandy was a threshold, and we crossed it.7

McNutt suggests that risk is not a problem of modeling, but of navigating superimposing events that cannot be taken in isolation and whose temporal and physical natures can change (from the gradual to the sudden). This sets the stage for surprises that our present tools are simply unable to manage (Carpenter 2003).8 Once a threshold is crossed, what (alternate states) exist on the other side? The recent idiosyncratic slowdown in the increase of the earth's surface temperature, "even as greenhouse gases have accumulated in the atmosphere at a record pace" (Gillis 2013), poses yet another problem of uncertainty and its (circum)navigation. Is this slowdown mere luck or a sign that an ecosystem has already surpassed some critical edge? Superstorm Sandy showed that different mechanisms and variables can act simultaneously, but discernment of their relative importance remains a challenge. This ambiguity over mechanisms circles us back to scientific isolation and confusion over the actual phenomenon and its unfoldings over time.9 The superimposition of imprecise forecasting on actual events suggests that there are limits to the actuarial performance of risk. Given these limits, reinsurers call climate change a "bad risk," with little return on investment. "The private sector . . . only handles a good risk," notes Peter Thomas, the chief risk officer of Willis Re (a major reinsurer). This making of a good risk out of the morass of bad or hopeless risks has a history: "In 1835, in Rhode Island, someone built a state-of-the-art fire-resistant factory and said to insurers, 'I'd like to get credit for this fire rating,' and he did and the factory had better loss prevention. . . . 19th-century Americans knew that they were living in a flammable world. They were not confused. A fire cyclone covered California to Pennsylvania, most of the U.S. . . . Fire is like climate change. But if I cannot get a rate of return on it then I am not going to do it. Looking back at fire, only political will made change possible in a world of wood."10

At an environmental policy conference in Washington, DC, in 2013, Federal Emergency Management Agency (FEMA) administrator Craig Fugate echoed the theme of bad versus good risks and the "political will" that is required in the face of potentially uninsurable risk. Throughout his tenure at FEMA, Fugate has repeatedly expressed frustration with the US government subsidizing flood risks below market rates for residents choosing to live in expanding flood-prone areas (especially along coastlines).
He noted that these are the areas that insurers have turned their backs on, "saying, we can no longer do that." Yet, according to Fugate, "insurance companies are not evil, they have a job to do: to manage risk in such a way that they can provide coverage at a rate which they paid out less than they take in so that money is generated to pay shareholders."11 Fugate suggested that the government should follow suit for people who "on a day-to-day basis, on a year-to-year basis, live in a below 100-year flood zone." Fugate noted their extreme confusion and denial over the timing of floods: "I have no idea how we ever started talking about a 100-year bet that only happens every couple of months! Your chance of buying a winning lottery ticket is less than being hit by a flood. But most people go out and buy lottery tickets and you would be surprised how many people don't buy flood insurance." What is climate change, then? Should it be treated more like a nineteenth-century problem of fire containment or more like a game of chance (because chance is all there is)? With fire, people "were not confused" about where the line between reason and chance stood. Yet for twenty-first-century confused underestimators, that line is being drawn for us. Thus Fugate implored the policy makers and laypeople listening to him to scrub the word "victim" from their vocabulary: "I am asking you to replace that word with survivor; we should all just start calling ourselves survivors." Though much of Fugate's frustration is well-founded, his discourse suggests that people, and not the floods, have become the bad risk that no entity, private or public, wants to insure. Such are some of the practices meant to fill gaps between scientific knowledge and policy practice. In what follows, I return to the theme of horizons and of what happens once certain thresholds are crossed via a vignette of a scientist who first took up ecosystemic tipping points as an empirical and applied resource management problem. Small experiments like those noted in the next section are being scaled up to identify "early warning signals" of tipping points for coral reefs, acidifying oceans, deforestation, and desertification.
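Fugate's "100-year bet" rests on arithmetic worth making explicit: a "100-year" flood is one with a 1 percent chance of occurring in any given year, not one scheduled once a century, so the probability of experiencing at least one flood compounds over time. A minimal sketch (the 1 percent figure is the standard definition; the thirty-year window, roughly a mortgage term, is an illustrative assumption):

```python
def prob_at_least_one_flood(annual_prob, years):
    """Chance of at least one flood across `years` independent years."""
    return 1 - (1 - annual_prob) ** years

# A "100-year" flood has a 1% chance in any given year.
# Over a 30-year mortgage, the odds accumulate:
p30 = prob_at_least_one_flood(0.01, 30)
print(f"{p30:.0%}")  # prints "26%"
```

A roughly one-in-four chance over thirty years is far better odds than any lottery ticket, which is exactly the confusion of the underestimators Fugate describes.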

Crossing Tipping Points

May 8, 2013. Marten Scheffer, a Dutch lake ecologist, picks up and jiggles a flask of Daphnia, one of hundreds filled with zooplankton and cyanobacteria, sitting on white shelves in his laboratory at Wageningen University in the Netherlands. We have established a great rapport in the last two hours. "Daphnia," he tells me. "Water fleas?" I respond, before we walk toward a piece of laboratory equipment. It is a "microcosm," a small steel-enclosed, climate- and light-controlled lake micro-ecosystem, "the ICU of water fleas," he says. Scheffer is world-renowned for his experimental testing of tipping points, or points at which ecosystems such as lakes, oceans, and Arctic ice sheets lose resilience. First showing their operations in shallow lakes, he turned turbid lakes into clear ones with nonobvious tools. He is also involved in experimental testing of early warning systems that can help identify when ecosystems are near such points. The steel microcosm contains a complete physical system with a laser beam and a little algae-fluid container whose dynamics, he said, "we like to think of as mimicking the dynamics of the Arctic ice sheet." What is being modeled here is not an ice sheet but something more general: positive feedback. Setting the stage for the actual experiment, an initial "classical competition experiment" will yield pairs of algae species that, in the language of dynamical systems, create alternate equilibrium states. Those locked-in states will be starting points for testing predictions for tipping points. With tiny perturbations (specifically, temperature increases), the fluid becomes darker; when cooled, it becomes clearer. When Arctic ice melts, darker exposed surfaces cause temperature increases; this is a positive feedback called the albedo effect. By remotely and gently pushing these fleas toward extinction under controlled conditions, empirically traceable autocorrelations and "flickerings," in which a perturbed ecosystem takes longer and longer to recover, will lend support to the theory of generic early warning signals. Tiny test perturbations in the microcosm may be "done with little risk of causing the actual transition. For large, complex systems, it will often be difficult to systematically test recovery rates" (Scheffer 2010, 411–412). Mathematically oriented ecologists like Scheffer are rapidly building up "toolkits" for recognizing key characteristics of such processes (Scheffer et al. 2009; Lenton et al. 2008, 2013). Forming a network of scientists working in Europe, Australia, and the United States, they have theorized ecosystems in a nonequilibrium that is potentially chaotic, moving from one (alternate) stable state to another, or collapsing. Whether analyzing conservation biology, the coral reef collapse, the Arctic albedo effect, or the collapse of ocean thermohaline circulation (the ocean's "conveyor belt"), their "tipping point" lexicon unites them in collaborative work and focus on policy action and change. Indeed, the phenomenon of abrupt shifts without signs of any apparent proximate causes is not new. The past and present include evidence of many such shifts. The Sahara Desert, for instance, was made up of numerous wetlands 6,000 years ago until it drastically switched to a desertlike condition. The Caribbean coral reef previously recovered quickly from excessive fertilizer runoff until it no longer did (the overgrowth of algae caused by nitrogen loading has prevented the settlement of coral larvae, among other things). More current examples of abrupt shift include the potential shutdown of the Atlantic thermohaline circulation, the dieback of the Amazon rainforest and boreal forests, and the rapid decay of the Greenland ice sheet.
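The "autocorrelations" and "flickerings" in these toolkits can be illustrated with a toy time series. Near a tipping point a system recovers more slowly from small perturbations, and that slowing shows up statistically as rising lag-1 autocorrelation. The sketch below is purely illustrative; the AR(1) dynamics, recovery rates, and series lengths are invented for demonstration and are not drawn from Scheffer's experiments:

```python
import random

def simulate(n, recovery):
    """Toy ecosystem: each step the state decays back toward equilibrium
    at rate `recovery` and is kicked by random noise (an AR(1) process)."""
    x, series = 0.0, []
    for _ in range(n):
        x = (1 - recovery) * x + random.gauss(0, 1)
        series.append(x)
    return series

def lag1_autocorr(series):
    """Correlation between the series and itself shifted by one step."""
    m = sum(series) / len(series)
    num = sum((series[t] - m) * (series[t + 1] - m) for t in range(len(series) - 1))
    den = sum((v - m) ** 2 for v in series)
    return num / den

random.seed(0)
healthy = lag1_autocorr(simulate(5000, recovery=0.8))   # fast recovery
stressed = lag1_autocorr(simulate(5000, recovery=0.1))  # slow recovery, near a tipping point
print(round(healthy, 2), round(stressed, 2))  # the stressed value is far higher
```

As the recovery rate weakens, each value carries most of the previous one forward and the estimated autocorrelation climbs toward 1; in the early-warning literature this rise is the statistical signature of critical slowing down.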
Like vanishing points, tipping points have become epistemologically charged constructs that mark a hiatus of expert knowledge about what alternate states exist or coexist once certain thresholds are crossed, and the forms futures will take. While some of these ecosystems lack "convincingly established" tipping points (Lenton et al. 2008), recent abrupt and nonlinear changes have prompted researchers to expand their toolkits for making sense of how close we may be to irreversible shifts (Rockström et al. 2009). Importantly, the work of these ocean and lake ecologists, Earth-system and environmental scientists, and engineers diverges from an era of climate science in which scientific assessments of climate challenges are overly conservative and become "larded with caveats" (Brysse, Oreskes, O'Reilly, and Oppenheimer 2013, 330; Hansen 2007, 2), or in which every new disaster is regarded as the latest wake-up call. Terms such as "regime shifting" and "alternative stable states" suggest complex states whose dynamics, we might say, have been poorly horizoned so far.

Horizoning Work

Across time, people have used the horizon concept as a strategic point of reference in the navigation of different kinds of physically incoherent worlds. The word "horizon" derives from the ancient Greek ὁρίζω (horizō), meaning "I mark out a boundary," and from ὅρος (horos), "boundary" or "landmark." Renaissance architects used "horizon lines" to properly orient objects in three-dimensional space. Early modern surveyors devised mercury-filled "artificial horizons" to create an image of a level surface against which the "inconstancy of the terrestrial horizon" could be judged (Thomas 2004, 21). Today, robotics engineers encode "predictive horizons" in remote machines (such as extraterrestrial rovers) to allow for autonomous self-correction in the navigation of craterous conditions on Mars. In meeting such course-plotting challenges, data from the past is useful, but only up to a point. And precisely when data is no longer useful and prediction capability derived from past or present information becomes misleading (or yields high computational cost or instability), a new predictive horizon must be put into place (Parunak et al. 2008). As these examples suggest, horizoning work is a distinct kind of intellectual labor undertaken in conditions in which the fate of entire systems is at stake. Horizons "enfunction" uncertainty in a space of multiple parameters and provide an alternative heuristics to risk-management techniques, such as scenario planning, which have thus far failed to provide accurate depictions of risk or even clarity on decision making due to a variety of political and scientific constraints.12 Horizoning work involves the testing and assembly of empirical tools and appropriate "scaling rules" (Griffen and Drake 2009) for recognizing and "maintaining a safe distance from dangerous thresholds" (Rockström et al. 2009, cited in Hughes et al. 2012, 6).
It demarcates or "increments" using known parameters, but it is also a practice of continuous self-correction vis-à-vis changing baselines of vulnerability and knowable risk. In most extreme conditions, horizoning work entails a fine-tuned awareness of a system's exposure to jeopardy, without which navigators will inevitably be "flying blind." Here, horizons are not open-ended or metaphysical, as in Caspar David Friedrich's famous nineteenth-century painting "Wanderer above the Sea of Fog," but have a value and "length" assigned to them depending on their ability to retool computational complexity and make it temporarily usable within a particular human or technical frame. Horizoning defines increments of action amid multidimensional uncertainties. In doing so, it "makes good" on faulty or fleeting information, thus allowing movement forward or preventing a crash (or disappearance) of an entire system (Verma, Langford, and Simmons 2001).13 The fact that entire trajectories, machines, or worlds may be at stake is precisely what makes horizons so real. There are plenty more examples of the arts of human interference amid physical incoherence—from aerospace guidance to the control of gear chatter, large beam deflection, and fluid motion. These phenomena contain unobservable and "discrete systemic modes" but are somehow given direction—lest they destabilize into an "exponential divergence of trajectories," or chaos. What horizoning work imparts is not a better projection of risk or uncertainty, but real cognizance of jeopardy to whole systems and potential for recovering borrowed time in which such cognizance can be made actionable, not obsolete. The system to be stabilized takes precedence over the object of uncertainty to be known. Discrete moments of problem solving coalesce and build trajectories, representing best attempts to transform the "logos" of a problem into an ethos of solving (Rabinow 2009).
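The rover's "predictive horizon" can be sketched as a toy receding-horizon (model-predictive) controller. Nothing here comes from actual flight software; the one-dimensional dynamics, cost function, control set, and disturbance are all invented for illustration. The essential move is that only the first step of each finite-horizon plan is executed before the whole plan is recomputed:

```python
from itertools import product

def plan(x, setpoint, horizon, controls=(-1.0, 0.0, 1.0)):
    """Search every control sequence of length `horizon`, score each by the
    predicted distance to the setpoint, and return only the FIRST control of
    the best sequence; the rest of the plan is discarded and recomputed later."""
    best_u, best_cost = 0.0, float("inf")
    for seq in product(controls, repeat=horizon):
        xp, cost = x, 0.0
        for u in seq:
            xp += u                      # simple model: position += control
            cost += abs(xp - setpoint)   # penalize being far from the target
        if cost < best_cost:
            best_cost, best_u = cost, seq[0]
    return best_u

x, setpoint = 0.0, 5.0
trajectory = [x]
for step in range(12):
    x += plan(x, setpoint, horizon=3)
    if step == 6:
        x -= 4.0  # an unmodeled disturbance knocks the system off course
    trajectory.append(x)
print(trajectory)  # climbs to 5.0, is knocked back to 1.0, then climbs to 5.0 again
```

Because the plan is recomputed at every step, the disturbance at step 6 is absorbed: the controller simply re-plans from wherever it finds itself, the "autonomous self-correction" the passage describes.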

Stabilizing Strategies

Until 2010, much of the policy making on climate change revolved around holding global temperature rise at or below 2 degrees Celsius above preindustrial levels; this meant keeping the atmospheric concentration of greenhouse gases from exceeding 450 parts per million. While global temperatures continue to rise linearly, roughly following best estimates, sea levels are rising exponentially, some 60 percent faster than projected. In May 2013, levels of CO2 in the atmosphere broke through the symbolic threshold of 400 parts per million for the first time since “three to five million years ago—before modern humans existed” (Shukman 2013). Mitigation poses a series of challenges, and not just technological ones. In 2004, ecologist and evolutionary biologist Steven Pacala and physicist-engineer Robert Socolow offered the climate change “stabilization wedge” concept (see figure 8.1), which lays out a timetable of the costs and consequences of inaction on mitigating carbon emissions. They use a fifty-year time horizon, often employed because it is the typical span of a scientist’s career. Follow the upper paths of the triangle in figure 8.1, indicated by the upward-pointing arrows. This illustrates that maintaining the “business as

What Is a Horizon? / 157

Figure 8.1. Stabilization wedges, 2004. Credit: S. Pacala and R. Socolow, “Stabilization Wedges: Solving the Climate Problem for the Next 50 Years With Current Technologies,” Science 305, no. 5686 (August 13, 2004): 968–972.

usual” approach to climate change over the next fifty years results in emissions that double greenhouse gases. Follow the lower paths of the triangle, indicated by the downward-pointing arrows, which illustrate that, with aggressive efforts, the emissions rate will remain constant over the same time period. (Staying on this lower path is consistent with “beating doubling,” i.e., capping the atmospheric carbon dioxide concentration at below twice its preindustrial concentration.) The lines transecting the interior of the triangle make up the “wedges of stabilization,” each of which represents a discrete mitigation strategy (a wedge of vehicle fuel efficiency, a wedge of wind power, a wedge of avoided deforestation, etc.). Together the “wedges of stabilization” represent the number, variety, and magnitude of the strategies thought necessary for counteracting CO2 levels at a given point along the fifty-year time span.

Unfortunately, the changes proposed in 2004 never got off the ground, and by 2011, when Pacala and Socolow revisited their earlier projections, they were forced to expand the number of mitigation strategies to compensate for the unchecked increase in greenhouse gases in the intervening years. They added a second stabilization wedge above the first one in 2011 (see figure 8.2). Inertia, intransigence, and the politics of old energy technology got in the way, so much so that “nine wedges [we]re required to fill the stabilization triangle, instead of seven.” Today, according to a recent study, and given the emissions trajectory, “eliminating emissions over 50 years would require 19 wedges: 9 to stabilize emissions and an additional 10 to completely phase out emissions (see figure 8.3). And if historical, background rates of decarbonization falter, 12 ‘hidden’ wedges will also be necessary, bringing the total to a staggering 31 wedges” (Davis et al. 2012). By 2013, the wedge doubles over itself (see figure 8.3).

Figure 8.2. Stabilization wedges, 2011. Credit: R. Socolow, “Wedges Reaffirmed,” Climate Central, http://www.climatecentral.org/blogs/wedges-reaffirmed, accessed June 10, 2013.

Figure 8.3. Stabilization wedges, accumulating cost of delay, 2013. Credit: Steven J. Davis, Long Cao, Ken Caldeira, and Martin I. Hoffert, “Rethinking Wedges,” Environmental Research Letters 8, no. 1 (2013), http://iopscience.iop.org/1748-9326/8/1/011001/pdf/1748-9326_8_1_011001.pdf, accessed June 10, 2013.

The fifty-year horizon was meant to act as a hyperpragmatic and rational tool that reorients the policies and ethos of modern energy consumption. What is concretized instead is a climatologically costly trajectory of delay in which the idea of living on borrowed time is no mere metaphor. Equally real are the uncertainties about what kind of time reckoning is required. The wedge becomes a parable, of sorts, about a further unmooring from safety and danger. It begins to embody a situation not of recovering past CO2 levels but of an ever-receding horizon of possible recoverability.
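The bookkeeping behind the wedge image is simple triangle arithmetic, which can be made explicit. In the sketch below, the starting emissions rate and the assumption of a linear doubling are illustrative figures of my own, not numbers quoted in this chapter: holding linearly doubling emissions flat avoids a triangular quantity of carbon, and each unit wedge (ramping from zero to one gigaton of carbon per year over fifty years) fills twenty-five gigatons of it.

```python
def stabilization_triangle(start_rate, years=50, wedge_end_rate=1.0):
    """Avoided emissions (GtC) if a linearly doubling business-as-usual
    path is held flat instead, and the number of unit wedges needed to
    fill that triangle. Illustrative arithmetic only."""
    avoided = start_rate * years / 2         # area between BAU ramp and flat path
    per_wedge = wedge_end_rate * years / 2   # one wedge ramps 0 -> 1 GtC/yr
    return avoided, avoided / per_wedge

# With an assumed starting rate of 7 GtC/yr, the triangle holds 175 GtC,
# or seven 25-GtC wedges; start later, from a higher rate, and the same
# geometry demands more wedges.
```

The arithmetic also makes the cost of delay legible: since the triangle’s area grows with the starting rate, every year of inaction raises the base of the triangle and hence the number of wedges required to fill it.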

Tipping Ecosystems Back

Horizons and the edges of extinction are themselves somewhat malleable, at least according to some scientists (marine ecologists, theoretical biologists, environmental scientists, and coral reef specialists, to name a few). As we have seen, incremental changes in “climate, nutrient loading, habitat fragmentation or biotic exploitation” can push an ecosystem toward a “tipping point,” but the dynamic of the actual transition into an alternate stable state is not well understood (Scheffer and Carpenter 2003). Little is known, too, about the nature of recovery in ecosystems that have already flipped. Recovery can backtrack (more or less) along the original gradient of its earlier decline, or it can require an entirely different strategy and trajectory. Marten Scheffer, the mathematical ecologist who demonstrated the existence of regime shifts in ecosystems, specifically in shallow Dutch lakes, also developed a way of “flipping back” those lakes into a recovered state.14 As he told me, Dutch inland lakes were once inundated with nitrogen and phosphates from local agricultural fertilizer runoff. Once-clear lakes had suddenly turned turbid—sooner or later the fate of all lakes serving as destinations for fertilizer runoff. Scheffer explained some of the challenges of finding a solution and how he finally came upon a successful biomanipulation strategy for an apparently unrecoverable, algae-covered, and hypoxic lake. “There was a theory around, but no one could really show how it worked.” It “resonated with an intuition that people had,” but no one could show it in practice, or how it scales. A fellow ecologist “had this intuition that once a lake would be clear with aquatic vegetation, that it would remain stable [i.e., clear], and that basically it could not be destabilized. The reason he thought this is because he had done experiments on small canals and ditches that he had compartmentalized and put lots of fertilizer into.
And those ditches never became turbid.” But ditches, dynamically speaking, are different from shallow lakes. According to Scheffer: “You cannot upscale from a little compartment of a


canal to a lake. In a canal those nutrients get absorbed very quickly. Also, there is no wind effect in ditches and there are no big fish. The situation is different in a lake. A lake will become destabilized.” Recovery can only take place within a given scale. Scheffer explains, “There are things you cannot downscale in ecosystems. A small system [like a canal or ditch] has many properties like a lake, but some essential properties it does not have, like the wind and wave effects and the effects of the big fish. So he would be right for ditches but not for lakes. When he would upscale that intuitively for lakes, that does not work.” Scheffer explained to me how he and colleagues developed a framework of recovery. For instance, the movements of predatory midsized fish kicked up sediment, prohibiting sunlight from reaching lake-bottom plants. This meant the plants could not grow or serve as hiding places for zooplankton, which filtered the water. The zooplankton were then exposed to the open predation of the fish, which meant that the lake would remain turbid. Given such perturbations, Scheffer showed that the lake’s recovery was not just a matter of removing the fertilizer. Fertilizer removal, the standard practice, always failed. For the lake to flip back required taking into account the dynamical systems [theory] of the lake’s ecology. Scheffer then had tens of thousands of fish, which kept the lake in a mucky alternate stable state, experimentally removed. The results were “quite spectacular . . . It was a recovery that was relatively cheap to do, easy to do, and quite spectacular, and there was a theory behind it . . . And so then you have all the elements for [the theory’s] rapid spread.” This method is now standard practice for saving shallow lakes from eutrophication. Tipping-point science creates, in Scheffer’s words, a “search image,” undermining other images of static equilibrium and linear change in conservation biology.
To date, “[s]cientists appear to have a very limited capacity to predict the behavior of nonlinear dynamic systems, including ecological systems” (Boyd 2012, 306). And they confront a singular paradox: “As understanding of these systems improves, uncertainty around system behavior tends to increase rather than decline.” Not only has “the uncertainty in the models used for prediction been underestimated” (ibid.), but so has uncertainty in the ecological phenomena themselves.15 This fact alone presents a stark ethical choice in science: to provide more research about the fact of uncertainty, or to provide reasonable schemes of navigation that make good on incomplete information while contending with the “large realm of the given and undeniable” (Geertz 1975, 7).
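The alternate stable states, and the failure of simple fertilizer removal, that Scheffer describes can be reproduced in miniature with a standard textbook bistable model (a generic grazing-type equation with arbitrary parameter values, not Scheffer’s published lake model). Sweeping a stress parameter up and then back down, the system ends up in different stable states at the same parameter value; this hysteresis is why merely reversing the driver does not flip the system back.

```python
# Generic bistable system: dx/dt = r*x*(1 - x/K) - c*x^2/(1 + x^2).
# A textbook model, used here only to exhibit hysteresis between
# alternate stable states; parameters are arbitrary.

def equilibrate(x0, c, r=1.0, K=10.0, dt=0.01, steps=20000):
    """Integrate to (near) equilibrium with forward Euler."""
    x = x0
    for _ in range(steps):
        x += dt * (r * x * (1.0 - x / K) - c * x * x / (1.0 + x * x))
        x = max(x, 1e-6)                      # keep the state positive
    return x

cs = [1.0 + 0.1 * i for i in range(21)]        # stress c: 1.0 -> 3.0
state = 8.0                                    # start in the high ("clear") state
up = []
for c in cs:                                   # slowly increasing stress
    state = equilibrate(state, c)
    up.append(state)
down = []
for c in reversed(cs):                         # then slowly decreasing stress
    state = equilibrate(state, c)
    down.append(state)
down.reverse()

# At intermediate stress (e.g., c = 2.0) the two sweeps sit on different
# stable states: lowering the stress back below the collapse point does
# not, by itself, restore the original state.
```

In lake terms, the up-sweep is eutrophication and collapse into turbidity, and the flat stretch of the down-sweep is the failure of fertilizer removal alone; an intervention such as Scheffer’s fish removal amounts to pushing the state across the unstable threshold rather than waiting for the parameter to retreat far enough.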


Bending the Curves Downward

In 2009, twenty-eight renowned Earth system and environmental scientists, including Nobel laureates and affiliates of the Stockholm Resilience Centre, attempted to define the state of knowledge, however ambiguous, with respect to nine key planetary “life support systems” (Rockström et al. 2009), each with critical thresholds beyond which there is a risk of “irreversible and abrupt environmental change.” Domains requiring urgent attention include the global hydrological cycle and stratospheric ozone layer as well as climate-carbon cycle feedbacks, biodiversity loss, ocean acidification, chemical emissions, land use, nitrogen and phosphorus inputs to the biosphere and oceans, and atmospheric aerosol loading. Three thresholds (linked to the climate-carbon cycle, biodiversity loss, and nitrogen inputs) have already been crossed. A well-publicized graph, representing the Earth system as a plot of time and space defined by major interlinked systems, each with its thresholds (figure 8.4), was meant to be a game changer. “Our proposed boundaries are rough, first estimates only, surrounded by large uncertainties and knowledge gaps” (ibid.). According to Rockström (2010), one of the graph’s architects, humanity operates “as if it were driving only on a dark straight highway.” But now, “surprise is universal. Systems tip over very rapidly, abruptly and often irreversibly. We have entered the great acceleration. Everything is accelerating, species loss, nitrogen, forest decline. We have to bend the curves downward. We have to recognize that systems have multiple stable states separated by thresholds. This is the decade when we have to bend the curves.” In this grand exercise of backcasting, “planetary boundaries” are not proven in advance but are a virtual marking off of borrowed time—“a window of opportunity to return to safer conditions before the new state eventually locks in and equilibrates” (Hughes et al. 2012, 149).
Is “stabilizing long-term concentrations of carbon dioxide at 350 parts per million the right target for avoiding dangerous interference with the climate system”? Is it desirable to “cap species extinctions at ten times the background rate, as is being advised” (Nature 2009, 448)? The planetary boundaries idea turns these scientific questions into second-order concerns as the “complexities of interconnected slow and fast processes and feedbacks in the Earth System provide humanity with a challenging paradox,” to protect “the resilience that enables planet Earth to stay within a state conducive to human development” or be lulled into a “false sense of security because incremental change can lead to the unexpected crossing of thresholds that


Figure 8.4. Planetary boundaries concept. Credit: Azote Images/Stockholm Resilience Centre.

drive the Earth System, or significant sub-systems, abruptly into states deleterious or even catastrophic to human well-being” (Rockström et al. 2009). Attempts to demarcate safe distances from alleged points of no return (like the problem of abrupt changes in ecosystems) are not new. Paul Ehrlich’s “population bomb” was arguably one such false horizon, fueling hysteria in the 1970s and 1980s over the prospect of mass starvation from overpopulation. To some skeptics, the planetary boundaries concept might look, in fact, like a grand exercise in hubris. Detractors claim that there are plenty of ecosystems that lack early warning signals (Boettiger, Ross, and Hastings 2013). Similar claims can be made about “extinction” processes in other systems. The precise location of the extinction boundaries separating the great geologic periods has confounded geologists (Fowell and Olsen 1993). Prediction of the time of imminent death, to allow referral of cancer-stricken humans to separate rooms, remains elusive (Hwang et al. 2013). In vulnerable species, long before physiological intolerance to high temperatures predominates, other proximate factors can cause extinction, as when hosts and


predators do not migrate at a similar pace in response to warming (Cahill et al. 2012, 7). The tipping points that presumably lurk in bodies, ecosystems, and geologic records are literally intangible. Rather, the trajectories of species or habitat survival unfold in a “borrowed time” and space (Hughes et al. 2012). How then should humans interfere with a physically incoherent earth system? Environmental ethicist Dale Jamieson, in Reason in a Dark Time (2014), notes that “early on in the climate change discussion there was a tripartite division of possible responses: prevention, mitigation, and adaptation.” Today the prevention piece (preventing further carbon emissions) “has largely dropped out” and has been replaced by geoengineering, such as the injection of sunlight-scattering sulphate aerosols into the stratosphere, a stopgap and untested mode of temporarily offsetting warming (Jamieson 2014, 201–202). Prevention should be “repurposed” by limiting concentrations of greenhouse gases, and, as I have argued here, better equipment must be sought for wresting actionable time from limited borrowed time (ibid., 201–202).

This chapter introduced the concept of horizoning work and explored its function in climate change policy and scientific research. Currently there is a gap between knowledge of climate change phenomena, with their interacting sources of uncertainty, and practical intervention. In this gap, some see sudden, dramatic, or irreversible processes that legitimate apocalyptic thinking or geoengineering as the key countervailing strategy. Others see complex spatial and temporal dynamics and abrupt transitions that have so far been poorly horizoned. Accomplishing “the complex task of facing the future” (Rabinow 2009, 10) requires schemes and sensibilities that can reckon with problems that are seemingly distant yet close, material, and right at hand.
While the stabilization wedge failed to gain traction as a problem solver, and the planetary boundaries concept is still subject to a form of empirical testing we would rather avoid, this chapter has described efforts to fold various uncertainties linked to climate change into a new search image. This new search image suggests that what we are accustomed to calling an uncertainty problem requiring more data may, in fact, be a complex system on the verge of collapse. What concerns most experts is not proof of the existence of climate change, but finding the tools of existence making in the face of undeniable threat. While the doubt- mongering noise of climate change denial persists (Oreskes and Conway 2011), this chapter explored the science of complexity within realms of the “given and undeniable.” Within these realms,


horizons present us initially with fleeting if somewhat arbitrary endpoints, where vision literally disappears. They shed this arbitrary quality and reveal their value in coordinating human movement in spite of always incomplete knowledge. One could say that arrival at a desired endpoint is their final product. Marten Scheffer, speaking like the scientific horizon-worker that he is, summarized: “You don’t want to overfear, but give people just enough room so that they can move.”

CHAPTER NINE

Sentinel Devices: Managing Uncertainty in Species Barrier Zones

FRÉDÉRIC KECK

Emerging Infectious Diseases and the Redefinition of Public Health

In 1978, after the success of the global smallpox vaccination campaign, the World Health Organization (WHO) announced that the fight against infectious disease was over. It soon appeared that this promise was premature: new infectious diseases have since emerged, such as the HIV/AIDS pandemic, outbreaks of the Ebola and Nipah viruses, and severe acute respiratory syndrome (SARS). The smallpox vaccination campaign succeeded because the virus passed from human to human, with few transmissions of monkeypox or chickenpox to humans. By contrast, new infectious diseases emerge from an animal reservoir: HIV/AIDS and Ebola came from African monkeys, and Nipah and SARS from South Asian bats. It is estimated that 75 percent of emerging infectious diseases come from mutations of pathogens in animal reservoirs, while others come from the resistance pathogens develop to antibiotics or from the reemergence of older pathogens due to poor health systems. Monkeys are significant sources of transmission because of their phylogenetic proximity to humans, but bats probably constitute the most important reservoir for viruses because of their biological diversity (Quammen 2012).

The emergence of new infectious diseases is not only a blow to modern dreams of controlling nature. It also expresses transformations in the relation between humans and their environment. Whereas the animal origins of HIV/AIDS remain the object of controversy (although they are certainly linked to the history of human/animal relationships in twentieth-century Africa), it is clear that Ebola or Nipah would not have spread from the forest to villages if deforestation had not pushed monkeys and bats close to human habitats. Dengue is a classic example of the effects of climate


change. The vector of this infectious disease, endemic in Africa and Asia, the mosquito Aedes aegypti, has moved into South America and is now at the borders of the United States and Europe. Another mosquito, Culex pipiens, was transported by plane from the Middle East to New York in 1999 and introduced West Nile virus to the United States, where the virus caused the deaths of 286 people and thousands of birds in 2012. This change in the nature of diseases, and in man’s relation to nature as measured by diseases, has led to a change in public policy. The weapons designed after the microbiological revolution at the turn of the twentieth century—vaccines, antivirals, antibiotics—could be used against pathogens whose behavior was already known, on the basis of immunology as knowledge of the human body’s defenses against external forces. But if the frontiers between humanity and nature change constantly, classical weapons have to be reframed. The objective of eradicating infectious diseases has been replaced by strategies for anticipating epidemics. Since pathogens emerge constantly in humans from the animal reservoir, and since the human population is not immune to these pathogens, it is necessary to mitigate the effects of these pathogens when they emerge. A new rationality has consequently been applied in public health, one that considers the emergence of a new pathogen from the animal reservoir as an event with low probability but catastrophic consequences for which humans must be prepared (Lakoff 2007). The shift from prevention to preparedness can also be described as a shift from risk to catastrophe, or from possible to potential uncertainty (Samimian-Darash 2013). The vocabulary of preparedness, coming from the world of civil defense, was transferred to natural disasters and public health: it is impossible to predict when and where a new pathogen will emerge, so it is better to behave as if it were already here.
Such a rationality entails a collective imagination of a world where the catastrophic event has already happened, so that humans build competences to react in the real world. Simulations of epidemics rely on mathematical models as well as computer exercises or on-the-ground drills for decision makers, hospital staff, and mock patients. Preparing for future epidemics also affects pharmaceutical management. Vaccines cannot be manufactured until the new pathogen has emerged, but pharmaceutical companies need to be ready to make vaccines in a very short time. They must consequently select the pathogens for which they develop backbone models, ready for insertion of the pathogen once it emerges. They make contracts under constrained conditions with nations, which stockpile antivirals and antibiotics in preparation for an epidemic. Such national strategies raise tensions between countries where pathogens emerge—often


in the South—and countries with the means to develop the tools against them—often in the North. Global biobanks based on mutual exchange are often proposed as ideal solutions, but they can be difficult to develop. These two techniques of preparedness—simulation scenarios and the stockpiling of medicine—raise fascinating issues about the management of uncertainty in the context of emerging infectious diseases. However, I will focus on a third technique: sentinel devices.1 While the first two techniques tend to reduce potential uncertainty to possible uncertainty, it can be argued that sentinel devices address potential uncertainty as such, at the level of the virtual. These are techniques that anticipate the emergence of pathogens through attentive surveillance of their mutations in the animal reservoir. The model for this technique can be found in the work done at the Hong Kong University (HKU) Department of Microbiology. After successfully identifying the coronavirus responsible for SARS and its animal origins in South Chinese bats, these researchers published an article that concluded: “The studies on the ecology of influenza led in Hong Kong in the 1970s, in which Hong Kong acted as a sentinel post for influenza, indicated that it was possible, for the first time, to do preparedness for flu on the avian level” (Shortridge, Peiris, and Guan 2003). Doing preparedness for flu at the avian level meant tracking the flu virus in the animal reservoir to anticipate potential pandemic viruses, and sending early warning signals of their emergence. This technique is similar to what is called “syndromic surveillance,” based on the collection of data on increases in demand for flu treatment. But while syndromic surveillance requires the coordination of networks of physicians, avian flu preparedness requires coordination between networks of specialists in birds, both wild and domestic.
Beginning in the 1950s, the regular emergence of pandemic flu viruses in the twentieth century (1918, 1957, 1968) was explained by the fact that flu viruses mutate in wild birds, particularly waterfowl, and are transmitted to humans through the “mixing vessel” of pigs, which combine receptors for human and bird viruses in their respiratory tracts. Pandemics occur when flu viruses mutate or reassort in the animal reservoir and replace seasonal flu viruses in the human population. This “ecology of influenza” was particularly salient in south China, where pigs and ducks, considered “healthy carriers,” live close to humans in rice paddies. The increase of intensive poultry farming, where genetic homogeneity enhances the spread of pathogens, combined with the persistence of live poultry markets, where humans are at risk of close transmission, has led to the consideration of South China as a potential site for the next pandemic, which the founder of the department of microbiology at HKU, Kennedy Shortridge, called “an influenza epicenter” (Shortridge and Stuart-Harris 1982). Considering Hong Kong as a “sentinel post” for avian flu was a way to transform this vulnerability into an asset: microbiologists in Hong Kong should use birds in the region—both wild and domestic—to warn the world about future pandemics, acting as global whistleblowers.

This chapter investigates how the management of the uncertainties of flu viruses at the human-animal interface has redefined public health in Hong Kong in relation to animal health. Sentinel devices in Hong Kong were built by connecting two networks that constantly produce data about birds: veterinarians and birdwatchers. Virologists often advise states on how to prepare for global epidemics, whereas birdwatchers are dismissed as local amateurs. I will show that both groups enact the same policy when they build sentinel devices for environmental threats.2 If the link between these two groups is constitutive of the “One Health” policy at the level of international organizations such as the WHO, the World Organisation for Animal Health (OIE), the Food and Agriculture Organization (FAO), and the Wildlife Conservation Society (WCS), how does this connection work in a local context, such as Hong Kong, and how does it change the relations between risk calculation, imaginative action, and the management of virtuality?

Virus Hunters and Potential Uncertainty

It is often argued that emerging infectious diseases mark a historical shift comparable to the Neolithic Revolution. Humans have lived for centuries with species they have learned to domesticate, and they have developed co-immunity with the microbes carried by these animals. But with the “livestock revolution,” by which is meant the increase in the number of animals domesticated for their meat, along with other environmental changes, the relations between humans and animals have been transformed in such a radical way that microbes coming from animals kill human organisms instead of replicating within them (Greger 2006). Contemporary societies would then be in a position similar to that of hunter-gatherer societies, faced with animals with which they have to communicate through invisible entities. This idea is strikingly captured in comments regularly made by experts in emerging infectious diseases, such as “nature strikes back” or “nature is the greatest bioterrorist threat” (Drexler 2002). As an anthropologist, I will not discuss the epochal value of these statements, as a historian of the environment might. Rather, I will take them as exemplifying a new rationality of uncertainty. What does the idea that microbiologists are hunters communicating with a natural enemy tell us about their management of uncertainty? How, in particular, does it distinguish


their work from risk management? Can the anthropology of hunting shed light on the management of potential uncertainty? The metaphor of “virus hunters” is often used to suggest that microbiologists go “into the wild” to capture dangerous specimens from animal species. I want to take this metaphor seriously as describing a form of rationality that deals with uncertainty in a way that differs from “pastoral” rationalities of risk. This implies confronting the anthropology of risk with the anthropology of nature: what does the way societies think about nature tell us about the way they frame risk? Michel Foucault has shown that the vocabulary of risk comes from a pastoral practice: counting the number of individuals in a population (Foucault 1981). In the eighteenth century, epizootics were occasions to count the number of casualties in the flock and anticipate the consequences of future outbreaks. Vaccination used an animal disease—cowpox—to produce immunity against a human disease—smallpox (Moulin 1996). The idea that there was a species barrier between cows and humans was a guarantee that the disease would be attenuated when the vaccine was distributed. Veterinarians and physicians could collaborate on a shared knowledge of species barriers as a distinctive milieu of actual events. But if species barriers are constantly crossed by pathogens, it becomes necessary to govern through time rather than around a specific milieu (Samimian-Darash 2013). Rather than looking at the crossing of species barriers as an actual, sometimes catastrophic, event, microbiologists now anticipate it as a virtual event: the barrier has become a critical site of enquiry. But what is the field of this enquiry? In Mary Douglas’s framework, pathogens crossing species barriers are perceived as dangerous because they sit at the borders of well-defined categories (Douglas 1966).
If the management of uncertainty consists in putting new beings into a category, uncertainty is the beginning of a process of inquiry for accountability. In her work with Aaron Wildavsky (1982), Douglas argues that the calculation of risk is only an extension of this mode of reasoning through long lists of cases. Lucien Lévy-Bruhl’s separation between “primitive mentality” and modern science is thereby criticized:

In the primitive worldview, according to Lévy-Bruhl, everything that seems abnormal is explained by the intervention of mysterious agencies called into existence by common fears and common perceptions, the mystic participations of the culture. For him, the defining feature of primitive mentality is to try to nail a cause for every misfortune; and the defining feature of modernity, to forbear to ask. Lévy-Bruhl’s astonishment would be great if he were to behold us now, moderns using advanced technology and asking those famous primitive questions as if there was no such thing as natural death, no

purely physical facts, no regular accident rates, no normal incidence of disease. (Douglas and Wildavsky 1982, 32)

Douglas takes the position of Emile Durkheim against Lévy-Bruhl: our modern ways of thinking are framed by institutions, that is, by collective ways to answer the uncertainties faced by primitive mentality. Sacrifice of nonhuman beings, says Durkheim, is a primitive form of the reduction of uncertainty: it draws a separation between humans and animals at the very moment when this separation is blurred by “collective effervescence.” Comparing Durkheim to Lévy-Bruhl draws a sharp distinction between emotion and reason, between danger and risk, which fails to convey the multiple practices of uncertainty. Claude Lévi-Strauss (1963) noted that, in the debate between Durkheim and Lévy-Bruhl, Henri Bergson had invented an interesting position, one that can shed light on the discussion about uncertainty. Bergson takes the example of the hunter who anticipates the outcome of the hunt by invoking the spirit of the prey. It is neither a space of confusion between humans and animals à la Lévy-Bruhl (the hunter doesn’t fear the prey) nor a space of collective representations à la Durkheim, which would cover over the uncertainties of hunting. Collective representations, says Bergson, lead to thinking that catching the prey was possible once the hunt is finished. But rather than calculating a possibility a posteriori, it is better to describe hunting as opening a virtual space between intentions and outcomes in which the hunter communicates with the prey through rituals and narratives: what Bergson called “virtual instinct” and Lévi-Strauss later referred to as the “symbolic function.” The prey is equipped in such a way that the uncertainty of its interactions opens a space of signaling and communication. Lévi-Strauss thus showed that the uncertainties in the behavior of animals allow them to express many traits of human societies: when some possibilities of communication are exhausted, it is possible to find other traits in animals’ behavior to express other relations.
In the management of emerging infectious diseases, Lévy-Bruhl’s position is clearly illustrated by images in the media (uncertainties about animal diseases are represented as “mad cows” or “terrorist birds” as contradictory as the famous “Bororos are Araras”) and Durkheim’s theory applies well to the politics of public health (a sacrifice is necessary to blame the food industry for turning animals into monsters): this is the opposition between danger and risk, or possible uncertainty. But Bergson’s concepts allow analysis of the work of “virus hunters” at the level of potential uncertainties where animals are equipped through sentinel devices. Sentinel devices give
access to meaningful signs in the environment, at an intermediary level between emotions and representations. Thus, when the Hong Kong government killed all the live poultry on its territory (1.5 million birds) in 1997, microbiologists advised it to equip farms with unvaccinated chickens that would be the first to die at the onset of an outbreak (in Chinese: shaobingji, "chickens who whistle like soldiers"). Bergson's propositions have been confirmed by recent analyses of hunting societies. Philippe Descola recalls that "animistic" hunters are trapped between identification with the animals they hunt and the necessity of killing them, lacking the institution of sacrifice to operate this cut (Descola 2005). Tim Ingold has described the difference between hunting societies and pastoral societies in the way they perceive the environment: as lines that must be followed and intertwined for the former, as lands that must be measured and counted for the latter. He argues that, in hunting societies, "epizootic outbreaks are the occasion to evoke a set of statements and counterstatements, whose result is a flux of animals, from those who suffered least to the most injured" (Ingold 1980, 170). Roberte Hamayon has shown that shamanistic rituals in Mongolia simulate the movements of animals to prepare for the uncertain encounter with them. Since hunting is an uncertain practice, it is better not to boast about how many animals have been hunted but to turn to experts who produce signs of the hunt's outcome, increasing the strength of the hunter. Hence the fundamental role of play in the rituals that precede hunting (Hamayon 2012). It has often been stressed that techniques of preparedness (simulation, stockpiling, sentinels) have a fictional dimension (Zylberman 2013), suggesting they are concerned with unreal problems (in opposition to real problems such as malaria, whose risk is known).
I suggest that these techniques address the same kind of problem as that faced by hunting societies, which can be called potential uncertainty. I will pursue this hypothesis by looking at how virologists and birdwatchers, considered as "virus hunters," act as critical actors in regard to governments that sacrifice humans or animals in the name of development or public health, changing the way environmental problems are cast and knowledge about the future is produced.

Viral Lines and Phylogenetic Trees

The HKU Department of Microbiology is divided into two parts, referred to as the wetlab and the drylab. In the wetlab, virologists manipulate frozen pathogens under high-biosecurity conditions. Carlo Caduff recalls that

Since 1933, biomedical scientists and public health experts have invested considerable resources in generating and channeling a seamless flow of viral strains not only across species but also across countries, institutions and disciplines. It was this controlled flow of biological matter that allowed influenza research to become independent of the seasonal occurrence of epidemic disease. (Lakoff and Collier 2008, 267)

Flu viruses are introduced into chicken embryos, ferrets, or monkey cells to follow their replication, particularly their entry (initiated by hemagglutinin) and their release (facilitated by neuraminidase); hence the H and N letters assigned to viruses. It is possible to count the number of cells after the introduction of viruses by flow cytometry, or to see a cell undergoing apoptosis under electron microscopy. The wetlab is subject to biosecurity measures just like a poultry farm, because it "nourishes" living material in such a way that epidemics become apparent (Keck 2012). In the drylab, by contrast, virologists analyze genetic sequences on their computers. Rather than searching for the viruses' impact on human health, they trace their mutations in the animal reservoir. Diseases are reduced to errors in coding, mismatches in replication and communication, as parts of a "molecularization of life" (Rabinow 1996; Rose 2006). Epidemic outbreaks are visualized as continuous drifts and catastrophic shifts. Since viruses are pieces of information that require a cell to replicate, they can be treated adequately from these two perspectives: either in a cycle of life and death or through lines of mutations. Sequences of viruses, as of all types of living beings, are available on GenBank, a database maintained by the National Center for Biotechnology Information. In 2012, this data bank contained more than 100 million sequences, a figure estimated to double every ten months. The drylab bypasses the biosecurity constraints of the wetlab: it is not necessary to collect samples in farms or markets, or to ask other labs to send them, since they are available as computer sequences. The shift from the wetlab to the drylab is described as a change of status.
It gives access to prestigious scientific journals such as Science or Nature, which valorize this kind of computer work, but it is also described as a shift from "dirty" to "clean." Vijaykrishna, one of the drylab researchers, said in 2009: "In general, I don't do the lab work, everything that makes your hands dirty. I work mainly on computers. But if I didn't have the staff to do the sequencing, I could do it because that's what I've done for years."3 The relation between wetlab and drylab can be compared to that between "back office" and "front office" in the world of finance. Virologists look at
coded sequences on computer screens like traders in a front office; but it is only because they have worked in the wetlab to produce them that they know their biological meaning, just as traders move from the back office to the front office as they gain competence. Vincent Lépinay shows that with the advent of global financial derivatives, "the locus of uncertainty has been displaced: it was once the market of one security with its chess player, but it is now the portfolio with its intricate correlation spun by the formula" (2011, 80). He notes that "the unpredictable animation of these portfolios is akin to dealing with wild organisms, and, unsurprisingly, the term 'beast' comes frequently to the forefront in the traders' conversations about their products" (84–85). In the same way, virologists displace the uncertainty from the reaction of the organism to a virus to its mutations as it circulates around the globe and across species. If collecting samples is like catching wild specimens, visualizing their sequences on a screen displaces them into a virtual space where they produce knowledge about future pandemics. A specific technique has been formed to give a biological meaning to computer sequences: bioinformatics. As MacKenzie writes: "Bioinformatics takes a strong interest in potentiating living bodies, in opening a field of possible transformations and substitutions around them, even if it itself as a material practice does not get its feet wet in living things" (2003, 318). How is life turned into potential transformations at the level of virtual sequences? By drawing correlations between one sequence and another. "Sequence comparison is in fact the operation through which horizontal translation occurs, and hence a flattening of genetic time occurs. In a sense, reading the sequence itself turns out to be far less important than reading the sequence alongside other sequences" (MacKenzie 2003, 321).
This procedure, called alignment, is possible thanks to powerful software such as BLAST (Basic Local Alignment Search Tool) or MSA (Multiple Sequence Alignment). But if correlations are proposed by the computer based on probabilities of mutations, they must be confirmed by the virologist based on biological knowledge. When two sequences are close to each other, it can be through mutations (ATCG becomes ATCA) or deletions (ATACAG becomes ATCG), but it may also be due to errors in sequencing. To decide which correlations to take into account and which to consider irrelevant, virologists can use statistical methods (such as bootstrapping, or the Jukes-Cantor and Tamura substitution models) that provide the probability of correlations based on a given scenario. Bioinformatics relies on a Bayesian logic that integrates the effect of a subjective decision on the objective result. If the uncertainty of the mutations of pathogens between species is integrated in the software that calculates probabilities (possible uncertainty), it relies ultimately on the competence of biologists who "see" mutations.
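The logic of alignment as a series of elementary edits (mutations, insertions, deletions) can be sketched with a minimal dynamic-programming computation. This is a toy illustration only; it uses unit costs rather than the heuristic seeding and biological scoring matrices that tools like BLAST actually employ, and the sequences are the short examples from the text, not real viral genes.

```python
# Minimal edit-distance sketch (Wagner-Fischer dynamic programming).
# Illustrative only: real alignment software uses heuristic seeding
# and biologically motivated scoring, not unit costs.

def edit_distance(a: str, b: str) -> int:
    """Minimum number of substitutions, insertions, and deletions
    needed to turn sequence a into sequence b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # match or mutation
        prev = curr
    return prev[-1]

# The two cases mentioned in the text:
print(edit_distance("ATCG", "ATCA"))    # one mutation -> 1
print(edit_distance("ATACAG", "ATCG"))  # two deletions -> 2
```

The "optimum alignment" discussed below is exactly the sequence of edits realizing this minimum, which is why an error in sequencing (a spurious edit) can masquerade as a biological mutation.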

Imagine you've sequenced a virus, you want to know the evolution, where it comes from. You download all these sequences from GenBank and then you make the alignment to check which nucleotides are important. But if there's something obscure in the sequence, you just check the references and ask them "what month did you do the analysis?" For influenza there is no problem because it's only the big labs who do it. (Vijaykrishna 2009)

There is a tension here between the values of collaboration or Open Access and the values of competitiveness or scale in the production of data banks. Robert Webster came to prominence in the world of influenza research as the creator of the biggest biobank, at St. Jude Children's Research Hospital in Memphis, Tennessee. Most of the researchers currently at HKU came from his lab. Webster is a real "virus hunter," having collected samples of flu viruses from birds all over the world. Researchers in Hong Kong can benefit from their position on the "sentinel post" where new viruses emerge: when they sequence a virus, they can trace it to known viruses to follow its routes. Catching a new virus becomes meaningful only in relation to the data bank to which it adds a new line. Data banks need to be connected to sentinel posts to produce knowledge about emerging viruses, through visual maps in which any actuality increases uncertainty. The goal of bioinformatics is therefore to issue a meaningful image that captures the newness of a virus within accumulated knowledge, referred to as a phylogenetic tree. This synthetic framework, which aligns sequences depending on their distances in space, time, and species, relies on a Darwinian hypothesis: similarities in mutations correspond to biological kinship. When a virus jumps from one species to another, it inaugurates a new set of lines. The "immunity pressure" of the environment can create an "evolutionary bottleneck" that allows this mutation to replicate. Tracing these lines back then leads to "the most recent common ancestor."

The real problem for bioinformatics becomes the [sic] calculating the "optimum alignment," the minimum number of edits needed to move from one sequence to another. Bioinformatics almost holds as a founding axiom that optimum alignment expresses similarity or kinship between biochemical entities. (MacKenzie 2003, 323)
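The "founding axiom" MacKenzie describes, that alignment distance expresses kinship, can be sketched as a toy clustering exercise: repeatedly join the closest pair of sequences until a nested tree remains. The strains, sequences, and Hamming distance below are hypothetical simplifications; real phylogenetics software infers trees from substitution models and likelihood methods, not raw mismatch counts.

```python
# Toy phylogenetic clustering: agglomerate the closest sequences first
# (UPGMA-style), so that small edit distances translate into close kinship.
# All strain names and sequences are invented for illustration.

from itertools import combinations

def distance(a, b):
    # Hamming distance, adequate only for equal-length toy sequences.
    return sum(x != y for x, y in zip(a, b))

def build_tree(seqs):
    """Return a nested-tuple tree over the sequence labels."""
    clusters = {name: name for name in seqs}  # label -> subtree
    while len(clusters) > 1:
        def leaves(t):
            return [t] if isinstance(t, str) else leaves(t[0]) + leaves(t[1])
        def cdist(x, y):
            pairs = [(p, q) for p in leaves(clusters[x])
                            for q in leaves(clusters[y])]
            return sum(distance(seqs[p], seqs[q]) for p, q in pairs) / len(pairs)
        x, y = min(combinations(clusters, 2), key=lambda pq: cdist(*pq))
        clusters[x + "+" + y] = (clusters.pop(x), clusters.pop(y))
    return next(iter(clusters.values()))

# Hypothetical strains: C differs from A by one mutation; B is farther away.
strains = {"A": "ATCGATCG", "C": "ATCAATCG", "B": "TTGGATCC"}
print(build_tree(strains))  # A and C, one edit apart, cluster first
```

The sketch makes the wager concrete: the tree is entirely an artifact of pairwise distances, and reading biological kinship into it is the interpretive leap the next paragraph discusses.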

There is a wager that the phylogenetic tree expresses a real process, based on the representation of nature as trying all mutations to multiply in an optimal way. This hypothesis allows the biologist to move from the virtual to the actual, from relations to an absolute event. Behind the probabilities of mutations, the biologist looks for the causality that explains these mutations, referred to as the "molecular clock." "We try to get close to the real tree, that which explains genetic sequences: what we call the absolute evolutionary tree."4 This method has one important consequence: genetic sequences allow the identification of emerging viruses that were not detected by surveillance but may have caused known outbreaks. The molecular clock displaces the time of emergence from the symptomatic present to a genetic past. Researchers at HKU (who now work in Singapore) have shown that the SARS virus mutated in bats in South China before being transmitted by civet cats in Guangzhou in 2003 (Vijaykrishna et al. 2007); that the H5N1 virus had taken advantage of the poultry vaccination campaign organized in Vietnam and China in 2005 (Smith et al. 2006); that the H1N1 virus emerging in Mexican swine in 2009 had a "twin" virus found in Hong Kong pigs in 2004 (Smith et al. 2009a); and that the H1N1 virus causing the 1918 pandemic was already circulating in pigs in 1911 (Smith et al. 2009b). Their conclusions have a strong policy impact: better surveillance of animals could have led to earlier detection of emerging viruses. The emergence of new pathogens crossing species barriers is not predicted but detected once it has already been actualized, which indicates a need for strengthening the system of animal surveillance. "Despite widespread influenza surveillance in humans, the lack of systematic swine surveillance allowed for the undetected persistence and evolution of this potentially pandemic strain for many years" (Smith et al. 2009a). If building sentinel devices involves constant surveillance of pigs and poultry, it also implies better coding at the level of data banks. The phylogenetic trees produced by HKU microbiologists take into account the eight segments of the flu virus genome, and not only the H and N genes.
They build eight trees to make better correlations between times of emergence. This capacity to sequence long genomes comes from their training on fungi and coronaviruses: fungi have 600,000 nucleotides, coronaviruses have 30,000, while flu viruses have 15,000: "If you go back a few years, people would look at specific genes, like H and N, because it was most important to characterize and find how the virus was evolving. But now it has become our theme to do full-genome sequencing."5 Sequencing more means producing more knowledge on mutations, increasing the potential uncertainty: many more emerging viruses could be found than those that were actually detected. During a course on bioinformatics, Justin Bahl simulates the situation of the 2009 H1N1 pandemic virus. The first step of research, he says, is to produce more knowledge than can be handled; then the inquiry will lead, following the hypothesis that
emerges, to reducing the scope of research: "We've received this new sequence from Atlanta. Paste it to BLAST, and take as much information as possible. Make a tree so big you can never publish it on a sheet of paper: it will be good material to reduce the inquiry."6 Hong Kong microbiologists contrast this increase in available knowledge with the retention of information by Chinese authorities. HKU is linked to Shantou University in Guangdong, where Hong Kong microbiologists regularly go to collect samples in poultry farms and markets. But when their publications threaten the interests of the Department of Agriculture in Beijing, their access to Shantou is cut off, and they have to stay in Hong Kong to work on the samples they have collected. "Closing Shantou is basically a shame," says Vijaykrishna. "All the information that we've had in the past ten years is because of this massive surveillance that's been going on in this region. And when the H5N1 virus spreads, we could say: this is the genotype that is spreading, WHO could be preparing, we could send vaccines to these countries where the virus is endemic. But since 2006 when Shantou was closed, we don't know much about this region, so you suppose these nucleotides are spreading outside." If what happens in China is uncertain, virus sequencing allows the transformation of this uncertainty into potentiality by using the information that is already available and detecting events that have not yet been detected. The uncertainty, at this level, is not whether the virus is dangerous and which species it comes from, but when it jumped from one species to another and which segment of the genome can prove it. If infectious diseases know no borders, nations can put limits on the work of microbiologists, but these borders can be virtually crossed through the use of bioinformatics. "We're sitting on a bunch of information," says Vijaykrishna. "Viruses are there, still unknown.
Even if we don't do surveillance, we have enough information to work for five years." Justin Bahl strikingly claims: "We are destroying China's Great Wall. Every virus sequenced is a brick that comes out." Viruses draw the invisible lines that governments try to hide by building borders. Sentinel devices equip the border (both ontological, between different species, and political, between governments) in such a way that it can be virtually explored. A recent change in the nomenclature of flu viruses has led to the erasure of the names of countries and provinces in favor of the position on the phylogenetic tree. Before this change of regulation, GenBank gave the name of the province and species on which the virus had been found: for instance, A/Goose/Guangdong/1/96 H5N1 (or Gs/Gd) was linked with the H5N1 virus declared in Hong Kong in 1997 and considered its Chinese precursor. But other flu viruses found in China, such as
the Fujian strain that spread around Asia, or the Qinghai clade that spread in Europe, were called "Clade 2.3.4" and "Clade 2.2" (Butler 2008). Reading phylogenetic trees entails political knowledge of borders crossed by viruses, but this knowledge is coded in a language that displaces their political meaning into a biological anticipation. If the world of animal surveillance is strongly influenced by political interests and limitations, managing the uncertainty of cross-species transmission at the potential level is a way to lift these limitations and point to global networks of surveillance. At this level, uncertainty doesn't come from the behaviors of human communities considered as risk factors, but from the plural lines of viral mutations, on which human behaviors only exert evolutionary pressure. Virologists have been successful in creating a virtual space where potential uncertainty is displaced from the lethality of viruses when they cross species borders to the time and lines of mutation. This potential uncertainty comes from the equipment of animals through sentinel devices, the constitution of a massive and continuous web of surveillance that allows the tracking of virus mutations. Virologists have the same relation to virus hunting as shamans to animal hunting: they create a space of virtual relations in which they can anticipate actual relations between humans and animals by acting on invisible entities situated between them (Ingold 2007).
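The molecular-clock reasoning invoked in this section, dating a virus's emergence from accumulated mutations rather than from observed symptoms, can be sketched numerically. The function name, rate, and figures below are hypothetical illustrations, not the calibrated models virologists actually use.

```python
# Toy molecular-clock estimate: if substitutions accumulate at a roughly
# constant rate, the time since two lineages shared a common ancestor can
# be read off their genetic distance. All numbers here are illustrative.

def divergence_years(differences, seq_length, subs_per_site_per_year):
    """Estimated years since two sequences diverged from a common ancestor."""
    distance_per_site = differences / seq_length
    # Each lineage accumulates mutations independently, hence the factor 2.
    return distance_per_site / (2 * subs_per_site_per_year)

# Hypothetical case: 30 differences over a 1,500-nucleotide segment, at
# 2e-3 substitutions per site per year (a plausible order of magnitude
# for influenza surface genes).
print(divergence_years(30, 1500, 2e-3))  # -> 5.0 years back
```

This is the displacement the chapter describes: the output is a date in the genetic past, prior to any outbreak that surveillance could have noticed in the symptomatic present.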

Birdwatchers as Sentinels

If this hypothesis is relevant, it could describe the work of another group that reacted to avian influenza: birdwatchers. While virologists face the uncertainty of the mutations of bird flu viruses when they cross political borders between China and Hong Kong, birdwatchers face the uncertainty of viruses when they cross ontological frontiers between wild and domestic birds. What does their relation to hunters reveal about their practice of uncertainty? In Hong Kong, birdwatchers have allied with virologists to produce data about the risks of avian flu. By looking at this alliance, I will show how it has transformed risk management through the use of sentinel devices. The Hong Kong Birdwatching Society (HKBWS) was founded in 1957 by British officers who wanted to compile a list of the territory's bird species and advocate for their conservation. It is estimated that 400 bird species can be observed in Hong Kong due to its position on the migratory flyways between the northern and southern regions of East Asia. Following a tradition of Western naturalists in China (Fan 2004), the society also developed a distinctive role when Hong Kong was returned to China in 1997. As the number of Chinese members increased to one thousand, there was a rising involvement in teaching nature conservation on the other side of the border. The HKBWS, along with its Taiwanese counterpart (Weller 2006), considered itself a sentinel of nature conservation in the Chinese space. This position was actively defended in the HKBWS's relationship with the government after 1997. The most important bird reserve in Hong Kong, situated in the wetland of Mai Po at the mouth of the Pearl River Delta, was run by the World Wildlife Fund but owned by the government. When the government decided to close the Mai Po reserve for three weeks every time a wild bird was found with H5N1 within 3 kilometers of the reserve, birdwatchers decided to become experts in avian flu risk management. In 2007, they organized a press conference with HKU microbiologists to show that no H5N1 had been discovered in wild birds around Mai Po. They also showed that many wild birds with H5N1 were found dead around the bird market of Mong Kok, which they explained by the release of birds for spiritual purposes. Birds that had been confined to cages were released in parks close to the market, and the stressful conditions under which they had been transported made them susceptible to infectious diseases (Ellis 2009). The argument of the HKBWS was that the Hong Kong government closed the Mai Po reserve, where few people were going, to avoid closing the Mong Kok bird market, which had a more important economic impact. Mike Kilburn, vice-director of the HKBWS, notes:

Mai Po is probably the most tested place in the world for wild birds. We've been collecting records of birds for fifty years in HK. This gives us an authority that nobody can question on birds because the HKBWS was started by English birdwatchers who had this amateur birdwatching model: you write down the birds that you see and you submit your records to the society at the end of the year and those records are available for anybody who wants to use them.
The Conservation department has avowed to me that when they try to catch up an area of biodiversity they don’t see the point to try to compete with the birdwatching society.7

The distinction between bird reserves and bird markets is interesting, as it is also a distinction between birdwatchers and bird collectors. Birdwatchers have distinguished their practices from those of hunters who collect specimens (Barrows 1998; Moss 2004): they refrain from catching birds and content themselves with observing. We can contrast three kinds of practices among amateurs of wild birds in Hong Kong. Bird collectors devote attention to the cage in which they keep their birds, transport it through the city under a veil so that the bird is undisturbed by urban noise, and spend time
listening to their songs in the market and comparing them with others as an indicator of the birds' health. They have constituted the bird market as an urban ecosystem, but they are the victims of biosecurity measures imposed on the market. Buddhists who buy birds to release them consider bird markets as places of suffering: they think that releasing a bird increases the "merits" (gongde) of humans, and they consider avian flu a sign of the mistreatment of birds in markets. But they contribute to the production of avian flu by releasing birds in improper environments (some add, by catching the birds that have just been released and selling them again). Birdwatchers have therefore proposed to release birds they extract from the illegal trade in suitable environments, and they even publish booklets teaching amateurs where and how to release animals properly. We have here an interesting controversy showing the uncertainties of pathogens circulating from birds to humans. Do pathogens come from the mistreatment of birds by humans, so that careful attention to their movements and needs would be enough to cure them, as bird collectors and Buddhist monks tend to think? Or is it necessary to have an ecosystemic perspective in which lethal pathogens emerge when the distance between species is too large? Birdwatchers have transformed the possible uncertainty of bird flu management in markets into the potential uncertainty of biodiversity knowledge on the whole territory. For bird collectors, birds are souls that suffer from diseases; for birdwatchers, birds are sentinels that send signals about environmental threats. There is a fundamental uncertainty in the sentence "birds are sick with flu that is potentially human." Is this flu transmissible, and if so, through which birds? Birdwatchers have answered this question by defining bird health as a potentiality in ecosystems.
Michelle Murphy found that women's mobilization around chemical exposure became a test for defining the "sick building" (Murphy 2006). Similarly, the environmental stakes of bird release are a test for the definition of what a sick bird is. Just as those women translated subjective complaints about working conditions into popular epidemiology, birdwatchers have transformed concerns over animal suffering into knowledge about biodiversity. This knowledge is built particularly through maps. In a 2007 press conference, the HKBWS showed a map of the cases of H5N1 in which it was clear that most cases were found around Mong Kok. This map displaced the focus on avian flu from the borders, where viruses are supposed to arrive through wild birds, to the center of the territory. Mong Kok was the place where SARS was transmitted by a physician who had treated patients in Guangzhou in 2003. If Mong Kok is a place of risk management, Hong Kong's borders are sites of anticipation of coming threats. Most places where birdwatchers
count biodiversity are situated at the borders: Mai Po, Long Valley (an agricultural site where the HKBWS succeeded in stopping a railway construction project by showing the presence of rare endemic species), and Tap Mun Chau island (where the HKBWS monitors terns threatened by fishermen's activities). These sites configure an "ecology of endangerment" (Choy 2011). Knowledge about bird biodiversity is also produced by the sharing of pictures. The presence of a new species is an uncertain event, since it is a singular encounter. Pictures transform it into a record that can be shared with other birdwatchers. Through the website, other birdwatchers can view the rare species and confirm its presence. But the use of pictures is controversial because of the arrival of new birdwatchers who practice it as a leisure activity and as a way to display their wealth through expensive equipment. "Birds are the greatest challenge to birdwatchers because they are mobile," says Mike Kilburn.8 "Before, birdwatchers used to organize exhibitions; now they post pictures on the Internet and 25 persons tell them 'Wow, it's a great picture.' The HKBWS asks them to indicate the place where they took the picture, but they want to keep the scoop. There are rival groups of photographers who share the information between them but not with others." This criticism by birdwatchers recapitulates the historical tendency by which they separated themselves from hunters: photographers are accused of being more interested in the catch than in the encounter with a bird. However, the HKBWS has used photographs as indicators of environmental degradation, asking its members to send pictures of local birds to the government for the protection of Long Valley from new construction projects, or pictures of degraded sites to enforce environmental laws.9 The controversy surrounding photographs can thus be compared to the discussions among virologists on phylogenetic trees and names of viruses.
Both aim to displace the uncertainty of real encounters with sick birds to a site where it produces knowledge; but the uncertainty bounces back in the use of the technologies that make threats visible. How can phylogenetic trees and bird photographs be used so that the encounter with threatened/threatening birds becomes an actual event? This is a problem for a politics of sentinel devices: how to build devices that send early warning signals in such a way that the large amount of data leads to a significant mobilization (Chateauraynaud and Torny 1999).

How does considering virologists and birdwatchers in relation to "hunters" reveal their specific roles in the management of uncertainty about pathogens crossing species borders? We have seen that the opposition between danger
and risk fails to account for their practices because it remains enclosed in pastoral spaces of disease containment. Virologists and birdwatchers, when they follow viruses or threatened birds in their ecosystems, produce knowledge about events that have not yet been detected, but that become meaningful through the use of devices such as phylogenetic trees or bird photographs, thus raising public awareness of potential threats. They open a virtual space of communication with animals through the use of pathogens as measures of the distances between species. Sentinel devices therefore rely on a different concept of uncertainty than other techniques of epidemic preparedness. While simulation and stockpiling create fictions out of an actual event, sentinel devices detect events at a virtual stage where they still need to produce a mobilization. It is therefore difficult to criticize a sentinel device for detecting events that have not yet proved to be threats, as the role of sentinel devices is to produce more knowledge than is actually needed for the mitigation of known threats. Sentinel devices open a space of reflection from biosecurity to biodiversity: the management of the risk of pathogens crossing species borders entails a larger map of mutations and transformations occurring between species. The question for a policy of sentinel devices is how to involve those who live constantly with animals in this new knowledge without using old narratives of fear or sacrifice. A policy of sentinel devices does not seek to restore borders that have been crossed between humans and animals, but is attentive to the multiplicity of relations between them.

CHAPTER TEN

Spaces of Uncertainty: Governing Urban Environmental Hazards

AUSTIN ZEIDERMAN

Uncertainty has long preoccupied attempts to plan, build, and govern cities. Urbanists of a modernist persuasion have often sought antidotes to that which could not be known or managed. The model, the zone, the census, and the plan all position uncertainty in opposition to modernist visions of the city, treating it as a problem to be resolved through rational, scientific principles. In recent years, however, uncertainty has taken on new urgency. Beyond the perfunctory prediction that "the future is urban," there is no shared vision of how this future will unfold. There is disagreement at the most basic level over how to describe the global urban condition and how to analyze and interpret urban life. If there is any consensus about twenty-first-century cities, it is around the impossibility of predicting what will happen in them or what they will become. With the widespread notion that we now live in turbulent times (economically, politically, and ecologically), uncertainty is no longer opposed to technical expertise; it has become normalized within contemporary urban theory, policy, and practice.

Whether present futures are ontologically more uncertain than futures past is beyond the scope of this essay. What matters here is the mounting sense in popular and scholarly discourse that we now live in a world-historical age of uncertainty. Public debates are suffused with projections of indeterminate futures that demand anticipatory action in the present. Elected officials, media pundits, environmental activists, security consultants, health experts, and others converge on the notion that uncertainty is on the rise and is outrunning attempts to manage it. This view of global transformation has had real consequences: one response to the belief that we live in an uncertain age is the political imperative to govern in anticipation of potential events (Amoore 2013). Mechanisms of "securitization" have proliferated in an attempt to generate knowledge about uncertain futures and mitigate their potential effects (Aradau, Lobo-Guerrero, and Van Munster 2008). They have emerged across a range of apparently disparate domains in both the global North and South as essential elements of "good governance," and the field of urban policy and practice is no exception (Graham 2011; Hodson and Marvin 2009). The rise of "urban ecological security," according to Mike Hodson and Simon Marvin, represents "a paradigm challenge to our conventional understanding of contemporary urbanism" (2010, 131). As urban spaces, infrastructures, economies, and lives are increasingly problematized as security concerns, it is common for cities to be governed in relation to a range of threats, from financial crises and terrorist attacks to disease outbreaks and natural disasters. Municipalities worldwide are experimenting with how to manage populations, events, and environments whose characteristics and behaviors cannot be known or foreseen. As assertive visions of urban futurity cede ground to tentative experiments in "future proofing" cities, governments and populations alike must orient themselves toward the unknown. The currency of concepts like "preparedness," "resilience," and "flexibility" reflects an acceptance of future uncertainty as increasingly fundamental to contemporary urbanism (Amin 2013). Urbanists now grapple with the challenge of reacting to varying degrees, as well as different categories, of uncertainty. This chapter examines approaches to governing the uncertain future of cities and urban environments in Colombia, focusing specifically on their implementation in the self-built settlements on the periphery of Bogotá.1 It examines the designation of these spaces as "zones of high risk" and the everyday work performed by government officials responsible for determining the risk of landslides within them.
Reflected in this policy is a technique of government that initially seeks to evaluate the likelihood of events and estimate potential damages throughout the city. It problematizes the urban environment as a space of calculable probability, or what Michel Foucault (2007) called a "milieu" (20–21). However, an ethnographic approach reveals that, in practice, the field of intervention is a space of "potential uncertainty" (Samimian-Darash 2013) or "possibility" (Amoore 2013) that need not be measured to be managed. Differentiating between governmental policy and practice reveals that uncertainty and techniques for managing it both exist in a variety of different forms.

The Map of Risk

The Caja de la Vivienda Popular is the branch of Bogotá's municipal government that, since 2003, has managed a housing resettlement program for
families living in zonas de alto riesgo, or "zones of high risk." Its official duty is to protect the lives of populations deemed vulnerable to environmental hazards, such as landslide and flood, by relocating them. In 2008 and 2009, I spent a good deal of time chatting with the social workers, architects, and lawyers who manage the day-to-day operations of the Caja's field office in Ciudad Bolívar— the poor area in the mountainous southern periphery of Bogotá where the majority of these zones are located. Although the Caja's headquarters are in the city center, this office serves as the hub of the resettlement program for the many households undergoing relocation. And yet it is not merely a place where policies developed elsewhere are put into effect. It is a key site in which risk is actively managed— that is, where techniques for governing uncertainty are implemented on an everyday basis.

On one occasion, we were discussing a recent political scandal when two women, both resettlement program "beneficiaries," arrived holding a newspaper clipping advertising houses for sale. Although the Caja is responsible for relocating "at risk" populations, it relies on the initiative of beneficiaries to manage their own resettlement. These women had been searching for housing to purchase with the subsidy guaranteed to the population living in "zones of high risk," and they believed they had finally found dwellings that suited their criteria. Before they could proceed, however, their selections had to be approved by the equipo técnico, or "technical team," according to a strict set of norms, of which the risk designation of the new property was the most elemental. One of the Caja's architects, Ana María, took the paper and looked at the address: "Unfortunately," she said, "these properties are in a zona de alto riesgo." In a pedagogical tone, Ana María offered to explain.
Rising from her chair, she shifted attention toward a map hanging on the wall, which showed the streets of Ciudad Bolívar shaded over in green, yellow, and red to indicate low, medium, and high risk. Ana María first pointed to the two properties, which were in a yellow zone (medium risk). But her finger then slid up the map an inch, stopping at two red lines just north of the street on which these two houses were located. “These red lines,” she explained, “are seismic faults where earthquake damage can be severe.” Ana María, regardless of the map, pronounced these properties “high risk” and, thereby, disqualified them. Calculative predictions of disaster risk in Bogotá— that is, which neighborhoods are designated “zones of high risk” and therefore subject to relocation— were made initially in the late 1990s at the time of major transformations in urban policy. The more general process of establishing policies for managing risk on a national level began in the previous decade. Yet
as Ana María demonstrated by her simultaneous reference to and adjustment of the map, neither these policies nor their predictive calculations of risk and vulnerability are fixed. In the practice of implementation, they are repeatedly formed and reformed. Techniques for rendering the uncertain future actionable in the present are reconfigured as calculation is overlaid with intuition. The urban environment can be problematized both as a space of probability and predictability and as a space of potentiality and possibility.

Zones of Uncertainty

These observations recall recent shifts in paradigms of urbanism, which assert that cities and their environments should be managed according to the rationality of risk. In both the global North and South, it is common for cities to be governed through probabilistic calculations of potential events (Hodson and Marvin 2009). With no consensus, however, on how to calculate such probabilities, risk belongs to a range of techniques used to render the uncertain future an object of official decision making in the present (Lakoff and Klinenberg 2010). Increasingly relevant are Foucault's (2007) lectures on the rise of "security" as a technology of power that governs through predictive calculations of the probability of uncertain future events. But we must expand upon this body of work in order to recognize different forms of uncertainty and account for the diverse techniques devised for governing them.

In these lectures, Foucault examines the emergence of security as a key technology of power in France. To summarize: around 1800, juridical and disciplinary mechanisms were joined by apparatuses of security; security was a technology of power based on risk; risk was a mechanism of decision making that relied on techniques of calculative prediction; and these techniques were inherently spatial. Unlike sovereignty and discipline, which prohibited and prescribed, security mechanisms inserted their object of concern into a series of probable events whose likelihood of occurring in the future could be calculated in the present and then managed according to an average considered optimal or acceptable. Spatial control over bodies, populations, and objects was released as security encouraged their circulation and freedom. Foucault (2007) illustrates the working of these three distinct yet overlapping political technologies— sovereignty, discipline, and security— in the domain of health.
While lepers of the Middle Ages were excluded from the general population by juridical mechanisms, and the plagues of the sixteenth and seventeenth centuries were controlled through disciplinary techniques, the problem of smallpox in the eighteenth century was tackled according to the logic of security (9–10). In the latter case, statistics were used to calculate the probability of infection across the population, and decisions to vaccinate were based on these calculations (10). It is at this moment, then, that we see the emergence of what Foucault calls the "absolutely crucial notion of risk" (61). In contrast to sovereignty and discipline, both of which would have tried to eliminate the threat and protect the population from infection, these inoculation practices calculated the likelihood of contracting or dying from smallpox within population groups. And these calculations revealed that the population was distributed spatially into "zones of higher risk" and "zones of lower risk" (61).

Geographers note that Foucault's initial emphasis on the spatial problem of the town wanes in these lectures, to such a degree that, as Stuart Elden (2007) points out, "territory is marginalized in Foucault's story" (32). But Foucault makes a critical point about spaces of security: "The specific space of security refers . . . to a series of possible events; it refers to the temporal and the uncertain, which have to be inserted within a given space" (2007, 20). He calls the space "in which a series of uncertain elements unfolds" the "milieu" (ibid.). This is the field in which security intervenes according to probabilistic calculations. Foucault does not, however, concern himself with different forms of uncertainty nor with the possibility that they may require different governmental techniques. The field of security interventions, or the "milieu," is for Foucault necessarily a space of calculated probability. Uncertainty is, therefore, limited to a "series of possible events" whose likelihood cannot be fully known but can be rationally assessed.
But what about approaches to governing uncertainty that do not necessarily depend on calculative techniques, such as probability and prediction? This question has not been pursued in depth by the literature on risk and security. Analysts tend either to treat techniques of calculative prediction as inherent to the political rationality Foucault calls “security,” or they consider what happens when these mechanisms no longer serve to manage future uncertainty. François Ewald (1991) represents the former approach. He describes insurance— or, the framework through which European societies begin to analyze themselves and their problems from the nineteenth century onward— as a “schema of rationality” based on the “technology of risk” that is “formalized by the calculus of probabilities” (199, 210). Whereas in what Ewald calls “everyday language” the term “risk” is “a synonym for danger or peril,” in the domain of insurance it takes on a strictly technical identity: “For an event to be a risk, it must be possible to evaluate its probability” (202). In the majority of studies inspired by Foucault and Ewald that track
the genealogy of security mechanisms, risk is assumed to both produce and adhere to forms of reasoning that seek prediction on the basis of calculation. The second approach is best exemplified by the debate surrounding the work of Ulrich Beck (1992, 2009). Emerging out of a different theoretical tradition, Beck’s work raises questions for Foucauldian analytics of security, especially regarding the contemporary limits of calculative prediction. Like Ewald, Beck (2002, 40) sees risk as implying calculation, rational decision making, and the promise of control. But when technologies of prediction and mitigation meet threats that exceed the rationality of insurance, Beck argues that we have reached the threshold of industrial modernity and entered an epoch of uncertainty. Critiques leveled at his theory of “world risk society” notwithstanding, Beck poses a crucial question: What happens when uncertainty exceeds the limits of rational calculative prediction? In contrast to claims that nonrational, noncalculative decision making takes over, the most insightful responses have come from those who have analyzed the emergence of other techniques for governing potential threats, such as “preparedness” and “enactment,” that rely on different methods of assessment and intervention (Lakoff 2007; Collier 2008). As will soon become clear, my position differs from the first by questioning whether risk necessarily produces and adheres to calculative, predictive logics. With the second, I seek to examine the limits of this mode of governing the future; I do not claim, however, that we have witnessed a shift to an age of uncertainty in which calculative prediction is no longer possible. And while my approach resembles those that track the emergence of new techniques for managing potential threats, I subject them to a slightly different type of analysis. Given the fundamentally processual and contingent nature of risk, I argue that what governors do is of central importance. 
Thus I follow Collier, Lakoff, and Rabinow (2004) in emphasizing both the "forms of reasoning" and the "practices" with which experts bring threats into frameworks of technical intervention (5–6). In contrast to studies that privilege forms of governmental technique, I argue for an ethnographic analysis of their formation. When this lens is focused on the policy and practice of risk management, it becomes visible how different forms of uncertainty are met by a range of responses that only occasionally rely on calculative predictions.2 This line of inquiry is opened up by Limor Samimian-Darash's (2013) discussion of "possible uncertainty" and "potential uncertainty" and by Louise Amoore's (2013) contrast between the "politics of probability" and the "politics of possibility." Despite key differences, both Samimian-Darash and Amoore draw attention to the fact that future uncertainty can be problematized and governed in multiple ways. In my analysis of risk governance, the domain of policy is instantiated by studies, maps, plans, and documents that offer calculative predictions of the likelihood of future events. The domain of practice, on the other hand, comprises site visits, office conversations, and door-to-door inspections that aim to arrive at a decision about the future based on forms of assessment that seek neither to calculate nor predict.

Risk Management in Colombia

Colombia has long been a country plagued by violence where security is the dominant logic through which state, territory, and population are understood. Despite popular conceptions of "state failure," in the past two decades Colombia has come to be seen as an exemplary site of innovation in approaches to governing crime, violence, and insecurity. On the one hand, the state has advanced a militaristic approach to controlling territory, establishing sovereignty, and defeating "narcoterrorism." On the other hand, Colombia has witnessed an elaboration of policies whose aim is to protect the life of the population against threats— from guerrilla attack to natural disaster. The emergence of a broad range of security mechanisms reflects a shift in how the state governs uncertain futures.

For most of the twentieth century, the dominant logic governing the Colombian state's approach to disasters was emergency response (Ramírez Gomez and Cardona 1996, 256). As in other Latin American countries, the role of the state was complemented by the Red Cross Societies, which provided additional humanitarian aid to victims in the aftermath of catastrophic events. Most disaster management took place on the local level, however, until 1949, when Colombia established a national policy for handling large-scale public emergencies. Following the assassination of populist presidential candidate Jorge Eliécer Gaitán, and the ensuing revolt that left much of Bogotá in ruins, the Colombian government signed an agreement with the Red Cross to create a semipublic parastatal organization, the Socorro Nacional de la Cruz Roja, which would thereafter be responsible for managing emergency response on a national scale (Ramírez Gomez and Cardona 1996, 264). This burden shifted in the 1960s, when Latin American countries influenced by the US Agency for International Development (USAID) began promoting agencies of civil defense.
Adhering to national security policy, these offshoots of the military were given the mission of establishing ties with the civilian population in the interest of combating the “internal enemy.”
The threat of political subversion posed by local communist movements motivated civil defense agencies to respond to disasters of accidental or nonhuman origin in order to prevent social and political instability (Ramírez Gomez and Cardona 1996, 265). These entities came to the aid of the Colombian city of Popayán following the 1983 earthquake there— a disaster understood primarily as a problem of rescue, recovery, and reconstruction. In contrast, two catastrophes that converged in November 1985— the eruption of the Nevado del Ruiz volcano that killed nearly thirty thousand people and the guerrilla siege of the Palace of Justice in Bogotá— precipitated a shift toward future-oriented policies of risk management. The 1985 events were seen not solely as emergencies requiring response, but as catastrophes that could have been foreseen and either prepared for or prevented. The current National Plan for Disaster Prevention and Response affirms that a transition occurred at this moment: "The general policy of the Colombian state has been, since 1986, to consolidate and incorporate the mitigation of risks and the prevention of disasters in the socioeconomic development process of the country" (DNPAD 1998, 21). Disaster risk management has gone in and out of political favor since the mid-1980s. But from this moment onward, the Colombian state would be obligated both to respond to emergencies after they occurred and to plan and govern the national territory according to calculative predictions of threat and danger.

The 1985 volcanic eruption and guerrilla attack were quite different catastrophes, and yet together were instrumental in establishing risk management policy in Colombia. The linkage between diverse security threats at this moment was not new. As mentioned, the responsibility of civil defense agencies to protect the nation against foreign and internal enemies implied the imperative to respond to natural disasters.
President Belisario Betancur exemplified this approach to disaster management in 1983 when he assured earthquake survivors that the army would ensure that subversive elements did not create disorder. However, the events of November 1985 brought about a new connection between human and nonhuman dangers. It was believed that both the "political" threat of armed insurgency and the "physical" threat of natural disaster had been known to be likely, and yet no action was taken in either case. Whatever the cause of catastrophe in the future, response alone would no longer be sufficient.

Nevertheless, a boundary separating political-military threats from those of a technical-civilian nature would soon be drawn. When Colombian disaster policy had been limited to emergency response, the same entities aided in rescue, recovery, and reconstruction, whether the event was a bomb explosion or a seasonal flood. But once risk management policy was established, the job of collecting, analyzing, and acting upon prognostic information about threats was divided between national security agencies on the one hand and technical agencies on the other. The institutional disarticulation of disaster risk management and national security policy led to separate techniques for predicting threat and calculating risk— that is, for governing future uncertainty.

President Betancur, who was heavily criticized for his handling of the 1985 events, was succeeded in 1986 by Virgilio Barco. In seeking to rectify the crisis that plagued his predecessor, and recognizing that doing so demanded future-oriented policies of risk management, President Barco summoned a select group of experts. As one of these experts later recalled, Barco told them: "Look, I don't want a Ruiz to happen to me. I don't want what happened to Betancur to happen to me. I don't know anything about this stuff. You guys, watch what you're doing." Following Barco's plea, the National Office of Emergency Response (ONAE) was created in 1986 with support from the United Nations Development Programme (UNDP) (Ramírez Gomez and Cardona 1996, 271). The municipal government of Bogotá followed suit in 1987 after the city council passed an accord creating a Fund for Emergency Prevention and Response. Its primary objective would be to "finance the creation of a program . . . to prevent disasters based on studies and inventories of risk" (Consejo de Bogotá 1987). Its priority would be communities located in "zones of risk" (zonas de riesgo) and these would be identified on a citywide risk map.

In 1988, the Colombian government created a National System for Disaster Prevention and Response (SNPAD), which sought to integrate risk management policy on a national scale and with local-level institutions of government.
A sense of urgency was evident in Representative Darío Martínez Betancourt's plea for Congress to approve its creation: "There are numerous events that could lead to death and that threaten the life and tranquility of Colombians. Colombia is a country of extremely high risks (un país de altísimos riesgos)" (Congreso de Colombia 1988). Once the SNPAD was adopted, the Colombian state thereafter was obligated to identify risks, develop techniques for detecting and measuring them, and communicate this information to the public. Authority was distributed among government agencies according to their particular areas of technical expertise.

By this point, the classification of possible events had been codified in the legal and policy frameworks governing emergency response. Upon its initial creation in 1984, the National Disaster Fund followed an all-hazard preparedness approach; it could be applied to "catastrophes" caused by
“artificial or natural phenomena of great intensity or violence; unique or repetitive unfortunate events; diseases or medical conditions of an epidemic nature; and acts of hostility or armed conflict of a national or international scope that effect the population” (Presidente de la República de Colombia 1984). However, the 1989 law that brought the Fund into operation replaced the term “catastrophes” with “situations of disaster, calamity, or those of a similar nature” and confined its application to “floods, droughts, frosts, hurricane winds, earthquakes, tidal waves, fire, volcanic eruptions, avalanches, landslides and technological risks in declared disaster zones” (Presidente de la República de Colombia 1989). Likewise, the 1988 law creating the SNPAD also defined “disaster” in such a way as to limit its scope: it became “the grave damage or alteration of the national conditions of life in a determinate geographic area, caused by natural phenomena or by catastrophic effects of the accidental actions of man” (Congreso de Colombia 1988). With the shift of emphasis from response to risk, future uncertainty was partitioned according to its human and nonhuman dimensions. Risk became a technique for governing Colombia’s cities in 1989 when Congress reformed urban policy and obligated every municipality to formulate development plans (Pecha Quimbay 2008). Although it was a move in the direction of decentralization, this law nevertheless determined the basic criteria for how cities throughout the country would be legally bound to govern themselves. Contained within these policies was the imperative that urban governments establish inventories of vulnerable human settlements by calculating their exposure to flooding, landslide, or other unhealthy living conditions. And if localized mitigation projects were impossible, municipal authorities would be required to relocate the inhabitants of “zones of high risk” (zonas de alto riesgo). 
The establishment of risk management policy in Colombia was not the result of national events alone. The UNDP had sent recovery aid after the 1985 volcanic disaster and was actively involved in the initial creation of the National Office of Emergency Response. The influence of the international community intensified when the UN General Assembly proclaimed that the 1990s would be the "International Decade for Natural Disaster Reduction." In the late 1990s, the UN created an interagency collaboration called the International Strategy for Disaster Reduction (ISDR) to pursue the initiatives begun in the previous decade. The ISDR partnered with the World Bank in 2006 to form the Global Facility for Disaster Reduction and Recovery (GFDRR), which would provide technical and financial assistance to "high risk low- and middle-income countries" in order to focus their national development plans on disaster reduction. By 2009, the World Bank was
providing Colombia with more than $340 million in funding for disaster management projects and Colombia had become recognized internationally as a leader in this field (GFDRR 2009, 224–229). While the coinciding catastrophes of 1985 created an initial opening, the establishment of risk management policy in Colombia reflects a global shift in ways of thinking about and acting upon cities.

“Zones of High Risk”

This section focuses on the formation of "zones of high risk" in Bogotá. The story begins in late 1987 when the Bogotá city council created the Fund for Emergency Prevention and Response. This fund was dedicated to financing a comprehensive disaster prevention program based on studies of zonas de riesgo, or "zones of risk" as they were initially called. Shortly thereafter, in 1989, the Colombian legislature enacted a broad reform of urban policy, which obligated every municipality with a population of more than 100,000 to maintain an inventory of zonas de alto riesgo, or "zones of high risk," and to begin mitigation work or relocation programs in these areas (Pecha Quimbay 2008). Although this law applied to all cities, the municipal government of Bogotá became one of the first to implement it. In 1994, Bogotá's mayor, Jaime Castro, directed what was then the Office of Emergency Prevention and Response to begin analyzing the distribution of environmental risk across the city. A consulting firm, Ingeocim, was hired to conduct these studies and to devise a methodology for calculating risks of a geophysical nature.

The imperative to govern the city as a space of risk extended beyond the domain of environmental hazards in 1995 when an unconventional mathematics professor and university administrator, Antanas Mockus, became mayor of Bogotá following a barrage of homicides, political assassinations, crime waves, and bomb attacks. Searching for innovative strategies with which to confront problems of insecurity, Mockus found inspiration in Cali, Colombia's third largest city, and its novel Program for Development, Security, and Peace (Martin and Ceballos 2004, 222). This program was an initiative of Cali's mayor, Rodrigo Guerrero, a medical doctor whose graduate studies in public health had led him to approach insecurity from an epidemiological perspective (Rivas Gamboa 2007, 157, 180–181).
The Mockus administration subsequently adopted a similar strategy in the creation of a Unified Information System of Violence and Delinquency (SUIVD) in Bogotá. Like its predecessor in Cali, this system was based on the idea that outbreaks of crime and violence could be treated like diseases of unknown
origin. By collecting and analyzing existing crime data, the SUIVD sought to identify risk factors that could be used to predict when and where violence would be likely to occur in the future. Once the spatial distribution of risk factors was determined, the system could guide a coordinated strategy of governmental intervention targeting high-risk activities, populations, and neighborhoods. The violence prevention strategy that emerged in the mid-1990s in Bogotá paralleled attempts to map the city's vulnerability to environmental hazards. Shared between these efforts was the idea that Bogotá was a space of risks that could be calculated probabilistically and governed through interventions concentrated in specific zones.

The consulting firm, Ingeocim, evaluated threats of various kinds— earthquakes, landslides, floods, fires, and industrial accidents— and mapped their spatial distribution. "Threat" referred to the probability of occurrence of a given phenomenon over a specific period of time. To calculate the threat of landslide, for example, they used GIS mapping techniques, photographic interpretation, field surveys, and historical records. They broke down landslide threat into three levels— low, medium, and high— and mapped them accordingly.3 Ingeocim then overlaid levels of landslide threat with measures of social and physical vulnerability (e.g., housing type, tenure, access to services, literacy, occupation), which when combined resulted in a "total loss index" (índice de pérdida global) (DPAE 1999; Ingeocim 1998). "Zones of high risk" were those in which the expected average loss of life, property, and infrastructure in the event of landslide was estimated at over 62.5 percent of the total existing in the defined area (DPAE 1999). Based on these studies, the municipal government issued citywide risk maps and priorities were established for more detailed calculations on the neighborhood and household scale.
In 2000, these priorities, based on the intersection of high risk levels and low socioeconomic status, were incorporated into Bogotá's master plan. Under the leadership of Mayor Enrique Peñalosa, the master plan sought to recuperate public space, improve the distribution of services, construct a mass transit system, and regulate urban land use (Pecha Quimbay 2008, 100). In terms of housing, it was also an attempt to address Bogotá's massive problems of informality and illegality. To upgrade self-built settlements lacking basic necessities and legal titles, the plan proposed to provide them access to comprehensive neighborhood improvement programs. However, since preliminary studies had shown that a large percentage of the city was vulnerable to environmental hazards or otherwise unfit for habitation, this implied the need to resettle thousands of households (Pecha Quimbay 2008, 101). As a result, the master plan adopted Ingeocim's identification of
areas in which landslides and floods were likely to cause major damage and loss of life. Bogotá’s Directorate for Emergency Prevention and Response (DPAE) was then given the technical responsibility for the ongoing evaluation and monitoring of “zones of high risk,” and the Caja was granted the authority to manage the relocation of populations inhabiting them.

Technical Diagnosis

Thus far, we have seen how the calculative predictions leading to the designation of "zones of high risk" were incorporated into urban policy and government. This may give the impression that risk management, once established on a national and urban scale, has been relatively static. The problem, however, is that the topography of the city is not. The municipal government always recognized this; a caveat accompanying DPAE maps states: "The map of landslide threat is of a temporal nature, and therefore subject to conditions present at any given moment. Since these are changeable through time, the threat levels could be variable" (2007). Because of the city's unstable ground, calculative predictions of risk must be regularly revised. Thus, DPAE created an instrument called the diagnóstico técnico (or technical diagnosis)— an on-site evaluation conducted by staff engineers and architects. During these visits, DPAE technicians perform visual inspections and qualitative assessments of both the property and the neighborhood. The technicians then recommend immediate action, if necessary, and decide whether the property should be included in the Caja's resettlement program. The routine work of conducting a technical diagnosis reveals that, in practice, the "zone of high risk" is equipment for managing that which is not easily amenable to calculative reason (Rabinow 2003). DPAE technicians operate in a space of potential uncertainty that need not be measured to be governed.

On the ride to the barrio of San Rafael, the two government technicians I'm accompanying, Tatiana and Miguel, explain the purpose of our trip. We're setting out to monitor risk in a neighborhood bordering the site of a 1999 landslide, which destroyed hundreds of houses and took several lives.
We pass the Escarpment of San Rafael, which DPAE regards as the phenomenon causing this mountain to rise in some places, fall in others, and occasionally open up to swallow houses into gaping crevices. More than 150 homes were affected by a second landslide in 2000; 85 more were toppled in 2001; and then, in 2002, a major collapse damaged more than 800 dwellings in the same vicinity (DPAE 2006). Although technical studies began in 1999 immediately after the initial disaster— and rescue, reconstruction, and resettlement efforts have been underway ever since— the area was not

Spaces of Uncertainty / 195

Figure 10.1. Evacuated “zone of high risk.” Credit: Photograph by author, 2009.

officially declared a “zone of high risk” until more than 1,000 properties had been damaged or destroyed. Its official boundary was demarcated in 2004 and, from that moment on, it determined who would be entitled to the Caja’s resettlement program. However, the boundary of the risk zone— and, therefore, who has the right to housing subsidies and who does not; who is destined for relocation and who will remain— must remain flexible in order to keep up with the shifting ground beneath it. The movable boundary of the risk zone is also the flexible geography of governmental responsibility. The truck bumps and grinds up the steep, rutted dirt roads typical of the hillside settlements that make up the southern periphery of Bogotá. Tatiana points to a vast stretch of empty land once inhabited by more than three thousand families (figure 10.1). She then indicates where mitigation works will soon begin to protect the neighborhoods surrounding the now evacuated zone from further damage. Drains will be installed to channel water and retaining walls will be built to stabilize collapsing hillsides. Before starting work, however, DPAE must conduct house- by- house inspections to determine whether the phenomenon causing landslides in the adjacent area is also threatening this neighborhood.

196 / Austin Zeiderman

We’re dropped off at the border separating the evacuated zone from a dense settlement of self-built houses varying in size, material, and construction. Though some are two or three stories tall with concrete or brick facades, most are rudimentary, one-floor dwellings made of cinderblock, wood, and tin. As we begin our survey, a middle-aged woman with a small child passes by and asks: “Are you here to evict us?” Responding to her perception of the work the government does here in Ciudad Bolívar, Miguel reassures her: “No, we’re from the DPAE and we’re just monitoring the area.”

Tatiana raps on a window of the first house on our list. A woman answers from behind a locked door, “Who is it?” Looking down at his map, Miguel asks, “Is this lot number 10, Señora?” Somewhat tentatively, she responds, “Yes, it’s lot 10.” “OK, good,” Tatiana says. “We’re with the DPAE, and we’re doing inspections of the houses on this block. Due to the landslide phenomenon we have behind us,” she says, gesturing toward the “zone of high risk” in the distance, “we need to verify whether you have recently noticed any cracks in the walls, floors, or roof of your house.” “No, I haven’t noticed anything like that,” the woman answers. Miguel follows with a further request: “Would you allow us to come in and look around and maybe take some photographs of the walls and the floor? Are you sure that your house has not been affected recently by any sort of cracking?” The woman hesitates for a moment before consenting. “Sure, you can come in if you’d like.” The three of us enter, glance quickly at the walls, floors, and ceiling, take a few photographs, and then leave, thanking her for her time. Tatiana then scrolls down to “Block 2, Lot 10” on her list and next to it enters: “No apparent effects.”

The rest of our monitoring day goes about like this. We move from house to house, introduce the purpose of our visit, ask whether the resident has noticed any recent cracking, and then conduct a brief inspection (figure 10.2). Though occasionally we have to skip properties where no one is home, many encounters are identical to the one described above.

Sometimes, however, the head of the household is not so trusting and cooperative. On a number of occasions, we are told bluntly: “No, there’s no cracking here.” And when Tatiana or Miguel asks permission to conduct a brief inspection, the request is met with abrupt rejection: “No, at the moment, no.” Perhaps it is simply an inconvenient time. But in many cases, there is noticeable hostility. Experience might lead people to assume that if government officials are knocking on doors today, they will be delivering eviction notices tomorrow. As far as the monitoring goes, however, these rejections do not pose a problem. Tatiana simply responds, “OK, fine, many thanks,” and marks down on her list: “Entry not permitted. Resident reports no apparent effects.”


Figure 10.2. Diagnóstico técnico in action. Credit: Photograph by author, 2010.

I start to get bored, and find myself actually hoping that we’ll stumble upon something exciting—a collapsing wall, a falling roof, perhaps a cracking floor. And eventually we do. Upon entering one house, we find fissures that appear to be serious in both the walls of the bedroom and the living room floor. Looking concerned, Miguel snaps some photographs, and then goes outside to examine the cinderblocks behind the cracking walls. He returns, informing the residents that their house is in danger of collapsing, but that the problem is due to deficiencia constructiva (faulty construction) and not to un fenómeno natural (a natural phenomenon).

As Tatiana makes note of this evaluation on her list and we return to the street, I begin to wonder how one differentiates between these two types of damage. I ask, somewhat self-consciously: “How do you distinguish cracks caused by faulty construction from those caused by movement of the earth?” Miguel patiently explains: “When there are problems with the way concrete is mixed, you see small cracks between the cinderblocks, telling you the builder did not use enough cement.” I then look to Tatiana to get her view: “You just know when it’s mala construcción (bad construction) and when it’s algo natural (something natural).”

We continue with our technical diagnosis, moving to an adjacent block. Turning the corner, we stumble upon a mud-filled ditch where a water supply tube is actively leaking. Though originally installed by the utility company, it was later tapped into by households. Water gushes out of a thin rubber hose, most likely the work of the local fontanero, a self-employed water manager who installs informal connections, regulates flows, and fixes leaks. Miguel and Tatiana examine the ditch. In addition to the water escaping from the hose, the ditch collects and channels rainwater during the wet season, both of which further destabilize ground that is already dangerously unstable. Miguel and I follow the seepage to the point at which it eventually disappears beneath a nearby house, while Tatiana marks down our exact location. “I’m going to have to alert the gestora local,” she says, referring to the local manager who represents DPAE at the neighborhood level. “She’ll contact the water utility and have them remove the connection and repair the leak.”

Thinking we have finally discovered reason to declare this a “zone of high risk,” I ask: “Will this affect the boundary and therefore who will be relocated?” “No, no, no,” she responds; “we’re looking for structural damage caused by geophysical movement. Anything beyond that is a different kind of problem. In fact,” she goes on to explain, “it’s good that we saw this; it means that any subsidence downhill from here is probably attributable to this.”

The day is finally winding down and only one more property remains. Miguel knocks on the door, and the woman who opens it looks at his yellow DPAE jacket and asks with a smile, “Are you coming to kick me out?” “No,” Tatiana responds, “we’re just monitoring the block to see if there are signs of movement and we haven’t found any.” “Too bad,” the woman says, “I want to get out of here! I’ve been here for two years and I don’t like this neighborhood. I don’t like living up here on this hillside.”


Still hoping that we will find evidence that the ground beneath her house is unstable, she invites us to come in and inspect. She guides us from room to room, showing us where the roof leaks when it rains and where the floor is not level— signs, she hopes, that will convince us that she should be eligible for resettlement. “Unfortunately,” Tatiana informs her, “these problems have nothing to do with the geophysical movement occurring nearby.” With this final inspection our technical diagnosis comes to an end, and the boundary remains as before.

Conclusion

This is the work that goes into implementing risk management policy in Bogotá. It shows that the boundaries of high-risk zones are not based solely on predictive calculations of uncertain future events. In their efforts to evaluate the constantly changing likelihood of landslide, government technicians operate in a space of potential uncertainty that need not be measured to be governed. The boundaries of the zone might have been expanded to include these households had Tatiana and Miguel seen things differently, had people reported more recent damage, had their walls and floors cracked in another way. There is no infallible method of calculating whether a fissure is the result of a poorly built house or the unstable ground beneath it—or perhaps both. And although calculative predictions of disaster risk in Bogotá are established in official planning regulations, these assessments are recognized as perennially changeable and therefore subject to regular inspection. What was encountered during our visit—collapsing houses, bursting water tubes, willing evacuees, their recalcitrant neighbors—did not collectively translate into a definitive change of risk level. Despite the difficulty of relying on prognostic data, information was gathered and a decision made—it did not inhibit the governing of the area. The neighborhood remained, at least for the time being, outside the “zone of high risk.”

On the basis of this ethnographic material, we might wonder whether the designation of risk in Bogotá should be called “calculative” at all. The monitoring efforts described in this chapter involve observations, intuitions, negotiations—but calculations? Certainly not, if when we think of calculation we imagine a scientist reading a precise numerical measurement off a technical instrument. But rather than calling this a “noncalculative” rationality, I would argue that, in practice, this approach to governing uncertainty does not rely, at least not exclusively, on calculative predictions of future events. For risk managers, the field of intervention is a space of potential uncertainty to be governed by other means. One implication is that official risk assessments become entangled with those being made simultaneously by nonexperts. Repeated interactions between DPAE technicians and inhabitants of these areas reveal that such assessments ultimately depend on how both groups engage in the sometimes calculative, sometimes intuitive process of codifying the world in terms of risk. As Wittgenstein (1972) puts it, “out of a host of calculations certain ones might be designated as reliable once for all, others as not yet fixed . . . even when the calculation is something fixed for me, this is only a decision for a practical purpose” (48–49). As policies of risk management enter the practical realm of implementation, uncertainty changes from something that can be calculated, predicted, and managed to something that must be observed, intuited, and negotiated.

In sum, the imperative to govern urban environmental hazards through policies of risk management utilizes but does not depend on techniques of calculative prediction. It operates within a space of intervention in which uncertainty is an assumed characteristic of the city that can be approached by a range of methods. Risk management began in Bogotá as an attempt to calculate the likelihood of catastrophic events in the self-built settlements of the urban periphery and then intervene accordingly. However, the establishment of governmental frameworks eventually led to their practical implementation, and this required ways of dealing with potential threats that could not be easily measured or foreseen. One form of uncertainty gave way to another, and in response alternative approaches to governing the urban future emerged.

AFTERWORD

Problematization and Uncertainty

Work: That which is susceptible of introducing a significant difference in the field of knowledge, at the price of a certain difficulty for the author and the reader, and with the eventual recompense of a certain pleasure, that is to say access to a different figure of truth.
—Michel Foucault, Dits et écrits II, 1976–1988

The chapters in this volume adroitly analyze a variety of aspects of a broad phenomenon that we identify as uncertainty. The great range and differing scale of the cases presented underscore a claim that a motion of problematization is taking place in the world. The notion of problematization comes from Michel Foucault:

[It] does not mean the representation of a pre-existing object nor the creation through discourse of an object that did not exist. It is the ensemble of discursive and non-discursive practices that make something enter into the play of true and false and constitute it as an object of thought (whether in the form of moral reflection, scientific practice, political analysis, etc.). (Rabinow 2003, 18)

We are not attempting to characterize the ultimate contours of the problematization. It is no doubt too early to identify definitive changes in the terms life, labor, and language from the meanings, institutions, power relations, contestations, and multiplication of domains out of which risk technology emerged and was made into a dominant mode of understanding and practice. Simply by pointing to massively more efficient and accelerative modes of transmission of information, such as the Internet; the expansion of ever more comprehensive and powerful instruments and institutions of global capitalism; and the unification, through the last two vectors, of the life sciences is to indicate the obvious. Less obvious is how to undertake the work to be done to develop new modes of understanding and practice appropriate to these major changes in the world. The chapters in this volume make a signal contribution in the right direction.

202 / Paul Rabinow and Limor Samimian-Darash

What are the stakes? At the very least, they concern the status of the human, humanity, anthropos, or whatever term one chooses to privilege; being anthropologists, we choose anthropos. It was entirely appropriate that the figure of anthropos that Mary Douglas and Aaron Wildavsky (1982; see also Douglas 1992) were concerned to characterize and the one that Ulrich Beck (1992, 2009) narrated were different. Although we think it is premature to name a new figure of anthropos coming into being today, we are convinced that the older figures and their clear-cut oppositions are being surpassed, rearranged, and reconfigured. We are convinced that the task of a twenty-first-century anthropology will have to confront these changes in multiple ways, perhaps most importantly in the form of its inquiry; that is to say, in its practice.

Today, we argue, to stand outside the figurations taking shape, as one could perhaps do in the case of cultural formations that turned on danger/certainty and risk/probability, would inevitably be to miss the modes of uncertainty we claim are taking shape today and tomorrow. If one’s object was the cultural formation of a different society in an ethnographic present, as the disciplinary norm had it, then sculpting a form that could be viewed from a certain distance, appreciated for its elements and their combinations and, ultimately, for their distance from the observer, made sense. Participant-observation in this mode did not mean becoming Melanesian or Moroccan but, rather, living in sufficient proximity to those one was attempting to represent and possessed of sufficient linguistic and analytic skills to be able to portray a way of life different from one’s own but not so different as to no longer qualify as comprehensibly human.

If one’s object was a long series of documented incidents that could be ordered through patient enumeration and classification in such a way that operations could then be performed on them to allow prediction within parameters of probability, as was the case for one mode of risk analysis, then embracing risk required a certain proximity to the phenomena at issue but also a certain distance. Understanding industrial accidents, suicide, or influenza epidemics as risks did not mean either that the observer was unconcerned with their impact or that they could be taken up as artifacts to be ordered and compared for their own sake.


These modes were not epochal markers, as they overlapped historically. It took almost a century, after all, for culture and society and their associated concepts and technologies of management and understanding to be articulated. During that century, as capitalism and the newly institutionalized social sciences advanced, the phenomenon of risk became increasingly central to the manner in which these societies and cultures, especially industrializing ones, were governed in the broad sense of the term. This centrality, we argue, did not simply displace or destroy attempts to understand and/or to manage difference of a cultural or societal sort but, rather, brought them into a complex and shifting set of relationships with risk technologies. Émile Durkheim’s (1951) classic analysis of suicide was statistical and social at the same time.

Today, as risk technologies are present everywhere in the world, as is discourse about identity formation, cultural difference, and the like, and as the chapters in this volume demonstrate in a remarkable variety of ways, the primacy, indeed, the very existence, of these two previous figurations is receding. They are being recombined, reconfigured, reevaluated, and in certain cases replaced. Perhaps, one day, the concepts, technologies, and modes of inquiry appropriate to uncertainty will be articulated, tested, and modified so as to achieve an apparatus-like stability and instrumentality. That day, however, is yet to come. We are still experimenting and exploring how to situate ourselves as scientific practitioners. Yet we do know that unmodified, older modes of participant-observation obfuscate what we are attempting to understand. They provide false assurance that the forms and standards requisite for further inquiry are already at hand. To claim that the stakes for developing such modes of understanding, self-formation, and governmentality are high is no exaggeration.

Mode

The problem reduced to its lowest terms is whether inquiry can develop in its own ongoing course the logical standards and forms to which further inquiry shall submit.
—John Dewey, Logic: The Theory of Inquiry

Research on preparedness for biothreats in Israel, conducted by Samimian-Darash between 2004 and 2007, made it clear that a new problem space was emerging. The notions of risk and risk technology were insufficient in confronting that problem space. For example, as Israeli health experts prepared for a pandemic flu event in 2005–2007, prompted by the global outbreak of avian influenza at the time, they had to wrestle with the conundrum that, until the event took place, it was impossible to know what actual virus strain would be involved. Although “the pandemic was already there,” the virus (or its specific pandemic strain) had yet to appear. This problem was reframed by Samimian-Darash as potential uncertainty: rather than a case of future risk (rendered through calculations and possibilities based on information accessible through past events), the potential future pandemic was a situation of “a pandemic without a virus,” which no known possibility was sufficient to counter. This raised the question of what preparedness practices to put forward before the nature of the actual event was known; that is to say, how was it possible to prepare for the potential uncertainty manifested in the case of pandemic influenza (Samimian-Darash 2013)?

While the problem of risk seemed to be changing in the world, anthropologists and sociologists were still trapped in concepts embedded in older problems, which prevented them from seeing the challenge of the new problem space. New concepts were needed and, subsequently, were developed (e.g., a pre-event configuration of preparedness, potential uncertainty, and event technology) to better understand and analyze the phenomenon of uncertainty. Initially, this research achieved its goal in terms of providing appropriate tools for anthropological inquiry, but, eventually, following Dewey, the need arose for further rectification and conceptual development.

A core aim in the genesis of this book was to test this framework anthropologically in diverse domains of study. During the course of this work, however, an additional challenge arose concerning the problem of uncertainty, specifically, the mode of anthropological inquiry appropriate for its study.
Uncertainty, we realized, was neither uniquely a distinctive mode of conceptualization nor simply the mode of technology applied to the problem; that is, the rationality of the event technology or other technologies of governmentality. We sensed that uncertainty opened up a distinctive, unexplored mode of anthropological inquiry and participation. In defining this mode, we distinguished it from three other modes of inquiry: theoretical/totalizing, descriptive/cultural, and philosophical/external.

Against a Theoretical/Totalizing Mode

How is rendering uncertainty as a mode of inquiry different from turning the concept of uncertainty into a theory, be it a theory of danger or of risk? As the case of influenza preparedness shows, multiple ways of dealing with uncertainty emerge in practice as policy makers, security analysts, and scientific experts tackle the problem. The results of their efforts do not correspond to a singular logic or a grand theory but, rather, a tight set of concepts and relational practices directed to meeting diagnosed threats. This work, however, also reveals specific resistances and limits. The anthropological concern, then, comes to focus on bringing the various forms of preparing for future potential uncertainty into analytical discussion, keeping their pragmatic heterogeneity vivid, and remaining attentive to issues of management and policies affecting those whose job it is to address such concerns.

We conclude, then, that studying uncertainties and their governmental technologies requires an analytical approach that facilitates the delineation of their complex formations, contemporaneous existence, and diverse conceptualizations of the future in concrete forms of life; that is, they must be studied anthropologically. The cases in this book illustrate such a mode of analysis in its diversity. Adriana Petryna’s work, for example, shows how various experts use new tools to challenge conventional ways of looking at ecosystems (usually based on the rationality of risk) and to take stock of phenomena that cannot be accounted for in the old framework. Such diagnosis and reflection before closing a conceptual grid is what constitutes good science, anthropological or other. Petryna thus introduces the concept of horizon as a new technology; she does not, however, conceptualize this technology into an all-encompassing theory. On the contrary, Petryna not only emphasizes the appearance of a new form for governing uncertainty but also indicates those areas that governments cannot protect, that is, the limits of this newly introduced rationality.

The cases in this book, instead of offering one central depiction of a problem of “society,” demonstrate that testing different uncertainty concepts and technologies as they appear in the various anthropological cases is scientifically and pragmatically rewarding. As opposed to a modernist analysis, our intention is not to point to an all-pervasive change in the world, for example, a shift from risk society to uncertainty society. Rather, we aim to promote a mode of analysis through which one may observe different modes of danger, risk, and uncertainty and the various technologies that are applied to them.

Against a Descriptive/Cultural Mode

Is uncertainty as a mode of inquiry simply thick description and acknowledgment of “heterogeneity”? How is this work different from “classic” cultural-social anthropological inquiry?


The conceptual diagnostic we suggest offers, along with the multiplicity of technologies we identify, more than an open-ended description (more or less thick)—more, that is, than a boundless uncertainty. Rather, the multiplicity is set within a structure or a particular form of conceptualization of objects. This anthropological labor is expressed in various cases in this book.

One enlightening case is provided in Austin Zeiderman’s study on governing the uncertain future of cities and urban environments in Colombia. Zeiderman shows how, in practice, the policy and governing of risk is transformed into the management of uncertainty. The field of intervention, once defined by experts in terms of zones of high risk, becomes a space of potential uncertainty that does not require strict measurement to be managed. Rebecca Lemov, through an analysis of the US military prison at Guantánamo, describes a shift from coercive to enhanced interrogation in the apparatus of torture. These distinct technologies are correlated with a change from risk-based rationality to information-based rationality. The new apparatus of torture constitutes a new problem—an impossible-to-calculate reality—that is also reflected in the formation of potentially uncertain subjects, that is, “unknown unknowns,” who need to be managed.

An anthropological mode of uncertainty is also concerned with the logos, the logic and coherence, of the inquiry. The ability to extract the concept from the field of possibilities/objects, the problem from its various solutions, is the foundation of this mode and inescapably implicates the researcher. We argue that a mode of uncertainty is not a condition or state derived from a “lack of knowledge” or an absence of the “right meaning” that can gradually be overcome by additional information (description). The uncertainty we identify is an approach to a problem space; it derives from a form of thought and practice engaged with the capacity to define and understand sets of events, without which those events would remain a “collection of details” or “things in the world” rather than analyzable objects or concepts.

Against a Philosophical/External Mode

Philosophy of science without history of science is empty; history of science without philosophy of science is blind.
—Imre Lakatos, “History of Science and Its Rational Reconstructions”

We would take Imre Lakatos’s assertion a step further and add that history and philosophy without anthropology are static. In a turbulent and changing world, we argue, remaining static is scientifically inadmissible, ethically deficient, and misleading for those concerned with policy. It became apparent to us that, as enamored as we are with the work of certain philosophers, the philosophical mode—even at its best—has critical limitations. The philosophical mode that enabled one both to create a diagnostic framework and at the same time to avoid reducing the multiplicity of cases seemed to render the anthropologist external to the concepts and objects observed, that is, an observer.

The mode of uncertainty as a practice of inquiry requires participant-observation. It thus entails (1) conceptualizing while understanding the complexity and multiplicity of the phenomenon; (2) retesting the conceptual framework; and (3) maintaining awareness that, through this practice, not only does the domain in question change but so also does the researcher her- or himself. In this process, uncertainty becomes not merely a mode of research but also a mode of the anthropologist’s subjectivation. These aspects, together, establish a distinctive mode of inquiry, one that takes uncertainty as an anthropological mode, through which the inquirer is not external to the inquiry but rather changed by it.

NOTES

CHAPTER ONE

1. This chapter draws upon an earlier paper presented as part of the Leipzig University 400th anniversary celebrations, and published electronically in 2009 as “Uncertainty Makes Us Free: Security, Modern Government and Liberal Freedom,” Behemoth: A Journal on Civilization 3: 37–57.
2. It is worthy of note that in nineteenth-century tort law, a right to recovery lay only where there was a contractual relationship between parties. So, for example, it could still be argued in the early twentieth century that a person injured by an exploding soda bottle, or made ill by the contents of a soft drink, could only have remedy from the purveyor of the goods, not the manufacturer with whom they had no contract. Normally this denied recovery since the purveyor played no part in the production of the harmful commodity and knew nothing of its risks.
3. It is worthy of note that in nineteenth-century cases dealing with contract and tort laws, judges never mentioned statistical probabilities when discussing foresight and expectations. This has remained substantially the case throughout the twentieth century. When pressed, judges resort to everyday expressions such as “in the natural course of things,” “more likely than not,” “on the cards” and so on (O’Malley 2000b).
4. In practice, nineteenth-century insurance contracts penalized negligence through different techniques, such as raising premiums or refusing insurance.

CHAPTER TWO

1. This organizational literature is vast. In addition to the fourteen contributions that appear in the 1998 special issue of Organization Science (9 [5]), examples include a special issue of International Studies of Management and Organization (Kamoche, Cunha, and Cunha 2003a) and numerous single articles in various journals (e.g., Akgun et al. 2007; Chelariu, Johnston, and Young 2002; Eisenberg 1990; Hatch 1999a, 1999b; Kamoche and Cunha 2001; Kamoche, Cunha, and Cunha 2003b; Mantere, Sillince, and Hamalainen 2007; Moorman and Miner 1998; Vendelo 2009; Zack 2000). For an early review of the literature, see Cunha et al. (1999). Although calls have been made within this field to move “beyond the jazz metaphor” to other metaphors of improvisation (Kamoche et al. 2003b), and despite cautionary notes against overestimating the efficacy of metaphors in general (Hatch 1999b), the jazz metaphor has continued to dominate scholarly discussions of organizational improvisation in organizational studies.
2. See the American Quarterly special issue on post-Fordist affect (2012), edited by Muehlebach and Shoshan, for a review of this literature.
3. For example, the institutionalization of accidents as a taken-for-granted feature of organizations in organizational studies has found expression in the notion of “normal accidents” (Scott 2003, 94).
4. For a discussion of creativity in jazz and its relation to contingency, see Wilf (2013a).
5. The notion of an “aesthetic of imperfection” was developed by jazz scholar Ted Gioia (1988), who has written at length about the importance of errors and retrospective sense-making in jazz improvisation.
6. Organizational theorists have also emphasized the fact that different subunits within companies—for example, R&D versus production units—also face different challenges and hence must be structured differently.
7. Indeed, as Brian Massumi (2002) argues, “it is the edge of virtual, where it leaks into actual, that counts. For that seeping edge is where potential, actually, is found,” as well as “emergence” and inducement of “the new” (43).
8. This is not to deny that at times the jazz band—particularly the big bands of the swing era—was understood and interpreted as a well-oiled machine, often derogatively so (Adorno 1982; Stowe 1994; Dinerstein 2003).
9. Of course, jazz symbolized these values also outside of the United States. For example, in France it influenced artists within surrealism (e.g., René Magritte) and new wave cinema (e.g., Jean-Luc Godard), as well as philosophers in the existential school (e.g., Jean-Paul Sartre). See Nettelbeck (2005, 95–188).
10. At the same time, some of this fascination with jazz’s emergent nature was the product of the incorrect belief that improvisation is creation ex nihilo.

CHAPTER THREE

1. The 2006 Unlawful Internet Gambling Enforcement Act (UIGEA) criminalized the transfer of funds from financial institutions to online gambling sites, making banks largely responsible for preventing their American clients from gambling. The law, however, did not make it illegal—or impossible—for Americans to place bets online; nor did it take full effect until 2010, by which point anti-UIGEA legislators were making headway with their agenda.
2. A poker site offers tips on how to arrange tables on one’s screen for optimal play: “If you play only a small number of tables simultaneously, it makes sense to arrange them in a tiled fashion all next to each other so that you can follow the action at all tables. If you multitable eight, twelve or even more tables, you should switch to a ‘cascading’ or ‘stacked’ table arrangement” (see http://www.tournamentterminator.com/tournament-strategy/online-poker/tips-multitabling-effective/). The site recommends that players buy a second monitor.
3. The term “grinding” in online poker has a different connotation than in online video games like World of Warcraft, or in live land-based gambling, where “grind joints” are mocked as places for the poor and unwise. Online multitablers go as far as to boast of their grinding powers, some even claiming the title in their online name, e.g., “grinder007.” While Lee and LiPuma (2012) rightly point out that the game of poker has become morally and culturally valorized for its high risk and volatility, online poker has valorized a low-volatility, seemingly unheroic mode of play.
4. The concept of “deep play” was first elaborated by Jeremy Bentham to describe play in which financial stakes run “irrationally” high despite the fact that chance will determine the outcome (in Geertz 1973, 431).
5. Decision making, a number of scholars have argued, is a distinctively fraught domain of contemporary life. “Everyday risks present us with the necessity of making a seemingly never-ending set of choices,” writes Hunt (2003, 169). “Modern individuals are not merely ‘free to choose,’” Rose (1999, 87) elaborates, following Giddens (1991), “but obliged to be free, to understand and enact their lives in terms of choice.” Melucci (1996, 44) similarly writes that “choosing is the inescapable fate of our time.” Software-assisted online poker is an arena in which players are grappling with this fate.
6. PokerTracker, originally developed in 2001 and today in its fourth iteration, is credited with bringing information technology solutions to online poker. Today Hold’em Manager is the leading poker software product.
7. Heads-up displays (HUDs) are a common feature of other online gaming interfaces such as World of Warcraft, in which HUDs hover over other players’ avatars, communicating information about their status, their strengths, their historical record, and the like (e.g., see Galloway 2012).
8. As Zaloom (2006, 142) notes in her study of financial trading, although numbers are typically associated with “objectivity and certainty,” in some cases they indicate qualitative, fluctuating information rather than stable points of certainty.
9. HUD technology is not unlike the “syndromic surveillance” system that Samimian-Darash identifies at work in contemporary public health, similarly described by Lakoff (2013) as a program of “vigilant monitoring”: its problem is how to know what data is significant, and its solution is to track and compile as much information as possible. In poker as in public health, the law of large numbers holds: the more events one tracks, the more trust one can place in the exceptionality of the patterns detected.
10. Thanks are due to Limor Samimian-Darash for suggesting the phrase “reverse scenario” to describe retrospective poker simulations.
11. Financial traders, Zaloom (2006, 128) reports, are similarly invested in “dismantling narratives of success or failure.” She describes how managers at one trading firm claimed they didn’t care if traders made or lost money as long as they practiced discipline: “The trader’s responsibility was to his technique of self-regulation, not to the profit and loss figure at the end of the day” (129).
12. “Each hand interlocks with the next,” wrote the author of a 2006 profile of online poker addiction (Schwartz 2006, 55). “Time slows down to a continuous present, an unending series of buildups and climaxes. The gains and losses begin to feel the same.” For an extended account of the ways in which technological interface contributes to the experience of gambling addiction, see Schüll 2012.
13. Appadurai (2011, 524) writes: “We might say that while some actors in the field of finance do know what they don’t know, and perhaps also what they would like to know, they certainly have no good way to measure what they don’t know, and even more, they do not know how to measure it probabilistically. Thus uncertainty remains outside all financial devices and models.”
14. In contemporary financial risk taking, Appadurai (2011) discerns a dispositional turn away from the methodicality and self-doubt of Puritanism toward a heady, “swashbuckling” confidence. In his account, as market devices become hypermethodical, market actors become “avaricious, adventurous, exuberant, possessed, charismatic, excessive, or reckless in the manner that Weber argued was exactly not the spirit of modern capitalism” (524). Online poker players grinding methodically through poker hands in front of their multiple screens paint a rather less exuberant profile of contemporary market actors and also suggest that devices and actors are more blurred than they are divided. Their mode of uncertainty—their “uncertainty imaginary,” to use Appadurai’s term—is a dispositional admixture of anxious self-discipline and speculative ambition (in a dose of exactly one unit, in Justin’s case) that is well captured by O’Malley’s phrase, “enterprising prudentialism” (2000, 465).

CHAPTER FOUR

1. See the “Complete 9/11 Timeline” at http://www.historycommons.org, an open-content investigative journalism project, on matters of fact not otherwise referenced throughout this chapter.
2. I use the term “police” throughout this chapter to refer to the large range of subfederal law enforcement in the United States, which includes city, county, state, and tribal police; sheriff departments; and highway patrol, among others.
3. I researched security practices among police and in parts of the intelligence community from 2006 through the beginning of 2014, conducting multisited fieldwork in the United States and at Interpol headquarters in France. I use pseudonyms to refer to all individuals with whom I spoke during fieldwork; additionally, some quotations were adapted to avoid revealing geographic location when it could identify the interviewee.
4. Whereas previously a division was marked between foreign and domestic intelligence, “national intelligence” appeared as a significant term after 9/11. Thus, while the collection and use of intelligence was not new, its production was evolving, diversifying in type and institutional components.
5. The FBI was simultaneously developing its capacity in domestic counterterrorism, including the eGuardian portal, through which it could receive reports of suspicious activity directly from local officers and other government actors, such as the military. The Bureau wanted to assess and retain information for incorporation in its classified Guardian system. This suited some policing organizations, particularly those that did not have advanced intelligence capacity and did not want the added burden, but concerned others that wanted to retain control over information about citizens to whom they were accountable.
6. On risk and prevention, see, for example, Beck 1992; Ericson and Haggerty 1997; Luhmann 1993; and O’Malley 2000.
7. On preemption, see Amoore and De Goede (2008); Anderson (2010); de Goede and Randalls (2009); Ericson (2008); and Massumi (2007).
8. As Macedo (2008: 9–10) differentiates, the “first and clearest case of just war is defensive: to defend against an unjust attacker and to secure the conditions of peace,” but there is also “one additional narrow category of just war: preemptive war in response to a threat of attack that is not only overwhelming but also so imminent as to allow no time for deliberation and no choice of means.” Policy commentators were duly observant. Macedo observed, “the Bush Doctrine of prevention [is] misleadingly labeled as ‘preemption’” (2008, 10; italics in the original). A RAND publication noted that the Bush administration’s redefinition of the term differed from “generations of scholars and policy-makers” (Mueller 2006, xi).
9. François Ewald influentially analyzes precaution as stemming from the “uncertainty of scientific knowledge itself” (2002, 286). Prominent in European environmentalism, the precautionary principle has been explored as the justification for taking measures, despite lack of certainty, in the face of the threat of terrorism (cf. Aradau and van Munster 2007; de Goede and Randalls 2009; Stern and Weiner 2006). Preemption in this literature is differentiated by the way uncertainty is taken up and made to manifest.
10. Notably, when these concern terrorism, intentionality is assumed and these are taken to be performative utterances endowed with great illocutionary force. See John L. Austin, How to Do Things with Words (1962 [1955]).
11. When CVE shifts from addressing causes to intervening in how subjects respond to them, however, potential reenters and uncertainty gains a positive valence, as “resilience” is built up in subjects perceived as vulnerable to radicalization.
12. One version of “anticipatory action” is that it is enabled to work upon imaginations of the future (cf. Anderson 2010). Also, Amoore (2011, 2013) describes how data derivatives are put to use in an anticipatory logic, which does not predict what will happen but projects the fragments as a way of rendering calculable possible futures. Here, I wish to distinguish a distinct anticipatory mode of uncertainty and Suspicious Activity Reporting as a technology that works in that mode.
13. But see Hidek (2011, 247), who argues, “by claiming that state and local intelligence networks evolved from the ‘bottom-up’, those responsible for their birth and expansion (defence contractors supported by key federal officials) maintain the ability to carry out the work without informed oversight and regulation.”
14. See, for example, what is provided under “Background” by the Information Sharing Environment at ise.gov.
15. Authorizing instruments include the Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism (USA PATRIOT) Act of 2001; the Homeland Security Act of 2002; and the Intelligence Reform and Terrorism Prevention Act of 2004 (IRTPA).
16. These included the IACP, the National Sheriffs’ Association, the National Organization of Black Law Enforcement Executives, the Major Cities Chiefs Association, and the Police Foundation.
17. This included, by 2007, when fusion centers were being actively promoted by the federal government, five interconnected grant programs: (1) the State Homeland Security Program, (2) the Urban Area Security Initiative (UASI), (3) the Law Enforcement Terrorism Prevention Program (LETPP), (4) the Metropolitan Medical Response System (MMRS), and (5) the Citizens Corps Program (CCP).
18. Judeo-Christian tradition held that it was not possible to know God’s mind, but that it was possible to discern the significance of our moment in history. Scholarship on discernment took a different turn with the work of Heidegger; the effect of the hermeneutic shift was a decoupling of discernment and history within these analyses. Discernment as a capacity developed by police officers, however, is linked to neither comprehensive history nor the universal/eternal, instead operating in, per Paul Rabinow (2011), a contemporary mode with attention to the historical aspects of the world, which are inescapable, as well as the distinctive forms of objects and life practices that are all around.
19. “South Florida Locations Frequented by the 9-11 Terrorists,” Sun Sentinel, http://www.sun-sentinel.com/sfl-terrorists-florida-map,0,2619314.mapmashup.
20. This excludes “foreigners largely radicalized abroad” and acts attributed to “violent extremists inspired by non-jihadist causes such as radical environmentalism, animal rights, or anti-abortion causes” but includes the “shoe bomber” Richard Reid (December 2001), the Transatlantic Airliners plot (August 2006), the attempted airline bombing by Farouk Abdulmutallab (Christmas Day 2009), the Printer Cartridge plot (2010), and Quazi Mohammad Rezwanul Ahsan Nafis’s attempt on the Federal Reserve Bank of New York (2012). See Bjelopera (2013, 1).
21. Ten defendants charged with terrorism-related crimes formally, but unsuccessfully, argued the entrapment defense in six trials between 9/11 and early December 2011. See Bjelopera (2013).

CHAPTER FIVE

1. In December 2013, after a compromise defense bill from Congress cleared the way for possible overseas transfers, three Chinese ethnic Uighur men were transported to Slovakia, a move that came more than five years after a US federal judge ruled their detention unlawful. Note: Between the time of my first draft and the publication of this chapter, a few prisoners have gained their release, and debate has continued. Yet the camp remains unclosed. Despite the US government’s declaration as of 2013 that the hunger strike was largely over (see discussion below), hunger strikers continue. I have had to refrain from further updates as of late 2014; however, the main argument about Guantánamo as a Catch-22 of uncertainty continues to hold.
2. Rumsfeld’s much-jeered-at term, “unknown unknowns,” it seems, already existed in academic sociological circles (Kerwin 1993) as well as among designers and engineers. More broadly, on the varieties of nonknowledge and the growth, in late capitalism, of such forms that pair knowing and ignorance, both increasing exponentially, see the helpful essay by Matthias Gross (2007). The matter is not about “convert[ing] possible uncertainty into calculated risks”—that is, finding a way to release the prisoners to reduce the risk of their launching terrorist attacks. It is more about a kind of uncertainty that is impossible to calculate or reckon with, and therefore leads to a sort of absurd stasis—all of which occurs on the world’s stage.
3. In fact, the president used the phrase “no man’s land” to describe Guantánamo in the same April 30, 2013, press conference.
4. While one hundred of the detainees refused to eat, a corps of forty health workers was dispatched to force them to do so.
5. The longer they were held, the more they resembled “mere life,” as noted by one of their defense attorneys. They were no longer really living, just existing full of despair and suffering from PTSD as the result of their detention. See accounts by defense attorneys in Goodman (2013). Literature on biopower and “bare life” is vast; but see Agamben (1998), Foucault (2004), and Rabinow (1998).
6. This is one of three kinds of agnotology Proctor and Schiebinger examine: the first is ignorance as a native state, the second is ignorance as a lost realm (or selective choice), and the third is ignorance as a strategic ploy, or active construct (3). Johns addresses the Bush administration in this 2005 article, written before the Obama administration inherited the problem of DCGB. Johns’s more recent book (2013) addresses more broadly the problem of states’ use of lethal force (nonlegality in international law), often urged to be subject to procedures of transparency and accountability. Instead, what seems to be a “vacuum” of power is very often intensely regulated and regulable (7–8). This is the case at Guantánamo, said to operate in a “black hole” of international law, but more accurately viewed as an intensively regulated site of emerging political-social-scientific relationships. As we will see, the camp’s detainees are the subjects both of knowledge (the piling up of reports, the regularization of interrogations) and ignorance (the risk they embody, the gray areas and lacunae of their potential and actual behavior). Both stem from minutely focused and generative procedures and normative practices.
7. On the unique history of and ambiguous space occupied by Guantánamo, see Kaplan (2005); on Foucauldian normalizing technologies at Guantánamo, see Welch (2009).
8. President Obama proposed building a Super-Max prison in the Midwest to house them eventually, but even this not-very-visionary solution met with fierce public opposition: it is too expensive.
9. The bill stipulated no funds could be “used to transfer, release, or assist in the transfer or release to or within the United States, its territories, or possessions, Khalid Sheikh Mohammed or any other detainee.”
10. Released via WikiLeaks (Gitmo Files), the dossiers of Guantánamo-based interrogations and current prisoners are available online (e.g., The Guantánamo Docket 2013). However, full information about the interrogations conducted on “Black Sites” as part of the CIA’s once secret program that ran from 2002 to 2006, and that still enjoys an unprecedented legal protection in courts (as of June 2013), is not available, even to defense attorneys representing those prisoners who were interrogated in black sites. A system has been set up to provide “summaries” of black-site treatment and results of interrogation to defense teams; judges in the case will compare the summary and the full record to see that the summaries are not misrepresentative. However, this puts the prisoner (when tried, as in the case of Abd al Rahim al Nashiri currently, accused of planning the USS Cole bombings) in the position of not knowing what the government claims his treatment consisted of: “Kammen also complained that there was no way to verify the accuracy of the prosecution’s summary of how Nashiri had been treated because under the rules of the commission, Nashiri is not allowed to read the classified summaries” (Rosenberg 2013).
11. Ibrahim Othman Ibrahim Idris, 52, arrived at the prison in January 2002 and was never charged; he was transferred with Noor Uthman Muhammed, 51 (“Guantanamo Detainees Transferred to Sudan,” December 9, 2013, http://www.bbc.com/news/world-us-canada-25443897). Five Algerian detainees also have been transferred: two in August 2013, two in December 2013, and one in March 2014.
12. The final report determined that forty-eight detainees were “too dangerous to transfer but not feasible for prosecution” (US Department of Justice 2010).
13. A 566-page report issued by a bipartisan task force organized by the nonprofit Constitution Project and co-chaired by Asa Hutchinson, a former Bush administration official, and James Jones, a former Democratic representative and ambassador to Mexico, concluded that “it is undisputable that the United States engaged in the practice of torture,” and that “the nation’s most senior officials . . . bear ultimate responsibility for allowing and contributing to the spread of illegal and improper interrogation techniques” (Constitution Project Task Force 2013).
14. The condition of evoking something beyond imagining and beyond the strategy of scenario-running and simulation is typical of “potential uncertainty” and its accompanying set of “event technologies” (Samimian-Darash 2013). However, it is important to recognize the Cold War origins of such technologies: on “simulating the unthinkable,” see Ghamari-Tabrizi (2000); on futurist scenarios and risk technologies associated with simulation, see Mallard and Lakoff (2010).
15. During the Cold War, each military service hired its own experts: Julius Segal worked for the Army’s HumRRO (previously the Air Force’s Human Relations Research Laboratory); Edgar H. Schein for the Walter Reed Army Institute and then MIT; Albert Biderman for Air University, Maxwell Air Force Base; Harry Harlow, with I. E. Farber and Louis Jolyon West, for the Air Force. Raymond Bauer worked at Harvard and MIT, including its CIA-connected CENIS. On the creation of SERE by West, see Mayer (2005, 2009); Lemov (2005); Alexander (1979).
16. On the current functioning and proliferation of SERE camps, see Mayer (2009), and also additional journalists’ and first-hand accounts (CBS News 2008; Christopher Hitchens 2008).
17. Glenn Petersen, e-mail message to author, March 9, 2012. Petersen is now Professor of Anthropology and International Affairs, and Chair of the Department of Sociology & Anthropology of Baruch College, City University of New York.
18. As recently as 2009, the American military has used the existence of SERE camps as proof that it did not engage in torture: a full-page ad in the New York Times asserted that because “our own troops are subjected to waterboarding as part of their training,” it cannot be torture, and that the US media have therefore “been misleading the world that the United States condones techniques of barbarous cruelty” (New York Times 2009).
19. On this, specifically the use of SERE psychologists’ firm Mitchell and Associates to design interrogations, see Mayer (2009) and Melley (2011).
20. Eventually Lt. Col. Diane Beaver wrote that “enhanced” techniques, however, would not violate the torture statute of the GC “because there is a legitimate governmental objective in obtaining the information necessary . . . for the protection of the national security of the United States, its citizens, and allies. Furthermore, these methods would not be used for the ‘very malicious and sadistic purpose of causing harm’” (Constitution Project Task Force 2013, 55 [emphasis added]). “Technically” these techniques might violate the universal code of military justice, but nonetheless they should be approved and granted prospective impunity: “it would be advisable to have permission or immunity in advance . . . for military members utilizing these methods” (ibid.). Despite concerns among many (Marine Corps, Army, and Navy JAGs, as well as the Guantánamo crime investigative task force), the Beaver recommendations were approved. On the legal justification for sidestepping the Geneva Conventions and US law, see in addition the memoranda of Jay Bybee and John Yoo, which were secured through a journalist’s FOIA request and are included in Danner (2004).
21. There are parallels in biomusic used to create “total environments” that engineer emotional states (Joseph 2011), as well as the Cold War dolphin research conducted by John Lilly.
22. As discussed in O’Malley (2012): “The Commission convinced itself that ‘the most important failure was one of imagination’. . . . The theme of imagination recurs throughout the report, leading to a general conclusion that ‘the possibility was imaginable and imagined.’ While acknowledging that imagination is not a gift usually associated with bureaucracies, the Commission argued that ‘it is therefore crucial to find a way of routinizing, even bureaucratizing imagination’” (6; references omitted). O’Malley argues that a neoliberal managerial response to the threat of terrorism and other uncertainties is not so much the increase of imaginative faculties but of “resilience.”
23. Honor was central to the “paradigm” of the self—of the soldier undergoing interrogation, of whatever side. As Petersen recalled, “While the program was certainly meant to prepare us as individuals to survive, the main emphasis was in fact placed on preparing us to uphold American honor—we were being trained to withstand torture so that our troops would not look so bad once again.” Petersen continued, “We were taught unequivocally in the SERE program that as Americans we were morally superior to those who would torture us if we were captured, and that by undergoing this practice form of torture at home we would increase America’s store of honor. But instead I now find that I was used as a guinea pig, allowing my government to develop exactly the skills our enemies were condemned for. The CIA and others honed those skills as a consequence of my willingness to have them experiment on me. Where’s the honor in that?” (Petersen, unpublished op-ed document).
24. Detention Center Guantanamo Bay as of 2009 comprised three detention sectors: Camp Delta, which contains six camps; Camp Echo, a 612-unit detention center (considered part of Delta); and Camp Iguana, a low-security compound about half a mile away from the main facilities. Camp 6 relatively recently gained a new $30 million medium-security facility modeled after a county jail in southern Michigan. Camp 7 is a source of controversy; according to artist Molly Crabapple, who visited Guantánamo in 2013, “This is a place where they won’t even admit that Camp 7—which is where Khalid Sheikh Mohammed and Abd al Rahim al Nashiri are . . . they won’t admit it exists. This is a place where even how KSM dyes his beard is classified” (Thompson 2013). Crabapple also mentions that the detention center has a gift shop selling commemorative t-shirts and shot glasses.
25. Peters considers the longer history of torture, which developed in medieval and early modern Europe primarily as a means of gathering evidence in ordinary criminal cases.
26. The scenario is a thought experiment in which the exigencies of a possible or potential terrorist act in the future cause effects in the present, and a suspect may be justifiably tortured or unencumbered of his or her human and other rights. The ticking time bomb as thought experiment is discussed widely (e.g., Mayer 2009).
27. On this, see debates and counter-claims, especially Dick Cheney’s versus CIA interrogator Kriakis, and current opinions about NSA success in this regard.
28. In a 2005 “think piece” for the New York Times, Joseph Lelyveld asked whether scientific torture methods—threats and administration of somewhat bearable or at least survivable pain, disorientation, the conveyance of hopelessness—can be justified using a calculus of sorts. Would it be worth sanctioning as national policy to save ten thousand lives? “If it could be shown with some certainty that, say, 10,000 lives would be saved, few purists would argue against the infliction of pain,” Lelyveld wrote. “If the number was a much smaller multiple of 10 and the degree of uncertainty candidly acknowledged, the true murkiness of the issue in the real world would have to be faced” (Lelyveld 2005, emphasis added). Justice Antonin Scalia reportedly approved of the ticking-time-bomb justificatory logic as of 2007, during an Ottawa conference of international jurists and national security officials: during a panel discussion about terrorism, torture, and the law, a Canadian judge remarked, “Thankfully, security agencies in all our countries do not subscribe to the mantra ‘What would Jack Bauer do?’” Justice Scalia responded with a defense of Agent Bauer, arguing that law enforcement officials deserve latitude in times of great crisis. “Jack Bauer saved Los Angeles. . . . He saved hundreds of thousands of lives,” Judge Scalia reportedly said. “Are you going to convict Jack Bauer?” He then posed a series of questions to his fellow judges: “Say that criminal law is against him? ‘You have the right to a jury trial?’ Is any jury going to convict Jack Bauer?” “I don’t think so,” Scalia reportedly answered himself. “So the question is really whether we believe in these absolutes. And ought we believe in these absolutes” (Wall Street Journal 2007).

CHAPTER SIX

1. Many of the respondents were unfamiliar with terms such as PTSD and ITT or even Buddhist theological terminology. Nevertheless, with the help of an informant/translator, painstaking methods were used to assist respondents in the graphic and narrative illustration of their phenomenologies of time, fate, and intergenerational silent relations and intergenerationally transmitted parental legacies of “suffering.”

CHAPTER SEVEN

1. On the strategic crux of tropes of modern difference in sociological thinking about risk, see Samimian-Darash and Rabinow’s introduction to this volume.
2. The critique of dual use elaborated in this chapter was developed with Paul Rabinow in Designing Human Practices: An Experiment in Synthetic Biology (Chicago: University of Chicago Press, 2012).
3. See the introduction to this volume.

CHAPTER EIGHT

1. I am grateful to Michael Joiner, Utpal Sandesara, Paul Mitchell, and Janet Monge for their engagement and invaluable conversation and comments. My gratitude also extends to Marten Scheffer for the time he spent explaining his work to me.
2. Latour and Woolgar employ the concept of “reified theory” to show how biochemical entities are construed through instrumental activity. My interpretation of the phenomenotechnique is slightly different, as some scientific processes described here do not have any straightforward instrumental meaning. Rather, horizoning work asks questions such as: Where does the critical fold between the actionable and the nonactionable reside? Or, how is rising CO2 an exposure event? The answers may not come fast enough to create the necessary tools to equitably manage environmental risk. See Petryna (2013[2002]).
3. From “Stimulating the Economy While Stopping Irreversible Climate Change Credit Risk Including Contagion.” Available at http://mts.sustainableproducts.com/Capital_Markets_Partnership/Contagion/Report%206-21-13%20(2).pdf.
4. Talk given at the American Geophysical Union’s 2011 fall meeting; from http://thinkprogress.org/climate/2011/12/07/384524/noaa-us-sets-record-with-a-dozen-billiondollar-weather-disasters-in-one-year/.
5. One example of being blindsided relates to the disappearance of the earth’s “carbon sinks” (Canadell et al. 2007). The sudden loss of nearly a third of CO2-offsetting reservoirs has been noted at a time when levels of CO2 are surpassing the symbolic threshold of 400 parts per million (Shukman 2013). While most CO2 goes into the atmosphere, land, and ocean, researchers are not sure where the inordinately excessive CO2 will go. Also see Lahsen (2009).
6. The 2013 Yarnell wildfire killed nineteen of a twenty-member crew of elite firefighters “in the worst loss of firefighters in a single wildfire in eighty years” (Freedman 2013). Firefighters using pop-up emergency fire shelters (a last-ditch tool that failed to withstand the prolonged heat) were overrun by an unpredictably moving wildfire. A National Weather Service incident meteorologist on the scene said that “highly localized and chaotic winds” made individual wildfires more erratic.
7. Marcia McNutt, public lecture, National Conference and Global Forum on Science, Policy and the Environment, Washington, DC, 2013.
8. Carpenter, Stephen. 2003. “Regime Shifts in Lake Ecosystems: Pattern and Variation.” Accessed June 1, 2013. http://limnology.wisc.edu/regime/chapter3_14jul03.pdf.
9. Underestimation of such behaviors is a problem. The recent East Coast superstorm was not even a worst-case scenario: it reflected a mere “moderate” warming scenario. In this scenario, CO2 concentration and global temperature “closely match the projections” (Rahmstorf, Foster, and Cazenave 2012, 4).
10. Peter Thomas, public lecture, National Conference and Global Forum on Science, Policy and the Environment, Washington, DC, 2013.
11. Craig Fugate, public lecture, National Conference and Global Forum on Science, Policy and the Environment, Washington, DC, 2013.
12. On the problem of preparedness when “risks are not located in the present or in the future but in a ‘shared’ temporal space,” see Samimian-Darash (2011).
13. I use the term system-stabilizing as opposed to life-saving action since the object is not life itself, but a dynamical system that includes, and sometimes excludes, life.
14. Personal communication, Marten Scheffer, May 6, 2013.
15. The lack of correspondence between modeling and reality is a byproduct of a certain idea of conservation framed by legal definitions focusing on numerical head counts and inconsistent meanings of geographic range. The technical definition of extinction leaves much to be desired in terms of coming to grips with its real dimensions (Cahill et al. 2012).

1.

2. 3. 4. 5. 6. 7. 8. 9.

I use the term “devices” in the sense of a tool that connects the body to its environment, while I use “techniques” in the sense of the application of a rationality to the world. See Keck and Lakoff (2013). The same kind of comparison between microbiologists and naturalists has been made by Greenough (2003). Personal communication, Dhanasekaran Vijaykrishna, July 23, 2009. Personal communication, Justin Bahl, July 15, 2009. Personal communication, Dhanasekaran Vijaykrishna, July 23, 2009. Personal communication, Justin Bahl, July 22, 2009. Personal communication, Mike Kilburn, September 25, 2007. Personal communication, Mike Kilburn, July 8, 2011. http://hkbws.org.hk/BBS/viewthread.php?tid=10197&extra=page%3D4; http://www .civic- exchange.org/wp/111006bioindicators_en/. CHAPTER TEN

1. 2.

3.

For an earlier analysis of some of the material on which this chapter is based, see Austin Zeiderman, Environment and Planning A, 44, no. 7 (2012): 1570–1588. “Risk management” is a translation of gestión del riesgo, which refers to a specific set of policies and programs that have been assembled in Colombia over the past few decades. High threat was characterized by “hillsides exhibiting processes of active or inactive instability and/or intensive erosion” where the probability of landslide was above 44 percent in a ten- year period. Medium threat referred to “hillsides without evidence of current instability, with erosion processes of medium or high intensity” with a probability of landslide between 12 and 44 percent over the same period of time. And low threat corresponded to “low sloping hillsides, high sloping hillsides on rock, rectilinear hillsides, or flat zones in consolidated urban areas” with a probability of under 12 percent of landslide.

REFERENCES

Adorno, Theodor W. 1982. "On the Fetish-Character in Music and the Regression of Listening." In The Essential Frankfurt School Reader, edited by Andrew Arato and Eike Gebhardt, 270–299. New York: Continuum.
Agaibi, Christine E., and John P. Wilson. 2005. "Trauma, PTSD, and Resilience: A Review of the Literature." Trauma, Violence & Abuse 6 (3): 195–216.
Agamben, Giorgio. 1998. Homo Sacer: Sovereign Power and Bare Life. Palo Alto, CA: Stanford University Press.
Agamben, Giorgio, and Ulrich Raulff. 2004. "Interview with Giorgio Agamben—Life, a Work of Art without an Author: The State of Exception, the Administration of Disorder and Private Life." German Law Journal. http://www.germanlawjournal.com/article.php?id=437.
Agger, Inger, Victor Igreja, Rachel Kiehle, and Peter Polatin. 2012. "Testimony Ceremonies in Asia: Integrating Spirituality in Testimonial Therapy for Torture Survivors in India, Sri Lanka, Cambodia, and the Philippines." Transcultural Psychiatry 49 (3–4): 568–589.
Ake, David. 2002. Jazz Cultures. Berkeley: University of California Press.
Akgun, Ali E., John C. Byrne, Gary S. Lynn, and Halit Keskin. 2007. "New Product Development in Turbulent Environments: Impact of Improvisation and Unlearning on New Product Performance." Journal of Engineering and Technology Management 24 (3): 203–230.
Alexander, Shana. 1979. Anyone's Daughter: The Times and Trials of Patricia Hearst. New York: Viking.
Amin, Ash. 2013. "Surviving the Turbulent Future." Environment and Planning D: Society and Space 31 (1): 140–156.
Amoore, Louise. 2011. "Data Derivatives: On the Emergence of a Security Risk Calculus for Our Times." Theory, Culture and Society 28 (24): 24–43.
———. 2013. Politics of Possibility: Risk and Security Beyond Probability. Durham and London: Duke University Press.
Amoore, Louise, and Marieke De Goede. 2008. "Transactions after 9/11: The Banal Face of the Preemptive Strike." Transactions of the Institute of British Geographers 33 (2): 173–185.
Anderson, Ben. 2010. “Preemption, Precaution, Preparedness: Anticipatory Action and Future Geographies.” Progress in Human Geography 34 (6): 777–798.

Appadurai, Arjun. 2011. "The Ghost in the Financial Machine." Public Culture 23 (3): 517–539.
———. 2012. "The Spirit of Calculation." Cambridge Anthropology 30 (1): 3–17.
Aquinas, Thomas. 2013 [1265–1274]. Summa Theologica. Translated by Fathers of the English Dominican Province. New York: Cosimo Classics.
Aradau, Claudia. 2007. "Law Transformed: Guantanamo and the 'Other' Exception." Third World Quarterly 28 (3): 489–501.
Aradau, Claudia, and Rens van Munster. 2007. "Governing Terrorism through Risk: Taking Precautions, (Un)knowing the Future." European Journal of International Relations 13 (1): 89–115.
Aradau, Claudia, Luis Lobo-Guerrero, and Rens van Munster. 2008. "Security, Technologies of Risk, and the Political: Guest Editors' Introduction." Security Dialogue 39 (2–3): 147–154.
Argenti-Pillen, Alexandra. 2000. "The Discourse on Trauma in Non-Western Cultural Contexts: Contribution of an Ethnographic Method." In International Handbook of Human Response to Trauma, edited by Arieh Y. Shalev, Rachel Yehuda, and Alexander C. McFarlane, 87–102. New York: Plenum Publishers.
Arnoldi, Jakob. 2004. "Derivatives: Virtual Values and Real Risks." Theory, Culture and Society 21 (6): 23–42.
Augustine, Saint. 2009. The Confessions. Translated by Henry Chadwick. Oxford World Classics. Oxford and New York: Oxford University Press.
Austin, John L. 1962 [1955]. How to Do Things with Words. Oxford: Clarendon.
Bachelard, Gaston. 1964. The Psychoanalysis of Fire. Boston: Beacon Press.
Baker, Tom, and Jonathan Simon, eds. 2002. Embracing Risk: The Changing Culture of Insurance and Responsibility. Chicago: University of Chicago Press.
Bankoff, Greg, Uwe Lubken, and Jordan Sand, eds. 2012. Flammable Cities: Urban Conflagration and the Making of the Modern World. Madison: University of Wisconsin Press.
Barrett, Frank. 1998. "Coda: Creativity and Improvisation in Jazz and Organizations: Implications for Organizational Learning." Organization Science 9 (5): 605–622.
Barrows, Mark. 1998. A Passion for Birds: American Ornithology after Audubon. Princeton: Princeton University Press.
Beck, Ulrich. 1992. Risk Society: Toward a New Modernity. New York: SAGE Publications.
———. 1994. "The Reinvention of Politics: Towards a Theory of Reflexive Modernization." In Reflexive Modernism: Politics, Tradition, and Aesthetics in the Modern Social Order, edited by Ulrich Beck, Anthony Giddens, and Scott Lash, 1–55. Stanford, CA: Stanford University Press.
———. 2002. "The Terrorist Threat: World Risk Society Revisited." Theory, Culture and Society 19 (4): 39–55.
———. 2009. World at Risk. Cambridge: Polity Press.
Belgrad, Daniel. 1998. The Culture of Spontaneity: Improvisation and the Arts in Postwar America. Chicago: University of Chicago Press.
Benjamin, Walter. 1968 [1939]. "On Some Motifs in Baudelaire." In Illuminations: Essays and Reflections, edited by H. Arendt, translated by H. Zohn, 155–200. New York: Schocken.
Bentham, Jeremy. 1962. The Works of Jeremy Bentham. Vol. 1. Edited by J. Bowring. New York: Russell and Russell.
Berliner, Paul. 1994. Thinking in Jazz: The Infinite Art of Improvisation. Chicago: University of Chicago Press.
Berns, Kenneth I., et al. 2012. "Adaptations of the Avian Flu Virus Are a Cause for Concern." Nature 9 (482): 153–154.

Bernstein, Peter. 1998. Against the Gods: The Remarkable Story of Risk. New York: John Wiley.
Beveridge, William. 1942. Social Insurance and Allied Services (Cmd. 6404). London: Her Majesty's Stationery Office.
Biderman, Albert D. 1957. "Communist Attempts to Elicit False Confessions from Air Force Prisoners of War." Bulletin of the New York Academy of Medicine 33 (9): 616–625.
Bjelopera, Jerome P. 2013. "American Jihadist Terrorism: Combating a Complex Threat." Washington DC: Congressional Research Service.
Boettiger, Carl, Noam Ross, and Alan Hastings. 2013. "Early Warning Signals: The Charted and Uncharted Territories." Theoretical Ecology. doi:10.1007/s12080-013-0192-6.
Bonanno, George A. 2004. "Loss, Trauma and Human Resilience: Have We Underestimated the Human Capacity to Thrive after Extremely Aversive Events?" American Psychologist 59: 20–28.
Boyd, Ian K. 2012. "The Art of Ecological Modeling." Science 337 (6092): 306–307.
Bradstock, Andrew. 2006. "The Reformation." In The Blackwell Companion to Political Theology, edited by Peter Scott and William T. Cavanaugh. New York: Wiley-Blackwell.
Bush, George W. 2002. "Text of Bush's Speech at West Point." The New York Times, June 1.
Butler, Declan. 2008. "Politically Correct Names Given to Flu Viruses." Nature 452: 923.
Butler, Judith. 2004. "Indefinite Detention." In Precarious Life: The Powers of Mourning and Violence, edited by Judith Butler, 50–100. London: Verso.
Cahill, Abigail E., Matthew E. Aiello-Lammens, M. Caitlin Fisher Reid, et al. 2012. "How Does Climate Change Cause Extinction?" Proceedings of the Royal Society 280 (1750): 1–9.
Caplan, Pat, ed. 2000. Risk Revisited. London: Pluto Press.
Carpenter, Stephen. 2003. "Regime Shifts in Lake Ecosystems: Pattern and Variation." http://limnology.wisc.edu/regime/chapter3_14jul03.pdf.
Castelão-Lawless, Teresa. 1995. "Phenomenotechnique in Historical Perspective: Its Origins and Implications for Philosophy of Science." Philosophy of Science 62 (1): 44–59.
CBS News. 2008. "Officer: Military Demanded Torture Lessons." http://www.cbsnews.com/stories/2008/09/25/terror/main4476962.shtml.
Chateauraynaud, F., and D. Torny. 1999. Les sombres précurseurs: Une sociologie pragmatique de l'alerte et du risque. Paris: EHESS.
Chelariu, Cristian, Wesley J. Johnston, and Louise Young. 2002. "Learning to Improvise, Improvising to Learn: A Process of Responding to Complex Environments." Journal of Business Research 55 (2): 141–147.
Chhim, Sotheara. 2012. "Baksbat (Broken Courage): The Development and Validation of the Inventory to Measure Baksbat, a Cambodian Trauma-based Cultural Syndrome of Distress." Culture, Medicine and Psychiatry 36: 640–659.
———. 2013. "Baksbat (Broken Courage): A Trauma-based Cultural Syndrome in Cambodia." Medical Anthropology 32: 160–173.
Choy, Timothy. 2011. Ecologies of Comparison: An Ethnography of Endangerment in Hong Kong. Durham: Duke University Press.
Cohen, Jon, and David Malakoff. 2012. "NSABB Members React to Request for Second Look at H5N1 Papers." ScienceInsider (March).
Collier, Stephen J., Andrew Lakoff, and Paul Rabinow. 2004. "Biosecurity: Towards an Anthropology of the Contemporary." Anthropology Today 20 (5): 3–7.
Collier, Stephen J. 2008. "Enacting Catastrophe: Preparedness, Insurance, Budgetary Rationalization." Economy and Society 37 (2): 224–250.
Congreso de Colombia. 1988. "Ley 46." Bogotá.
Consejo de Bogotá. 1987. "Acuerdo 11." Bogotá.

Considine, Mark. 2001. Enterprising States: The Public Management of Welfare-to-Work. Cambridge: Cambridge University Press.
Constitution Project Task Force. 2013. Report on Detainee Treatment. http://detaineetaskforce.org/read/index.html.
Cunha, Miguel Pina, Joao Vieira da Cunha, and Ken Kamoche. 1999. "Organizational Improvisation: What, When, How and Why." International Journal of Management Reviews 1 (3): 299–341.
Danieli, Yael. 1998. "Introduction: History and Conceptual Foundations." In International Handbook of Multigenerational Legacies of Trauma, edited by Yael Danieli, 1–17. New York: Plenum Press.
Danner, Mark. 2004. Torture and Truth: America, Abu Ghraib, and the War on Terror. New York: New York Review Books.
Davis, Steven J., Long Cao, Ken Caldeira, and Martin I. Hoffert. 2012. "Rethinking Wedges." Environmental Research Letters 8 (1). doi:10.1088/1748-9326/8/1/011001.
de Goede, Marieke, and Samuel Randalls. 2009. "Precaution, Preemption: Arts and Technologies of the Actionable Future." Environment and Planning D: Society and Space 27 (5): 859–878.
Dean, Mitchell. 1996. "Putting the Technological into Government." History of the Human Sciences 9 (3): 47–68.
Defense Authorization Act. 2011. http://www.gpo.gov/fdsys/pkg/CRPT-111hrpt491/pdf/CRPT-111hrpt491.pdf.
Delaporte, Francois. 1989. Disease and Civilization: The Cholera in Paris, 1832. Translated by Arthur Goldhammer. Cambridge, MA: MIT Press.
Deleuze, Gilles. 1990. The Logic of Sense. Translated by M. Lester and C. Stivale. New York: Columbia University Press.
Descola, Philippe. 2005. Par delà nature et culture. Paris: Gallimard.
Dewey, John. 1938. Logic: The Theory of Inquiry. New York: Henry Holt.
———. 2005. The Quest for Certainty: A Study of the Relation of Knowledge and Action. Whitefish, MT: Kessinger Publishing.
Dinerstein, Joel. 2003. Swinging the Machine: Modernity, Technology, and African American Culture between the World Wars. Amherst: University of Massachusetts Press.
DNPAD. 1998. "Plan Nacional Para La Prevención y Atención de Desastres." Bogotá: Ministro del Interior, Dirección Nacional para la Prevención y Atención de Desastres.
Doran, Nob. 1994. "Risky Business: Codifying Embodied Experience in the Manchester Unity of Oddfellows." Journal of Historical Sociology 7: 131–154.
Douglas, Mary, and Aaron Wildavsky. 1982. Risk and Culture: An Essay on the Selection of Technical and Environmental Dangers. Berkeley: University of California Press.
Douglas, Mary. 1966. Purity and Danger: An Analysis of Concepts of Pollution and Taboo. London: Routledge and Kegan Paul.
———. 1994. Risk and Blame: Essays in Cultural Theory. London: Routledge.
Doyle, Michael W. 2008. "Standards." In Striking First: Preemption and Prevention in International Conflict, edited by S. Macedo. Princeton: Princeton University Press.
DPAE. 1999. "Concepto Técnico No 3280." Bogotá: Dirección de Prevención y Atención de Emergencias, Área de Análisis de Riesgos, Secretaría de Gobierno, Alcaldía Mayor de Santa Fe de Bogotá.
———. 2006. "Plan de Acción Para La Mitigación de Riesgos y Rehabilitación En El Sector Altos de La Estancia, Localidad de Ciudad Bolívar." Bogotá: Dirección de Prevención y Atención de Emergencias, Secretaría de Gobierno, Alcaldía Mayor de Bogotá, D.C.

Drexler, Madeline. 2002. Secret Agents: The Menace of Emerging Infections. Washington DC: Joseph Henry Press.
Durkheim, Emile. 1951. Suicide: A Study in Sociology. Translated by John A. Spaulding and George Simpson. New York: The Free Press.
Dyba, Tore. 2000. "Improvisation in Small Software Organizations." IEEE Software 17 (5): 82–87.
Edkins, Jenny. 2003. Trauma and the Memory of Politics. Cambridge: Cambridge University Press.
Eghigian, Greg. 2000. Making Security Social: Disability, Insurance and the Birth of the Social Entitlement State in Germany. Ann Arbor: University of Michigan Press.
Eisenberg, Eric M. 1990. "Jamming: Transcendence through Organizing." Communication Research 17 (2): 139–164.
Elden, Stuart. 2007. "Rethinking Governmentality." Political Geography 26 (1): 29–33.
Ellis, T., et al. 2009. "Analysis of H5N1 Avian Influenza Infections from Wild Bird Surveillance in Hong Kong from January 2006 to October 2007." Avian Pathology 38 (2): 107–119.
Ericson, Richard, and Kevin Haggerty. 1997. Policing the Risk Society. Oxford: Clarendon.
Ericson, Richard. 2008. "The State of Preemption: Managing Terrorism Risk through Counter Law." In Risk and the War on Terror, edited by L. Amoore and M. de Goede, 57–76. London and New York: Routledge.
European Scientific Working-group on Influenza (ESWI). 2011. "H5N1: A Persistent Danger." The Influenza Times Conference News Letter 13 (September).
Ewald, François. 1991. "Insurance and Risk." In The Foucault Effect: Studies in Governmentality, edited by Graham Burchell, Colin Gordon, and Peter Miller, 197–210. Chicago: University of Chicago Press.
———. 2002. "The Return of Descartes's Malicious Demon: An Outline of a Philosophy of Precaution." Translated by Stephen Utz. In Embracing Risks: The Changing Culture of Insurance and Responsibility, edited by Tom Baker and Jonathan Simon, 273–301. Chicago: University of Chicago Press.
Executive Office of the President of the United States of America. 2011. "Empowering Local Partners to Prevent Violent Extremism in the United States." http://www.whitehouse.gov/sites/default/files/empowering_local_partners.pdf.
Fan, Fa-ti. 2004. British Naturalists in Qing China: Science, Empire, and Cultural Encounter. Cambridge, MA: Harvard University Press.
Farber, I. E., Harry Harlow, and Louis Jolyon West. 1957. "Brainwashing, Conditioning, and DDD (Debility, Dependency, Dread)." Sociometry 20: 271–285.
Fassin, Didier. 2008. "The Humanitarian Politics of Testimony: Subjectification through Trauma in the Israeli-Palestinian Conflict." Cultural Anthropology 23 (3): 531.
Fauci, Anthony. 2012. National Academy of Sciences Meeting on H5N1. http://sites.nationalacademies.org/PGA/stl/H5N1/.
Fauci, Anthony, Gary Nabel, and Francis Collins. 2011. "A Flu Virus Risk Worth Taking." Washington Post, December 30, 2011. http://articles.washingtonpost.com/2011-12-30/opinions/35285482_1_influenza-virus-public-health.
Federal Support for and Involvement in State and Local Fusion Centers, Majority and Minority Staff Report. 2012. Washington DC: Permanent Subcommittee on Investigations, Committee on Homeland Security and Governmental Affairs.
Feldman, Jackie. 2008. Above the Death Pits, Beneath the Flag: Youth Voyages to Holocaust Poland and Israeli National Identity. New York: Berghahn Books.

Field, Nigel P., and Sotheara Chhim. 2008. "Desire for Revenge and Attitudes Toward the Khmer Rouge Tribunal Among Cambodians." Journal of Loss and Trauma 13 (4): 352–372.
Flahaut, François. 2003. Malice. Translated by Liz Heron. New York: Verso.
Florida, Richard. 2003. The Rise of the Creative Class: And How It's Transforming Work, Leisure, Community, and Everyday Life. New York: Basic.
Forbes, Stephen A. 1887. "The Lake as a Microcosm." http://people.wku.edu/charles.smith/biogeog/FORB1887.htm.
Foucault, Michel. 1977. Discipline and Punish: The Birth of the Prison. Translated by Alan Sheridan. New York: Vintage Books.
———. 1981. "Omnes et Singulatim: Towards a Criticism of Political Reason." In The Tanner Lectures on Human Values, vol. 2, edited by S. Mac Murrin, 223–254. Salt Lake City: University of Utah Press.
———. 2001. Dits et écrits II, 1976–1988. Paris: Gallimard. [The quote was translated by Paul Rabinow.]
———. 2004. The Birth of Biopolitics: Lectures at the Collège de France, 1978–1979. New York: Picador.
———. 2005. Hermeneutics of the Subject: Lectures at the Collège de France, 1980–1981. Edited by Frederic Gros, Francois Ewald, Alessandro Fantana, and Arnold Davidson. Translated by Graham Burchell. New York: Picador.
———. 2007. Security, Territory, Population: Lectures at the Collège de France, 1977–78. Edited by Michael Senellart. New York: Palgrave Macmillan.
———. 2008. The Birth of Biopolitics: Lectures at the Collège de France, 1978–1979. New York: Palgrave Macmillan.
———. 2009. Security, Territory, Population: Lectures at the Collège de France, 1977–1978. Edited by Frederic Gros, Francois Ewald, Alessandro Fantana, Michel Senellart, and Arnold Davidson. Translated by Graham Burchell. New York: Picador.
Fouchier, Ron A. M., Sander Herfst, and Albert D. M. E. Osterhaus. 2012. "Restricted Data on Influenza H5N1 Virus Transmission." ScienceExpress (January 19).
Fowell, S. J., and P. E. Olsen. 1993. "Time-calibration of Triassic/Jurassic Microfloral Turnover, Eastern North America." Tectonophysics 222: 361–369.
Foxen, Patricia. 2010. "Local Narratives of Distress and Resilience: Lessons in Psychosocial Well-being among the K'iche' Maya in Post-war Guatemala." The Journal of Latin American and Caribbean Anthropology 15 (1): 66–89.
Freedman, Andrew. 2013. "Incident Meteorologists Are on Front Lines of Wildfires." Climate Central. http://www.climatecentral.org/news/incident-meteorologists-on-the-front-lines-of-wildfires-16189.
Galloway, Alexander R. 2012. The Interface Effect. Cambridge, UK: Polity.
Garrett, Laurie. 2011. "The Bioterrorist Next Door." Foreign Policy (December 15).
Geertz, Clifford. 1973. The Interpretation of Cultures: Selected Essays. New York: Basic Books.
———. 1975. "Common Sense as a Cultural System." Antioch Review 33 (1): 5–26.
GFDRR (Global Facility for Disaster Reduction and Recovery). 2009. "Disaster Risk Management Programs for Priority Countries." Washington DC: Global Facility for Disaster Reduction and Recovery, The World Bank, International Strategy for Disaster Reduction.
Ghamari-Tabrizi, Sharon. 2000. "Simulating the Unthinkable: Gaming Future Wars in the 1950s and 1960s." Social Studies of Science 30 (20): 163–222.
Giddens, Anthony. 1991. Modernity and Self-Identity. Cambridge, UK: Polity.

———. 2000a. Runaway World: How Globalization Is Reshaping Our Lives. New York: Routledge.
———. 2000b. The Third Way and Its Critics. Cambridge, UK: Polity.
Gill, Lesley. 2004. The School of the Americas: Military Training and Political Violence in the Americas. Durham, NC: Duke University Press.
Gillis, Justin. 2013. "What to Make of a Warming Plateau." New York Times, June 10. http://www.nytimes.com/2013/06/11/science/earth/what-to-make-of-a-climate-change-plateau.html?_r=0.
Ginsburg, Justice Ruth Bader. 2004. Oral Arguments, Rasul v. Bush, 542 U.S. 466 (Nos. 03-334, 03-343), 51. www.supremecourt.gov/oral_arguments/argument_transcripts/03-334.pdf.
Gioia, Ted. 1988. The Imperfect Art: Reflections on Jazz and Modern Culture. Oxford: Oxford University Press.
Goffman, Erving. 1961. "Fun in Games." In Encounters: Two Studies in the Sociology of Interaction, edited by E. Goffman. Indianapolis: Bobbs-Merrill Educational Publishing.
———. 1967. Where the Action Is: Three Essays. London: Allen Lane.
Gone, Joseph P. 2008. "Introduction: Mental Health Discourse as Western Cultural Proselytization." Ethos 36 (3): 310–315.
Gone, Joseph P., and Laurence J. Kirmayer. 2010. "On the Wisdom of Considering Culture and Context in Psychopathology." In Contemporary Directions in Psychopathology: Scientific Foundations of the DSM-V and ICD-11, edited by Theodore Millon, Robert F. Krueger, and Erik Simonsen, 72–96. New York: The Guilford Press.
Goodman, Amy. 2013. "Prisoner Protest at Guantanamo Stains Obama's Human Rights Record." The Guardian, March 14. http://www.theguardian.com/commentisfree/2013/mar/14/prisoner-protest-guantanamo-stains-obama.
Goodman, Rachel D., and Cirecie A. West-Olatunji. 2008. "Transgenerational Trauma and Resilience: Improving Mental Health Counseling for Survivors of Hurricane Katrina." Journal of Mental Health Counselling 30 (2): 121–136.
Gosden, Peter. 1973. Self Help: Voluntary Associations in the 19th Century. London: BT Batsford.
Graham, Stephen. 2011. Cities Under Siege: The New Military Urbanism. London: Verso.
Greenough, Paul. 2003. "Pathogens, Pugmarks and Political 'Emergency.'" In Nature in the Global South: Environmental Projects in South and Southeast Asia, edited by P. Greenough and A. Lowenhaupt Tsing, 201–230. Durham, NC: Duke University Press.
Greger, Michael. 2006. Bird Flu: A Virus of Our Own Hatching. New York: Lantern Books.
Griffen, Blaine D., and John M. Drake. 2009. "Scaling Rules for the Final Decline to Extinction." Proceedings of the Royal Society 276 (1660): 1361–1367.
Gross, Matthias. 2007. "The Unknown in Process: Dynamic Conceptions of Ignorance, Non-Knowledge, and Related Concepts." Current Sociology 55: 742–759.
Guantánamo Docket, The. 2013. http://projects.nytimes.com/guantanamo.
Haberstroh, Chadwick J. 1965. "Organization Design and Systems Analysis." In Handbook of Organizations, edited by James G. March, 1171–1211. Chicago: Rand McNally.
Hacking, Ian. 1975. The Emergence of Probability. Cambridge: Cambridge University Press.
———. 1986. "Making Up People." In Reconstructing Individualism: Autonomy, Individuality and the Self in Western Thought, edited by T. C. Heller, 222–236. Stanford: Stanford University Press.
———. 1990. The Taming of Chance. Cambridge: Cambridge University Press.
Hamayon, Roberte. 2012. Jouer: Une étude anthropologique. Paris: La Découverte.

Hancock, Virginia. 2008–2009. "No Self at Trial: How to Reconcile Punishing the Khmer Rouge for Crimes against Humanity with Buddhist Principles." The Wisconsin Law Review 26: 87–130.
Hansen, James E. 2007. "Scientific Reticence and Sea Level Rise." Environmental Research Letters 2: 1–6.
Hatch, Mary J. 1999a. "The Jazz Metaphor for Organizing: Historical and Performative Aspects." Paper presented at the Critical Management Studies Conference, Manchester, UK, July.
———. 1999b. "Exploring the Empty Spaces of Organizing: How Improvisational Jazz Helps Redescribe Organizational Structure." Organization Studies 20 (1): 75–100.
Hatch, Mary J., and Karl E. Weick. 1998. "Critical Resistance to the Jazz Metaphor." Organization Science 9 (5): 600–604.
Hayden, Robert M. 2007. "Moral Vision and Impaired Insight." Current Anthropology 48: 105–131.
Hidek, Matt. 2011. "Military Doctrine and Intelligence Fusion in the American Homeland." Critical Studies on Terrorism 4 (2): 239–261.
Hinchliffe, Steve, and Stephanie Lavau. 2013. "Differentiated Circuits: The Ecologies of Knowing and Securing Life." Environment and Planning D: Society and Space 31: 259–274.
Hinton, Alex L. 2004. Why Did They Kill: Cambodia in the Shadow of Genocide. Berkeley: University of California Press.
———. 2008. "Truth Representation and Politics of Memory after Genocide." In People of Virtue: Reconfiguring Religion, Power, and Moral Order in Cambodia, edited by Alexandra Kent and David P. Chandler, 62–84. Copenhagen: NIAS.
Hitchens, Christopher. 2008. "Believe Me, It's Torture." Vanity Fair Magazine (August 1). http://www.vanityfair.com/politics/features/2008/08/hitchens200808.
Ho, Karen. 2009. Liquidated: An Ethnography of Wall Street. Durham, NC: Duke University Press.
Hodson, Mike, and Simon Marvin. 2009. "'Urban Ecological Security': A New Urban Paradigm?" International Journal of Urban and Regional Research 33 (1): 193–215.
———. 2010. World Cities and Climate Change: Producing Urban Ecological Security. New York: Open University Press/McGraw Hill Education.
Hughes, Terry P., Cristina Linares, Vasilis Dakos, Ingrid A. van de Leemput, and Egbert H. van Nes. 2012. "Living Dangerously on Borrowed Time during Slow, Unrecognized Regime Shifts." Trends in Ecology and Evolution 28 (3): 149–155.
Huizinga, Johan. 1950 [1938]. Homo Ludens: A Study of the Play Element in Culture. Boston: Beacon Press.
Hunt, Alan. 2003. "Risk and Moralization in Everyday Life." In Risk and Morality, edited by R. V. Ericson and A. Doyle, 165–192. Toronto: University of Toronto Press.
Hwang, In Cheol, Hong Yup Ahn, Sang Min Park, Jae Yong Shim, and Kyoung Kon Kim. 2013. "Clinical Changes in Terminally Ill Cancer Patients and Death within 48 h: When Should We Refer Patients to a Separate Room?" Support Care Cancer 21: 835–840.
Illouz, Eva. 2008. Saving the Modern Soul: Therapy, Emotions, and the Culture of Self-Help. Berkeley: University of California Press.
Information Sharing Environment (ISE). 2008. "Information Sharing Environment (ISE) Functional Standard (FS) Suspicious Activity Reporting (SAR). Version 1.0." http://ise.gov/sites/default/files/ISE-FS-200%20(SAR%20Functional%20Standard_Issuance_Version_1.0_Final_Signed).pdf.

———. 2009. "Information Sharing Environment (ISE) Functional Standard (FS) Suspicious Activity Reporting (SAR). Version 1.5." http://ise.gov/sites/default/files/ISE-FS-200_ISE-SAR_Functional_Standard_V1_5_Issued_2009.pdf.
Ingeocim. 1998. "Zonificación de Riesgo Por Inestabilidad Del Terreno Para Diferentes Localidades En La Ciudad de Santa Fe de Bogotá D.C." Bogotá: Fondo de Prevención y Atención de Emergencias de Santa Fe de Bogotá, Unidad de Prevención y Atención de Emergencias de Santa Fe de Bogotá.
Inglesby, Thomas, et al. 2011. "The Risk of Engineering a Highly Transmissible H5N1 Virus." Baltimore, MD: UPMC Center for Health Security. http://www.upmchealthsecurity.org/website/resources/publications/2011/2011-12-15-editorial-engineering-H5N1.
Ingold, Tim. 1980. Hunters, Pastoralists and Ranchers: Reindeer Economies and Their Transformations. Cambridge: Cambridge University Press.
International Association of Chiefs of Police (IACP). 2002. "Criminal Intelligence Sharing: A National Plan for Intelligence-Led Policing at the Local, State and Federal Levels: Recommendations from the IACP Intelligence Summit." http://www.cops.usdoj.gov/Publications/criminalintelligencesharing_web.pdf.
Jamieson, Dale. 2014. Reason in a Dark Time: Why the Struggle Against Climate Change Failed—and What It Means for Our Future. Oxford: Oxford University Press.
Jasanoff, Sheila. 1990. The Fifth Branch: Science Advisers as Policymakers. Cambridge, MA: Harvard University Press.
Johns, Fleur. 2005. "Guantánamo Bay and the Annihilation of the Exception." The European Journal of International Law 16 (4): 613–635.
———. 2013. Non-Legality in International Law. Cambridge: Cambridge University Press.
Joseph, Branden W. 2011. "Biomusic." Grey Room 45 (Fall): 128–150.
Kahn, Mahvish. 2009. My Guantanamo Diary: The Detainees and the Stories They Told Me. New York: Public Affairs.
Kaitz, Marsha. 2008. "The Transgenerational Effects of Trauma from Terror: A Developmental Model." In Global Terrorism Issues and Developments, edited by Rene A. Larch, 187–212. New York: Nova Science Publishers.
Kamoche, Ken, and Miguel Pina e Cunha. 2001. "Minimal Structures: From Jazz Improvisation to Product Innovation." Organization Studies 22 (5): 733–764.
Kamoche, Ken, Miguel Pina e Cunha, and Rita Campos e Cunha, eds. 2003a. "Improvisation in Organizations." International Studies of Management and Organization 33 (1).
Kamoche, Ken, Miguel Pina e Cunha, and Joao Vieira da Cunha. 2003b. "Towards a Theory of Organizational Improvisation: Looking Beyond the Jazz Metaphor." Journal of Management Studies 40 (8): 2023–2051.
Kaplan, Amy. 2005. "Where Is Guantánamo?" American Quarterly 57 (3): 831–858.
Kawaoka, Yoshihiro. 2012. "H5N1: Flu Transmission Work Is Urgent." Nature 482 (7384).
Keck, Frédéric. 2012. "Nourrir les virus: La biosécurité dans les fermes et les laboratoires." Réseaux 171: 21–44.
Keck, Frédéric, and Andrew Lakoff, eds. 2013. "Sentinel Devices." Limn 3: 38–40.
Kellerman, Natan P. F. 2001. "Psychopathology in Children of Holocaust Survivors: A Review of the Research Literature." The Israel Journal of Psychiatry and Related Sciences 38 (1): 36–46.
Kerwin, Ann. 1993. "None Too Solid: Medical Ignorance." Knowledge: Creation, Diffusion, Utilization 15 (2): 166–185.
Kidron, Carol A. 2003. "Surviving a Distant Past: A Case Study of the Cultural Construction of Trauma Descendant Identity." Ethos 31 (4): 513–544.

230 / References
———. 2009. “Toward an Ethnography of Silence: The Lived Presence of the Past among Holocaust Trauma Descendants in Israel.” Current Anthropology 50 (1): 5–27.
———. 2012. “Alterity and the Particular Limits of Universalism: Comparing Jewish-Israeli Holocaust and Canadian-Cambodian Genocide Legacies.” Current Anthropology 53 (6): 723–753.
Killen, Andreas. 2011. “Homo pavlovius: Cinema, Conditioning, and the Cold War Subject.” Grey Room 45 (Fall): 42–59.
Kinseth, Ashley S. 2009. “Unspoken Trauma: Breaking the Silence to Heal Cambodia’s Youth.” International Studies Association West Annual Conference: Human Consequences of State Political Action, September 25–26, San Francisco, CA.
Kirmayer, Laurence J., Stéphane Dandeneau, Elizabeth Marshall, Morgan Kahentonni Phillips, and Karla Jessen Williamson. 2011. “Rethinking Resilience from Indigenous Perspectives.” Canadian Journal of Psychiatry 56 (2): 84–91.
Knorr Cetina, Karin, and U. Bruegger. 2000. “The Market as an Object of Attachment: Exploring Post-Social Relations in Financial Markets.” Canadian Journal of Sociology 25 (2): 141–168.
———. 2002. “Traders’ Engagement with Markets: A Postsocial Relationship.” Theory, Culture and Society 19 (5–6): 161–185.
Kreitner, Roy. 2000. “Speculations of Contract, or How Contract Law Stopped Worrying and Learned to Love Risk.” The Columbia Law Review 100: 1096–1127.
Kunreuther, Howard, et al. 2013. “Risk Management and Climate Change.” Nature Climate Change 3: 447–450. http://www.nature.com/nclimate/journal/v3/n5/full/nclimate1740.html.
Lahsen, Myanna. 2009. “A Science-Policy Interface in the Global South: The Politics of Carbon Sinks and Science in Brazil.” Climatic Change 97: 339–372.
Lakatos, Imre. 1971. “History of Science and Its Rational Reconstructions.” PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association 1970: 91–136.
Lakoff, Andrew. 2007. “Preparing for the Next Emergency.” Public Culture 19 (2): 247–271.
———. 2008. “The Generic Biothreat, or, How We Became Unprepared.” Cultural Anthropology 23 (3): 399–423.
———. 2013. “A Dearth of Numbers: The Actuary and the Sentinel in Global Public Health.” Limn 3: 41–45.
Lakoff, Andrew, and Stephen Collier, eds. 2008. Biosecurity Interventions: Global Health and Security in Question. New York: Columbia University Press.
Lakoff, Andrew, and Eric Klinenberg. 2010. “Of Risk and Pork: Urban Security and the Politics of Objectivity.” Theory and Society 39 (5): 503–525.
Latour, Bruno, and Steve Woolgar. 1979. Laboratory Life: The Construction of Scientific Facts. Princeton: Princeton University Press.
Lee, Ben, and E. LiPuma. 2012. “A Social Approach to the Financial Derivatives Markets.” The South Atlantic Quarterly 111 (2): 289–316.
Lelyveld, Joseph. 2005. “Interrogating Ourselves.” New York Times, June 12.
Lemov, Rebecca. 2005. World as Laboratory: Experiments with Mice, Mazes and Men. New York: Hill and Wang.
———. 2011. “Brainwashing’s Avatar: The Strange Career of Ewen Cameron.” Grey Room 45: 61–87.
Lenton, Timothy M., Hermann Held, Elmar Kriegler, Jim W. Hall, Wolfgang Lucht, Stefan Rahmstorf, and Hans Joachim Schellnhuber. 2008. “Tipping Elements in the Earth’s Climate System.” Proceedings of the National Academy of Sciences 105 (6): 1786–1793.
Lépinay, Vincent Antonin. 2011. Codes of Finance: Engineering Derivatives in a Global Bank. Princeton: Princeton University Press.
Lévi-Strauss, Claude. 1963. Totemism. Boston: Beacon Press.
Lifton, Robert J. 1954. “Home by Ship: Reaction Patterns of American Prisoners of War Repatriated from North Korea.” American Journal of Psychiatry 110: 732–739.
———. 1961. Thought Reform and the Psychology of Totalism: A Study of “Brainwashing” in China. New York: Norton.
Lubchenco, Jane, and Jack Hayes. 2012. “A Better Eye on the Storm.” Scientific American 306: 68–73.
Luhmann, Niklas. 1993. Risk: A Sociological Theory. New York: Aldine de Gruyter.
———. 1998. “Describing the Future.” In Observations on Modernity, translated by William Whobrey. Stanford, CA: Stanford University Press.
———. 1998. Observations on Modernity. Stanford, CA: Stanford University Press.
———. 2005. Risk: A Sociological Theory. Translated by Rhodes Barrett. Piscataway, NJ: Aldine Transaction.
Luther, Martin. 2008 [1520]. The Freedom of a Christian. Translated by Mark D. Tranvik. Minneapolis: Fortress Press.
Lutz, Catherine. 1997. “The Epistemology of the Bunker: The Brainwashed and Other New Subjects of Permanent War.” In Inventing the Psychological: Toward a Cultural History of the Emotions, edited by Joel Pfister and Nancy Schnog, 245–268. New Haven, CT: Yale University Press.
Macedo, Stephen, ed. 2008. Striking First: Preemption and Prevention in International Conflict. Princeton: Princeton University Press.
MacIntyre, Alasdair. 2007. After Virtue: A Study in Moral Theory. 3rd ed. South Bend, IN: University of Notre Dame Press.
Mackenzie, Adrian. 2003. “Bringing Sequences to Life: How Bioinformatics Corporealizes Sequence Data.” New Genetics and Society 22 (3): 315–332.
MacKenzie, Debora. 2011. “Five Easy Mutations to Make Bird Flu a Lethal Pandemic.” New Scientist, September 26.
Malaby, Thomas M. 2003. Gambling Life: Dealing in Contingency in a Greek City. Urbana: University of Illinois Press.
Mantere, Saku, John A. A. Sillince, and Virpi Hamalainen. 2007. “Music as a Metaphor for Organizational Change.” Journal of Organizational Change Management 20 (3): 447–459.
Marks, John. 1991. The Search for the Manchurian Candidate: The CIA and Mind Control: The Secret History of the Behavioral Sciences. New York: Norton.
Martin, Gerard, and Miguel Ceballos. 2004. Bogotá: Anatomía de Una Transformación, 1995–2003. Bogotá: Editorial Pontificia Universidad Javeriana.
Martin, Randy. 2002. Financialization of Daily Life. Philadelphia: Temple University Press.
Massumi, Brian. 2002. Parables for the Virtual: Movement, Affect, Sensation. Durham, NC: Duke University Press.
———. 2007. “Potential Politics and the Primacy of Preemption.” Theory & Event 10 (2).
Mayer, Jane. 2005. “The Experiment.” The New Yorker, July 11. http://www.newyorker.com/archive/2005/07/11/050711fa_fact4.
———. 2009. The Dark Side: The Inside Story of How the War on Terror Turned into a War on American Ideals. New York: Anchor.
McCoy, Alfred. 2006. A Question of Torture: CIA Interrogation, from the Cold War to the War on Terror. New York: Macmillan.

Melley, Timothy. 2011. “Brain Warfare: The Covert Sphere, Terrorism, and the Legacy of Cold War.” Grey Room 45 (Fall): 18–39.
Melucci, Alberto. 1996. The Playing Self: Person and Meaning in the Planetary Society. Cambridge: Cambridge University Press.
Meyer, Alan, Peter J. Frost, and Karl E. Weick. 1998. “The Organization Science Jazz Festival: Improvisation as a Metaphor for Organizing—Overture.” Organization Science 9 (5): 540–542.
Mindszenty, József Cardinal. 1974. Memoirs. New York: Macmillan.
Mirvis, Philip H. 1998. “Practice Improvisation.” Organization Science 9 (5): 586–592.
Monson, Ingrid. 1995. “The Problem of White Hipness: Race, Gender, and Cultural Conceptions in Jazz Historical Discourse.” Journal of the American Musicological Society 48 (3): 396–422.
———. 1996. Saying Something: Jazz Improvisation and Interaction. Chicago: University of Chicago Press.
Moorman, Christine, and Anne S. Miner. 1998. “The Convergence of Planning and Execution: Improvisation in New Product Development.” Journal of Marketing 62 (3): 1–20.
Morgan, Gareth. 1986. Images of Organization. Newbury Park, CA: Sage.
Morris, David J. 2009. “Empires of the Mind: SERE, Guantánamo, and the Legacies of Torture.” Virginia Quarterly Review (Winter). Helms interview by journalist David Frost: http://www.vqronline.org/articles/2009/winter/morris-sere/.
Moss, Stephen. 2004. A Bird in the Bush: A Social History of Bird Watching. London: Aurum Press.
Moulin, Anne-Marie. 1996. L’aventure de la vaccination. Paris: Fayard.
Muehlebach, Andrea, and Nitzan Shoshan. 2012. “Introduction.” Anthropological Quarterly 85 (2): 317–344.
Mueller, Karl P., et al. 2006. Striking First: Preemptive and Preventive Attack in U.S. National Security Policy. Santa Monica, CA: RAND Corporation.
Murphy, Michelle. 2006. Sick Building Syndrome and the Politics of Uncertainty: Environmental Politics, Technoscience and Women Workers. Durham, NC: Duke University Press.
National Academy of Sciences (NAS). 2013. Abrupt Impacts of Climate Change: Anticipating Surprises. Washington, DC: The National Academies Press.
National Institutes of Health (NIH). 2011. Press Statement on the NSABB Review of H5N1 Research. http://www.nih.gov/news/health/dec2011/od-20.htm.
National Research Council. 2004. Biotechnology Research in an Age of Terrorism. Washington, DC: The National Academies Press.
National Science Advisory Board for Biosecurity (NSABB). 2012. March 29–30, 2012, Meeting of the National Science Advisory Board for Biosecurity to Review Revised Manuscripts on Transmissibility of A/H5N1 Influenza Virus. http://oba.od.nih.gov/oba/biosecurity/PDF/NSABB_Statement_March_2012_Meeting.pdf.
Nature. 2009. “Earth’s Boundaries?” Editorial. 461 (7263): 447–448.
Nebraska Symposium on Motivation. 1953–1954. http://psychology.unl.edu/symposium/symposia-1953-2013.
Nettelbeck, Colin. 2005. Dancing with de Beauvoir: Jazz and the French. Melbourne: Melbourne University Press.
New York Times. 2009. Advertisement, Accuracy in Media. May 14.
Nickerson, Angela, and Devon E. Hinton. 2011. “Anger Regulation in Traumatized Cambodian Refugees: The Perspectives of Buddhist Monks.” Culture, Medicine and Psychiatry 35 (3): 396–416.

Ogren, Kathy J. 1992. The Jazz Revolution: Twenties America and the Meaning of Jazz. New York: Oxford University Press.
Ogus, Anthony. 1982. “Great Britain.” In The Evolution of Social Insurance 1881–1981, edited by P. Kohler and H. Zacher, 150–264. New York: St. Martin’s Press.
O’Malley, Pat. 2000a. “Introduction: Configurations of Risk.” Economy and Society 29 (4): 457–459.
———. 2000b. “Uncertain Subjects: Risks, Liberalism and Contract.” Economy and Society 29 (4): 460–484.
———. 2002. “Imagining Insurance: Risk, Thrift, and Life Insurance in Britain.” In Embracing Risk: The Changing Culture of Insurance and Responsibility, edited by T. Baker and J. Simon, 97–115. Chicago: University of Chicago Press.
———. 2004a. “The Uncertain Promise of Risk.” The Australian and New Zealand Journal of Criminology 37: 323–343.
———. 2004b. Risk, Uncertainty and Government. London: Cavendish.
———. 2012. “From Risk to Resilience: Technologies of the Self in the Age of Catastrophes.” Symposium on the Future of Risk, hosted by the Chicago Center for Contemporary Theory, May 11. http://ccct.uchicago.edu/media/files/the-future-of-risk/O’Malley_Resilience.pdf.
O’Reilly, Jessica, Naomi Oreskes, and Michael Oppenheimer. 2012. “The Rapid Disintegration of Projections: The West Antarctic Ice Sheet and the Intergovernmental Panel on Climate Change.” Social Studies of Science (June 26). doi:10.1177/0306312712448130.
Oreskes, Naomi, and Erik Conway. 2011. Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming. New York: Bloomsbury Press.
Osborne, David, and Ted Gaebler. 1993. Reinventing Government: How the Entrepreneurial Spirit Is Transforming the Public Sector. New York: Plume Books.
Osterholm, Michael T., and David A. Henderson. 2012. “Life Sciences at a Crossroads: Respiratory Transmissible H5N1.” ScienceExpress (January 19).
Ozment, Steven. 1980. The Age of Reform: 1250–1550. New Haven, CT: Yale University Press.
Paden, William E. 1988. “Theaters of Humility and Suspicion: Desert Saints and New England Puritans.” In Technologies of the Self: A Seminar with Michel Foucault, edited by L. Martin, H. Gutman, and P. Hutton, 64–79. Amherst: University of Massachusetts Press.
Panter-Brick, Catherine, and Mark B. Eggerman. 2010. “Suffering, Hope, and Entrapment: Resilience and Cultural Values in Afghanistan.” Social Science & Medicine 71 (1): 71–83.
Parunak, H. Van Dyke, et al. 2008. “Prediction Horizons in Agent Models.” In Engineering Environment-Mediated Multiagent Systems, 88–102. Berlin and Heidelberg: Springer-Verlag. Accessed May 20. http://link.springer.com/chapter/10.1007%2F978-3-540-85029-8_7#page-1.
Pasley, Virginia. 1955. 21 Stayed: The Story of the American GIs Who Chose Communist China: Who They Were and Why They Stayed. New York: Farrar, Straus and Cudahy.
Pecha Quimbay, Patricia. 2008. Historia Institucional de La Caja de La Vivienda Popular. Bogotá: Alcaldía Mayor de Bogotá, D.C., Secretaría General, Archivo de Bogotá.
Peters, Edward. 1996. Torture. Philadelphia: University of Pennsylvania Press.
Peters, Tom. 1987. Thriving on Chaos: Handbook for a Management Revolution. New York: Knopf.
Peterson, Marilyn. 2005. “Intelligence-Led Policing: The New Intelligence Architecture.” Washington, DC: Bureau of Justice Assistance, US Department of Justice.

Petryna, Adriana. 2013 [2002]. Life Exposed: Biological Citizens after Chernobyl. Princeton: Princeton University Press.
Porter, Theodore M. 1995. Trust in Numbers: The Pursuit of Objectivity in Science and Public Life. Princeton: Princeton University Press.
Power, Michael. 2004. “The Risk Management of Everything.” Journal of Risk Finance 5 (3): 58–65.
———. 2007. Organized Uncertainty: Designing a World of Risk Management. New York: Oxford University Press.
Presidente de la República de Colombia. 1984. Decreto 1547. Bogotá: June 21.
Presidente de la República de Colombia. 1989. Decreto 919. Bogotá: May 1.
Proctor, Robert, and Londa Schiebinger, eds. 2008. Agnotology: The Making and Unmaking of Ignorance. Stanford, CA: Stanford University Press.
Pupavac, Vanessa. 2004. “Psychosocial Interventions and the Demoralization of Humanitarianism.” Journal of Biosocial Science 36 (4): 491–504.
Quammen, David. 2012. Spillover: Animal Infections and the Next Human Pandemic. New York: W. W. Norton.
Quattrone, Paolo. 2004. “Accounting for God: Accounting and Accountability Practices in the Society of Jesus (Italy, XVI–XVII centuries).” Accounting, Organizations and Society 29: 647–683.
Rabinow, Paul. 1995. French Modern: Norms and Forms of the Social Environment. Chicago: University of Chicago Press.
———. 1996. Making PCR: A Story of Biotechnology. Chicago: University of Chicago Press.
———. 1998. “Artificiality and Enlightenment: From Sociobiology to Biosociality.” In Incorporations, edited by Jonathan Crary and Sanford Kwintner, 234–252. New York: Zone.
———. 2003. Anthropos Today: Reflections on Modern Equipment. Princeton: Princeton University Press.
———. 2007. Marking Time: On the Anthropology of the Contemporary. Princeton: Princeton University Press.
———. 2011. The Accompaniment: Assembling the Contemporary. Chicago: University of Chicago Press.
Rabinow, Paul, and Gaymon Bennett. 2010. Contemporary Equipment: A Diagnostic. http://anthropos-lab.net/documents/contemporary-equipment-diagnostic.
———. 2012. Designing Human Practices: An Experiment with Synthetic Biology. Chicago: University of Chicago Press.
Rabinow, Paul, and Anthony Stavrianakis. 2013. Demands of the Day. Chicago: University of Chicago Press.
Rahmstorf, Stefan, Grant Foster, and Anny Cazenave. 2012. “Comparing Climate Projections to Observations up to 2011.” Environmental Research Letters 7 (4). doi:10.1088/1748-9326/7/4/044035.
Ramírez Gomez, Fernando, and Omar Darío Cardona. 1996. “El Sistema Nacional Para La Prevención y Atención de Desastres En Colombia.” In Estado, Sociedad y Gestión de Los Desastres En América Latina: En Busca Del Paradigma Perdido, edited by Allan Lavell and Eduardo Franco, 255–307. Lima: La RED, FLACSO, ITDG-Perú.
Relman, David, and Roger Brent. 2013. “Comments from Brent and Relman to the National Institutes of Health.” January. http://depts.washington.edu/ssnet/biological_futures/Files/Brent-Relman%20as%20sent%209%20January.pdf.
Rheinberger, Hans-Jörg. 2010. An Epistemology of the Concrete: Twentieth-Century Histories of Life. Durham, NC: Duke University Press.

Rivas Gamboa, Ángela. 2007. Gorgeous Monster: The Arts of Governing and Managing Violence in Contemporary Bogotá. Saarbrücken: VDM Verlag Dr. Müller.
Robertson, Roland. 1994. “Globalization or Glocalization?” Journal of International Communications 1 (1): 33–52.
Robin, Ron. 2003. The Making of the Cold War Enemy. Princeton: Princeton University Press.
Rockström, Johan. 2010. “Let the Environment Guide Our Development.” http://www.ted.com/talks/johan_rockstrom_let_the_environment_guide_our_development.html.
Rockström, Johan, et al. 2009. “Planetary Boundaries: Exploring the Safe Operating Space for Humanity.” Ecology and Society 14 (2): 32. www.ecologyandsociety.org/vol14/iss2/art32.
Rose, Nikolas. 1989. Governing the Soul: The Shaping of the Private Self. London: Routledge.
———. 1998. Inventing Our Selves: Psychology, Power and Personhood. Cambridge: Cambridge University Press.
———. 1999. Powers of Freedom: Reframing Political Thought. Cambridge: Cambridge University Press.
———. 2006. The Politics of Life Itself: Biomedicine, Power and Subjectivity in the Twenty-First Century. Princeton: Princeton University Press.
———. 2008. “Psychology as a Social Science.” Subjectivity 23: 1–17.
Rosenberg, Carol. 2013. “Guantánamo Prosecutor Uncovers CIA Photos of Accused Bomber.” Miami Herald, June 24. http://www.miamiherald.com/2013/06/24/3468326_p2/Guantánamo-prosecutor-uncovers.html.
Rubin, Audrey, and Lorna Rhodes. 2005. “Narrative and the Intergenerational Transmission of Trauma among Cambodian Refugees.” In Perspectives in Cross-Cultural Psychiatry, edited by Anna M. Georgiopoulos and Jerrold F. Rosenbaum, 157–179. Philadelphia: Lippincott Williams and Wilkins.
Rumsfeld, Donald. 2002. “Speech to National Defense University.” http://www.defenselink.mil/speeches/speech.aspx?speechid=183.
Sagi-Schwartz, Abraham, Marinus H. Van Ijzendoorn, Klaus E. Grossmann, Karin Grossmann, and Nina Koren-Karie. 2003. “Attachment and Traumatic Stress in Female Holocaust Child Survivors and Their Daughters.” American Journal of Psychiatry 160 (6): 1086–1092.
Samimian-Darash, Limor. 2009. “A Pre-event Configuration for Biological Threats: Preparedness and the Constitution of Biosecurity Events.” American Ethnologist 36 (3): 478–491.
———. 2011. “Governing through Time: Preparing for Future Threats to Health and Security.” Sociology of Health & Illness 33 (6): 930–945.
———. 2013. “Governing Future Potential Biothreats: Toward an Anthropology of Uncertainty.” Current Anthropology 54 (1): 1–22.
Scarry, Elaine. 1985. The Body in Pain: The Making and Unmaking of the World. New York: Oxford University Press.
Scharf, Miri. 2007. “Long-Term Effects of Trauma: Psychosocial Functioning of the Second and Third Generation of Holocaust Survivors.” Development and Psychopathology 19 (2): 603–622.
Schauer, Maggie, and Elisabeth Schauer. 2010. “Trauma Focused Mental Health Interventions: A Paradigm Shift in Humanitarian Assistance and Aid Work.” In Trauma Rehabilitation after War and Conflict: Community and Individual Perspectives, edited by E. Martz, 389–429. New York: Springer.
Scheffer, Marten. 2010. “Complex Systems: Foreseeing Tipping Points.” Nature 467: 411–412.

Scheffer, Marten, Steve Carpenter, Jonathan A. Foley, Carl Folke, and Brian Walker. 2001. “Catastrophic Shifts in Ecosystems.” Nature 413: 591–596.
Scheffer, Marten, and Stephen Carpenter. 2003. “Catastrophic Regime Shifts in Ecosystems: Linking Theory to Observation.” Trends in Ecology and Evolution 18 (12): 648–656.
Scheffer, Marten, et al. 2009. “Early-Warning Signals for Critical Transitions.” Nature 461: 53–59.
Schein, Edgar. 1961. Coercive Persuasion. New York: Norton.
Schüll, Natasha D. 2012. Addiction by Design: Machine Gambling in Las Vegas. Princeton: Princeton University Press.
Schwartz, Mattathias. 2006. “The Hold ’em Holdup.” New York Times, June 11.
Scott, James C. 1998. Seeing Like a State. New Haven, CT: Yale University Press.
Scott, Richard W. 2003. Organizations: Rational, Natural, and Open Systems. 5th ed. Upper Saddle River, NJ: Pearson.
Shane, Scott. 2008. “China Inspired Interrogations at Guantánamo.” New York Times, July 2.
Shenhav, Yehouda. 1999. Manufacturing Rationality: The Engineering Foundations of the Managerial Revolution. Oxford: Oxford University Press.
Shortridge, K. F., J. S. M. Peiris, and Y. Guan. 2003. “The Next Influenza Pandemic: Lessons from Hong Kong.” Journal of Applied Microbiology 94: 70–79.
Shortridge, K. F., and C. H. Stuart-Harris. 1982. “An Influenza Epicentre?” Lancet (ii): 812–813.
Shukman, David. 2013. “Carbon Dioxide Passes Symbolic Mark.” BBC, May 10. http://www.bbc.co.uk/news/science-environment-22486153.
Simon, Stephanie. 2012. “Suspicious Encounters: Ordinary Preemption and the Securitization of Photography.” Security Dialogue 43 (2): 157–173.
Skolnik, Sam. 2011. High Stakes: The Rising Costs of America’s Gambling Addiction. Boston: Beacon Press.
Slahi, Mohamedou Ould. 2013. “The Guantánamo Memoirs of Mohamedou Ould Slahi.” Slate, May 13. http://www.slate.com/articles/news_and_politics/foreigners/2013/04/mohamedou_ould_slahi_s_guant_namo_memoirs_how_the_united_states_kept_a_gitmo.html.
Smiles, Samuel. 1907. Thrift: A Book of Domestic Counsel. London: Murray.
———. 1913. Self-Help: With Illustrations of Conduct and Perseverance. London: Murray.
Smith, G., et al. 2006. “Emergence and Predominance of an H5N1 Influenza Variant in China.” PNAS 103 (45): 16936–16941.
Smith, G., J. Bahl, D. Vijaykrishna, et al. 2009a. “Origins and Evolutionary Genomics of the 2009 Swine-Origin H1N1 Influenza A Epidemic.” Nature 459: 1122–1125.
———. 2009b. “Dating the Emergence of Pandemic Influenza Viruses.” PNAS 106: 11712.
Socolow, Robert. 2010. How Would We Act If We Took Climate Change Seriously. The Earth Institute’s Lamont-Doherty Earth Observatory Earth Science Colloquium. http://www.ldeo.columbia.edu/video/how-would-we-act-if-we-took-climate-change-seriously.
———. 2011. Wedges Reaffirmed. http://www.princeton.edu/mae/people/faculty/socolow/Wedges-reaffirmed-PLUS-ten-soliticed-comments-9-29-11.pdf.
Sonis, Jeffrey, James L. Gibson, Joop T. de Jong, Nigel P. Field, Sokhom Hean, and Ivan Komproe. 2009. “Probable Posttraumatic Stress Disorder and Disability in Cambodia: Associations with Perceived Justice, Desire for Revenge, and Attitudes toward the Khmer Rouge Trials.” Journal of the American Medical Association 302 (5): 527–536.
Sorrell, E. M., et al. 2011. “Predicting ‘Airborne’ Influenza Viruses: (Trans-)Mission Impossible?” Current Opinion in Virology 1 (6): 635–642.
Stalcup, Meg, and Joshua Craze. 2011. “How We Train Our Cops to Fear Islam.” Washington Monthly, March/April. http://www.washingtonmonthly.com/features/2011/1103.stalcup-craze.html.
Stern, Jessica, and Jonathan B. Wiener. 2006. “Precaution against Terrorism.” Journal of Risk Research 9 (4): 393–447.
Stewart, Phil. 2014. “U.S. Calls Guantanamo Hunger Strikes ‘Non-Religious Fasting.’” Reuters, March 12.
Stowe, David W. 1994. Swing Changes: Big-Band Jazz in New Deal America. Cambridge, MA: Harvard University Press.
Summerfield, Derek. 1998. “The Social Experience of War and Some Issues for the Humanitarian Field.” In Rethinking the Trauma of War, edited by Patrick J. Bracken and Celia Petty, 9–37. London: Free Association Books.
Thomas, Martin. 2004. The Artificial Horizon: Reading a Colonised Landscape. Melbourne: Melbourne University Publishing.
Thompson, Ashley. 2006. “Buddhism in Cambodia: Rupture and Continuity.” In Buddhism in World Cultures: Comparative Perspectives, edited by Stephan C. Berkwitz, 129–169. Santa Barbara, CA: ABC-CLIO.
Torgovnick, Marianna. 1990. Gone Primitive: Savage Intellects, Modern Lives. Chicago: University of Chicago Press.
US Central Intelligence Agency. 1963. KUBARK: Counterintelligence Interrogation.
US Department of Justice. 2010a. “Final Report of the Guantánamo Review Task Force.” http://www.justice.gov/ag/Guantánamo-review-final-report.pdf.
———. 2010b. “Final Report: Information Sharing Environment (ISE)–Suspicious Activity Reporting (SAR) Evaluation Environment (ISE-SAR EE).” http://ise.gov/sites/default/files/BJA_Final_Report_ISE_SAR_EE_0.pdf.
US Government. 2012. “Oversight of Life Sciences Dual Use Research of Concern.” http://oba.od.nih.gov/oba/biosecurity/PDF/United_States_Government_Policy_for_Oversight_of_DURC_FINAL_version_032812.pdf.
US Senate Committee on Homeland Security and Governmental Affairs (HSGAC). 2012. “Federal Support for and Involvement in State and Local Fusion Centers, Majority and Minority Staff Report.” http://www.hsgac.senate.gov/download/?id=49139e81-1dd7-4788-a3bb-d6e7d97dde04.
US Senate Select Committee on Intelligence and Subcommittee on Health and Scientific Research. 1977. “Project MKULTRA: The CIA’s Program of Research in Behavioral Modification.” 95th Congress, First Session, August 3. Washington, DC: US Government Printing Office.
Vendelo, Morten Thanning. 2009. “Improvisation and Learning in Organizations—An Opportunity for Future Empirical Research.” Management Learning 40 (4): 449–456.
Verma, Vandi, John Langford, and Reid Simmons. 2001. “Non-Parametric Fault Identification for Space Rovers.” http://www.ri.cmu.edu/pub_files/pub2/verma_vandi_2001_1/verma_vandi_2001_1.pdf.
Vijaykrishna, D., G. Smith, et al. 2007. “Evolutionary Insights into the Ecology of Coronaviruses.” Journal of Virology 81 (8): 4012–4020.
Wall Street Journal. 2007. “Law Blog: Justice Scalia Hearts Jack Bauer.” http://blogs.wsj.com/law/2007/06/20/justice-scalia-hearts-jack-bauer/.
Walters, William. 2000. Unemployment and Government: Genealogies of the Social. Cambridge, UK: Cambridge University Press.
Weber, Max. 1958 [1905]. The Protestant Ethic and the Spirit of Capitalism. New York: Charles Scribner’s Sons.
Wedel, Janine R., Cris Shore, Gregory Feldman, and Stacy Lethrop. 2005. “Toward an Anthropology of Public Policy.” The Annals of the American Academy of Political and Social Science 600: 30–51.
Weick, Karl E. 1969. The Social Psychology of Organizing. Reading, MA: Addison-Wesley.
———. 1998. “Improvisation as a Mindset for Organizational Analysis.” Organization Science 9 (5): 543–555.
Welch, Michael. 2009. “Guantanamo Bay as a Foucauldian Phenomenon: An Analysis of Penal Discourse, Technologies, and Resistance.” The Prison Journal 89 (1): 3–20.
Weller, Robert P. 2006. Discovering Nature: Globalization and Environmental Culture in China and Taiwan. Cambridge, UK: Cambridge University Press.
Whyte, William H. 2002 [1956]. The Organization Man. Philadelphia: University of Pennsylvania Press.
Wilf, Eitan. 2010. “Swinging within the Iron Cage: Modernity, Creativity and Embodied Practice in American Postsecondary Jazz Education.” American Ethnologist 37 (3): 563–582.
———. 2011. “Sincerity versus Self-Expression: Modern Creative Agency and the Materiality of Semiotic Forms.” Cultural Anthropology 26 (3): 462–484.
———. 2012. “Rituals of Creativity: Tradition, Modernity, and the ‘Acoustic Unconscious’ in a U.S. Collegiate Jazz Music Program.” American Anthropologist 114 (1): 32–44.
———. 2013a. “Social Robots, Jazz Musicians, and Divination: Contingency as a Cultural Resource for Negotiating Problems of Intentionality.” American Ethnologist 40 (4): 605–618.
———. 2013b. “Streamlining the Muse: Creative Agency and the Reconfiguration of Charismatic Education as Professional Training in Israeli Poetry Writing Workshops.” Ethos 41 (2): 127–149.
———. 2014. School for Cool: The Academic Jazz Program and the Paradox of Institutionalized Creativity. Chicago: University of Chicago Press.
Wittgenstein, Ludwig. 1972. On Certainty. Edited by G. E. M. Anscombe and G. H. von Wright. New York: Harper & Row.
World Health Organization (WHO). 2012. Report on Technical Consultation on H5N1 Research Issues, February 16–17. Geneva.
Yee, James. 2005. For God and Country: Faith and Patriotism Under Fire. New York: Public Affairs.
Young, Allan. 1995. The Harmony of Illusions: Inventing Post-traumatic Stress Disorder. Princeton: Princeton University Press.
Zack, Michael H. 2000. “Jazz Improvisation and Organizing: Once More from the Top.” Organization Science 11 (2): 227–234.
Zaloom, Caitlin. 2006. Out of the Pits: Traders and Technology from Chicago to London. Chicago: University of Chicago Press.
Zwick, Detlev. 2005. “Where the Action Is: Internet Stock Trading as Edgework.” Journal of Computer-Mediated Communication 11 (1): 22–43.
———. 2013. “Digital Virtual Consumption.” In Digital Virtual Consumption, vol. 23, edited by Mike Molesworth and Janice Denegri Knott. New York: Routledge.
Zylberman, Patrick. 2013. Tempêtes microbiennes: Essai sur la politique de sécurité sanitaire dans le monde transatlantique. Paris: Gallimard.

CONTRIBUTORS

Gaymon Bennett is assistant professor of Religion, Science, and Technology at Arizona State University. He works on the problem of modernity in contemporary religion and bioengineering: shifting moral economies, contested power relations, and uncertain modes of subjectivity. His book Technicians of Human Dignity: On the Late Modern Politics of Intrinsic Worth (Fordham 2015) examines the rise of the figure of human dignity in twentieth- century international and religious politics and its current reconfigurations. His coauthored book Designing Human Practices: An Experiment with Synthetic Biology (Rabinow and Bennett 2012) chronicles an anthropological experiment in ethics with engineers reimagining the boundary of biology and computation. Frédéric Keck is the head of the research department at the Musée du quai Branly, and is attached to the Laboratoire d’Anthropologie Sociale at the Collège de France in Paris. He has done research on the history of social sciences and on the anthropological aspects of avian flu. He is the author of Claude Lévi-Strauss, une introduction (La Découverte, 2005), Lucien LévyBruhl, entre philosophie et anthropologie (CNRS Editions, 2008), and Un monde grippé (Flammarion, 2010). He has contributed to the volumes Biosecurity Interventions: Global Health and Security in Question (S. Collier and A. Lakoff, eds., SSRC–Columbia University Press, 2008) and Food: Ethnographic Encounters (L. Coleman, ed., Berg, 2011). He is the coeditor (with Andrew Lakoff ) of the issue of the journal Limn on sentinel devices (2013). Carol A. Kidron is associate professor in the Department of Sociology and Anthropology at the University of Haifa, Israel. Kidron has undertaken ethnographic work with Israeli Holocaust descendants and Cambodian and Cambodian Canadian genocide descendants. Her research interests include Holocaust and genocide commemoration, exploring the way trauma in-

240 / Contributors

forms subjectivities. Kidron’s publications include “Toward an Ethnography of Silence: The Lived Presence of the Past in the Everyday Life of Holocaust Trauma Survivors and Their Descendants in Israel” (Current Anthropology 2009) and “Alterity and the Particular Limits of Universalism: Comparing Jewish-Israeli Holocaust and Canadian-Cambodian Genocide Legacies” (Current Anthropology 2012). Rebecca Lemov is associate professor in the Department of the History of Science at Harvard, with interests in the history of behavior modification, data collection in the human sciences, brainwashing techniques, and technologies of self. Her current book, Database of Dreams: Social Science’s Forgotten Archive of How to Be Human (Yale University Press, forthcoming) examines attempts to map the elusive and subjective parts of the human psyche via once- futuristic data- storage techniques. In 2010–11 and 2013– 14 she was a visiting scholar at the Max Planck Institute for the History of Science in Berlin. Pat O’Malley is Professorial Research Fellow in Law at the University of Sydney and previously was Canada Research Chair in Criminology and Criminal Justice in Ottawa. Recent publications on uncertainty and risk include Crime and Risk (SAGE Publications, 2010), The Currency of Justice (Routledge-Cavendish, 2009), Gendered Risks (Routledge-Cavendish 2007), and Riesgo, neoliberalismo y justice penal (Ad-Hoc, 2006). Adriana Petryna is the Edmund J. and Louise W. Kahn Term Professor in the Department of Anthropology at the University of Pennsylvania. She has written on twentieth- and twenty- first- century risks and the role of scientific and ethnographic inquiry in realms of nuclear and medical uncertainty. She is the author of Life Exposed: Biological Citizens after Chernobyl (Princeton University Press, 2013 [2002]) and When Experiments Travel: Clinical Trials and the Global Search for Human Subjects (Princeton University Press, 2009). 
She edited, with Andrew Lakoff and Arthur Kleinman, Global Pharmaceuticals: Ethics, Markets, Practices (Duke University Press, 2006) and, with João Biehl, When People Come First: Critical Studies in Global Health (Princeton University Press, 2013). Paul Rabinow is professor of Anthropology at the University of California, Berkeley, where he has been teaching since 1978. His recent publications include Designs on the Contemporary: Anthropological Tests (with Anthony Stavrianakis; University of Chicago Press, 2014), Demands of the Day: On the Logic of Anthropological Inquiry (with Anthony Stavrianakis; University of Chicago Press, 2013), Designing Human Practices: An Experiment with Synthetic Biology (with Gaymon Bennett; University of Chicago Press, 2012), The Accompaniment: Assembling the Contemporary (University of Chicago Press,

Contributors / 241

2011), Designs for an Anthropology of the Contemporary (with George Marcus, James Faubion, and Tobias Rees; Duke University Press, 2008), and Marking Time: On the Anthropology of the Contemporary (Princeton University Press, 2007). Limor Samimian-Darash is a cultural anthropologist and assistant professor in the Federman School of Public Policy and Government at the Hebrew University of Jerusalem. Her research interests focus on security, risk, and disasters. She has conducted fieldwork on preparedness for biological threats in Israel and on biosecurity in the United States. She is currently researching technologies of uncertainty such as scenarios. Her work has been published in American Ethnologist, Sociology of Health and Illness, Anthropological Theory, Current Anthropology, and Ethnography. Natasha Dow Schüll is a cultural anthropologist and associate professor in MIT’s Program in Science, Technology, and Society. Her first book, Addiction by Design: Machine Gambling in Las Vegas (Princeton, 2012), drew on extended research among compulsive gamblers and the designers of the slot machines they play to explore the relationship between technology design and the experience of addiction. Her current book project, Keeping Track: Personal Informatics, Self-Regulation, and the Algorithmic Life, concerns the rise of digital self- tracking technologies and the new modes of introspection and self- governance they engender. Meg Stalcup is assistant professor in the École d’Études Sociologiques et Anthropologiques at the University of Ottawa. She has written on aspects of security, digital media, global health, ethnobotany, and research collaboration, most recently contributing to Revista Cadernos do CEOM, Health, Culture and Society, and the edited volume Policing and Contemporary Governance (Palgrave Macmillan 2013). 
Educated in the United States and Brazil, she studied biology and art before obtaining a PhD in medical anthropology from the University of California, Berkeley and San Francisco.

Eitan Wilf is assistant professor in the Department of Sociology and Anthropology at the Hebrew University of Jerusalem. He received his PhD in anthropology from the University of Chicago in 2010. His research interests focus on the institutional transformations of creative practice in the United States and Israel. He has conducted ethnographic research on the academization of socialization into jazz music and poetry, the cultural logic of creative practice in business corporations, and the development of art-producing computerized technologies. He is the author of School for Cool: The Academic Jazz Program and the Paradox of Institutionalized Creativity (University of Chicago Press, 2014).

Austin Zeiderman is assistant professor of urban geography in the Department of Geography and Environment at the London School of Economics and Political Science. He is an interdisciplinary scholar who specializes in the cultural and political dimensions of urbanization in Latin America, with a specific focus on Colombia. He holds a PhD in anthropology from Stanford University and a master's degree from Yale University, and his work has appeared in venues such as Public Culture (forthcoming), American Ethnologist, Environment and Planning A, openDemocracy, The Guardian, and Le Monde Diplomatique.

INDEX

Agaibi, Christine, 106, 107
Agamben, Giorgio, 90, 214n4
Agger, Inger, 115, 121
agnotology, 89, 214n5
Akgun, Ali, 30, 36, 209n1
All-in Expected Value (AIEV) calculator, 57–58, 211n11
Amoore, Louise, 72, 182, 183, 187, 213n12
anticipatory technology, 16, 70, 74–75, 84, 87, 182, 213n12
anti-viral drugs, 5–6
Appadurai, Arjun, 47, 63, 65, 211n13, 211n14
Aquinas, Thomas, 132, 135
Aradau, Claudia, 89, 183, 213n9
Arnoldi, Jakob, 47, 55, 64
attribution scenarios, 6, 140–141
avian influenza: birdwatchers and, 167, 178; engineered H5N1 strain, 124–125, 129, 141; H5N1 paper publication, 137–138; security versus science debate, 133–135
Bahl, Justin, 175, 176
baksbat (in Cambodia), 110–111, 119
Barrett, Rhodes, 29, 30, 35, 37, 41, 45
Beck, Ulrich, 2, 4, 103, 127, 187, 202
Bennett, Gaymon, 8, 123, 130, 136, 137, 140, 142, 143
Bentham, Jeremy, 20–22, 26, 210n4
Bernstein, Peter L., 13, 14, 26
Beveridge, William, 23, 24
Biderman, Albert, 97–98, 216n15
bioinformatics: excess information strategy, 175–176; global nature of, 173, 176–177; goal of, 173–175; mutations and, 175; processing of uncertainty, 173–174, 176, 177; time of disease emergence and, 175; wetlab and drylab work, 171–173
biosecurity: avian influenza and, 124–125; dual use research of concern, 129, 130, 139–140, 218n2; ethics and, 140, 143; Fink Report, 139; Fouchier's research, 123–129; H5N1 paper publication, 129, 137–139, 141; history of malice and, 136–137, 140, 142; indeterminacy in, 142–143; Kawaoka's research, 129, 134, 135, 137, 138; management of uncertainty in, 142; Nature, 129, 135, 172; preparedness and, 143–144; risk and uncertainty in, 124, 141; Science, 129, 134, 172; security versus science debate, 129–130, 133–135; underdetermination and, 142, 143
birdwatching (in Hong Kong): avian influenza and, 167, 178; ecosystemic perspective, 179; excess information strategy, 181; HKBWS founding, 177–178; mapping use, 179–180; photograph sharing, 180; practices of, 178–179; virtual space use, 181
Bjelopera, Jerome, 85, 86, 214n21
brainwashing, 92–97, 99. See also interrogation
Bush administration, 71–72, 92, 213n8
Butler, Judith, 90, 177

Cahill, Abigail, 163, 219n15
Cardona, Dario, 188, 189, 190
Carpenter, Stephen, 148, 151, 159
Castelão-Lawless, Teresa, 147
Chhim, Sotheara, 109, 111
CIA (US Central Intelligence Agency), 93
climate change: actuarial performances of risk, 152; aquatic ecosystems and, 148; Arctic ecosystem model, 153–154; bad versus good risk, 152–153; complexities of processes and, 161–162; costs of, 149; ecosystem recovery and, 159, 160; horizoning work in, 155–156, 163–164, 205, 218n2, 219n13; models of, 148–150, 154–156; phenomenotechnique and, 147–148, 150, 218n2; planetary domains, 161; repurposing prevention and, 163; scalability issues, 159–160; stabilization wedge, 156–159, 157f, 158f, 163; system behavior and, 160, 219n9, 219n15; threshold graph, 161, 162f; threshold of risks in, 151–152; tipping points in ecosystems, 153–154; uncertainty and, 150–151; viewpoint of skeptics, 162–163
Cold War, 92
Collier, Stephen, 5, 143, 172, 187
contract law, 19–20
Coronavirus (CoV), 167
Countering Violent Extremism (CVE), 73, 213n10, 213n11
Cunha, Miguel Pina, 30, 36, 45, 209n1
danger: actions based on, 198; certainty, risk, and, 1–3, 141, 170–171, 180; creation of, 130, 136, 138, 139, 140–142; detainees' release and, 91; disaster risk management and, 189; horizoning work and, 155; in insurance, 19; preparedness and, 143; tilt management and, 59
Dean, Mitchell, 4, 75
Defense Authorization Bill (2011), 90
de Goede, Marieke, 72, 213n9
Deleuze, Gilles, 54, 74
Department of Homeland Security, 76, 213n17
Department of Justice, 69
Dewey, John, 142, 204
Directorate for Emergency Prevention and Response (DPAE), Colombia, 194, 195–199
Doran, Nob, 17, 18
Douglas, Mary, 2, 15, 141, 169, 202
Drake, John M., 155
Drexler, Madeline, 168
dual use, 130, 133, 139, 140, 143, 144
dual use research of concern, 129, 130, 139–140, 218n2
Dulles, Allen, 93, 96
Durkheim, Emile, 170, 203
Dyba, Tore, 30, 36
Ebola, 165
ECCC (Extraordinary Chambers in the Courts of Cambodia), 112, 116
economics: entrepreneurs and, 14, 15, 26; foresight and, 20–21; global financial crisis and, 13–14; models of, 65, 211n14; risk's role in, 26; worker insecurity and, 40
Edkins, Jenny, 102, 121
Eghigian, Greg, 22, 23
enterprising prudentialism, 47, 212n14
entrepreneurship, 14, 15, 19–22, 24, 26, 121, 122
event technologies, 6–7, 74, 87, 211n9, 215n14. See also hazards
Ewald, François, 3, 47, 70, 73, 186, 213n8
Extraordinary Chambers in the Courts of Cambodia (ECCC), 112, 116
Fassin, Didier, 105, 107, 120
Fauci, Anthony, 133, 134, 137, 138
FBI, 70, 212n5
financial models, 65, 173, 211nn13–14
Fink Report, 139
Flahaut, François, 136, 137
flood insurance, 152–153
Florida, Richard, 31, 40
Food and Agriculture Organization (FAO), 168
foresight, 16, 19–22, 24, 52, 211n8
Foucault, Michel, 4, 75, 112, 127, 128, 131, 132, 135, 136, 143, 169, 183, 185–187, 214n4
Fouchier, Ron, 123, 134, 137, 138, 142
freedom of contract, 18, 23
fusion centers: bias in the system, 82; collaborating agencies, 76–77, 213n16; criminal intelligence approach, 77–78; incident documentation, 84–85; management of uncertainty, 74–75, 82; in Miami-Dade, 81–82; postevent labeling standard, 80; potential uncertainty and, 78–79; presupposition of discernment, 78, 79, 82–84, 213n18; purpose of, 76, 77; questioning of data, 84; standards development, 79–80; system test, 80–81, 86
Gaebler, Ted, 14, 16, 24
Galloway, Alexander, 55, 211n7
Garrett, Laurie, 133, 134
Geertz, Clifford, 50, 160, 210n4
GenBank, 172
Geneva conventions, 98
Giddens, Anthony, 2, 15, 24, 127, 211n5
Gioia, Ted, 43, 210n5
Global Facility for Disaster Reduction and Recovery (GFDRR), 191
global financial crisis, 13, 14
Goffman, Erving, 50, 63
Gone, Joseph, 105, 106
Goodman, Amy, 106, 214n4
grinding in online poker, 49, 210n3
Guantánamo Bay Detention Camp: detainees' potential for action, 89, 90; enhanced interrogation techniques and (see interrogation); hunger strikes, 90, 91; knowledge versus uncertainty, 89, 214n5; "not feasible for prosecution" designation, 91–92; potential uncertainty and, 92, 104, 215n14; practice of torture and, 92; prisoners' status, 89, 214nn3–4, 215n11; prison-facility construction, 90–91, 215nn8–9; psy-ops techniques at, 95f; status of, 88–90, 214nn1–2, 214–215nn6–7
H5N1 avian flu. See avian influenza; biosecurity
Hacking, Ian, 47, 113
Hamalainen, Virpi, 30, 209n1
Harlow, Harry, 93, 216n15
Hatch, Mary, 41, 45, 209n1
hazards: historical age of uncertainty, 182–183; key element of security, 185; management of in Colombia (see zones of high risk); past political technologies, 185–186; policy setting process, 184–187, 199–200; risk governance analysis, 187–188, 219n2; uncertainty and, 186, 187, 199, 200; urban environmental risks, 183
Hinton, Alex, 108, 110, 114
HIV/AIDS, 165
Hodson, Mike, 183, 185
Homeland Security Act (2002), 213n15
Hong Kong Birdwatching Society (HKBWS), 177–178
Hong Kong University (HKU), 167, 171
horizons of climate change. See climate change
HUD (heads-up display): behavioral indications, 52–54; customization potential, 53; data shown on, 51, 55, 211n7; foresight offered by, 52, 211n8; note taking utility, 54; uncertainty cultivation using, 55–56; vigilant monitor role, 54–55, 211n9
Hughes, Terry, 149, 155, 161, 163
IACP (International Association of Chiefs of Police), 76, 213n16
improvisation, 29–30, 34–38, 209n1, 210n5, 210n7
infectious diseases, 165–167, 169, 186. See also avian influenza; pandemic flu
Ingold, Tim, 171, 177
insurance: actuarially based principles in, 17–19, 152; bad versus good risk, 152–153; British liberalism and, 16–17; centralized control of, 17–18; contract law and, 20; flaws in private insurance, 21–22; foresight and, 19–22, 24; freedom and, 18, 25, 27; means taken to deal with risk, 73; rate of return for risk and, 152; risk and uncertainty in, 15–16, 24–28; risk-based insurance, 18; security provisions and, 20; social insurance and, 21, 23–24; statistics and probability in, 13–14, 22–23, 26; tort law and, 22; working class insurance, 17
intergenerationally transmitted trauma theory (ITT). See trauma theory/intergenerationally transmitted trauma theory
International Association of Chiefs of Police (IACP), 76, 213n16
International Organization for Animal Health (OIE), 168

International Strategy for Disaster Reduction (ISDR), United Nations, 191
interrogation: brainwashing and, 92–93, 99; coercive techniques use, 94–96; Cold War model, 97, 100; criticism of US intelligence, 101, 216n22; experimental program for, 94; framing of the techniques, 98–99, 101–104, 206, 216n20; Geneva conventions and, 98; mental state of released POWs and, 93–94; origins of policies, 92; precaution and, 5, 213n9; psychic driving method, 100; psychological versus physical torture, 102; psy-ops techniques, 95f; rationale for enhanced methods, 103, 217–218nn26–28; resilience and, 213n11, 216n22; SERE program and, 96–98, 216n18; subject of interrogation techniques, 99–101; transition in subjecthood, 103–104
ISDR (International Strategy for Disaster Reduction), United Nations, 191
jazz: artistic modernism and, 42–43, 210nn9–10; criticisms of the jazz metaphor, 45; emergent nature of, 37; encouraging innovation and, 36–37, 42, 210n6; improvisation model, 29–30, 34–38, 210n5, 210n7; middle-class Americans' fascination with, 43; organizational theorists interest in, 30, 35–36, 39, 43–44, 209n1, 210n5; potential uncertainty and, 30, 37, 38, 39; relation to uncertainty, 35, 42, 44–45; self-expression versus sincerity and, 38–39; straight-ahead jazz process, 34
Johns, Fleur, 89, 214n6
Joint Terrorism Task Force, 70
Kaitz, Marsha, 105, 107
Kamoche, Ken, 30, 36, 45, 209n1
Kawaoka, Yoshihiro, 129, 134, 135, 137, 138
Keck, Frédéric, 9, 165, 172, 219n1
Kidron, Carol A., 8, 105, 112, 118, 119, 122
Kirmayer, Laurence, 105, 106, 109
KUBARK Counterintelligence Interrogation manual, 94, 96, 98
Kunreuther, Howard, 150–151
Lakoff, Andrew, 5, 56, 143, 166, 172, 185, 187, 211n9, 215n14, 219n1
Latour, Bruno, 147, 218n2
Lemov, Rebecca, 8, 88, 100, 206, 216n16
Lévi-Strauss, Claude, 170
Lévy-Bruhl, Lucien, 169–170
Luhmann, Niklas, 3, 50, 52, 64, 126, 130, 133, 137, 141, 142, 143
Luther, Martin, 135–136
Macedo, Stephen, 71, 213n7, 213n8
MacKenzie, Adrian, 173, 174
malice, 136–137, 140, 142
Mantere, Saku, 30, 37, 209n1
Martin, Randy, 47, 50
Marvin, Simon, 183, 185
Massumi, Brian, 54, 71, 72, 73, 74, 210n7
Mayer, Jane, 97, 216n16, 216n19, 217n26
McCoy, Alfred, 92, 94
mental health (in Cambodia): baksbat and, 110–111, 119; dialogue project, 117; Euro-Western and Buddhist therapies, 115; future of, 121–122; hybrid interview/technologies use, 112, 116–117, 120; potential uncertainty and, 113–114, 116–117; precipitating event, 108; psychological assessments, 110, 113; psychosocial education, 112–113; PTSD and, 105, 107, 108, 110–111, 113, 119; technologies of intervention, 109; testimony therapy, 112, 114–117; training programs, 111–112, 113
Metropolitan Medical Response System (MMRS), 213n17
Miami-Dade Fusion Center (MDFC), 81
Mindszenty, József, 92–93, 94
Miner, Anne, 30, 36, 209n1
Mirvis, Phillip, 40, 41, 43–44
Monson, Ingrid, 38, 42
Moorman, Christine, 30, 36, 209n1
Muehlebach, Andrea, 30, 40, 210n2
Mueller, Karl, 71, 212n8
Multiple Sequence Analysis (MSA), 173
National Academy of Sciences, 138, 139, 149
National Center for Biotechnology Information, 172
National Institutes of Allergy and Infectious Disease (NIAID), 133
National Institutes of Health (NIH), 123, 129
National Insurance Act, England, 23

National Office of Emergency Response (ONAE), Colombia, 190
National Plan for Disaster Prevention and Response, Colombia, 189
National Science Advisory Board for Biosecurity (NSABB), 129, 133
National System for Disaster Prevention Response (SNPAD), Colombia, 190
Nature, 129, 135, 172
NIAID (National Institutes of Allergy and Infectious Disease), 133
NIH (National Institutes of Health), 123, 129
9/11 Commission, 101, 216n22
NOAA (US National Oceanic and Atmospheric Administration), 149
NSABB (National Science Advisory Board for Biosecurity), 129, 133, 138
Obama administration, 88, 92
OIE (International Organization for Animal Health), 168
O'Malley, Pat, 4, 8, 13, 18, 19, 22, 47, 209n3, 212n14, 216n21
ONAE (National Office of Emergency Response), Colombia, 190
online poker: AIEV calculator, 57–58, 211n11; behavioral indications, 52–54; bots' infiltration, 66; chat logs and hand histories, 51; deep play concept, 50, 210n4; first online poker game, 48; grinding, 49, 210n3; HUD display, 51–56, 211n7, 211n9; legality of, 48, 210n1; multitabling strategy, 49, 63–64, 210n2; note taking utility, 54; object of the game, 57–58, 65; post-session process, 46, 56–58; probability and statistics and, 47; provisional foresight and, 52, 211n8; return on investment tradeoffs, 49–50; revisiting of the game scenarios, 56; risk in, 50, 211n5; self-regulation and, 60–63; success of play criteria, 64–65; tilt and, 58–62, 211n12; uncertainty and, 47–48, 56, 63–65, 211n10
Oppenheimer, Michael, 151, 154
O'Reilly, Jessica, 151, 154
Oreskes, Naomi, 151, 154, 163
organizational uncertainty: the arts and, 39; focus on managerial leadership, 32; jazz and, 29–30, 35–38, 43–44, 209n1, 210nn5–7; new organizational templates and, 30–31; potential uncertainty and, 39; scientific management of production and, 31–32, 41–42, 210n8; uncertainty and, 32, 33–35; worker insecurity and, 40–41
Osborne, David, 14, 16, 24
Osterhaus, Albert, 123, 134
Osterholm, Michael, 134, 142
pandemic flu, 5–7, 123, 124, 125, 134, 138, 140, 167, 204–205. See also avian influenza; biosecurity
Pandemic Influenza Preparedness (PIP) framework, 137
Participatory Rural Appraisal (PRA), 111–112
Pecha Quimbay, Patricia, 191, 192, 193
Peters, Tom, 15, 24, 217n25
Peterson, Glenn, 77, 96, 216n17, 216–217n23
Petryna, Adriana, 9, 148, 205, 218n2
PFR (Pre-Flop Raise), 52–53
phenomenotechnique, 147–148, 150, 218n2
post-traumatic stress disorder (PTSD), 105, 107, 110–111, 113, 119
potential uncertainty: accountability and, 169; anticipation and, 74, 213n12; bioinformatics and, 176; contemporary urbanism and, 183; creation of a virtual space and, 177; described, 54, 186, 187, 199, 200; ecosystem recovery and, 159; enhanced interrogation and, 103, 217–218nn26–28; Guantánamo detainees and, 89, 90, 92, 104, 215n14; hazards management (see zones of high risk); in hunting, 170; in jazz improvisation, 30, 37, 38, 39; in national security (see Suspicious Activity Reporting); in online poker, 54–56, 62–63, 211n9; preemption and, 72; preparedness and, 166–167, 204; prevention and, 74, 116–117, 213n11; resilience and, 106, 107; risk and, 129–130, 152; in a science publication, 138–139; sentinel devices and, 170–171; transgenerational suffering and, 106–107; trauma association, 113–114
precaution, 5, 213n9
preemption, 71, 72–73
Pre-Flop Raise (PFR), 52–53

preparedness: attribution scenario and, 6, 140–141; biosecurity and, 133, 134, 143–144; pandemic flu and, 5–7, 123, 124, 125, 134, 167, 204–205; potential uncertainty and, 166–167, 204; sentinel devices and, 171, 181; technologies for, 143–144, 166–167
prevention, 71–74, 116–117, 163, 212n9, 213n11
problematization, 201–207
Procter, Robert, 89, 214n5
PTSD (post-traumatic stress disorder), 105, 107, 110–111, 113, 119
public health: bioinformatics and, 133, 134, 137, 172; pandemic flu preparedness, 5–7, 123, 124, 125, 138, 140, 167; politics of, 170; threat from animals, 166; trauma victims and (see mental health (in Cambodia)). See also infectious diseases; sentinel devices
Pupavac, V., 107, 120
Rabinow, Paul, 118, 124, 128, 130, 136, 137, 140, 141, 142, 143, 150, 156, 163, 172, 187, 194, 213n18
Ramírez Gomez, Fernando, 188, 189, 190
rational system, 32–33
Regional Information Sharing System centers, 76
resilience: in Buddhism, 119; climate change and, 153, 161; future uncertainty and, 183; interrogations and, 213n11, 216n22; trauma victims and, 106, 107
Rheinberger, Hans-Jörg, 147, 148
risk: actuarial performances of, 152; bad versus good, 152–153; concept of, 3, 127; danger versus, 141; dimensions of, 126; domains in, 7–9; foresight and, 22; governments and, 4–5; management of in Colombia, 188–192; modernity and, 2–3; risk society approach (see risk society approach); technologies for, 4, 5–6, 37, 203, 215n14; threshold in climate change, 151–152; uncertainty and, 1, 3–4, 129–130
risk society approach: contemporary versus modern, 2, 128, 147, 187; danger and, 2, 3; dimensions of risk, 126–127; framing of risk in, 129–130; insurance and (see insurance); technologies use, 4–6
Rockström, Johan, 154, 155
Rose, Nikolas, 93, 113, 118, 121, 172, 211n5
Samimian-Darash, Limor, 5, 30, 37, 54, 70, 74, 92, 106, 113, 118, 127, 141, 142, 143, 166, 169, 187, 203, 204, 211nn9–10, 215n14, 219n12
SAR. See Suspicious Activity Reporting
SARS (severe acute respiratory syndrome), 165
Scharf, Miri, 105, 106
Schauer, Elisabeth, 106, 107, 111
Schauer, Maggie, 106, 107, 111
Scheffer, Marten, 150, 153, 154, 159, 160, 164
Schein, Edgar, 93, 95, 216n15
Schiebinger, Londa, 89, 214n5
Schüll, Natasha Dow, 8, 46, 48, 211n12
Science, 129, 134, 172
Scott, James, 15, 31, 32, 33, 34, 36, 210n3
security: anticipation and, 74–76, 82, 213n12; national (see Suspicious Activity Reporting); overview, 70–71; preemption and, 71, 72–73; preparedness and, 143–144; prevention and, 71, 73–74, 212n9, 213nn10–11; in public health measures (see biosecurity; sentinel devices)
sentinel devices: accountability and, 169; bioinformatics and, 173–177; birdwatchers' knowledge sharing, 178–180; ecosystemic perspective, 179; excess information strategy, 175–176, 181; HKBWS founding, 177–178; human-animal interface and, 168; hunting societies and, 170–171; management of uncertainty and, 168–169; molecular clock and, 175; pandemics and, 167; potential uncertainty and, 170–171, 176; primitive mentality versus modern science, 169–170; syndromic surveillance and, 167; virtual space use, 177, 181; wetlab and drylab work, 171–173
SERE (Survival Evasion Resistance Escape), 96–98, 216n16, 216n18
severe acute respiratory syndrome (SARS), 165
Shoshan, Nitzan, 30, 40
Sickness Insurance Law, England, 23
Sillince, John A. A., 30, 209n1

Simon, Stephanie, 5, 80
smallpox vaccination, 165, 169, 186
SNPAD (National System for Disaster Prevention Response), Colombia, 190
Socolow, Robert, 151, 156
stabilization wedge, 156–159, 157f, 158f, 163
Stalcup, Meg, 8, 82
Stavrianakis, Anthony, 142
SUIVD (Unified Information System of Violence and Delinquency), 192
Survival Evasion Resistance Escape (SERE), 96–98, 216n16, 216n18
Suspicious Activity Reporting (SAR): attempted terrorism in the US, 86, 213–214n20; chance encounters, 69, 212nn1–3; entrapment defense, 86, 214n21; impetus for, 75–76, 213n13; incident documentation and, 70, 78–79, 84–85; legality of, 76, 213n15; non-federal venues (see fusion centers); potential uncertainty inherent in, 72, 82; preemptive case building, 85–86; purpose of, 69, 81, 86–87, 212n4; uncertainty in public activities and, 70
syndromic surveillance, 6–7, 167, 211n9
taming of chance, 47
testimony therapy, 112, 114–117
Thompson, Ashley, 110, 217n24
tort law, 19–20, 22, 209nn2–3
Transcultural Psychosocial Organization (TPO), 108–109, 116, 118–119. See also mental health (in Cambodia)
trauma: intergenerational transmission theory, 105–107, 118–122; post-traumatic stress disorder, 105, 107, 110–111, 113, 119; potential uncertainty and, 113–114; public health and (see mental health (in Cambodia)); resilience and, 106, 107; TPO and, 108–109, 116, 118–119
trauma theory/intergenerationally transmitted trauma theory (TET/ITT), 105–107, 118–122. See also mental health (in Cambodia)
uncertainty: biological (see bioinformatics); Catholic Church and, 131–132; in climate change (see climate change); concept of, 3; contemporary versus modern, 128; danger and, 2; descriptive/cultural mode, 205–206; domains in, 7–9; foresight and, 22; freedom and, 13–17, 24–25, 28; management example, 5–7; modes of inquiry for, 207; neoliberals' view of, 15–16, 24–25, 27, 28; notion of problematization and, 201–207; in online poker, 47–48, 56, 63, 64, 65, 211n10; in organizations (see organizational uncertainty); philosophical/external mode of, 206–207; politics of blame and, 141; potentialities inherent in (see potential uncertainty); preparing for the future and, 124, 132–133; risk society approach and, 2–6, 126–130, 147, 187, 205; risk versus, 1, 3–4, 27–28, 129–130; systems versus system behaviors and, 160, 219n15; technologies of, 5, 8, 22, 25, 121, 205; theoretical/totalizing mode, 204–205; underdetermination and, 142, 143
underdetermination, 142–143
UNDP (United Nations Development Programme), 190
Unified Information System of Violence and Delinquency (SUIVD), 192
US Agency for International Development (USAID), 188
US Central Intelligence Agency (CIA), 93
US National Oceanic and Atmospheric Administration (NOAA), 149
Van Munster, Rens, 183, 213n9
Victims Support Services (VSS), 116
Vijaykrishna, Dhanasekaran, 174, 175
Voluntary Put in Pot (VPIP), 52–53
War on Terror, 92, 101
WCS (Wildlife Conservation Society), 168
Weber, Max, 15, 62
Wedel, Janine, 26, 107, 109, 112
Weick, Karl, 29, 30, 33, 45
West, Louis Jolyon, 93, 96, 98, 216n15
West Nile virus, 166
WESU (Witness and Expert Support Unit), 116
WHO (World Health Organization), 123, 137, 165, 168
Wildavsky, Aaron, 2, 169, 202
Wildlife Conservation Society (WCS), 168

Wilf, Eitan, 8, 29, 38, 39, 42, 44
Wilson, Joseph, 106, 107
Witness and Expert Support Unit (WESU), 116
Woolgar, Steve, 147, 218n2
World Bank, 191
World Health Organization (WHO), 123, 137, 165, 168
Zaloom, Caitlin, 47, 59, 211n8, 211n10
Zeiderman, Austin, 9, 182, 206
zones of high risk: emergency fund, 192; epidemiological perspective in, 192–193; geophysical risk calculation, 192; governing of potential uncertainty, 199, 206; government decision making, 183–184, 193–194; inspection process, 194, 195–199; man- versus nature-made damage, 198; mitigation process for, 195; neighborhood conditions, 194–195; risk identification methodology, 193; threat levels, 193, 194, 219n3