
HEADING FOR THE SCENE OF THE CRASH

Loose Can(n)ons

Editor: Bruce Kapferer, Professor Emeritus of Anthropology, University of Bergen, and Honorary Professor, University College London

Loose Can(n)ons is a series dedicated to the challenging of established (fashionable or fast conventionalizing) perspectives in the social sciences and their cultural milieux. It is a space of contestation, even outrageous contestation, aimed at exposing academic and intellectual cant that is not unique to anthropology but can be found in any discipline. The radical fire of the series can potentially go in any direction and take any position, even against some of those cherished by its contributors.

Volume 1

Starry Nights: Critical Structural Realism in Anthropology
Stephen P. Reyna

Volume 2

PC Worlds: Political Correctness and Rising Elites at the End of Hegemony
Jonathan Friedman

Volume 3

Heading for the Scene of the Crash: The Cultural Analysis of America
Lee Drummond

HEADING FOR THE SCENE OF THE CRASH


THE CULTURAL ANALYSIS OF AMERICA

Lee Drummond

berghahn NEW YORK • OXFORD www.berghahnbooks.com

First published in 2018 by Berghahn Books
www.berghahnbooks.com

© 2018 Lee Drummond

All rights reserved. Except for the quotation of short passages for the purposes of criticism and review, no part of this book may be reproduced in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system now known or to be invented, without written permission of the publisher.

Library of Congress Cataloging-in-Publication Data
A C.I.P. cataloging record is available from the Library of Congress.

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library.



ISBN 978-1-78533-647-8 hardback
ISBN 978-1-78533-648-5 ebook

In Memoriam
Edmund Leach, A Runaway World? (1968)
(a half century later, and accelerating)

Contents

List of Illustrations    viii
Introduction    1
Chapter 1. Jonestown: An Ethnographic Essay    15
Chapter 2. News Flash! Cultural Anthropology Solves Abortion Issue! Story at Eleven! (Being a Cultural Analysis of Sigourney Weaver’s Aliens Quartet)    55
Chapter 3. Lance Armstrong: The Reality Show    98
Chapter 4. Shit Happens: An Immoralist’s Take on 9/11 in Terms of Self-Organized Criticality    120
Bibliography    177
Index    183

Illustrations

Figure 0.1 Apartment buildings, Pacifica, California, during the 2016 El Niño    ix
Figure 0.2 Quantum entanglement    5
Figure 2.1 Saartjie Baartman, on display 1810–1815, London, Paris    84
Figure 2.2 Venus of Willendorf, circa 22,000 years BC    85

Figure 0.1. Apartment buildings, Pacifica, California, during the 2016 El Niño. © 2002–17 Kenneth and Gabrielle Adelman, California Coastal Records Project, www.Californiacoastline.org.

Introduction
Cultural Analysis as Cultural Critique

Thirty years after the publication of Marcus and Fischer’s Anthropology as Cultural Critique (1986), American anthropologists have done little to realize that project. Principal evidence of that failure is the near-total lack of an anthropological voice in forums for public intellectual debate in the United States. That deficiency is less of a problem in Europe, where social-cultural anthropologists have a cachet denied their American colleagues. It is also true that innovative print and online resources in which the anthropologist engages the modern world are both more successful and more substantive in Europe than in the United States.1 Despite these encouraging developments there remains, I think, a need, following on Marcus and Fischer’s decades-old call, for a sustained cultural critique of American society. To conduct an exploratory move toward that end is my goal in this collection of essays. While trying to take into account international work in the discipline of anthropology, I am resolute in my attempt to examine prominent features of American life from the perspective of an American anthropologist—a native writing about the natives, if you will.

I write in what I perceive is a time of crisis for American anthropology, that “queen of the sciences” that has pretty much abdicated its responsibility in the crucial matter of explaining ourselves to ourselves. My sense is that someone needs to carry the torch, however weak and flickering, into public discussions, discussions that today are of increasingly grave importance. While it is far too ambitious to suppose that an anthropological voice will be heard amid the din of 24/7 cable news and its army of talking heads or in the increasingly beleaguered world of thoughtful print journalism, it is still worth making the effort to chip away at our discipline’s willful irrelevance.

At the risk of sounding too old and grumpy, I submit that academic anthropology in the United States is in frightful shape. For the most part we are still, in the well-worn phrase, the eccentric in pursuit of the exotic. If anything, I’m afraid quite a few anthropologists have gone in the opposite direction, accepting politically correct accounts of neoliberalism, globalization, climate change, gender, race, etc., which they should examine critically. Compounding that problem is a distressing tendency in much anthropological writing to raise the academic drawbridge, embedding topics that just might have real-world importance in a matrix of bloodless and essentially incomprehensible terms. A little time spent looking over the titles of leading anthropology journal articles and symposia produces the following specimens:

“theorizing the contemporary”
“socialities of indignation”
“individualized tourist mobilities”
“temporalities of displacement”
“the injury of precarity”
“articulating potentiality”
“the critical imaginary”
“the architecture of collective intimacy”

What could these phrases possibly mean, and why should they matter a whit to anyone outside the tiny community of social-cultural anthropologists? Note the obscurantist techniques employed here. Perfectly good adjectives in the English language, such as “contemporary” and “imaginary,” are rendered as nouns, leaving the reader to puzzle over: the contemporary … what?, the imaginary … what? And everywhere there is an apparently irresistible urge to add the abstract suffix “-ity” to ordinary words, blurring their meaning and introducing a stilted, antiseptic tone to the prose: “sociality,” “alterity,” “temporality,” “precarity.” It is a practice befitting Castiglione. Just as we badly need to broaden our appeal to the thinking public (admittedly a shrinking constituency), we retreat to a “safe space”—the going trope for intellectual cowardice—where, rather than cookies and milk, a tasteless gruel of gibberish is served up. Cultural anthropologists used to be explorers of the diverse social worlds out there; they have now become refugees in their own societies. Whatever happened?

This collection of four essays represents my attempt to refashion cultural analysis (or, as I have come to prefer, anthropological semiotics) into a hard-edged critical tool for the study of American society and culture. Be advised: there are no “safe spaces” in what follows. In the four essays I develop a fairly radical form of cultural critique that owes a great deal to the thought of Friedrich Nietzsche. The analyses offered here proceed from an aphorism in Twilight of the Idols ([1889] 1954: 473):

Could it be that wisdom appears on earth as a raven, inspired by a little whiff of carrion?


In their role as ethnographers, cultural anthropologists often emphasize the importance of empathy with their subjects; they strive to see things “from the native’s point of view” (Geertz 1983: 55) and somehow to translate that vision into discourse accessible to a Western audience. Here I suggest we adopt a different, harsher stance: the anthropologist needs to steel himself or herself to function as a pathologist of the social, to follow that “little whiff of carrion” as it leads into the bowels of human existence. In his laboratory the medical pathologist dissects the diseased or broken bodies brought to him; in the social milieu of the field (whether in faraway places or, as in the present essays, right here at home) the cultural pathologist delves into the beliefs and institutions of societies experiencing major crises. Stark as this vision is, I think it is true to the original mission of cultural anthropology, which was charged (by its imperialist masters) with conducting up-close, on-the-ground studies of the fragmented, sick, and dying societies left in the wake of Western expansion. We have had a lot of experience of living with and studying pathology; tragic as it may seem, it is our heritage and should not be denied.

The Essay Topics

The four essays focus on the Jonestown, Guyana, massacre-suicides, the Lance Armstrong sports-doping scandal, the Aliens movies as they bear on the abortion issue, and the 9/11 terrorist attacks. What do these four subjects possibly have in common? My answer in brief: they are all high-profile, “big ticket” events or cultural productions that have engaged the American public at large. If cultural anthropology is to have any chance of acquiring that public voice whose absence I lamented above, I think it is essential to apply whatever analytic skills we may possess to matters of real importance and interest to the people around us.

It should be noted that my perspective here cuts against the grain of traditional anthropology, in which the common, the sensational, the popular are regarded as somehow inauthentic, unworthy of scholarly attention, the province of journalists and the swollen mob of media pundits. While the latter yearn to be flies on the wall of a room where big decisions are being made, big events planned, the anthropologist as ethnographer spends his or her time in rooms where nothing much ever happens. No wonder, then, that his or her reports inspire so little excitement in the general public. Even though the traditional haunts of anthropologists—the Amazon, New Guinea, the South Seas—have given way to fieldwork sites in national societies, these still tend to be “exotic” in the sense of being apart from our daily lives. For example, here are a few titles of articles from the May 2016 issue of American Ethnologist:

“Belonging in Ethno-erotic Economies: Adultery, Alterity, and Ritual in Postcolonial Kenya”
“SIM Cards of Desire: Sexual Versatility and the Male Homoerotic Economy in Urban Congo”
“Everyday Recomposition: Precarity and Socialization in Thailand’s Migrant Workforce”
“Fixed Abodes: Urban Emplacement, Bureaucratic Requirements, and the Politics of Belonging among West African Migrants in Paris”
“Sharia, Charity, and minjian Autonomy in Muslim China: Gift Giving in a Plural World”

Our friends “alterity” and “precarity” make their reliable appearance here, along with a new entry, “recomposition,” or specifically, “everyday recomposition.” Again, a cloud of dust bunnies begins to form in one’s brain—at least in mine. Isn’t “composition” pretty much a one-off affair? If you “compose” a letter, a poem, a song and then change it, aren’t you revising or editing and not “recomposing” it? That would seem to be a very special kind of change, not “everyday” at all. Also, if the subject is a living being, wouldn’t it experience rejuvenation rather than “recomposition”? Please note that these remarks are not criticisms of the individual articles. I have not read them; they may well be erudite, perceptive pieces. It is rather that I approach them in something of the way that a sailor casts out a sounding line in shoal waters: here is shallow water, avoid it; there is deeper water, steer toward it. Search out the deep water—a useful guide, I think, for doing cultural analysis. It is true that much current anthropology focuses on matters closer to home than adulterers in Kenya, West African migrants in Paris, or Muslims in China. In the United States ethnographers study a wide range of subjects: inner-city ethnic neighborhoods, migrant camps, retirement communities, hospitals, cults, motorcycle gangs, and hedge fund companies, to name but a few. My point here, however, is that all these research topics are sufficiently small scale and “other” to qualify as legitimate subjects for anthropological treatment. They are decidedly not the headline stories or blockbuster movies I have staked out as my subject matter. The essays collected here pursue that contrary path. I contend it is precisely those cultural productions, events, beliefs, practices that attract or hold sway over tens of millions of Americans that are the key to understanding what American society is all about. I develop this argument at length in my book, American Dreamtime: A Cultural Analysis of Popular Movies, and Their Implications for a Science of Humanity (Drummond 1996), which contains detailed analyses of several blockbuster movies.


Analytical Approach

Quite apart from their subjects, the four essays depart from traditional anthropological practice in another important respect. That departure consists in an inquiry into the nature of an event. Most of us are comfortable with the usual definition of “event”: it is an occurrence; something that happens at a particular place and time, perhaps something notable that causes us to single it out from the daily flow of our experience. It is discrete, bounded, not only in space and time but in our thought. You can draw a mental line around it. So when someone mentions Jonestown, 9/11, the Aliens movies, Lance Armstrong’s appearance on the Oprah show, our response is rooted in that deeply held assumption that these are all well-defined “events.”

These essays explore the idea that the event is far more complex. When something happens, it generates an intricate web or network of associations and interpretations that radiate out indefinitely. The here and now come to affect the there and then; a single experience makes itself felt in other contexts and, of utmost importance here, acquires a set of often discrepant meanings. It may be helpful to liken this idea—in a purely analogical way, to be sure; I do not want to be accused of physics envy—to the concept of quantum entanglement in theoretical physics. Once two particles have interacted, they continue to influence each other and the entire series of particles with which they interact in the future. Here’s a nice representation of the quantum entanglement of particles, events, and, yes, society-culture.

Figure 0.2. Quantum entanglement. © Natali art collections/Shutterstock.com.

This is the critter with which cultural analysis must deal. I suggest that a cultural event, like a quantum event, is unbounded in this sense. Its meaning is somehow—and here the physicists seem none too sure of how it works—transmitted, reflected, refracted, contradicted as it reverberates through the human and physical worlds.

Failure to incorporate—or even consider—the possibility that the event is such a complex thing is responsible for the alarming fact that most public intellectual debate of important topics (and even not-so-public discussions among anthropologists) exhibits a curious form of ethnocentrism. By this I mean that the usual reaction to an important event is to place it within an existing structure of belief, rather than use it as a means for interrogating or—remember the cultural pathologist!—dissecting that structure. The assumption is that we already know what happened; now we just have to explain it. The whole point here, however, is to use the essay topic—Lance Armstrong, 9/11, etc.—as a lens or probe with which to view and dissect the social-cultural context of the event. I want to upend the usual process of interpretation—certainly that of the media-pundit variety—in which one begins with the assumption that the subject of discussion is well defined and blithely proceeds to flesh out its meaning in a particular social context. It is rather that the meaning—or set of discrepant meanings—of the social context itself (i.e., “society”) emerges only through a close inspection/dissection of the event. If my proposal seems more suggestive than substantive, the following synopses of the four essays may firm it up.

“Jonestown: An Ethnographic Essay”

Until the 9/11 terrorist attacks, the mass suicides-massacre at Jonestown, Guyana, in 1978 represented the greatest single loss of American civilian life in a deliberate act in modern history. Over nine hundred people died. The atrocity triggered an avalanche of newspaper and magazine articles, television news stories and specials, and a number of books. A minor contribution to that burgeoning corpus was a letter to the New York Times, “No End of Messiahs,” by the renowned anthropologist and public intellectual Marvin Harris (1978). The piece is a glaring example of that ethnocentrism I spoke of, anthropology’s version of original sin, and here committed by one of its luminaries. For Harris, and for virtually all writers on the subject, Jonestown was an American tragedy. Jim Jones and his Peoples Temple followers were Americans who established their community in Indiana before relocating to California and from there to Guyana. For its chroniclers, including Harris, the fact that the Peoples Temple was located in Guyana was little more than an exotic backdrop for a phenomenon that whetted the American appetite: cults. Harris’s piece was a harangue against cults and, he argued, the deplorable American educational system that facilitated their formation. He even managed to implicate the editorial board of the American Anthropological Association in all this. This is the very opposite of true ethnography, an arrogant travesty that insists on framing everything, even something as bizarre as Jonestown, in terms familiar to an American audience.

My essay adopts the contrary perspective: that the jungle tragedy be treated, again, as a lens through which to view the social and historical context of Jonestown. I am singularly qualified to pursue that tack, having spent over two years in Guyana, most of that time in Amerindian villages within a hundred miles of the Peoples Temple community. Like that image of quantum entanglement, the event of Jonestown is complex, incorporating matters as diverse as American politics and race relations, corruption within the Guyanese government, Guyanese land development programs, Caribbean reggae, Georgetown (Guyana) street gossip, and even an eerily similar millenarian cult that flourished among Amerindians of the Guyanese interior some 135 years before Jonestown. The essay thus argues that ethnography—anthropology’s stock in trade—consists in identifying a host of discrepant meanings and puzzling out their possible interconnections. And, most important, in that quest the ethnographer does not have any sort of authoritative voice; his puzzling over things is of a piece with that of the “natives.” We are all conflicted, just struggling to find our way.

“News Flash! Cultural Anthropology Solves Abortion Issue! Story at Eleven! (Being a Cultural Analysis of Sigourney Weaver’s Aliens Quartet)”

Abortion is perhaps the most divisive issue within contemporary American society, with all indications being that both its rhetoric and its violence will intensify over the coming years. This essay proposes that the conflict is not amenable to any conventional solution: the forces of light or darkness will neither triumph nor agree to compromise. Rather than American society figuring out what to do about the abortion issue, in all likelihood the intractable nature of the problem will prove a key element in transforming fundamental cultural values and ideas concerning human reproduction, medical science, and the emerging phenomenon of biotechnology. Given the critical nature of the problem, it is disappointing that partisans and media commentators have done little more than bundle up the platitudes of “freedom to choose” and “right to life” in more or less strident rhetoric. The most radical and far-reaching treatment of human reproduction in a future world of biotechnology has come from a perhaps unexpected source: Sigourney Weaver’s Aliens quartet. The essay conducts a cultural analysis of those movies and, in the process, identifies a solution to the abortion issue. Story at eleven!

“Lance Armstrong: The Reality Show”

When Armstrong appeared on Oprah in January 2013 and confessed to years of taking “performance-enhancing drugs,” there was widespread dismay, condemnation, a sense of betrayal among the public. A favorite son had violated a sacred taboo: the athlete is supposed to be a superb example of the possibility of physical perfection. He or she is natural. As with Jonestown, the corporate media seized on the occasion, printing or airing exposé after exposé of the racer’s career, tracking down trainers and physicians who might have been involved, detailing the physical dangers of taking those drugs (and thus providing moral guidance to kids just getting interested in endurance sports). But what was behind that reaction? A thorough cultural analysis—again, a pathologist’s dissection—treats the Armstrong event as, yet again, a lens through which to examine its underlying beliefs. Of utmost interest here is the belief that there exist two fundamental categories of being: the natural and the social (here including technology). That belief is the foundation of a folk taxonomy that operates in American (and generically Western) society. Please note that this claim should not be confused with Lévi-Strauss’s magisterial argument that the Nature/Culture opposition is at the wellspring of human thought. Mine is not nearly so grand an argument. Here I am simply suggesting that ordinary Americans (and, again, generic Westerners) going about their daily lives see their world as neatly divided between what is natural or physical and what is artificial or technological. When someone or something appears to cross that boundary marker, it constitutes a violation of a sacrosanct taboo (not unlike the incest taboo), which then triggers a powerful, society-wide emotional response. All hell breaks loose. For us, the athlete is clearly in the domain of the physical. Armstrong’s self-serving behavior broke the rules; he contaminated his superbly conditioned physical body with the dangerous products of the pharmaceutical industry. For that, he must be chastened and must pay dearly. Incidentally, a drama of just this sort, but without a readily identifiable miscreant, is being played out in the furor over introducing genetically modified organisms (GMOs) into our food supply.

The Lance Armstrong scandal involves a second major foundation of American/Western culture: the ethic of competing to win. My title describes the scandal as a “reality show” because Armstrong’s actions exemplify the sort of behavior that has become endemic in popular culture with the ascendancy of television reality shows. Their popularity attests to the fact that their theme of competing to win resonates mightily with audiences; Americans/Westerners consider it entirely natural—that word again—for people to compete over everything in life and, most important, to compete to win. The essay traces some implications of that ethic throughout American history and into the present day. As I write (July 2016), a certain individual is attempting to parlay our obsession with winning into a seat in the Oval Office. As not a few have suggested, our presidential politics may be the ultimate reality show. In a suitably anthropological manner, the essay points out that competing to win is not a universal human drive; it is, rather, a feature of our particular pathology that the cultural analyst dissects. I develop that idea through a close comparison between an episode of the most famous reality show, Survivor, and the former practice of log racing among Gê-speaking tribes of central and northeastern Brazil.

“Shit Happens: An Immoralist’s Take on 9/11 in Terms of Self-Organized Criticality”

I have referred to the arrogant ethnocentrism of Marvin Harris (1978). However, this pales in comparison with that of George Jr. immediately following the 9/11 attacks. When Bush surfaced that evening, he assumed a sad puppy-dog expression and declared, “Today the world has changed.” The world, not the United States. The media firestorm following the actual firestorm of the twin towers gave the impression that was indeed the case. Virtually all accounts of the 9/11 attacks agreed on just what had happened; the nonstop television images of the planes slamming into the buildings were burned onto the retinas of viewers. It was The Event, its nature indisputable. The dichotomies tumbled out and were paraded as God’s truth: Good vs. Evil; Freedom vs. Tyranny; Rationality vs. Fanaticism.

But consider my remarks here regarding the nature of an event. Staggering catastrophe that it was, the 9/11 attacks were not a discrete phenomenon, an occurrence well-defined in time and place whose what-ness was indisputable and whose why-ness needed only to be determined by the dozens of pundits drawn to it like bees to honey. No, as with Jonestown and the Lance Armstrong scandal, the meaning of 9/11 as event is unbounded, multivocal, emergent, filled with discordant, contradictory messages. Approaching 9/11 from that perspective, the cultural analyst again assumes the role of pathologist of the social, picking through a host of discrepant meanings to determine how they may be connected. Here she adopts the persona of Nietzsche’s “immoralist” as that figure informs Nietzsche’s philosophy, from Daybreak ([1881] 1997) all the way through The Will to Power ([1883–88] 1968). A principal goal here is to overthrow the ethnocentrism that prevents a comprehensive inquiry into 9/11 by cloaking it in a strictly American context. To accomplish that, the essay ranges far: from contemporary responses to the disaster, media accounts of the war in Afghanistan, Chinese earthquakes, and the Afghanistan of two thousand years ago, on through the tenants and contents of the World Trade Center. Because Nietzsche proposed going “beyond good and evil” through a “revaluation of all values,” the essay takes up what may be the ultimate ethical question: the value of human life.

In a departure from Nietzsche, the essay proposes the application to 9/11 of a modern approach he may well have rejected: complexity theory. A crucial feature of that theory is “self-organized criticality,” the proposition that complex systems—including societies—arrange themselves through a series of checks and balances to the point that they are perpetually on the verge of undergoing fundamental transformation. That is the idea I apply to the earthshaking, supposedly carved-in-stone Event of 9/11. It is a logic of things that just happen. And in that dreadful case, shit happened. The reader should note that the essay was composed during the months following the attacks and as such reflects the raw emotions of that time.
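For readers who would like to see the mechanics of that proposition rather than take it on faith, the canonical toy model of self-organized criticality is the Bak–Tang–Wiesenfeld “sandpile”: drop grains one at a time onto a grid, let any overloaded site topple onto its neighbors, and the pile settles of its own accord into a critical state where the next grain may do nothing at all or may set off an avalanche of any size. The short program below is a minimal sketch of that model, offered purely as an illustration of the concept; it is not drawn from the essay itself, and the grid size, threshold, and names are my own.

    import random
    from collections import Counter

    SIZE = 20       # the pile lives on a SIZE x SIZE grid
    THRESHOLD = 4   # a site topples when it holds this many grains

    grid = [[0] * SIZE for _ in range(SIZE)]

    def topple(x, y):
        """Topple site (x, y): shed THRESHOLD grains, one to each neighbor.
        Grains pushed off the edge of the grid are simply lost."""
        grid[x][y] -= THRESHOLD
        neighbors = []
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < SIZE and 0 <= ny < SIZE:
                grid[nx][ny] += 1
                neighbors.append((nx, ny))
        return neighbors

    def drop_grain():
        """Drop one grain at a random site, relax the pile completely,
        and return the size of the resulting avalanche (topple count)."""
        x, y = random.randrange(SIZE), random.randrange(SIZE)
        grid[x][y] += 1
        stack, topples = [(x, y)], 0
        while stack:
            sx, sy = stack.pop()
            if grid[sx][sy] < THRESHOLD:
                continue
            topples += 1
            # the toppled site itself may still be overloaded, so re-check it
            stack.extend(topple(sx, sy) + [(sx, sy)])
        return topples

    # Let the pile organize itself, then tally avalanche sizes.
    sizes = Counter(drop_grain() for _ in range(100_000))
    for s in sorted(sizes)[:12]:
        print(f"avalanches of size {s:3d}: {sizes[s]}")

Run long enough, the tally shows the signature of criticality: most grains trigger nothing, yet avalanches of every scale occur, with no characteristic size—small causes occasionally producing enormous effects, a logic of things that just happen.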

Historical and Theoretical Perspectives

The History of American “Social Science”

These essays follow that “little whiff of carrion” Nietzsche took to be the stimulus of a nascent human intellect. But what is behind that “little whiff”? Where is the actual carrion, the rotting corpse that gives off that stench? Not to mince words, where the discipline of anthropology and much of American social thought is concerned, I suggest that rotting corpse is the entire edifice of American social research that took shape immediately after World War II. I believe future intellectual historians will look with amazement and horror at the formation at Harvard in 1946 of the Department of Social Relations and its official platform, if you will, as embodied in the major work of its first chairman, Talcott Parsons: The Social System, published in 1951. In that work Parsons launched the scheme of pattern variables to describe all social action and embedded them in a systems-theory framework mostly derived from Norbert Wiener and his concept of homeostasis (1948). Society, the social system, possessed a well-defined set of subsystems that interacted to produce homeostasis or stability. The structure functioned to maintain itself as a well-integrated, stable whole. This doctrine of “structural functionalism,” as it came to be called, spread from the temple of Harvard University to nascent sociology and anthropology departments across the United States; it became the gospel for indoctrinating future generations of social researchers.

The horror, the sheer intellectual dishonesty and travesty of all this, is that Parsons and his eminent cohorts (including an elder statesman of anthropology, Clyde Kluckhohn) advanced their theory at a time when the smoke had hardly lifted from the great killing fields of World War II and when the stacks of rotting corpses still gave off a stench that should have been discernible even on the Harvard campus. Some seventy million people were killed in that war, and surely at least an equal number were left maimed or with permanently blighted lives; how could any thinking person advance the grotesque idea that human society possessed an inherent rationality, coherence, systematicness? In the shock and shambles of the immediate postwar years it was incumbent on social thinkers to produce accounts of how social processes could lead, over the course of a few years, to cataclysm and genocide. Even if they lacked a sense of smell, Parsons et al. should have had the rudimentary vision to see that human society is inherently unstable, conflict ridden, forever teetering on the edge of chaos. Things just do not make sense; events cannot be slotted into neat analytical categories. Humanity is the very opposite of system theory’s favorite metaphor, the thermostat, which operates on the principle of change things a little bit this way, then change them back the other way, and keep going so that you maintain, yes, homeostasis. A far more accurate metaphor for the nature and fate of humanity, as I have suggested here, is a runaway train: it is barreling down the tracks, completely out of control, and God knows what it is going to hit. But instead of anything like this, the entire academic establishment labored mightily, and at the highest levels, and came up with the totemic AGIL four-square box to describe all social action. It bore a suspicious resemblance to a university campus quadrangle.

Heading for the Scene of the Crash

Anthropologists have known for years that the world Parsons and his coterie described does not exist, but they have had trouble coming up with a generally acceptable alternative. Here is not the place to launch into a “history of theory” discourse; I will simply note that the intractable problem of putting together a new theory of society-culture has led many, and probably the discipline of American cultural anthropology as a whole, to reject the very idea of “theory” itself as hopelessly burdened with elements of an imperialist past. The catchphrases are that any discourse masquerading as theory is far too “hegemonic” and “essentialist” to be objective. As some recent discussions have asked, how can you hope to “decolonize” anthropology when the field is a product and, historically, standard bearer of colonialism (McGranahan and Rizvi 2016)?

Parsons et al. blatantly ignored the social ruins of the war and instead crafted their theory to fit what they saw as an emergent world order. Nation-states—some pre-existing, such as the Western democracies, others newly minted, as in Africa and the Middle East—would provide the template for that order. As we know, and now more than ever, it has not worked out that way. The nation-state is everywhere under siege, on the verge of being torn apart (or already destroyed) by the internal tidal forces of xenophobia, religious fanaticism, ethnic and racial violence, and a burgeoning population. And from the outside immensely powerful multinational corporations, richer than many nations, subvert national boundaries through a project of economic and ideological globalism. Yeats’s profoundly disturbing poem “The Second Coming” (1919) may require a certain reinterpretation of its famous line, “Things fall apart; the centre cannot hold.” The truth is, there is no center. Never was. Hence the title of this book, Heading for the Scene of the Crash.

In one of his better-known aphorisms Nietzsche advised that every truth should be accompanied by at least one joke. This is generally taken to mean that the heavy sledding of intellectual discussion should be broken from time to time with a little light relief, a momentary distraction from the work at hand. I suggest a different interpretation. Following Freud ([1905] 1960), who greatly admired Nietzsche, and Mary Douglas (1999), I take it that the joke actually expresses and calls attention to a deeper, elemental meaning. It is a sudden outpouring of unconscious, repressed knowledge. In that spirit, the joke I reference here as the linchpin of my arguments in the four essays (perhaps reminiscent of Lévi-Strauss’s “key myth” in Mythologiques) is in fact a performance piece by the comic Ron White (2003), formerly of the Blue Collar Comedy Tour. White is overweight, boozy, crude, and, I think, funny as hell (but then my tastes are hopelessly lowbrow). The piece, which forms part of a longer show, is sometimes called “Plane Crash,” though it goes by several labels. I doubt many anthropologists are familiar with the routine, which is a great pity (I heartily recommend it after a couple of hours spent slicing and dicing “precarity” and “alterity” in the pages of the Ethnologist). You really should check it out on YouTube. But for the comedically challenged, here is the piece rendered as the narrative of a myth.

Ron finds himself on a small commuter plane on a short hop from Beaumont to Houston, Texas. Midway through the flight the pilot announces that one of the plane’s two engines has lost oil pressure and they are turning back to Beaumont. Ron observes that the other passengers are alarmed by the news. However, since he had been drinking all day, he did not much care what happened and called out to the pilot to go ahead and ditch, merely asking him to slam into something hard because he did not want to be stretchered off and live as a cripple. This talk terrifies the passenger sitting next to Ron, who, he notes, must have had a lot more to live for than he did. The passenger begins calling out to the flight attendant for reassurance about how far the plane could travel with only one engine. Offering his seatmate cynical reassurance, Ron turns to him and says, “All the way to the scene of the crash. Which is pretty handy, because that’s where we’re headed,” adding that they will probably beat the emergency rescue crews there by a half-hour.


It is boorish to explain a joke, but since my argument pivots on it I must delve into it. The important question here is: Why is this funny? Earlier in his routine, White makes a series of jokes about the inadequacy of the plane (omitted from the above narrative), but these are all a lead-in to the punch line: “All the way to the scene of the crash … because that’s where we’re headed.” I suggest that the effectiveness, not to say brilliance, of that punch line consists in its introducing the perspective of an outside, objective observer into an ongoing, deeply traumatic, and subjective experience (which it certainly was for White’s seatmate). If you are a passenger on that stricken plane, you are not thinking about just where it might crash—and you might die. On the contrary, your mind is in turmoil over what is happening at the moment. Are we losing altitude? Do the pilots still have control of the plane? Is the failed engine catching fire? Oh God, my family, my family. I don’t want to die! Assume the plane crashes, killing all on board. Then the media vultures would swarm the wreckage, almost beating those paramedics referred to in the joke. And the news anchor would announce, in suitably somber tones, “We have John Kelly reporting from (ah yes) the scene of the crash.” Then cut to John, who delivers the gruesome news with the smoking wreckage nicely framed in the background. Or consider an air traffic controller at the Houston airport, monitoring the radar blips of the plane on his instruments, until those blips disappear and he is prepared to estimate the scene of the crash.

See my discussion of perspective in the essay on 9/11. Some three thousand persons died in the 9/11 attacks; the Afghan factional conflicts in Mazar-i-Sharif claimed eight or ten thousand. But the dead in 9/11 were Americans, and they died in the metropolitan heart of the nation. The dead in Mazar-i-Sharif, on the other hand, were Afghan tribesmen who fought and died in a place virtually unknown to an American audience. The value of human life is weighed on different scales.

The perspectival shift in “Plane Crash” is more fundamental and goes to the heart of our ongoing discussion regarding the nature of an event. At that heart is a paradox: the individual wholly caught up in an unfolding catastrophe cannot frame her experience as an “event” because she experiences it as an unbounded phenomenon. She lives the experience; she does not draw a mental line around it, classifying it as this-happened or that-happened. The subjective experience is incompatible with what comes later: an objective account fixing the third-party experience within some interpretive scheme. White’s joke succeeds so well because it forces the terminology “scene of the crash” and its accompanying objective perspective into the subjective experience of those on board the stricken plane, who know that something awful is about to happen but cannot say just when or where.

Passengers aboard a doomed craft—it is a disturbing image that applies to much of human experience. Your life will end someday, but when, where, how? And when the end comes you cannot know it as an event; that will come later, will be supplied by others. Your society, in these essays the United States, will become something wholly other. But what, and when? As an American you can only speculate, can only grasp at phantoms masquerading as facts. But you know you are on that journey, know you are headed for … (But it was fun while it lasted!)

Note

1. Compare, for example, Anthropology News and Savage Minds in the United States with Anthropology Today and the Open Anthropology Cooperative in Great Britain. There are now some excellent transatlantic hybrids: the series Prickly Pear Pamphlets, begun by Keith Hart and Anna Grimshaw in Cambridge, England, which transmuted into Prickly Paradigm Press, edited by Marshall Sahlins and published through the University of Chicago. Extending this trend toward short, engaged work is the series Critical Interventions: A Forum for Social Analysis (Kapferer 2004–). Anthropology News is published by the American Anthropological Association and may be found at http://www.anthropology-news.org (retrieved 6 June 2017). Savage Minds may be found at https://savageminds.org (retrieved 6 June 2017). Anthropology Today is published by the Royal Anthropological Institute and may be found at https://www.therai.org.uk/publications/anthropology-today (retrieved 6 June 2017). The Open Anthropology Cooperative is perhaps unique in the anthropological community, for it is both open access and open comment. It may be found at http://openanthcoop.ning.com/ (retrieved 7 June 2017).

Chapter 1

Jonestown

An Ethnographic Essay

Several years before Jim Jones established his community, I was living in an Arawak village in northwestern Guyana, about seventy-five miles from the future site of Jonestown.1 It was an Amerindian village, but a very modern one—everyone spoke Guyanese English Creole, wore Western clothing, and embraced Western values generally. The family I knew best was especially progressive by local standards—of five mature children, three were training for professions: teacher, surveyor, engineer. The family was also musical, and two of the boys, responding to their changing environment, had acquired Japanese-made electric guitars and amplifiers. One brother taught at a nearby school and was soon to continue the training that would make him a highly qualified government surveyor. Music and technology, two domains of a modernity he was anxious to make his own, provided the basis of many conversations between us.

The conversation I remember best was about the film Woodstock (1970). My friend had returned to the village after a visit to the capital, Georgetown, where Woodstock was playing to large and enthusiastic audiences. At the beginning of the 1970s Guyanese youth were caught up in the complex forces of an imported counterculture and a local nationalism newly stimulated by the 1970 proclamation of Guyana as a “Cooperative Republic.” So, besides his reaction to the music in Woodstock, I was interested in my friend’s reaction to the festival as a cultural event—in how he interpreted the happening from a Guyanese perspective. We talked for some time about the various bands and singers, and in that discussion I, as a nonmusician, was clearly a listener and learner. But before I could steer the conversation toward the festival as cultural event, my friend asked a question that stopped me cold. Interrupting his commentary on singers and their songs, he paused and with a perplexed expression asked, “But tell me, where did they find all those actors?”

“Where did they find all those actors?” My thoughts raced; the world turned over. It was one of those moments described by Kafka, when the world at its most mundane—acquaintances having a casual conversation—becomes suddenly vertiginous and alien. Amid a tumult of thoughts I realized that my friend and I, although perfectly able to carry on an extended conversation, had not been talking about the same thing. For me, the film Woodstock documented a novel and exciting event in American life. The 200,000-odd people who attended were not paid actors but, on the contrary, people affirming a mode of life they wished to create and experience. My friend, being more pragmatic and—as it turned out—more accurate, regarded all the events in Woodstock as parts of a staged performance he thought of as an “American movie.” And the “American movie” was fiction; it paraded unreal lives lived out against backgrounds of unattainable luxury. When I tried to explain that Woodstock “really happened,” that it was not “just a movie,” he became as uncomfortable with that idea as I had been with his interpretation (after all, the ethnographer does not have a monopoly on Kafkaesque experiences). A crowd a third the size of the entire population of his country had gathered on that New York farm that weekend, just to listen to music and indulge in bizarre, outrageous behavior. That an event of such magnitude and theatricality should just happen spontaneously was an uncomfortable, irrational notion. He found it more consistent with all he knew and believed to regard Woodstock as one more American movie, but with the proverbial “cast of thousands.” Hence his question, “Where did they find all those actors?”

I have no evidence on how representative my friend’s interpretation was—I did not conduct a survey—but my knowledge of Guyana makes me feel that his was not an unusual view among Guyanese at the time. And whether or not that is true, the episode taught, or began to teach, me about the nature of ethnography and prepared me to understand a future American cult event, this one staged on Guyanese soil and mostly away from the recording photographic eye.

The Ethnographic Event

How does one begin (or rather, recommence after this faux départ) to make anthropological sense of Jonestown? I propose to do this (indeed, I hope I have already begun) by dwelling on ethnographic events I have experienced, using those to reflect on the nature of ethnography, and situating that reflexive discussion within a large and growing corpus of texts on Jonestown produced by various sources—Guyanese and American, anthropologist and nonanthropologist. But what—since we are consciously striving here to come to terms with ethnography—is that “ethnographic event” to which I just referred? How do we know when we are doing ethnography?


Surely this question is vital to the project at hand, for in seeking to establish the connection between semiotics and ethnography we are responding to an intellectual tradition that has already—even if only implicitly—identified that relationship as a disjunction. “Semiotics,” however vague and mysterious the word may sound, can be classified in that tradition as some form of analysis or, in a more literary mode, interpretation. And what does one analyze or interpret? Well, the data—the experiences, the events, the discourse—one has been exposed to in the course of fieldwork. Data and analysis, fact and theory, experience and interpretation, action and reaction, these disjunctions are the comfortable foundations of our thought. And when we consider the particular disjunction that makes anthropology a distinctive field—the native and the ethnographer—it seems quite natural to collapse it within those other, more encompassing disjunctions of modern thought. The native is the source and repository of factual experience; he is studied and made to yield up that body of fact, that authentic experience that, through a painstaking process spanning years, is massaged into analytical or interpretive statements. Within this general intellectual framework, then, the ethnographic event corresponds to the simplest model in information theory: the native is Sender, his action or statement is Message, and the ethnographer is Receiver. The reality, the factualness, of the native’s actions and statements are incontestable within such a model. The ethnographer observes and records or interprets, much as an experimenter charts the events in a physical laboratory or a chronicler the experiences of those around him. Within the tradition I am describing the ethnographic event is a positive, healthy-minded action; it is much like collecting specimens of rock (we have heard the geological metaphor before) to take back to the lab for—what else?—classification and analysis. Or, in a dramatically different vein but still within the same tradition, the ethnographic event is the suspension of that very subject/object distinction (an embarrassingly nineteenth-century credo, after all) in a psychic blending of diverse intelligences, an empathy, that opens the heart and mind of the ethnographer to the compelling integrity of the native’s experience. The crucial point here, often lost sight of, is that the empathic ethnographer, like the “data and analysis” ethnographer, believes that the native is in full possession of his world, is its architect and privileged resident. Despite massive evidence to the contrary, both approaches embrace the popular illusion of the native as a unique being poised in a social world of delicately balanced harmony. The empathic ethnographer in particular wants to acquire a lease on that world, wants to understand the native’s experience as an interconnected, intelligible system. He would proceed, like Pilgrim, from a state of ignorance and confusion to one of relative enlightenment, via any number of Sloughs of Despondency familiar to all of us.


The problem with these two conceptions of the ethnographic event, as with the tradition of disjunction from which they derive, is that they insist on a radical separation—native and ethnographer—that somehow, through the latter’s analytic or empathic efforts, gives way to a delicious sharing of secrets and worldview. The ethnographer is a vessel, waiting to be filled (there was another vessel, waiting to be drunk, brimming with a lavender horror of Kool-Aid and cyanide) from the fount of native wisdom.

“But where did they find all those actors?” The question recurs, for in thinking about my own experiences as ethnographer, I cannot dispel the memory of that acute disorientation. In attempting to frame a notion of the ethnographic event, I find the little anecdote I have related much more revealing than the glossy textbook accounts of ethnography as one-way empathy or objective analysis. How is it revealing? How does the incident show that ethnography is going on? There are two critical points here. First, the colossal discrepancy in our understanding of Woodstock became apparent only after a fairly lengthy, relaxed conversation in which both my acquaintance and I seemed perfectly in control of the subject matter. Rapport, intelligibility, the long, drawn-out process of empathic understanding were already established when that little bit of Kafka intruded. And the disorientation, the shock, was greater than if we had been consciously groping for a common linguistic or cultural ground. Second, the disorientation was mutual: the “native” was as fallible as I. “Drinking at the fount of native wisdom” is impossible when the native is as parched as oneself. The peculiar turn in our conversation was edifying precisely because it revealed our mutual misunderstanding; we were forced to recognize that our ability to make sense to one another rested on a profound difference in basic outlook. I have come to regard that underlying difference, that “edifying puzzlement,”2 as the critical feature of the ethnographic event. For it seems precisely then, when both parties are seized by a kind of cultural vertigo, that understanding is possible. It is then that the Other surreptitiously enters.

The view of ethnography that emerges from this discussion is not that of the “data and analysis” or even the empathic approach. The latter is unsatisfying because it retains a Benedictian faith in the internal consistency and boundedness—the wholeness—of cultures. And while insisting on their integrity, the empathic approach curiously strives to penetrate those whole cultures, to stand beside, or behind, the native and “read over his shoulder.”3 The ethnographer is a privileged reader, but the native retains full possession (and authorship; he holds the copyright) of the text. The “data and analysis” approach emphasizes answers (hypothesis testing) rather than questions and thereby discounts the interpretive process of creating a cultural text. There are variations on this approach, and I do not propose to examine them here; they range from the search for environmental determinants of behavior to ethnosemantic works that give context and interpretation an important place in models of conceptual relations. I find the approach generally mistaken, for I think that analytical entity we call “culture” is really a set of fundamental questions and their groping, desperate responses, and not a recipe-like set of answers—a view developed in my American Dreamtime (1996), “Culture, Mind, and Physical Reality: An Anthropological Essay” (2010), and “The Serpent’s Children: Semiotics of Cultural Genesis in Arawak and Trobriand Myth” (1981). The models we sometimes construct, even when they do not profess to identify the “causes” of cultural phenomena, constrain our thinking in such a way that the restlessness of thought in the rough gives way to the poise of an analytical construct. The degree of underlying organization or structure of cultural productions is a major issue in anthropology today; it is certainly the major issue in anthropological semiotics. The little anecdote about Woodstock serves to call attention to that issue and, I hope, to promote a skeptical attitude toward ethnographic accounts that render the described culture altogether explicable. Failures, sometimes colossal, to understand oneself, one’s group, and other groups are as much part of culture as are orderly semantic categories. In what follows, I pursue the idea that self-doubt and vertigo are fundamental aspects of cultural activity and, hence, of the ethnographic event.

Jonestown as Ethnographic Event In the sense I have just outlined, Jonestown is for me an ethnographic event, though I was certainly not present at Port Kaituma on that fateful day in November 1978, nor even in the country at the time (I arrived about two months later, on my third research visit to Guyana). Jonestown was a colossal failure, a hideous stain on the fabric of the human spirit. The suicides/ murders threatened to overwhelm the last refuge of a basic human decency and caring in a modern consciousness numbed by the imminence of nuclear holocaust and the commonness of individual violence. Somehow the horrors of nuclear war, assassination, and urban crime paled by comparison with a community of persons ready, at the command of a madman, to destroy not only themselves but their children. I was closer to it than most, but its appalling normlessness and vertigo afflicted all of us. And yet the event’s notoriety and ugliness are precisely what would lead most anthropologists to discount it as an ethnographic subject. The Jonestown horror is a specifically non-ethnographic event for them because it was so bizarre and so modern. Although we do not write a great deal about the subject in methodological texts, we do cling to a distinction—intuited more than taught—between journalistic and ethnographic events. And Jonestown was decidedly a journalistic event; like

20  Heading for the Scene of the Crash

a major earthquake or political assassination it seemed to demand the kind of attention only international correspondents could give. And give it they did, to the extent that it became arguably the truest media event in American life prior to 9/11. What other isolated jungle community has been so much discussed in newspapers, on television, and in popular literature? But it is not my intention to dwell on Jonestown as media event (although that would form part of a comprehensive study). I will begin instead by asking why the professional instinct of my anthropological colleagues is to ignore Jonestown. I believe two factors are at work here: the event’s sensationalism and its ugliness. While I find the two interconnected, I feel my colleagues would tend to emphasize the first, claiming that Jonestown was a bizarre and probably unique event precipitated by an American outcast in a foreign land. Hence, its analysis would yield nothing in terms of understanding the regularity, the everyday functioning, of any social or cultural system. This caveat, however, disguises a more fundamental reason for avoiding a professional examination of Jonestown, and that is the basic human reluctance to confront malignancy—a reluctance we actually enshrine in our theories of society by representing social processes and institutions as long-term, adaptive, integrative affairs. The feeling is that a social or cultural system must be systematic, must possess a high degree of organization and stability, if it is to persist over time. Although we are prepared to discount functionalist explanation in contemporary theory, we do not seem to have abandoned its penchant for assuming that enduring, shared sets of social relations and beliefs are the proper objects of study. Social change may be studied, to be sure, but only if it is orderly (and preferably part of a large-scale, seemingly irreversible process, such as “liminality,” “urbanization,” “class conflict,” or “revolution”). Disorderly change, the eruption of the bizarre and ugly into our placid lives, is rarely discussed in our monographs. The problem is essentially that we are ideologically committed to the idea of society or culture as an enduring, growing, constructive entity, and we regard the study of that entity as an enlightening, in some ways almost religious, pursuit. Jonestown thrusts its malignancy in our faces and, unless we isolate and dismiss it as aberrant, forces us to reconsider the nature of social inquiry. Ethnography as it has generally been practiced is the methodical study of Radcliffe-Brown’s “social morphology,” of that healthy, whole, functioning totality called “society.” Yet if we are to confront Jonestown as ethnographic event, it is necessary to reflect that ethnography—and anthropology itself— is a species of inquiry closely linked to pathology. Our discipline historically has concentrated on the systematic, firsthand inspection (dissection) of imperiled, sick, and dying societies left in the wake of Western colonialism. It is an intellectual curiosity (and scandal) worth some reflection that this project, while insisting on direct empirical investigation of societies on the threshing floor of history, should
have produced the theory of structural functionalism. And while we pride ourselves today on having transcended the empty formulae of functionalism, it is still true that we long for integrity—and even prettiness—in our ethnographic subjects. We begin and end our studies by asking the native to be beautiful and whole. But what if he is deformed and self-destructive? That is the question Jonestown poses.

I approach that question from two directions. I will first work from the inside out, focusing on my own reactions to Jonestown as an ethnographer of Guyanese society and one well acquainted with life in that part of the interior where the community was located. Then I reverse my analytical direction, working from the outside in to consider a few of the many texts others have produced on the subject. From the “data and analysis” perspective (and, strictly speaking, the empathic perspective as well) my first strategy is suspect, since the ethnographer’s impressions are not admissible data (and are, in fact, to be suppressed lest one’s readers suspect that the analysis is “subjective” and “impressionistic”). This objection cannot be sustained here, however, for—at the risk of seeming presumptuous—my own thought, my own personal reactions to Jonestown furnish one of the more valuable texts on the subject (Herzfeld’s [1983] notion of the ethnographer as text suggests that my claim is not so idiosyncratic). And am I modifying, producing, or reproducing those impressions as I write this, that is, as I “translate” those impressions into written text? Is the “text” in fact the joining/hinging (Derrida’s brisure) of those impressions I have built up for years and this actual exercise of writing? When did the writing process begin? For the present, I will resort to the device of pretending (but am I pretending?) to describe my immediate impressions as they came to me in the first hours and days following the news of Jonestown.

Besides myself, I know of only one North American anthropologist (Kathleen Adams) who did extensive fieldwork in the northwestern interior of Guyana during the 1970s. I do not know if she has chosen to write about Jonestown, but I do know from speaking with her that the event sent comparable shock waves through both our ethnographic consciousnesses. However, one anthropologist (Marvin Harris) did respond to the news immediately in writing, while the US Army burial squad was still completing its grisly assignment. But Harris’s article in the New York Times (1978), while full of impressions and a revealing ethnographic document of its own accord, is not ethnography. It actually serves to define what ethnography is not.4

Presumptuously or not, then, I regard the flood of impressions I experienced in late November 1978 as extremely valuable information. For me they have come to be the meaning(s) of Jonestown, and thinking about them has taught me something about how events mean. Because those meanings are so different from interpretations that have poured from television, newspapers,
magazines, books—and even a prominent anthropologist—I would like to record them here.

An American Movie

“But where did they find all those actors?” I have spent nearly three years in Guyana, long enough to experience as my own this (for me) characteristically Guyanese reaction upon learning the news of Jonestown. I have queried North American friends—anthropologists and nonanthropologists—about their immediate reactions to Jonestown and have come to the conclusion that the horror and disbelief, the sense of a nauseating disaster, that they reported have a strong North American cast to them. Americans believed the reports about Jonestown; as with Woodstock, they had a documentary sense of, even an appetite for, the event. Guyanese, however, waited for the credits and house lights. In their eyes, Jim Jones had been staging a performance since his appearance on the national scene in late 1974 and was continuing that performance with Jonestown.

Americans experienced Jonestown as a media event, beginning with Congressman Leo Ryan’s departure for Guyana several days before the fateful 18 November. But for Guyanese the experience was more like being in a movie than watching one. The very notion of “media event” is alien to Guyana. In 1978 there was no television (making this surely one of the last outposts), and the movies that arrived—even if occasionally, like All the President’s Men, about dramatic real-life events—were so outdated that they lacked any sense of immediacy. Lacking these amenities, Guyanese were spared the appalling news photographs of stacked and bloating corpses that Walter Cronkite served up to Americans with their evening meal. And the Guyanese press and radio—as state-controlled agencies of a state implicated in a very ugly mess—were virtually silent about an event that was receiving world attention.

If Guyanese were dazed by the enormity of the news, they were also thrown off balance in finding themselves suddenly discovered by the world. Overnight, the name of a country ordinary North Americans hazily associated with Africa (Ghana? Guinea?) or the South Seas (New Guinea?) was splashed across the pages of international newspapers and magazines (which did find their way into the country and were avidly collected). In a country where the “news” consisted of the Guyana Chronicle feverishly proclaiming that the government was about to do something, that something was finally about to happen, there materialized entire metropolitan news crews, Lear jets, helicopters, crates of video equipment, fleets of taxis.

The staggering reality of Jonestown as it appeared to the American receiver of all those video broadcasts and newspaper articles was for Guyanese
a distant puzzle, which had occurred in a part of the country very few had ever seen or cared much about. It appeared rather that the movie was happening in Georgetown. Guyanese began to perceive themselves, unbelievingly, as actors on an invisible stage where news teams—hungry for scraps of information and forbidden to travel into the interior—roamed the streets, interviewing and photographing.

When I arrived in late January it was still possible to feel the stage consciousness. Jonestown, the American movie set never directly experienced, had been dismantled, and most of the American journalists and cameramen who created it had returned to their homes in the north, homes that contained the television sets and newspapers for which the movie set had been constructed (although a few stayed on; Larry Layton was on trial). But Georgetown and the Guyanese remained: locally recruited extras whose airfares had not been part of the agreement.

It Has Happened Again

If one of my first impressions was of the unreality of Jonestown, my sense as an ersatz Guyanese that the nightmare was performance, another of my immediate impressions was of the chilling recurrence of a horror from the past: it had happened before. Jonestown burst on the American and Guyanese consciousnesses alike as an utterly novel event; commentators and scholars were hard pressed in the days that followed to come up with the analogies of Masada, Taiping, the Ghost Dance. Yet the closest analogy I know comes from Guyana itself, more than a century earlier and some four hundred miles in the deep interior south of Jonestown. There, in a millenarian community called Bekeranta, an Arekuna Indian named Awakaipu created a hell on earth that eerily parallels Jim Jones’s own.5

Awakaipu was a member of the Arekuna tribe of Guyana’s mountainous interior. By the young age of twenty-five, however, he had traveled widely, spent time in the capital, Georgetown, and learned some English. After working for a time for the explorer-botanist brothers Robert and Richard Schomburgk, and thus becoming somewhat familiar with the ways of the colonists, Awakaipu became possessed with the idea of acquiring the seemingly unlimited power of the whites. He sent word to Amerindian groups throughout British Guiana to converge at the base of Mount Roraima (inspiration for Arthur Conan Doyle’s The Lost World). His invitation promised that he would bestow on them not only the power but the physical personae of white people. Some thousand Amerindian men and women gathered there, forming a new village Awakaipu named “Bekeranta” (from the Dutch Creole beke, or white person). There Awakaipu encouraged daily bouts of manioc-beer drinking and dancing (in the mode of today’s Amerindian “spree”). As the days passed
he grew paranoid, convinced that some were plotting against him. In the midst of one drunken orgy he announced that the revelers should kill one another, whereupon the deceased would return in a few days reincarnated as white people. Awakaipu then took his war club, smashed the skull of a hapless bystander, and drank his victim’s blood from a calabash of manioc beer. A killing frenzy ensued, leaving only half the group as survivors. As the days passed and hunger gripped the camp, a party of men went to Awakaipu’s hut and murdered him. The survivors melted into the surrounding forest.

The structural points and counterpoints of the two events add up to a staggering irony. Blind devotion to a charismatic leader, a community hidden away in the tropical forest from the outside world, the leader’s paranoid concealment and sexual license, his fear of betrayal that unleashes a murderous orgy, murder and self-destruction as sacrifice: all these unite Bekeranta and Jonestown in a hellish alliance that spans the miles and years separating the two communities. And yet how differently—for what different ends—they were constituted. The Arekuna Indian Awakaipu established the Land of the Whites in his homeland and peopled it with local Indians. The white American Jim Jones created a community named after himself in a foreign land ruled by blacks and filled it mostly with black refugees from white American society. Awakaipu’s village was a land of plenty; there the Indian fantasy of subsisting on cassava beer was realized. Jones’s was from start to finish a place of mean privation: backbreaking work and a wretched diet were imposed on all but the chosen few. Bekeranta was meant to be a way station to the material luxury of the whites who had just made their appearance in the Guyanese interior. Jonestown, for all the poverty demanded of its residents, was in some respects the materialist dream for which the people of Bekeranta and contemporary Guyanese longed.

I have said that Guyana lacked television. Actually, until November 1978 one community did possess its own closed-circuit system of cameras and monitors—Jonestown. The community was remarkable to me, as an ersatz Guyanese, for its material wealth, acquired by the shipload in US cargoes routed around Guyanese customs and offloaded directly at Mabaruma, on the Venezuelan border. In Georgetown, the country was running out of money and the people were running out of food; crowds waited and pushed outside shops temporarily stocked with a few cases of rare commodities such as tinned milk, soap, coffee, and toilet paper. But in Jonestown, buried in the tropical forest, Jones and a few of his followers enjoyed tinned meats and other items not only unavailable throughout the country but the object of import bans, whose violation was punishable by fine and imprisonment. On the coast, East Indian peasants were being charged when army or police roadblocks caught them with a bag of potatoes or a few onions. In a grotesque irony, the Kool-Aid that served its fatal purpose—that combination of innocent childhood
beverage and lethal poison that so repulsed North Americans—was for Guyanese an avidly desired contraband item.

These incongruities of “luxury” foods and television pale, however, when one considers the guns. For over fifteen years the government of Forbes Burnham kept in force a Security Act, passed during a time of racial disturbances, that severely restricted private citizens’ possession of guns and ammunition. During the same period Burnham built up armed military and paramilitary forces at an alarming rate, so that in 1978 approximately one in thirteen adults was armed by the state. The paramilitary accounted for much of this figure, which is to say that Burnham put guns into the hands of the party faithful. The effect of the Security Act on Amerindians and interior settlers was that hunting and predator control were possible only through the most circuitous means. In the area where I lived (the upper Pomeroon River) three men owned antique, single-firing shotguns that they used and loaned out to dozens of other men anxious to bag the occasional bush hog, curassow, or labba. Ammunition for those shotguns was handled like the precious commodity it was; a hunter typically would not use a half-dozen shells on a hunting trip lasting several days and would complain bitterly if he missed and wasted a shot. Yet Jonestown possessed an arsenal of automatic weapons, wielded by the fanatics Jones had fashioned into his security force and, later, executioners.

And Jonestown’s guns have killed again. Impounded as evidence and kept in a police armory in Georgetown after the massacre, they were stolen in June 1979 by a daring band masquerading as police officers. The impostors simply checked them out of the armory “for investigative purposes” and drove away. While the state press hinted darkly that the thieves were tied to the now martyred Walter Rodney’s Working People’s Alliance, the consensus “on the street” was that a rival strong-arm faction within Burnham’s own People’s National Congress party was responsible.

Bekeranta and Jonestown, Awakaipu and Jim Jones, the squalor of Georgetown and the smuggled American tinsel of the Peoples Temple, an Amerindian hunter conserving his ammunition and the killing bursts of automatic weapons fire on the landing field at Port Kaituma—the points and counterpoints swirled in my mind as the news of the jungle massacre unfolded. Jonestown seemed at once the antithesis of everything I knew about the Guyanese interior and Guyanese, and the embodiment—a macabre resurrection—of Awakaipu and his victims. It had happened again.

This haunting juxtaposition of American and Amerindian cultists remains my strongest impression of Jonestown. It is also a highly idiosyncratic reaction, for the mass of publications and news broadcasts contained no reference to Awakaipu, although many were filled with accounts of cults and cultism—one of the conventional interpretations of Jonestown. (While the reaction was idiosyncratic, it was not unique to me, for in the first days of the event, talking with
Kathleen Adams on the phone, we discovered that each of us had immediately thought of the Roraima tragedy.) Yet it is precisely in this idiosyncratic reaction to Jonestown that I find the most valuable clue to understanding it as an ethnographic event. For the juxtaposition of American and Amerindian cultists, with all its interwoven points and counterpoints, poses a fundamental question: Where did Jonestown happen? We know the physical location of the Peoples Temple community, but what are its cultural coordinates? That is, in which culture did Jonestown occur?

When we consider Jonestown as ethnographic event it is easy to assume the conditions of an event in general. We believe that an “event” has a time, a place, a perpetrator. Something happens somewhere; someone does something someplace, sometime. I suggest that these assumptions do not fit the ethnographic event and, in the specific case of Jonestown, are at the root of a great deal of journalistic and literary misinterpretation. It is probably true that the assumptions do not fit events at all, but that is a philosophical question I address here only from the particular standpoint of ethnography.

As I reflect on and attempt to organize my impressions, the critical idea that emerges is an enormous paradox: the meaning of Jonestown is at once utterly transparent (for who is to doubt its ugliness?) and inscrutable (for its madness precipitates a flood of conflicting interpretations). Its meaning is univocal in that all our natural impulses recoil from the atrocity; we label it unambiguously insane. Yet its meaning is also multivocal, since it is artifice and death, a movie set and mass murder in one. The meanings we assign the nightmare continuously shift from one perspective to another, so that the discreteness or boundedness of Jonestown as event breaks down (we will see that the corpus of texts dealing with Jonestown also possesses the mercurial quality of my impressions). What remains in these ruins of the event is the imprint of the several perspectives—the several trains of thought, past experiences, etc.—one develops and wrestles with in trying to come to terms with the enormity of what took place. And if Jonestown is found to consist of assorted imprints of previous events, can one commit the pedanticism—itself rather monstrous, considering what we are discussing—and call Jonestown a text? I can see no alternative.

Certainly the event that is Jonestown has been textualized to a greater extent than the life and death of any other small community. Our first impulse on hearing the news of the jungle deaths is to recoil in horror—and then plunge into this new abyss in our sensibility and attempt to fill it with sense. What, after all, is the event as physical fact? A large group of people—Americans—found life unbearable and arranged or submitted to their own destruction in a foreign land. Nothing remained, nothing but stacks of corpses rotting in the jungle. Through an extraordinary military and diplomatic mission, these were flown out of the jungle, back to the United States, subjected to
pathological examination, and buried in a mass grave. It was a brief trauma in our daily lives that left few tangible reminders: the community in life was inaccessible; even the location of Guyana was a mystery; survivors were few and did not become media personalities (contrast the public reaction to Jonestown with the Iranian hostage crisis). For all the platitudes about American racial oppression underlying Jonestown, none of the hundreds of poor, black victims have been elevated to the status of even a minor martyr. It is as if commentators on Jonestown seek out collective, sociological reasons for the event in order to avoid having to deal with the more than nine hundred individual stories of lives that took various paths to the jungle slaughterhouse.

The event itself is always out of reach, a transient, elusive disturbance in the fabric of time and place. It was over and done with before we caught our breath, leaving us to produce and consume texts about the event and, individually, to mull over impressions. Public texts and private impressions both confront the paradox of transparent opacity; an event that seems to demand no classification defies classification.

Pioneers

In searching for the where and what of Jonestown I have so far discussed those of my impressions that involved the unusual and puzzling. Adopting a Guyanese perspective, I reacted to Jonestown as though it were an American movie. Simultaneously drawing on a specialized knowledge of Guyana that even few Guyanese possess, I was struck by the parallels between Awakaipu and Jim Jones, Bekeranta and Jonestown. The incongruity of these impressions alone is enough to confuse, enough to undermine the comfortable notion that particular events have particular meanings. And welling up in my mind during the first days of the Jonestown story was a third interpretation, in its way more bizarre than the remembered episode of Woodstock or my knowledge of Bekeranta: Jonestown was an extreme consequence of Guyanese economic development programs.

As with the story of Awakaipu, very few North American commentators on Jonestown possessed the background knowledge that would have led them to compare Jones’s experiment with rural socialism to similar experiments undertaken by the Guyanese government. One of the first questions asked about Jonestown was how conditions in the camp could have reached such extremes without Guyanese authorities intervening. Jones and his followers were, after all, citizens of a foreign nation not well regarded by Guyana’s ostensibly socialist government, and so their activities should have fallen under closer scrutiny than they had received back in the United States. One familiar answer to this question is that Jones bought his way into Guyana and managed to stay there through a
well-orchestrated series of bribes and sexual favors dispensed by female cultists to government officials. While it is quite likely that these charges are true, it is mistaken to view Jones’s commune and Guyanese government policy as related only through graft and corruption. For Jones’s success with Guyanese officials stemmed as much from his political ability as from his power to corrupt with money and sex. Remember that this was a man with letters of reference from the White House and Congress and with a background of municipal government service in California. Just as he “read” the civil rights movement in Indiana and urban renewal programs in San Francisco and used them to his own advantage, so he took in the Guyanese political scene and moved with it until he became a leader and exemplar of what the Guyanese were trying to accomplish for themselves.

What the Guyanese were trying to accomplish in 1974 and 1975 was interior development, and interior development of a kind that would help to foster a sense of cultural identity among a decolonized and factionalized populace. Following independence in 1966 the new Guyanese government under Prime Minister Forbes Burnham was faced with two major problems: national integration and economic development. Guyana is a multiethnic country, populated by the descendants of African slaves, East Indian, Portuguese, and Chinese indentured laborers, European settlers, and indigenous Amerindians. Political struggles in the early 1960s had resulted in ethnic factionalism, climaxed by murderous racial fighting between blacks and East Indians that spread throughout the coastal areas and was contained only after British paratroops arrived. The struggle for independence took the form of parliamentary debate, rather than united armed resistance, and the new government was chosen in an indecisive election that left many Guyanese frustrated. This ambiguous beginning has made the task of fashioning a viable cultural identity a difficult one. In order to mute internal discord, Burnham cultivated an ethic of national unity by drawing on the image of being Guyanese first and foremost rather than African, East Indian, and so on. The presence of ethnic enclaves on the coast (where East Indians live mainly in country villages and Africans are concentrated in the two largest cities) made the interior an attractive place from the standpoint of the drive to achieve national integration: while East Indians grew rice and Africans worked in the bauxite mills, it would be Guyanese who developed the interior.

Further political implications of interior development had to do with the perceived role of the former British colonial administration vis-à-vis the interior. After assuming power, Burnham noted that for a century and a half the people of his country had been effectively excluded from the greater portion of their land by British indifference to exploiting the resources of the hinterland. He and other Guyanese read rather sinister political intentions
into this neglect, arguing pointedly that a plantation labor force would have been impossible to maintain in the presence of a beckoning frontier. Free or cheap land and the promise of an independent lifestyle would have enticed the best workers away from the sugar plantations, thus crippling the colonial economy. It was thought that the British maintained the interior as a preserve for themselves to adventure in, where they could explore the exotic and meet those “children of the forest,” the Amerindians.

Economically, the new nation was afflicted with many of the standard ills of underdeveloped countries. Postwar improvements in hygiene and standard of living (particularly the control of malaria) set off a prodigious increase in the birth rate. By 1967 more than half the population was below the age of fifteen, with little evidence in sight of a trend toward smaller families. Political independence did not mean economic self-determination, and the new leaders of Guyana found themselves confronting international cartels and multinational corporations that were mainly interested in extractive industries. The sugar plantations operated on a pre-independence model, employing large numbers of unskilled local workers on a seasonal basis and hiring managerial and technical staff abroad. The bauxite mills, owned by Canadian and US corporations, paid the highest wages in the country, but comparatively few Guyanese were employed and these were faced daily with the disparity between their living and working conditions and those of the group of North American engineers who ran things. Not only was production foreign dominated, but consumption habits too were almost compulsively oriented toward foreign goods—the kind of colonial mentality of the marketplace lamented throughout the Third World. The few local products that appeared in stores could not compete with similar but shoddy ones of metal and plastic from abroad. Graceless living room and dinette sets of veneer, plastic, and aluminum tubing were preferred over the same articles in local hardwoods and wicker by those who had money to buy.

The solution Burnham proposed to these dilemmas was a seemingly novel political philosophy of “cooperative socialism.” The initial idea behind this philosophy was to encourage the growth of a third sector of the economy—cooperatives—that would combine the best features of the private and public sectors. Government sponsorship of cooperatives was initially modest, but it gradually increased until the cooperative emerged as the primary instrument of economic development. In February 1970, Guyana dissolved its formal ties to the British Crown and became the world’s first “Cooperative Republic.” The changeover was accompanied by considerable ideological fanfare. As a foreign observer present during the event, I was struck by the symbolic elaboration of what at first appeared to be mainly part of an economic program. It became clear to me that the two issues of economic development and national integration were closely related, so that understanding
what was involved in a Guyanese becoming a member of a farming or timber cooperative meant understanding what it was to become Guyanese in the first place.

A popular government slogan in the early days of the Republic was “The small man will be the real man” (usually rendered in Guyanese English Creole as “de small mon gon be de real mon”). As well as serving as catchy propaganda, I believe the phrase was taken up in the popular media because it evoked the ethic of the postcolonial: political powerlessness and economic dependency were about to give way to an era in which the individual would be more the author of his destiny.

In the period from 1970 to 1980 the architects of the Cooperative Republic followed two general plans to achieve economic development: nationalize existing (coastal) industries; and exploit interior resources with groups that represented the new socialist way. The first of these is the more significant economically, for the bauxite mines, sugar plantations, and rice farms of the coastal area account for nearly all the country’s exports. The cooperative socialist program of nationalization proceeded quite rapidly, with the bauxite industry becoming government controlled in 1971 and the first plantations in 1974.

It is the second aspect of cooperative socialism, however, that cleared the way for Jonestown. Although the Guyanese interior was still so underpopulated and undeveloped that its economic importance was marginal, it had tremendous significance for this very reason as a tabula rasa on which the socialist future would unfold. The potential for making the frontier a laboratory for the new society seemed great, in part owing to a happy result of the previous colonial system. The British policy of limiting access to the interior was effected by declaring virtually all of it Crown lands, with special provision for those areas populated by Amerindian groups. With independence in 1966, administration of those Crown lands was transferred to the new national government. Hence, most of Guyana is state-owned property. The government was consequently in an ideal position to design and carry out programs of social change—what, in fact, the nationalized press of the 1970s called “making the Revolution”—without having to confront a welter of vested interests, property claims, and so on.

The major emphasis behind plans to develop the interior was the attainment of economic self-sufficiency. A colonial economy is by definition lopsided, since it functions to fulfill the economic requirements of a foreign people. Nationalistic political aspirations in postcolonial Guyana thus conflicted with established remnants of a dependent economy. The most crucial aspect of this dependency was in agriculture, where the plantation economy had functioned for generations to produce a surplus of a single crop—sugar—while subsistence demands were met only by relying on food imports from metropolitan countries.
A serious problem in the inaugural year of the Republic was the widespread preference Guyanese showed for imported foods. Georgetown residents disdained local subsistence foods that were available for consumption in the city, such as cassava, yams, eddoes, tanniers, and breadfruit, looking on these as being of low status and typical of “country” tastes. They preferred to purchase foods that were not easily grown in Guyana, mainly potatoes, onions, cabbage, and apples. The colonial system of production therefore greatly influenced habits of consumption, creating in this instance a colonialism of the palate and posing difficult problems for the government drive toward self-sufficiency.

The difficulties were of two kinds. First, farmers’ incentives to grow surplus provision crops were few so long as they could anticipate marketing problems. Second, the tons of agricultural imports arriving in Georgetown harbor each month represented a serious drain on the country’s precious foreign reserves: the money going abroad could, the government felt, be better spent on industrial equipment to spur the development program. Moreover, the international oil and food crisis that began in 1973 intensified the plight of Guyana’s agricultural dependency. Not only were transportation costs doubled and tripled, but the price of the agricultural products themselves increased alarmingly owing to the rising price of petrochemical fertilizers. To add to these grave problems, the exchange value of Guyanese currency, then tied to the troubled British pound, declined by nearly 30 percent between 1970 and 1975.

Translated into human terms, the economic reversals were even more alarming to average Guyanese. At the end of 1975, an unskilled laborer could expect to receive a daily wage of under three US dollars, while office workers and teachers received one hundred to two hundred dollars a month. These incomes could support people living in the countryside and growing much of their own food, but for the tens of thousands who had moved into Georgetown it became terribly difficult simply to exist. Unemployment, already high, continued to rise as urban migrants could not find work. The “choke and rob” criminal became a fixture of city life, and his activities increased in violence, from simple muggings to assaults with machetes (the Creole term, ever graphic and picturesque, is “choppings”) and, increasingly, guns. While the government no longer releases statistics on crime, the terrorized mood of the city readily communicates itself to the foreign visitor, not just through the cautions routinely given the tourist, but through the precautions taken by Guyanese residents of the city. Wire mesh and iron grillwork over jalousie windows give the once graceful tropical city the (accurate) appearance of being in a state of siege, and as evening approaches, private guards (“de gate mon”) appear before the entrances of homes of even the moderately well-to-do.

It is in this context of a deteriorating society that programs for interior development were conceived and implemented, and so it is not surprising
that they often appeared improvised and harsh. The interior beckoned as the solution to the ills of overcrowding and unemployment on the coast, but beyond a general commitment to development loomed two hard tactical questions: Who was to do the developing? What exactly were they to do first?

The peculiar demography of the country, together with its recent political history, made the first question tougher than it might at first seem. Except for Amerindians, prospectors, missionaries, and a few ranchers, nobody lived in the interior. There was no peasant population to mobilize and organize into cooperative schemes; there were just forest, rivers, and savannah. Moreover, the thirty-five to forty thousand Amerindians and mixed-Amerindians of the interior were unsympathetic to the predominantly black government, for a variety of reasons. In the first years of nationhood they were particularly apprehensive about the disposition of lands they had traditionally occupied, and consequently official talk about interior development left them with some uneasy feelings. And yet the Amerindians, because of their familiarity with the techniques of interior agriculture, were in the best position to implement development plans. A similar problem in interethnic relations existed on the coast, where the farming population was predominantly East Indian and politically alienated from the Burnham government following the widespread racial rioting in 1963 and 1964. The answer to the first of the two tactical questions was thus arrived at by a process of elimination: the remaining possibility, and one that promised to solve two problems at once, was to recruit and train unemployed, predominantly black, Georgetown youth to become the “Pioneer Cooperators” of the Cooperative Republic.

The second question—What were the Pioneer Cooperators to do?—was readily answered in the context of the growing food crisis and the lack of a technological infrastructure to support large-scale operations of any kind (especially timber and mineral extraction): Pioneer Cooperators were to establish small agricultural cooperatives in the interior. When the first Pioneer Cooperatives were begun in 1970, plans called for them to be staffed by twenty to thirty individuals and to be subsidized by the government for their first year or so of operation. By that time it was felt the Pioneers would have cleared land and harvested their first crops, keeping some for subsistence and marketing the surplus through the national cooperative outlet in Georgetown. These groups were highly publicized, and considerable funds were spent to airfreight supplies and heavy equipment into remote areas.

Despite these ambitious beginnings, the Pioneer Cooperative program was plagued with difficulties. The first groups of Cooperators were recruited from that section of the population with the least agricultural experience and, because of the sense of urgency behind the program’s formation, they received inadequate training. In addition to lacking basic agricultural skills, many of the volunteers labored under an even greater hardship: a pervasive suspicion and
fear of the “bush.” Although Georgetown residents live within a hundred miles of raw tropical forest, few of them have occasion to experience it firsthand. Consequently, a prolific folklore about “Buck people” (Amerindians), jumbies (evil spirits), and kenaima (murderers with supernatural powers) has grown up, with the result that the average Guyanese is not enthusiastic about spending time in the interior. The sites selected for some of the first Pioneer Cooperator camps seemed intended to maximize that anxiety, for they were located in far-flung corners of the country, in virtual isolation from the coast, and often miles from even an Amerindian village. Plucked from the streets of Georgetown and thrust into the deep interior, some Cooperators reacted desperately. In one case, an entire group of young, fearful volunteers swarmed aboard a supply plane and demanded to be flown back to the city. They left behind them only one old man, who for some months was the sole occupant of the Pioneer encampment. In another case, the alien forest apparently unhinged one young volunteer, who began to see and call out to bush spirits. His companions had to prevent him from running into the forest and becoming hopelessly lost.

Added to the psychological strain were organizational problems that brought the Cooperator program to an untimely end. Mechanized agriculture in the interior had never been seriously tried, and the experiment led to a host of problems that were beyond the competence of the volunteers. When some crops were produced, there was the serious problem of transportation infrastructure: except for light aircraft, there was none. Even with the Cooperators receiving little more than subsistence provisions for their strenuous efforts, it was simply not a viable proposition to fly in heavy equipment and gasoline across hundreds of miles of forest and fly out meager quantities of produce such as black-eyed peas and peanuts. After 1973, and about the time Jones was planning his own agricultural community, the Pioneer Cooperator, object of so much lavish praise in the national press, was quietly forgotten. His place was taken in the public eye by another socialist frontiersman—the National Service Pioneer.

The National Service was organized along paramilitary lines and represented Guyana’s commitment to building a socialist state on the model of Cuba and China. Young volunteers for the National Service were trained for varying lengths of time, but in the first years of the program most attention was given to a small group, just a few hundred, who had trained for a year as Pioneers. Government planners sought to profit from mistakes made with the Cooperators by recruiting volunteers from a broader ethnic and geographical base and giving them extensive practical experience under close supervision. Beginning in 1973, National Service camps were created in three parts of the country: Kimbia, the first and nearest to the coast; Tumatumari, on an interior river; and Papaya, in the Northwest District and not far from the site of Jonestown, which was begun a year later. Volunteers in the Pioneer Corps, young
people of both sexes, usually between the ages of fifteen and twenty-five, were expected to spend an uninterrupted year at one of these camps. Rather than serving solely as a training facility for persons who would later be involved in small cooperatives, the National Service camp was itself intended to be a primary instrument of development. Crops grown at Kimbia and the other camps were to be marketed through the government’s outlets and some, like cotton, were to supply the raw material for developing food- and textile-processing industries.

The National Service Pioneers’ life was arduous, since their agricultural tasks were only part of a political education designed to transform them into model citizens of the new Guyana. The National Service camp was organized as a military base, with uniforms, military etiquette, barracks living, and a rigid schedule. The day began at four in the morning, when the Pioneers were mustered out for two hours of calisthenics and drill practice followed by a run. They showered upon returning, had breakfast at seven, and assembled for duty roster. At Kimbia in the fall of 1975 the Pioneers’ energies were focused on the cotton crop, an agricultural experiment for the country and heavily publicized as yet another step toward full economic self-sufficiency. Much of the first cotton crop was grown and harvested by hand, so the Pioneers found their working days filled with hard manual labor. Although picking machines were introduced, they left so much cotton on the plants that Pioneers were still assigned to glean the fields after the machines had finished. Others were responsible for large vegetable gardens and for livestock and fowl. As well as feeding the camp, these operations were expected to produce significant surpluses that could be marketed in town.

After a full working day in the fields, the Pioneers had an hour or so to relax and refresh themselves before the evening meal. Following that, at about eight o’clock, they assembled for a series of “national policy” lectures. Because the National Service was the government’s most important ideological experiment throughout the mid- to late 1970s, the camps received a steady flow of highly official visitors who addressed the Pioneers on the values and goals of the Socialist revolution. In addition to ideological talks and discussions, some evenings were given over to technical lectures by agricultural experts. The national policy lectures generally lasted until around ten o’clock, by which time the young Pioneers were quite ready for a few hours’ sleep before the next reveille summoned them to the parade ground. The exhausting schedule, the emphasis on economic self-sufficiency, and the intensive political indoctrination were fixtures of Pioneer life that Jim Jones, who knew little about Third World development programs before coming to Guyana, was to incorporate in Jonestown.

The government expected to enlarge the three camps and to establish new ones in other parts of the interior, so that an increasing number of
Guyanese youth would receive a solid grounding in the lifestyle of the future. But events overtook the Pioneers, as had been the case with the Pioneer Cooperators before them. On the coast the real economy, the one implacably mired in racial conflict and the rest of the colonial legacy, was breaking down. In 1975 the sugar market faltered, then plunged, just when the government had negotiated a generous nationalization scheme with the planters. The bauxite industry, recently nationalized, was whipsawed by the international cartel and local labor disputes. Escalating oil prices made the cost of electricity and transportation on the coast prohibitively expensive and cut interior development off at the knees. Guyanese were emigrating in numbers, heedless of currency restrictions that made emigration economic suicide.

Meanwhile, at Kimbia, less was said of the cotton crop. The national press, which always pushed plausibility to the limit, stopped writing about the new industry in textile exports that was to follow naturally from the first crop at Kimbia. It turned out that the soil at Kimbia was not rich; fertilizers were introduced but were obviously too expensive for regular use. When the Pioneers harvested their crop and sent the bales to the new spinning mill outside Georgetown provided by the Chinese, it was discovered that the machinery was designed for cotton with a different fiber length. Other agricultural production—vegetables, chickens, eggs—ran into marketing problems: spoilage, waste, and poor distribution kept Kimbia from becoming the vanguard of a powerful third sector of the economy. As the economic value of Kimbia and the other camps dissipated, their national policy aspect acquired new importance. Young civil servants, including the many young teachers responsible for educating the enormous school-age population, began to be cycled through the camps over their two-month summer break. The Cooperative Republic would be built over the holidays.

As far as several figures in the Guyanese government were concerned, Jim Jones was the right man in the right place at the right time. While Guyanese were giving up on interior development (“They’re packing it up in the interior,” as one development specialist put it to me), Jones offered to fund an enormous agricultural project in the remote Northwest District. And while the ideological posturing and pocket-lining of government ministers and secretaries were rapidly alienating the Guyanese people, Jones was eager to parrot the official line. And he offered more, for his scheme promised to salvage the best aspects of earlier development programs. He would settle in the interior, like the bands of hearty Pioneer Cooperators in the early 1970s, but he would do so en masse, like the Kimbia Pioneers, and with the aim of becoming a major food supplier. After a decade of false starts, Jones and his community would prove Burnham and his ministers correct in claiming that the interior was the key to prosperity and national integrity. Nor did it matter that this economic miracle was to be wrought by Americans; the invidious
suggestion that they would succeed where Guyanese failed was parried by noting the kind of Americans involved. They were poor and black, refugees and outcasts from the capitalist oppressor to the north, who looked to Guyana as an earlier generation of immigrants had looked to a young and vibrant United States. The United States had grown old and rich and evil; the future was waiting on the Guyanese frontier.

In Our Father Who Art in Hell, James Reston Jr. describes Jonestown as “uniquely an American story” (1981: ix) and notes the reaction of students on his own university campus:

    Yet if Jonestown was genuinely felt as a tragedy by anyone in America, it was the college generation. Perhaps they did not verbalize or seek to define the nature of their sorrow in the wake of the Event, but their mourning was evident as they moped about the campus, newspapers tucked under their arms. Their identification was clear. As the generation that felt rather than intellectualized, they felt their own susceptibility to the Joneses of their decade. Jonestown was their Kent State. (1981: 74)

For all the merits of his book, Reston’s is a uniquely American reaction to “the Event.” For Guyanese and those who had followed Guyanese affairs closely, Jonestown was a grotesque end to uniquely Guyanese efforts at economic self-sufficiency and national integration. Too much ideological capital had been invested in just the kind of frontier development scheme Jonestown represented to allow the implication to pass unnoticed: the Pioneer Cooperators, the Pioneers, and the victims of Jonestown would share a common grave. While it is difficult to identify a movement (Reston’s “feeling generation” perhaps?) among American youth whose demise Jonestown represented, it is clear that the murders/suicides coincided with a stage in Guyanese history and consciousness that gave them a dreadful ring of finality. Try as Burnham and his government might to dismiss Jonestown as the doings of American crazies, they could not escape the verdict that they not only profited in sordid ways from their alliance with Jones but, by linking his community with Guyana’s development strategy, extinguished the spirit of nationalist commitment among their countrymen.

Brother Jonesie

Where there is political repression, poverty, and despair in the Caribbean, there is also reggae music. If America’s media response to Jonestown was massive television and journalistic coverage, the Guyanese or Caribbean response took the form, in part, of instantly released reggae songs. These were promptly banned on the government radio, but could be heard blaring from the rum shops and record bars that dominate street corners in Georgetown.
Of the three songs I know, “Brother Jonesie” (by the Tradewinds’ Dave Martins [1979]) is the most poignant.6 Martins’s lyrics emphasize the absolute control Jim Jones had over his followers, the psychological bond reinforced by the physical impossibility of escaping from the remote jungle camp. The song’s refrain is that Jones tells them to do something, and they do it: jump, dance, scratch, sleep, move, blink. Then the final, terrible line, rhyming with “blink”: when he tells them to drink, they drink (the cyanide-laced Kool-Aid).

“Brother Jonesie” is merely described in text here, but I discovered it as performance. It was late January 1979, and I had just arrived in Georgetown on my third research visit to the country, which was to last until the following fall. I was staying at a friend’s house in the city, and when the first tropical weekend I had seen in a long time rolled around, I decided to sample a widely advertised musical event at the local Pegasus Hotel. Dave Martins and the Tradewinds, a group of Caribbean musicians based in Toronto, were playing under the stars.

The Pegasus is Guyana’s one international hotel, one of a chain but, like virtually everything in Guyana, with significant government ownership. British Airways also has some affiliation with it, but the relationship has been strained since flight crews arriving in the late evening stopped staying at the Pegasus. Electricity failures and food shortages, endemic throughout the country, made life at the Pegasus less than luxurious, and so the crews preferred to turn their plane around and go back to Trinidad. And since the planes bring few tourists, the clientele of the Pegasus is a curious mixture of visiting Third World delegations, US and Canadian technicians with their families, high-rolling Guyanese entrepreneurs and government officials, prostitutes, assorted diplomats, and some university staff. The Pegasus keeps going by throwing large parties and buffets with a live steel band or, occasionally, reggae music that attract middle-class Guyanese as well as the seedier regulars. And the Tradewinds party was a real bash, for the group had an international reputation with several records to its credit.

The setting was outside, on the landscaped grounds (with plenty of bougainvillea, hibiscus, and almond trees) around the pool. The Tradewinds were set up on a thatched platform with a dancing area in front and with party tables, chairs, and buffet tables scattered over the grounds. Light instrumental music accompanied the buffet, which was mostly chicken and rice and, like other Pegasus buffets, consumed in quantity. As the action around the buffet tables eased and bar orders for rum-and-anything increased, the music became livelier and included some vocal numbers. Couples began to crowd the dancing area. Then Dave Martins announced “Brother Jonesie,” saying something about it being his reaction to Jonestown and welcoming a new arrival: it was a US television news team.
Martins said they had come to see how the Guyanese people were reacting to the news. And the “Guyanese people” reacted as they had been reacting for the past half-hour—by dancing. “Brother Jonesie,” like every good reggae melody, is a marvelously rhythmic invitation to dance. There in the tropical night, about 150 miles from Jonestown, Martins sang to the crowded dancers and the laughing partygoers at the tables, the television news crew panned the scene, and I watched, sipping a rum-and-ginger and thinking, among other things, of an ancient conversation about Woodstock.

Jonestown as a dance tune is something Tom Lehrer might have come up with, but no one danced to his songs. But was it offensive, or—the only way I can ask the question—was I offended? Not really. Startled, certainly, and with a certain well-conditioned, conventional revulsion at dancing on this mass grave. But so much was happening in the scene, I found it impossible to hold on to a clear, strong impression. There was artistic composition and performance, the effervescence of ritual, the surprising presence of a US news team in this backwater of the world, and my own presence as anthropologist, as a conscious, compulsive entity striving to take it all in, think about it, write about it, and no doubt in the process make it something it never was or, certainly, no longer is.

The Tradewinds probably barely remember the engagement, the dancers and drinkers have danced and drunk over the intervening years until that night has slipped into anonymity, and the news crew has had other assignments. Was their footage actually used? I have never learned. And if it were used, if the party scene were beamed into several million American homes under the aegis of Walter Cronkite or Dan Rather, what would the inhabitants of those untropical, tidy dwellings think of the song, the singer, and the dancers? Would I need to learn their reactions before comprehending the event? And would their reactions have been clear and distinct, even at the time of the broadcast—even if I could have hastened back to the United States that evening and hastily done some opinion polling? Or would I have found reflected in their responses to the program—if it were ever aired—something of my own bafflement and ambivalence? Would they, too, not have known what to make of the conjunction of the lively dance tune and the Jonestown horror?

Once again a trivial incident—talking to a friend about a movie, attending a dance—has led to fundamental questions about the nature of an ethnographic event. What are the boundaries of the event? Are there in fact boundaries—does it leave off anywhere at all? Does the event belong to a particular group of people? Was Woodstock a definitively American movie, with a distinct meaning only Americans could grasp while others, like my Arawak friend, would misinterpret it? And was even Jonestown, although it occurred thousands of miles away in the jungles of South America, a “uniquely American” phenomenon, as Reston claimed? And if it is ever possible to say that an event belongs to a certain people, a certain place, does it follow that there
exists an authoritative interpretation of the event? Do certain people, because of privileged access to information, superior intelligence, perseverance, or whatever, get it right, come to know the true meaning of the event, while others, less gifted or less industrious, get only part of the story and produce mistaken interpretations?

The incidents I have related surrounding Woodstock and “Brother Jonesie” are, I feel, impossible to “get right” in the sense of producing authoritative interpretations. Like Jonestown itself, those little episodes belong to no identifiable, bounded group or society; their only reality is a shifting, paradoxical assortment of perspectives that, when described together and in some detail, produce, not an authoritative version of the singular event, but a dissolution of the very idea that an authoritative version can exist. Ethnographic discourse does not yield such a version, whether authoritativeness be construed as bound up in office—anthropologist, government spokesperson—or as rooted in a popular wisdom about cults, the Third World, America. Instead, ethnographic discourse proceeds with an account of events to the point where several overlapping and mutually inconsistent versions are identified and their interrelationship made clear. Ethnographic discourse thus provides a meticulous account of how contradictory positions enter into a single event and are often held by a single participant. Ethnography rephrases and illuminates what is puzzling and ambivalent about basic human situations.

Georgetown Gossip

The preceding accounts of Awakaipu, the Pioneers, and “Brother Jonesie” represent approaches to understanding Jonestown as an ethnographic event from, respectively, historical, national policy, and popular culture perspectives. As such the accounts are, in a sense, predigested; those events and their discourses have already been subsumed by nomothetic systems. The story of Awakaipu is there in the history books for all to read (though practically no one does); Guyana’s interior development programs are reviewed by local government officials, visiting technicians, and international agencies; “Brother Jonesie,” apart from its performative mode, is, literally, a record that anyone can listen to and comment on.

But to round out a Guyanese perspective on Jonestown, it is necessary to examine how Guyanese in the course of their daily lives talked about—textualized, if you like—the event for themselves. They did not cite the historical record (I doubt if one Guyanese in a thousand knows the story of Awakaipu). Nor did they make the close comparison I have made to their country’s own development programs, although they did privately criticize Jones’s ties to their government and its propaganda campaign for interior development. And while numerous Guyanese attended
the Pegasus festivities, “Brother Jonesie” itself was already composed, and without their participation (in fact, Dave Martins is an expatriate Guyanese). Newspapers and radio provided no forum for an open discussion of Jonestown; the Chronicle avoided extensive commentary until early December, when it began to look as though the international news saturation would never dry up. And then the Chronicle published a special issue attributing the disaster to American cultism while describing at length the accomplishments of Jonestown the agricultural commune. Denied that public forum, Guyanese did what they have always done. They gossiped. With a population of eight hundred thousand, Guyana is a tiny society. Coastal crowding, racial barriers, physical barriers posed by enormous and unbridged rivers, and the consequent isolation of Georgetown from the rest of the country fragment that tiny society into claustrophobic networks of kin, neighbors, lovers, workmates, friends. In the small postcolonial city of Georgetown, cut off from its own countryside and from the outside world, generations of Guyanese have developed an institution of oral narrative—“stories”—that flourishes still. Whether a daily newspaper is available, or whether it exists and is controlled by a corrupt and fearful state, is in some respects irrelevant, for Guyanese rely on “stories” that come to them from their individual associations with other Guyanese anxious to know what is going on. “Stories” circulating in Georgetown in the days and weeks following Jonestown focused on several incidents or rumors that received little if any attention from American journalists: a United States “attack” on or “occupation” of Guyana; the prime minister’s wife’s broken arm; Yankee dollars on the local black market; and Jones’s secret hideout. Taken together, the stories compose an interwoven text of perspectives at the level of Guyanese common understanding, a text that differs significantly from those of my impressions I have examined up to now and, certainly, from official American and Guyanese commentary.

How the Story Broke

In Georgetown in mid-November they knew that a US Congressman was in town and that he had come because of some problems in the Jonestown community, although the Chronicle admitted only Ryan’s presence in the country for unspecified reasons. At the end of the fatal weekend (the airstrip murders at Port Kaituma were committed on the afternoon of Saturday, 18 November, and the suicides at Jonestown that evening), Georgetown residents were alarmed by several US military aircraft flying in low over the city and circling Timehri Airport about thirty miles away. More than a few thought it was an
occupation force, that Burnham had finally exceeded the limits of US tolerance and was being overthrown. It is difficult to piece the story together, but some said fighter aircraft were involved in support of the first US transport flights that brought the burial squad. The chaotic news coming out of the US Embassy in Georgetown must have been difficult for State Department and Pentagon officials to interpret. A Congressman and several of his party had been killed in the jungle of an unfriendly country, whose government had opposed the Congressman’s mission there and apparently supported the group implicated in the murders. So the first US flights were dispatched as a military mission to secure the airfield and await further developments. The first army units off the transports were in full battle gear; a Pentagon spokesperson later said that was a routine procedure, but the Guyanese saw and heard “invasion.” Some planes fly over, troops disembark, a leader is deposed. The supposition gives a scale to the level of confidence and stability in this Third World country precariously balanced on the shoals of a continent whose language and culture are alien to its citizens.

Viola’s Broken Arm

Official Guyanese response to the Jonestown killings was late and disorganized, with much initial reservation about doing anything at all (remember that armed Guyanese soldiers, a contingent of the Guyana Defense Force [GDF], were present at the Port Kaituma landing strip and did nothing while Ryan and his party were being slaughtered). A second detachment of GDF soldiers was sent to investigate and guard the premises of Jonestown. But it was the jungle, the soldiers were blacks from the city who knew about bush spirits and “Buck” (Amerindian) people, and the scene was one of unbelievable carnage. The soldiers kept to the edges of the Jonestown dwelling area—except for the looters among them, who picked over corpses and ransacked buildings searching for the Yankee money they knew was there. US journalists talked or bribed their way into the camp and were given extraordinary liberties to collect documents and tapes. High-ranking Guyanese officials visited the scene in the first days and arrogated to themselves inspection privileges that led to “stories.” The most remarkable of these “stories” concerned the wife of the prime minister, Viola Burnham, who had visited Jonestown on previous occasions and apparently understood the scale and funding of Jones’s operation. As the story goes, she and a prominent government official traveled to Jonestown on Sunday, the day after the killings, before even the GDF team had taken up positions. There she supposedly collected a suitcase full of US currency and jewelry from a cache in Jones’s personal quarters (these were ransacked
several times in the course of the next few days). With this booty she returned to Georgetown and shortly thereafter drove back to Timehri Airport, where she cleared customs with a ticket on an overseas flight. She was skipping the country and her husband with the loot. However, an alert immigration officer, who knew where his allegiance lay, telephoned the Prime Minister’s Residence with the news that Viola Burnham was making an unscheduled trip. Forbes rushed to the airport, where he and Viola had a tremendous scene that ended with Viola being bundled back to the Residence with a broken arm. She appeared in public a few days later with her arm in a sling, and the Chronicle, as usual, supplied no information. But the story had spread like wildfire, and all Georgetown was talking about that broken arm. Then the Chronicle ran another photo of Viola at a state function, with her cast, and in a caption supplied the incidental information that the prime minister’s wife, Comrade Viola Burnham, was suffering from a flare-up of an old injury sustained when she had fallen from a horse some years previously. The rum-shop humor was unsparing of this gaffe: “Who de horse, who de horse?” was the refrain, with no one in any doubt that the prime minister was the angry stallion.

Yankee Dollars

Whether or not one chooses to believe the story of Viola’s broken arm—and most of Georgetown was eager to believe it—it is a fact that a great deal of US currency was taken out of Jonestown by Guyanese in the first days after the killings. Stories of suitcases full of greenbacks, of men murdered for their possession, of wads of bills hidden in Amerindian homes around Port Kaituma circulated along with the story of Viola. The police and army sent to guard the premises and, perhaps, the officials there to inspect them searched among the rotting, bloated bodies for the fortune concealed there. Guyana was a country almost out of money, and a fiscal desperation ruled the actions of everyone from the minister of Internal Affairs to the simplest housekeeper. Inflation and exchange rates had seriously eroded Guyanese currency over the previous few years, but that was not even the major problem: the government had made the Guyanese dollar a restricted currency. It was illegal to take it out of the country and, if one did smuggle it out, it was worthless in the outside world. Guyanese money could not be used to escape the country, which resembled a sinking ship more each day. Nor could it be used to purchase items from abroad, so that businesses floundered and personal needs went unfulfilled. In the context of this dying, desperate economy the Yankee money at Jonestown was an irresistible opportunity. Several hundred thousand dollars were brought out, apparently mostly by GDF soldiers, and soon flooded the local black market. At the time, the official exchange rate was 40 cents US to one Guyanese dollar
(a few years later [circa 1983] it was around 30 cents) and such were the needs of local businessmen and ordinary Guyanese planning trips abroad that the black market rate had pushed the Guyanese dollar as low as 10 cents. The infusion of Jonestown money changed the situation dramatically; suddenly there was as much American currency as anyone needed and the black market rates sagged. They remained at a depressed level for several months until, in June of 1979, they began to climb again. In this ironic fashion Jim Jones’s hoard, which he had squirreled away compulsively to use for future supplies from the United States and for bribing Guyanese officials, was put to use by ordinary Guyanese for purposes of their own. One can suppose that, in addition to Guyanese businessmen using the dollars to obtain supplies needed in their businesses, some of the money was used to enable Guyanese to escape their repressive and crumbling society. The victims of Jonestown had consigned their savings and Social Security checks to Jones, who had led them from California to their deaths in the jungle. Guyanese who fell heir to the tainted money used it to make the opposite journey, fleeing confinement, poverty, and social disintegration for the very society Jones had written off as a lost cause (and, in fact, there is a large Guyanese community in California). The reversal is chilling to contemplate: it was the move to Guyana that provided the isolation and normlessness essential to Jones’s insane scheme; and it was escape from Guyana that motivated many of the black market purchasers of the Jonestown dollars. Jonestown and Georgetown, the community of Americans who escaped to die, the city of Guyanese who seek to escape a creeping death, are two faces, two modalities of a single horror.

Working the Field, Graphing the Ethnos: Steps toward an Ethnographic Semiotics

It has been a premise of anthropology since the discipline’s beginnings that other people, specifically exotic people, inhabit societies that can be methodically observed, described, and analyzed. As we know, this attitude represented a major advance over the established tendency to dismiss the doings and sayings of “primitives” as deficient and inferior. Today, however, we have reached a crisis point in ethnography conceived as the systematic elucidation of other societies. The crisis occurs on two fronts and reaches its fever pitch where the two coincide, forming a juncture, hinge, or bridge. The fronts are the activities of doing fieldwork and writing ethnography. If it is true that anthropologists since Boas and Malinowski have conscientiously pursued inquiries into the nature of ethnographic fieldwork, producing a large methodological literature in the process, it is also true that
they have rarely posed in print the really difficult questions about their craft (although that has changed over the years). As I see it, those questions are: What is it that the ethnographer goes to study? and, What separates or distinguishes him from what he studies? The first question might be rephrased as a fill-in-the-blanks questionnaire: “Biologists study _____”; “Economists study _____”; “Ethnographers study _____.” Two problems present themselves in completing this seemingly simple task. First, there is a classificatory lapse in passing from biologists and economists to ethnographers. If the questionnaire had instead said “anthropologists,” the interview subject might have felt safe in responding with “indigenous societies” or something to that effect. But to say “ethnographer” is to introduce a bifurcated identity, that of anthropologist/ethnographer, which appears to be absent from other disciplines (how could one meaningfully respond to biologist/_____ or economist/_____?). This difficulty signals something of the peculiar nature of our craft. We are presumed to move from one to the other, from the distant and romanticized “field” (even if it lies in a neighborhood of one of our cities) to the study or classroom, where the results of fieldwork are themselves worked and reworked until they become lectures, articles, or books. As anthropologists/ethnographers we operate as double agents, and it is in acknowledging that notion of double (Kristeva 1969) that we begin to discover an essential feature of the species of inquiry that is anthropology. But simply filling in the blanks in our questionnaire is difficult enough. I proposed “indigenous societies”; others would undoubtedly suggest other terms that emphasize social change, urbanism, etc. I have no problem with those formulations, but the range of hypothetical substitutes is too small. For definitions or assumptions of what ethnographers study inevitably refer to entities called “societies” or “cultures” and invest those entities with a substantiveness, or presence as Derrida would have it, all their own. This is the problem with which I began this essay: What if the object of ethnography, the “native,” is not the privileged possessor of that set of actions, beliefs, rites, and practices to which E.B. Tylor referred us at the discipline’s beginnings? Only if we dismiss this carping doubt and accept the privileged status of the ethnographic object is it possible to proceed with the constructive methodological tasks of designing interview formats, sampling techniques, transcription conventions, and all the other items in our empirical tool kit. I would suggest that doubt cannot be so easily dismissed. Without really clarifying the basics of what we are doing, we employ the above tools or techniques to gradually build up an information storehouse, adding to it bit by bit as we gain the confidence of the natives—of those who know the secrets—until we at last have an adequate rendition of their society. Alternatively—and it is really not so very different at this level—we might presume that the natives’ views are relatively unimportant and proceed straight away to observing their actions
and formulating underlying, “objective” principles of behavior on the basis of those observations. Jonestown is an ugly smear across this tidy methodological landscape and across the landscape of human consciousness itself. Approaching Jonestown from an ethnographic perspective, it is impossible to retain the conventional assumption that the natives are the experts, that they know what they are doing and that our job is to watch, listen, and discover the underlying pattern of their assertions and behavior. It is also impossible, unless one embraces one of the grand cynicisms of an ism (Marxism, structuralism, functionalism, it does not matter which), to view the horror of Jonestown as a manifestation of hidden principles of social structure. For in this particular case the natives were insane, or permitted the insane to occur. And the natives of Jonestown are also dead, their lives extinguished by the very paranoid fury that brought them to our attention. Why, then, have I attempted to treat Jonestown from an ethnographic perspective, to maintain that it is not only an authentic ethnographic event, but an important one? It is because the ugliness and horror of that event transfixed persons who are neither insane nor dead and compelled them to render the event intelligible, to interpret it. An act of colossal madness fills the cultural atmosphere with a blizzard of texts and interpretations that tie that act to those persons who do the interpreting. All those texts, or as many as the ethnographer can assemble (as I have attempted to do here), provide him with the material necessary to his craft and thereby enable him actually to do something that can legitimately be called “ethnography.” And because so many people, from different countries, walks of life, and ideological persuasions, have been involved in the interpreting of Jonestown, it is impossible to say that the ethnographic event belongs to or represents a particular “society” or “culture.” Consequently, the first question I posed—“What is it that the ethnographer goes to study?”—cannot here receive the easy reply “A group of people, a society, a culture.” The reality of Jonestown is the heterogeneous assemblage of all the impressions, reactions, interpretations it has provoked; its madness opens a window on a multifaceted world of the cultural continuum (Drummond 1980). Doing fieldwork hinges on an assumption that one knows what manner of thing one is studying, and any such assumption is false if it involves a notion of a bounded culture with a single authentic or authoritative discourse. If that notion can now be rejected and some concept like the cultural continuum put in its place, then my second question—“What separates or distinguishes the ethnographer from what he studies?”—finds an answer. It follows from the deconstruction of the notion of a bounded culture that there is no decisive barrier between the ethnographer (the ethnographic agent) and the native (the ethnographic object). The former cannot look to the latter as
a neophyte to an adept, for the circumstances of social life are such that both are frequently confused and ambivalent about the aspects of cultural organization that seem most essential. Neither enjoys a privileged status. If culture is a text, it is a text whose author has lost his authority and even identity. Throughout this essay I have attempted to explore the complex interrelationship of my individual reactions to Jonestown and others’ public interpretations. Each of my impressions—of the American movie, of Awakaipu, of the Pioneers—led into a welter of texts, characters, social institutions basic to Guyanese and, sometimes, American culture. Note that the combination of impressions is peculiar to me; as far as I know, no other chronicler of Jonestown has attempted to bring such a diverse collection of materials together. In a sense, then, my own considerable experience in Guyana and my on-again, off-again fixation on Jonestown over the past several years in themselves constitute a major text of Jonestown (which you are now reading). Nor is my situation fundamentally different from that of ethnographers working on more traditional topics; if anything, those ethnographers can be far more authoritative than I, for my meager academic essay is chaff before the winds of television, journalism, and book coverage. Firth on Tikopia, Evans-Pritchard on Nuer, Malinowski on Trobrianders—each of these ethnographers has performed the remarkable and paradoxical feat of becoming the voice of “his” people while holding fast to canons of scientific objectivity that irrevocably distance him from them. His private audience with them has granted him the privileged status we customarily reserve for the native: he was there, he dwelled among them, he knows whereof he speaks. Into this classical world (post)modernity rushes like a tempest, destroying the ethnographer’s throne and drowning his voice in a sea of electronic images and newspaper pages. Simultaneously, the native begins to speak for himself—and not just in the village or the colonial capital, but in the same cities inhabited by his ethnographer. The two catch glimpses of one another waiting for a bus, entering an office building, being interviewed on television. It is only at this point that the nature of ethnography as dialogue (or, really, polylogue, after Kristeva 1977) becomes apparent; before, it was still possible to pretend—and it was pretending—that the ethnographer alone could speak, could produce the definitive monologue on a society. A culture is a heterogeneous assemblage of overlapping, conflicting interpretations, and its lack of boundaries means that the crucial notion of ethnos loses its former meaning. I have tried to identify some of the far-reaching consequences the dissipation of this notion has for doing fieldwork. But it is the second, virtually unexamined root of “ethnography” that poses the most fundamental questions. Important theoretical discussions of ethnicity (Galaty 1982; Herzfeld 1983) have alerted us to the textual properties of ethnos, but how often do we scrutinize the activity of graphing the ethnos? What
is ethnography as a species of writing (as James Boon [1983] asks)? In the framework of scientific discourse that has dominated and contained ethnography in the past, it is assumed that the object of research is the sole issue. The tools of description—the writing and not the written of ethnography—are regarded as frictionless, noncontaminating agents of the research act. We may agonize over research design, elicitation techniques, or—that old favorite—rapport, but proceed doggedly with the task of “writing it up” (in much the same sense as “wrapping it up,” referring to goods collected and purchased and waiting to be packaged). Everything I have said about Jonestown as ethnographic event argues against this way of looking at ethnography. There are too many discordant voices, too many facts that turn out to be impressions and impressions that turn out to be facts, too much ambivalence on the part of ethnographer and native alike, and, perhaps encompassing all the rest, too powerful a sense that the importance of words and deeds lies in their carnivalesque quality—in dressing ideas and social institutions in the garish costumes of the stage—for us to embrace the seductive illusion that our writing graphs the contours of an entity out there, before our eyes, waiting to be sketched. It is time to examine the lens of print through which we examine societies. I am advocating that we treat as a text not only culture, but the ethnographer as well (as Herzfeld [1983] argues), and that the textual (really, intertextual) properties of the two cannot be meaningfully distinguished from each other. In making this claim, however, I do not think I am surrendering cultural anthropology to literary criticism and calling on my colleagues either to turn in their pith helmets or, if they have been fellow travelers all along, to come out of the lit crit closet. The ethnographer does indeed produce a text; she tries to graph the ethnos (whatever that may be). But, unlike the literary critic, she also does fieldwork; she works the field as a productive act in tandem with, but not synonymous with, the productive act of graphing the ethnos. It is at the juncture of these dual productions, these two fronts, where the most fundamental questions of ethnography are to be found. If a literary text, as Kristeva (1977) claims, is an acquisition, an iteration, an interpretation of other texts, a cultural text is similarly but more radically intertextual. A contemporary writer, say, Pynchon, is undoubtedly influenced by Joyce, and in reading Pynchon we are influenced by our readings of Joyce. But Joyce did not read Pynchon, nor did he have to contend with our contemporary and ongoing readings and interrogations of him. And since he spent so much of his time writing, he interacted with his peers rather less than an ethnographer might find desirable in an informant (what do we do with “ethnographic” reports based on work with a kind of super-Ogotomeli?). Suppose we applied the acid test of traditional ethnography to Joyce’s work: I know virtually nothing about culture X; you hand me a monograph on it; what does
it tell me about culture X? If the monograph is Ulysses I would be hard put to it for an answer that would satisfy the traditionalist. Certainly a large part of my difficulty would stem from my (as we now know) mistaken assumption that there is a culture X out there, waiting to be described by someone who writes more accessible prose. (If I have not persuaded you to entertain the idea that this assumption is mistaken, then you would assign all the blame to Joyce.) But does all the difficulty lie with my old-fashioned, functionalist ideas about culture? Are there ethnographerly texts and non-ethnographerly texts, just as Barthes (1974) claimed there are “readerly” and “writerly” texts (he much preferred the latter)? Surely we can reject outmoded theories of culture without abandoning the idea that there are good and bad, more and less ethnographic texts. Culture happens fast and furious; the textualizers are interacting with and interpreting one another at the rapid clip of life itself. It is into this fray that the ethnographer steps, retreats to mend the damage and teach and write for a few months or years, then steps back, retreats again, writes some more, goes someplace new, writes about the old place in terms of the new, and through it all wonders how greatly the spaces in between—those periods of what he wants to call his “private life”—affect the doing and writing of his ethnography. In the meantime, while all this is going on, his “natives” do not keep silent; they do not remain locked in his word processor. And, like him, they are growing older and more complicated; their lives and the society around them are changing, often far more dramatically than the ethnographer’s own society. What I have called the ethnographic event is subject to all the pushes and pulls of their own retrospective interpretations and, more critically, to their disposition to repeat the event under similar circumstances. This tumultuousness makes for involved, even tortuous discourse (ethnographers may well end up writing like Joyce after all!). But it is, I believe, a serious mistake to reject that discourse because of its unscientific embrace of confusion, ambivalence, and the carnivalesque—a serious mistake, and a terrifying one when one considers how far some are prepared to go in the name of certainty. In the heterogeneous array of ambivalent texts and characters that comprise the Jonestown horror, the figure most sure of himself, sure of his reading of the situation, sure he had got it right, was Jim Jones. The meaninglessness of the mass murder and suicides effects a dialectical closure of paranoid and scientific logic: there was a single truth. The stench has finally lifted and the jungle is reclaiming the clearing that was Jonestown (Burnham abandoned the ludicrous project to turn it into a museum). What remain are living Guyanese and Americans, the principal bystanders, and everything they have said, written, photographed, and thought about the event of Saturday, 18 November 1978.


An Evening of Ballet

In the years that followed our conversation about Woodstock, my friend and I saw little of each other. I returned to the United States; he continued teaching at the school in the bush until, a year later, he won a position at the Government Training Institute in Georgetown. There he followed through on his plan to become a surveyor—a position in great demand in a country like Guyana, where nothing is charted and everything is to be built. When I returned to Guyana in 1975 for a few months, he had already graduated and was the supervisor of a ship’s crew mapping the maze of sandbars that choke the mouth of the Essequibo River. He had left the Amerindian village of Kabakaburi, where his identity as Amerindian or “Buck” was made problematic by an Afro-Guyanese, or Creole, ancestry. And he had joined the coastal society, where being a “Buckman” was a real social liability. A mixed Arawak, marked in his natal village as something other than Arawak, who yet, among his workmates on the coast, was a man of the interior, of the bush, a “Buckman”—this new man of Guyana moved uncertainly across the shifting internal boundaries of his new nation. In 1975 my work was still in the interior, still centered on the complex ethnic milieu surrounding the village of Kabakaburi. We moved, then, my friend and I, in different directions and when we did meet it was, fittingly, in transit. On a trip from Georgetown to Kabakaburi, a journey of one hundred miles and twelve hours, I reached the right bank of the Essequibo too late to catch the next leg of transportation—a ferry that departed daily on the three- or four-hour passage to the other side of the river. I was stranded, left to contemplate an evening on the stelling (the Dutch term is retained) as I watched the ferry pull away in its churning wake. I turned away, disappointed and angry, and there was my friend. It was our first meeting in four years, although I had been living with his parents in Kabakaburi for the past month or so. The filmgoer and former rural teacher was now master of a vessel, a small tug being used by his survey crew to map those sandbars through which the departed ferry was even then threading. He offered me a lift. Or, rather, he proposed a chase—they were about to depart upriver and would try to overtake the ferry and put me aboard. It turned out to be more of a chase than either of us had bargained for. The ferry was large and cumbersome-looking, but faster than we realized. The logical boarding point, a jetty on an island where the ferry stopped briefly to take on or disgorge a few travelers, was missed—the ferry pulled away before we could overtake it. Finally, we closed the gap in mid-channel. After much shouting and gesturing from both ships the ferry slowed to a crawl, our tug pulled abeam for an instant, I was persuaded to jump from rail to rail, my bag was thrown across, and my friend’s ship dropped back.


The chase had been punctuated, in the Guyanese way, by drinks and jokes and, in the final moments before I clambered up on the rail, we agreed to get together “in town” for more of the same. But that arrangement did not work out; my time was short and I kept close to Kabakaburi, avoiding the squalor and nuisance of Georgetown. We met once more, a few weeks later, and again on the ferry stelling. But that time a chase was unnecessary; the ferry was waiting and I was just another harried traveler anxious to get aboard. Four more years elapsed before our next meeting. We did not correspond, though I kept abreast of his career through letters from other members of the family. I learned that he had married; he now lived in Georgetown. When I returned to the country in 1979 we were finally able to have that drink in town, though the circumstances had changed. I was invited to dinner at his home and went with his sister and her husband, who also lived in Georgetown and who was my closest friend in Guyana. Our host lived in the southern part of the city, in an area that had been sugar cane fields a few years before, when we were chasing ferries. The home was one of a large tract developed during the “Feed, Clothe, House the Nation” campaign of the late 1970s. It and its neighbors represented an extension of the cane that had preceded and still bordered the homes—an architectural monocrop designed for cheapness and ease of construction, new homes for the workers of the new Cooperative Socialist Republic. A light rectangular frame on stilts, it was constructed so that the ground level beneath could be used as living and working space while the upper chambers, like a tree house, vibrated and shifted with the movements of their occupants. Before the meal, while we consumed that long-promised drink, I was shown the wedding book. How many of these have I seen during my intermittent incursions in Guyana? Homes in every part of the country have them, more or less elaborate depending on the social circles in which one is moving. Occasionally a book spans generations, with the parents’ wedding book supplemented by wedding pictures of married children. I remember (how long ago it seems, before I had lost count of these wedding books) being embarrassed and more than a little disappointed when one of my first Arawak acquaintances produced such a book and spoke proudly, sitting there in his Kabakaburi home, of the unions his daughters had made with men whose pictures showed them to be non-Amerindian men from the coastal belt that turned its back on the interior and its Amerindians, on my host and his fellow Arawak villagers. After all, I had come to live among and study Amerindians—Arawak and Carib—had come a long way and at considerable expense. And I had not expected to be shown wedding pictures, particularly pictures that documented the extinction of the very people whose “culture” (as I then supposed existed) I had come to study.


Life plays tricks on one’s categories. That afternoon in the tract house in Ruimveldt Gardens I found myself, nearly ten years after my initial disappointment with wedding books, in the company of people pictured in those very books. My host was a Guyanese, or, perhaps I should say, Kabakaburi success story. A boy of mixed Amerindian parentage had grown up in the bush, worked and studied hard, landed a prestigious professional job with the government, married a woman from the city, and now had a home there. On top of that, his wife was Afro-Guyanese, or, as the Guyanese still say, “African,” and was active in cultural programs sponsored by the government. She was, if not one of the party faithful, at least closely aligned with it. Amerindian and African, the bush and the city—the flimsy home strained under the oppressive weight of these Guyanese dichotomies. The afternoon passed easily enough, but there were no moments like those when my friend and I had shared our thoughts about Woodstock, or like those when we joked and drank during our ferry chase. He was now married, a man of substance, and was entertaining in his home. The occasion had a certain gravity about it. And there was an underlying political tension. He and his wife were of the government; his sister and her husband were not. They had not loosened their ties with the interior as effectively as my friend; and while he appeared to be a permanent resident in the city, they talked of returning to a country district. Conversation about recent events, including Jonestown, was punctuated by small hesitations: how much could be said and how much only hinted at? Several weeks later we met for the last time, by accident. It was at a performance of ballet and modern dance by a visiting French troupe that took place at the recently opened Cultural Centre. The Cultural Centre was an attraction and a thing of pride for Georgetown residents; it represented the first modern auditorium in the country. Before its completion, cultural performances were limited to the facilities of school auditoriums, cricket grounds, or, worst of all, the Georgetown movie houses. These movie houses are still, twelve years after the proclamation of the Socialist Republic, divided into classes of seats: “balcony,” the highest and best; “house,” a section of less comfortable seats below “balcony;” and the “pit,” in which Georgetown youth howl at the action on the screen from rows of wooden benches. So the Cultural Centre bypassed and, in a sense, erased the ignominy of the movie houses, where the crowds stand at attention for the playing of the national anthem before resuming their seats in the “balcony,” “house,” or “pit.” The Cultural Centre is located on the eastern edge of the city, near the Prime Minister’s Residence, the Botanical Gardens, and the recent Cuffy Monument (Cuffy was the leader of a failed slave rebellion, brutally suppressed by Dutch planters). Between this public area and the center of the city, where hotels and major businesses still give the appearance of normalcy, there lie
the squalid ruins of Georgetown, city of the Third World. Decaying wooden shops that have little left to sell are heavily barred and shuttered, streets and sidewalks are grimy with refuse, and the drainage trenches that lace this city below sea level are choked with filth, backed up and running over with a dark, septic fluid turbulent with parasitic life. As one rides to the evening performance at the Cultural Centre, the night people of central Georgetown have already taken up their stations: small groups of the homeless have unrolled their bundles of rags on the sidewalks and lit kerosene flambeaux; prostitutes lean like decrepit pickets against the fence surrounding the central Independence Park; youth gangs patrol their territories. Frequent blackouts, the result of the city’s exhausted electrical generating facilities, plunge the entire scene into darkness and accentuate the noises and smells of bedlam. To travel through this scene is to realize how bad things really are: on these streets, in this nation, there is no order, there is no law; anything could happen here. And anything did happen. Jonestown was 150 miles away and a few months in the past. The appearance of the French troupe at the Cultural Centre was a rare event, a piece of genuine high culture that had somehow been diverted on its way back to France from French Guiana. To celebrate the fact, the comrades of the party’s cultural hierarchy had turned out in numbers. And they came in their full regalia: heavy women wore Afro fabrics of European design and Asian manufacture; large men, their skin in glistening rolls, costumed themselves in the stiff, ill-fitting shirt jackets that have become the party uniform. They had negotiated the same streets as I to reach the Cultural Centre that evening. What had their thoughts been on that ride? The dance pieces were extraordinary, a sampling of brilliant ballet and innovative modern dance. I have been to so many flops in Guyana, sat through so many terrible productions in hot, cramped, ill-lit halls, that the evening at the Cultural Centre was a revelation. It was almost possible to forget what was outside, what awaited on the drive home, to forget the extinguished lives in the jungle community. And a small incident at the beginning of the performance, when a sound man mistakenly started a piece of modern atonal music and the audience, anticipating the obligatory national anthem, half rose, confused, not knowing for a moment where the patriotic gesture lay, until the mistake was corrected and they could stand proudly in their Cultural Centre while the anthem played—this incident only detracted for a moment from the compelling performance that followed. It was at intermission that I ran into my friend and his wife. He was shirt-jacketed in the official way and held a rum drink; she was in an Afro gown and her eyes sparkled with the importance of the occasion. But the comrades had been made uneasy by the performance; it was impossible to translate it into their political cant and, even more unsettling, its intensity
provided a harsh contrast to the shallowness and self-deception of their public lives. They were drinking heavily at the lobby bar, and soon the noise and gestures of the rum shop began to assert themselves in the Cultural Centre. My friend was on the edge of the crowd, drawn to the power at its center. Some of those men had been minor accomplices of Jonestown, officials who knew yet did not know what Jones was doing out there in the jungle. My sudden appearance was an embarrassment to my friend; the worlds of Kabakaburi and the Cultural Centre, the casual conversation about movies and the serious business of party recognition, the white man in the bush and the same man across the street from the Prime Minister’s Residence were too disparate. As we talked his eyes wandered nervously to his wife and to the shirt-jacketed figures at the bar. He made large gestures, in the manner of one full of purpose and needing to be off. In the crowded, cocktail party setting the separation was easily accomplished. We both understood what was expected; the crowd rippled and we were no longer standing together. In a few minutes the dancers would begin the second part of the evening’s performance. But at that moment the stage had shifted to the lobby, where these new men and women of Guyana, my friend and his wife among them, were giving a separate performance in the midst of the larger one provided by the visiting representatives of another, less devastated society. My thoughts wandered over that conversation about Woodstock years ago, over the recent horror of Jonestown and the present horror that waited in the streets outside, over the French dancers and the Guyanese drinkers. My friend had helped me see the event of Woodstock as performance; he now showed me the stage on which he performed. And somehow Jonestown was caught between, implicated by these two episodes, and thereby placed on a stage that lacked the coordinates of place, of nationality, of cause and effect. Performer and audience, event and interpretation, writer and reader—how is it possible to distinguish these in the ethnographic field? Where did they find all those actors?

Notes

1. An earlier version of this chapter was published as Drummond, Lee. 1983. “Jonestown: A study in ethnographic discourse.” Semiotica 46(2–4): 167–210. Amsterdam: Mouton/De Gruyter. A rash of “instant books,” articles, and reports (Kilduff and Javers 1978; Krause et al. 1978; Harris 1978; US House of Representatives 1979) that appeared immediately following the tragedy has been supplemented by more considered pieces (Lewis 1979; Lane 1980; Naipaul 1981; Reston 1981), so that a considerable Jonestown literature now exists.
2. “Edification by puzzlement” is a theme developed by Fernandez (1981).
3. The reference is to Geertz (1971: 29): “The culture of a people is an ensemble of texts, themselves ensembles, which the anthropologist strains to read over the shoulders of those to whom they properly belong.” There are at least two problems here. First, the nature of texts precludes their belonging to somebody or to some homogeneous group—to an author—for they are always exchanges or interactions among diverse subjects. Always ensembles, as the citation notes, they iterate and encompass a diversity that undermines the conventional notions of ownership and authorship. Second, even if some texts did belong to a particular person or group, they would not be the texts that anthropologists usually see, which are texts produced by persons whose worlds are filled—if not drowned—by an intrusive, media-saturated modernity. The natives’ texts are riddled with bits and pieces of Western culture, so that, in reading over their shoulders, we often find an editorialized version of our own copy.
4. The article is a polemic against, among other things, cults, the US school system, and the editorial committee of the American Anthropological Association’s annual meeting. Jonestown itself is not discussed, nor is the fact that it happened in Guyana, a Third World nation that bears little resemblance to the middle-class America that is the real target of Harris’s invective. The possibility that the events at Jonestown could be tied to their Guyanese context does not seem to have occurred to Harris, an omission that makes his piece the antithesis of an ethnographic account.
5. An excellent account of Awakaipu’s millenarian movement may be found in Swan 1958 (pp. 243–245).
6. Regrettably, I have been unable to contact Martins or his organization for permission to quote the song’s lyrics. That may be just as well, for the reader needs to appreciate the piece not just as social commentary but as music—the vibrant reggae beat of a people who still believe in dance. Fortunately, the song is freely available online; see, for example, “THE TRADEWINDS—Brother Jonesie” (retrieved 24 July 2017 from https://www.youtube.com/watch?v=QI0J4coNakw) and “Brother Jonesie—The Tradewinds” (retrieved 24 July 2017 from https://www.youtube.com/watch?v=fSc97QUGt24).

Chapter 2

News Flash! Cultural Anthropology Solves Abortion Issue! Story at Eleven! (Being a Cultural Analysis of Sigourney Weaver’s Aliens Quartet)

The Evolution of Culture, as Seen from the Cutting-Room Floor

If Humanity Made Sense, the Human Mind Wouldn’t Need To

It is an ancient Mariner, and he stoppeth … well, he stoppeth a whole lot more than one in three—we’re talking more like one in three billion people here, wedding guests and all, or, not to mince words, pretty much the entire planet. Hey, this thing’s totally global! What thing? Why, movies—American supergrosser movies that, besides being great fun, are sprawling, brawling, in-your-face engagements with issues that really matter to audiences whose members—you and me—are picking our way across the conceptual and emotional mindfields of day-to-day existence in the U. S. of A. If Samuel Coleridge were around today, hooking down laudanum (well, oxies) and drifting off in drug-crazed reveries, concocting fantastic and, as in “The Rime,” nightmarish tales of life way beyond the edge of everyday reality, if Coleridge were around today, he would be a screenwriter, cranking out scripts for Hollywood’s latest phantasmagorias. Americans may have entered the new millennium with our high-tech society in warp drive, with our megabytes now mounding into gigabytes and our new check cards taking the place of cold cash, but we are not about to give up our movies, in all their weird, otherworldly, horrifying content. Movies are our combined
myth and ritual, and we are every bit as tribal in our desire and need for their spectacle as the proverbial painted savage of old. Coleridge and his successors—Steven Spielberg, George Lucas, Thomas Harris, Sigourney Weaver, Anthony Hopkins—are our tribal magicians, necromancers, shamans, exorcists, dream-merchants whose productions not only fill our idle moments, but reach to the depths of our souls. To date, our legions of talk-show pundits, cultural mavens, social commentators, and the whole cultural studies rabble have barely scratched the surface of this deep, rich vein of our collective psyche, of what counts as being human in this new millennium. Sure, there are stacks and stacks of books and academic articles on the general theme of “the significance of the media in contemporary life,” but 99.9 percent of these are so riddled with jargon and equivocation that they inspire only a great yawning “So what?” This army of interpreters lets us down in the worst possible way: they take movies that are gripping, exciting, tremendous Fun and wring out all their vital juices, drain that most precious commodity, a good time, right out on the floor where it congeals into a cold, lifeless blob. As Nietzsche said of philosophers who preceded him, “nothing real escaped their grasp alive” (1954 [1889]: 472). Want to bore an audience to tears? Want to trigger the quick-draw reflex of the American finger on the TV-channel remote? Just say you’re going to tell the good folks what they should really be thinking about a movie they loved—Star Wars, say, or E.T. They will turn away in droves. Much of this antipathy springs from people’s natural reluctance to have an emotional, even cathartic experience—just the kind of experience a great movie delivers—reduced to a few trite moral lessons. If a movie touches your soul, do you really care if it gets a two-thumbs-up? From Tipper Gore (remember her?) to all the aforementioned talk-show savants, we are lectured ad nauseam about the deleterious effects of our movies: they promote violence, erode family values, detract from vital social issues, and just generally degrade the fabric of society. But, for all these commentators and their endless, heavy-handed pronouncements, we seldom get a detailed, reasoned argument about just how a particular movie achieves a particular effect in its social surrounding. How do Star Wars and E.T., or even James Bond movies for that matter, act on people’s fundamental understanding of their existence in this complex, changeful world of twenty-first-century America? I would make the reckless claim that this last question lies in the province of the cultural anthropologist or anthropological semiotician, who is or should be prepared to look closely at a particular cultural production and find in it mythic and ritual themes of a fundamental generality. There are two parts to my proposal. First, the particular: if cultural anthropologists excel at anything it is in their tenacious, fine-grained exegesis of events that most people never see or, if they do, promptly ignore. We are consummate bores, and
we compound the sin by making that our guiding methodology and totemic calling card: the method/theory/vocation of ethnography. As flies on the walls of rooms where nothing much ever happens, we ethnographers are left to our own devices by the horde of journalists and media commentators who want above all else to be in rooms where earthshaking events are occurring—or, at any rate, events they try desperately to elevate to the status of earthshaking (did we really care if Gary Condit diddled Miss Clairol?). Where the pundit may apply a few well-oiled phrases to the current supergrosser (phrases that never really deviate from common sense; that is why they’re so palatable), the cultural anthropologist, in his or her guise as ethnographer, dissects it scene by scene, forcing it to yield insights into our nature as cultural beings that are anything but commonsensical. Hence the second part of my proposal: cultural anthropology, as I practice it here, is not out to demonstrate a set of tidy messages that supergrosser movies propagate (violence resolves our social issues; family values are passé), but to identify fundamental and essentially unsolvable problems or puzzles that occupy our dreams, nightmares, and many of our waking moments—problems or puzzles that find dramatic form in movies. A mistake most scholars make in undertaking cultural analysis, and a mistake all media commentators seem to make, is to assume that a critical assessment of a movie will yield up just such a set of tidy messages and will enable us to issue definitive pronouncements about the moral and aesthetic content or themes of movies. I think this is just what puts off the everyday person in the street, who may go to a movie to have a good time, but at the same time realizes, in her heart of hearts, that she is engaging dilemmas of mythic proportion that make her life anything but a tidy truth—a “truth” even such dim bulbs as Dan Quayle and George W. Bush can grasp and serve up to her in their pablum prose. If only things were so simple. If only cultural things, such as movies, TV shows, football games, weddings, and so on, contained meanings we could identify and then apply to our own lives. After all, isn’t that what anthropologists and philosophers have been telling us human culture is all about? Attaching meanings to events and social arrangements, meanings that help us adapt to and transform our environment, thus giving us a leg up on animals, those poor, dumb brutes who do not possess symbolic thought and who can only react to their circumstances? The following outlook pretty much defines the principal origin myth of secular America (educated Americans possess origin myths every bit as fantastic as those of painted savages): Darwinian evolution took its slow, meandering course until it produced a being, Man, endowed with consciousness and symbolic communication. Those faculties enabled Man to accelerate greatly the tempo of biological evolution by fashioning complex tools, planning hunts,
and making involved social arrangements (such as food-sharing and sexual division of labor). Where biological evolution depended on genes and their slowly accumulating mutations, human evolution utilized readily manipulated symbols and their embodiments in language, technology, and social institutions. The name given to this novel form of evolution, by figures like Oswald Spengler and E.B. Tylor, was “culture.” Several generations of anthropologists, acting as tribal elders, have passed along this account, or origin myth, until today the average literate American (whose literacy, of course, makes him none too average) takes it for granted that humans evolved because they possessed something called “culture” that was, as the litany runs in Anthro 101 courses across the land, acquired, shared, and adaptive. But should we regard the evolution of culture as an adaptive response to our environment? Adaptive in that it enabled us large-brained humans to better control events taking place around us? This canonical vision of humanity looks pretty good on the surface—after all, there are some seven billion of us while chimps, who were dealt a slightly different hand at Mother Nature’s poker table, have been reduced to a few bands of survivors. Despite this argument’s patent appeal, however, I would suggest that it is all turned around, a just-so story that is comforting for a number of reasons but that completely misconstrues the nature of cultural processes. Let me sketch out this little heresy, before proceeding to apply it in an interpretation of a particular movie-myth of twenty-first-century America: Sigourney Weaver’s Aliens quartet. Rather than view the evolution of culture as a triumph of sense or meaning over normlessness of the protocultural (early hominin and primate) world, I think it must be understood as a continuing and escalating engagement with contradictory aspects of life that themselves issue from earlier cultural or protocultural processes.1 In brief, culture is mainly about questions and problems, not about answers. From its earliest stirrings, before human culture was properly “human” or “cultural,” things have been thoroughly screwed up. Culture is “adaptive” in something of the way that, finding yourself trapped in a narrow tunnel with a freight train barreling down on you, it is adaptive to run like hell or wave frantically to the engineer. That train’s a-comin’, and you have little choice but to react desperately. Try anything, but don’t just stand there and get flattened. The just-so story of culture as a rational adaptation to environment ignores the desperate circumstances our hominin ancestors faced as they struggled to survive, individual by individual, group by group, generation by generation.2 Instead, it posits a prehuman world in which social arrangements and economic pursuits already possessed an inherent orderliness, only waiting on us large-brained folk to discern and exploit. This general outlook on the preconditions of cultural evolution parallels Darwin’s cozy speculation that biological evolution—the origin of life—began in “some warm little pond.” How very Victorian country-gentlemanly of him! The palette of nature was laid out, all soothing pastels of organic molecules, calm water, a nurturing sun. It was a scene you might chance upon while picnicking in a summer meadow. And how very, very different are the well-reasoned inferences of evolutionary biologists today, who place the origin of life in the Hades of deep-sea hydrothermal plumes, where the sun never reaches and where the first organisms, far from that warm little pond, thrived quite literally on fire and brimstone (Ghose 2013). The course of cultural evolution I sketch here is as different from the just-so story of rational adaptation as the findings of contemporary evolutionary biology are different from Darwin’s Victorian musings. If the protocultural world of early hominins wanted only some rational tidying up with our newly minted faculty of symbolization, that is, if things already pretty much made sense, then why would Mother Nature have gone to the trouble of producing and sustaining an organism as improbable and expensive as the human, whose large brain and helplessness at birth impose enormous costs in terms of energy use? I suggest Mother Nature went to the trouble because our hominin ancestors, who already possessed a long-established protoculture, found themselves up against it. With a lot of help from an erratic and unforgiving Pleistocene climate, they had made a mess of things: their lives had become thoroughly screwed up; that old train was barreling down the track; desperate measures were called for. Culture, with its signature traits of a highly evolved language and technology—the manifestations of a developed symbolization process—was that desperate measure. It was staying at least one jump ahead of that train; it was filling an inside straight with all your chips in the pot in that evolutionary poker game. An earlier American cultural anthropology, beginning with Franz Boas and Ruth Benedict and extending right into the 1990s, adopted a more comfortable attitude toward its principal concept, culture, one that kept close to the shores of its own warm little pond. Anthropologists regarded the densely packed symbolism of myth and ritual as the vehicle a people utilized to invest their lives and social institutions with meaning. This seemed a reasonable approach, and to their great credit Boas, Benedict, and their successors championed it in the face of prevailing academic and public opinion that those hordes of painted savages, unlike us rational Westerners, danced around and babbled nonsense because they possessed a “pre-logical mentality.” It was a happy thing to counter racism with the enlightened view that, if the symbols of myth and ritual do anything, they carry meaning, and the anthropologist in her role as ethnographer could proceed to describe a particular society or people as an interconnected set of meanings.


Increasingly, however, it has dawned even on cultural anthropologists that a society's meanings are coherent and adaptive to a very limited extent. A man straps high explosives to his body, walks into a pizza parlor filled with kids, and sets off an unspeakable horror. The next day crowds are parading in the streets with banner-sized photos of the man, celebrating the hero who has gone to his just rewards. An American executive lives his affluent life in an oak-shrouded suburb, commuting to the factory where he is CEO, a factory that manufactures anti-personnel bombs (charmingly called "toe-poppers" and "bombies") that kill and maim tens of thousands of African and Asian children every year. Faced with decades of atrocity and hypocrisy, the old "symbolic anthropology" inaugurated by Benedict has lost its bearings, along with its lunch.

Here we encounter one of the more bizarre chapters in recent intellectual history. In the early decades of the last century anthropologists insisted, in the face of a protofascist America, on the integrity—and integrality—of indigenous societies. Non-Western peoples were not deficient, but different, and the differences of a particular group formed a coherent pattern, a "culture." Hundreds of thousands of kids were indoctrinated with this view in Anthro 101 courses across the land, and some of those kids went on to become the print and TV journalists, the authors of popular books, and the directors of popular movies (Kevin Costner's Dances with Wolves, for example) who influenced tens of millions with their school-larned ideas about the wholeness and distinctiveness of cultures. Today Americans readily apply the term to racial, ethnic, and gender groups, to institutions (the "culture of the military"), to corporations (the "culture of IBM").

Yet while all this has been going on, American anthropologists have paid increasing attention to European scholars writing in the tradition of figures as diverse as Marx, Freud, and Lévi-Strauss. Early on, that is, beginning in the 1940s, Max Gluckman and, later, his students (including Elizabeth Colson and Victor Turner) began to describe "a society" as made up of inherently irresolvable conflicts and contradictions. It is the core of this argument that I develop here, extending it to cover the very symbols and values that individuals embrace as true and consistent. Baldly stated once more: social life—humanity—does not make sense. That is why the human mind has to take on that insurmountable problem.

Thus the bizarre situation in which we find ourselves today. Just as the concept of culture gained general acceptance in American public life, it has become an embarrassment for cultural anthropologists. If educated Americans pay any attention to what we are up to (and the article titles in American Ethnologist hardly beckon), it has to be with a sense of disappointment and betrayal. The curious and, again, utterly bizarre effect of this switching-places is that the "culture" concept has now become an item in a folk taxonomy.
Anthropologists abandon it as theory only to encounter it in the "field" of American life. Ethnic, gender, religious, and special-interest groups of every persuasion vehemently proclaim their inviolate cultural presence in a vast mix of other, equally distinct members of a multicultural world. So, in this roundabout, through-the-looking-glass odyssey the anthropologist is back to studying American "culture," but now one made up of discrepant ideas and values, of symbols that do not mean. "Der Irrtum ist zu sagen, Meinen bestehe in etwas" ("the mistake is to say that meaning consists in something") (Wittgenstein 1967: 4).

But if symbols do not mean, what do they do? What are the elaborate cultural productions of myth and ritual for? I propose that images and symbols do not mean in any conventional sense; they do not knit together an idea and a social practice in a tidy, coherent element of culture or social structure. Instead, the symbols of myth and ritual are boundary disputes, skirmish areas, flash points, danger flags, indications of where and how badly things have gone wrong in the course of living a collective, human life. Social commentators and their politician masters have got it wrong: culture is not consensus; elections decide nothing. Hail to the Thief.

If cultural anthropologists in the past (and yes, into today) have played it safe in describing culture as adaptive and consensual, it is intriguing that the cultural productions of that peculiar tribe of painted savages known as "Americans," and in particular some of their supergrosser movies, have pulled out all the stops. Educated Americans are more likely to find insights into their lives and their culture at the local movie theater than in the pages of American Anthropologist (assuming they would ever come across an issue of that staid organ's minuscule print run of eight thousand). I suggest that we all, anthropologist and nonanthropologist, moviemaker and moviegoer, are perpetually engaged in the cultural analysis of our lives and our society. Not because we delight in constructing theoretical models and authoring reams of unreadable prose, but because we are beset, in our anything-but-ordinary human existence, with deeply troubling issues. Our lives, in Simon & Garfunkel's ancient lyric, are "a floating question, Why?" We look to, as well as at, our movies. When an anthropologist looks to as well as at our movies, he or she is doing ethnography.

Granted, there's a lot of schlock out there in movieland and, when it comes to science fiction movies, a lot of xenophobic schlock: from Earth versus the Flying Saucers through Independence Day, our movies have fed an unhealthy American appetite for carnage visited on Them, those slimy, bug-eyed monsters out to eradicate humankind and the American way. Still, for the purposes of this essay, it is important to focus on science fiction movies. Without bothering to justify the claim, I would argue that the history of American film contains a thread of science fiction movies that are truly mythic in proportion: The Day the Earth Stood Still, 2001: A Space Odyssey,
Starman, Close Encounters of the Third Kind, Star Wars, E.T., Predator, 2010, Jurassic Park, Terminator, Contact, and, the focus here, the Aliens quartet. Lurid escapism? Knee-jerk xenophobia? No, in the examples—the movie-myths—at hand, our supposedly easy acquiescence in a commonsensical reality evaporates, replaced by an engagement with fundamental questions of existence and identity. Faced with the cinematic specter of the Alien, we are forced to suspend our everyday concerns—the spouse, the lover, the kids, the job, the mortgage—and confront other issues: What is humanity? How did we come to be? Where, if not to extinction without a trace, are we going? Would an alien intelligence be utterly different from our own? If so, how would we even begin to communicate with it? How would we even recognize it as an “intelligence”? Sci-fi movies are not philosophical tracts, but for that very reason their action-packed, cathartic episodes engage us in a Dreamtime world in which fundamental questions possess an immediacy and substance they never had in “Intro to Western Civ.” class. As true tempests, Shakespearean tempests, those movie-myths hurl our frail craft far from the sheltered coves and warm little ponds of daily existence.

Aliens/Aliens: A Cultural Analysis

From ancient times until about the nineteenth century, it was taken for granted that some special animating force or factor was required to make the matter in living organisms behave so noticeably differently from other matter. This would mean in effect that there were two types of matter in the universe: animate matter and inanimate matter, with fundamentally different physical properties. Consider a living organism such as a bear. A photograph of a bear resembles the living bear in some respects. So do other inanimate objects such as a dead bear, or even, in a very limited fashion, the Great Bear constellation. But only animate matter can chase you through the forest as you dodge round trees, and catch you and tear you apart. Inanimate things never do anything as purposeful as that—or so the ancients thought. They had, of course, never seen a guided missile.

—David Deutsch, The Fabric of Reality

Nor, to continue Deutsch’s thought, had the ancients seen any of Sigourney Weaver’s four Aliens movies, in which some decidedly animate beings differing fundamentally from both bears and guided missiles do a great deal of chasing around and tearing apart. If Aristotle’s classic identification of life with “some special animating force or factor” no longer satisfies twenty-first-century theoretical physicists or even us lesser mortals, Deutsch’s reference to the guided missile does introduce an aspect of the classic definition
that is as critical today as in Aristotle’s time (and long, long before): people everywhere and always have shared a deep fascination with two very different sorts of animate beings: animals and machines. Bears, bows and arrows, and, later, guided missiles occupy prominent places in that complex of cultural production and consciousness we call “humanity” because they embody antithetical processes of being that continue to shape our destiny. In and of itself, “animation” is not as fundamental a property as what underlies it: the profoundly interesting feature of animate things is that they don’t just move around willy-nilly; rather, they are creative, generative, they make things happen, change the look and outlook, the feel of life. Animate things come into the world, perform purposeful actions that would otherwise be inconceivable, then age and die. The creativity or generativity of animals, humans, and machines is not, however, of a piece. Bears beget bears through a reproductive process we typically gloss as “natural”; bows and arrows beget bows and arrows through an intermediary “cultural” process; and humans bring other humans into existence through a process … well, just what are we to call that process? Natural? Not really. Cultural? Certainly not entirely, or, bowing to the biotechnological future, not yet. How is the phenomenon of human reproduction tied to the distinct generative processes of animals and machines? In an attempt to answer this question, and even to establish that it is a bona fide question, this essay develops an argument in four parts. First, the basic premise here is that human reproduction, far from being a natural phenomenon, that is, simply the sort of thing that bears do, constitutes an elemental dilemma for an emergent and continually evolving human consciousness. Although we might like to concentrate on just doing it the old-fashioned way, reproduction is in fact an enormous conceptual problem whose attempted resolution has helped shape humanity, has made us what we are (or are not). Second, we can recognize that problem and its attempted resolution in a wide variety of cultural productions and practices: origin myths, obstetrical rites, and, of specific interest here, movies such as the Aliens quartet. However frivolous sci-fi thrillers may at first appear, I would urge that some of them are among our most important vehicles of mythic thought. Third, a close examination of the symbolic processes in Aliens illuminates the wider issue of the general nature of human symbolization, of how we make sense or fail to make sense of the world. And fourth and finally, I would make the presumptuous claim that a cultural analysis of Aliens, far from being a mere intellectual exercise, is a key to one of the most vexing social issues confronting us today: the increasingly divisive and violent issue of abortion.


A Plea: Prelude to Analysis

Here I would make a special plea, one that traditional ethnographers are rarely in a position to propose (although that is changing with the growing practice of "studying at home"). E. E. Evans-Pritchard, Godfrey Lienhardt, and Raymond Firth could hardly encourage their readers or respond to their critics with the admonition "If you don't accept my account of Nuer, Dinka, or Tikopia society, then go see for yourselves!" The classical "peoples" of traditional ethnography were classically remote, all but unreachable by anyone but the most dedicated researcher (who besides the problem of travel would need to attend to the incidental matters of more or less learning the language and spending a year or more in a remote, inhospitable locale). Not the sort of "fact check" to which we (post)moderns have become accustomed. But in the present case I can indeed make such a plea and would ask in particular that, before you read another word here, you see the movies! Since the Aliens quartet has long since left the silver screen for video archives, you won't even have to pony up the outrageous admission fees the purveyors of the American Dreamtime charge for their little hits of movie-myth. Just go to any movie-download site and have them beamed right into your media room at home. Supply your own Orville Redenbacher and two-liter Coke and you are set! Besides, the movies are all in English, so, with that download or DVD, popcorn, and Coke, you have all the equipment and credentials necessary to start your own little cottage industry of cultural analysis.

But … But in my experience, it is more difficult to convince anthropological colleagues to undertake such a research project than to enlist them in a follow-up trip to the hallowed stomping grounds of Evans-Pritchard et al. The "different strokes for different folks" ethic that pervades the community of oh-so-politically-correct cultural anthropologists in the United States stops short when those folks are people they bump into on the streets: everyday, ordinary Americans who cut a wide swath past the lecture halls and seminar rooms of universities as they beat a path to movie theaters showing the latest supergrosser. If you've ever tried to describe a scene from a movie you liked to someone who not only hadn't seen that movie, but was unfamiliar with the actors, director, and even similar movies, you know something of the frustration one experiences in presenting an in-depth analysis of such scenes to a readership (most definitely not a viewership) that is mostly ignorant of and adamantly unreceptive to your material. I know from long and frustrating experience that proposing to do a serious treatment of the unserious fare of popular movies almost invariably draws blank, or hostile, looks from those I propose it to. Hardly any serious types have seen the movies—or will admit to having seen them. American "intellectuals" (using the term with Tom Wolfe's derogatory intent) can sit for
hours parsing Foucault or stirring polysyllables into a conversational mush (“globalization,” “commodity capitalism,” “technoscience”) but become instantly uncomfortable if someone suggests they sit through a ninety-minute screening of a popular movie, watch it intently, and then discuss it at length. Although prepared to go on and on about the “cultural significance of the media,” they rarely engage themselves in the visceral, blow-by-blow scenes of our movies. Hence, their interpretations are arid scholastic exercises, not the substantive accounts of the ethnographer working in the “field” of the neighborhood theater. My plea here is particularly important because, even with the best intentions, the anthropologist/ethnographer is almost always a dismal failure at narrative (and, regrettably, I am no exception). We are not bad at saying, but we are really awful at telling. Our keen interest in the cultural symbolism of narrative/myth is rarely communicated with any vitality or literary skill; we are, again, consummate bores. Think about this question: Read any good anthropology books lately? Or even any bad anthropology books? The educated American is invariably stuck for an answer. She can mention popular works in the most arcane disciplines (cosmology, elementary particle physics, mathematics, evolutionary biology, linguistics), but draws a blank when it comes to anthropology, that “science of humanity,” which, one would think, should have a tremendous appeal. But no. The blame for this most peculiar situation rests squarely with the cultural anthropologist. For reasons that need not be gone into here, we have abandoned accessible popular-science writing and the educated public in favor of internecine wranglings and postmolderings over our own subjectivities, poor old Napoleon Chagnon’s fieldwork, the purview of anthropology—anything but a gripping narrative account of people confronting their lives in basic ways. Yet just such narrative accounts are to be found in origin myths, whose principal themes, despite tremendous variation in content, are the same everywhere and always: the boundary between what is human and what is not; and the transformations of identities (animal-human-deity-machine) that have generated those boundaries. Our sci-fi movie-myths, including most especially the Aliens quartet, are origin myths of a peculiar sort. While traditional origin myths focus on how humanity has become separate from an animal world, Aliens explores the continuing process whereby We become Something Else, a human world evolving into an other-than-human condition. So, in the likely event that you have not acquiesced to my plea and watched the four movies, here is a rough, non-hyperlinked synopsis of that origin myth, as collected by an ethnographer in the Dreamtime cathedrals of movieland. The corpus, presented here in the traditional format for narrative analysis, consists of four Aliens movies: Alien (1979), Aliens (1986), Alien 3 (1992), and Alien Resurrection (1997).


The Story of Ellen Ripley (Aliens 1–4)

The people of the future have mostly abandoned Earth, their lives spent on journeys through deep space to hostile, uninhabitable planets where they extract resources or on enormous space stations, metropolises really, where their government and industry are based. Early life on Earth, our life, has become as unimaginable for them as the lives of the first humans are unimaginable for us. Nowhere in the far-flung worlds of Aliens are there verdant landscapes and rivers, nothing green and growing from the soil. Those have receded into that place in memory where our own memories reside of Paleolithic forests with their game and daring hunters, of Paleolithic caves with their overwhelming sights, sounds, and smells of life. The "terrain" of Aliens is the technoworld of ships in deep space, and the "human" occupants of those ships are as different from the occupants of those Paleolithic caves as their starships are from the caves of Ice Age Europe. In Alien Resurrection (A4), Johner (Ron Perlman) sums it up in his disgusted reaction upon learning that their escape vessel is programmed to fly to Earth: "Earth! Oh, man. What a shithole!"

A1 (Alien, 1979)

Ellen Ripley is crew aboard a deep-space freighter, the Nostromo. She and her half-dozen shipmates are encased in cryogenic chambers, sleeping away the months of dead time between ports. Suddenly there is an alert: the ship's computer, "Mother," has received a distress call from another vessel. The crew is automatically awakened and instructed to investigate. The vessel has crashed on an uninhabited, hostile planet; three crew members of the Nostromo are sent to investigate the craft and immediately find it to be extraterrestrial—the first such civilization ever encountered. One of the scouting team discovers a peculiar scene in the bowels of the immense, deserted ship: a floor is littered with what turns out to be an egg clutch of the alien monsters that killed the ship's original occupants. The eggs contrast dramatically with the shipboard environments: unlike their high-tech, sterile surroundings, the eggs teem with a sinister organic vitality. Then a pod opens and a horrible, octopoid life-form propels itself out of the pod, smashes a hapless crewman's faceplate, and sends its tendrils into him. Taken back to the ship, the victim is in a coma, then mysteriously revives. But shortly afterward, the first terrifying episode of the alien's reproductive process is revealed: the victim's stomach swells and moves under his skin; he screams in agony; an alien head bursts out of his body, its multiple sets of teeth gnashing at the horror-stricken onlookers. The alien escapes and proceeds to kill the other crew members one by one, until only Ripley and the ship's cat are left. It turns out that the Company owning
the Nostromo, with the aid of the ship’s android, has conspired to infect the crew so that it could obtain a specimen for its weapons program. Ripley and the cat manage to escape in the ship’s “lifeboat,” only to discover the alien has slipped aboard. In a final terrifying battle, Ripley manages to eject the monster out the air lock, where it falls away, shrieking in the blackness of space. Adrift in a trackless interstellar void, Ripley and the cat seal themselves in a cryogenic chamber, where they will sleep until rescue hopefully arrives.

A2 (Aliens, 1986)

Ripley's lifeboat is not detected until it has been adrift for fifty-seven years; miraculously, she and the cat survive their cryogenic slumber. Hospitalized aboard an enormous city-in-space, Ripley begins to have a recurring dream: she watches as her own stomach distends, the skin writhes horribly, and, as she screams, "Kill me! Kill me!," a little alien bursts out of her body. Something of a living fossil, Ripley tries to adjust to life in her new surroundings, where her old nemesis, the Company, has effectively become a galactic government. Her new masters propose a mission to her: to accompany a military detachment being assembled to investigate a new outbreak of the aliens on a remote planet. This time the alien egg clutch has claimed hundreds of victims, killing or cocooning all save a little girl, Newt, who becomes Ripley's guide and surrogate daughter. The aliens have become a colony, with dozens of attacking soldiers that feed and protect an enormous, hideous queen and its rapidly growing egg clutch. After the platoon of hardened commandos is destroyed by the aliens, Ripley, Newt, an android, and one surviving commando make a desperate escape aboard another lifeboat. This is possible only after Ripley has had a dramatic standoff with the alien queen, rescuing Newt and incinerating the queen's egg clutch in the process. Again, the lifeboat is not the final escape: the enraged, vengeful queen has made her way aboard. Ripley and the queen engage in an epic "mother-against-mother" battle when the queen goes after Newt ("Get away from her, you bitch!"). Donning a futuristic suit of armor, Ripley manages to dispatch the queen, again through an air lock. Ripley secures Newt in a cryotube and retires into one herself, to sleep again until rescue comes.

A3 (Alien 3, 1992)

Again, Ripley's vessel drifts aimlessly until it encounters a bizarre all-male prison world whose inmates are serving life sentences. Although rescued, Ripley encounters immediate signs of trouble: it appears that her cryotube was breached during flight by an alien, and that one other alien remains hidden. Indeed, Ripley is herself impregnated, and with the embryo of another
queen, which can establish a new nest on the prison planet. The other surviving alien proceeds to kill and kill again, until Ripley finally stops it while sacrificing her own life. The final scene shows Ripley, knowing that her “pregnancy” will soon come to term, diving into a vat of molten metal, just as her abdomen bursts and the infant queen shrieks from the bloody opening.

A4 (Alien Resurrection, 1997)

Alien Resurrection takes Ripley's adventures to a new level that is conceptually as well as emotionally disturbing. The bizarre "resurrection" involves both the alien queen and Ripley. Although both had perished in the vat of molten metal, the fiendish scientists of the Company-government obtain specimens of Ripley's blood shed during battles on the prison planet—blood that contains DNA infected by the alien queen growing inside her. At a secret laboratory in space, the scientists proceed to clone and incubate hybrid variants of the Ripley-alien DNA, producing a monstrous set of offspring. One of these is a new alien queen, capable of starting a new colony. Most are grotesque amalgams of human-alien physiognomies: aborted monsters. One product of the laboratory, however, is Clone #8, a being that looks exactly like Ellen Ripley, but with some deeply troubling differences. Clone #8, "Ripley," has superhuman strength, is impervious to pain, and possesses ambivalent feelings toward the aliens. Though she has a certain empathy or sixth sense for the aliens and a corresponding contempt for the weak and corrupt humans around her, she nevertheless allies herself with a small band of desperadoes—space pirates, really—who find themselves aboard the research station when the newly grown aliens break out of their enclosures. In one of the most riveting scenes in American film, Ripley and her band discover the cloning lab where her aborted "siblings" are kept in large vats and where a living specimen, horribly disfigured but still resembling Ripley, begs to be killed: Ripley incinerates her/it with a flame thrower and then goes on a rampage, destroying the entire lab with its hideous creations. Between pitched battles, Ripley forges a bond with Annalee Call (played by Winona Ryder), ostensibly one of the pirates but in fact a super-sophisticated android spy who has penetrated the research station in order to destroy the entire experiment. "I should have known you're not human," Ripley exclaims. "You're too compassionate to be human." As they battle to reach the escape vessel, the band is reduced to three members, plus Ripley: Johner, a lumbering, slow-witted sadist ("Mostly I just hurt people"); Vriess (Dominique Pinon), a paralytic dwarf who gets around in a motorized wheelchair; and the android-woman, Call. Just before making their escape, Ripley briefly yields to the ancestral urge to return to the alien queen's nest. In a disturbing and macabre scene, she appears to renounce her human half, wrapping herself in the slithering coils of the queen's tentacles
while witnessing the latest horror: the alien queen herself has mutated, becoming human to the extent of giving live birth. As Ripley looks on, a full-grown, hideous "child" is delivered that has some human features and the rapacious cruelty of the aliens. The stunning realization is that, since Ripley and the queen are versions of the same organism, this new horror is, in effect, also Ripley's offspring. A cocooned scientist in the nest, completely insane, begins to babble over the "beautiful baby boy"—until the creature bites off half his skull. Ripley is repulsed, destroys the queen and her nest, and flees to rejoin the human survivors in the escape vessel, pursued by the new hybrid-alien monster. She reaches the ship and the two pilots, Johner and Vriess, blast off, heading on the programmed course for the ancient Earth. But, of course, the hybrid monster has managed to slip aboard in the final seconds. Ripley and Call find themselves in the cargo section of the ship, locked in a life-and-death struggle with the monster. Ripley gets the upper hand and the monster is expelled—in gory chunks—from the ship via a ruptured porthole. Pathetically, it shrieks for its "mother" to save it. As the ship hurtles through the upper atmosphere of Earth and clouds streak across the porthole, Ripley and Call find themselves triumphant and alone together. "What will we do now?" asks Call. "I don't know," Ripley replies. "I'm a stranger here myself."

"I'm a stranger here myself." As the credits spool across the screen and the audience, perhaps including a cultural anthropologist or two, catches its breath, the realization strikes home that we, like Ripley and Call, are ourselves strangers transported to the surface of an increasingly uninhabitable and incomprehensible Earth. An origin myth, whether American or of the more conventional painted-savage variety, leaves the listener/viewer with a new and starkly drawn image of humanity. At the end of Alien Resurrection the social world is reduced to an ark containing four passengers: two inadequate males (a muscle-bound simpleton and a paralytic dwarf), who pilot the ark; and two superior and supremely competent pseudo-women (an alien-human hybrid and an android), who are passengers. Not a bad vignette of contemporary life, and one that poses the basic questions one asks at the end of every origin myth: Where does humanity go from here? What will humanity be after undergoing such a fundamental transformation? How will the tiny group of survivors reproduce? Who will marry and bear children with whom?

Given that this particular origin myth is a movie-myth that already stretches over four installments, we might reasonably wonder what an Alien 5 would involve. What sort of social world could Ripley, Call, Johner, and Vriess establish? Posing this question immediately brings us to the critical themes of this essay. For it is clear that the Aliens saga leaves us in the midst of a conceptual wreckage: we are forced to examine and deal with the ruins of our elemental beliefs concerning what it is to be human and
how humans are related through the process of biological reproduction. While moralists of Dan Quayle’s and Tipper Gore’s ilk might decry the xenophobic violence with which Ripley and her band zap the slithering little geeks, the ethnographer of twenty-first-century America immediately perceives that much more is involved. The postapocalyptic world of Ripley and her three companions, if it is to persist into Alien 5 and beyond, will not be about zapping geeks; it will be about perpetuating a group whose descendants have stopped being human. Call, the android-woman, will be able to reproduce only by finding the technological means to build other androids, or—an intriguing alternative— to install biotech components into others’ physical offspring. Ripley’s reproductive potential is a question mark, and a terrifying one. As an alien-human hybrid, a clone produced in a lab, she has more in common with Call than with flesh-and-blood females. If she can conceive and give birth, will her offspring be more or less like the alien whose DNA she carries? And given the fact that Ripley’s recurring nightmare—the distended abdomen, the thing moving beneath her skin, the agonizing, bloody eruption of the alien—is disturbingly like human birth, her willingness to bear a child is questionable, at best. Then there is the matter of the father. Vriess, paralyzed and confined to a wheelchair, may well be infertile, which would leave the macho Johner as the sole candidate to play Adam in this latter-day Garden of Eden. As movie-myth, Aliens directly engages the underpinnings of major social issues that our moralists and pundits largely ignore. Our moralists, like those Nietzsche excoriated in Twilight of the Idols, mistake effect for cause and regard as natural what is wholly arbitrary. For them, the most natural thing in the world is bearing children in the snug confines of the nuclear family. Mom and Dad and Buddy and Sis (aka June, Ward, The Beaver, and Wally) are the American way, the human condition. Or, as—who?—Patti Page used to sing, “Most people get married. That’s what they do-oooh-oooh.” Well, not Ripley and her crew. And not, with any equanimity, us moderns, for whom marriage is a game of Russian roulette and children are objects of a volatile mix of adult love, resentment, and custody battles. The supposedly “natural” process of human reproduction is a tremendous puzzle, contested and treacherous terrain where June and Ward can find no footing. Today’s future parents, conflicted in their innermost being by issues of abortion, sexuality, gender identity, parenthood, and treatment of children (their society is rife with child abuse, kiddy porn, school violence) have more in common with Ripley, Call, Johner, and Vriess than with the Cleaver clan. Appeals to biology and to a morality rooted in biological facts are useless here, and we should recognize them as what they are: ideological acts meant to prop up an arbitrary set of values. We are in the province of culture, not of an impossibly objective nature, and culture, as I have suggested, is about argument, not affirmation.


Human reproduction constitutes an elemental dilemma for an emergent and continually evolving human consciousness. Aliens serves up that dilemma in the most graphic terms, enabling our trivia-ridden minds to ponder it while we munch popcorn, slurp Cokes, and knead each other’s flesh in the darkness of the theatrical alchuringa rite. The unifying theme of Aliens, from first to last, is the stark contrast between an asexual, high-tech human world and the horrible, mindless swarming and spawning of the alien nest. Ripley and her associates live regimented lives aboard a sterile ship whose mechanical contours replace the landscapes, trees, and rivers of our daily existence. Nor—an extreme measure for Hollywood—is there any sexual spark among the crew: although both men and women are present, they perform their assigned tasks and grouch as they go about them, reluctant cogs in the wheel of high-tech industry. The aliens’ nest and egg clutch, however, teem with organic vitality; they ooze with life. Their eggs, horrid, slimy things, open like the eviscerated gut of an animal to spew out their gnashing, slithering contents. Ripley’s world is populated by human drones who are kept in their harnesses by a cold and lifeless Company-government. The aliens are the chaotic antithesis of that world. Killing, eating, and breeding are all that they do, and they do it in public, in front of God and everyone, not behind the closed doors of our slaughterhouses and hospital delivery rooms. We would not attend these movies nor attend to their mythic elements were there a clear choice between the human technoworld and the aliens’ swarming organicism. But there is not, and that is the entire reason for the prominence of this opposition in our movies and in this essay. Human reproduction is an elemental dilemma because it involves irreconcilable aspects of both the technological “cultural” world and the organic “natural” world. We are not one thing, fundamentally separate from Call the android-female and her/its antithesis, Johner the dull-witted hunk. Rather, our being embraces both Call and Johner, and their marriage is a turbulent, deeply unhappy affair. That affair’s consummation is made possible only through the agency of Ripley. Ripley, truly a believe-it-or-not figure, embraces and somehow melds the disparate identities of the human-machine and the human-animal. Over the course of the four movies she undergoes a powerful transformation, from just one of the worker-drone crew in A1 to a machine-gun-toting action heroine comparable to Arnie and Sly in A2 and A3 and on to a cloned human-alien hybrid in A4. In A4 Ripley is r-e-a-l-l-y scary, in something of the way Hannibal Lecter is scary: a being who has strayed beyond the pale of humanity. But the terror Ripley instills runs deeper than Hannibal’s, who, in Hannibal (the sequel to Silence of the Lambs), turns out to be quite an amusing old duffer. While few of us wrestle with cannibalistic urges, half of us do wrestle
with the attractions and pitfalls of motherhood—and the other half at least have to confront the prospect and fact of parenthood. In the throes of that struggle we necessarily reach out for anything we can grab on to and hold on—a lifeline, a tree branch, a sight or even a sign of solid land. Ripley is such a sign, not a comforting one, to be sure, but in her dual human-alien nature she represents, and thus speaks to, deeply conflicted American women today in a way that June Cleaver cannot begin to do (if she ever did). Becoming a mother today, yielding to the urge to return to that egg clutch with its slitherings, its oozing gore, is in no way an easy or “natural” decision. Faced with that choice, an enforced choice in the event of an unplanned pregnancy, every woman has to decide whether to yield to her maternal stirrings or, like Ripley, reach for the flame thrower. Which brings us back to the aliens themselves. If Ripley is a mythic figure representing, in particular, American women today and, in general, the human species, then what exactly are the aliens? Here the movie-myth Aliens gets even scarier and grimmer. The general run of sci-fi movies, like spaghetti westerns and John Wayne war movies, presents no problems of interpretation. You can always tell the good guys from the bad guys. In Earth versus the Flying Saucers, Independence Day, and the rest the aliens are simply marauders, an invasion force bent on wiping out humanity. It’s either us or them. But the aliens Ripley confronts are different. To be sure, there are enemy soldiers, but there is also the queen-mother and her egg clutch, whose grotesque reproductive process (cocooning humans for newly hatched aliens to infest) is the dramatic focus of action. In a sense, we become them and they become us. The final scenes of Alien Resurrection underscore this ominous transformation, with the mutated alien queen giving live birth to a hideous alien-human hybrid (at once Ripley’s child and alter ego). Formulaic space operas, always about zapping geeks, never get so domestic; we never witness the mother alien nurturing her little aliens. But in Aliens we do, and that touches a nerve. Which nerve? Who or what are these peculiar aliens? Well, consider. These aliens infest, take over, then burst out of our bodies in an eruption of gore. They eat us out of house and home; in fact, they eat us. They run around screaming and breaking things. The thankless little monsters completely trash the place. Why—oh, yes!—they’re our kids! … Or, not exactly our kids, not movie images of the little upstarts actually underfoot. No, the grotesque, reptilian, gore-spattered things on the screen are the phantoms, the nightmare shades of all those aborted fetuses, millions and millions of them, dispatched in the sanitized clinics of America, children that will never be, monsters returned to haunt us. Remember, culture is not a rational adaptation to the environment; it is, rather, a desperate thrashing about, not in aliens’ tentacles, but in the serpentine twists and turns of our own minds. If we expect consistency, if we
believe that things can be figured out so they make sense, we already exercise a tyranny over our own thought and life and, all too often, the lives of others. The only truly natural state of the human psyche is ambivalence; faced with an impossible choice, we want to have things both ways. The creatures that rampage across the screen in Aliens are, in the memorable phrase of another sort-of-brainy sci-fi movie (Forbidden Planet), “creatures from the id”—our id, of course. Aliens confronts us with horrifying images of the swelling, unstoppable tide of aborted human lives whose existence we do everything in our power to deny in everyday life. The movies’ graphic honesty casts a harsh light on both sides of the abortion debate in American society, revealing their distortions in the name of truth, justice, God. Just a movie? Just a myth? Yes, if we begin to approach myth in the manner proposed here: as narratives that engage the fundamentals of the human condition. Abortion is perhaps the most divisive issue in contemporary American society, with all indications being that both its rhetoric and its violence will intensify over the coming years. As interpreted here, the Aliens movie-myth reveals that the conflict is not amenable to any conventional solution: the forces of light or darkness will neither triumph nor agree to compromise. Rather than Americans and American culture figuring out what to do about the abortion issue, in all likelihood the intractable nature of the problem will prove a key element in transforming our most basic cultural values and ideas concerning human reproduction, marriage, and the family in the rapidly shifting contexts of medical science and the emerging phenomenon of biotechnology. The only realistic solution is for us to stop being human. Again, our moralists and social critics do not measure up well against the experience of watching and thinking through Aliens. Given the critical nature of the abortion issue, it is disappointing that social commentators of both the left and the right have done little more than bundle up the platitudes of “freedom to choose” and “right to life” in more or less strident rhetoric. The most radical and far-reaching treatment of human reproduction in a looming world of biotechnology has come from a perhaps unexpected source: not from the pulpits or the intellectual’s pen, but from the movie theater. Ripley as symbolic vehicle accomplishes what the usual arguments cannot because her very identity is strung out—smeared like a quantum particle—between impossible polarities of human and alien, of Us and Them. Partisan supporters and opponents in the abortion debate miss—intentionally or not—the contradictions inherent in their positions, and thus, as Nietzsche observed for social issues of his own time, parade nonsense as social justice or God’s truth. American women and American society have much more in common with Ripley than with the ideological caricatures one encounters in the literature.


Do the factions really want to frame consistent arguments, arguments that would impose a rationality missing from their real-life pronouncements? Then let the opponents of abortion rights truly embrace the principle that all human life is sacred, no matter how unformed or malformed. As well as marching against abortion clinics, let them also protest outside Texas prisons where feeble-minded and impoverished black men are routinely put to death. And the supporters of abortion rights? If human life is to depend on human agency or intervention, how can one abort a fetus and still maintain that the death penalty is evil? Even more to the point, why should a woman's freedom to choose, to control her body herself, stop at the second or third trimester? The infant, from birth through at least the age of two, is in no sense a human being with human thoughts and muscular coordination—its brain is simply not yet developed. Were we to be honest with ourselves, we would classify it as a different species. And the mother is as constrained with the infant as with the fetus, often more so. Why not, then, adopt and extend a practice that has been with our species since its beginning, long before clinical abortion: infanticide? Her body, her life, herself.

Neither side faces up to these tough questions. Instead, advocates, some of whom should know much better, resort to the threadbare platitudes we encounter everywhere. These, however, only serve to mask our deep and growing ambivalence toward the phenomenon of human reproduction: its messiness, its pain, its degradation of the individual woman, its uncertainty with respect to the child brought forth in a hostile world. Feminist critics of the right-to-lifers have enjoyed a much better press than their opponents, as well they should. After all, they know how the game of social protest is played, how, through deploying enfolding webs of discourse, one pretends to reason, to have logic, rather than God, on one's side. Few voices on the left have been raised against those critics, but Camille Paglia, ever the maverick, has done so. In Vamps & Tramps (1994) and elsewhere, she makes the simple, unassailable point, echoed above, that abortion is a form of murder. Upon inspection, Aliens makes the same point, with even greater dramatic impact. When one comes to the end of an impassioned defense of abortion rights, when the carefully wrought paragraphs stop, one knows, somewhere in one's psyche, that those millions of mutilated corpses are still out there, waiting to emerge in nightmares or, far worse, screaming for their revenge in Digital Surround Sound.

Back to the Future

A fully developed cultural analysis of Aliens would take us further into the rich ideational landscape of the four movies. The above notes, though, hopefully
suffice to establish the salience of one of several themes: the deep ambivalence we feel toward the phenomenon of human reproduction. That ambivalence makes itself felt in our daily lives, and it appears everywhere in both popular and less-than-popular media. Its contemporaneity is undeniable. For that very reason, however, I want to caution against regarding my discussion here as yet another breathless cry that “the Computers and Biotechnology are coming, and are they ever going to change things!” They are, indeed, coming; in fact, their vanguard is here, and they are already producing changes in American society and in our basic conceptions of humanity that will only grow in importance. But the situation Aliens depicts so dramatically is anything but new: human reproduction is an elemental dilemma that has fashioned and defined our species. While social commentators rush around, not unlike a pack of hunting dogs, their noses in the air, sniffing for a fresh scent of prey, for the latest thing, the cultural anthropologist takes a more dispassionate, panoramic view (what Lévi-Strauss has called le regard éloigné). Ripley is a stand-in, not just for the modern American woman, but for the human species, poised on one of several cusps of fate it has experienced during its brief and exceedingly strange history. Adopting that view means, in effect, going back to the biotechnological future, back to the beginnings of humanity. Why is human reproduction an elemental dilemma? The main thread of argument running through this essay is that most if not all the ideas and values we regard as essential to our identity are inherently contradictory, deeply flawed constructs we nevertheless defend with our lives. This general argument applies with particular force to core ideas and values regarding human reproduction. My claim runs counter both to common sense and to the great majority of social theories, which concur in regarding human reproduction, particularly its most dramatic and visible aspects—childbirth and the mother’s care of a helpless infant—as a uniform set of behaviors and attitudes, the most natural thing in the world. It is a curious fact that, despite their crucial differences, anthropological theories of kinship regard motherhood and its complement of behaviors as an invariant, “natural” property of “cultural” systems elaborated on the maternal complex. Yet as I have suggested elsewhere (Drummond 1978, 1996), far from being the most natural thing in the world, motherhood is in important respects the most unnatural. The unnaturalness of human reproduction derives from its intermediate or peripheral status with respect to the reproduction of animals and of machines, processes that we typically presume to be unambiguous. As Deutsch notes, from the time of Aristotle until the establishment of modern biology, all living beings were thought to possess some special animating force that distinguished them categorically from nonliving, inanimate things. Aristotle’s classic definition fails on two counts. First, it fails to acknowledge that the intriguing thing about living beings is not that they move around haphazardly,
but that they are intentional and generative agents. They act with obvious purposefulness, including the purpose of mating and breeding—their generativity. Second, Aristotle's definition ignores the animate properties of an immensely important category: tools or artifacts. Long before it possessed a human hand, our hominin ancestor grasped and shaped objects around it, thereby creating an entirely new order of being. Pebble choppers and projectile points acquired and transformed the animate property of their maker. We were cyborgs before we were human. That transformative power is at the base of our species-old involvement with tools, whether an Oldowan chopper, a NASCAR vehicle, or a spaceship. Animals and machines are foundational elements of that semiotic complex we call "humanity," dialectical processes that continue to shape our identity and destiny.

The basic problem is that the female human body is terribly ill suited to give birth, while the fetus is terribly unprepared to be born. Hominin evolution over the past 4.5 million years has been a quixotic struggle between that body and its fetus. Getting the ape to stand, walk upright, and use its newly freed hands in increasingly complex ways necessitated a reduction in the size of the pelvic girdle, thereby constricting the birth canal. Yet at the same time pressures doubtlessly associated with increased sociality, tool manufacture, and (proto)language placed considerable value on enhanced cognitive skills, that is, a larger brain. The fetus's large brain, of course, had to be accommodated by a larger skull, which in turn had to negotiate the constricted birth canal. The sorry compromise between these antithetical processes has resulted in a delivery that is agonizing and mortally dangerous to the mother and an infant born at a shockingly premature stage of development, before its swelling head can make the passage through the birth canal more perilous still.

As I recall from distant memories of Anthro 101, this "obstetrical dilemma" was duly noted in an early chapter dealing with human evolution, then forgotten in later chapters when the introductory text assayed the strictly "symbolic" topics of myth and religion. This is regrettable, for the development of modern Homo sapiens has not been as orderly and compartmentalized a process as the tidy chapter divisions of anthropology texts would suggest. Reproduction, environmental adaptation, kinship, and material culture are not simply "physical" aspects of life operating independently of other, nicely segregated "symbolic" aspects such as origin myths and initiation rituals. The long and complex process of becoming human, and now, quite likely, something other than human, has involved the interacting and interthinking of all those elements, actions that transgress the ostensible boundary between "material" and "symbolic," "physical" and "cultural." This battle or tension, which still very much engages cultural anthropologists, has been fought and won long since in the world of art (won decisively and, most important, with a touch of humor):

The story is told of Picasso that a stranger in a railway carriage accosted him with the challenge, "Why don't you paint things as they really are?" Picasso demurred, saying that he did not quite understand what the gentleman meant, and the stranger then produced from his wallet a photograph of his wife. "I mean," he said, "like that. That's how she is." Picasso coughed hesitantly and said, "She is rather small, isn't she? And somewhat flat?" (Bateson and Bateson 1987: 161)

The remarkable fact is that long before people were people—that is, long before australopithecines and then one of the early Homo lines evolved into Homo sapiens sapiens—the obstetrical dilemma was already making itself felt. Cultural processes associated with bipedalism—tool use, protolanguage, social organization—steadily increased the tension between a reshaped female body and the undeveloped infant at birth (often likened to a premature ape). The human body as we know it today is already a product of a biotechnology that extends back several million years: the acquisition of culture transformed our physical selves as surely as today's much-discussed techniques of genetic manipulation. The great mistake we make is insisting on seeing distinct processes at work here: "natural" physical behaviors on one hand, "cultural" technological inventions on the other. One rather sad, rather laughable consequence of this mistaken thinking is the popular movement to promote "natural childbirth"—as though those millions of years of culturally driven evolution could be willed away by adopting the politically correct attitude and doing some breathing exercises. Such a movement may provide the palliative effect of shielding its adherents from the reality of their situation, which is the reality of the human condition: like Ripley, we are all freaks, impossible hybrids of culture and biology, strangers on an alien planet Earth.

The prehistory of our species is not, however, a simple unilineal development from apes and ape-men to humans—which would invite an interpretation of rational adaptation, of gradually getting better at the culture business. Rather, prehistory is filled with abrupt starts and stops, with multiple lines of hominins that simply disappeared, not missing links but dead ends. You see, a funny thing happened on the way to humanity, a thing anthropologists have only fairly recently figured out and that they have been characteristically slow and inept in communicating to the literate public: we modern humans haven't been around very long at all, a tiny fraction of the millions of years of hominin evolution, and we got the way we are through genuinely mysterious circumstances.

Over the past several decades the American public has become accustomed to news of paleontological discoveries that push the origins of humanity further and further back into the remote past. Australopithecines, predecessors of the human lineage, date to at least 4.5 million years ago. One of them, the famous Lucy, and her kind lived in East Africa as early as 3.5 million years ago. About two million years ago, the genus Homo itself was
already represented by three African species, one of which (Homo ergaster) is known to paleoanthropologists through a find as significant as, if less publicized than, Lucy: “Turkana Boy,” who lived about 1.55 million years ago and, had he grown to maturity, would have stood six feet tall and weighed 150 pounds. These were staggering figures, incredible spans of time, when they were announced. The drama deepens with the appearance of Homo sapiens, who, for reasons that will become apparent, are rather ambiguously called “early modern humans” and “archaic humans.” H. sapiens appeared in Africa around 150,000 to 200,000 years ago, then spread into the Middle East as early as 100,000 years ago and on into Asia and Europe only in the last 40,000 to 45,000 years (Mellars 2004). Note what an afterthought this turn of events was in the course of hominin evolution, which proceeded on its haphazard course for over 4.5 million years before, in the past 150,000, producing the basic version of our species. The human career has occupied less than onehalf of 1 percent of what we generally, and misleadingly, refer to as “human evolution.” Something was going on all that time, but is it really legitimate to call that process “human evolution”? Perhaps, rather than the capstone, we are more like a hiccup. The tenuousness of our condition becomes even more apparent when we take a closer look at our “early modern human” ancestors. These folks looked just like people you pass on the street every day, just like you and me. Slap an Armani or Versace on them, take them by the salon, and they’d look just right—except for one small problem: these folks weren’t folks at all. Homo sapiens, yes, but not, in our own redundancy, Homo sapiens sapiens. That redundant qualifier makes all the difference. The Armani and Versace notwithstanding, our “sapient” forebears couldn’t carry on a conversation, not because they didn’t speak English or Hindi or whatever, but because they didn’t have a fully developed human language with verb tense markers, pronominal shifters, and grammatical cases. Symbolic representation as a whole was beyond them; art, ritual, and probably music were either absent from their lives or present in the most rudimentary form. Even in the area of technology, their stone-working techniques had been around 100,000 years before their appearance as a species. And though their worked stone was masterful, many groups had not acquired the knack of fitting stone tips on their spears. They still hunted with sharpened sticks. And the bow and arrow, that classic fixture of “primitive” existence? Still tens of thousands of years in the future. Contrary to the popular stereotype, Homo sapiens did not burst on the scene—tall, blond, Daryl Hannah look-alikes (as in Clan of the Cave Bear)— and immediately begin lording it over dark, hunchy grunts like the Neandertals. The great mystery is that early modern humans lived side by side with Neandertal and surviving Homo erectus populations in the Middle East and
Southeast Asia for forty thousand to fifty thousand years (2,000 to 2,500 generations) without either displacing them or interbreeding with them to any appreciable extent. They did not possess a newly mutated gene for smartness and aggression, nor, evidently, had they been zapped by smart rays from the black monolith in 2001: A Space Odyssey. In their stone tool technology and minimal artistic-ritual artifacts they were barely distinguishable from their unevolved neighbors.

Then that funny thing happened that did usher in humanity, all of a sudden and during the final seconds of the hominin evolutionary clock. Archeologists refer to this occurrence as the "Middle-Upper Paleolithic explosion" or, more soberly, the "Middle-Upper Paleolithic transition." Around forty thousand years ago, give or take several thousand, Homo sapiens became a great deal more sapient, acquiring our own redundant doubly sapient species label. Their hunting techniques and tool kits became far more sophisticated and specialized. They began to paint on cave walls, to bury their dead in ceremonial graves, to sew garments, to make ornaments, and to produce sculpture. Their former neighbors ominously disappeared from the scene, with the last Neandertal population surviving in southwest Europe until perhaps 28,000 years ago.

But what exactly happened? What, after perhaps a hundred thousand years of somnolence, awakened early modern humans to the myriad possibilities of a fully human existence? This may well be the most significant question anthropologists can ask, not just for themselves but for all of us; the narrow academic debate opens directly on the deepest issues of the nature and future of humanity.

It is precisely at this juncture, where our turbulent prehistory meets an even more turbulent future, that Ellen Ripley and the movie-myth Aliens make their reappearance. Recall my general argument that culture is not a rational adaptation to the environment but a desperate engagement with contradictory aspects of life that threaten to overwhelm us. And recall my specific argument that a major theme of Aliens is the elemental dilemma posed by human reproduction: a biological process subverted, over millions of years, by protocultural transformations of the human body. I suggest those arguments figure into an answer to the deep question before us. At some time around forty thousand years ago pockets of early modern humans, pockets perhaps separated by thousands of miles and thousands of years, began to conceptualize the enormity of the situation they confronted: chaotically changing seasons, bitterly cold winters, fluctuating herds of game animals, complex intragroup rivalries and alliances, intense competition with other surviving groups, hunts that often killed the hunter as well as the hunted, births that often killed the mother or child or both. They came to realize that they were well and truly up against it, backed into a corner by the very protocultural processes that had brought
them into being. In those desperate straits many, perhaps most, such groups perished: more dead ends to add to the long list of dead ends that is the “success story” of human evolution. For other groups, the mere act of conceptualizing the problems they faced—even though those problems were largely unsolvable—sharpened their collective understanding and will to carry on. That epochs-old train was barreling down the tracks; there was nowhere to run or hide; the only thing to do was face it head on, to fight fire with fire or, in this case, to deploy new cultural weapons against the old protocultural menace. I suggest that those new cultural weapons were of two sorts, that they fitted into two primordial and interrelated symbolic complexes whose vestiges are with us yet: a male-centered set of behaviors and beliefs concerned with hunting and the spiritual nature of animals; and a female-centered set of behaviors and beliefs concerned with pregnancy, birth, and child-rearing.

The Upper Paleolithic Explosion and Its Female Figurines

The question of what went on during the true "dawn of humanity," when a fully human culture made its appearance, is fascinating, deeply important, and, because of the limited evidence available, inevitably speculative. Readers whose familiarity with this topic is even more limited than my own may begin to get a sense of the profundity of the issue by consulting David Lewis-Williams's excellent The Mind in the Cave: Consciousness and the Origins of Art (2002). For my purposes here I propose that the Upper Paleolithic saw the development of dual symbolic complexes, doubtless interrelated in many respects: one a male-centered set of behaviors and beliefs regarding the linked identities of human-animal-spirit, the other a female-centered set of behaviors and beliefs regarding the generativity of women and the practical and spiritual nature of childbirth and the resulting infant. Since the male-centered complex typically has received far more attention in the literature than the female-centered complex, and since the latter meshes exactly with my focus in this essay, I attend to it in some detail while glossing over the other.

Regarding the male-centered complex, it will suffice to note that Upper Paleolithic hunters' relation to their prey was qualitatively different from, say, my cat's relation to a mouse or even chimpanzees' relation to the colobus monkeys or gazelle fawns they hunt and kill. That difference, which persists today among remnant bands of subsistence hunter-gatherers, may be found inscribed in the remarkable paintings of the Chauvet and Lascaux caves of the period: animals were not just so much meat on the hoof, but quasi-human, quasi-spiritual beings that demanded ritual performance and something very
like reverence. I know from my own experience as an ethnographer among Arawak and Carib groups of northeastern South America that when an individual hunter goes out alone in the tropical forest he may encounter an animal that behaves oddly, perhaps moving erratically or simply turning and staring him down. Such an encounter is deeply upsetting to the hunter; the last thing he would do is loose an arrow at it, even though he and his family may be badly in need of a meal. For that animal's behavior indicates that it is in fact a spirit-animal, perhaps a shaman who has "turned" or perhaps even a Master of Animals (the principal spirit of a particular species). In pondering the mind-set of that hunter, alone in the towering forest, you must keep in mind how very, very far he is from the industrial slaughterhouses of Nebraska.

Hunting and the mythic-ritual complex surrounding it were intimately tied to a corresponding mythic-ritual complex centered on pregnancy and childbirth. At the heart of both was a concept of generative vitality or organicism that inspired what was to become a truly human culture. Yet the other face of that vitality was also present, as indissociable from it as the other side of a sheet of paper: both hunting and childbearing were stalked by death, by a sense of the danger and destructiveness surrounding life that, once grasped, fundamentally altered the equation of living.

Female Figurines of the Upper Paleolithic

In the brief and undistinguished history of anthropology, a particularly dismal chapter is the emergent discipline's treatment of Upper Paleolithic sculptures of women that began to be unearthed in the latter half of the nineteenth century. Those sculptures were tiny, usually no more than three or four inches long, and often, but not exclusively, depicted women with large buttocks, pendulous breasts, swollen abdomens, and pronounced genitals. Intriguingly (and tellingly), while Upper Paleolithic art abounds with images of animals and animal-spirits, actual sculptures of humans overwhelmingly represent females. These sculptures proliferated during the Gravettian period, roughly eighteen thousand to thirty thousand years ago, and have been found in sites from southwestern Europe through the Ukraine and into Siberia. As such, they constitute most of the earliest known examples of figurative art. One sculpture, discovered in 2008 at a rich Upper Paleolithic site at Hohle Fels in Germany, dates from thirty-five to forty thousand years ago and may well be the very earliest example of figurative art—an image of Woman that sets the stage for the appearance of humanity as presently constituted (Curry 2012).

On the basis of the first few figurines, plucked from their in situ deposition by men who were little more than grave robbers, the archeologist Edouard Piette (1895, 1907) proposed, at the turn of the century, the first of what was to become a series of wildly speculative "anthropological" theories.
Piette saw such a pronounced contrast between the voluptuous "Venus" figurines and others—including his 1894 discovery of the tiny, finely carved head of a woman—that he declared there had existed two races in Ice Age Europe: one of slender, white Cro-Magnons, the other an obese "Negroid" people. His account of the "steatopygic" Negroid people was influenced by an event that had caused a sensation in London and Paris of the early nineteenth century: the shameful exhibition of a Khoisan (Bushman) "Hottentot" woman, known as Saartjie Baartman, who had been sold as a slave and transported to Europe to be exhibited as a freak-show attraction. Piette and his enthusiastic supporters (his principal work on the subject appeared a bare quarter century before the Nazi Party came to power in Germany) elaborated his thesis, making of the Cro-Magnon an elite race sprung from an Egyptian dynasty and of the Negroid race a brutish, inferior lot.3 The so-called Venus figurines, Piette insisted, were factual, realistic renderings of members of that Negroid race, as substantiated by the tragic figure of Saartjie Baartman (Jennett 2008: 34–37). They compared images such as the two below and found in the similarities undeniable evidence that an actual, genealogical connection existed across two continents and some twenty thousand-odd years.

Just why the first sculptors the world had seen chose to create these tiny images—meant to be worn as pendants, as evidenced by perforations or wear marks in the stone or bone—is a question that does not seem to have occurred to Piette or his followers. Why would those sculptors, most probably members of the superior Cro-Magnon race (surely the "Negroids" did not have the talent), have chosen as their subject grossly proportioned women they held in low esteem? If Piette did not trouble himself with the daunting question of why, his immediate successors were only too eager to propose an answer and, in the process, dragged the fledgling discipline of anthropology deeper into the mire of racism, sexism, and mediocrity.

One should recall Marx's famous observation that a historical event unfolds twice, the first time as tragedy and the second as farce. Piette's racist theory, spun from virtually nothing, was a tragic beginning for a supposedly dispassionate "science of humanity" and helped pave the way for a historical mind-set that plunged Europe into a chamber of horrors. The interpretation of the figurines lurched from tragedy to farce with the appearance of work by G.H. Luquet (1930) and others extending right into the 1980s: these little statuettes, with their exaggerated breasts, buttocks, and genitalia, were obviously sex toys—paleo-porn!—which early man liked to fondle and ogle. It is difficult to overstate the absurdity of this position. Imagine: some of the best minds of science figured that these cave guys sat around the hearth with lots of half-naked women walking around, yet they chose to concentrate on tiny, crudely carved stone pieces in order to get their
rocks off. There was your answer, as obviously true as the existence of the male sex drive. Pathetic.

Again, the paleo-porn interpretation is not an antiquarian curiosity—a fable spun by sexually repressed old white guys coming out of the Victorian era, guys who didn't see many half-naked women walking around (outside of the brothels they frequented). Writing as late as 1949, Karel Absolon, professor of anthropology at a Czech university, informed his readers that, based on his extensive study of the Upper Paleolithic, "sex and hunger were the two motives which influenced the entire mental life of the mammoth hunters and their productive art" (Absolon 1949: 469). Engaging in a little amateur psychology, Professor Absolon claimed that these Stone Age artists used art to project their "sexual libido." In the best emerging tradition in the social sciences, he even coined a meaningless bit of jargon to conceal the superficiality of his analysis: the little statuettes represented a "diluvial plastic pornography" (Absolon 1949: 208). Talcott Parsons would have been proud.4

Interpretations of the figurines continued to lurch from one extreme to the other throughout the latter part of the twentieth century. Since most of the figurines lacked the kind of precise stratigraphic data that is a requirement of archeology today, and since the figurines were so inherently evocative, scholars were free to follow any interpretive whim they chose. The resulting literature is what Richard Feynman, who did not suffer fools gladly and who was ever-caustic about the social sciences, called "cargo cult science" (1974): wishing will make it so.

In a sort of backlash response to the paleo-porn theories, a school of thought developed that emphasized the maternal symbolism of the figurines: Paleolithic women were not mere sexual objects; they were first and foremost mothers. The exaggerated body proportions of the figurines were meant to represent women's fertility and its vital role in preserving the group. Note that this outlook can hardly be called less sexist than the Playboy school of paleo-porn, for here again men are seen to attach value to women on the basis of what they do for men, namely, provide the offspring required by the strong leader of a small community. From Carl Reinach's 1908 speculation that the figurines were part of a fertility cult centered on increasing the number of childbirths right through the 1970s, when Magin Berenguer proclaimed that they represented "man's obsessive need for women who would bear him lots of children" (Berenguer 1973: 51–52), the dominant assumption has been that a group's survival dictates that its women have as many children as possible.

Once again, these professional scholars, including founders of the discipline, got things horribly wrong. Writing in the oppressive confines of agricultural societies, which dictate that men conscript women to have as many children as possible to serve as field hands, Reinach, Berenguer, et al. ascribed their own values to human groups utterly dissimilar from early twentieth-century Westerners.

Figure 2.1. Saartjie Baartman, on display 1810–1815, London, Paris. Library of Congress.

Figure 2.2. Venus of Willendorf, circa 22,000 bc. © User:MatthiasKabel/Wikimedia Commons/CC-BY-SA-3.0.

The ethnographic literature that accumulated throughout that century amply documents that subsistence hunter-gatherers, whose lives much more closely resemble those of Upper Paleolithic Europeans than do those of contemporary European villagers, do not welcome and cannot support large numbers of children. As we will see in a bit, the attention given to pregnancy, as attested in the figurines, most probably had to do with the mother's survival of the ordeal and not with the need to encourage multiple births. As a productive, adult member of a small group whose survival was almost always on the line, she was a far more valuable asset than the helpless and demanding infant she brought into the world. The Neolithic Revolution, which brought with it concentrated agriculture, cities, armies, and tyrants, was still mercifully in the remote future, some ten or twenty thousand years after the little figurines had played their part in the lives of people living in caves and rock shelters, hunting animals and foraging forest products.

The really disturbing thing about these fertility arguments is that their authors arrogantly refused to turn their (admittedly feeble) lens of analysis back on themselves, to examine, in the purportedly scientific manner they adopted toward the figurines, their own lives, their own culture as reflected in the interpretations they readily assigned to works of art tens of thousands of years older than themselves. Of course, postmodernism was still a ways in the future (and these guys probably didn't read much of Nietzsche); these "scholars" blithely assumed that their words referred to things in a real world, rather than to others' words, others' discourses (the paleo-porn crowd, for example). Had they turned that lens back on themselves, they immediately would have had to ask themselves the hard question: Why should we think that these early humans, who had every reason not to burden themselves with the heavy baggage of a surfeit of infants, did everything in their power to encourage fertility in their women?

That hard question can be answered only in part by pointing out that human fertility and large families are highly valued in societies based on agricultural production. There is another, much darker factor at work in scholars' blithe attribution of such values to Upper Paleolithic groups. It is a four-word answer: the Holy Roman Catholic Church. Through the good agency of the church, the efflorescence of classical antiquity was extinguished for a thousand years, a Dark Age in which bigotry and ignorance reigned supreme. A major tenet of that church, which persists unchanged to the present day, was that human sexuality is all about procreation, and the procreators had better keep at it—nose (or some body part) to the grindstone to produce more and more souls that will stand in need of salvation. Procreation being a perfectly natural and divinely ordained process, the last thing the church would countenance was individuals, specifically women, who knew a great deal about female sexuality, pregnancy, and childbirth and therefore proposed to interject themselves into what church leaders made a sacred, inviolate domain.
These women, herbalists and midwives, were denounced as heretics and witches by the Holy Church, which performed its divine duty by torturing them and burning them at the stake. Much of their knowledge, which embodied a worldview in which procreation could be managed, in which it was subject to human agency and not to divine decree, was lost or became suppressed, illicit lore. When the first anthropologists appeared on the European scene after centuries of diligent work by the world's first thought police, they found it easy to adopt the doctrinal view that of course women wanted babies, and the more the better. To their lasting shame, and ours, they did not bother to examine the sociocultural milieu in which they paraded their own bigotry, a bigotry to which they assigned a new name: anthropology.

Had these squawking parrots actually embraced the spirit of free inquiry (rather than the Holy Spirit) they would have noted that the desire to have lots of babies is scarcely a universal attribute of culture, that it is, in fact, a rather bizarre curiosity. It would not even have been necessary to consult the growing ethnographic literature to find references to women in hunting and gathering groups practicing a variety of birth-control methods, of which a major example is the knowledge and widespread use of herbal abortifacients. Man, as the sagacious Berenguer claimed, may have harbored an "obsessive need for women who would bear him lots of children," but the women had other ideas. Extensive use of abortifacients occurred in classical antiquity, prior to its genius being extinguished by the Dark Ages:

Once the plant [silphium] grew near the Greek city-state of Cyrene in North Africa. In fact, silphium made Cyrene famous. Herodotus spoke of the harvesting of the wild plant, and other Greek sources said that attempts to cultivate the plant failed … . One may wonder why a plant would make a city famous. Soranus told us: it was a contraceptive—one of the best in the ancient world. Its popularity, however, drove it to extinction probably soon after Soranus' time. (Riddle 1992: 28)5

Women's power over their own reproductive activity is so much a part of life in lowland Amazonian communities even today that it may well derive from a differentiation stretching back to the Upper Paleolithic between a set of beliefs and behaviors centered on hunting and animal-spirits and a set of beliefs and behaviors centered on human reproduction—my principal argument here. I can attest from my own research in northeastern South America that along a stretch of tropical forest river containing a few Amerindian settlements one finds two or three individuals who identify themselves and are identified by the group as practicing shamans. These individuals are invariably male and engage in a number of activities: besides providing hunting magic in the form of amulets or charms, they claim to cure and inflict illness and to detach themselves from their bodies to become animal-spirits or flying spirit-travelers. My experiences with such individuals
(admittedly a very limited set) have not been particularly inspirational; I found them more interested in rum and young women than in the world of nonalcoholic spirits.6 Still, after a couple of years doing fieldwork there, I have never been able to read Castaneda with quite the same enthusiasm as before. The shamans I came to know were definitely not the suntanned Socrates that Castaneda portrays Don Juan to be.

In those same communities, coexisting with these rather tarnished shamans but keeping a much lower profile, one finds two or three older women who consult and assist other women in reproductive matters: herbalists and midwives who know a lot about the subject and lend their help in the absence of any kind of outside medical assistance. Since most of these Amerindian groups have been extensively missionized and subjugated by colonial administrators with provincial, Victorian sensibilities, these women are understandably reluctant to discuss their craft and lore with a white male ethnographer. Private as they are, they perform a sorely needed public function, which is evident in the coming and going of women, some pregnant, some not, to their dwellings. I was able to ascertain that abortifacients played a large part in their practices and was even able to collect samples (none identified). Prior to their "pacification" and centuries of colonial domination, I believe, such groups relied much more on these experienced women. As in the Upper Paleolithic, precontact Amerindian life was beset by constant and grave problems: intergroup disputes and warfare, frequent migration or transhumance, and widespread hunger and starvation. A young woman with a small child who found herself pregnant had, for her own sake and that of her family, to seek a remedy. A pregnancy that came to term but resulted in a deformed infant or twins meant that the infant (one or both in the case of twins) had to be abandoned in the forest. Experienced women who could anticipate complications in pregnancy were a tremendous asset to the individuals involved and to the group as a whole; they helped to safeguard and perpetuate their people, not by promoting multiple births, but by caring for and protecting the lives of young women whose abilities were badly needed by the community.

Of course, things have changed a great deal, now that we have suppressed those heathen, murderous practices. Writing in Scientific American, Lawrence M. Krauss (2010) relates an incident that demonstrates how our views of pregnancy and abortion have come a long way (down) from the days of Ancient Greece and the Upper Paleolithic. A senior administrator at a prominent Catholic hospital in Phoenix was confronted with a situation in which a twenty-seven-year-old pregnant woman required an abortion to save her life. The administrator was Catholic, as was the patient. A major concern was that the woman already had four children; her death would impose great hardship on them. After conferring with the patient, her doctors, her family, and the hospital's ethics committee, the administrator approved the abortion. At that point
the Bishop of Phoenix, one Thomas Olmsted, excommunicated the administrator. His reasoning: "The mother's life cannot be preferred over the child's." Krauss concludes his report with the observation that, generally speaking, "a man who would callously let a woman die and orphan her children would be called a monster; this should not change just because he is a cleric." May the pious Bishop Olmsted burn in his Christian hell for all eternity. But remember, human culture is a rational adaptation to the environment (surely all the anthro textbooks can't be wrong?).

We must also remind ourselves that in important respects the Upper Paleolithic is not that far removed from situations that individuals in contemporary American society face—individuals not unlike the hapless mother of four condemned by Bishop Olmsted (whom we wish an imminent, slow, and painful death), but who lack even the medical assistance and social support available in the Upper Paleolithic. Centuries of racism, sexism, and rampant greed, coupled with the severe environmental and economic vicissitudes that are part of life on Earth, have produced an untold number of victims, each of whom came face to face with the direst need. For them, the sanctimonious preachings of the church and the general society's embrace of "family values" had ceased to matter: they, like a great and increasing number of us, found themselves at the edge of disaster, staring into the mouth of the Paleolithic cave as into the maw of the abyss, and acted accordingly.

Women nurture. That is the basis of the maternal complex at the foundation of human culture. If it comes to sacrificing an infant, whether in the wilderness outside a Paleolithic cave or inside a Phoenix abortion clinic, the mother and those attending her do not hesitate if it is a matter of saving her life or, especially, the lives of her other children. That precept applies when a woman has lost a child through stillbirth: her unique ability to nurture may be called on outside the context of immediate family, when the survival of the group is at stake. That is the situation described by Steinbeck in what is arguably the most powerful passage in The Grapes of Wrath, which comes at the very end of the book. Rose of Sharon (Rosasharn), the principal character in the passage, gave birth to a stillborn infant two days before the following episode: The Joads, Ma and Pa and their children Rose of Sharon, Winfield, and Ruthie, along with Uncle John, have abandoned their broken-down truck and are slogging, desperately tired and hungry, along a muddy country road. Suddenly, a thunderstorm overtakes them. They see a barn in a field across the fence bordering the road and make a run for it. Rose of Sharon collapses and Ma orders Pa to carry her. On reaching the barn they are surprised to find two men inside, a younger one frightened by the newcomers, the older lying in the hay unable to move. They are father and son. The son begs Ma's help: his father is dying, having eaten nothing in days. Ma accepts a blanket from
the boy to bundle up her sick daughter. The father whispers a plea to Ma; she replies that his son will be all right. The son makes another desperate plea to Ma for help … .

… "Hush," said Ma. She looked at Pa and Uncle John standing helplessly gazing at the sick man. She looked at Rose of Sharon huddled in the comfort [blanket]. Ma's eyes passed Rose of Sharon's eyes, and then came back to them. And the two women looked deep into each other. The girl's breath came short and gasping.
She said "Yes."
Ma smiled. "I knowed you would. I knowed!" She looked down at her hands, tight-locked in her lap.
Rose of Sharon whispered, "Will—will you all—go out?" The rain whisked lightly on the roof.
Ma leaned forward and with her palm she brushed the tousled hair back from her daughter's forehead, and she kissed her on the forehead. Ma got up quickly. "Come on, you fellas," she called. "You come out in the tool shed." …
For a minute Rose of Sharon sat still in the whispering barn. Then she hoisted her tired body up and drew the comfort about her. She moved slowly to the corner and stood looking down at the wasted face, into the wide, frightened eyes. Then slowly she lay down beside him. He shook his head slowly from side to side. Rose of Sharon loosened one side of the blanket and bared her breast. "You got to," she said. She squirmed closer and pulled his head close. "There!" she said. "There." Her hand moved behind his head and supported it. Her fingers moved gently in his hair. She looked up and across the barn, and her lips came together and smiled mysteriously. (Steinbeck 1939: 312–313)

What would the odious Bishop Olmsted have to say here? He might—but would never—consider that the bigotry of his church's teachings is superseded by the vengeance his God was preparing to take against the Oklahoma bankers, California landowners, and petty bureaucrats responsible for the Joads' suffering. They will burn in hell along with the good Bishop. It is the grand theme of Steinbeck's great book: "In the souls of the people the grapes of wrath are filling and growing heavy, growing heavy for the vintage" (1939: 238). Cobbled together from the "Battle Hymn of the Republic" and the Book of Revelation, Steinbeck here calls out for social justice backed by divine retribution. It is a powerful indictment, not only of California in the 1930s but of the oppression and deceit built into human society generally.

To carry Steinbeck's line of thought a bit further (though perhaps not quite in the direction he intended), human culture/civilization is an enormous con game, run by an outfit called the Nova Mob, whose various sobriquets (Sammy the Butcher, Izzy the Push, The Subliminal Kid, and others) stand for the consortium of government/corporations/church and whose
own grapes of wrath are themselves growing heavy, heavy for the vintage. In short, in today's terminology, the Nova Mob is the Deep State. But Inspector J. Lee of the Nova Police is undaunted; he's coming after them, and about to close down the con game:

So pack your ermines, Mary—We are getting out of here right now—I've seen this happen before—The marks are coming up on us—And the heat is moving in—… Ten thousand years in show business. The public is going to tear the place apart. (Burroughs 1964: 17)

But don't forget—keep telling yourself! Human culture is a rational adaptation to the environment.

If analysis of the little figurines, or rather the analysis of the analyses, leads us into these deep, dark waters of culture theory, it may or may not come as a relief to find that there is a lighter, happier side to the figurines. Why, they have found a new, vastly more important significance than as mere items for fusty old archeologists to debate in their tedious little journals: they have become icons of pop culture! In Jean Auel's immensely successful six-volume series of prehistorical fiction, Earth's Children (1980–2011), the author fleshes out the archeological material with an engrossing chronicle of her heroine's (Ayla/Daryl Hannah) odyssey from one end of the Eurasian Upper Paleolithic world to the other.7 Drawing on earlier work that cast the figurines as emblems in a fertility cult and representative of a generalized notion of the earth's fecundity, Auel postulates that Cro-Magnon peoples of the Upper Paleolithic worshiped a "Mother Goddess" and that the figurines, which she called "donii," were images of that goddess. Auel's gripping if deeply flawed tale inspired writers with a feminist agenda to produce a rash of Mother Goddess books (again, see Karen Jennett's excellent Female Figurines of the Upper Paleolithic [2008] for a detailed account). These works, some of which involved a heady mix of archeology, mysticism, and astrology, proclaimed that the Mother Goddess was the supreme deity in a religion based on men's acknowledgment that women, as creators of human life, were superior beings. That religious belief was subsequently shanghaied and perverted by a male conspiracy that dethroned the goddess and replaced her with a male supreme power, God the Father. Richard Feynman is smiling from his grave.

The real problem with Auel's premise, which does not take a specialist in Upper Paleolithic archeology to spot, is that it, like the earlier fertility-cult argument, applies the value system of a much later, agriculture-based culture to dispersed hunter-gatherer groups. Just as small groups of Upper Paleolithic people living in adverse and ever-changing circumstances were unlikely to have encouraged as many births as possible, so were they unlikely to have embraced a concept of the environment, Mother Earth, as a warm, fertile,
nurturing being. They did not till the soil, plant seeds, wait for them to grow, then harvest the bounty; they scrabbled around for what they could find or kill, which all too often was precious little. Their lives were utterly different from those of the village farmers who appeared with the Neolithic, around ten thousand years ago. Particularly when Auel turns, late in the Earth's Children series, to the mammoth hunters of Ukraine and Siberia, it becomes extremely difficult to visualize those individuals, huddled in their shelters of mammoth bone and hide while the Siberian winter howled outside, worshiping their store of figurines as emblems of a nurturing Mother Goddess. Auel and her followers have borrowed a prominent element of later agricultural societies, a belief system based on an Earth Mother, and quite mistakenly applied it to a very different time and social order.

I should note in passing that this is not to say, as conventional wisdom would have it, that the lives of Upper Paleolithic people were necessarily worse than those of their Neolithic successors. It is a daunting puzzle just why the Paleolithic-Neolithic transition occurred. It is hardly a clear-cut case of progress, of rational human adaptation once again moving the species one more step up the evolutionary ladder. For people living in dispersed communities, with more or less regular access to animal protein through hunting and fishing and to a diversity of vegetable products through gathering, it seems a poor trade to move into cramped villages where disease was easily spread and where animal protein and a diversity of vegetables were in short supply. Archeological surveys of human remains from the two eras in fact demonstrate that late Paleolithic hunter-gatherers were generally taller and more robust than hardworking, gruel-eating early Neolithic farmers. My take on the problem (again from a decidedly nonarcheological perspective): the Paleolithic-Neolithic transition, the dawn of human civilization proper, was the first Big Con run by the Nova Mob, getting the marks to abandon a less demanding lifestyle for one in which they would have to do a lot of hard, backbreaking work and turn over a considerable chunk of the fruits of their labors to the new bosses, the kings and priests who immediately appeared on the scene. When that social contract was being written, you can bet that Izzy the Push, The Subliminal Kid, and their cronies were sitting at the table with wet pens and wide smiles.

Actually, not to get too far off the track—well, okay, it is pretty far off the track, but hey, remember, this little essay is totally peripheral!—I do cherish another theory of the origin of civilization, which, in the interests of full disclosure, I should divulge here. It's called, or at least I call it, the Booze Theory of Civilization. It goes like this:

For thousands and thousands of years the cave guys and gals would go out in the morning and, with any luck, would return later with the butchered carcass of an animal and a few hide baskets of roots and greens. Then they'd build up the fire, throw an elk hindquarter on the barbie, toss the roots on the coals, and settle in
for a night of good eats and casual sex (they couldn't catch the Tonight Show). But for all of this, there was still something missing, some vague yearning for something better.

Then one night, in a particular cave, it happened that the cave gals hadn't been able to scrounge up more than a few greens, and with no vinaigrette those didn't make much of a side dish for the elk steaks. But then a couple of the gals remembered that they'd set aside in a corner of the cave a few hide containers of grass seeds they'd gathered a week or so previously. When they went over to check them out they found that the containers gave off a peculiar, rank odor and that the seeds had partly fermented into a suspicious-looking, bubbly liquid. But with nothing else to offer, they brought them over to serve with the meal. Now, the cave folks didn't exactly have discerning palates, and they didn't have vegans and Dr. Oz on their case telling them what and what not to eat, so they figured, what the hell, so what if the mush tastes kind of strange, it's here and let's give it a try. Then an amazing thing began to happen: they all started to feel pret-t-t-t-t-y good. One of the cave guys, Trog (cave guys always get named Trog), said, "This is it! This is what's been missing from our BBQs. We need more of this stuff! Bring it on! Chivas and Chateaubriand!" And so the world's first single malt was born.

Over the next few weeks and months the cave folks gradually adopted a new routine. As well as hunting and gathering, they spent part of each day tending to that clump of wild grass the gals had found (an ancestral strain of what was to become wheat), removing competing plants, tilling the soil around the clump, bringing water when rain didn't come. As time went by they held back some of the seed from particularly robust grass stalks and planted that around the original clump. After a while they had a supply they could harvest from time to time and have another barbeque blowout. Thus the world's first farming village came into being.

If this line of thought strikes you as irredeemably silly, reflect that its admitted silliness may serve a purpose. While you are busily dismissing (dissing) my little flight of fancy, ask yourself: How, precisely, does my scenario for the origin of civilization differ from the several interpretations of the Paleolithic figurines we've been considering? I would suggest that those are just as silly, in the sense of unfounded speculation, but that my little theory has the great advantage of not possessing an insidious hidden agenda, unlike the racism of Piette's "Negroid" theory, the sexism of the paleopornography interpretation, and the fuzzy-minded feminism of the Mother Earth concept. Adopting Nietzsche's maxim that the search for truth must be accompanied by humor, my agenda here is to propose a bit of thought-provoking Fun. When the cat of cargo cult science is turned out, it may return with just about any rodent.

In seeking to advance our understanding of the figurines, it is fortunate that paleoanthropology, unlike general anthropology, eschews those speculations and focuses on specifics, attempting in this way to wring every possible bit of information out of the cryptic little statuettes. Several thorough and illuminating studies by Michael Bisson and Randall White (Bisson and White 1996; White 1997, 2003, 2006) at once add much to the interpretation of the
figurines and demonstrate the paucity (really, silliness) of the approaches we have been considering. Through careful review of available stratigraphic data and microscopic examination of the figurines, Bisson and White determine that the objects quite probably had the specific function of assisting in some way(s) with childbirth; in other words, they were an important part of an obstetrical toolkit employed by women of the Upper Paleolithic to accomplish a well-defined objective. The approach and language of these two paleoanthropologists are so refreshing, after what has gone before, that it is worth quoting a selection, from White (1997), at some length:

Use of the Grimaldi, and at least some other Gravettian female sculptures in the context of childbirth is consistent with an archaeological context in which they are often found in clusters as if cached away for future use; childbirth being an occasional occurrence in small human groups. Moreover, the idea that the sculptures themselves were perceived as having power is supported by recent finds from Avdeevo on the Russian Plain (Grigoriev 1996 and personal communication). There, in addition to purposeful pit-burial of whole sculptures, sometimes more than one to a pit, Gvozdover and Grigoriev have found fragments of the same broken figure buried meters apart in meticulously dug pits of a special, cone-like form. If the sculptures were perceived as inherently powerful, it is easy to imagine that the disposal of broken examples would have been attended by great care and ritual.

Childbirth is both an emotionally charged and potentially dangerous event. It is predictable in its general timing (i.e., the average length of gestation), but unpredictable as to the timing of the onset of labor, the sex of the offspring, and the survival of the mother and/or child. We hypothesize that the Grimaldi figurines are best interpreted as individually owned amulets meant to ensure the safe completion of pregnancy. Amulets employ the principle of similarity to influence the outcome of uncertain events. They are often made by their owners, although they may also be obtained from shamans. Since the ethnographic record shows that in many societies amulets are thought to gain power with age, the sculptures may have been passed from mother to daughter over a number of generations.

This scenario also satisfies many of the legitimate demands of the feminist critique. It does not require the figurines to represent a generalized concept of womanhood, but instead recognizes that they may be produced by and for individual women, with no necessary inclusive or monolithic meaning that derives from gender alone. Individual production probably accounts for the great variability of the figurines. Our interpretation also does not imply the subordination or commoditization of women as do the fertility goddess (Gimbutas 1989), paleopornography (Guthrie 1979), and mating alliance (Gamble 1982) scenarios. Instead, we recognize the importance of women in themselves, not just as sources of babies, since we suspect the motivation behind these amulets was the survival of the mother rather than the baby. From this perspective, women are envisioned as taking active control of an important part of their lives using magical means that would have been entirely rational within their cultural context. (White 1997: 115–117)

Faced with the ages-old obstetrical dilemma as delivery approached, every woman, possessed of an essentially modern consciousness but completely lacking any modern medical assistance, turned to the figurines and to those accomplished in their use, herbalists and midwives, to survive.

Back to Back to the Future

All philosophers have the common failing of starting out from man as he is now and thinking they can reach their goal through an analysis of him. They involuntarily think of "man" as an aeterna veritas, as something that remains constant in the midst of all flux, as a sure measure of things. Everything the philosopher has declared about man is, however, at bottom no more than a testimony as to the man of a very limited period of time. Lack of historical sense is the family failing of all philosophers.
—Friedrich Nietzsche, Human, All Too Human

The tension and ambivalence produced by the contradictions that reside at the heart of culture are felt at least as strongly today as in those caves of the Upper Paleolithic. But we (post)moderns, with our enormous populations and specialized institutions, do a better job of hiding the dilemma—and thus make it even more frightening. We embrace an anemic political correctness and recite its platitudes—animal rights, freedom to choose, global warming—on every talking-heads program on TV. But the organicism, the gore, of it all is never confronted. We carefully conceal the gore of animal slaughter and butchery and the gore of childbirth and abortion in our high-security abattoirs (factories of death unlike anything the world has seen) and behind the closed doors of hospital delivery rooms and abortion clinics. Hiding these vital and visceral aspects of our lives enables us to pretend, in daily life, that they don't exist. It enables us to proclaim the niceness of everything and how nice we are in finding it all so nice: "Oooh, such a darling Shih-Tzu," "Oooh, such a cute baby." In doing so we hope somehow to suppress the horror implicit in our denial and, perhaps even more horrible, the necessary entanglement of the two symbolic complexes that made us fully human.

Men slaughter animals and from the gore of the butchered carcass endow the human group with continued life. Women give birth and from the gore of delivery similarly perpetuate the group. Yet both acts are fraught with danger: men are injured and killed in the hunt or afflicted by the vengeful spirit of an improperly slain animal; women are injured and killed in the act of giving birth or cursed with a misbegotten child. The freshly slaughtered animal and the newborn infant are hedged around with elaborate taboos for a very good reason: both animals and infants may be monsters in disguise.
The obstetrical dilemma that was with us for millions of years before we were properly "we," that is, before modern humans appeared on Earth, has given birth to another progeny. As well as having a decisive role in birthing us as a species, it has spawned a conceptual dilemma at least as acute. Apart from the fact that the newborn might actually kill the mother in delivery, its birth posed the fundamental problem of fitting it into the life of the family and social group. The first problem for an emergent cultural order, for a newly formed humanity, was to establish a set of identities. What were "animals," "humans," "tools," "deities," "Us," "Them"? The infant is born into a world of interrelated groups and, owing to the incest taboo, it necessarily has ties with both mother's and father's groups. But what sort of ties? How does the infant/child come to be identified with a particular social group? This is the conundrum of kinship/ethnicity at the heart of culture.

With these conceptualizations in process of formation, the phenomena of pregnancy, childbearing, and infant socialization lost any transparent "naturalness" they may have possessed for earlier hominins or for other mammals. My cat delivers its litter and succors the offspring (perhaps after eating a runt or two) without a seeming thought of fitting the new organisms into a social or conceptual order. For a woman and those who attend her at childbirth (a near necessity owing to the difficulty and danger of delivery), it is an entirely different matter. The newborn is not naturally anything; premature and utterly dependent, it is human only by virtue of the behavior and beliefs others direct toward it. The squalling stranger thrust into the group could be the stereotypical bundle of joy beloved of sweet little old ladies. Or, through some accident of birth, an unusual delivery, a physical deformity, or even a sign from the moon, it could as readily be an intrusive, malevolent spirit to be destroyed before worse things occur.

Ellen Ripley's recurrent nightmare returns to haunt us, as it always has; otherwise, there would have been no Aliens. You wake from a deep sleep to find your abdomen grossly distended. Something is moving inside you, rippling your skin. The pain intensifies, becomes unbearable. You scream uncontrollably. Then the bloody, shrieking horror erupts from your body, tearing your organs, causing a hideous death.

"What will we do now?" Call, the android-woman, asks. "I don't know," Ripley replies, "I'm a stranger here myself." As Alien Resurrection concludes, we realize that Call and Ripley's situation is our own. Neither natural beings nor cultural products, we share their ambiguous, incomplete identity. We cannot rely on tradition, cannot point to some vast panorama of "human evolution" that has established us at the pinnacle of creation, for, as we have seen, "we" haven't been around very long at all. And our future gives every indication of being far more changeful and uncertain than our past. What will follow humanity? Where, if anywhere, is the "here" we share with Ripley and Call?
Without resorting to another dreadful "post-" term or dusting off Nietzsche's unjustly criticized "über" metaphor, let's just call that being which may come "Something Else." Our language, especially the bloodless, contorted print-speak of academia, can't begin to describe it. It is something we sense or feel in our marrow: a distant storm approaching over the prairie, an ocean swell that begins to lift our frail craft.

Notes

1. For more—much more—on those contradictory aspects of life as generative principles shaping a semiotic construct of "humanity," see Drummond 1996.
2. Things were touch and go for the genus Homo during the final million years of the Pleistocene. To date, at least six species of Homo are known to have coexisted during that period (National Museum of Natural History 2017). One can only speculate, with wonder, what intergroup and interspecies relations occurred during that uncertain time, what allowed one species to prevail while the others disappeared. The question is all the more important in view of recent genomic research indicating that a million years ago the ancient medley of protohumanity was winnowed to a population of some fifty thousand individuals (Storrs 2010). How? No one seems to know. Whatever the cause, it seems clear that whoever survived that population bottleneck owes his or her continued existence not to an ordained "progress" built into the species, but to a radically different principle: the luck of the draw. That train's a-comin'.
3. It is bitterly ironic that the actual course of prehistory in Europe upends Piette's racist interpretation. The original hominin inhabitants of Europe, the Neandertals, were probably light complected. Having lived in northern climes for several hundred thousand years, they needed all the vitamin D they could get. This large-boned "brutish" race was displaced by gracile modern Homo groups emigrating from Africa and the Levant, where abundant sunlight very likely dictated dark skin.
4. For an excellent review of the figurines and their several discrepant interpretations, see Jennett 2008.
5. If the Greeks were up to this kind of monkey business way back when, it takes some punch out of the old joke: A Greek and an Italian man were sitting in a bar, when the talk inevitably turned to sex. "You know," said the Greek man, "it is we Greeks who introduced the notion of sexuality to Western culture." The Italian man thought for a moment and responded, "True. But it is we Italians who introduced the notion of sex with women."
6. But then, you might ask, what's so wrong with that? Isn't it a time-honored value complex of the American male: faster horses, older whiskey, younger women?
7. Just a cautionary note, however: whenever you find a writer referring to primitive or prehistoric people as "children" or "childlike," immediately run—do not walk—for the exits.

Chapter 3

n

Lance Armstrong: The Reality Show

Seven successive Tour de France victories, anointed four times as SportsMan of the Year by the US Olympic Committee, countless other awards, a mound of books, magazine exposés, and newspaper articles accusing and defending Lance Armstrong of wrongdoing, a small army of lawyers launching suits and countersuits, multimillion-dollar endorsement deals, and it all came crashing down around him that fateful day in January 2013 when he walked out and took his seat on the set of the most sacred shrine of the American conscience: The Oprah Winfrey Show.1

The show is America's Confessional, and Oprah the Grand Inquisitor. In front of millions, under the blazing studio lights, she can extract confessions of sins concealed for years by the most distinguished among us. From best-selling authors of bogus books to repentant celebrities, Oprah has them in tears, telling all between car giveaways and painkiller commercials. The CIA could have saved all the expense and bad press over its secret prisons and waterboarding—just trundle Khalid Sheikh Mohammed out on stage and Oprah would have had him singing like a canary in time for top-of-the-hour cable news.

And what about the audience for Lance's confession, those couch-potato voyeurs who experience the wider world as a bizarre combination of talk shows, sports events, reality TV, cable news breaking stories, sitcoms, and HBO/Showtime movies? That is to say, what about us—the great American public? With her dramatic unveiling of Lance's charade, Oprah bestowed the ultimate gift on that audience, better even than those fabled car giveaways. For a few brief moments, before we had to return to our troubled, occluded lives, she allowed us to experience a true, pure feeling, hot as a poker, bright as a laser: the righteous indignation that wells up inside the American breast when we encounter a fundamental betrayal of trust, a scam far worse than Bernie
Madoff's (who merely stole from the rich), a con that subverts the balance of the way things are and are supposed to be. Lance was our ultimate athlete-hero and in many ways our ultimate American Hero of recent times, far more impressive and sponge-worthy than a Super Bowl quarterback (for all the weeks of hype, really just a flash in the pan, forgotten until next season), a muscle-bound home-run slugger, or an unlikely sort-of-black president with a lawyer's golden tongue. Day after day, year after year, mile after torturous mile, Lance wore that yellow jersey, the leader's emblem of the Tour de France. And—the sweetest treat of all—this gaunt, determined young Texan from the outskirts of Dallas wore it proudly through throngs of spectators right there in that citadel of anti-American snobbishness: France.

Lance's betrayal of the public trust was especially painful because he was the ultimate underdog, the hero-image we Americans somehow manage to embrace while riding roughshod over the rest of the world. It was a modern miracle that he should have been on that bicycle seat at all, that he should even have been alive. At twenty-five, with his reputation as a top cycling competitor already established, he was diagnosed with advanced testicular cancer, a cancer that had already spread to his brain and lungs. Following surgery and chemotherapy he was given less than a fifty-fifty chance to live. Full recovery, let alone a return to sports, seemed a remote possibility. Yet with excellent physicians and an innovative regimen of chemotherapy, he not only survived but, three years after his surgeries, won the Tour de France. His is a remarkable story, one of the most impressive come-from-behind living legends of American history. And, cruelest of ironies, it was all made possible by that fount of life-giving, life-extending wonders, the pharmaceutical industry, which was later to strike him down.

Lance Armstrong was an athletic prodigy, endowed with remarkable stamina from childhood and, quite probably, from birth. While still in junior high school he became attracted to endurance sports—swimming, running, bicycling—and, seeing a poster for an "Iron Kids" triathlon, entered the competition. He won. He was thirteen years old. Two years later he was ranked first in the under-nineteen category of US triathlon. At sixteen, he became a professional triathlete. In 1988 and 1989, aged eighteen and nineteen, he held the title of national sprint-course triathlon champion. Two years later he became a professional in the world of international cycling competition and put together an impressive series of victories that culminated in his first Tour de France win in 1999.

The scandal that erupted around him in later years should not detract from the remarkable gifts of a truly exceptional human being. Rather, it adds to the tragedy: that one so gifted should feel he needed an edge to remain on top. But … But it is precisely at this point, when our moral compass seems fixed on a steady bearing, that it is necessary to question the basis of our
certitude, to question whether we inhabit a neatly partitioned social world in which some deeds and people are good, some evil, and in which we know for a certain fact when someone—Lance Armstrong in this case—crosses the line, goes over to the Dark Side. Oprah, with her enormous audience of other right-thinking Americans, does not question the premise that good and evil are clear to all, necessary anchors to secure us in a rapidly changing, often bewildering world. Nor does anyone in her parade of penitents appear to question that premise; they know the secret wrongs they have done and, under the blazing studio lights and Oprah’s doe-eyed gaze, confess all to the Grand Inquisitor. It is necessary to ask, in short, whether Lance Armstrong’s deeds violated all that is good and decent in human life or whether, just possibly, those deeds actually cast their own inquisitorial light on our basic values. In the very midst of the public firestorm of outrage, it is necessary to ask whether Lance is so awfully bad. (Do you perhaps recall the old joke circulating during the trial of Lyle and Erik Menendez, two enterprising teenagers who took a drastic shortcut to their inheritance by doing away with their parents in their Beverly Hills mansion: “So we shotgunned Mom and Dad—was that so awfully bad?”)

When one begins to turn the Inquisition back on itself, to consider what the Lance Armstrong affair reveals about our basic values, it is at once apparent that Americans have quite specific expectations of their athletes. By far the most important, and general, of these is that the star athlete display his God-given physical talent: he performs feats of natural prowess before the stadium throngs, the crowds lining the race course, the multitudes of those couch potatoes slumped in front of their giant flat-screen HD sets. Not to get too Lévi-Strauss on a readership that has mostly turned its back on the master, but Americans believe in a fundamental division between Nature and Culture. And the star athlete is the embodiment of The Natural (as played by Robert Redford). His body is his temple, and anything he does to defile that temple is dealt with harshly by bureaucratic agencies established to identify any violation of that ideal. (And by high school football coaches who forbid beer—and even sex—for their Friday-night wonders.) Drug tests have become the norm in professional sports: the football or basketball player who tests positive for cocaine or other mind-altering drugs faces suspension. Gone are the days when Mickey Mantle could walk up to the plate drunk as a skunk and swing for the bleachers. But far worse than these debilitating drugs is the use of drugs intended to improve performance: steroids and blood-doping chemicals of all sorts are part of a growing pharmacopoeia of the Great Satan of professional athletics, the dreaded and despised performance-enhancing pharmaceuticals. These evils subvert the natural order of things.

The Lance Armstrong affair has put one little bureaucracy in particular in the spotlight: the US Anti-Doping Agency (USADA). Created in 2000 to
enforce strictures on drug use by Olympic athletes, its lab-coated inquisitors conduct their studies under the agency’s slogan, “Inspiring True Sport.” Examining its goals in some detail is at least as revealing of American values as reading those fanciful documents, the Declaration of Independence and the Constitution, drafted by a small group of wealthy white slave owners in the late eighteenth century:

Mission: Preserve the Integrity of Competition
We preserve the value of the integrity of athletic competition through just initiatives that prevent, deter, and detect violations of true sport.

Inspire True Sport
We inspire present and future generations of athletes through initiatives that impart the core principles of true sport—fair play, respect for one’s competition, and appreciation for the fundamental fairness of competition.

Protect the Rights of Athletes
We protect the rights of all athletes to compete healthy and clean—achieve their own personal victories as a result of unwavering commitment and hard work—so they can be celebrated as true heroes. (US Anti-Doping Agency 2017)

To “compete healthy and clean … .” The self-righteous obtuseness of the mediocrities who formulated these goals does justice to Ward Cleaver, that all-knowing disciplinarian who dispensed his sage advice every week to keep The Beaver in line. What is wrong, misled, or, frankly, stupid about the pretentious goals of the USADA? Why should we not look to them as an admirable statement of a fundamental morality that all the world, particularly the world of professional athletics, should embrace? The principal problem with those goals is that they fail to recognize that the Nature/Culture dichotomy embraced by Americans is in fact an elaborate cultural construct or folk taxonomy, a contrivance that owes little to the joint physical and social endowments of a human being. The crucial fact the lab coats ignore is that there has never been a “natural” man or woman “to compete healthy and clean” in anything. Our bodies are the product of some three million years of an evolutionary process that mixed—and often mangled—discrete physical abilities, technical expertise, and social skills. If it possesses any distinguishing feature at all—and that is quite debatable—what we choose to call “humanity” is a loose and ever-shifting assemblage of biology and culture. For a few technocrats to stroll into this rat’s nest and begin to dispense ill-formed edicts in the guise of scientific findings is laughable and terribly sad.


But even if we set aside these big-picture considerations drawn from paleoanthropology and cultural anthropology, the antics over at the USADA appear quite limited in scope. Let us begin by granting their premise that professional athletes should be required, under penalty of exclusion from their sport, to refrain from tampering with their “natural” abilities through “unnatural” performance-enhancing measures. This proves to be a slippery slope. For starters, how do the lab coats identify precisely which chemicals are to be placed on their Index of forbidden drugs? The global pharmaceutical industry is a multibillion-dollar enterprise devoted to creating more and more new drugs (which it touts as being far more effective than its earlier products, whose patents soon expire and fall prey to cheap generic replacements). In tandem with America’s official “war on drugs” (and we all know how well that’s going), the FDA and other bureaucracies like the Anti-Doping Agency face the impossible task of keeping up with, let alone regulating, the flood of new drugs hitting the market every year. Where the general public is concerned (the trodden masses without their own army of lobbyists in the Gucci Gulch corridors of Congressional office buildings), the best these agencies can do is require the giant pharmaceutical corporations to issue disclaimers and warnings when they showcase their products in commercial spots on Oprah and the evening news: “Feeling depressed? Take our new antidepressant pill! It’ll make you feel great! … Well, actually, it may make you want to kill yourself. But, hey, your doctor will prescribe it!” Closely related to the challenge posed by new pharmaceutical drugs is the burgeoning group of vitamins, minerals, hormones, and other “nutritional supplements” that, because they are deemed “natural,” fall outside the purview of the FDA and similar agencies. When Mother June sent The Beaver down to the corner grocery store to pick up a few things for supper, her shopping list didn’t include items such as açai, ginkgo, kava, bilberry, sativa, and senna. There are thousands of these substances, whose effects on the human body are known only vaguely. And when used in their purified or processed form and in an enormous variety of combinations, it is anyone’s guess what their short- or long-term effects may be. Suppose that Lance and other professional athletes, instead of raiding the medicine chest, paid a visit to the local herbalist, who gave them a god-awful-tasting brew compounded of berries from the New Guinea highlands, roots from the Amazonian forest, leaves from the Manchurian steppe. After a few weeks of hooking down this stuff, they went out and did amazing things on the racecourse or playing field. Would our little band of inquisitors at the USADA hastily revise their regulations and go forth to strip medals, return prize money, and generally ensure that athletes “compete healthy and clean”? We are a little further down that slippery slope—and picking up speed (but hopefully without any “unnatural” lubricants!).


And here’s another curve ball—no spit: suppose Lance et al. decide to frustrate the lab coats who routinely sample their urine and blood for telltale traces of proscribed substances. Instead, they find a few medical technicians of their own, physicians and therapists at the vanguard of an established and expanding field: ultrasound treatment. Long used to reduce inflammation, relieve osteoarthritis, and promote postsurgery healing, innovative ultrasound treatment is found by these pioneers to strengthen muscle growth and significantly improve stamina. A few weeks of regular treatment have all the performance-enhancing effects of steroidal and blood-doping chemicals, but without the unpleasant side effects (you can still get it up!). Natural? Unnatural? Permissible? Proscribed? If the officials decide such treatments confer an unfair advantage, what will they say about deep-tissue massage? Whirlpool baths? The slope grows steeper.

On a not altogether whimsical note, we may extend this inquiry to a quite different scenario. Rather than take a risk with any physical means of improving their games, suppose that “Slammin’ Sammy” Sosa, Mark McGwire, and Barry Bonds discovered a remarkable sports hypnotist. Under deep hypnosis, they were told over and over, “You are a very good long-ball hitter. You will hit many home runs. You may now wake up and head for the ball park. But first, that will be five hundred dollars.” They then proceeded to hit record numbers of home runs and garner an impressive list of rewards until the whiskey-bloated lawyers in Congress, finding it unfashionable to hunt communists, hauled them in for forced testimony that forever tarnished their outstanding careers. Taken together these examples seriously undermine the moral certitude exuded by USADA bureaucrats, Oprah, her vast audience, and the “wrongdoers” themselves.

Still, we are just coming to what is by far the slipperiest part of our downward rush, as represented by the equipment and facilities that are integral to athletic competition. Virtually every athletic event (perhaps excepting only nekked female mud-wrassling, which has not yet been designated an Olympic event, tant pis) involves the use of complex, manufactured artifacts in a specialized, often fantastically expensive setting such as the ball park or Olympic stadium. Kevin Costner’s Field of Dreams is built on a tract of bulldozed urban blight rather than an Iowa cornfield, and only after the city fat cats have stuffed a whopping bond issue down the throats of the rube citizens. Even a seemingly simple mano-a-máquina arrangement like a man on a bicycle is hedged around by a host of technical and financial matters. The bicycle itself is not two centuries old; before that, the particular combination of physical ability and mental toughness required to win a Tour de France was likely expended harvesting crops in a seigniorial manor. Today’s racing bicycle is a piece of cutting-edge technology, the product of advanced metallurgy, engineering, and aerodynamic tests conducted in a wind tunnel. Lance Armstrong’s bicycle (rather, bicycles, since he required a stable of them for a single Tour de France) was a ten-thousand-dollar machine with incredible lightness and tensile strength. That machine was essential to his victories. Its importance cannot be overstated.

Suppose that somewhere in Bulgaria, Romania, or Something-or-other-istan there lives a strapping farm lad with the metabolism of a Galapagos turtle and a dream of himself in the yellow jersey leading the pack through the tortuous course of the Tour. The only bicycles available to him, however, weigh twenty-five pounds and have tires that would fit a light truck. Unless some wheeler-dealer promoter spots the lad and plucks him out of his rural obscurity, he will grow old picking beets and riding his two-wheeled clunker around the town square.

Even when an athlete’s equipment is minimal, as, say, with a Speedo suit worn by an Olympic diver or swimmer (but not too minimal—none of those scandalous Riviera codpieces for our Natural Man), the facilities required for the sport are monumental. Greg Louganis, the Olympic diving sensation of the 1980s, grew up in Southern California around swimming pools, trampolines, and diving coaches (he was later to become yet another star penitent on The Oprah Winfrey Show). The Olympic diving pool for the ten-meter platform and three-meter springboard where Louganis launched his remarkable aerial displays is at least sixteen feet deep, not exactly Mom and Dad’s backyard above-ground Target special. Had Greg grown up in bayou country as one of the cast of Swamp People, learning how to dive off the dock of his granddaddy’s crawfish hole, he is unlikely to have perfected his signature reverse two-and-a-half pike.

These examples could be compounded endlessly, and all underscore the crucial fact ignored by the narrow-minded lab coats of the USADA that their so-called true sport involves the seamless meshing of physical ability and technical expertise. It is almost certainly true that these technocrats are kept too busy compiling lab reports and giving legal testimony to keep up with the vastly more interesting scientific discoveries in the field of paleoanthropology. Tool use has long been thought to be a distinctive feature of the human species: long before language evolved to anything like its present state, early hominins were feeding and protecting themselves with the help of stone tools. The human body and nervous system (including the brain) evolved to promote tool use; such is our Natural Man. Moreover, it now appears that, contrary to previous anthropology-textbook wisdom, stone tool use actually preceded the appearance of the entire Homo genus. The earliest stone tool users (and possibly makers) were not humans at all, but an australopithecine lineage that flourished over three million years ago. The most famous member of that lineage (whose claim to naturalness might now be challenged by the USADA!) is Lucy (in the sky with diamonds). Her conspecifics, Australopithecus afarensis, were using stone tools to butcher carcasses some half-million years before the appearance of the Homo line (Noble 2010). Human evolution was in large part a consequence of tool use, not the reverse.

Hurtling down this slippery slope, we at last plunge over the edge of a vast precipice (like James Bond in the adrenaline-pumping opener of The Spy Who Loved Me) into a dark and bottomless sea. We have encountered and must now face (sink or swim!) a stunning paradox: an athlete’s physical body is in fact less natural than the implements/tools/machines she employs to display her skill. For the ancestors of those artifacts created her body, and millions of years before all the recent hype about biotechnology engineering a race of cyborgs. The human body is basically a particular sort of artifact, which we happen to find very special (since we inhabit one). How might this revelation affect our deeply rooted belief that Nature and Culture are fundamentally separate? If that dichotomy now appears far too nuanced and convoluted for bureaucratic dullards to comprehend, let alone regulate, what are we to make of our strong feelings, our love, for the athlete? If not a display of unblemished physical perfection, what is it about “true sport” that we celebrate, even worship?

Ironically, a clue to the answer to these questions is to be found in the very language of those who regulate athletics: their goal is to detect and banish the use of “performance-enhancing drugs” because they seek to ensure the integrity of performance. Anyone can ride a bicycle, but only a very few can ride at speed over the two thousand miles of jumbled terrain of the Tour de France. We like to see people who can do things very well. But only certain things. Warren Buffett is an exceptional performer when it comes to making money, but we don’t throng the streets of Omaha to catch a glimpse of its Oracle. And we don’t award him any gold medals (since he already has most of the gold). Nor do we celebrate the people skills and networking abilities of those we send to Congress; in fact, we’d much rather tar and feather that lawyerly vermin. What we value about performance is intrinsic to the meaning of the word: it is an activity involving display and focused attention. The performer, as an individual or member of a small group or team, behaves before an audience in a way that engages, excites, rivets the attention of that audience. He is the catalyst essential to transforming the humdrum doings of daily life into an event.

We have been hurled over the edge of a slippery slope into the sea below, and we now find ourselves in troubled waters. If we as right-thinking, fair-minded Americans insist on or acquiesce in our government and its lackeys regulating the performers among us, what are we to think about the highly discrepant treatment we apply to those individuals? Performers come in all stripes. We bestow attention, even adulation, and riches on them based on their ability to engage and excite us. Some accomplish this on the playing
field, some on the racecourse, some on the three-meter board, and still others on stage, film, CD, or even, to invoke a rapidly disappearing world, through the written word. Yet if it is superb performance we value, why should we apply different standards to the outstanding performers among us? Particularly now that we have seen how intractable the Nature/Culture opposition is, and in deference to the cherished American value of fair play, should we not demand that all our performers adhere to the same standards of conduct? Perhaps, to the delight of the bureaucrats in the USADA, we should greatly extend their mandate, tasking them with the responsibility of ensuring that all our performers are “healthy and clean” exemplars to the general public and, especially, to our young people who emulate them. Yet as the inquisitors begin their new assignment, they immediately encounter some deeply disturbing material. Having decided to begin their new studies with the performance-arts equivalent of Olympic gold medalists and their arch villain, Lance Armstrong, they compile CDs, DVDs, and journalistic accounts of a musical group that over the decades has provided the most successful spectacles of any type of performance, including sporting events such as the Super Bowl. That group goes by a whimsical name: the Rolling Stones. The lab coats confirm persistent and shocking rumors that a prominent member of that group, one Keith Richards, is often under the influence of a variety of controlled substances and, horror of horrors, sometimes performs on stage while in that condition. Moreover, they learn that the leader of the group, a Mick Jagger, is said on occasion to do the same, prancing around the stage like the drug-crazed maniac he apparently is. Considering the blatant disregard these performers show to their bodies and, far worse, to the multitudes that idolize them, the USADA must act swiftly. Using its expanded authority, it acts to strip the Rolling Stones of every musical award the group has received over the past half century. And the bureaucrats, supported by a phalanx of lawyers, take steps to impound and seize the fortune the group has amassed through its illegal activities. They embark on the daunting task of removing the group’s songs from YouTube and other social media while confiscating any CDs and DVDs they locate in stores and online. Having sniffed out this flagrant violation of our basic values, the lab coats are distressed to find that the stench goes far deeper than contemporary musicians caught up in the narcissistic drug culture. Additional research documents that major figures in literature were anything but “healthy and clean,” and, even more alarming, that their work is tainted by unmistakable signs of their substance abuse. On reviewing the novels and short stories of Ernest Hemingway the investigators find that all exude the strong bouquet of liquor, and that the blood-alcohol content of his later work in particular should be incorporated in its titles: Islands in the Stream (of Rum), for example. Fearful of the harmful effect Hemingway’s conduct may have on the millions of
Americans required to read his poisonous books in school, the authorities make every effort to eradicate that influence by seizing copies of his books and expunging references to him in textbooks. And just as they did with Lance Armstrong and his trophies, they strip Hemingway of his Nobel Prize.

As the expanded USADA digs deeper into the field of literature, it finds other cases that require its inquisitorial attention. It discovers that the nation’s youth, already the victims in a raging war on drugs, are subjected throughout middle school and high school to the poetry of an especially pernicious figure: the notorious opium addict Samuel Coleridge. Like Hemingway, Coleridge not only made no secret of his drug abuse but wove it into the body of his work with dark, disturbing images. In “The Rime of the Ancient Mariner” ([1798] 1997), which millions of our children are required to read at a young and impressionable age, we find deeply troubling passages:

Alone, alone, all, all alone,
Alone on a wide wide sea!
And never a saint took pity on
My soul in agony.

The many men, so beautiful!
And they all dead did lie;
And a thousand thousand slimy things
Lived on; and so did I.

I looked upon the rotting sea,
And drew my eyes away;
I looked upon the rotting deck,
And there the dead men lay.

I looked to heaven, and tried to pray;
But or ever a prayer had gusht,
A wicked whisper came and made
My heart as dry as dust.

.......................

All in a hot and copper sky,
The bloody sun, at noon,
Right up above the mast did stand,
No bigger than the moon.

Day after day, day after day,
We stuck, nor breath nor motion;
As idle as a painted ship
Upon a painted ocean.

Water, water, every where,
And all the boards did shrink;
Water, water, every where,
Nor any drop to drink.

The very deep did rot: O Christ!
That ever this should be!
Yea, slimy things did crawl with legs
Upon the slimy sea.

About, about, in reel and rout
The death-fires danced at night;
The water, like a witch’s oils,
Burnt green, and blue, and white.

Coleridge’s final outrage, which prompts the lab coats to drastic action in removing his name from the record of world literature, is that he actually composed a large part of one of his most famous poems, “Kubla Khan,” while in an opium stupor. Even Coleridge’s decadent English contemporaries were scandalized by his audacity in publishing his hallucinations as poetry. Clearly, such behavior is unacceptable to anyone who values the integrity of performance. The integrity of performance. At this point in our inquiry it is difficult to know just what that phrase might mean. Readers will appreciate that the previous pages have been an exercise in reductio ad absurdum (although an occasional reader with ties to the Moral Majority might endorse these arguments to the letter), a fixture of philosophical and mathematical thought since the pre-Socratics. If we approve the punishments meted out to Lance Armstrong for his use of performance-enhancing drugs, then we must condone punishment for other exceptional performers who have done the same. If that course of action is untenable, then our treatment of Lance Armstrong is seriously in error. Something is deeply amiss in the American socio-logic. To begin to understand what that might be, it is necessary to employ the classical reductio argument in a way that departs from the formal proofs of Whitehead and Russell (1910–13). In the matter before us there is no unambiguous truth value: [it is not the case that A entails B and A entails not-B] does not apply. The law of contradiction, a bulwark of traditional philosophy, is of no help here. Why? It is because the Lance Armstrong affair, like every cultural phenomenon, obeys a “logic” that owes far more to Camus than to Russell. What most Americans accept as unquestionably true—the need to
assure that athletic performers be “healthy and clean”—is shot through with ambiguity and irresolvable contradiction. Our moral compass is not fixed on a true course because there is no true course; an unflinching examination reveals that compass to be spinning haphazardly from one point to another. Any certain truth one proposes is therefore incomplete and mistaken, and to insist on it, particularly by legislating it, is an absurd undertaking. It is a page from Camus’s The Rebel ([1951] 1991), not Whitehead and Russell’s Principia.

It seems the only honest approach for a cultural analysis of the Lance Armstrong affair and, by extension, American society in general is to identify key dilemmas at the heart of our set of basic values.2 Any credo put forward as a guide for behavior, especially the all-too-common odious variety that regulates and punishes, is inevitably skewed, a one-sided distortion of an underlying absurdity. The key dilemma (or “elemental dilemma,” following Fernandez [1971, 1981]) in the Lance Armstrong affair is the irresolvable contradiction posed by an extraordinary individual being both an autonomous actor and a social being subject to the laws and standards of a group composed of highly diverse but mostly ordinary individuals. We value his exceptional performance yet at the same time insist that he conform to rules set by all-too-unexceptional people who insist on living in a mediocre world.

The unhappy marriage between the individual and society is a fundamental feature of human life, but is particularly strained in the United States. Only in Camus’s world would the slave owner Thomas Jefferson draft what is arguably the best-known sentence in the English language: “We hold these truths to be self-evident, that all men are created equal.” Founded on absurdity, American society over the past two-plus centuries has become a land of irresolvable contradictions (we are the logician’s excluded middle, the “or” symbol in the Principia proposition ∗2.11. ⊢ . p ∨ ∼p). Nowhere is this more evident than in the matter of competition. Created equal, everything in life urges us to get ahead. Of course, it is impossible to get ahead without leaving others behind. During the first decade of the twenty-first century, financial inequality in the United States returned to the extremes reached during the boom-and-bust era of the late 1920s that precipitated the Great Depression:

In the United States, wealth is highly concentrated in a relatively few hands. As of 2010, the top 1% of households (the upper class) owned 35.4% of all privately held wealth, and the next 19% (the managerial, professional, and small business stratum) had 53.5%, which means that just 20% of the people owned a remarkable 89%, leaving only 11% of the wealth for the bottom 80% (wage and salary workers). In terms of financial wealth (total net worth minus the value of one’s home), the top 1% of households had an even greater share: 42.1% (Domhoff [2005] 2017).
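A quick arithmetic check of the quoted figures may be helpful (the computation is mine, added for clarity; it is implicit in Domhoff’s passage):

$$35.4\% + 53.5\% = 88.9\% \approx 89\%, \qquad 100\% - 89\% = 11\%.$$

The top quintile’s 89 percent share is simply the sum of the two reported components, and the bottom 80 percent’s 11 percent is the remainder.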


If competition for wealth and social status has now largely played out, with 1 percent of Americans owning nearly half of the country’s financial resources, we non-1-percenters are left with a burning need that has no real-world economic outlet. How can one hope to get ahead when the odds are so terribly long? It seems that American culture has generated two complementary responses to the agonizing problem of increasing inequality and wage servitude in this land of golden opportunity: spectator sports on a massive scale and television reality shows. While politicians of a declining Roman republic of the second century bc devised the scheme of “bread and circuses” to keep their masses from rising up in protest against their corrupt regimes, the American establishment has hit on a more stringent plan: forget the bread and concentrate on the circuses. Sporting events have lost most of their former appeal as local affairs in which ordinary people could participate: while kids still play ball in vacant lots on occasion (when they aren’t exercising their thumbs on their iPhones), and while a few oldsters still slog around softball diamonds in community parks, much of the participatory nature of sports has lapsed. Instead, enormous Colosseum-like structures have been erected in our cities, and every two years an entire sports complex—a sprawling athletic village—is built to host the Olympic Games. Those kids still playing Little League baseball are inculcated, sometimes violently by dads frustrated by their own mediocrity, with the hallowed American value of competition. Yet only a tiny fraction of those kids wind up in the big leagues, The Show that mesmerizes the herd made up of their former teammates who did not make the cut. Baseball, our unofficial national pastime, has been transformed almost beyond recognition over the past several decades. Billionaire owners trade millionaire players in a twenty-first-century slave market and send them out to play in immense stadiums erected as municipal shrines at taxpayers’ expense, stadiums with roofs, climate control, and Astroturf for grass. Games played at night under batteries of lights with near-freezing temperatures outside have become the norm for the World Series (the exigencies of cable TV coverage demand it). And the playing season, already long, has been extended to pump up the bottom line. The team itself has become a specialized corporate unit. The boys of summer have become the designated hitters, relief pitchers, and base runners of November. Even with their new corporate structure and big-screen HDTV appeal, however, spectator sports have taken a back seat to a phenomenon that has exploded at the heart of American popular culture: reality shows. In a sense, MLB, the NFL, and the NBA serve up sports programming that is itself a genre of reality television, since they are unscripted displays of American competitiveness in action. But the definitive shows that have completely transformed
American television are much more recent than corporate-based sports. Productions of the 1970s such as The Dating Game, The Newlywed Game, and The Gong Show paved the way for shows of the late 1990s and 2000s that took the television industry and the American public by storm. The phenomenal success of the now-iconic shows Survivor and American Idol ushered in a new viewing environment with a myriad of shows that feature competition as the supreme value in virtually every facet of American life. Participants in these shows do not simply go on vacation to exotic locales (Survivor, The Amazing Race), enjoy singing and dancing (American Idol, Dancing with the Stars), work at advancing in the world of business (The Apprentice), form romantic attachments (The Bachelor and The Bachelorette), or even, in what may well be the most pernicious of these shows, play the little-girl game of dress-up (Toddlers & Tiaras). JonBenét’s body lies a-moldering in its grave. Participants do none of these real-world things; instead, they engage in contrived and cutthroat competition to see who can do reality-show things best, who can be the winner.

As traditional religious faith and church attendance wane even in this land of Puritan ancestry, it would not be an exaggeration to suggest that reality television has become the new national religion, one that engages and excites tens of millions of viewers and keeps the most popular shows at the top of rating charts. From week to week, we can’t wait to see who gets voted off Survivor and who the nasty judges of American Idol send home in tears. It is a “religion” based not on Christian love or Islamic orthodoxy, but on raw, unbridled, in-your-face competition.

However, the bitter irony of reality television is that the situations and made-for-television personalities and dramas of the shows are hopelessly artificial, distorted and contrived versions of competitive life in an American society that has already picked the winners—that tiny 1 percent who own and control the bulk of the nation’s resources. The reality of American life, its stark inequality, racial hatred, rampant gun violence, perpetual war, untreated medical conditions, prisons (for profit!) bursting with a population that dwarfs that of Solzhenitsyn’s Gulag—none of this is touched on in the breadless American circuses that enthrall us. For all too many of us, the multitudes that make up the shows’ audiences, actual life is incredibly alienating and painful, and so we eagerly grasp at a fictional reality composed of the basest stereotypes and passed off as genuine.

In The Future of an Illusion, Freud (1927) lays out a formidable and chilling argument in which he describes monotheistic world religions as a collective case of a self-delusion neurosis, a neurosis cultivated by people incapable of facing life’s problems without a cognitive/affective crutch. And in Civilization and Its Discontents (1930) he extends that argument to civilization as a whole: human society is a fabric of palatable lies, woven over the ages to disguise irresolvable conflicts within each individual psyche. Here is the reality that our new national religion, reality television, does everything to conceal.

In its tentative encounter with its host culture—ourselves—American cultural anthropology has paid insufficient attention to these fundamental arguments that come to us brilliantly presented in the work of Camus, Freud, and, yes, Burroughs. Instead, that faltering academic discipline has preferred virtually to ignore Camus’s penetrating analysis of modern society and to dismiss Freud and the psychoanalytical approach as inadequate to the task of the description and analysis of social action (and incidentally has tarred Lévi-Strauss’s profound thought with the same brush). Although anthropologists may occasionally speak of cultural analysis as cultural criticism, that discussion is generally confined to economic and political topics. But the problem before us goes deeper: it goes right to the heart of the system of basic values we profess to embrace. As suggested above, a close analysis of those values reveals them to be shot full of contradiction and ambivalence. Rather than pursue that line of thought rigorously, cultural anthropology as it has developed in the United States tends to put a happy face on social life, taking as its program the elucidation in meticulous detail of the symbolic composition of culture—essentially an exercise in hermeneutics that celebrates the intricate structure of its subject, and not the discordant systems of nonmeaning integral to the key dilemmas of American and any culture.

It is much nearer the truth to regard culture not as a treasure trove of a people’s vital essence, but as a disease, a virulent outbreak that infects and poisons its carriers. To approach culture from this perspective requires the anthropologist to examine and dissect it with the cold, analytical precision of the pathologist. It requires Nietzsche’s passion for the unvarnished truth, which he advocated repeatedly to little avail.3 In its advanced pathological state, it is essential that the anthropologist approach American society as a pathologist would a diseased organism, seeking out the specific toxins and tumors that are in the process of destroying it. In that analysis, a particularly malignant tumor attached to vital organs of our society is the body of reality shows; these sap whatever creative energy survives in a sadly diminished America. These shows are so virulent because they tap directly into the core tissue of American values: to tame the wilderness through individual effort; to make something of oneself starting with the very little available to the immigrant; or again, in a phrase, to compete and win.

It is often said that American society owes its distinctive character to the experience of pioneers and settlers faced with a vast frontier that they had to conquer or die in the attempt.4 If the grand design of American culture may be described in this way, then one might suggest that the historical theme is repeated in the host of reality shows now inundating the airwaves. That suggestion would come with a crucial disclaimer, however, which we owe
to Marx’s famous observation in The Eighteenth Brumaire: “Hegel remarks somewhere that all great world-historic facts and personages appear, so to speak, twice. He forgot to add: the first time as tragedy, the second time as farce” ([1852] 1968: 97).

The tragedy of America is part of the larger tragedy of the Americas. It is the story of genocide and environmental degradation on an unprecedented scale, perpetrated by European explorers and colonists turned loose on the New World, turned loose and intent on enriching themselves, on winning regardless of the cost in human lives and established ecosystems. “The discovery of America was followed by possibly the greatest demographic disaster in the history of the world” (Denevan 1992: 6–7). The extent of the carnage and catastrophe was not widely acknowledged for centuries after the event, although lone and immediately discredited voices were raised from the beginning (the work of Bartolomé de las Casas being an outstanding example). It might be hoped that the mistake would have been corrected by the young discipline of cultural anthropology, which in the United States came of age through exhaustive studies of Native American societies (see the impressive volumes of the Bureau of American Ethnology). To its lasting shame, however, the foremost authorities on those groups—Alfred Kroeber, dean of American anthropology, and Julian Steward, editor of the canonical Handbook of South American Indians—grossly underestimated the indigenous population of the Americas. In a flagrant display of professorial arrogance, Kroeber and Steward dismissed population figures of 35 to 50 million advanced by las Casas and other scholars as the inflated and fanciful work of nonspecialists. Instead, Kroeber (1934) proposed a figure of 8.4 million and Steward (1946) 15.6 million. Because of their influence in the field, those numbers were not seriously challenged for decades. They provide a jarring contrast with the best current estimates of indigenous population at the time of Columbus’s arrival: 54 to 75 million (see Denevan 1992). Tens of millions perished from smallpox, measles, influenza, famine, and massacres, and the response by anthropologists was to catalog museum artifacts and record the quaint customs of the few survivors.

On a smaller scale, the tragedy of America unfolded in an especially agonizing manner: in the Rocky Mountain west, with the coming of the mountain men and their exploitation by the first of the robber barons, John Jacob Astor and his American Fur Company. Perhaps no figure in American history or legend is imbued with the independence and supreme competence of the mountain man: living by his wits in a wild and hostile land, he survived hunger, bitter winters, and Indian attacks. And not only did he survive, he triumphed. In the best American tradition, he won. At least for a couple of decades. Even before the beaver began to run out and European tastes turned to silk, legendary mountain men like Jim Bridger, John Colter, and John “Liver-Eating” Johnson5 felt the pressure to abandon their independent lifestyle in favor of a more regimented existence as employees of a fur company. It was a fundamental change in a nascent American culture: the freest of men became pawns in a new world of big business crafted by Astor and later robber barons such as Leland Stanford and Cornelius Vanderbilt. Astor and the others had learned the secret of capitalist alchemy: how to change the blood and sweat of others into gold for themselves.

With the advent of reality television, the tragedy of America has returned as farce. Astor and the robber barons have given way to an even more crushing economic force: multinational corporations that sponsor television shows carefully designed by media giants to bring in the circus audiences with their consumer dollars (an insidious refinement of the early Roman political palliative, with the masses now supplying bread for their masters). The most popular shows, Survivor and American Idol, have replaced immensely brave and talented personalities like Bridger and Johnson with shallow caricatures of heroes and heroines who submit themselves to the abuse of the shows’ directors and judges in return for a shot at fame and fortune. It is a pathetic charade of competition in which even the supreme American value, winning, has lost its meaning, become a minor ripple in the onrushing torrent of 24/7 cable news. Who were last year’s winners of Survivor and American Idol? Or the year before, or the year before that? No one knows; no one cares. It doesn’t matter at all; the circus opens tonight under the big top/screen with a new cast of stunted, superficial characters ready to endure any humiliation for a moment of glory. And we, the American multitudes, will be glued to our sets.

In what Nietzsche might have called an example of world-historical irony, one season of Survivor managed to take things beyond farce into sheer travesty and thereby expose a fundamental but contingent premise of American culture: competition and reward are inseparably linked. Who could disagree with that premise, which is the basis of the American experience from grade school to the grave, the underlying force at school, at work, at play, and, in its distilled essence, reality television? You compete, win, and are rewarded with trophies, money, adulation. You compete, lose, and are rejected and forgotten.

As in all previous seasons, Survivor embraced this premise in its 2009 installment: Survivor: Tocantins—The Brazilian Highlands. Set on the Tocantins River, a tributary of the Amazon in north-central Brazil, the show followed its usual format of dividing the sixteen contestants into two “tribes,” thus underscoring its adventure theme of primitive life in exotic locales. The names selected for the two tribes were Jalapao, after the region of Brazil where the show was filmed, and Timbira, the name of an actual tribe of Brazilian Indians whose survivors lived about a hundred miles from the Survivor camp. It would be interesting to know the circumstances behind the selection of the latter name; apparently it was done to add a touch of local color—American
contestants playing at being actual indigenous Brazilians. The series unfolded with the usual ridiculous tasks, backstabbing alliances, hidden immunity idols, the exile island, and elections to vote out unpopular players. The final election ended, as always, with a Sole Survivor, who took the million-dollar prize and became a television personality for a few days. Competition and reward, two sides of a coin. The travesty perpetrated by the show’s directors on an unknowing and uncaring American audience was in selecting “Timbira” as a catchy name for one of the show’s “tribes.” For everything in actual Timbira life, with its traditional homeland a bare hundred miles away, contradicts the premise of competition-reward etched in American thought and exploited in the Survivor series. Had the directors and writers for Survivor: Tocantins bothered to do more than superficial background research in selecting a site for the 2009 season, they would have discovered an anthropological classic, The Eastern Timbira, by one of the foremost ethnographers in the discipline’s brief history, Curt Nimuendajú (1946). The Timbira are one of several groups associated with the Gê linguistic-cultural stock found throughout central Brazil (others include the Sherente, Shavante, and Apinayé).6 A prominent institution of these groups, and one elaborated in intricate detail by the Timbira, is the log race. For the race the Timbira form two teams, whose membership is based on one of several dual divisions, or moieties, in the social organization of the village (age-set moieties, rainy-season moieties, plaza-group moieties, ceremonial-group moieties—theirs is, indeed, an intricate society). The teams travel several miles from the village into the galleria forest, where they cut two sections of burity palm, each weighing 150 to 200 pounds. The race begins with a member of each team shouldering the heavy, cumbersome log and running at full speed toward the village. When he tires, the log is handed off in mid-stride to a second runner and so on until the exhausted runners reach the village and deposit their logs in designated ceremonial locations. A classic competition with a race to the finish line? A race with winners and losers (hopefully none of whom have ingested performance-enhancing drugs that could be detected by a Timbira chapter of the USADA)? No, on the contrary, the Timbira undertake the grueling competition for its own sake: it is a race in which the purpose is to race, not to celebrate a winner and denigrate a loser. Log races form the national sport not only of all the Timbira, including the Apinaye, but probably of all Northwestern and Central Gê. None of the other numerous observances that characterize the public life of these tribes has so deeply roused the attention of civilized observers. This is primarily because, next to the girls’ dances in the plaza, log racing is the most frequently repeated ceremony; further, it stands out for its dramatic impressiveness … .

And now we come to the feature that remains incomprehensible to the Neobrazilian and leads to his constantly ascribing ulterior motives to this Indian game: The victor and the others who have desperately exerted themselves to the bitter end receive not a word of praise, nor are the losers and outstripped runners subject to the least censure; there are neither triumphant nor disgruntled faces. The sport is an end in itself, not the means to satisfy personal or group vanity. Not a trace of jealousy or animosity is to be detected between the teams. Each participant has done his best because he likes to do so in a log race. Who turns out to be the victor or loser makes as little difference as who has eaten most at a banquet. (Nimuendajú 1946: 136, 139)

The farce Marx chronicles in The Eighteenth Brumaire pales in comparison with the travesty of Survivor: Tocantins. Had old Karl been around to view the show, it would have had him clawing at his carbuncles and begging for mercy: Stop! No more of the utter absurdity of human existence! (After all, that is supposed to obey the laws of historical determinism, not chaos.) Louis Bonaparte, that caricature of Napoleon, doesn’t begin to compare with the mediocrities paraded on Survivor. In its obsession with competition and reward, American culture manages to trivialize athletic activity beyond recognition, to destroy the inherent joy of doing. Running or riding a bicycle, along with hitting a baseball, throwing a football, swimming, and skiing, may be done for the sheer enjoyment of the activity, of experiencing one’s body in concerted motion. Breath-hold diving over a coral reef, open-water swimming in San Francisco Bay,7 skiing a winding mountain trail beneath a stratospheric blue sky, running for miles along a deserted country road can be, like the Timbira log race, ends in themselves, instances of genuine re-creation that transport the individual to another realm of being. That experience is close to the exhilaration described by those thirteenth-century Provençal troubadours whose gai saber or joy-in-knowing/ doing Nietzsche commemorated in The Gay Science ([1882] 1974), echoing his own dedication to engaged and passionate experimentation (suchen and versuchen) rather than to methodical system building. To resort to a term no longer fashionable, it is about the quest. It becomes almost impossible for us to capture that sense of exhilaration when our daily existence is subject to a practice that governs American life: keeping score. What did you get on the chem test? How fast did you run the mile? How did you do on the SATs? What number is on your paycheck? How big is your house? Your car? Even, for God’s sake, your dick? (Time to email that order for Viagra—comes in a plain brown wrapper! But, oops, definitely a performance-enhancing drug!) All these questions and countless others like them are distilled in what we do for fun—or have others do for us: sports. Guys who could not manage even to run the bases sit slumped in seats at Yankee Stadium, cradling scorecards they can barely see over their beer bellies,
but they keep score. The activity itself, the lived experience of superbly conditioned athletes on the field, is reduced to a pile of lifeless statistics, the raw material for an endless stream of other numbers that eventually lead to selecting the winner, the Sole Survivor in American society’s reality show of Life. These absurd questions and activities that permeate and shape all of life in America conceal a monumental irony, a cosmic joke: our obsessive need to keep score, to identify and reward those who are very good at what they do, may well lead to missing or misinterpreting truly exceptional individuals who fall outside the limited perspectives of the all-too-ordinary individuals who pass judgment on them. There is a story here, really an apocryphal anecdote (it is an Einstein story and, like most, probably is apocryphal). It concerns an organization that is one of the most prominent scorekeepers in the country and, increasingly, around the world: the Educational Testing Service (ETS), creator and administrator of the SATs that have impacted the lives of oh-so-many Americans. From an early age, children with some intelligence are taught to dread the SATs; they are told that a high score may advance their chances of becoming a professional or a manager of some sort, and thus joining that shrinking middle class (19 percent and going down) that Domhoff ([2005] 2017) described (see above). A low or even average score may doom a child of a family with ordinary means to a difficult life of labor and menial jobs; he will sink into that vast pool of 80 percent of the population who are just surviving. The story goes like this: It seems that when the ETS was just getting organized, in the late 1940s, its button-down executives were anxious to determine the effectiveness of the math section in particular—mathematical facts being irrefutable, they wished to calibrate their set of questions so that the test would accurately identify how students performed on a scale of dull to brilliant. Since the ETS was located in the intellectual mecca of Princeton, New Jersey, someone had a bright idea: just up the road, at the Institute for Advanced Study, there was an individual who was making quite a stir in the world of mathematics and physics, one Albert Einstein. Why not have him take the SAT math test they had just put together? Certainly he would establish a benchmark against which young test takers could be ranked. So they approached Einstein, he agreed, and they sat him down with the test. Now, a major portion of the math SAT tests a student’s ability to discern a pattern in a series of numbers. A question would supply a four-number series, say 2-4-6-8, and a multiple-choice set of possible answers, say 16, 24, 10, 1. The student is required to select the answer that best fits the pattern established by the four-number series, in this case the 10. As Einstein went through this section of the test, for each question he thought of an equation that would fit each of the multiple-choice possibilities. Then he picked the answer that gave him what he found to be the most interesting equation—almost always not the answer the test designers wanted.
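A brief mathematical aside (the formulation is mine, not part of the anecdote): Einstein’s game is guaranteed to work, because five data points always lie on some polynomial of degree four. For the series 2, 4, 6, 8 and any candidate fifth term $a$, consider

$$p(x) = 2x + \frac{a-10}{24}\,(x-1)(x-2)(x-3)(x-4).$$

The product term vanishes at $x = 1, 2, 3, 4$, so $p$ returns exactly 2, 4, 6, 8 there; at $x = 5$ the product equals 24, so $p(5) = a$. The designers’ intended answer, 10, is just the special case in which the correction term disappears and $p(x) = 2x$; the “wrong” choices 16, 24, and 1 fit equations that are merely less tidy.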


This little experiment doubtlessly disappointed the ETS executives, but judging from the content of the SAT math test that has been inflicted on students for the past sixty-plus years, its results did nothing to dissuade them from their course of action. Einstein was obviously an anomaly, an oddball, and his toying with their sacred exam could safely be disregarded. A thought that might have given them pause, but clearly did not occur to the right-thinking, compete-and-win executives of the ETS, is that if anomalies occur in so highly structured a world as mathematics and theoretical physics, what bizarre deviations from agreed-on, socially acceptable norms might be found in other walks of life? In order to keep score it is necessary to have an authoritative scale, a means of ranking and grading individual performance. But there are in this life those rare individuals whose extraordinary gifts defy ranking; they go off the scales fixed by mediocrities like the executives of ETS. People are different, and a few people are so vastly different that it is senseless to tabulate, to score, their performance. In a catchphrase from the failed cultural revolution of the late 1960s, now but a sad and haunting memory, there are indeed the haves and have-nots, but there are also the have-something-elses. Those remarkable individuals either go off the charts or, more often, and tragically, fall between the cracks and are lost. In that case their exceptional ability, which initially establishes them as stars, dooms them to censure and sometimes ruin when they allow their exceptional abilities, whether in mathematics (John Nash), chess (Bobby Fischer), engineering (Nikola Tesla), aviation (Chuck Yeager), philosophy (Friedrich Nietzsche), poker (Stu Ungar), or, in the case at hand, bicycle racing (Lance Armstrong), to run afoul of standards of acceptable behavior. Even if we insist on maintaining scales to rank people, we encounter the next insuperable obstacle: there is not a single scale, or even a few, that adequately evaluates individual ability. Rather, there is a tangled multitude of scales that crosscut and often conflict with one another, so that any attempt to implement one hopelessly distorts the overarching truth of boundless difference. As a thirteen-year-old, Lance Armstrong already possessed an unprecedented combination of raw physical ability and mental determination. Yet everything about his society and his immediate circumstances—he was, after all, named after a star wide receiver of the Dallas Cowboys; such was his family tradition—led him to embrace organized sport as the means of realizing his potential. And that decision, taken in the context of a judgmental and punitive society, proved his undoing. None of us can experience or perhaps even imagine the tremendous stamina and mental toughness required to stay at the head of the pack of the Tour de France, day after day, year after year, but all too many of us are quite prepared to thwart those remarkable displays, to declare them illegal, not sufficiently “healthy and clean” for the fearful and vengeful herd of nonentities that makes up American society.


A parting thought: The vast sea of seven billion human beings awash on this fragile planet, those multitudes, is akin to the night sky—dark, without depth or substance, obscure, formless. That sky makes up a background for the stars, each star impossibly isolated from the others, alone, blazing in the dark immensity of space, each with its own history, its birth, evolution, and death. There is no racecourse, no set of standardized tests, no contest of any description that a star must strive to win. The star’s light radiates aimlessly, forever, illuminating the darkness of space and imparting to it whatever form it may possess. Here or there its beams happen to strike a random atom, perhaps, on the rarest of occasions, an atom in the retina of a sentient being. That is all there is, that is the “career” of the star. Here or there … here or there an individual star blazes so brightly that it consumes itself, devours its own matter, reaching the point at which it collapses in on itself in a spectacular explosion, a supernova of cosmic proportions, incinerating or scorching everything around it. Then it sinks into oblivion forever. Lance Armstrong.

Notes

1. The full interview is available in four parts on YouTube, beginning with “Oprah and Lance Armstrong The Worldwide Exclusive Part One” (retrieved 10 June 2017 from https://www.youtube.com/watch?v=e_-yfFIiDao).
2. For a detailed presentation of this proposal, see Drummond 1996: chap. 3, “A Theory of Culture as Semiospace.”
3. For extensive discussions of this idea, see Drummond 2010 and chapter 4 in the present volume.
4. See the classic work by Henry Nash Smith, Virgin Land: The American West as Symbol and Myth (1950).
5. Johnson was most definitely not the character portrayed by Robert Redford in Jeremiah Johnson. See the gripping account of the Liver Eater in Thorp and Bunker 1969.
6. In addition to Nimuendajú’s monograph, for a thorough analysis of Timbira culture see Drummond 1967–68.
7. See Edwin Dobb’s brilliant essay “Immersed in the Wild” (2010).

Chapter 4

n

Shit Happens
An Immoralist’s Take on 9/11 in Terms of Self-Organized Criticality

I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth.
—Umberto Eco, The Limits of Interpretation

Momma used to say that life is like a box of chocolates: you never know what you’re going to get.
—Forrest Gump (originator of this essay’s title phrase), Forrest Gump

Toward a Nietzschean Anthropology: I believe that a Nietzschean anthropology is possible … . [It] would adopt Nietzsche’s notion of humanity as a dynamic entity in process of fundamental transformation. Also Sprach Zarathustra sings with the dialectical tension of the antithetical yet mutually implicative processes of untergehen and übergehen, of humanity as a transformative system always caught up in a going-under and a going-over. “Man is a thing that will pass” … . That anthropology would … pursue a starved, reckless, take-no-prisoners cultural analysis of the inherent strangeness of our species.
—Lee Drummond, “Culture, Mind, and Physical Reality”

Interpreting 9/11

Nature abhors a vacuum, but not nearly so much, it would appear, as does culture.1 When those airliners plowed into the World Trade Center (WTC) and the Pentagon, they unleashed, in addition to that incredible destruction, an avalanche of interpretations: anyone with breath to speak (and, in truth, a microphone in front of his or her face to breathe into) or ink/electrons to write immediately began to spew forth a great torrent of words, a torrent not unlike the gruesome confetti of airborne pages settling to the ground amid the choking dust and ruins of the twin towers.


As with other bizarre and deadly events that have burst upon the American public completely unexpectedly and almost without precedent—Jonestown, Waco, Oklahoma City, Heaven’s Gate, Columbine—9/11 unleashed a mad jumble of reactions. The mediocrities we anoint as our commentators and spokespersons of the conventional political and cultural spectra were caught totally unprepared. As the endless video loops of the planes hitting the towers and the towers coming down bored their way into the depths of consciousness of every viewer caught, like a deer in the headlights, in the glare of the television screen, those “commentators” realized they had to come up with another endless stream or loop, a spiraling loop of words to fill in the senseless images of destruction and terror. Twenty-four-hour television news, day after day of newspapers with only one real story, magazines rushing special issues into production—all that frenzied activity, like the frenzy of victims and onlookers struggling to escape the scene in lower Manhattan, struggling to survive, to draw the next breath, however choked with dust and filth it might be, led into the vast yawning maw of the American public’s need to know, of their need to comprehend the incomprehensible, to fix on a pin this horrid, unknown little monster that settled, on wings of fire and death, into the heart of the nation. With every conceptual and emotional bearing lost or destroyed, with the gyroscope of common sense smashed and ripped from the control panel, our “commentators” were like the rest of us, in free fall, in a free-for-all, trying desperately to sustain that endless torrent of words, to keep pace with the mindless, unstopping video loop of those planes and their stricken targets. And so their reports, their words, were all over the place, as uncoordinated and confused as the scene of panic at Ground Zero.

A day after the attacks, Jerry Falwell was on his buddy Pat Robertson’s cracker-barrel TV show, ranting about pagan abortionists and women’s libbers calling down God’s wrath on America by their foul, unholy actions. According to Jerry, who claims to be on a first-name basis with his Maker, the Big Guy figured, hey, things are really getting out of hand down there! I’ll have to show those abortionists, those fornicators, those ungodly liberals. But how? Shall I wipe out Los Angeles? Burn Paris to the ground? Afflict the sinners with a terrible pestilence? Cause the oceans to boil? Nah. Wait, I know! I’ll take out the World Trade Center! Divine logic! What could be clearer to an all-knowing, all-powerful God? … And there are people, plenty of people, who believe this absolute crap. After all, look who won the non-election.

At the other extreme of the political and intellectual spectrum, but with equally tactless timing, Susan Sontag weighed in, in a hastily compiled New Yorker piece (2001), with a message curiously similar to Falwell’s: it was our
fault. Terrible as they were, the events of 9/11 should not have come as a surprise. America has become corrupt and evil, not in the eyes of Falwell’s God, but in the all-seeing intelligence of a superstar critic who can discern the sinister workings of a government conspiracy in every line of a New York Times article. If we have eyes to see and a brain to think with, we routinely wallow in guilt, good guilt, well-deserved guilt, the sickening, consuming rot that gnaws every liberal’s entrails. How could we—meaning we perceptive, well-intentioned, enlightened sorts—think otherwise? After all, it is clear that lots and lots of people in the world hate us, and with good reason. American bombs, land mines, economic embargoes, and plain old-fashioned capitalist exploitation have earned us the enmity of much of the Third World, those dark, impoverished masses who, on rare occasions, tire of huddling and decide to strike back. Attack the Great Satan. Hijack the planes. Aim them at the oppressive bifid penile icon of America and Globalization. We are guilty. We deserved it. Extreme views, to be sure, but then most views in the wake of 9/11 were extreme: contortions of an everyday reality stretched well past the breaking point. George Jr., who prior to the fateful day probably could not locate Kabul on a map, not only learned his geography in a hurry, but discovered a deep appreciation for Islam as a religion of peace and forbearance. He and his senior henchmen beat a path to the nearest mosque, where they listened piously to proclamations by Islamic clergy of their love for peace and brotherhood. George Jr. even chimed in to underscore their sentiments, his Texas ballpark drawl resonating strangely in that exotic setting. Nearly simultaneously, Salman Rushdie, who has been on the receiving end of some of that Islamic peace and forbearance, made the alarming suggestion, “Yes, This Is about Islam” (2001). That Rushdie, he sure knows how to hit a nerve. While George Jr. and cohorts were wrapping themselves in pious encomiums about Islam in preparation for going forth to murder tens of thousands of Muslims, Rushdie had the bad form and unerring vision to observe that “Death to the Infidels!” is not just a stock phrase from B-movies, but a sentiment endorsed, if not always shouted, by a good many of the planet’s billion-plus Muslims. With the pot on a high boil and everyone reaching in with a stick to stir it, these and many more conflicting interpretations rose to the top, forming an impenetrable froth in which we all lost our way. It began to seem that Inspector Harry Callahan was right, in The Dead Pool (1988), when he observed that opinions are like assholes: everybody’s got one. The opinions that counted most in the tragedy and its aftermath, of course, were the most extreme of all. Never mind Falwell, Sontag, Rushdie, that series of closely argued essays in the New York Review of Books, or those really esoteric academic interpretations that have begun to appear. (Why, I even noticed a piece not so long ago that tied everything in with Mohammed
Atta’s “genital anxiety”!) Never mind all those; the opinions that really mattered were those of George W. Bush and Osama bin Laden. George Jr. and Osama: central casting could not have come up with a better pair of opposing characters. George Jr., a hayseed Goliath as powerful as he is dull-witted; Osama, a mysterious poet/murderer who resembles conventional portraits of Jesus, only cradling an AK-47 rather than a cross and speaking in Arabic as flowery as George Jr.’s Texan is coarse. Utterly different characters, yet both seem convinced to the marrow of the fundamental rightness of their causes. Not since Reagan have we heard such Bible Belt slogans from the White House: echoing Ron’s obsession with the “Evil Empire,” George Jr. routinely and matter-of-factly refers to the “evildoers.” He’ll git ’em; he’ll smoke ’em out of their holes. Yay-uh, brother! Shout hallelujah! Of course, it’s hard to know just what Osama’s pronouncements are, since Condoleezza Rice and Donald Rumsfeld have decided that even his words are a terrorist act and banned them from our national airways. Before their odious censorship began, though, it was pretty clear that Osama’s views are as absolutist as George Jr.’s: America, Zionism, and globalization are all avatars of the Unbeliever, the Infidel, the Great Satan that humiliates and murders Muslims everywhere. America and its icons must be destroyed, wherever and whenever possible. Forget rules of engagement, forget articles of war, just git ’em, kill ’em all. Or however that translates into Osama’s classical Arabic. Although the most bitter of enemies, George Jr. and Osama, along with much of the world in the days following the attacks, appeared to share the view that 9/11 was a momentous, earthshaking event. One of Bush’s first pronouncements after the little coward came out of hiding that fateful day was that “the world has changed.” He then signaled to the American public and particularly to his Pentagon and defense-industry cronies that we were embarking on “the first war of the twenty-first century,” a global war against a new enemy: terrorism. Osama, in his videotapes, clearly saw the event as a decisive strike, a culmination of years of more modest Islamist bombings. For both men, the manifest enormity of the 9/11 attacks presaged correspondingly momentous dramas that were to follow immediately: a global war against terrorism for George Jr.; a global Islamic jihad against the West for Osama. Afflicted by the feverish gloom of the weeks following 9/11, most of us were probably inclined to accept this highly dramatic version of things: the unthinkable had occurred; the world had changed; the global ante had been upped precipitously; endemic warfare was about to descend on us. But what if none of that is true? What if 9/11, for all George Jr.’s and Osama’s dramatic pronouncements, for all the media frenzy, even for all the national paralysis, what if 9/11 is something that just happened? Not the pivotal unfolding of a grand strategy, the ultimate coup of an international terrorist conspiracy. And not the tragic wake-up call for American military and
“homeland security” forces to rear up and take charge of a nation grown lax and soft under eight years of a Democratic administration. What if 9/11 is none of those things, but instead is one of those supremely unlikely events that catch us completely off guard, rivet our attention, perhaps even change a number of things about our lives, but then eventually dissolve into the mists of history, only to be replaced by other, completely different, completely unrelated events? That is what I would suggest. Just as there was no Evil Empire (another of Ron’s pre-Alzheimer’s delusions, abetted by acting in too many B-movies), so there is no vast international conspiracy, no global terrorist threat, no implacable “evildoers,” no “axis of evil,” themselves perhaps largely a product of George Jr.’s James Bond fantasies of going up against SMERSH and SPECTRE, tinctured by his Masonic Lodge paranoia. And on the Islamist side, there is no Great Satan, no world-dominating infidel intent on mounting a modern Crusade to crush the true believers, none of those dramatic entities conjured up by a super-rich Saudi playboy/businessman turned visionary/murderer. All these ideological layerings on the event of 9/11 are just that: residues of our desperate, flailing attempts, in the grip of the utter confusion of our conflicting emotions of shock, anger, fear, hatred, to find relief in a web, largely already spun, of interpretations of uninterpretable chaos, of answers to the unanswerable questions posed by The Event. As Umberto Eco observes in The Limits of Interpretation, it is those desperate attempts themselves that make the world terrible by approaching events as though they were explicable. It comes down to this: Do we, meaning here primarily we Americans, and they, meaning here Muslims around the world, choose to accept George Jr.’s and Osama’s versions of 9/11? Do we sign up for George Jr.’s hastily concocted war against global terrorism and turn a blind eye to his looting the national coffers as he shovels more and more chips across the table to his murderous cronies in the Pentagon and CIA? Do we sit by as George Jr. and that notorious bigot John Ashcroft further erode civil liberties and individual privacy under the noxious banner of “homeland security”? And in the other camp, do Muslims around the world feel Osama’s call so deeply that some wrap their bodies with high explosives and head for a bus stop or crowded restaurant, where they set off an indescribable carnage? And do their surviving relatives, friends, and neighbors, chafing under the hateful oppression of the likes of George Jr. and that old genocide Ariel Sharon, then turn out in the streets, flourishing larger-than-life portraits of the martyrs and celebrating their righteous acts and their ascent to the Paradise of the Seventy Virgins? I would urge that we follow neither path, because both start from and lead to fundamental misunderstandings or, worse, misrepresentations of the nature of human existence.


To begin to get a sense of what is involved in my proposal, it is first necessary to step back from The Event and, in doing so, learn to regard 9/11 as a lens through which to view the world and not as the already constituted object of our informed vision. We close our eyes and in our mind’s eye see those planes smashing into the twin towers, again and again and again. The power and immediacy of that image obscure the questions that every thinking person must ask when she then opens her eyes: What are those images connected to? What are the social processes, cultural values, and even personality traits that lead into and away from The Event? How do those same social processes, cultural values, and personality traits influence our very perception of that traumatic image, of what appears to be happening right before our eyes? In other words, what is the nature of the association of before and after, of “cause” and “effect”? Questioning the what and how of 9/11 in this way is to open an inquiry into the structure and process of the event’s sociocultural context. It is to begin to conduct a cultural analysis. I suggest that such an inquiry leads us to sweep aside almost all the interpretations of 9/11 generated to date. George Jr. and Osama, Falwell and Sontag, Peter Jennings and the reactionaries on Fox TV, the flag wavers and the occasional protestors, nearly all these figures and the reams of documents and miles of videotape they’ve produced are beside the point, useless and often pathetic posturings that obscure the structural and processual realities of the event. In pursuing the two major questions, the what and how of 9/11, a cultural analysis—at least, the one I conduct here—regards interpretations of the event not as alternative answers or explanations, but as themselves aspects of the problem to be investigated. The what and how, the structural and processual issues before us, lead into two discrete and traditionally segregated areas of inquiry. In taking up the structural problem, we are led to consider the nature of values, here writ large as a confrontation between Good and Evil. And in addressing the processual problem, we must examine the assumptions we bring to how things are connected, the nature of connectedness, of cause and effect, of before and after. These two concerns involve dissimilar literatures, seldom combined in a single essay. In taking up the question of the nature of values, I draw on Nietzsche’s Beyond Good and Evil ([1886] 1992) and On the Genealogy of Morals ([1887] 1992), works cited often enough today but rarely taken to heart by social commentators and never, to my knowledge, applied to the events surrounding 9/11. In those works Nietzsche characterized himself and his position as “immoralist,” as one who rejects conventional values, seeing in them the very antithesis of what they purportedly represent. The present analysis is immoralist in that sense: it regards 9/11 as the instrument for a fundamental rethinking and Nietzsche-like revaluation of the conventional values paraded
in the media. In addressing the processual problem of how we understand the connection between 9/11 and its social context, I utilize work in complexity theory, specifically the centerpiece of that theory: the concept of self-organized criticality. As we proceed, I hope to establish that these highly dissimilar literatures, Nietzsche and complexity theory, come together at a deep level to provide a radically new understanding of the world.

Before and After, Cause and Effect

When catastrophe strikes, analysts typically blame some rare set of circumstances or some combination of powerful mechanisms … . But systems as large and as complicated as the earth’s crust, the stock market and the ecosystem can break down not only under the force of a mighty blow but also at the drop of a pin. Large interactive systems perpetually organize themselves to a critical state in which a minor event starts a chain reaction that can lead to a catastrophe.
—Per Bak and Kan Chen, “Self-Organized Criticality”

Unless one believes that values are somehow preordained, handed down from on high, one must accept, even if only implicitly, that values, like organisms, seasons, and everything else, come and go, enter and leave the world, are born and die, are created and destroyed. Hence the question of connectedness or succession, of before and after.

Our commonsense understanding of how events are connected is bound up in the notion of cause and effect. Something happened, some event we can isolate and readily identify, and that made something else happen, again some event that is clearly what it is. A child throws a rock; the rock hits a window; the window breaks. Mohammed Atta manipulates the controls of an airliner in such-and-such a way; the airliner impacts the tower; the tower collapses. Like all commonsense understandings, the notion of causality serves us reasonably well in the course of our daily lives; it is in fact the basis for what we like to think of as our rational, scientific approach to life. But like all commonsense understandings, the notion of causality loses its efficacy and begins to tear at the seams when it is pressed into service to account for aspects of events that are part of the rough-and-tumble flow of everyday existence. Why did the child throw the rock? Was he aiming at the window? Did his hold on the rock slip as he released it? Why did Atta board the plane that fateful day? These questions, questions we regard as the real crux of the matter, are difficult or impossible to answer within the tidy framework of cause-and-effect association. And not because they are somehow “psychological,” and therefore
less accessible than the strictly “physical” questions about rocks, windows, and aircraft controls. They are so difficult to answer because posing them introduces additional elements or agents to an overly simplified situation. Unless we have contrived an artificial little experiment and enlisted a child to hurl a rock at a window set up in a laboratory, the child throwing the rock in the everyday world involves a host of other considerations. Did someone give him the idea to throw the rock? Whose house did the rock hit, and did the child have some grievance against that owner? Was the throw part of a larger event, a game with other kids perhaps, or was he acting alone? Questions of this sort mushroom when we shift our attention from the child to Mohammed Atta.

The real world of children hurling rocks and adults hurling 757s is composed of a great many individuals taking a great many actions for a great many reasons. It is a terribly complicated world in which the interaction of ever-changing sets of agents and events is the prevailing rule. It is, in short, what Bak and Chen (1991), in the essay cited above, describe as a complex system. In such a system it is never possible to isolate a simple cause-and-effect association between two elements that is of much interest. A single element or event in the system is always influenced by an indeterminate number of other elements and events, whose degree of influence is itself indeterminate. It is not that the principle of cause and effect is invalid; far from it. It is just that there are always so many causes with such varied and shifting effects that to single out one or two for special attention is arbitrary or—and here we come to the nub of things—politically or culturally motivated.

Whether other cultures put such faith in a mechanistic principle of cause and effect is an interesting matter for ethnographers to explore, but it is undeniable that American culture, with its can-do, hands-on attitude, values that principle greatly. Far too greatly. When we see something broken, we try to fix it. The first step in fixing it is to figure out what went wrong, in short, what caused the problem. And the more evident or spectacular the failure, so, we think, the more evident or spectacular will be the cause of that failure. Some things, such as cancer, are slow, subversive problems whose causes are murky, hard to figure out. But a broken bone or a gashed arm is right there in front of us, and so, we believe, is its cause. Moreover, the greater the injury, the more dramatic the event, the more blatant the cause. This is precisely the mind-set responsible for the public reaction to 9/11. From out of nowhere, flames and exploding debris rained down on the heart of the nation. Such public, visible devastation must have a correspondingly dramatic explanation. These things don’t just happen. There must be some fiendish genius behind it, a genius who commands a legion of unbelievably fanatical, evil followers. How else to explain it? How can such a thing happen?

One thing of which George Jr. and his henchmen, along with the media moguls of New York City, are not guilty: when the unthinkable happened
blocks from their own bases in Washington and Manhattan, they did not turn to complexity theory. Their patriotic outrage readily seized on an explanatory scheme that was ready-to-hand: American common sense, with its unquestioned assumption of linearly scaled cause-and-effect association. Things happen for a reason, and when big things happen there must be big reasons behind them. But, as Bak and Chen (1991) demonstrate in their masterful essay, that’s simply not how things happen at all. Bak and Chen introduce an entirely different way of thinking about events, one that might have had a tremendous influence on American reactions to 9/11, at home and abroad, if only George Jr. and the rest of the Washington and Manhattan power brokers had been better read and, well, a whole lot smarter. In fairness to George Jr. and crew, though, thinking through this matter involves more than simply modifying one’s assumptions about causality: it requires adopting a fundamentally new concept of the nature of a system, in this case, of the system that is American society. How can a trivial event trigger a dramatic change in a large system, whether that system is, in Bak and Chen’s examples, the earth’s crust and the stock market, or, here, American society? Why is cause and effect not linearly scaled? Why should the gnat bother the elephant? The answer that complexity theory offers to these difficult questions is remarkably counterintuitive. We are used to thinking of large-scale, enduring organizations as stable. After all, in order simply to become an enduring, large-scale organization, a system by definition has had to get its very involved act together and keep it together for a long time. That’s what “stable” means. Isn’t it? Well, as it turns out, no. What Bak and Chen are saying is that any system composed of a considerable number of elements that have interacted among themselves for an extended period has worked itself into a state of delicate balance. “Stability” is not the unswerving course a juggernaut sets through the ocean swells of history (America as a “ship of state”); it is, rather, the quavering, step-by-step maneuvers of a high-wire artist attempting to cross an abyss. The slightest thing—a tiny slip, a gust of wind, a tremor along the cable—can spell disaster. In a stunning reversal of common sense, complexity theory proposes that the “natural” state of any highly organized system is not stability in the conventional sense, but criticality or, specifically, self-organized criticality. A system’s interacting elements make a series of accommodations to one another, accommodations that are always in the nature of a compromise: move just enough this way, but not so much as to disturb some other relationship extending in another direction. Criticality is very much a boundary phenomenon, a creature of the periphery. If a system’s internal accommodations are strong, unambiguous, and rigid, then it simply ossifies, freezing into a simple crystalline structure that loses its dynamism and interest. Conversely,
if its accommodations are sporadic and ineffective, its elements cease their patterned interaction and the system dissipates. The “happy medium” of an organized, dynamic system is thus not particularly happy, for the inherent order of that complex system is not a placid, established regularity but a state of tension operating at the very edge of chaos. When applied to the self-organized system that is American society and to the event of 9/11, the concept of criticality puts things in a perspective very different from that promulgated by George Jr. and the media moguls. The fact is that 9/11 was not an intrusive calamity, a sudden eruption of terrorist-sponsored chaos that threatened a well-established, God-fearing, law-abiding, solid-as-a-rock America. That America is a fiction, an ideological fabrication that its masters and, in truth, most of its citizens tell themselves to keep an alternative and awful truth at bay. Much of what counts as The Event of 9/11 was internal, not intrusive, and an example not of the intervention of foreign “evildoers” but of the simple if counterintuitive truth that social arrangements are perpetually about to come unstuck. Our lives, yours and mine, are strung along a razor’s edge of circumstance. On either side of that razor’s edge there awaits an abyss, an abyss where there resides not Fate, but just one of those things.
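The dynamic Bak and Chen describe can be watched in miniature. Their argument grew out of a toy model, the Bak–Tang–Wiesenfeld “sandpile,” and a few dozen lines of code are enough to reproduce it. The sketch below (in Python) is an illustration only: the grid size, the toppling threshold, and the number of grains dropped are arbitrary choices made for this example, not parameters supplied by Bak and Chen. The rule itself is theirs: grains fall one at a time on a grid, and any site that accumulates four grains topples, shedding one grain to each of its neighbors, which may topple in turn. Left to run, the pile organizes itself into the critical state: most dropped grains do nothing at all, while an occasional grain, indistinguishable from the rest, sets off an avalanche that sweeps the entire grid.

import random

SIZE = 20        # grid dimensions (an arbitrary illustrative choice)
THRESHOLD = 4    # a site holding this many grains topples

def drop_grain(grid, r, c):
    """Drop one grain at (r, c), relax the pile, return the avalanche size."""
    grid[r][c] += 1
    avalanche = 0
    stack = [(r, c)]
    while stack:
        r, c = stack.pop()
        if grid[r][c] < THRESHOLD:
            continue
        grid[r][c] -= THRESHOLD       # the site topples ...
        avalanche += 1
        if grid[r][c] >= THRESHOLD:   # ... and may still be unstable
            stack.append((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < SIZE and 0 <= nc < SIZE:
                grid[nr][nc] += 1     # one grain to each neighbor
                if grid[nr][nc] >= THRESHOLD:
                    stack.append((nr, nc))
            # grains pushed past the edge simply leave the system
    return avalanche

random.seed(11)
grid = [[0] * SIZE for _ in range(SIZE)]
sizes = [drop_grain(grid, random.randrange(SIZE), random.randrange(SIZE))
         for _ in range(50_000)]

print("drops that caused no toppling:", sizes.count(0))
print("largest avalanche from one grain:", max(sizes), "topplings")

Tabulate the avalanche sizes from such a run and no typical scale emerges; the distribution follows a power law, in which the grain that levels the grid is the same kind of event as the grain that does nothing. That, in compressed form, is the claim advanced here about 9/11: a catastrophe need not have a cause commensurate with its size.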

(Beyond) Good and Evil

Whoever fights monsters should see to it that in the process he does not become a monster himself. And when you look long into an abyss, the abyss also looks into you.
—Friedrich Nietzsche, Beyond Good and Evil

I have said that 9/11 is a lens for interrogating the conventional values we bring to that event and, in the process, for revaluing them in a manner that follows Nietzsche. The view I present here is that the principal reactions to 9/11—George Jr.’s and Osama’s, and through them the public response of America and the Islamic world—are hopelessly, obscenely wrong. The millions of American flags plastered on shop doors, office windows, and car bumpers throughout the country and the postcard- to banner-size photos of Osama sold in kiosks throughout the Muslim world are tokens of a willful and grotesque refusal to look deeply and searchingly into the abyss of The Event. When one does conduct such a search, then one discovers readymade values that disguise monsters and, just perhaps, a set of radically different values that rise from the ashes of convention and stereotype. Why has it been so difficult for social commentators to see this? Forget about the political dim bulbs and the vacant, tarted-up faces of TV news
anchors—their ilk can only be expected to seize on the most shallow and crude explanations. When disaster strikes, when the monster peers up out of the abyss, they intuitively head for the scoundrel’s last refuge that Samuel Johnson named: patriotism. But why have much smarter people dared to go only a little further in examining 9/11? Why have thinkers of Sontag’s caliber stopped with knee-jerk radicalism, using 9/11 and the US government’s totalitarian response as just another segue into a familiar critique of American foreign policy? Why have professional students of society and culture—and here I am thinking of my own tribe, cultural anthropologists—not plunged into the depths of cultural analysis that 9/11 demands?2

At least for this last question, I think I have the answer. The core of the problem for cultural analysis is that popular discourse—that of George Jr. and Osama, Dan Rather and Peter Jennings—is steeped in values and value judgments. That discourse is mired in convention, in stark stereotypes of good and evil that are assumed to be self-evident. In confronting the event of 9/11, our politicians and media personalities apply those stereotypes unquestioningly; for them and their constituencies those stereotypes capture what social life is all about. But cultural anthropology, throughout its brief and unimpressive intellectual history, has studiously avoided discussing values as values and has maintained a strict silence regarding those ultimate values, “good” and “evil.” We will soon need to take a close look at the phenomenon of cowardice and at specific acts of cowardice in this inquiry into 9/11, but even at this juncture it intrudes—for cultural anthropology, and most contemporary informed or “intellectual” cultural criticism, has played the part of the coward.

Cultural anthropologists and their academic cousins have avoided any direct engagement with values, and with the danger and personal turmoil values carry with them, through a bizarre combination of segregation and denial. Segregation has been accomplished through the doctrine of cultural relativism: “they” have their own, distinctive set of values that make perfect sense within their cultural universe and that are not subject to “our” Western evaluation of them. Ruth Benedict’s Patterns of Culture (1934) encouraged three generations of cultural anthropologists and the educated American public to adopt the liberal, self-congratulatory posture of cultural relativism. In a shameless misrepresentation of Nietzsche’s concepts of Apollonian and Dionysian, as developed in “Homer’s Contest” ([1872b] 1954) and The Birth of Tragedy ([1872a] 1954), Benedict introduced the wholesome, feel-good attitude that has dominated professional and popular thought about other cultures: “they” are miniature, self-contained worlds whose inhabitants, once “enculturated,” think and behave in a manner entirely consistent with the standards and beliefs of their fellows. If Zuni are the embodiment of tranquil solidarity, Kwakiutl of unbounded competitiveness, and Dobuans of continual paranoia, it is because their respective cultural values have fashioned them
in those ways. What appear to be extreme behaviors and beliefs from outside one of those societies are seen to be perfectly explicable and “natural” when put in their proper social context. It is only we fallible outsiders, meaning Westerners, whose own cultural precepts place blinders on our vision, who adopt the heinous perspective of “ethnocentrism” and judge those pristine others by our own corrupt selves. That classical formulation is now in tatters. “Cultures” were never segregated, internally consistent complexes of values and beliefs, but were always unstable arrangements of conflicting values held by diverse assortments of individuals. And in the seventy years since Benedict’s work appeared, waves of migration, world war, nation building, and global consumer capitalism have swept away even the vestiges of those supposed “primitive worlds.” From the standpoint of a searching cultural analysis, the relativism of Patterns of Culture is an intellectual cop-out: by claiming that “their” values are right for “them,” however bizarre and repugnant they may seem to “us,” we are spared the difficult but necessary next step of interrogating their values in terms of our own and, far more critical, our values in terms of theirs. Postmodernist cultural anthropology, with its signature technique of “deconstruction,” has carried Benedict’s intellectual cop-out several steps further, while claiming, of course, that it overturns all her thinking. Under the dubious banner of “anti-essentialism,” postmodernists dispute the claim that a human group or “culture” maintains a core set of beliefs and values basic or “essential” to its existence. Instead, they regard every belief or value as simply a reaction to some other belief or value, an interpretation of an interpretation, a text commenting on another text. Postmodernist anthropology thus absolves itself of having to confront even Benedict’s segregated values by denying their relevance altogether: the urgency and fateful consequences of values simply drop out of the picture. It is a shame and a scandal for cultural anthropology, the field whose dual techniques of ethnography and cultural analysis promised to lay bare the fundamentals of human life. In a world riven by ethnic hatred and warfare, a world of violence unimaginable when Patterns of Culture appeared, postmodernist anthropologists have closed ranks, a tiny group of smart, well-protected academics, modern courtiers really, engaged in private games of hyper-reflexive word play while the world seethes in blood. The niceties of intertextual interpretation would be lost on Mohammed Atta and his band of fanatics, as they would be on Israeli storm troopers who rain death and destruction on Palestinian concentration camps from their American-made attack helicopters and tanks, as they would be on Serb artillery men who shelled civilian apartment buildings in Sarajevo month after month while the world stood by, and as they would be on the roving gangs of Hutu butchers who hacked to death an average of ten thousand
of their Tutsi countrymen every day for ninety days, again while the world stood by. In all these cases, values were obviously of the utmost importance, a matter of life and death. Cultural anthropology’s refusal to grapple with them reveals more than a flawed analysis; it reveals a failure of nerve. It is an act of cowardice. For the immoralist to pursue a cultural analysis of 9/11, he or she must first disavow the heritage of Benedict and the postmodernists, must recognize and avoid the condition of moral paralysis induced by their cowardice. In doing so, however, the immoralist does not thereby place cowardice beyond the purview of analysis. What does it matter, after all, whether a few pale and trembling academics draw back from the searing flames of conflict? There is a larger issue. The immoralist cannot discount a possible role for cowardice in the unfolding saga of 9/11 because it may well prove central to the event. That is precisely what I would suggest.

America the Bullyful: Scale and Perspective

In the hours and days immediately following the attacks, as I sat transfixed watching nonstop TV coverage, even then, in the rush of emotion, a troubling thought took root and began to grow: having been struck a quick succession of stinging blows, America, its president and its people, was behaving like a coward. A particularly repugnant sort of coward. America was behaving like a bully.

The bully uses its size and strength to intimidate its smaller and weaker fellows and gets used to pushing them around. Gets used to liking it. But the bully’s obnoxious behavior doesn’t flow from its strength, just the opposite: despite its fear-inspiring might, the bully knows itself to be weak inside. It is afraid. And when, on occasion, one of its victims stands his ground and dares to strike back, the relatively minor blow he manages to inflict is enough to send the bully squalling home with a bloodied nose, a blackened eye. George Jr.’s blustering pronouncements, once he came out of hiding that fateful day, were the self-pitying howls of the bully who can dish it out but can’t take it. How dare they commit these unspeakable acts of terror? Those evildoers! The world has changed! Boo-hoo-hoo.

What is the bully’s revenge? The bully gets its friends together (assembles a great “coalition”), arms them to the teeth, and sets out with them in tow to smash the upstart, to grind him into the dust, to make sure he or his kind never again dares to fight back. The whole world must rise to its defense; it is a moral imperative, unquestioned, absolute. Of course, the bully doesn’t call it “revenge” or address the obvious question of why such massive force is required to subdue a weak and scattered resistance. That question doesn’t
arise because the enemy has committed the unforgivable sin: not only has he harmed Americans, and in some number, but he has attacked them in the very heart of the nation, in its financial towers and its military bastion. He has sullied everything we stand for, the World Trade Center and the Pentagon. These are the bully’s shrines, its icons of its own perceived greatness: buildings crammed with cutthroat deal-makers who pursue the almighty buck with a single-mindedness rarely seen on the planet; buildings crammed with foul-tempered generals, trying to focus through their perpetual alcoholic haze on whom to kill next. And George Jr. stares into the cameras and asks in the unctuous tones of a small-town preacher, “Why do they hate us so?” Why, indeed. The immoralist insists on viewing 9/11 entirely in terms of scale and perspective, aspects of The Event rarely touched on in all the media uproar. Scale: Americans died, but how many, and how were those numbers arrived at and modified? Perspective: how do we weigh the value of a life, and hence the negative value of a death? I should note at the outset that in pursuing the immoralist’s approach the cultural analyst or, specifically, cultural anthropologist adopts the harsh, clinical manner of the pathologist. In earlier essays, and now in this one, I air the unhappy (and unpopular) thought that cultural anthropology, when it is practiced with any intellectual honesty, has much in common with the medical discipline of pathology. Practitioners of both attend in intimate detail to a diseased or broken subject, attempting thereby to determine the precise nature and course of its malignancy. As I said, not a happy thought, and far from the smile-button certitudes of relativism and the cerebral silliness of postmodernism. And a thought that goes against the grain: all along we’ve proceeded as though America had sustained a traumatic injury inflicted by an outside agency (those evildoers), but what if the news is far worse? What if 9/11 is actually an early and isolated eruption of a grave internal condition, of a terminal disease? The workings of that disease, its symptomatology, manifest themselves in the matter of scale, as that unfolded in the hours and days following the attacks. Just how major was the devastation? Americans indeed died, but in numbers reported more in the manner of hysterical ravings than any sober appraisal. In the first hours following the attacks on the twin towers, news anchors gave the most alarming estimates of casualties. I vividly recall Peter Jennings, ashen-faced, in shirt sleeves, his hair hat barely in place, ominously announcing that some forty thousand people worked in the towers on any given weekday, and another ten thousand or so were regularly in the subway stations beneath them. Could the attacks have killed, in minutes, fifty thousand or so Americans? Roughly the American death toll for the entire Vietnam War? The suggestion, made right there on network news, was staggering, unthinkable. And yet there was Jennings, delivering those numbers
with a horrified restraint, a grim speculation that only drove their significance deeper into our minds. Americans are long accustomed to receiving their news and evaluating its significance in terms of the “body count”—another revolting Newspeak contribution to our language by our recently bombed friends at the Pentagon. How many were killed in that tornado that hit southern Illinois last night? Only three, you say? Well, we won’t be hearing much more about that story! Oh, what’s that? Estimates have just been revised upwards to twenty dead? Well, now that’s more like it! Look for detailed coverage on CNN. If it bleeds, it leads—and if it bleeds a lot … . Our initial reaction to events and the considered assessments that follow are so often tied to this grisly, stupid numbers game. Live by the numbers, die by … . It is a matter of scale. In the first hours following the attack, Jennings et al. calibrated the scale of horror at a staggering level: how could all of us viewers not feel we were suddenly plunged into a nightmare? As it turned out, of course, those reports were grossly inaccurate speculation. For as the hours wore on and authorities realized that long streams of victims had escaped the burning towers, choking, blackened, but alive, the numbers, that holy grail of the American conscience, began to come down, and sharply. By the time Rudy Giuliani, the nation’s mayor, began his series of press conferences, the numbers had dropped into four figures. The scale of horror had to be drastically recalibrated. In a matter of hours, we were asked to contemplate, to get our bruised sensibilities around, not carnage on a scale of the entire Vietnam War, but, and the comparison was made repeatedly by Jennings et al., of Pearl Harbor. Far, far worse than Pearl Harbor, they reiterated—and almost all civilian deaths! (We shall return, unhappily, to this matter of civilian versus military deaths.) The scale shifted, new numbers called for new touchstones of horror, new analogies to the depraved evil of Those Others out to destroy America. For a long time, weeks and weeks, the numbers remained up there: seven thousand, six thousand, five thousand—still far more than Pearl Harbor. Then those, too, began to erode as the debris cooled and authorities continued their grim tabulations, their official body counts. Four thousand, three thousand: the well-conditioned American audience, following the body-counters’ results, became unsure as to the appropriate level of disbelief and outrage. Of course no one said it—in fact, hardly anyone dared even think it (how many immoralists are out there, anyway?)—but one began to sense in the air the vaguest feelings of, well, betrayal and disappointment. When Jennings and the others came on the air to announce the smaller numbers, they seemed uncomfortable, embarrassed even, at letting their audiences down, at having been found out in yet another act of media hype. A part of us, of our collective soul, was relieved that the extent of suffering had not been so
great, but another, larger part resented having been misled once again by the opinion-makers who, we all know, manipulate not just “facts” but our very sense of emotional well-being. We know they do it routinely, but we don’t like having that manipulation thrown in our faces, don’t like being stampeded into near-hysteria, then being told a few weeks later, “Guess what? We seem to have been a little off on what we told you earlier.” As I write, nearly a year after the attacks, the body count from the World Trade Center has decreased to fewer than 2,800, and Americans find themselves caught in a whipsaw of emotion: we hadn’t wanted fifty thousand people to die horribly in that inferno, but when we were fed those numbers, conditioned as we were from decades of TV-watching, we experienced a growing ambivalence as the numbers declined drastically. “Hey, wait a minute! You said—” “No, we didn’t really mean that. We thought the death toll might go that high, but, you understand, in the heat of the moment … .” The TV anchors’ accounts began to drift from outrage to apology, making it difficult to sustain our patriotic fervor. All the little Stars and Stripes fluttering defiantly from the windows of our pickup trucks and SUVs began to fray, to discolor, and, one by one, to disappear. All the flag posters and decals plastered on every surface began to blister and peel away from the desks and doorways of America. Thanks to the pillars of our society—the corporations, the media, the government—the numbers game, the matter of scale, has become so much a part of our outlook on daily life that we can’t ignore it, often can’t even recognize it. Our acceptance of their decades-old strategy of playing the numbers game on us has, with 9/11, created an undertow or backlash: we aren’t accustomed to counting down in situations of national emergency. The threat to America, to our daily lives and the sanctity of all things, is always couched in “escalating” (more Newspeak) numbers: the Russkies are building more missiles; they’re basing more of the infernal things in Cuba; Ho Chi Minh (that old evildoer!) is ordering more North Vietnamese regulars to meddle in the righteous little civil war going on below the 17th parallel. When the numbers decline precipitously, that shift of scale induces an acute case of vertigo in the American consciousness, a vertigo that affects our whole orientation to The Event and, just perhaps, opens the way for the cultural analyst to investigate, or dissect, the fundamentals of American values. The difficult matter of perspective is at the heart of any discussion of values: how do we weigh the value of a life, and hence the negative value of a death? As we’ve noted, cultural anthropology has contributed little to this fundamental problem and has thereby left the door open to every sort of charlatan to fill the nation’s airways with shallow, doctrinaire pronouncements of what is good and what is evil. And the charlatans—George Jr. and Osama principally among them—have been glad to oblige.


Even a cursory inquiry into the perspectives at work in 9/11 reveals that the values at issue are far more complex than a simple confrontation of good and evil, global capitalism and Islam, American democracy and religious fanaticism. While the antagonists instinctively cloak themselves in these “universal” values, the immoralist proceeds to lay those values open for inspection, in fact, to lay them out on the dissecting table. What, to begin with, is the actual source of America’s horror at 9/11? We have seen that it is not the sheer numbers, for those have been deflated. The most important aspects here are that American lives were taken by surprise acts of human violence in a public, even sacred, place. Change one or more of those elements and the horror dissipates. What we are accustomed to thinking of as an unbidden, visceral reaction of pure emotion (the horror! the horror!) is a complex, conditioned response that owes everything to a whole set of culturally specific beliefs and practices. Notable among these last is the recent phenomenon of television saturation of any event that begins to measure up to the specifications identified above. We don’t watch history unfolding on TV as much as we witness its fabrication and attend to its production values. Here is the cornerstone of our hallowed set of American values (that fool and bigot John Ashcroft can go on and on about these, even in doggerel verse): the production value of human life in and of itself is slight; what matters is that American lives are at stake. That this crucial perspective shapes our feelings and actions is beyond dispute. How else do we account for the halfhearted, episodic response by the American public and media to the genocides committed by the Khmer Rouge in Cambodia and the Hutu in Rwanda? Each of those systematic and prolonged atrocities involved the murder of more than a million people, and yet no flags fluttered from the windows of American cars, no expeditionary force of our Green Berets and Army Rangers was dispatched, no great coalition of democratic nations was assembled to stanch the crazed bloodletting. Instead, the years have passed and Pol Pot has been allowed to die a peaceful death in his retirement villa, an unrepentant old villain to the end. Couldn’t we have spared a single Cruise missile to erase that abomination from the face of the earth? Jimmy Carter, Prince of Peace and King of Wimps, even used the US veto power at the UN to prevent the Khmer Rouge’s delegation from being expelled. Wouldn’t do to rock the diplomatic boat. The element of surprise is also critical to our perspective on events. When a madman appears out of the blue at a school or synagogue and guns down a dozen people with his NRA gun-show special, there is a great deal of public agonizing to go with the intense media coverage. The litany begins immediately: What’s wrong with our schools/families/cub scout troops (take your pick) that something so awful could occur? Where did we go wrong? How could this have possibly happened in our great society? And yet it is
not uncommon, and barely remarked on at a national level, if a dozen children and young people are gunned down on the streets of Los Angeles or Chicago over a period of a week or two. The individual acts are a surprise, certainly, but their statistical regularity is not; we already expect that sort of routine atrocity and so discount it in our moral calculus of events. Bill Maher captures this perspectival element of American values perfectly in one of his HBO stand-up routines: “And then there’s Memorial Day coming up, and we hear some government agency announce that five hundred or so people will probably die in accidents over the weekend. And we think, ‘Yeah, that sounds about right.’” These last two examples also illustrate the importance of the element of a public or sacred place as the site of the event. The streets of South Central LA and Chicago’s south side are certainly public, but are not places where the great majority of Americans can visualize themselves spending a lot of time. But all Americans have firsthand experiences of schools, post offices, the workplace, and religious institutions and tend to regard them as somehow inviolate, as deserving of a hands-off approach when mayhem is contemplated. This completely irrational supposition deserves extensive analysis in its own right. Erving Goffman made a brilliant study of this aspect of American values in Relations in Public (1971), where he emphasizes the remarkable extent to which we large, powerful animals are able to curtail aggression in public places. The madman’s attack on people assembled in a public building violates the unspoken taboo we observe in close interaction with our fellows and thereby triggers our horror-struck reaction. Goffman’s thesis is tragically confirmed by 9/11. The most alarming aspect of the attacks was not their targets—after all, the World Trade Center and Pentagon were “naturals” for terrorists—but their weapons: hijacked domestic airliners. Apart from elevators in high-rise buildings, perhaps the most tabooed public space in American society is the airliner. Crammed together for hours at a stretch and with no avenue of escape, its passengers are bound by the strictest, if unspoken, rules of deportment. Foremost among those rules is to avoid giving any indication that you might be inclined to violence. And though we never puzzle it through, that taboo extends to the airliner itself. When the hijacker jumps up from his seat and begins yelling and brandishing a weapon, he violates the principal taboo—and in the process effectively paralyzes his fellow passengers, who cannot let themselves believe that public order has been breached so egregiously. But when the hijacker then proceeds to turn the plane itself into an instrument of aggression, the effects of that ultimate violation of public order are, as we know, devastating to the psyche. Consider for a moment a different scenario. Mohammed Atta and his gang manage to acquire two ocean freighters and secrete aboard them several surface-to-surface missiles. Perhaps a few of Saddam’s notorious old SCUDs,
though it would be laughable to think that those relics could be targeted precisely enough to hit a particular building. Atta and his crew position the freighters a couple of miles offshore of Manhattan and the Maryland coast. Then, on the morning of September 11, they launch their cargo. The effect of the missile strikes is identical to that produced by the airliners: the towers collapse; the Pentagon is severely damaged. To be sure, these hypothetical attacks are calamities of unprecedented magnitude. But without the element of the hijacked airliners, do those disasters quite equal in public horror and outrage what actually occurred on that day? I think not, and for the reason that the hypothetical attacks are in fact much more probable, and much more explicable, than the actual attacks. After decades of Cold War and Star Wars indoctrination, the American public is conditioned to the idea that enemy missiles may rain down on us at any time. Being conditioned, we have tucked that particular nightmare away in a place in our collective psyche that we reserve for the host of other nightmares that accompany contemporary existence. But our reservoir of terror contained nothing like the particular nightmare Osama and Atta devised; we were emotionally unprepared and reacted accordingly.

Horror Stories The quality of “naturalness” rounds out this inquiry into the perspectival aspect of American values. Statistical regularity and routine of the sort discussed above invest even social events with something of the inevitability we associate with natural occurrences. Unless we are personally affected by a Memorial Day traffic accident, Maher’s comment captures exactly our detachment from individual violent events and our willingness to regard those events as part of the natural order of things: You say five hundred people will meet a violent death this weekend? Yeah, that sounds about right. When disaster strikes through an actual occurrence in nature, from a flood, earthquake, tornado, or hurricane, even the element of surprise is not enough to place the disaster on the same level of public trauma as 9/11. And if the victims of natural disaster are not American, if the unthinkable strikes far from the prying eyes of even CNN, then our level of involvement becomes minimal. Such is the importance of perspective in our supposedly uniform and uncomplicated value system. Take earthquakes, for example. Which major earthquakes come to mind? Which do you know anything at all about? Faced with these questions, the great majority of Americans immediately think “California,” and many of those (certainly if they happen to live in that state) recall the devastating Northridge quake in the Los Angeles area and the notorious “World Series
quake” that disrupted that famous game and threatened to destroy at least one of San Francisco’s landmark bridges. They were indeed terrible, and terrifying, events, but unless you were an unfortunate homeowner, landlord, industrialist, or, worst of all, insurance agent in those locations, the chances are very good that you were not severely affected. How many lives did those notorious “killer quakes” extinguish? Like the years of their occurrence (1994 and 1989), the death tolls of the two quakes have slipped from memory, long since lost in the blizzard of horrifying facts generated daily by an obliging and ghoulish media. Yet considering the phenomenal devastation, considering block after block of collapsed buildings and miles of twisted roads, we remember, or think we should remember, that the death tolls were alarmingly high. The numbers are 57 (Northridge) and 63 to 68 (San Francisco)—125, maximum. Hundreds of hours of TV coverage, hundreds of pages of newspaper articles, hundreds of hours of official “fact-finding” investigations, all over those 125 deaths. American lives are indeed a precious commodity. And American collective memory is indeed an incredibly selective device. In describing that faculty, we should actually refer to an American “collective amnesia,” so quickly and thoroughly are even the most dramatic events expunged from memory. If pressed, the person in the street might recall that decades ago (the sixties? the seventies?) there was an enormous earthquake in Alaska. Anchorage, perhaps? But Alaska, state or not, is a distant wilderness; who could have been around to get killed, anyway? As it happens, and here even the most astute must refer to the US Geological Survey or some other official source for the details, the death toll of the 1964 Anchorage quake was exactly that of the combined Northridge and San Francisco quakes: 125. Even so, confronted by a single killer quake whose mortal damage equaled the combined fatalities of the two worst earthquakes most of us can recall, we remain quite unperturbed. It was all so far away and so long ago. It is about at this point, I submit, that American “collective memory,” “collective amnesia,” or whatever we want to call it fades away to the vanishing point. We could stop a thousand people on the streets of American cities and towns and glean only the most jumbled, anecdotal reports of other major earthquakes. “I think there was a bad one in Turkey a few years ago.” “Didn’t Iran have a big quake sometime after that hostage crisis thing?” And so on. Wait a minute, though. Aren’t we still talking about human life and the tremendous value we deeply caring Americans place on human life? And in the case of the Anchorage quake, aren’t we talking about American lives that were lost in that disaster? But American or not, isn’t it one of our fundamental values that we regard all human life as precious, even sacred (hence George Jr.’s and Ashcroft’s revulsion at the whole topic of human cloning)? Does the memory of a hundred-plus lives tragically snuffed out just evaporate from our sensitive collective conscience?

The immoralist’s response here can only be deeply cynical (which, remember, translates as clinical). The wonder is not that so much has been forgotten; it is that even a tiny fraction of the American public remembers as much as it does about those long ago or faraway horrors. To begin to grasp the significance of this observation, when you’re conducting those person-in-the-street interviews about major earthquakes and your respondents have finished giving their fumbling answers, ask them one last question: what about Tangshan? The response you’ll get to that question, virtually without exception, will be looks of incomprehension. What or where or who is “Tangshan”? The response you almost certainly will not get would go something like this: In the predawn hours of 28 July 1976 an earthquake of approximately 8.0 magnitude struck west of the city of Tangshan, an industrial center of about one million people. The quake occurred within a densely populated triangle made up of Tangshan to the east, Beijing about one hundred miles to the northwest, and Tientsin, the third-largest city in the People’s Republic of China, about fifty miles to the southwest of Tangshan. Seismic stations in the vicinity had reported a heightened level of activity in the preceding weeks, which was not particularly unusual because that area, near the Bohai Gulf, is seismically active. And seismic prediction, as anyone knows who has listened over the years to the Southern California dialect of seismobabble, is worse than useless: it makes you crazy with the dread of something awful that will happen sometime, somewhere. But it did happen in Tangshan, and it was over very quickly. Even an 8.0 quake lasts no more than two or three minutes. Of course, there are the aftershocks, which can be deadly. In the main, though, a couple of minutes separates victims lying at home in a deep sleep from being crushed in a lethal chaos of falling mortar and timber, an instant during which the very ground beneath them heaves like an angry wave. When it was over, Tangshan, that city of one million, was reduced to rubble. Observers later reported that only a handful of buildings remained standing. A handful, out of thousands. Everything else was gone. How many people died as a result of that moment of chaos? What, we body-counters want to know, was the death toll? The answer to that question would not satisfy the legions of official body-counters who were busy for months after the World Trade Center attacks, busy tabulating and cross-checking death certificates, busy interviewing family and friends, busy setting up an elaborate administration to divvy up the millions of dollars of donations that poured in to help heal the wounds of the nation itself, of America, in this time of its gravest crisis. The answer to that question is that nobody knows how many people died that early morning in Tangshan. The Chinese authorities were reluctant to release any information at all for weeks after the event, were reluctant even to admit a major disaster had
occurred. In their wisdom, these leaders, the murderous old men in Beijing who would later order the massacre at Tiananmen Square, refused humanitarian aid and technical assistance from other countries. In truth, they or their subordinates were probably overwhelmed at the devastation they found when the first squads of rescuers and soldiers arrived. Still, the authorities, the commissars, felt this chaos reflected somehow on not just their administrative competence, but the very legitimacy of their political system. Marxist-Leninist doctrine is strictly determinist and unilineal; it does not like to acknowledge a world in which things just happen. There is supposed to be a scheme, a direction to history, known to the enlightened, to the Party. Old Karl, sitting in the British Museum, scratching his carbuncles, liked to pen masterful essays on the meaning of historical events. Specific, identifiable causes were at work in processes of change, and Karl was just the person to sort them out, to hold them up on a pin for the world to see. Old Karl would not have liked to contemplate a world in which shit just happens. Old Karl would not have been a fan of complexity theory. Old Karl would not have liked to hear the news coming out of Tangshan, a hundred miles from the glorious capital of the People’s Republic of China. After months of silence, amid mounting speculation by geologists and aid organizations around the world, the Chinese authorities released a provisional death toll: 240,000 to 250,000. A quarter of a million human lives, most of them snuffed out in minutes. By the time the Chinese authorities came forward with this figure, however, the scientific world and a few media sources were trying to come to terms with even more alarming figures. By then, the quarter-million tabulation seemed to be what it was: a ludicrous and callous attempt by the Chinese authorities to conceal the extent of the devastation. The expert consensus that took shape in the West during those months was that the death toll had been in the range of 500,000 to 650,000. Not a quarter-million human lives, but half a million. Or more—a lot more. After all those months had passed, though, it was old news in the United States and other Western countries. Particularly since only a few grainy photos of the disaster were released through the China News Agency, it was impossible to fashion it (and “fashion” is the apposite term here) into a media event. Without that coverage and with foreign news crews barred from the scene, there was not the opportunity for Dan Rather, Tom Brokaw, and Peter Jennings, outfitted in their tailored safari jackets, to speak to us in grave terms of the tragedy against a background of collapsed buildings. Scale and perspective. These factors combined to erase the unprecedented catastrophe of Tangshan from the American conscience. The fact of that erasure, however, forever puts the lie to the sanctimonious posture our politicians and other public figures adopt regarding the fundamental value of human life. The next time you see George Jr.’s smirking face on TV
proclaiming his reverence for human life and his deep sense of loss over the 2,800 or so dead at the World Trade Center, look deeper into the abyss that yawns around him and you may discern the restless stirrings of half a million ghosts long since forgotten by this great humanitarian. The smirking face, those phantoms, the moral vertigo that scene induces in any truly thinking, feeling person—doesn’t it make you want to puke? Regrettably, the immoralist cannot allow himself the indulgence of that unbidden, natural reaction. In his pathologist’s dissection room, the nausea he feels deep within him has become an insupportable luxury, a conceit for those who would rather parade their supposed virtue than get to the core of the rot that infects them and their society. And there are always more cases, more stricken beings, waiting their turn at the table. There is, for example, Mazar-i-Sharif. Earthquakes in China, floods in Bangladesh, mudslides in the Andes—all these disasters and too many others have taken lives by the tens and hundreds of thousands. And while we Americans who cherish human life above all else may find it embarrassing to have our neglect, our willful amnesia, thrown in our faces in these cases, we might retort that those are, after all, natural disasters. For all the horrifying loss of life, no human agency was directly involved. We were all spectators before the wrath of nature (or God, if you are twisted enough to believe that a Christian God presided over Tangshan). We reserve our solicitude and our capacity for moral outrage for those events that are caused by people acting in unconscionable, horrible ways. As believers in Good and Evil (and, yes, that stinking old corpse also waits its turn on the dissection table), we may regret the loss of life in natural disasters, but we abhor and oppose, with every red-white-and-blue fiber of our being, the calculated, cold-blooded taking of human life. We despise the Evildoers. Thou shalt not kill. Thou shalt not kill. Mazar-i-Sharif. Until George Jr. and Rumsfeld launched their Holy Crusade against the Evildoers in Afghanistan, Mazar-i-Sharif was one of those utterly unknown places, the back of the beyond, so remote and inconsequential that the producers of Survivor would not have given it a second look as a location for their next season. That unknown place, a city of around two hundred thousand, burst upon the American public with the discovery of the “American Taliban,” John Walker Lindh, following the bloody revolt of Taliban prisoners and its bloodier suppression by our noble allies, the troops of the Northern Alliance, three months after 9/11. The shock that an American youth nurtured at the very bosom of the nation—Marin County, California—would turn his back on his family and country to become an Islamist soldier trained by the Chief Evildoer himself, Osama bin Laden, was compounded manyfold by the death during that prison revolt of the CIA operative Mike Spann.

The morality play/palsy built around this bizarre juxtaposition of characters soon dominated the media. It was just the sort of personal narrative they and their audiences thrived on. First CNN showed us The Prison Interview: there was Spann, young, clean-cut, corn-fed-healthy, crouched on a blanket thrown on the ground, interrogating Walker, emaciated, disheveled, shackled, a Charles Manson stand-in kneeling before his CIA captor. Next, CNN brought us scenes of The Prison Uprising, a suicidal last-ditch stand by captured al-Qaeda and Taliban fighters. Then, news of The Death. The valiant young CIA officer had been killed by those fanatics, and who knew what part Walker himself might have played in this heinous deed? A few days later, in the tasteless protocol of American television, we were taken to Spann’s hometown in Alabama: ribbons wound around everything in sight; an entire town consumed by grief for its fallen son; the distraught family; the mayor’s impassioned comments about Spann’s patriotism and Walker’s treachery. The viewing public was incensed by the stark drama of it all. Spann the Good, who died for his country in an alien, Godforsaken place (not at all like Alabama), versus Walker the Evil, who renounced his family and home and betrayed his country. Then cut to John Ashcroft, our moral champion, anointed as such at Bob Jones University, declaring in ominous tones his views on the traitor and Walker’s just deserts. The nation was caught up in this made-for-TV drama. The government would see that justice was done. CNN’s Nielsen ratings climbed back toward the dizzying heights reached during the days following 9/11. For the American public, this morality play effectively ends the story of Mazar-i-Sharif; that city of two hundred thousand is relegated to its previous obscurity. Walker the Evil is transported, in the cruelest fashion, his wounded body lashed head to foot to a stretcher with duct tape, back to the United States. The remains of Spann the Good are returned with the greatest reverence to his home and family, where a hero’s funeral awaits. Back in the United States, away from the godless horrors of Mazar-i-Sharif, we expect that events will take a less cathartic turn. The total absence of law and order in Mazar-i-Sharif and the extreme actions that it encouraged will, we are depressingly confident, give way to the overprotectiveness of the American legal system. While we may yearn for the turncoat to get what’s coming to him, we are all too familiar with the role that wealthy parents and their high-priced lawyers inevitably play in such situations. We are still digesting the bitter pill of the O.J. trial. And so the intense violence of those ten or twelve days in Mazar-i-Sharif yields to months of tedium and unending legal hijinks, during which our righteous anger over a traitor’s complicity in the death of a noble son gives way to our heartsick feeling that something is deeply wrong right here at home. Right here at home. The phrase captures exactly the habitual turn of the American mind and conscience away from events in foreign lands to matters
immediately at hand. It is the narcissism of empire: what happens there really has meaning only to the extent that it affects life right here in America. It always has to be about us. Mazar-i-Sharif, that city of two hundred thousand people, becomes a mere stage setting for what really matters: the saga of Good versus Evil enacted by Spann and Walker (the city may as well have been featured on the next season of Survivor after all).3 The immoralist, however, with a cynical/clinical eye trained on the entire picture, on American culture and Afghan society, refuses to regard Mazar-i-Sharif as just another sound stage thrown up on a lot at Universal Studios to provide the setting necessary for our action heroes—Arnie, Sly, Bruce, Spann—to strut their stuff before an audience of millions. Having previously conducted detailed examinations of America’s imperial narcissism, the immoralist brushes aside the pathetic little melodrama of CNN’s Walker-Spann fable, regarding it as the surfeited diversion of a silly, corrupt people headed, like all empires before them, for the trash heap of history (and, yes, just plain old “history,” not old Karl’s “History”). Mazar-i-Sharif. What could possibly interest us about the place itself, apart from its use as a stage for American melodrama? As Dan, Tom, and Peter explained it to us between laxative commercials, Mazar-i-Sharif was the most provincial of provincial capitals, a depressed and depressing place in the extreme north of Afghanistan, only thirty-five miles from the Uzbekistan border (and Uzbekistan’s location and spelling were other questions on the world geography quiz George Jr. would have flunked pre-9/11). Why, it was the merest historical accident, a relic of nineteenth-century conflicts between imperial Britain and imperial Russia, that Mazar-i-Sharif became part of the gerrymandered nation of “Afghanistan” at all. Most of the city’s residents were ethnic Mongols of Central Asian origin, Hazaras, easily distinguished (and easily discriminated against) by members of Afghanistan’s majority ethnic group, the Pashtuns, speakers of Pashto, an Indo-European language, and not the Hazaras’ own Persian-derived Hazaragi. To round out the improbable diversity of this backwater region, the Hazaras were Shi‘ite Muslims, while the Pashtuns—and especially their notorious political movement, the Taliban—were fervent Sunni Muslims. Diversity, whether ethnic, linguistic, or religious, does not sit well with fanaticism. When all three types of diversity were present and when fanatics as extreme as the Taliban were busily shoring up their Islamist regime, the “stage” of Mazar-i-Sharif was set for disasters infinitely more horrible than the Walker-Spann melodrama. In 1997 and 1998 the American public heard next to nothing of Mazar-i-Sharif; to revert to our person-in-the-street yardstick, not one American in a thousand could have told us who, what, or where “Mazar-i-Sharif” was. The vindictive old fools in the Congressional Republican leadership were too busy
attending to Bill Clinton’s impeachment, happily paralyzing the entire United States government to punish a furtive blow job in the Oval Office. Even when Clinton took hasty aim at Osama bin Laden with a couple of cruise missiles after the embassy bombings in Kenya and Tanzania, our Republican statesmen were unanimous in their denunciation of his blatant attempt to deflect public attention from what really mattered to the Free World: that blow job. Dan, Tom, and Peter were all over that. So curious were they about the fabled blue dress that they had no airtime to inform their viewing audience of incidental reports in 1998 by the United Nations Commission on Human Rights and by the independent organization Human Rights Watch. Of those reports we heard nothing.4 Those reports documented two related atrocities committed in Mazar-i-Sharif in 1997 and 1998, each involving thousands of victims. But not American victims. Not Spann and Walker, the living embodiments of Good and Evil. And hence not really news. Scale and perspective. The first atrocity was perpetrated in May 1997 by the Hazara militia, Hezb-i-Wahdat, against some two thousand Taliban fighters. It was the kind of warlord violence we have grown accustomed to hearing about in news of Afghanistan: greedy, violent men forever fighting one another with arms and slogans supplied by the United States, Russia, Iran, Pakistan—any world or regional power will do and is happy to oblige. The second atrocity occurred in August 1998, when the Taliban took dreadful revenge on Hazara fighters and civilians in the streets of Mazar-i-Sharif. Accounts of the first massacre are vague, owing in part to the fact that they conflict with all the subsequent spin on events in Afghanistan, which depicts the Taliban as the ultimate bad guys, the Evildoers. But in May 1997, Taliban fighters were the victims of some sort of treachery on the part of their old rivals, the Hazara militia. The predominant version is that one of the Hazara leaders negotiated a peaceful surrender of Mazar-i-Sharif to a Taliban leader whose forces were laying siege to the city. On the basis of that understanding between warlords, some two thousand Taliban fighters entered the city as an occupation force. It was a trap. Once in the city, Hazara snipers, firing from prepared positions, cut them down. When the slaughter was over, some five hundred Taliban survivors were rounded up and shut in metal storage sheds, crammed together like cattle. They were left there. The May sun beat down on the closed sheds; the Taliban suffocated and died. By some accounts this event is the origin of what would become a signature of later atrocities committed by the Taliban themselves: “death by container.” The Hazaras, of course, later became identified in Dan, Tom, and Peter’s reports as part of the “Northern Alliance” and its US-sponsored campaign to retake the strategic city of Mazar-i-Sharif. The television cameras scrubbed these butchers clean of the blood of thousands of human beings.

Predictably, the Taliban were not so forgiving. By August 1998 their forces in the North were stronger, aided in great measure by equipment and even military personnel from Pakistan, our soon-to-be ally and champion in the war on terrorism. Several thousand Taliban fighters bent on vengeance descended on the city, raking the streets with heavy machine-gun fire and leaving the carnage piled high behind them. Borrowing a page from the Hazaras’ playbook, they stuffed survivors, adults and children alike, into container trucks where they suffocated and, essentially, were roasted alive. Children not killed were often mutilated, with one or both hands chopped off by these glorious defenders of Islam. And how many died, we ghoulish American body-counters want to know? During that week in August, the UN and Human Rights Watch estimated the death toll at between five thousand and eight thousand, with thousands of others injured or maimed for life. Two or three times the death toll at the World Trade Center—an entire city reduced to rubble. And barely a whisper of this atrocity in the American media, which three years later would be turning over every stone in Mazar-i-Sharif, looking for a story line to flesh out the drama of Walker and Spann. In great measure this willful ignorance of the city’s recent history may be attributed to the elements of scale and perspective we have been discussing, elements that determine whether a particular event will be identified as “news” and thus worthy of our purported sensitivity and respect for human life. But specific political considerations also played a role in our national failure to act or even speak out about the latter atrocity: at the time, the Taliban were more or less on our side, supplied by our allies the Pakistanis and by our own CIA. The Great Satan of the Soviet Union might have been dispatched years before, but in its place we had to face Iran with its Islamist militants. The ayatollahs of Tehran coveted the enormous oil and gas reserves of Central Asia every bit as much as did ExxonMobil and made their interest known by massing a large military contingent on their long border with Afghanistan. The ridiculous little melodrama of Spann and Walker was still three years in the future; for the time being, political expediency dictated that no great fuss be made over the thousands of men, women, and children tortured and murdered in Mazar-i-Sharif. When the news crews finally arrived in the city, their lenses did not capture those thousands of phantoms, howling for revenge from the invisible abyss that engulfed the safari-jacketed reporters. The vicissitudes of recent history are nothing, however, to the staggering incongruity between Mazar-i-Sharif as modern and ancient city. Up to now, we have examined the workings of the elements of scale and perspective on contemporary or recent events: the World Trade Center attacks; the Tangshan earthquake; the immediate political history of Afghanistan. It is crucial to acknowledge that those subjects are framed in an encompassing discourse of
modernity, according to which Afghanistan is everything the United States is not. We are central, rational, developed, powerful, up to date; they are peripheral, superstitious, undeveloped or tribal, weak, behind the times. Every pronouncement on every network news program employs these oppositions as unquestioned foregrounding to the breathless breaking stories about to be imparted to the viewing audience. Slouched on our living room sofas or seated in the ergonomic rockers of our home media centers, we here at the center of civilization gaze with condescension and dismay at scenes enacted by people whom history forgot, tribal savages locked in an ages-old war of all against all. The irony is staggering. Were our perceptions—our perspective—not tied to the minuscule calibrations of the nightly news, we might be free to consider for a moment, before buying into all the 9/11 rhetoric, that introducing a different time scale completely alters the accepted reality. Two millennia ago the tidy oppositions we deploy to distinguish Us from Them would not only be inapplicable, they would be completely reversed. In the first to fifth centuries AD Mazar-i-Sharif and its sister city, Balkh, lay at the major crossroads of the ancient world: the Silk Road that connected Persia and eastern Europe with China; and the important trade route between Central Asia and the civilizations of the Indus Valley to the south. The two cities were near the center of Kushan civilization, which thrived on the wealth of goods and diversity of peoples that flowed through it. What would the cosmopolitan rulers of this opulent kingdom have thought of the inhabitants of the unknown island later to be called “Manhattan” or of the forests of western Europe, all of whom were primitive villagers or nomads running around in skins? Surely their versions—and they must have had something like them—of Dan, Tom, and Peter would have spoken with contempt of those savages crouched in their distant, uncharted lands. Like Mazar-i-Sharif before it, the World Trade Center stood at the financial hub of a global civilization. Those buildings in Manhattan, however, existed for the merest instant of recorded time. And if bin Laden and Atta had not taken aim at them, do we seriously believe, when we stop to apply any rationality at all to the situation, that they would have withstood whatever onslaughts future millennia might bring? The sands of time shift, and keep right on shifting, despite the evangelical rhetoric of a dull-witted Texas politician. Shift happens. Everything George Jr. says is predicated on the idea—and perhaps it is actually his belief—that this thing he calls “America” will go on and on, fighting off attacks by Evildoers, standing as a monument to the freedom-loving peoples of the world. How can he and millions of his countrymen ignore the lessons that even the most general survey of history teaches? One does not have to sign on to complexity theory to take in at a glance that the phenomenon of the island of Manhattan, with its twin 110-story towers, is a stupendously improbable creation, a piece
of “self-organized criticality” lifted whole cloth from the textbooks, a disaster waiting to happen. The wonder will be if the now-diminished Manhattan endures a fraction of the time that Mazar-i-Sharif and Balkh flourished. If, as pundits never tire of announcing, the 1900s constituted the “American century,” are there four or five more such centuries? Will descendants of the martyred firefighters and police officers gather at Ground Zero on September 11, 2501 to hold up photos of the departed and stare forlornly at the cameras? Or, as seems increasingly likely, will the shrine at Ground Zero have long since vanished, along with the ground itself, prey to a force, global warming and melting polar ice caps, infinitely more destructive than anything Atta could manage with a couple of airplanes? “Manhattan” may well become a synonym for “Atlantis” and meld with that rich body of folklore.
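For readers who want to see what those textbooks actually say, the standard illustration of self-organized criticality is Per Bak, Chao Tang, and Kurt Wiesenfeld’s sandpile model. A minimal sketch in Python follows; it is an editorial illustration only, and the grid size, the number of grains, and every name in it are arbitrary choices, not anything taken from Bak’s papers or from this chapter’s sources. Grains of sand are dropped on a grid one at a time; any cell that accumulates four grains topples, shedding one grain to each neighbor, and one toppling can trigger the next. The pile drives itself to a critical state in which the next grain may do nothing at all or may bring down much of the system, and nothing distinguishes the fateful grain in advance.

import random

# A minimal sketch of the Bak-Tang-Wiesenfeld sandpile (illustrative
# parameters throughout: the grid size and grain count are arbitrary).
SIZE = 20
grid = [[0] * SIZE for _ in range(SIZE)]

def topple(grid):
    # Relax every unstable cell; return the avalanche size (topplings).
    avalanche = 0
    unstable = True
    while unstable:
        unstable = False
        for i in range(SIZE):
            for j in range(SIZE):
                if grid[i][j] >= 4:
                    # The cell sheds four grains, one to each neighbor;
                    # grains pushed past the edge leave the pile for good.
                    grid[i][j] -= 4
                    avalanche += 1
                    unstable = True
                    for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                        if 0 <= ni < SIZE and 0 <= nj < SIZE:
                            grid[ni][nj] += 1
    return avalanche

sizes = []
for _ in range(50000):
    # Drop one grain at a random spot, then let the pile relax.
    i, j = random.randrange(SIZE), random.randrange(SIZE)
    grid[i][j] += 1
    sizes.append(topple(grid))

# Most grains change nothing; a few bring down much of the pile.
print("largest avalanche:", max(sizes), "topplings from a single grain")

The sizes of the avalanches famously follow a power law: there is no threshold separating the routine grain from the catastrophic one. That is the precise, unsentimental sense in which a 110-story tower, or a city, is “a disaster waiting to happen.”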

Nietzsche and the Revaluation of Values

Scale and perspective: these elements count for everything when one initiates a dissection of what is advertised at the outset as a whole and essentially healthy body—the body politic of post-9/11 America, its men and women standing tall, proudly waving the stars and stripes, proclaiming an inviolate and innate set of values. Following Nietzsche, I have suggested—and it cannot be well received in the current social climate—that our most fundamental values, those of Good and Evil, are constructions, pathological growths really, on a body of cultural processes that must be examined for the complex entities they are. There is nothing simple or innate about even our most deeply held values or our most instinctive response when those values are violated: for example, the national outrage at the events of 9/11. I shall shortly follow Nietzsche further, taking the next step in the procedures of the pathologist’s operating theater, and inquire what ideals or values might take the place of the diseased and spent notions of Good and Evil. Before taking that next step, however, I want to present an additional case, one I hope will prepare you for what is to come. When I propose with Nietzsche that we abandon the very notions of Good and Evil, does that not elicit in you, particularly given your patriotic fervor in the wake of 9/11, a sharp protest, even a sense of betrayal? It is one thing, you might object, to invoke past earthquakes in China, Afghanistan’s political history, the ancient world of two millennia ago, and future global warming in support of a cold-blooded (again, cynical/clinical) argument that judgments of good and evil are spurious—untenable simplifications imposed on a complex and shifting reality. But, you might continue, it is quite another thing to advance that same argument in an immediate person-to-person situation in which one human being acts horribly toward another. We might parade the statistics coming out
of the massacres at Mazar-i-Sharif, and certainly those are gruesome enough, but when we contemplate the horror up close of a Taliban fanatic hacking off the hands of children in the name of God, doesn’t that stir a revulsion that is as real and visceral as it gets, that is well beyond any contrived interrogation, any cultural analysis of values? My response to this excellent and impassioned question is, I think, consistent with Nietzsche’s critique of “fundamental” values. The revulsion, the up-close horror you feel is real, but does that powerful feeling fit naturally into a world you believe to be populated by good and evil forces, good and evil individuals? Specifically, are there “evil acts” involving a malignancy of spirit, a cruelty of intent, and a viciousness of execution that in combination leave no room to question the nature of those acts? Is Evil a force loose in the world, a hideous thing that takes control of a person and places his actions beyond the pale of properly human behavior? Despite the folk appeal of this notion (and if you want empirical confirmation, check out the sales of Stephen King’s books or the endless series of horror movies), the immoralist must ask if anything is ever quite so absolute, so rigidly determined, so predictable. Or, to borrow again from my infelicitous title, is Evil loose in the world or does shit just happen? I suggest the answer to this question regarding the irreducibility of up-close and personal evil turns out to be a paradox: it is precisely when we think we have isolated the irreducible core of evil in the actions of a single individual that the notion of Evil as a force loose in the world disintegrates. To invoke once more the analytical tools deployed here, the scale and perspective at which individual acts of violence occur are far too fine-grained and diverse to extrapolate to the cultural construct we may hold of Evil, whether that be George Jr.’s evangelical-tinged notion of “evildoers” or some dark, ultimately Romantic notion of absolutes that reside in human behavior. I linger over these points because I think the most difficult thing for us to do is to confront the truly hideous cruelties people inflict on other people and not derive from that grim encounter a lesson that we then apply to society as a whole. Interpreting those actions, as Eco has said, as though they possessed an underlying truth. Evidence of this nearly irrepressible tendency to interpret and of its pitiful failure is to be found in the public rhetoric that surrounds any eruption into daily life of the murderous and bizarre. When the first reports of Jonestown, Oklahoma City, Waco, Heaven’s Gate, and Columbine burst on our television screens, the immediate commentary, the first serving of pablum by Dan, Tom, and Peter was to ask how such a monstrous thing could have happened. Are there, they wanted to know, deep and dark secrets regarding the psychology of cults, something that might explain how their members lose all sense of human worth? Then our news anchors trot out on screen the first of many psychologists hastily assembled by network staffers and
invite them to pontificate on the psychology of cults. Or they cut to another talking-heads collection of psychologists, sociologists, and educators who stand ready to enlighten us about violence in our schools and the tremendous stress American schoolchildren are under today. Or, finally, when we come to the events of 9/11, they round up hordes of individuals who have somehow come to be identified as “terrorism experts” and ask them to ramble on about the background of the attacks and to second-guess everything that led to them. A child watching TV during those traumatic weeks might reasonably conclude that the job is worth considering for the future, a real hedge against the growing unemployment problem. “Mom, when I grow up I wanna be a terrorism expert.” But what can these blowhards possibly tell us about the individual experience of terror?

Danielle van Dam

Danielle van Dam was seven years old and lived with her parents in a comfortable suburb of San Diego. On the evening of February 1, 2002, Danielle’s father (her mother was out with friends) put her to bed. In the morning she was gone. Disappeared from her own room on the second floor of the family home in the middle of the night. The police were called. A search was begun and soon radiated out for miles as the days passed without a sign or word of Danielle. In the investigation, neighbors were questioned. One neighbor in particular, David Westerfield, aroused suspicion and came under close police scrutiny. Westerfield—tall, balding, paunchy—was a 49-year-old engineer and divorced father of two grown children. He lived just two doors down from the van Dams. He owned an RV, which he kept parked beside his house—a portrait of suburbia. Danielle disappeared on a weekend. That weekend, Westerfield took his RV out into the nearby desert on a camping trip. The coincidence aroused police suspicion. But three weeks passed without an arrest and without any sign of the little girl, despite massive search parties organized by sympathetic San Diegans. The distraught parents made tearful appeals on television. Westerfield, who continued to deny any knowledge of the kidnapping, was arrested. Forensic tests on his home and RV had identified items stained with Danielle’s blood. A week later a search party investigating a backroads area near El Cajon, only thirty miles from the van Dam home, discovered the body. It had been thrown into a trash-strewn ravine and was blackened by weeks of exposure to the sun, so that initial reports said it had been partially burned. It still bore Danielle’s necklace. The police have not disclosed what injuries had been inflicted on the little girl. As I write, Westerfield has been found guilty and faces the death penalty. Evidence collected from his home included numerous items of
child pornography. Westerfield, middle-aged, the father of two, was a pedophile, living a quiet life in a quiet suburb of one of America’s golden cities. Apparently there was little in his background to arouse suspicion. His neighbors had no reason to be wary of him. There was even the bizarre report that on the evening of Danielle’s disappearance, Westerfield encountered Danielle’s mother and her friends at a local bar and enjoyed a dance with the mother. That report, understandably, has not been emphasized in the local news stories; it does not fit well with the image of the grief-stricken family. Evidently this neighbor, this familiar adult, somehow entered Danielle’s upstairs bedroom that night, persuaded her to be silent or silenced her, then carried her off to his own home or RV. Two days and three nights passed before Westerfield was again seen at his home. What took place in that RV out in the desert during those interminable hours? Before the violated body of the little girl was disposed of, tossed out like a piece of trash? The questions well up in anyone hearing the story. Why would any human being do such a thing? How did he manage to steal a child away from her own bedroom, with a parent at home? And the most horrible question of all: What did he do to her? Lurking behind these questions, of course, are our suppressed imaginings of the horror experienced by the little girl. What could that possibly have involved? What was her experience, her desperation, her terror? Could anything be worse than that? Coming to grips with these questions, the revulsion and outrage we feel lead naturally into a familiar mode of expression, a long-standing cultural construct: the killer, this sickening pedophile, is Evil incarnate. He is an inhuman monster, waiting like the predator he is for the time to strike. Evil is loose in the world, and it struck that awful night in the suburbs of San Diego. Faced with unspeakable horror, we instinctively grasp at the response Eco laments: we interpret. We insulate the unspeakable, or have it insulated for us, by a torrent of words and images. Confronted by the sudden and completely unexpected eruption of the horrible in the midst of ordinary domestic life, we respond, paradoxically, by claiming that it has always been present. We summon up absolutes: Good, Evil, Innocents, Monsters. It is exceedingly strange, a defining quirk of conventional thought. Since by definition we have very limited experience with the extraordinary, our reaction to it, our proposed explanations or interpretations, should be tentative, cautious, hedged about with disclaimers. In American homes, unlike those in Mazar-i-Sharif during those awful days of August 1998, children are not routinely kidnapped, tortured, and murdered. With Danielle’s horror story, we are on extremely uncertain terrain, which would seem to dictate caution. Instead, we give not a passing thought to such a clinical approach and wade right in with all the self-righteous fury and bombast we can muster. Our response to the horrible is shouted from the evening news, complete with obscene close-ups of the grieving parents; it is
splashed in headlines across our newspapers; it is writ in the stony visages of prosecutors and district attorneys clamoring for death as retribution. The immediacy of horror—here Danielle’s ordeal, there the story of 9/11—blinds us to the exactly contrary truth: the stories of Danielle and of 9/11, like all horror stories, are shot through with coincidence and uncertainty. They are webs of circumstance in which we become ensnared. They are things that just happen. Consider. Westerfield, now reviled as a pedophilic monster, lived an apparently unexceptional life. He received a technical education, held a professional job, married and raised a family. He was almost fifty years old on the weekend Danielle died. What are we to say about this newly discovered monster—and by extension about the fabric, the connectedness, of daily life—when we survey all the nights he stayed at home, perhaps indulging in lurid fantasies as he pored over his child pornography, or perhaps just watching Regis Philbin on TV? We really don’t know. And if we ever come to know, if Westerfield unburdens himself of details of a secret life, that confession will be a consequence of the horrific exception represented by that Friday night in February. From his home two doors away, did Westerfield watch for Danielle to come home, watch her playing in her yard? Weeks or months before striking, did he note the location of her upstairs bedroom, monitor the nightly routines of the van Dam family members? Did something out of the ordinary happen that evening to set him off? Did he plan his act carefully over a period of days and weeks, or did he decide suddenly, on impulse? Were there other occasions, other nights, when he almost left the house to carry out his horrible plan? And if so, what stopped him? Was it a material fact, a light turned on or off in the van Dam home, a figure glimpsed walking past a window? Or did something else, something unknowable, something barely perceived even by him, arrest his plans on those previous nights? To what, if not circumstance, do we attribute the event? Here we must ask ourselves the question dozens of the van Dams’ neighbors are doubtlessly asking themselves as they lie awake in their own suburban bedrooms, sleepless from dread: had this monster fixed on Danielle from the start, or did he train his perverted gaze on their own children, thinking those unspeakable thoughts about them? And if Westerfield had done that, if his imagination had wandered, which children did he take particular notice of, what tentative plans did he make to pay a visit to one of them on some future Friday night? As one of the van Dams’ neighbors remarked to a reporter, “You think, oh, you live in a nice neighborhood and think you know all your neighbors; you think something like this could never happen here. But it can. It can happen anywhere” (Madigan 2002). In these and a plethora of similar speculations we are effectively thrown back on the classic opening of the 1950s radio show featuring Lamont
Cranston as the Shadow: “Who knows what evil lurks in the hearts of men? Ba-dum, ba-dum, ba-dum [spooky musical score]. The Shadow knows!” But, typically, we don’t. And if we were at all honest with ourselves, we would not willfully ignore all those months and years of not knowing when we raise our voices with the mob (and its televised incarnation in the personae of Dan, Tom and Peter) to vilify an individual as Evil incarnate, a monster beyond the pale of humanity. Couldn’t it be, despite all our theatrics and portentous oaths, that Westerfield’s visit to the van Dam home was something that just happened? A very odd thing about all this is that the possibilities, all too real possibilities, get scarier as we distance ourselves from the immediate circumstances of the van Dam family and their disturbed neighbor. If Westerfield lived an ordinary life in an ordinary neighborhood for years, what, as the neighbor perceived, are we to think of the thousands of other people living their own ordinary lives in that suburb? Of the millions of people scattered around the megapolitan sprawl of San Diego and its satellite cities? Recall, too, that five years before Danielle’s murder another of those suburbs, barely ten miles from the van Dam home, was the scene of the Heaven’s Gate suicides. There, in Rancho Santa Fe, a city of five thousand identified in the 2000 census as the wealthiest community in the United States, thirty-nine people put on their jumpsuits and running shoes and extinguished their lives, anticipating rebirth aboard the Mother Ship concealed behind the Hale-Bopp comet. Venturing further afield, what are we to think of the tens of millions of seemingly ordinary men and women going about their lives in the suburbs of America? How many of their homes shelter monsters in disguise, monsters in waiting, whose fantasies keep returning to that special night to come when they will take the fateful next step from ogling dirty photos or children at play in the school yard to taking one, taking one and doing all the things to that child they have dreamt of for, oh, so long. Dissecting the van Dam case in this way should make it difficult to return to the conventional, reflexive posture we assume when confronted by horror that strikes unannounced, whether at the heart of domestic life or at the heart of American financial and military institutions. The mob will doubtlessly continue to howl for the blood of the villain, the evildoer, but are you quite so ready now to add your voice to theirs? Are you quite so ready to attribute the horrors you witness or hear about to a specific, identifiable cause, whether that be a twisted sexuality or a fanatical religious belief? Can you still propose Evil as the root of those horrors when on close inspection “evil” turns out to be so disguised, so episodic, so hit-or-miss? If Evil is indeed a factor when things come badly unstuck, as our president assures us it is, why are its hosts so difficult to tell apart from our friends and neighbors? How was Westerfield able to live out most of his life as an ordinary man, and how was Osama bin
Laden able to command American military support as a valuable ally against the then-current Evil Empire? There is a book, cited earlier in this essay, that sheds a great deal of light on the nature of evil in American society. Fittingly, it is one of the most frightening books ever written. Its author is not Stephen King, Anne Rice (all those vampire stories), or Thomas Harris (Silence of the Lambs, Hannibal), nor even Franz Kafka, although the author’s works have been likened to Kafka’s. The book is Relations in Public and its author is Erving Goffman (1971). In prose utterly lacking in dramatic expression, in all the flourishes we have come to expect of “horror stories,” Goffman sets out his terrifying thesis: Although we exclaim over the rupture of public order when a murder or, still more, a massacre is committed, the wonder is that such events are so rare, that public order is so orderly. Human beings are large, powerful mammals capable of doing great harm to their fellows, particularly when crowded together in the unnatural confines of urban life—elevators, city streets, high-rise apartments … airliners. But they rarely do. Take a half-dozen primates of any other species, individuals unfamiliar with one another, and shut them up in an elevator together. The result would be pandemonium. But we humans spend our entire lives in that monkey house, routinely subjecting ourselves to potential violence at the hands of strangers without any avenue of escape. How do we accomplish that? Not by having evolved a distinctive “human nature” full of righteous sentiments and disdain for the occasional throwback, the evildoer. No, public order, Goffman maintains, is created and maintained by means of the sheerest gossamer of ritualized behaviors—checks and balances people learn to enact as an actor learns a part in a play. And what keeps the whole performance from coming unstuck? What keeps the gossamer of human interaction from shredding under the heavy use we make of it? What keeps our monkey house from erupting in bedlam? Goffman’s unsettling answer, delivered to a nervous American public at the end of the tumultuous 1960s when tears were appearing everywhere in that fabric, was this: very little. A drunk on a city street, a disgruntled employee at the office, or, now, a suspicious-looking character boarding an airplane, and the elaborate fiction the rest of us live by is ruptured, perhaps for a moment, perhaps for an afternoon, perhaps, as for many New Yorkers, for a lifetime. Goffman’s thought lacks the smile-button certitudes of George Jr.’s mouthings; its cold, antiseptic breath reaches us straight from the pathologist’s clinic. For George Jr., “America” is that freedom-loving, robust (lots of baseball playing!) organism that would enjoy perfect health were it not for the scheming of fanatics like Osama bin Laden and Saddam Hussein. For Goffman, America, and all of modern society, is perpetually in the most fragile health, subject at any moment to virulent attack from any number of seemingly innocuous sources. Although Goffman’s work predates complexity theory, his vision of
society as a system of delicate checks and balances at the edge of chaos meshes nicely with that theory’s concept of self-organized criticality. The correspondence suggests a way for social thinkers to avail themselves of the intriguing work being done in complexity theory—so much better than continuing to debate the same old platitudes about the rightness and stability of American society. Goffman reminds us that that stability is a balancing act performed along a razor’s edge. We have seen that American values are shot through with contradiction, their inherent messiness making a farce of the bombast of our politicians and pundits. It is nonetheless disturbing to find that what might be called the “cornerstone value” of American society, the fundamental value of human life, is subject to the same swarm of contradictions that plague other aspects of our culture. I approached this problem earlier from the vantage point of comparing other tragedies—the Tangshan earthquake, the Mazar-i-Sharif massacres—with the World Trade Center attacks, a comparison that demonstrated the unhappy truth that the loss of human life is something we readily dismiss or forget. But the Mazar-i-Sharif case in particular raised an issue that returns full force with a consideration of Danielle van Dam’s fate: how do we fit human suffering, particularly suffering at the hands of others, into our vaunted system of values? A part of the answer here is that Americans play their usual numbers game with death: What was the body count? A drunk’s car killed a mother and her child on a city street last night? Well, that’s a sad thing. But what’s that? You say a drunk in another city hit a school bus, killing a dozen children? Well, we’d like to hear more about that on the evening news, and doubtlessly we shall. When Dylan Klebold and Eric Harris rampaged through Columbine High School on 20 April (Hitler’s birthday) of 1999, killing thirteen in addition to themselves, the nation was stunned, the media in a feeding frenzy. But today, if a child takes a gun to school and kills another child, that episode scarcely makes the national news. Particularly if that child is from South Central Los Angeles and the killing is “gang-related,” well, that’s strictly local news—and not a lead story at that. In 2001 the Los Angeles police classified 346 homicides as gang-related. About one a day. As routine as the daily newspaper itself. And in the first few months of 2002, the rate of such killings roughly tripled, as police were transferred from such nonessential duties as patrolling school yards to providing “homeland security.” The murder of children by children has become mere background noise, a static hiss that intrudes on our existence but has long ceased to register as a message in itself. More important than the numbers is the shock value of an event, which is closely tied to our complex and readily exploited sense of horror. A child killed by a hit-and-run drunk driver provokes a local and temporary outrage. A child, such as Danielle, abducted from her home and tormented by
a pervert for hours or days before being murdered and thrown in a roadside gully commands the sympathy of a nation. Here we encounter a fundamental feature of an American value system, of our discrimination between good and evil acts. To the extent that we believe that a death involved great pain in a highly traumatic situation that occurred without warning, we attribute a large component of suffering to that event and experience a corresponding sense of outrage. Many children suffer from painful, wasting diseases that destroy them over a period of weeks or months, but those deaths lack the terrifying context and the unexpected cause that are essential aspects of our heightened response to traumatic death. Disease is an evil, but it is decidedly not the Evil we can point a finger at, not Evil in the corporeal, villainous form of a David Westerfield or an Osama bin Laden. It is true that our national media, led by Dan, Tom, and Peter, are a pack of shallow-minded jackals and apologists always on the prowl for the latest sensation. But we really can’t blame them for the tremendous disparity in the attention they give, say, to a child wasting away from a mortal illness and to Danielle’s nightmarish death. We can be repulsed, and rightly so, at the sight of the jackals feasting on the parents’ pain, but at the same time we must recognize that the news crews did not entirely invent the notion of “news.” To suggest that, as is often done by critics of the media, is to attribute too much intelligence, too deep a wisdom, to those overpaid mouthpieces, for to invent the news would require that journalists first articulate, from a mass of contradictory experience, a consistent set of American values and then proceed to act on them. They aren’t that smart. And besides, the task is an impossible one. Consider the following definition in the Dictionary of American Slang (1995):

anchor man 1 n phr college students fr 1920s The student having the lowest academic standing in the class 2 n phr (also anchor, anchor person) A television news broadcaster who has the principal and coordinating role in the program.

Etymologies are often misleading, but this one is right on the mark. Juxtaposing the tragedy of Danielle van Dam with that of the World Trade Center attacks begins to teach another lesson: if it is impossible to specify the cause of a violent act, if the seemingly well-defined boundaries of an event twist and turn and erode under close examination, it is also impossible to measure and compare the suffering caused by different acts. Our instant repugnance for acts of cruelty, the horror we experience at their occurrence, is as close as we come to possessing a Kantian categorical imperative; the experience of horror is that elusive thing-in-itself that resists interpretation and explanation. The experience is a subjective there-ness, which either we feel or we don’t. Having that experience, knowing the sickening sensation of watching the first TV images of the planes hitting the towers and hearing the reporters’
stricken descriptions of dozens of people hurling themselves from the top of one of the buildings, or listening to the first reports of the discovery of Danielle’s scorched body in that trash-strewn ravine, provides no basis for comparison, no grounds for interpretation. Those experiences are just there (or not, and we wonder deeply about those who experience the “not”). It would be an act of the greatest dishonesty, of the most abominable casuistry, to presume to construct a calculus of suffering and the horror we feel for it. Suffering is not a quantity, but something akin to an absolute quality of experience. Do you disagree? Then tell me, do you have an equation that can weigh relative suffering, relative horror? Do you possess some magisterial or God-given power to assign one value to Danielle’s suffering and another, presumably much higher value to that of the nearly three thousand people who died at the World Trade Center? What are the terms of that equation of yours? The number of dead bodies? The hours of agony? The psychological trauma, however that may be quantified? And do you have in that equation a mysterious, sneaky coefficient, something like a moral equivalent of Einstein’s cosmological constant that measures pure, undiluted horror? Following Nietzsche, the immoralist renounces any claim to authority by this morality of body-counters who would tabulate the most intense human experiences just as they sum their ledgers. That practice complements the utilitarian philosophy of John Stuart Mill that Nietzsche despised. Mill’s maxim to conduct oneself so as to promote the “greatest good for the greatest number” would here simply add the corollary to avoid the greatest suffering for the greatest number. Both principles are based on dual premises: first, that good and evil exist as absolute qualities (for Mill, the source of that absolute knowledge was the imperial British Crown), and second, that good and evil can be quantified (in keeping with the emergent ethic of an industrial nation-state). The immoralist rejects both premises. “Good” and “Evil” are not absolute moral qualities. As we have seen, they are not even consistent interpretations of behavior. Long before Ruth Benedict, and to much greater effect, Nietzsche perceived that “the good” is simply a residue of the habits of the herd, of that dross of humanity he characterized, in a phrase that does not endear him to politically correct moderns, as “the sum of zeroes—where every zero has ‘equal rights,’ where it is virtuous to be zero” ([1883–1888] 1968: 53). The salient feature of those habits is that they are restrictive, negative, life-denying: the morality Nietzsche excoriated is a doctrine of Thou Shalt Nots. It is the repressive covenant of the stupid, the weak, the cowardly, of all who are unwilling to consider, with him, what humanity might be capable of in overcoming itself. If “good” is constituted in this way, it follows that “evil” is simply what the cowardly fear. Far from being an absolute quality of experience or even
a consistent set of behaviors, “evil” takes the form of a ragtag collection of boogeymen, some traditional, some fabricated on the spot by our xenophobic politicians and preachers. The exemplars of evil may be near at hand and easy to fear and hate, such as the media’s construction of David Westerfield as pedophilic monster or George Jr.’s and John Ashcroft’s images of those loathsome evildoers, Osama bin Laden and the “American Taliban,” John Walker. Or its exemplars may be classical figures whose subsequent reputations have been dramatically rehabilitated: Jesus, sentenced as a criminal by Pilate and reviled and stoned by the Jerusalem mob, or Socrates, condemned to die for his immoral acts by the George Jr.s and John Ashcrofts of a democratic Athens. To broach the analogy of Osama to Jesus, which has been done by several commentators in the months following 9/11, is to reveal the fragility of moral judgment and its inherently unpredictable history. If the criminal and rabble-rouser Jesus has been deified and his true nature obliterated (we know what Nietzsche said about the last Christian), who can legitimately denounce those Muslims around the world who view Osama as a new prophet? And who can predict what transformations will be wrought on the persona of Osama and the events of 9/11 by future centuries of Islamic thought? No one. Things happen. Although the Jesus-Osama analogy may be used to inflame the endless talking-heads debates among network pundits—to strike a spark that is immediately extinguished by those stupefying bores on 20/20, 48 Hours, The Larry King Show, and The O’Reilly Factor—its real value is that it demonstrates the inadequacy of any universal morality, based as that morality is on notions of inherent “good” and “evil” qualities of a person or event. With the rejection of an absolute or universal morality, does the Nietzsche-inspired immoralist then embrace a cultural relativism of the sort pioneered by Benedict or, perhaps being more fashion-conscious, a continental postmodernism whose incomprehensible texts unquestionably establish the death of Language, if not of Truth? No, on both counts. Benedict’s classic Patterns of Culture is an intellectual scandal that has gone mostly unrecognized by generations of cultural anthropologists. As noted earlier, her application of Nietzsche’s Apollonian/Dionysian opposition completely misrepresents that concept, making of it an either-or phenomenon: Culture A is Apollonian, Culture B Dionysian. Nietzsche’s principal argument in The Birth of Tragedy ([1872a] 1992) is that the opposing forces are co-present and interactive, the tension between the antitheses providing the genius of Greek drama. That egregious error is, however, a slight misstep in comparison with Benedict’s expropriation of Nietzsche’s work to support her relativistic thesis. However one wishes to characterize Nietzsche’s thought—and its rehabilitation has been slow in coming in Anglo-America—the last thing one
should attribute to it is relativism. Far from embracing a democratic thesis of “different strokes for different folks,” Nietzsche made the backbone of both Beyond Good and Evil ([1886] 1992) and On the Genealogy of Morals ([1887] 1992) the principle of an “order of rank” (Rangordnung). Certain individuals are manifestly superior to others; ideals of equality and love of neighbor are tawdry little idols embraced by those who would deny the inherent order of things. Nietzsche’s insistence on an “order of rank” as a fundamental principle in human relations raises a most important question. But not the question of how the politically correct, righteously multicultural, stridently antihegemonic pomo is to get around this embarrassment, to explain away his master’s elitist indiscretion (How did Nietzsche ever become associated with postmodernism? The only plausible answer: through its adherents willfully misreading him as flagrantly as did Benedict). No, the critical question Nietzsche’s insistence on an “order of rank” raises for his philosophy is, How, having repudiated any conventional morality with its tainted concepts of “Good” and “Evil,” can one then insist on a standard by which human actions are measured? If the project of the “revaluation of all values” necessarily begins with discarding existing values, what will take their place? Beyond Good and Evil and On the Genealogy of Morals offer up a revolutionary proposal: to abandon morality in favor of aesthetics. Nothing less. For the immoralist, “good” and “evil” are tainted, inherently hypocritical notions to be cast aside in favor of values of the “noble” and “beautiful” versus the “common” and “ugly.” Persons and their actions are to be judged by the nobility or commonness of character they exhibit, and their creations or productions by their beauty or ugliness. It is a staggering, difficult, and deeply unsettling proposal, and one that has scarcely begun to be assimilated even by Nietzsche’s supposed admirers (the last Nietzschean …). Today Nietzsche’s work is embraced by those whose positions (to the extent one can decipher them) are directly contrary to his own: social democratic intellectuals who delight in applying their “antihegemonic” critique to a political system that acts contrary to their own fastidiously correct values of egalitarianism and multiculturalism. If our postmodern intellectuals make a hash of Nietzsche, our political and media personalities go about their business totally ignorant or uncaring of his ideas, ideas that have put an indelible stamp on twentieth-century thought. Imagine George Jr. or John Ashcroft approaching the microphones, staring into the television lights, and saying, “In declaring war on terrorism, our course of action will be to pursue the noble and beautiful.” Those two dim bulbs would not know where to begin in approaching Nietzsche’s thought, and to the extent they understood anything of it they would brand it subversive and “un-American.” And indeed it is.

The WTC Attack

In taking up Nietzsche’s proposal the cultural anthropologist must adopt the role of cultural pathologist elaborated earlier: presented with a diseased body, the pathologist/immoralist disregards the conventional oohing and aahing around him and looks long and hard at the thing laid open on the table before him. The attack on the World Trade Center is that thing before him. But the thing is brought to his pathologist’s clinic with a lot of attendant baggage, swathed in bandages, already tarted up with ludicrous prosthetics meant to embellish and distract from the corpse. He must first strip those away, to lay bare the diseased thing itself. What are those bandages, those prosthetics? He is told that a band of fanatical evildoers has destroyed a cultural icon, a prominent symbol of America and all it stands for. He considers that suggestion as part of a preliminary diagnosis. After all, he must know what manner of thing he is dealing with and what has happened to it. He considers the suggestion and finds it wanting. When did the World Trade Center become a prominent symbol of American culture? The Statue of Liberty, only a couple of miles away and an easier target for Mohammed Atta and his pilots, is just such a symbol. Why was that not the target? Or, ranging further afield, why did those pilots not target the Golden Gate Bridge, Disneyland, historic downtown Philadelphia, or a dozen other sites that are far more prominent national cultural icons than a couple of skyscrapers erected in the 1970s? He is then told that the symbolism of the World Trade Center is of a more conceptual nature, that the twin towers represent the global dominance of American capitalism. This strikes him as a contrived explanation. From what he knows of the subject laid out before him, much of the office space in the towers was occupied by municipal government agencies and auxiliary financial services. Although it presented a more difficult target, could not Atta and Co. have directed their planes right next door and destroyed the New York Stock Exchange, indisputably the heart of our capitalist system? In the same vein, could the hijackers not have used O’Hare Airport to strike at the Chicago Mercantile Exchange, where the agricultural products and natural resources of the Third World, the fruit of the blood and sweat of countless oppressed laborers, are bought and sold by soft, fat men who have never worked a day in the fields or mines? Noting that his subject has come to him disguised in too many suspicious bandages, he considers its origins more closely. Searching his memory, he recalls that the twin towers had anything but a noble beginning, that they were the result of shady deals put together by corrupt politicians and sleazy real-estate developers. He recalls that many New Yorkers, probably a
majority, and much of the national public had originally regarded the towers as ostentatious, as a ridiculously disproportionate blight on New York’s skyline, and as a colossal failure of urban planning. In short, people found the towers ugly. Recognizing the emotionally charged, not to say hysterical, reaction to his subject, the immoralist realizes he must examine his own personal response to the attacks. He considers that his own background may make him either a very good or very bad choice as a cultural pathologist assigned to the case. Having spent his entire childhood in rural areas of America’s heartland and having rarely visited New York City, he realizes that he completely lacks whatever complex of urban sensibilities New Yorkers and other urbanites bring to the tragedy. And being a longtime resident of Southern California—and, at that, an area of Southern California outside the immediate orbits of Los Angeles and San Diego—he acknowledges within himself a certain lack of interest in, even antipathy for, New York City. The human anthills and decaying concrete canyons of cities of the Eastern Seaboard have always repelled him; he knows he could never trade the open skies and soaring mountains of the American West for whatever those cities might have to offer. And whenever he has thought about the matter, which has been seldom, he realizes that the World Trade Center exemplifies for him all the negatives he attributes to the urban East: the greed, the corruption, the crowding, the meaningless bustle. The ugliness. The immoralist asks himself whether he is qualified to proceed with his investigation. Perhaps he harbors too many prejudices to render a professional opinion? In considering that issue, he harks back to his one significant personal involvement in the attack: in the days following 9/11 a very old and very dear friend, an artist living in SoHo just blocks from the twin towers, cannot be reached. All the phones, of course, are out. What has happened to him? Several days later, when he finally hears from his friend, their telephone reunion is punctuated by a telling comment his friend makes, a comment that rivets itself to all subsequent thoughts our immoralist has about the WTC attack. This friend, like the immoralist himself, is something of an Angry Old Man, a refugee of the failed cultural revolution of the sixties. The friend says, “I know why they hit the Pentagon, but why in the world did they attack the World Trade Center?” Just days after the event, with his SoHo loft still off-limits and uninhabitable, with the smoke and stench still hanging over the city, this Manhattanite of decades was baffled by the terrorists’ choice of targets. Rather than patriotic rage, what he felt, as debris rained down a few blocks from Ground Zero, was a consuming puzzlement and deep sadness that so much death and destruction seemed to have such an unfocused, almost random cause.
His friend’s reaction to the attack encourages the immoralist in his own puzzlement, his own detached examination of the thing before him. If the World Trade Center meant little to him personally, if at least some New Yorkers found the choice of targets incomprehensible, and if, as he suspected, his own previously indifferent attitude toward the WTC was mirrored in the attitudes of millions of Americans, then whatever could have been Atta and Co.’s motivation? At this juncture in his investigations the immoralist considers a most curious feature of the history of the attack: Islamist terrorists simply have a thing about the World Trade Center. They set off a car bomb beneath it in 1993 and succeeded in terrorizing the city and the nation, though the actual damage was not extensive. And it would not be surprising to learn that over the years the FBI has foiled other Islamist plots against the twin towers. Reflecting that the actual symbolic value and functional importance of the towers are far less than those claimed by the media, the immoralist is left with the disturbing thought that much of the motivation underlying the attacks is an irrational fixation. Along with a horde of immigrants from all over the world, Muslims from Saudi Arabia and Egypt come to New York City and see for the first time the two obtrusive structures that dominate the skyline. If a few of those Muslims have an Islamist bent, it is a simple matter for them to identify the aggressive ostentation of the towers with their vision of America as an ungodly and oppressive giant. The immoralist notes the implications of this line of thought: these Islamist terrorists, portrayed by our national leaders and the mainstream media as diabolically clever conspirators, evildoing geniuses who concealed their elaborate plot from our entire intelligence community, were at bottom merely starstruck yokels, fresh off the plane at Kennedy, gawping at—and despising—the grotesque opulence of the twin towers. Atta and his fellow conspirators were the very opposite of John le Carré or Ian Fleming characters; they were merely frustrated, rootless young men of the ideologically charged Middle East, one or two generations removed from the proverbial camel jockey, and, with all of that, carrying an enormous chip on their shoulders. Like kids playing in a sandbox, they wanted to knock down the tallest thing around them. The immoralist notes that on this matter Islamist activists have believed their own press, even when that press is the biased Western mass media: both describe the plot to hijack airliners and use them as weapons as the work of diabolical genius. The media and the Islamists want the American public to believe it is dealing with Dr No rather than a bunch of unsophisticated youth. The immoralist doubts this grand vision of his subject. He recalls that in the media frenzy surrounding the Columbine High School massacre, portions of Eric Harris’s diary—in which the disturbed youth thought about hijacking
a plane and crashing it in New York City after completing his grisly work at Columbine—were published:

    If there isnt such place [a foreign government without extradition] then we will hijack a hell of a lot of bombs and crash a plane into NYC with us inside firing away as we go down, just something to cause more devistation. (Shepherd 2017; one doesn’t apply sics to the text of a psycho killer)

Atta and some of his associates, living in the United States at the time, may have come across these reports. Perhaps the reports influenced their thinking on how best to strike at the Great Satan. The immoralist, something of a moviegoer, also vaguely recalls at least two or three movies in which terrorists hijack a plane in order to strike at an American target. He does not believe that Atta and Co. necessarily saw those movies; they probably did not share his own affection for the cinema. But the combination of the Columbine media event and those movies does leave him with the distinct impression that the idea or scheme of using planes as weapons was in the air. A solitary terrorist genius didn’t have to think it up. As a cultural pathologist, the immoralist is charged with dissecting and analyzing a diseased organism with the end in view of contributing to the treatment of future occurrences of the disease. Even his preliminary observations of the World Trade Center attack lead him toward the conclusion that George Jr. and John Ashcroft have got things hopelessly, and probably deliberately, wrong. Their elaborate plans for a global war on terrorism do not fit the facts of the case he sees before him, in which a small band of fanatics with relatively minimal backing attack a couple of buildings they hate for their own highly idiosyncratic reasons. Still, he does not doubt for a moment that hundreds of millions of people around the world have a deep antipathy for America; unlike George Jr., they don’t get that whipped-puppy look on their faces and ask in an incredulous tone, “Why do they hate us so?” The immoralist is sure they have their reasons, some of which are obvious to him and, he thinks, to any intelligent observer. Nor does he doubt that a tiny fraction of those hundreds of millions of disaffected souls are intent on acting on their beliefs: it is hardly surprising that al-Qaeda and similar organizations attract thousands of individuals who would do harm to America at any cost to themselves. But the existence of those organizations and the actions of their members are hardly the grand conspiracy depicted in George Jr.’s call to arms. The immoralist is led to a contrary view, one consistent with the teachings of complexity theory rather than with George Jr.’s simpleminded worldview. In performing his assigned task of isolating a cause of the WTC attack, the immoralist reaches the preliminary conclusion that its cause was an improbable combination of ideas and events, a web of circumstance that took shape in a way no one, not even the attack’s
perpetrators, could precisely have foreseen. His preliminary conclusion is that shit happens. When he outlines his thinking to a few people, both within and outside his profession, he is met with the sharpest rejection. The immoralist recognizes that his acquaintances’ strongly emotional reaction to his ideas is in fact a part of the symptomatology of the disease he seeks to understand. He notes that his critics keep returning to a common point: “Never mind,” they say, “about the exact political and psychological causes of the attack. The thing all of us must keep in mind is that thousands and thousands of people died horribly in a cowardly action that threatens the very fabric of our society. Don’t you have any basic human compassion for so much suffering?” When this accusation is leveled at him, the immoralist’s features change; for the barest instant, a sad, bitter smile plays across his face. Thousands of deaths, indeed. During the course of his life the immoralist has witnessed so much death, has seen so much disease and suffering. And he knows that his personal experiences are nothing compared to the human death and suffering around him. He was not among the first relief workers to reach Tangshan; he did not witness the massacres at Mazar-i-Sharif; he was not on the killing fields of Cambodia; he did not visit blood-soaked villages of the Rwandan countryside; he was not huddled in a Sarajevo apartment as the shells rained down day after day. The immoralist recognizes immediately that when his critics demand that he mourn thousands of lives lost, they mean thousands of American lives. He recognizes this, and the stench of his countrymen’s hypocrisy assaults his every sensibility; it is far worse than any he has had to endure when bent over a rotting corpse in his clinic. As he knows from long and bitter experience, this American hypocrisy is far more selective and insidious than simple xenophobia. The pompous bigots who proclaim their grief do not concern themselves nearly so much with the thousands of American children and youth who die by the gun every year. Quite the contrary, they are content to cheer Charlton Heston on as he proclaims from the presidential podium of the National Rifle Association that only death will pry the gun from his fingers. Death or, as it has turned out, Alzheimer’s. Tough luck, Chuck. Nor, the immoralist notes, have those righteous mourners been so vocal over the years as cigarettes have killed more than four hundred thousand Americans every year. Quite the contrary. They have returned those great defenders of tobacco and the American Way, Jesse Helms and Strom Thurmond, to Congress again and again, until at last even those statesmen’s massive doses of growth hormone were not enough to keep them from lapsing deeper into their prolonged senility. With this stench of hypocrisy in the air, the immoralist refuses to join the chorus of mourners of the victims of the World Trade Center attack. He
recognizes that nearly every human death, particularly a tragic death, devastates individuals closest to the deceased. He knows the suffering of survivors is real; he has experienced it himself. And he knows that that suffering is a fundamental residuum of human existence; it may be, as proposed earlier, as close as we come to absolute value. The immoralist therefore acknowledges that grief is a profoundly personal emotion—and, for that very reason, questions its extension to the public at large. How, he asks, can a deeply private experience be transformed through media coverage into a collective outpouring of grief? The emotion simply does not work that way, but he notes that the emotion of grief can be made to work on a collective level. The immoralist recognizes that, while private grief is outside his field of expertise, its public transformation is very much a part of the cultural pathology he routinely studies. The people who died in the attack led entirely private lives; not one among the nearly three thousand could be described as a public personage. The World Trade Center attack was hardly an event comparable to the assassination of John Kennedy. There, a young, charismatic president, his beautiful and elegant wife, and their lovely children represented a national First Family; Kennedy’s tragic death was experienced as an actual death in the family by many Americans. Absent a public personage, however, the grieving survivors of the WTC attacks transformed themselves and were transformed by the media into pathetic stand-ins for an individual of real national stature. The jackals of the media were only too happy to oblige: they feasted on every outpouring of human grief; they encouraged and reveled in the construction of Princess Di–style shrines in which photographs, memorial cards, and the jumbled debris of a consumer culture combined to form a twenty-first-century wailing wall. When he examines this phenomenon of collective grief more closely, the immoralist notes curious aspects of the actual victims singled out for special regard: police officers and firefighters. If individuals of national stature were not among the victims to mourn, the media, taking its lead from the nation’s mayor, Rudolph Giuliani, fixed on the bravery and sacrifice of police and firefighters called to the scene minutes after the first attack. Giuliani regularly appeared at his frequent news conferences sporting an NYPD or FDNY cap; George Jr. toured Ground Zero and mounted a carefully prepared pile of rubble with firefighters in full regalia surrounding him. It was one of the definitive photo ops of the whole ordeal. The American character: a nation that venerates its police officers and firefighters.5 It is a distressing prospect, and for the immoralist an indication of how deeply rooted is the pathology afflicting the body of America. In Nietzsche’s perspective, the herd becomes so fearful that it begins to worship those who control it and lead it to physical or spiritual destruction. Cowards, who are themselves bullies, turn in their fear and trauma to greater cowards,
greater bullies. Nietzsche would have despised the NYPD as he did the authors of the New Testament: the flock must have its shepherd; it cannot for a moment pretend to live as a group of self-willing, self-fulfilling individuals who have no need of shepherds or police. Mindful of recent American history and his own status as cultural refugee of the sixties, the immoralist observes that the nation has come a long way—down—from the general condemnation of Massa Daley’s police riot at the 1968 Democratic Convention to the post-9/11 glorification of New York cops. There is really very little in our national literature that addresses the enormity of our situation. For one thing, that situation has changed so rapidly that most of us are living in a past when convicts and parolees were a scandalous rarity. We have been caught off guard, comforting ourselves with platitudes from the last century while construction crews work overtime to build huge new for-profit “detention facilities” across the land. While C-minus law students with a vindictive streak—nasty little control freaks—swell the packs of assistant district attorneys that flourish in every one of our courthouses.

Final Things and Fantasies

Stripping away the bandages, cutting through the prosthetics, removing the masks that obscure the pathology that is 9/11, the immoralist comes at last to the naked thing itself and to the horrible question that thing poses: what is the value of those lost lives? He stands at his dissecting table unsupported by the illusions of his countrymen, without their simpering hypocrisy regarding the fundamental value of human life, without their puerile fascination with global conspiracies, without their need to make heroes of the lowest orders of humanity. Faced with the thing itself, the immoralist sees that his subject—all those lost lives—possesses no nobility, has created nothing of beauty. The victims were a haphazard assortment of municipal workers, bureaucrats, money-changers: a succession of zeroes. His subject lacks the aesthetic significance that Nietzsche insisted is the only valid criterion for establishing value. Considered in these terms, those three thousand lost lives mean little. With this stark conclusion weighing on him, the immoralist’s mind races. Thoughts he had in the first hours and days following the World Trade Center attack come rushing back. Remember, he has come to believe that catastrophe strikes anywhere, without warning and without the possibility of coherent explanation. If that is so, then the tumult of his thoughts throws up scenarios he cannot dismiss: a jumble of what-ifs. What if Atta’s New York City targets had been different? The Museum of Modern Art, perhaps? Or the New York Public Library? The Museum of Natural History? New York
University? Or, widening the scope of things just a bit, suppose Atta and his cohorts had targeted places the immoralist associates with worthwhile human endeavor—not exemplars of the Nietzschean quest, perhaps, but far better undertakings than anything the World Trade Center represented. Suppose the planes had hit the Santa Fe Institute, Princeton’s Institute for Advanced Study, Chicago’s Art Institute, the California Institute of Technology, or MIT? Had those been the targets, then certainly George Jr. and John Ashcroft would not have reacted with quite the xenophobic fury they spewed out when the Pentagon and twin towers were hit. After all, George Jr. and Ashcroft are themselves common, ugly personalities, and as such they mourn the loss of the common and ugly while secretly gloating over the destruction of the noble and beautiful. But in the immoralist’s world, the brilliant minds and talents and, far more tragically, the youthful genius found in those few places he respects would be a vastly more devastating loss to the world than the grubbing inhabitants of the World Trade Center. If we follow the immoralist in his reckoning of value, shouldn’t we breathe a collective sigh of relief that Atta and his cohorts, self-deluded fools that they were, selected targets that are insignificant in any scheme of things that matter? In this vein, the immoralist speculates on what someone with Atta’s hatred and fanaticism might have done if equipped with a time machine rather than airplanes. Suppose he or a similar fanatic could take the life of Einstein in 1904, before the young patent-office clerk authored the extraordinary quartet of papers in 1905 that launched modern physics? Suppose this time-traveling fanatic could assassinate the young V.S. Naipaul in 1955, when that impoverished Trinidadian immigrant to London was struggling to write his first novels? Suppose the fanatic were to visit the young Picasso, van Gogh, Beethoven? Speculative, even fanciful, but the mere mention of these outlandish possibilities underscores the immoralist’s Nietzschean perspective on things. Value does not reside in the herd with its suffocating closeness and mindlessness; it resides in the solitary individual whose quest for truth and beauty takes him or her far from the confinement of the masses. That rare individual creates; the herd merely exists. Adopting this perspective drives home the urgency of the hard question the immoralist poses: What is the value of the lives lost in New York City on 9/11? When we have exhausted our sensibilities watching the grotesque interviews with family members, when we have seen far more than we care to see of the wailing walls of photos and memorabilia, when we have finished gagging, for the time being, on George Jr.’s pious bilge, can we say that any of that secondhand grief, that warmed-over patriotism, can begin to measure up to the loss we and the world would endure if deprived of Einstein, Picasso, Naipaul?
The immoralist’s thinking along these lines has a far less speculative side. Rather than continue weighing fanciful scenarios, he considers it of the utmost importance to determine what, if anything, of true value was lost in the World Trade Center attack. Pursuing that investigation immediately leads him to observations that are both alarming and surprisingly little-discussed in the media. In the thousands of hours of TV coverage and in the thousands of pages of 9/11 journalism, very little has been said of the irreplaceable art destroyed in the inferno. The American public is forced to endure endless interviews with the bereaved, endless speeches by dim-witted politicians, endless images of the smoking ruin, but rarely if ever afforded a look at the real treasures forever lost to the world on that September day. Among these: an enormous tapestry by Miró entitled World Trade Centre that hung in the mezzanine of Two WTC; a looming twenty-five-foot sculpture by Calder that stood outside Seven WTC; two pieces, one a painting and the other a thirty-foot sculpture, by Roy Lichtenstein. These major works, gone forever, were among the relatively few pieces of art publicly displayed. The privately owned works were far more numerous in that bastion of global capitalism; these were kept behind closed doors for the covetous pleasure of their owners and their clients. There were, for example, the Rodins of Cantor Fitzgerald. The firm of Cantor Fitzgerald is a money-changer on a scale unimagined by Jesus when he was applying the scourge to its earlier incarnations in the Temple of Jerusalem. As the premier trader of government bonds in the world, Cantor Fitzgerald, founded by B. Gerald Cantor, occupied three floors near the top of One World Trade Center and employed about a thousand people. On its uppermost floor, the 105th, the firm had created a magnificent “museum in the sky.” The museum included dozens of works by Rodin (Cantor had been the most important private collector of Rodin in the world) as well as a large collection of American and European paintings, sculptures, and photographs. Atta’s hijacked plane slammed into the building just below the Cantor Fitzgerald floors; some seven hundred employees were killed. It was by far the greatest loss of life sustained by an individual firm. The museum and its entire contents were also destroyed: works of art kept from the public eye, reserved for the gloating enjoyment of bond traders, a store of loot assembled through rampant capitalist greed, were at last released to a grieving world as a heap of smoldering ash and fragments, treasures consigned to the mass grave that awaited the ruins at Ground Zero. Cantor Fitzgerald and its chairman, Howard Lutnick (who escaped death while taking his son to kindergarten that fateful day), brought public attention to themselves in the months following 9/11 by producing a series of tasteless and offensive TV commercials. In these, Lutnick and various Cantor Fitzgerald survivors reflect on the horror they experienced and their
pride in returning to the bond markets the very Thursday following the Tuesday disaster. Sure, hundreds of our friends and coworkers died horribly, but, by God, we were back trading two days later! Aren’t we the greatest money-changers the world has ever seen? One wonders, what would Jesus have thought of Atta’s applying the scourge to this nest of vermin? Extending a macabre line of thought, the immoralist ponders a hypothetical trade based on Nietzsche’s revaluation of values: rather than going on and on about the lost lives of parasitic bond traders, what if it had been possible to selectively increase the death toll at the twin towers while proportionately decreasing the number of works of art lost in the disaster? Suppose a high-level meeting had been scheduled for September 11 that attracted to the upper floors of the twin towers all the top executives of Enron, WorldCom, Tyco, and a dozen other larcenous corporations? And suppose at the same time every piece of art on site had been removed for inventorying or whatever—a total collection conservatively valued at $100 million by art historians involved in the specialized business of insuring art collections. Think about it. Deeply. Would you sacrifice a bunch of thieves and liars, whose unchecked greed has brought great hardship to thousands of families, to save hundreds of works of art released from corporate monopoly and spared a fiery destruction? Would you trade Ken Lay for a Calder? Would you? The immoralist can only answer, “Yes! And in a New York minute!” (But don’t worry about our CEOs—those trapped rats, scurrying up to the observation deck, would still have a sporting chance: they could deploy their platinum parachutes.) What is the value of all those lost lives? To begin to answer that question honestly, which never even gets asked in the knee-jerk comments of politicians and pundits, is to open a searching analysis into the nature of value itself. In that analysis there can be no assumptions, no “Of course this-and-that are the case …” to cloud the issue. Principal among those assumptions—the one everybody trots out right away—is the fundamental value of human life. There are, however, two fundamental problems with this “fundamental value.” First, in applying a Nietzschean perspective one recognizes an “order of rank” at work in human affairs, an order that distinguishes the worth of an Einstein or Picasso from that of a lying, thieving CEO. Second, there can be no established value to human life because humanity itself is anything but established: from the species’ beginnings to the present day and into the foreseeable future, humanity’s one constant has been the transformation of its physiology, its mind, its social organization, its culture, its very being. “We” are a from-something-else/to-something-else proposition: “… man is a bridge and no end …” ([1883–92] 1966: 198) (the bridge is one of Nietzsche’s most powerful metaphors). Only a fool—and, as we have seen, there are plenty of those—would insist on an inherent moral order of things.

Counterterrorism

George Jr.’s declaration of war on an abstraction, “terrorism,” is as senseless and dishonest as his predecessors’ earlier declarations of war on other abstractions: poverty, crime, drugs, the “Evil Empire.” Such acts are an alarming and depressing indication that the spirit, if not quite yet the institutions, of totalitarianism dominates the thinking of American policymakers. Those “wars” are conceivable only if one pretends that the lives of poor people, of criminals, of drug addicts, and, now, of “terrorists” are a thing apart from the healthy, whole body of American society. Pretends that those “deviates” are not members of our own families (and we don’t declare war on our family, even though we may live in a state of perpetual warfare with them!). Pretends that what “they” are is not intimately bound up with what “we” are. The totalitarian “we” is constituted through those declarations of war, thus enabling the most stupid and dishonest among us to tighten the noose on whatever sort of diversity, of human difference and creativity, seems at the moment to threaten that “we.” The immoralist’s inquiry exposes the futility and repugnant ugliness of this view. In his pathologist’s clinic, he observes that the body of American society laid out before him is anything but healthy and whole. It is covered with festering sores. Hence, the pathologist asks, why, when a Mohammed Atta applies a pinprick to one of those sores, should we be surprised when a foul-smelling pus oozes out? Let us be ruthlessly logical here. George Jr. declared war on “terrorism.” The American bully, stung by a bee (Atta’s pinprick), howls like the spoiled child it is, and then hurls its obese bulk at whatever has dared offend and challenge it. Battalions are mobilized, reservists called up, fleets reassigned. And on the home front, because the front is now our own homes, a great Department of Homeland Security is created. That new department promises to expand manyfold the powers of the police in a society that is already an embryonic police state. “Terrorists” lurk among us, in Oregon towns and Buffalo suburbs, as well as in Afghan caves and Philippine jungles. What are the results of this “war” being waged on two fronts? Several thousand Afghans are killed, among them hundreds of “foreign fighters.” A few hundred captured fighters—not “prisoners of war” but “unlawful combatants” without any protection by the Geneva accords—are transported to the other side of the world to squat in wire cages hastily assembled at the US base in Guantanamo, Cuba. And what of Osama bin Laden, Mullah Omar, and most of their chief lieutenants? Disappeared without a trace. Our archvillains remain in hiding and, with each day, recede further from the public’s imagination. What of the coterie of oily sheikhs who sponsored al-Qaeda and made all the carnage possible? Despite several crowing press releases about
cutting off those funds, George Jr. barely scratched the surface of the vast Arab wealth supporting the Islamist cause. And on the home front? Hundreds of arrests, months of detention in solitary confinement without charges and without visitors. Two or three fish are netted, principal among them Zacarias Moussaoui, and offered up for the usual ludicrous trials. Is this what we have to show for our “war on terrorism”? These are the flailing, bellowing, and mostly futile actions of an enraged bully. And worse than ineffective, they are dangerous, for two reasons. First, they demonstrate to the world yet again the inability of the US government to mount an effective response to crisis. The world witnesses the bully thrashing around, wasting its enormous resources, accomplishing little, and, most alarming, becoming obsessed with trifles along the way (shuffling cabinet responsibilities to make room for the Department of Homeland Security; hauling the FBI and CIA on the carpet to stir the ashes of intelligence on terrorism). All this simply reminds the world that this is the same government, the same bunch of vindictive old men—the “leader of the Free World”—that went into a coma for more than a year over that horror of horrors: Bill Clinton’s blow job. Out there in the world they’re still shaking their heads and laughing over that one: Clinton brought eight years of peace and prosperity and an astounding budget surplus, that’s all well and good, but how, we repeat, how can we ever forgive him for that nastiness in the Oval Office? What will we tell the children? It is beyond ridiculous. The world witnesses this puritanical frenzy, takes it all in, and waits for it to surface in the future, when a delusional American government will again make a fool of itself. Second, terrorists and would-be terrorists around the world (and there are doubtless tens of thousands of them) take in the spectacle of America’s rampage and grow stronger. It is precisely what they expected from the bully they despise. Donald Rumsfeld and Colin Powell can crow about American victories all they want; their words find a mocking audience in the cities and villages of the Middle East and Central Asia. The might of the US military destroyed the weak little Taliban regime in Afghanistan (while failing to capture most of its leaders) and returned that blighted land to its earlier chaos. How will the US nation-building effort proceed there, as the months and years pass and political assassinations multiply? Already the American media and government have grown tired of Afghanistan and its unsolvable problems; Saddam and Iraq (The Sequel) are now the hot topic. Meanwhile, in the villages and hillside encampments of Afghanistan and Pakistan, thousands of armed men bide their time. The bully’s rage spent, they will return to their old ways. To the limited extent the cultural pathologist can recommend prophylactic and therapeutic treatment for his diseased subject, he reviews the US response to 9/11 and finds it hopelessly inadequate. That response does not
begin to ameliorate or cure the disease he sees laid out before him. A “war on terrorism” is a mere ideological posture. What is required, our pathologist concludes, is counterterrorism. Counterterrorism is just that: it turns the actions and tactics of terrorism back on itself. It does not mount a full military campaign, reorganize government departments, convene military tribunals, or conduct mass arrests and endless detentions. Those are the predictable, wasteful, and ineffective acts of a totalitarian regime. Counterterrorism responds in kind to the injury inflicted. It is an eye-for-an-eye justice, an age-old system of pure revenge that has not quite been crushed under the accumulating weight of law books. Counterterrorism is swift and merciless, as swift and merciless as the attacks that precipitate it. And very tightly focused: it identifies the precise nature of the attack on itself and responds in kind. As Hannibal Lecter, quoting Marcus Aurelius, advised Clarice Starling, “Ask of each thing what it is in itself. What is its nature?” (The Silence of the Lambs) The Afghanistan campaign took months to get underway, and although George Jr. was praised for his deliberation, it was a pointless waste of time. Within hours of the attacks, the US government and soon thereafter the world knew the national origins of the hijackers (most of them our dear old friends and allies, the Saudis), the pivotal role of Osama bin Laden and his al-Qaeda organization, and the motivating cause of Islamist activism. That is the nature of the attacker; that is the nature of the thing in itself. And the nature of the attacked, of course, is large, prominent public buildings of both symbolic and instrumental importance to a nation. The American response, its counterterrorism, comes within days of the attacks. George Jr. does not first get on TV and mourn the dead, does not praise Islam while denouncing its fanatics, does not call for the guilty to come forward. The world has seen that self-righteous face of America and heard its whining voice too often and finds them pathetic. No, events take quite a different course. Within days of 9/11 the most prominent Wahabi mosques in Saudi Arabia, Egypt, and Pakistan are simultaneously obliterated, each targeted by a brace of Cruise missiles or the notorious “smart” bombs. For years the Wahabites, with enormous financial support from wealthy Saudis, have been indoctrinating Muslim youth with a virulent version of Islam, exhorting them to destroy the infidel. They now reap the whirlwind they have sown. On the same day, the corporate headquarters of Osama’s father’s Saudi construction firm becomes a plume of smoke and a pile of rubble, along with two or three of that firm’s major public works in Saudi Arabia. In a city we have already visited, Mazar-i-Sharif, the famous “Blue Mosque,” reportedly containing the tomb of the Prophet’s son-in-law, meets the same fate. In Islamabad, the headquarters of Pakistan’s notorious secret police force, the Inter-Services Intelligence Agency, which for years has openly supported the Taliban and Osama,
is obliterated. At the end of that fateful day, the Saudi air force, now a hornet’s nest of activity, issues an alarming communiqué: their radar has detected a single Cruise missile, coming in unusually high and on a direct course to impact the holiest of holies: Mecca! There are only seconds to react; some in the sacred site below turn their eyes up to await the infidel’s wrath. But there is no devastating explosion in the vast central plaza of the shrine; instead, the warhead impacts and releases an enormous cloud of leaflets, reminiscent of the gruesome cloud of confetti which swirled around the collapsed towers days before. A few stunned clerics pick up leaflets as they settle to the ground. The leaflets are inscribed with a message, in the most flowery classical Arabic of course: Fuck with the bull and you get the horn! Immediately following this airmail delivery, George Jr. is back on TV to announce that the US government now expects that the leaders of al-Qaeda and the Taliban, along with the most important financial contributors to those organizations, will be delivered to American authorities. Dead or alive, it matters little, as George Jr. made clear in his infamous “we’ll smoke ’em out; we’ll git ’em” speeches. Along with those criminals, the United States expects a reparations payment in the amount of fifty billion dollars, to be provided by the Saudi government and its wealthy citizens who have supported the Islamist cause. Failing the actual cash, a fifty-billion-dollar credit in US purchases of Saudi oil will be taken. These conditions must be met in two weeks. If not, the next deliveries to Mecca, Medina, Saudi royal palaces, and every Taliban stronghold will not contain leaflets. George Jr. notes that the Saudi palaces are on this list because the ostensible target of United Airlines flight 93, which crashed in Somerset County, Pennsylvania, was the White House. An eye for an eye. Counterterrorism. As president, he is not about to stand by while American landmarks and American lives are lost. And he is not about to send US troops halfway around the world to attempt to track down a bunch of fanatics who have gone to ground. Local authorities are far better suited to that on-the-ground guerrilla operation; US strikes are intended to encourage them to round up and deliver the killers. And the killers and the money had better be delivered. If not, more very bad things will happen. Fuck with the bull …

Multiple Realities, Superabundant Order

George Jr.’s actual conduct of his war on terrorism demonstrates that he has signed on to the going version of reality—signed on to it without knowing it or even beginning to think about it, since he is too dim-witted to do either. He actually believes that everyday life makes sense and can be controlled. Effects follow naturally from causes, so that individuals in the know—meaning the
conniving lowlifes who become politicians—can anticipate cause-and-effect sequences and, the most important part of all, jump into the mix to manipulate events. Common and comfortable as this way of thinking is, we have seen again and again that it is false. One thing leads to another often through sheer coincidence, so that the enormous number of events that shape an individual’s life comprises a vast web of circumstance. It is not that everything happens by chance, making life wholly unpredictable from one moment to the next; it is, rather, that chance enters into a sufficient number of events to make any large-scale prediction and control impossible. Human life, even in such supposed bastions of peace and tranquility as the United States, is conducted on the edge of chaos. To say that we live in “equilibrium” does not mean that things have arranged themselves into a (perhaps God-given) stability. The true meaning of social “equilibrium” is that a great many individuals, acting on the basis of conflicting intentions (intentions that regularly conflict even within a single individual), construct a society that is perpetually about to fly apart: a volatile and dangerous entity. The curious, thoroughly paradoxical twist to this circumstantial vision of reality is that it is the very opposite of saying that life is nonsensical, that nothing makes sense, that everything is random. It is impossible to arrive at clear and definitive interpretations of events not because things don’t fit together, but because things are interconnected in such a myriad of ways that no single pattern embraces all the possibilities, all the realities of social life. The entropic view of life, as proposed by, for example, Claude Lévi-Strauss in a morose passage toward the end of Tristes Tropiques ([1955] 1974), is directly contrary to the more recent take on things offered by complexity theory. Meaning is not a precious commodity that one derives after sifting through a plethora of inanimate “facts”; meaning, in the sense of ordered relations among elements, is superabundant, at least as common as those inanimate facts themselves. Remarkably, order spontaneously erupts from any dynamic combination of randomly distributed elements (Kauffman 1995). A primordial cloud of ionized gas coalesces into vast clusters of galaxies, each a dynamic entity with its internal process of life and death. Many stars in those galaxies are born with embryonic planets circling them. And on at least one of those planets (and how can there not be a multitude?) simple chemicals have combined to form molecules of incredible complexity, including DNA. That single molecule in turn has sparked the continuing evolution of uncounted millions of species. One of those species is even in the process of transforming itself, its very nature, through the use of a technology it has created from nothing.
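The reader who wants to see this claim in miniature can turn to the sandpile model of Bak and Chen (1991), the canonical toy model of the self-organized criticality invoked in this chapter’s title. What follows is a minimal sketch in Python, purely illustrative: the grid size and the number of grains dropped are arbitrary choices made for the demonstration, not parameters prescribed by Bak and Chen.

```python
# A toy version of the Bak-Tang-Wiesenfeld sandpile (cf. Bak and Chen 1991).
# Grains of sand are dropped one at a time on randomly chosen cells of a grid.
# Any cell holding four or more grains "topples," shedding one grain to each
# neighbor, which may topple in turn. Nothing is tuned, yet the avalanche
# sizes organize themselves into a heavy-tailed, scale-free distribution:
# order erupting spontaneously from randomly distributed elements.

import random
from collections import Counter

SIZE = 30        # grid dimension; arbitrary illustrative choice
DROPS = 20_000   # number of grains dropped; arbitrary illustrative choice

grid = [[0] * SIZE for _ in range(SIZE)]

def relax(x, y):
    """Topple until the pile is stable; return the avalanche size."""
    unstable = [(x, y)]
    topples = 0
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < 4:
            continue
        grid[i][j] -= 4
        topples += 1
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if 0 <= ni < SIZE and 0 <= nj < SIZE:  # edge grains fall off the table
                grid[ni][nj] += 1
                if grid[ni][nj] >= 4:
                    unstable.append((ni, nj))
        if grid[i][j] >= 4:  # a cell pushed past eight grains must topple again
            unstable.append((i, j))
    return topples

avalanches = Counter()
for _ in range(DROPS):
    x, y = random.randrange(SIZE), random.randrange(SIZE)
    grid[x][y] += 1
    avalanches[relax(x, y)] += 1

# Log-binned histogram: bin k collects avalanche sizes in [2**(k-1), 2**k).
# The roughly straight-line decay across bins is the power-law signature of
# criticality: most drops do nothing, while a few cascade across the grid.
bins = Counter()
for size, count in avalanches.items():
    if size > 0:
        bins[size.bit_length()] += count
for k in sorted(bins):
    print(f"sizes {2 ** (k - 1):>5}-{2 ** k - 1:>5}: {bins[k]}")
```

The point of the exercise is Bak’s and Kauffman’s, not a new one: no parameter in the sketch is adjusted toward a special value, yet the pile drives itself to the critical state in which avalanches of every scale, catastrophic ones included, are part of its normal functioning. That is the precise sense in which shit just happens.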
Human life and the societies it forms are a seething multiplicity. There is no correct, authoritative interpretation out there, waiting to be discovered by the perceptive analyst, because there are a multitude of interpretations, all locally valid to some extent, all impinging on and modifying one another. The awful danger, to return to the passage from Eco with which this essay began, is that people pretend to establish a single interpretation as the underlying truth of an event and thereby render existence terrible. It is an exceedingly difficult thing to admit to ourselves. Particularly when we confront tragedy, and most particularly when we confront tragedy on the grand scale of the WTC attack, we want to believe that things happen for a reason. “It was his time to go.” “It was God’s will.” “It was the diabolical scheme of evildoers” (which translates, in the idiom of the previous examples, as “Satan caused this to happen”). George Jr. and his ilk nourish (or say they nourish) the illusion that this land of ours is a wonderful place that would go on being wonderful if it were not for extremists who seek to disrupt its normal functioning for their own selfish and misguided ends. The worst sort of these agitators and militants is the terrorist, a true foreign devil: dark, penetrating eyes; a villainous beard; strange clothing; fanatical, heavily accented speech. Even if our various wars—on poverty, crime, drugs—sometimes make it difficult to tell the good guys from the bad guys (since they are usually part of our own families and communities), the war on terrorism is a pure exercise in Us versus Them. That is a lie. A lie, or, at the very least, a terrible, damaging mistake. Osama bin Laden was once our staunch ally, a valiant freedom fighter defending the noble Afghan people from the Evil Empire of Soviet communism (and if you don’t believe that, just watch Rambo III in which Sly joins up with the mujahadin and reprises his role of war hero; we now see what John Walker got for trying the same thing!). Mohammed Atta and his crews lived among us, a tiny splinter element of a vast Arab-American presence in our cities. At the same time, American political, economic, and military aggressions have penetrated every corner of the Islamic world. Muslims everywhere are force-fed a daily diet of TV coverage of American jets and American tanks, commanded by Israeli storm troopers, murdering thousands of defenseless Palestinians. Tens of thousands of Muslims and other disenfranchised peoples of the Third World would give their lives to avenge those and other American atrocities. And millions of their compatriots would witness with approval those acts of revenge. It is ludicrous to believe that even a totalitarian regime, such as the United States is becoming, can stem that tide of vengeance. Our already grotesque “defense” budget could be swollen several times over; our new Department of Homeland Security could induct thousands more internal police officers, thousands more Chekists; our satellite surveillance and its monitoring could be increased manyfold: terrorists would still strike their targets. It is not unlikely that a terrorist’s bomb, plane, or satchel filled with anthrax spores will kill hundreds and perhaps thousands of Americans in the future. The vast reservoir of hatred American governments and corporations have created piles up on the shores of our privileged and
delusional nation; who can doubt that it will spill over or break through here and there? Shit happens. The Palestinians have huddled in their concentration camps for decades, waiting every night for the bomb or missile to strike. That is the true meaning of terror, not the childish rubbish George Jr. peddles with his “war on terrorism.” The urgent question then is not whether George Jr. can fashion America into such a thoroughly totalitarian regime that it becomes possible to monitor everyone’s doings and thereby interdict a terrorist before he carries out his plan. That would be impossible to accomplish, and its mere attempt—which is now well underway—would destroy whatever Nietzschean nobility and beauty may reside in this land of malls and military bases. The urgent question is whether America will continue to act the enraged bully, lashing out at every puny insult to its selfish existence, or, by some miracle, pursue another course?

Things don’t happen for a reason. Things just happen.
(Alice Bowman [Meg Ryan] in Proof of Life [2000])

Notes

1. This chapter was written during the year following the 9/11 attacks and as such is infused with the emotions of that time.
2. A brilliant exception to this criticism is Kapferer 2004, which appeared a couple of years after the present essay was written.
3. Just as the nation of Guyana served merely as a backdrop for the “American story” of Jonestown.
4. Sheridan (1998), writing in the Sunday Times, provides a detailed and graphic account of those reports.
5. For Americans, as with Max Frisch’s characters in the pre-Nazi German society of Biedermann und die Brandstifter (Biedermann and the arsonists) (1953), we no longer worship God, but the fire department.

n Bibliography

Absolon, Karel. 1949. “The Diluvial Anthropomorphic Statuettes and Drawings, Especially the So-Called Venus Statuettes, Discovered in Moravia: A Comparative Study.” Artibus Asiae 12(3): 201–220.
Alien. 1979. Directed by Ridley Scott. Los Angeles: 20th Century Fox.
Aliens. 1986. Directed by James Cameron. Los Angeles: 20th Century Fox.
Alien 3. 1992. Directed by David Fincher. Los Angeles: 20th Century Fox.
Alien Resurrection. 1997. Directed by Jean-Pierre Jeunet. Los Angeles: 20th Century Fox.
Auel, Jean. 1980. Clan of the Cave Bear. New York: Crown Publishing Group.
———. 1980–2011. Earth’s Children. 6 vols. New York: Crown Publishing Group.
Bak, Per, and Kan Chen. 1991. “Self-Organized Criticality.” Scientific American, January, 46–53.
Barthes, Roland. 1974. S/Z, trans. Richard Miller. New York: Hill & Wang.
Bateson, Gregory, and Mary Catherine Bateson. 1987. Angels Fear: Towards an Epistemology of the Sacred. New York: Macmillan.
Benedict, Ruth. 1934. Patterns of Culture. Boston: Houghton Mifflin Company.
Berenguer, M. 1973. Prehistoric Man and His Art. London: Souvenir Press.
Bisson, Michael S., and Randall White. 1996. “Female Imagery from the Paleolithic: The Case of Grimaldi.” Culture.
Boon, James. 1983. “Functionalists Write, Too: Frazer/Malinowski and the Semiotics of the Monograph.” Semiotica 46(2–4): 131–150.
Burroughs, William. 1964. Nova Express. New York: Grove Press.
Camus, Albert. (1951) 1991. The Rebel: An Essay on Man in Revolt, trans. Anthony Bower. New York: Vintage Books.
Coleridge, Samuel Taylor. (1798) 1997. “The Rime of the Ancient Mariner.” In The Complete Poems of Samuel Taylor Coleridge, ed. William Keach, 147–166. New York: Penguin Books.
Curry, Andrew. 2012. “The Cave Art Debate.” Smithsonian Magazine, March. Retrieved 6 June 2017 from http://www.smithsonianmag.com/history/the-cave-art-debate-100617099/.
The Dead Pool. 1988. Directed by Buddy Van Horn. Los Angeles: Warner Bros.
Denevan, William M., ed. 1992. The Native Population of the Americas in 1492, 2nd rev. ed. Madison, WI: University of Wisconsin Press.
Derrida, Jacques. 1967. L’Ecriture et la Différence. Paris: Editions du Seuil.
Deutsch, David. 1997. The Fabric of Reality. New York: Viking Press.
Dictionary of American Slang. 1995. 3rd ed. New York: HarperCollins.
Dobb, Edwin. 2010. “Immersed in the Wild.” High Country News, 27 June. Retrieved 9 June 2017 from http://www.hcn.org/issues/42.11/immersed-in-the-wild.

Domhoff, G. William. (2005) 2017. “Wealth, Income, and Power.” Who Rules America?, April. Retrieved 6 June 2017 from http://www2.ucsc.edu/whorulesamerica/power/wealth.html.
Douglas, Mary. 1999. “Jokes.” In Implicit Meanings: Selected Essays in Anthropology, 146–164. London: Routledge.
Drummond, Lee. 1967–68. “The Dream and the Dance: A Comparative Study of Initiatory Death.” Unpublished ms. Retrieved 22 August 2017 from http://www.peripheralstudies.org/uploads/D_and_D.doc.
———. 1978. “The Transatlantic Nanny: Notes on a Comparative Semiotics of the Family in English-Speaking Societies.” American Ethnologist 5(1): 30–43.
———. 1980. “The Cultural Continuum: A Theory of Intersystems.” Man, n.s., 15(2): 352–374.
———. 1981. “The Serpent’s Children: Semiotics of Cultural Genesis in Arawak and Trobriand Myth.” American Ethnologist 8(3): 633–660.
———. 1983. “Jonestown: A Study in Ethnographic Discourse.” Semiotica 46(2–4): 167–210.
———. 1996. American Dreamtime: A Cultural Analysis of Popular Movies, and Their Implications for a Science of Humanity. Lanham, MD: Littlefield Adams Books.
———. 2010. “Culture, Mind, and Physical Reality: An Anthropological Essay.” Palm Springs, CA: Center for Peripheral Studies. Retrieved 7 June 2017 from www.peripheralstudies.org.
Dumont, Jean-Paul. 1978. The Headman and I: Ambiguity and Ambivalence in the Fieldworking Experience. Austin, TX: University of Texas Press.
Eco, Umberto. 1990. The Limits of Interpretation. Bloomington, IN: Indiana University Press.
Fernandez, James. 1971. “Persuasions and Performances: Of the Beast in Every Body … and the Metaphors of Everyman.” In Myth, Symbol, and Culture, ed. Clifford Geertz, 39–60. New York: W.W. Norton.
———. 1981. “Edification by Puzzlement.” In Explorations in African Systems of Thought, ed. Ivan Karp and Charles S. Bird, 44–59. Bloomington, IN: Indiana University Press.
Forrest Gump. 1994. Directed by Robert Zemeckis. Los Angeles: Paramount Pictures.
Freud, Sigmund. (1905) 1960. Jokes and Their Relation to the Unconscious, trans. and ed. James Strachey. New York: W.W. Norton.
———. (1927) 1961. The Future of an Illusion, trans. James Strachey. New York: W.W. Norton.
———. (1930) 1961. Civilization and Its Discontents, trans. and ed. James Strachey. New York: W.W. Norton.
Frisch, Max. (1953) 1986. Biedermann und die Brandstifter, ed. Peter Hutchinson. London: Methuen.
Galaty, John. 1982. “Being ‘Maasai’; Being ‘People-of-Cattle’: Ethnic Shifters in East Africa.” American Ethnologist 9(1): 1–20.
Geertz, Clifford. 1971. “Deep Play: Notes on the Balinese Cockfight.” In Myth, Symbol and Culture, ed. Clifford Geertz, 1–38. New York: W.W. Norton.
———. 1983. “From the Native’s Point of View.” In Local Knowledge: Further Essays in Interpretive Anthropology. New York: Basic Books.

Ghose, Tia. 2013. “Origin of Life: Did a Simple Pump Drive Process?” LiveScience, 10 January. Retrieved 6 June 2017 from http://www.livescience.com/26173-hydrothermal-vent-life-origins.html.
Goffman, Erving. 1971. Relations in Public: Microstudies of the Public Order. New York: Basic Books.
Guyana Chronicle. 1978. Special supplement on Jonestown, 6 December.
Harris, Marvin. 1978. “No End of Messiahs.” New York Times, 26 November.
Hart, Keith, and Anna Grimshaw. 1993–2000. Prickly Pear Pamphlets (no. 1–13). Retrieved 22 September 2017 from http://thememorybank.co.uk/2009/05/25/prickly-pear-pamphlets/.
Herzfeld, Michael. 1983. “Looking Both Ways: The Ethnographer in the Text.” Semiotica 46(2–4): 151–166.
Human Rights Watch. 1998. “Afghanistan: The Massacre in Mazar-i-Sharif.” Human Rights Watch Report 10(7): C. Retrieved 15 June 2017 from https://www.hrw.org/legacy/reports98/afghan/Afrepor0.htm.
Jennett, Karen. 2008. “Female Figurines of the Upper Paleolithic.” Honors thesis. San Marcos: Texas State University.
Kapferer, Bruce, ed. 2004. The World Trade Center and Global Crisis: Some Critical Perspectives. New York: Berghahn Books.
———, ed. 2004–. Critical Interventions: A Forum for Social Analysis. New York: Berghahn Books.
Kauffman, Stuart. 1995. At Home in the Universe: The Search for the Laws of Self-Organization and Complexity. New York: Oxford University Press.
Kilduff, Marshall, and Ron Javers. 1978. The Suicide Cult. New York: Bantam.
Krause, Charles A., et al. 1978. Guyana Massacre: The Eyewitness Account. New York: Berkley Books.
Krauss, Lawrence M. 2010. “Faith and Foolishness: When Religious Beliefs Become Dangerous.” Scientific American, 1 August. Retrieved 10 June 2017 from https://www.scientificamerican.com/article/faith-and-foolishness/.
Kristeva, Julia. 1969. Séméiotikè: Recherches pour une sémanalyse. Paris: Editions du Seuil.
———. 1977. Polylogue. Paris: Editions du Seuil.
Kroeber, Alfred. 1934. “Native American Population.” American Anthropologist, n.s., 36(1): 1–25.
Lane, Mark. 1980. The Strongest Poison. New York: Hawthorn Press.
Lévi-Strauss, Claude. (1955) 1974. Tristes Tropiques, trans. John Weightman and Doreen Weightman. New York: Atheneum Publishers.
Lewis, Gordon K. 1979. Gather with the Saints at the River: The Jonestown Guyana Holocaust of 1978. Rio Piedras, Puerto Rico: Institute of Caribbean Studies.
Lewis-Williams, David. 2002. The Mind in the Cave: Consciousness and the Origins of Art. London: Thames & Hudson.
Luquet, G.H. 1930. The Art and Religion of Fossil Man, trans. Russell Townsend Jr. New Haven, CT: Yale University Press.
Madigan, Nick. 2002. “Grim Guesswork Led to the Body of San Diego Girl.” New York Times, 1 March.
Marcus, George E., and Michael M.J. Fischer. 1986. Anthropology as Cultural Critique: An Experimental Moment in the Human Sciences. Chicago, IL: University of Chicago Press.

Martins, Dave. 1979. “Brother Jonesie.” West Indies Records.
Marx, Karl. (1852) 1968. “The Eighteenth Brumaire of Louis Bonaparte.” In Karl Marx and Frederick Engels: Selected Works, 95–180. New York: International Publishers.
McGranahan, Carole, and Uzma Z. Rizvi. 2016. “Decolonizing Anthropology.” Savage Minds. Retrieved 22 September 2017 from www.savageminds.org.
Mellars, Paul. 2004. “Neanderthals and the Modern Human Colonization of Europe.” Nature 432: 461–465.
Naipaul, Shiva. 1981. Journey to Nowhere: A New World Tragedy. New York: Simon & Schuster.
National Museum of Natural History. 2017. “Species.” What Does It Mean to Be Human? Smithsonian Institution, Human Origins Initiative. Retrieved 6 June 2017 from http://humanorigins.si.edu/evidence/human-fossils/species.
Nietzsche, Friedrich. (1872a) 1992. The Birth of Tragedy, trans. and ed. Walter Kaufmann. In Basic Writings of Nietzsche, 1–144. New York: Modern Library.
———. (1872b) 1954. “Homer’s Contest,” trans. and ed. Walter Kaufmann. In The Portable Nietzsche, 32–38. New York: Viking Press.
———. (1878) 1986. Human, All Too Human: A Book for Free Spirits, trans. R.J. Hollingdale. New York: Cambridge University Press.
———. (1881) 1997. Daybreak: Thoughts on the Prejudices of Morality, trans. R.J. Hollingdale, ed. Maudemarie Clark and Brian Leiter. Cambridge: Cambridge University Press.
———. (1882) 1974. The Gay Science, trans. Walter Kaufmann. New York: Vintage Books.
———. (1883–88) 1968. The Will to Power, trans. Walter Kaufmann and R.J. Hollingdale, ed. Walter Kaufmann. New York: Vintage Books. See also: https://archive.org/stream/TheWillToPower-Nietzsche/will_to_power-nietzsche_djvu.txt.
———. (1883–92) 1966. Thus Spoke Zarathustra: A Book for All and None, trans. Walter Kaufmann. New York: Viking Press.
———. (1886) 1992. Beyond Good and Evil, trans. and ed. Walter Kaufmann. In Basic Writings of Nietzsche, 179–435. New York: Modern Library.
———. (1887) 1992. On the Genealogy of Morals, trans. and ed. Walter Kaufmann. In Basic Writings of Nietzsche, 437–599. New York: Modern Library.
———. (1889) 1954. Twilight of the Idols, or How to Philosophize with a Hammer, trans. Walter Kaufmann. In The Portable Nietzsche, 463–563. New York: Viking Press.
Nimuendajú, Curt. 1946. The Eastern Timbira. Berkeley, CA: University of California Press.
Paglia, Camille. 1994. Vamps and Tramps: New Essays. New York: Vintage Books.
Pálsson, Gísli. 1995. The Textual Life of Savants: Ethnography, Iceland, and the Linguistic Turn. Reading, UK: Harwood Academic Publishers.
Parsons, Talcott. 1951. The Social System. London: Routledge & Kegan Paul.
Piette, Edouard. 1895. “La station de Brassempouy et les statuettes humaines de la période glyptique.” L’Anthropologie 6: 129–151.
———. 1907. L’art pendant l’âge du renne. Paris: Masson.
Proof of Life. 2000. Directed by Taylor Hackford. Los Angeles, CA: Warner Bros.
Rabinow, Paul. 1977. Reflections on Fieldwork in Morocco. Berkeley, CA: University of California Press.
Regnault, F. 1912. “La représentation de l’obésité dans l’art historique.” Bulletins et Mémoires de la Société d’Anthropologie de Paris 5(3): 229–233.

Reinach, Salomon. 1908. Cultes, Mythes, et Religions. Paris.
Reston, James, Jr. 1981. Our Father Who Art in Hell. New York: Times Books.
Riddle, John M. 1992. Contraception and Abortion from the Ancient World to the Renaissance. Cambridge, MA: Harvard University Press.
Rushdie, Salman. 2001. “Yes, This Is about Islam.” New York Times, 2 November. Retrieved 11 June 2017 from http://www.nytimes.com/2001/11/02/opinion/yes-this-is-about-islam.html.
Shepherd, Cyn. 2017. “Eric Harris’ Writing.” A Columbine Site. Retrieved 24 July 2017 from http://www.acolumbinesite.com/eric/writing/plans2.gif.
Sheridan, Michael. 1998. “How the Taliban Slaughtered Thousands of People.” Sunday Times, 1 November. Retrieved 11 July 2017 from http://www.rawa.org/times.htm.
The Silence of the Lambs. 1991. Directed by Jonathan Demme. Los Angeles: Orion Pictures.
Smith, Henry Nash. 1950. Virgin Land: The American West as Symbol and Myth. Cambridge, MA: Harvard University Press.
Sontag, Susan. 2001. “Tuesday and After: The Talk of the Town.” New Yorker, 24 September.
Steinbeck, John. 1939. The Grapes of Wrath. New York: Viking Press.
Steward, Julian. 1946. Handbook of South American Indians, vol. 6: The Comparative Ethnology of South American Indians. Washington, DC: Smithsonian Institution.
Storrs, Carina. 2010. “Endangered Species: Humans Might Have Faced Extinction 1 Million Years Ago.” Scientific American, 20 January. Retrieved 6 June 2017 from https://www.scientificamerican.com/article/early-human-population-size-genetic-diversity/.
Swan, Michael. 1958. The Marches of El Dorado: British Guiana, Brazil, Venezuela. London: J. Cape.
Thorp, Raymond W., and Robert Bunker. 1969. Crow Killer: The Saga of Liver-Eating Johnson. Bloomington, IN: Indiana University Press.
United Nations Security Council. 1998. “Security Council, Concerned by Deteriorating Situation in Afghanistan, Supports Establishment of Unit to Deter Human Rights Violations,” media release. New York: United Nations, 8 December. Retrieved 15 June 2017 from http://www.un.org/press/en/1998/19981208.sc6608.html.
United States Anti-Doping Agency. 2014. “USADA Game Plan 2020.” US Anti-Doping Agency website. Retrieved 22 August 2017 from https://www.usada.org/about/strategic-plan/.
United States House of Representatives. 1979. The Assassination of Representative Leo J. Ryan and the Jonestown Guyana Tragedy. Report of a Staff Investigative Group to the Committee on Foreign Affairs. Washington, DC: Government Printing Office.
White, Randall. 1997. “Substantial Acts: From Materials to Meaning in Upper Paleolithic Representation.” In Beyond Art: Pleistocene Image and Symbol, ed. D. Stratmann, M. Conkey, and O. Soffer. San Francisco, CA: California Academy of Sciences.
———. 2003. Prehistoric Art: The Symbolic Journey of Humankind. New York: Abrams Books.
———. 2006. “The Women of Brassempouy: A Century of Research and Interpretation.” Journal of Archaeological Method and Theory 13(4): 251–304.
White, Ron. 2003. “Plane Crash.” CD audio. Track 3 on Drunk in Public. Hip-O Records.
Whitehead, Alfred North, and Bertrand Russell. 1910–13. Principia Mathematica. Cambridge: Cambridge University Press.
Wiener, Norbert. 1948. Cybernetics, or Control and Communication in the Animal and the Machine. Cambridge, MA: MIT Press.

Wilford, John Noble. 2010. “Lucy’s Kin Carved Up a Meaty Meal, Scientists Say.” New York Times, 11 August. Retrieved 14 October 2014 from http://www.nytimes.com/2010/08/12/science/12tools.html.
Wittgenstein, Ludwig. 1967. Zettel, trans. G.E.M. Anscombe, ed. G.E.M. Anscombe and G.H. von Wright. Berkeley, CA: University of California Press.
Woodstock. 1970. Directed by Michael Wadleigh. Los Angeles: Warner Bros.
Yeats, William Butler. 1919. “The Second Coming.” In The Dial: A Fortnightly. New York: The Dial Publishing.

n Index

9/11 attacks, 3, 5, 6, 9, 10, 13, 20, 120–130, 132, 133, 135, 136–138, 142–144, 147, 148, 150, 152, 158, 161, 166, 167, 168, 171, 172, 176
abortifacients, 87, 88
abortion, 3, 7, 8, 55, 63, 70, 73, 74, 88, 89, 95, 181
Absolon, Karel, 83, 177
Adams, Kathleen, 21, 26
Afghanistan, 10, 142, 144, 145–148, 171, 172, 179
Alien Resurrection, 65, 66, 68, 69, 72, 96, 177
Aliens movies, 3, 5, 7, 8, 55, 58, 62–75, 79, 96, 177
All the President’s Men, 22
Al-Qaeda, 139, 143, 163, 170, 172, 173
alterity, 2, 4, 12
ambivalence, 38, 47, 48, 73–75, 95, 112, 135, 178
American Anthropological Association, 7, 14, 54, 171
American Dreamtime, 4, 19, 64, 178
American Ethnologist, 4, 60, 178
American society, 1, 2, 4, 7, 24, 72, 73, 75, 89, 109, 111, 112, 117, 118, 128, 129, 137, 154, 155, 170
Amerindian(s), 7, 15, 23, 25, 26, 28, 29, 30, 32, 33, 41, 42, 49–51, 87, 88
animate / inanimate, 62, 63, 75, 76, 174
anthropological semiotics, 2, 17, 19, 43, 177, 178
anthropological voice, 1
anthropology / anthropologists, 1–7, 10–12, 14, 16, 17, 19–22, 38, 39, 43, 44, 47, 53–61, 64, 69, 75, 79–81, 82, 83, 87, 93, 98, 102, 104, 112, 113, 120, 130–133, 135, 158, 160, 178–180
Anthropology News, 14
Anthropology Today, 14
Arawak, 15, 19, 38, 49, 50, 81, 178
Aristotle, 62, 63, 75, 76
Armstrong, Lance, 3, 5, 6, 8, 9, 98, 99, 102–104, 106–109, 118, 119
Atta, Mohammed, 126, 127, 131, 137, 138, 147, 148, 160, 162, 163, 167, 170, 175
Auel, Jean, 91, 92, 177
Awakaipu, 23, 24, 27, 39, 46
Baartman, Saartjie, 82, 84
Bak, Per, 126–128, 177
Barthes, Roland, 48, 177
Bateson, Gregory and Mary C., 77, 177
bauxite, 28–30, 35
Bekeranta, 23–25, 27
Benedict, Ruth, 18, 60, 130–132, 157–159, 177
Beyond Good and Evil, 125, 129, 159, 180
biotechnology, 7, 73, 75, 77, 105
Bisson, Michael, 93, 177
black market, 40, 42, 43
Boas, Franz, 43, 59
Boon, James, 47, 177
Booze Theory of Civilization, 92
brisure, 21
Brother Jonesie, 36–40, 54, 179
bully, 132, 133, 170, 171, 176
Burnham, Forbes, 25, 28, 29, 32, 34, 35, 36, 41, 42, 48
Burnham, Viola, 41, 42

Burroughs, William, 91, 112, 177
cable news, 1, 98, 110, 114
Camus, Albert, 108, 109, 112, 177
Cantor Fitzgerald, 168
Castaneda, Carlos, 88
Castiglione, 2
cause-and-effect / causality, 126–128, 174
Chauvet, 80
Chen, Kan, 126–128, 177
Church, Catholic, 86, 87, 89, 90, 111
climate change, 2
CNN, 134, 138, 143, 144
Coleridge, Samuel, 55, 56, 107, 108, 177
colonialism, 11, 20, 28–31, 35, 40, 46, 88
competing-to-win / competition, 8, 9, 99, 101–103, 109–116, 118, 130
complex system, 10, 127, 129
complexity theory, 10, 126, 128, 141, 147, 154, 155, 163, 174
cooperative, 29, 30, 32, 34
Cooperative Republic, 15, 29, 30, 32, 35
corporation(s), 12, 29, 60, 90, 102, 114, 135, 169, 175
cotton, 34, 35
counterterrorism, 170, 172, 173
coward(ice), 2, 123, 128, 130, 132, 157, 164, 165
Critical Interventions, 14, 179
Cro-Magnon, 82, 91
Cronkite, Walter, 22, 38
cultural analysis, 1, 2, 4, 6–8, 55, 57, 61–64, 74, 109, 112, 120, 125, 130–132, 149, 178
Cultural Centre, 51–53
cultural critique, 1, 2, 179
culture(s), 2, 6, 8, 11, 18, 19, 20, 26, 39, 41, 44–48, 50, 52, 53–55, 57, 58–61, 69, 70, 72, 73, 76, 77, 79–81, 82, 83, 86, 87, 89–91, 95, 97, 100, 101, 105, 106, 110, 112, 114, 116, 119, 120, 127, 130, 131, 144, 155, 158, 165, 169, 177, 178
culture is adaptive and rational, 19, 57, 58, 59, 60, 61, 72, 76, 77, 79, 89, 91, 92
Cyrene, 87

Darwin, Charles, 57–59
data (and analysis), 17, 18, 21
Daybreak, 9, 180
Department of Social Relations, 10
Derrida, Jacques, 21, 44, 177
Deutsch, David, 62, 75, 177
discrepant meanings, 5–7, 9, 61, 97
dissection, 3, 6, 7, 8, 9, 20, 57, 112, 135, 136, 142, 148, 153, 163, 166
Douglas, Mary, 12, 178
Drummond, Lee, 4, 45, 53, 75, 97, 119, 120, 178
E.T., 56, 62
early modern human, 78, 79
Earth’s Children, 91, 178
earthquake(s), 10, 20, 138–140, 142, 146, 148, 155
Eco, Umberto, 120, 124, 149, 151, 175, 178
Educational Testing Service (ETS), 117, 118
Einstein, Albert, 117, 118, 157, 167, 169
elemental dilemma, 63, 71, 75, 79, 109
empathic perspective, 3, 17, 18, 21
English Creole, 15, 30
essentialist, 11
ethnocentrism, 6, 9, 131
ethnographic event, 16–20, 26, 38, 39, 45, 47, 48
ethnography, 7, 16–18, 21, 26, 39, 43–48, 57, 61, 64, 131, 180
ethnos, 43, 46, 47
Evans-Pritchard, E.E., 46, 64
event, nature of, 5–11, 13–16, 22, 23, 26, 27, 36, 38, 39, 45, 48, 53, 82, 105, 123–130, 132, 133, 135–137, 146, 152, 155, 156, 165, 175
evolution, 55, 57, 58, 74, 75, 76, 78, 93, 101, 115, 169
Falwell, Jerry, 121, 122, 125
female figurines, 80, 81, 91, 179
female-centered set of behaviors, 80
Fernandez, James, 53, 109, 178
fertility cult, 83, 86, 91

Field of Dreams, 103
field, fieldwork, 3, 17, 21, 43–47, 53, 61, 65, 88, 178, 179
Firth, Raymond, 46, 64
Fischer, Michael M.J., 1, 179
folk taxonomy, 8, 60, 101
Foucault, Michel, 65
Freud, Sigmund, 12, 60, 111, 112, 178
Frisch, Max, 176, 178
Geertz, Clifford, 3, 53, 178
George Jr. (Bush), 9, 57, 119, 122–125, 127–130, 132, 133, 135, 139, 141, 142, 144, 147, 149, 154, 158, 159, 163, 165, 167, 170–173, 175, 176
Georgetown, Guyana, 7, 15, 23–25, 31–33, 35–37, 39–43, 49, 50–52
Gê-speaking tribes, 9
Gluckman, Max, 60
Goddess, Mother Earth, 91–93
Goffman, Erving, 137, 154, 155, 179
Gravettian, 81, 94
Grimshaw, Anna, 14, 179
Guyana, 3, 6, 7, 15, 18, 19, 21–24, 26, 27–31, 33, 34, 36, 37, 39–43, 46, 49, 50, 52–54, 176, 179, 181
Guyana Chronicle, 22, 40, 42, 179
Harris, Marvin, 6, 7, 9, 21, 53, 54, 179
Hart, Keith, 14, 179
Harvard University, 10, 11, 181
Hazara, 145
Herzfeld, Michael, 21, 46, 47, 179
Hohle Fels, 81
homeostasis, 10, 11
hominin, 58, 59, 76–79, 96, 97, 104
Homo genus, 77, 78, 97, 104, 105
Homo sapiens, 76–79
human reproduction, 7, 63, 69–71, 73–76, 79, 87
Human, All Too Human, 95, 180
humanity, 4, 11, 55, 58, 60, 62, 63, 65, 69, 71, 72, 75–77, 79–82, 96, 97, 101, 120, 153, 157, 166, 169, 178

immoralist, 9, 120, 125, 132–134, 136, 140, 142, 144, 149, 157–170
interior development, 7, 21, 23–26, 28–35, 39, 49–51
intertextual, 47, 131
irresolvable contradiction, 109
Islam, 111, 122–124, 129, 136, 142, 144, 146, 158, 162, 171–173, 175, 181
Jennett, Karen, 82, 83, 91, 97, 179
Jennings, Peter, 125, 130, 133, 141, 144, 147, 149, 153, 156
Jesus, 123, 158, 168, 169
joke, 12, 13, 50, 97, 100, 117, 178
Jones, Jim, 6, 15, 22, 24, 25, 27, 33–37, 43, 48, 53
Jonestown, 3, 5–9, 15, 16, 19–27, 30, 33, 34, 36–43, 45–48, 51–54, 121, 149, 176, 178, 179, 181
Joyce, James, 47, 48
Kabakaburi, 49–51, 53
Kafka, Franz, 15, 16, 18, 150, 154
Kapferer, Bruce, 14, 176, 179
Kimbia, 33–35
Kluckhohn, Clyde, 10
Kristeva, Julia, 44, 46, 47, 179
Kroeber, Alfred, 113, 179
Kubla Khan, 108
Lascaux, 80
lens (of analysis), 6–8, 47, 86, 125, 129
Lévi-Strauss, Claude, 8, 12, 60, 75, 100, 112, 179
Lewis-Williams, David, 80, 179
logic, 10, 48, 74, 108, 121
Maher, Bill, 137, 138
male-centered set of behaviors, 80
Malinowski, Bronislaw, 43, 46, 177
Manhattan, 121, 128, 138, 147, 148, 161
Marcus, George, 1, 179
Martins, Dave, 37, 38, 40, 54, 179
Marx, Karl, 45, 60, 82, 113, 116, 141, 144, 180

Mazar-i-Sharif, 13, 142–149, 151, 155, 164, 172, 179
Mecca, 173
millenarian cult, 7, 23, 54
myth, 12, 19, 56–59, 61–65, 69–73, 76, 79, 81, 119, 178, 180, 181
Mythologiques, 12
Naipaul, V.S., 167, 180
National Service (Pioneer), 33, 34
native’s point of view, 3, 178
natural vs. cultural (Nature/Culture), 8, 9, 59, 63, 70–73, 75, 77, 86, 96, 100, 101, 103–106, 131, 138, 151
Neolithic revolution, 86, 91, 92
New York Times, 6, 21, 122, 179, 181, 182
Newspeak, 134, 135
Nietzsche, Friedrich, 2, 9, 10, 12, 56, 70, 73, 86, 93, 95, 96, 112, 114, 116, 118, 120, 125, 126, 129, 130, 148, 149, 157–160, 164, 165–167, 169, 176, 180
Nimuendajú, Curt, 115, 116, 119, 180
“No End of Messiahs”, 6, 179
Nova Mob, 90, 92
obstetrical dilemma, 76, 77, 94, 95
Olmsted, Bishop Thomas, 89, 90
On the Genealogy of Morals, 125, 159, 180
Open Anthropology Cooperative, 14
Oprah, 5, 8, 98, 100, 102–104, 119
Osama bin Laden, 122–125, 129, 130, 135, 138, 142, 145, 153, 154, 156, 158, 170, 172
Our Father Who Art in Hell, 36, 181
Paglia, Camille, 74, 180
Pakistan, 145, 146, 171, 172
Paleolithic, 66, 79–81, 83, 86–89, 91–95, 177, 179, 181
paleo-porn, 82, 83, 86
Parsons, Talcott, 10, 11, 83, 180
pathologist of the social, 3, 10
pathologist, cultural, 3, 6, 8, 9, 112, 133, 142, 148, 154, 160, 161, 163, 170–172
pattern variables, 10

Patterns of Culture, 130, 131, 158, 177
Pearl Harbor, 134
Pegasus hotel, 37, 40
Peoples Temple, 6, 7, 25, 26
performance-enhancing drugs, 8, 100, 102, 103, 105, 108, 115, 116
perspective (vis-à-vis scale), 13, 132, 133, 135–138, 141, 145–149, 167, 170
Picasso, 77, 167, 169
Piette, Edouard, 81, 82, 93, 97, 180
Pioneer (Cooperators), 27, 32, 33, 35, 36, 39, 46
“Plane Crash”, 12–14, 181
plantations (sugar), 29, 30
Pleistocene, 59, 97, 182
politically correct, 2, 64, 77, 157, 159
Pomeroon River, 25
popular culture, 8, 39, 110
Port Kaituma, 25, 40–42
postmodernism, 86, 131–133, 158–160
postmodernist anthropology, 131
precarity, 2, 4, 12
prehistory, 77, 79, 91, 97, 177, 181
Prickly Paradigm Press, 14
Prickly Pear Pamphlets, 14, 179
Pynchon, Thomas, 47
quantum entanglement, 5, 6, 7
race, racism, 2, 7, 12, 25, 27, 28, 32, 35, 40, 59, 60, 82, 89, 93, 97, 111
Radcliffe-Brown, A.R., 20
recomposition, 4
reggae, 7, 36–38, 54
Relations in Public, 137, 154, 179
Reston, James Jr., 36, 38, 53, 181
revaluation of all values, 10, 125, 148, 159, 169
Rime of the Ancient Mariner, 55, 107, 177
Ripley, Ellen, 66–73, 75, 77, 79, 96
ritual, 4, 38, 56, 59, 61, 78–81, 94
Rodin sculptures, 168
Rodney, Walter, 25
Rolling Stones, 106
Roseasharn, 89

Rushdie, Salman, 122, 181
Russell, B., and A.N. Whitehead, 108, 109, 181
Ryan, Leo, 22, 40, 41, 181
Sahlins, Marshall, 14
SAT, 116–118
Saudi Arabia, 124, 162, 172, 173
Savage Minds, 14, 180
scale (vis-à-vis perspective), 13, 41, 118, 128, 132–135, 141, 145–149, 168, 175
scene of the crash, 11–13
Schomburgk, Richard, 23
self-organized criticality, 9, 10, 120, 126, 128, 129, 148, 155, 177, 179
semiotics, 2, 17, 19, 43, 53, 56, 76, 97, 177–179
shaman, 56, 81, 87, 88, 94
Simon & Garfunkel, 60
social thought, 10
Sontag, Susan, 121, 122, 125, 130, 181
Spann, Mike, 142–146
Spengler, Oswald, 58
Star Wars, 56, 62, 138
Steinbeck, John, 89, 90, 181
Steward, Julian, 113, 181
stories (Georgetown gossip), 7, 40–42
structural functionalism, 10, 21
supergrosser movie, 55, 57, 61, 64
Survivor, 9, 111, 114–117, 142, 144
Swan, Michael, 54, 181
taboo, 8, 95, 96
Taliban, 142–146, 149, 158, 171–173, 181
Tangshan, 140–142, 146, 155, 164
terrorism, 3, 6, 31, 121, 123, 124, 129, 132, 137, 138, 146, 150, 159, 161–163, 170–173, 175, 176
text, 18, 21, 26, 37, 40, 46, 47, 76, 132, 163, 179
The Mind in the Cave, 80, 180
The Social System, 10, 180
Timbira, 114, 116, 119, 181

Timehri airport, 40, 42
tool / machine / artifact, 34, 35, 57, 63, 65, 71, 75–77, 79, 94, 96, 103–105, 113, 167, 181
Tour de France, 98, 99, 103–105, 118
Tradewinds, 37, 38, 54
train, runaway, 11, 58, 59, 80
Twilight of the Idols, 2, 70, 180
Tylor, Edward, 44, 58
U.S. Anti-Doping Agency (USADA), 100, 101–104, 106, 107, 115, 181
Ulysses, 48
United States, 1, 4, 9, 10, 14, 21, 24, 26, 27, 29, 31, 36–38, 40–43, 49, 53, 54, 64, 98, 99, 109, 112, 113, 136, 139, 141, 143, 145, 147, 153, 163, 170–175, 181
value of human life, 10, 13, 136, 141, 155, 166, 169
Vamps & Tramps, 74, 180
van Dam, Danielle, 150–153, 155–157
Venus figurines, 82, 85, 177
Walker Lindh, John, 142–146, 158, 175
warm little pond, 59, 62
Weaver, Sigourney, 7, 55, 56, 58, 62
Westerfield, David, 150–153, 156, 158
whiff of carrion, 2, 3, 109
White, Randall, 83, 177, 181
White, Ron, 12, 13, 181
Wiener, Norbert, 10, 181
Will to Power, 9, 180
Wittgenstein, Ludwig, 61, 182
Woodstock, 15, 16, 18, 19, 22, 27, 38, 39, 51, 53, 182
World Trade Center, 10, 120, 121, 133, 135, 137, 140, 142, 146, 147, 155–157, 160–168, 175, 179
World War II, 10
xenophobia, 12, 61, 62, 70, 158, 164, 167