Under the Radar
Critical Issues in Health and Medicine Edited by Rima D. Apple, University of Wisconsin–Madison, and Janet Golden, Rutgers University, Camden Growing criticism of the U.S. healthcare system is coming from consumers, politicians, the media, activists, and healthcare professionals. Critical Issues in Health and Medicine is a collection of books that explores these contemporary dilemmas from a variety of perspectives, among them political, legal, historical, sociological, and comparative, and with attention to crucial dimensions such as race, gender, ethnicity, sexuality, and culture. For other books in the series, please see our Web site, http://rutgerspress.rutgers.edu.
Under the Radar Cancer and the Cold War
Ellen Leopold
Rutgers University Press New Brunswick, New Jersey and London
Library of Congress Cataloging-in-Publication Data
Leopold, Ellen
Under the radar : cancer and the cold war / Ellen Leopold.
p. ; cm. — (Critical issues in health and medicine)
Includes bibliographical references and index.
ISBN 978-0-8135-4404-5 (hardcover : alk. paper)
1. Cancer—United States—History—20th century. 2. Cobalt—Isotopes—Therapeutic use—United States—History—20th century. 3. Cold War—Health aspects—United States. 4. Radiation carcinogenesis—United States. 5. Radioactive fallout—United States. 6. Informed consent (Medical law)—United States. I. Title. II. Series.
[DNLM: 1. Neoplasms—history—United States. 2. Cobalt Radioisotopes—history—United States. 3. History, 20th Century—United States. 4. Human Experimentation—history—United States. 5. Patient Rights—history—United States. 6. Radioactive Fallout—history—United States. 7. Radiotherapy—history—United States. QZ 11 AA1 L587u 2009]
RC276.L46 2009
363.738—dc22
2008007751

A British Cataloging-in-Publication record for this book is available from the British Library.

“Miss Gee” copyright 1937, 1940 and renewed 1968 by W. H. Auden, from Collected Poems by W. H. Auden. Used by permission of Random House, Inc., Faber and Faber Ltd., and Curtis Brown Ltd.

Copyright © 2009 by Ellen Leopold
All rights reserved
No part of this book may be reproduced or utilized in any form or by any means, electronic or mechanical, or by any information storage and retrieval system, without written permission from the publisher. Please contact Rutgers University Press, 100 Joyce Kilmer Avenue, Piscataway, NJ 08854–8099. The only exception to this prohibition is “fair use” as defined by U.S. copyright law.

Visit our Web site: http://rutgerspress.rutgers.edu

Manufactured in the United States of America
We have put poisonous and biologically potent chemicals indiscriminately into the hands of persons largely or wholly ignorant of their potentials for harm. We have subjected enormous numbers of people to contact with these poisons, without their consent and often without their knowledge. . . . The public must decide whether it wishes to continue on the present road, and it can do so only when in full possession of the facts. —Rachel Carson, Silent Spring
Contents

Preface   ix
Introduction   1
Chapter 1   Double Jeopardy: Cancer and “Cure”   30
Chapter 2   The Court Considers Informed Consent   42
Chapter 3   The Rise of Radioactive Cobalt   59
Chapter 4   The Cobalt Back Story: “A Little of the Buchenwald Touch”   80
Chapter 5   Behind the Fallout Controversy: The Public, the Press, and Conflicts of Interest   109
Chapter 6   Cancer and Fallout: Science by Circumvention   131
Chapter 7   Paradise Lost: Radiation Enters the Mainstream   147
Chapter 8   Subdued by the System: Cancer in the Courts, Compensation, and the Changing Concept of Risk   164
Chapter 9   Hidden Assassin: The Individual at Fault   191
Chapter 10   Experiments by Other Means: Clinical Trials and the Primacy of Treatment over Prevention   207
Notes   235
Index   275
Preface
Until the last quarter of the twentieth century, very few Americans wrote or published accounts of their personal experience of cancer. This was not because there were no survivors—there were—but because the culture could not yet tolerate the revelations of intimacy that such chronicles exposed. So the individual experience of cancer remained for the most part as uncontroversial and as unexplored as the experience of domestic violence or sexual abuse, just one of many crises that families were expected to deal with on their own. What transpired behind closed doors remained behind closed doors, breeding cruelties and coping strategies that, for the most part, went unobserved and unrecorded. So I was startled to see, in a footnote some years ago, a reference to a medical malpractice suit brought by a woman treated for breast cancer in the 1950s. She not only had a name—Irma Natanson—but she left behind, in the public record, detailed evidence of the private experience of cancer in the 1950s. We now know that the patient’s candid response—to diagnosis, doctors, treatment, recurrence—plays a critical role in moderating cancer management practices. It also helps to keep the more controversial aspects of the disease alive and before the public. This kind of countervailing influence was completely absent from the collective experience of the disease half a century ago. Irma Natanson’s experience makes that perfectly clear. But it also goes further, hinting at the presence of a very different dynamic at work, hastening the arrival of a new and unproven cancer therapy (cobalt radiation). The clues I found in the legal case suggested a murkier back story than those usually associated with medical innovations. So I set out to piece together a plausible explanation for what I came to see as a fateful intersection between a powerful new technology and a defenseless patient. As so often happens, the pursuit led in surprising directions, in this case extending well beyond the realm of science and into the unfamiliar territory of the Cold War. There I found evidence of a political pragmatism—distinctly more military than medical—that put the postwar history of cancer treatment in a new light. Of course, my unorthodox speculations on the long-term repercussions of an ideology on a disease were baffling to some. It was hard, especially at the beginning, to give a wholly convincing account of what I was up to. But
my friends, to their credit, proved a patient and tactful lot, willing to give me the benefit of the doubt. For their forbearance and advice, I am extremely grateful. Thank you, Rita Arditti, Erin Clermont, Ernie Ebenspanger, Sarah Flynn, Margo Golden, Jean Hardisty, Catherine Leopold, John Mason, Rachel Massey, Martha Matzke, Katherine Powers, Sandra Reeves, Lily Saint, Judith Vichniac, and Lynne Walker. For their willingness to share professional or scholarly expertise with a virtual stranger, I would like to thank Rosalie Bertell, Richard T. Foster, Philip L. Fradkin, Bernie Gottschalk, Dan Guttman, Sheldon Krimsky, Joseph Lyon, Warren Winkelstein, Jr., and the anonymous reader who reviewed the manuscript for Rutgers University Press. The usual disclaimer about their not being held responsible for the views I express applies with particular force here. Yes, the book takes a critical look at the history of medical radiation to demonstrate the complexity of our response to disease. But it does not impugn the good faith of anyone involved in the practice of nuclear medicine. My own firsthand experience is proof enough of the commitment and concern of those actively involved in the control of cancer. I should also add that what I’ve written is neither a comprehensive history of the Cold War nor one of cancer during the Cold War period, but is limited to an exploration of their interaction over the second half of the twentieth century. Each of the larger topics commands a separate literature of its own, although interest in the former (measured by shelf space alone) vastly outweighs interest in the latter. Devotees of the history of cancer, though steadily gaining in numbers, still remain more of a coterie than a crowd among contemporary social scientists. My use of primary and other sources reflects this imbalance. Archival evidence of the Cold War is prodigious; locating and gaining access to recently declassified documents can be daunting. The task was made easier for me by librarians and archivists who were unfazed by the drift of their dark materials. Among the many knowledgeable public servants who provided assistance were Marjorie Ciarlante at the National Archives; Jeff Gordon at the Department of Energy Archives, Nevada; and Ryan Robertson at the United States District Court, Utah. For their equally energetic support, I must add Amanda Engineer at the Wellcome Library in London and the ever resourceful Inter Library Loan service at Harvard University. I would also like to acknowledge those who helped me navigate the choppy waters of publishing—Allen Graubard, Lindy Hess, Jill Kneerim, Ellen Reeves, Zick Rubin, and Deanne Urmy. Most especially, I would like
to thank Doreen Valentine, my editor at Rutgers University Press, whose skillful stewardship kept the project on track and whose vision of the book held me to high standards. My copy editor Alice Calaprice and members of the production staff at Rutgers were also a pleasure to work with. I hope the family of Irma Natanson will find this retelling of her ordeal more a testament to her extraordinary courage than a source of renewed anguish. Finally, I owe a fundamental debt to Rachel Carson who, like Natanson, also died of cancer. Carson first drew the world’s attention to many of the issues raised here. Much of what follows is, in fact, a tribute to the astonishing prescience of Silent Spring. The book invented a way of thinking and a mode of inquiry that have now, almost fifty years later, become indispensable.
Under the Radar
Introduction
Irma Natanson was a young housewife and mother in Wichita, Kansas, when she was diagnosed with breast cancer in 1955. Medicalized overnight, she was, by today’s standards, fast-tracked through consultation and treatment. Her entrée into a complex medical ordeal was precipitate, unrestrained by the kind of critical negotiation between doctor and patient that has become more common over the past few decades. Unfamiliar with both the language of disease and the logic of prevailing treatment regimes, Natanson, through no fault of her own, became a passive player in the decision-making process. Faced with what was considered a medical emergency, she was given no time to discuss or weigh various treatment options (including the option of no treatment) and no opportunity to express her own doubts or concern. Her passage from diagnosis to medical intervention, therefore, must have been short and streamlined. Natanson’s cancer was discovered by her gynecologist during a routine physical exam. She had a radical mastectomy soon afterward and just a week after that had her first appointment with the radiologist recommended by her surgeon. As “luck” would have it, her diagnosis coincided with the introduction of a new but largely untried therapy, cobalt radiation, commonly referred to as the “cobalt bomb.” Newly installed in the hospital where she was treated, it was waiting to be used. With no knowledge of its novelty—or of its potential hazards—Natanson agreed to the recommended course of treatment. The consequences for her were dire, leaving her maimed and disabled for the rest of her life. She suffered years of prolonged and painful medical
treatment aimed not at her cancer but at the damage caused by treatment for it. She endured long bouts of hospitalization in a specialist hospital far away from her husband and children. And she was forced to bear many of the exceptional costs of medical care herself. Natanson fought back in the only way she could, taking her grievance to court. A decade before the passage of the Civil Rights Act, when discrimination of every kind was still legal, litigation remained one of the few options open to women seeking redress for perceived injustices. But this would be no piece of cake, at a time when a woman plaintiff could still be judged by an all-male jury. Natanson was undeterred. She formally charged her radiologist with the failure to warn her of the risks of treatment and with the failure to administer that treatment properly. The ensuing lawsuit, Natanson v. Kline, lays bare the details of her suffering as it opens the door to a wider discussion of informed consent. In daring to relive the details of her affliction in open court, Natanson forced the difficult but important issue of patient autonomy—her “right to know” and the consequences of not knowing—into the public record. Her personal story became the catalyst for a landmark legal case. Where did cobalt radiation come from? What safeguards, if any, accompanied its introduction to the treatment arsenal? Who determined its readiness for use or assumed responsibility for its unintended consequences? To answer these questions requires something akin to a natural history of treatment. More than fifty years later, that history is readily discernible. The emergence of cobalt radiation in the mid-1950s is inextricably tied to the history of the United States in the decade following the end of the Second World War. The treatment relies upon radioactive cobalt, an isotope produced in the same nuclear reactors that had been used to generate plutonium for the world’s first atom bombs. So, although it was not a component of the weapons dropped on Japan in August 1945, it was, nevertheless, in at the birth of nuclear energy. In the same spirit of intervention with which it had managed the war, the federal government stepped in immediately after the end of hostilities to underwrite the transition back to a peacetime economy. The potential value of radioactive isotopes (such as cobalt) was quickly recognized. In fueling the resurgence of industry on an unprecedented scale, public money favored, in particular, those newer sectors like chemicals, pharmaceuticals, and medical technologies that were most likely to exploit the isotopes and other chemicals first developed—at government expense—for use in wartime weapons manufacture. Now they were to be converted into sophisticated tools for, among many other uses, the diagnosis and treatment of disease.1
And, as these experimental products left government labs and moved into private R & D setups, they began another important change in status, from tightly regulated research prototypes to mass-produced commodities. Therapeutic equipment, instruments, diagnostic tests, drugs—all would come to be marketed like any other goods, subjected to the same financial and performance criteria. The story of cobalt radiation captures this shift in industrial outlook. The primary consumers of the new products would be hospitals, physicians, and patients. Of the three, the least well prepared for the transition to market medicine would be patients, especially women patients. For them, interactions with the world of medicine were a holdover from an earlier, essentially precapitalist era. An annual visit to the family doctor was less an economic transaction than a personal visit for a reassuring chat. For the most part, patients still accepted their doctor’s authority and advice without hesitation. This was not an exchange between equals, especially where cancer was concerned. As something close to a death sentence in the 1950s, a diagnosis of cancer would only make matters worse, adding a desperate vulnerability to an already established dependency between patient and physician. A doctor might come to represent a patient’s last hope, all that stood between her and extinction. Despite the obvious paternalism in this relationship, outside the doctor’s office docile behavior was becoming more of a liability than a virtue. Increasingly, women were being asked to play a more active role in the postwar economy, as either members of the workforce, housewives with the power of the purse, or both. These roles drew on their competence and on their ability to absorb and make sense of the barrage of product information that clamored for their attention. Women quickly became informed consumers. But learning how to choose between different brands of washing machines or detergents did little to prepare them for the more consequential role of informed patient. There was, in fact, little that encouraged any crossover between domestic responsibilities, undertaken largely on behalf of others, and personal responsibility for one’s own welfare. Nor was there any natural affinity between the purchase of commodities and the use of medical services (the treatment of the latter as another form of consumption was an idea that did not really take off until much later). In fact, public health services made little effort at the time to initiate prospective patients into the mysteries of medical science; their campaigns continued to focus on prevention and early detection, not on the details of disease or
treatment. That is, they were more interested in getting patients into treatment than in enlightening them about what they would find there. The medical establishment was equally indifferent to the education of the patient (and in particular the female patient). If anything, it aggravated the situation by reinforcing, in the ad pages of professional journals, an image of female helplessness—the depressed woman patient portrayed in her bathrobe in a dark room, desperately in need of the tranquilizing that new drugs like Miltown could provide. As the American woman’s postwar purchasing power grew, so too did this contradiction between the submissive client bowing to the superior wisdom of professionals and the savvy consumer with access to markets and disposable income. The first presupposed a passive relationship based on trust, with information flowing in one direction only. The second cast women in an active role, responsible for gathering whatever information was needed to make educated decisions in the marketplace. If they failed to inform themselves properly, they had no one to blame but themselves—“buyer beware!” The social and economic role of women, in other words, was clearly open to conflicting interpretation in the 1950s, a clear sign of the status quo under stress. Were those like Irma Natanson, newly diagnosed with disease, to be thought of as passive patients, clients, or consumers? Was the physician’s authority absolute or was it subject to challenge and negotiation? Who set the limits on the choices that women could make? There were no clear answers to these questions at the start of the Cold War. Fissures had begun to appear in the traditional doctor/patient relationship but they remained poorly understood. The Natanson trial exposes these tensions, mapping all the conflicting uses of “choice” that were prevalent in the 1950s. Natanson had “chosen” a treatment that destroyed her body, but she then also “chose” to go to court, to hold her medical advisers accountable for their part in misleading her. Her action set in motion a debate on the emerging doctrine of informed consent. This was, essentially, a contest for self-determination, for the right to participate in meaningful decision making from a truly informed perspective. How the cobalt radiation Natanson endured came to be construed as her choice and how the subsequent medical ordeal came to be viewed as her responsibility tell us as much about the experience of cancer today as they do about the singular experience of a Kansas housewife in 1955. The frictions aired in the Natanson case eventually crystallized into distinctive strategies that led down the road to improved legal protections for the patient, on the one hand, but equally to the dominance of free-market
health care and the rise of victim-blaming, on the other. All are hallmarks of cancer medicine today. They point to a disease that has been shaped as much by politics as by medical science. It is the larger political climate of the time that proved to be so influential, not the partisan quarrels of elected politicians or the insider intrigues of the cancer establishment. Quite a lot has been written from the latter perspective, documenting the behind-the-scenes power plays in and among federal agencies and organizations that set research priorities and fund cancer initiatives (the National Cancer Institute, the Department of Defense, the American Cancer Society, and so forth). A few historians have adopted a broader angle of vision and make some important connections between, say, the policies of the Carter and Reagan administrations and the prejudices of cancer research.2 These studies recognize cancer policy not as the reflection of a common objective but as a source of often messy conflict among competing interest groups (scientists, patients, physicians, lobbyists, and so on). Studies of individual cancers elaborate on this tension between the various stakeholders in the cancer enterprise.3 It is much harder, however, to find any sustained analysis that explicitly frames an argument around the history of an ideology in practice. But this is an exercise well worth undertaking. Many of the features of our response to cancer today can, I believe, be traced back to the aspirations of the Cold War. From the 1950s through the 1980s, the disease was uniquely intertwined with the characteristic undertakings and covert operations of the period. Almost every aspect of the current approach to the disease bears the imprint of this Cold War entanglement. The special terror and guilt that cancer evokes, the prominence of radiation therapies in the treatment arsenal, the current bias toward individual rather than corporate responsibility for rising incidence rates, toward research that promotes treatment rather than prevention, toward treatments that can be patented and marketed—all reflect a largely hidden history shaped by the Cold War. The Cold War can be viewed less as a foreign policy than as a domestic ideology and an economic strategy—that is, as both anti-communist and pro-capitalist in intent. Put very simply, the domestic policies of the period, in the name of national security, restricted individual freedoms and put social injustices on hold in order to focus on the single-minded pursuit of a nuclear weapons buildup. At the same time, the government made exceptional resources available to private industry in an effort to expedite the nation’s economic recovery and showcase the dynamism of laissez-faire capitalism.4 These two strategic strands were designed to work in tandem
to generate an image of the United States as both militarily and economically invincible. The pervasive influence of these objectives on American culture has been well documented.5 Little, however, has been written about their influence on less tangible aspects of society, such as the response to disease. This is a serious omission if the Cold War’s impact on the management of cancer has been both consequential and enduring. While many of the more apparent manifestations of the period are now viewed with the mockery of a cultural outlook that has moved on, the contemporary response to cancer is still marked by the legacy of its Cold War involvement. The Cold War began to heat up in the late 1940s. In 1947, the Truman Doctrine announced the United States’ plan to keep Greece and Turkey from falling into communist hands as so many countries in Eastern Europe had done. Two years later, the Soviets detonated a nuclear weapon and communists took control of China. In 1950, the United States entered the civil war in Korea on the side of the South, to prevent a further communist takeover by the North. Suddenly, the relaxation of wartime controls and the return to normal civilian life in the United States came to a halt. So too did the feeling of hard-won security, the sense of safety that Americans believed they deserved after their critical participation in the theaters of war in Europe and East Asia. Now the country was alarmed by the apparent expansionist ambitions of the Soviet Union. Was it conceivable that America would be attacked on its own soil? With Pearl Harbor still a fresh memory, the question of military defense loomed large. The perceived state of unpreparedness prompted a massive buildup of new atomic weaponry.6 But would the new bombs and missiles perform as planned? No simulated detonations could guarantee that they would. Only real explosions of real weapons would tell the military planners what they needed to know. And this could only be carried out on American soil. The decision to launch an atmospheric testing program in the Nevada desert in 1951 had to be explained—and justified—to the American people. Unlike genuine military interventions, the Cold War had no real hostilities and no actual battlefield deaths to report. To whip up and sustain a sense of danger, anti-Soviet aggression had to be manufactured—and continually renewed. The United States had to wage a war of words, relying on images and metaphors to reignite the patriotic fervor that had been inflamed by Pearl Harbor and kept alive over the next four years by American military campaigns—and casualties—in every theater of war.
The Cold War propaganda machine was primed for the task. It fashioned a serviceable rhetoric that drew heavily on hyperbolic scare tactics. The USSR was portrayed as an enemy with a prodigious propensity for self-aggrandizement, and communism as “a plot to steal the world.”7 This was enough to justify the expansion of the American nuclear arsenal; the defense of the nation required it. The Soviet state was reduced to a faceless evil, given features and flaws as the shifting geopolitical situation demanded. As an amorphous but all-encompassing malevolence, it was used as a foil to drive home the superiority of all things American. It could be invoked to stir up comparisons between the freedom of democracy and the oppression of totalitarianism, or, just as easily, to extol the glories of private enterprise set off against the failures of collectivism, shackled by state direction and control. The starkness of these contrasts—that admitted no subtexts or shading of any kind—fostered a “binary logic of paranoia” that kept the fear of “apocalyptic destruction” at fever pitch.8 It would come to vex the public imagination in much the same way that the mantras of Big Brother came to terrorize the population in George Orwell’s 1984. Beyond these abstract economic and political antagonisms were parallel oppositions closer to home. Here, reductive caricatures of the communist worldview allowed the advantages of American community life to shine. Home owners celebrated the virtues of self-reliance while their Russian counterparts were forced to suffer the indignities of life in bleak and overcrowded public housing. The new American suburb reflected a social system that rewarded the loyal and deserving (especially returning GIs); it could hardly have been more at odds with the threat of deportation to the gulag, a punishment awaiting those disloyal to the Soviet state. But the advantages of American life came at a price. Truman’s loyalty program, introduced in 1947, spelled out the costs of these highly prized domestic freedoms. The new program permitted investigations into the private lives and friendships of federal employees (over time, the FBI would carry out checks on more than two million government workers). Its inauguration put Americans on notice that the search for disloyalty would take precedence over existing civil liberties wherever and whenever necessary. If there were “reds under the beds,” they had to be flushed out. As Joseph McCarthy and his ilk intended, such threats bred suspicion of the familiar. It was, they believed, just as plausible and important to find evidence of betrayal in the home as it was to look for it in public life. Nothing was sacred; intimate relations were as vulnerable to corruption as
anything else. A woman in Utica, New York, was able to have her marriage annulled in 1950 on the sole grounds that her husband was a communist.9 Cold War propaganda did not hesitate to use cancer to magnify these fears, drawing upon the same kind of conceptually crude rhetoric used to craft the American/Soviet comparisons. From what was essentially an intimate condition of the body, it conjured a malevolent abstraction and unleashed it as an avatar of the Soviet menace, the “cancer of communism.” Now inflated to monstrous proportions, the new incarnation was let loose at a time when there were no competing images of cancer to contend with, no “human face” that might tether the disease to lived experience. It would be another thirty years before the public stories (of celebrities like Betty Ford and Nancy Reagan) would shift the perspective, bringing cancer back to earth. Before that, the American public was rarely invited to share the experience of living with the disease, offered only hard evidence of those dying of it. So there was little to offset the Cold War’s vision of cancer as a ravenous fury stalking the American landscape. The disease was commandeered into the Cold War lexicon to add a layer of visceral terror to a geopolitical threat that too often seemed remote and abstract. In shaping it into a useful tool of psychological warfare, propaganda cherry-picked exactly those attributes of cancer that served its purpose, that is, those most likely to terrify, to raise the specter of a deadly fate awaiting those who fell victim to communism. Air-brushed out of the picture were all cancer’s messier features—its troubling biological diversity, its often unpredictable course through the body (including the occasional mystifying remission), its variable incidence rates, and its established links to occupational hazards.10 Cold War propaganda turned its back on these complicating factors, just as it did with the more complex realities of Soviet life. To acknowledge them at all would dilute the image it wanted to convey, of a hopeless contest between a force of nature and a helpless victim. Cancer was to represent the ultimate violation, a sign that all defenses had been breached; in penetrating both the home and the body, it too became a threat to American family life. To Western democracies, disease and communism had been bedfellows from the earliest days of the Soviet regime. But before the end of the Second World War, it was contagion that served as the dominant metaphor of transmission. In 1919, a headline in the New York Times warned that “Bolshevism is Spreading in Europe: All Neutral Countries Now Feel the Infection.” Churchill, at the same time, said that sending Lenin into Russia was like sending “a phial containing a culture of typhoid or cholera to be
poured into the water supply of a great city.”11 It was not until the United States dropped the first atomic bombs on Japan that the stakes were raised. Cancer then came to replace infection as the primary demon, with radioactivity replacing contagion as the agent of dissemination. Unlike typhoid and cholera, which by then were controllable, cancer remained incurable and so retained the power to terrorize. Cancer’s substitution for epidemic disease in Cold War rhetoric marked the ratcheting up of geopolitical hostilities and the accompanying proliferation of nuclear weapons. Politicians were eager to exploit the increase in ideological toxicity. George Kennan, chief of the United States’ diplomatic mission in Moscow, set the tone in his influential “long telegram” of 1946. In it, he expressed the view that “communism is like [a] malignant parasite which feeds only on disease tissue.”12 The document helped to harden U.S. opposition to the Soviet Union, dismissing support for appeasement and opting instead for the more aggressive policy of containment, set in motion on the ground by the creation of NATO in 1949. The bleakness of Kennan’s metaphor brings home the futility of compromise. These are forces of nature, it seems to imply, implacable and nonnegotiable. Diplomacy stands no chance against them; only the threat of armed aggression can contain Soviet expansionism. A decade later, in the run-up to the 1956 election, the Republican Party, using what had by then become a common conceit, emphasized the progress that had been made against the “cancer of communism.”13 The USSR’s deeply suspect political philosophy no longer simply “spread” from one country to another; it now “metastasized.” Soviet-style communism could, in other words, infiltrate American society as furtively as the disease could invade the American body, with equally deadly consequences. Entangling the two magnified a general sense of vulnerability. The only defense against the twin assaults—cancer as pathology, cancer as ideology—was, the message seemed to be, vigilance. Americans were expected to make a stand against both. And it was not much of a stretch to make individuals somehow accountable for both. The Cold Warriors would do nothing to disturb this blurring of the lines. Indeed, anything that mobilized personal responsibility was a tool they were eager to exploit. If that included tacit support for a misplaced sense of failure, so be it. Americans would surely want to avoid the sense of letting down their country, being found guilty of either civic or personal disloyalty, especially after the passage of the McCarran Act in 1950—which required “subversives” to register with the attorney general—when the definition of
“disloyal” became much more elastic. In shifting the traditional boundaries between American and un-American behavior, the new legislation made any endeavor deemed to be “prejudicial to the public interest” grounds for summary arrest and deportation. Under the new ruling, anything might be judged transgressive. If ideas could get you into trouble, why not illness? Cancer was, after all, just as unhealthy as communism. The publication, in English, of Aleksandr Solzhenitsyn’s Cancer Ward helped to reinforce this solidarity between sickness and the Soviet system. The credentials of its author were unassailable. As the darling of Soviet dissidents, an insider with personal experience of the gulag—and of cancer—Solzhenitsyn provided an imprimatur that was a gift to Cold War propaganda. For readers and nonreaders alike, the book’s title endorsed an uninflected view of the USSR that had been encouraged for decades. Published in 1968, Cancer Ward appeared well before the emergence of illness narratives in the United States. At the time, Americans had no way to interpret the experiences depicted in the book, no basis for comparison. When cancer chronicles began to emerge in the United States twenty years later, they were individual renderings of “personal journeys,” not a collective and fictionalized portrait of patients struggling within a deeply alien medical and political system. Inevitably, American readers failed to absorb the story as a saga of disease. Set at the height of the Cold War (in 1955), it was the novel’s metaphorical use of cancer that captured the public’s attention, highlighting the pathology of the communist regime rather than the pathology of disease.14 The private perception of cancer remained very much in the shadow of this public persona. The little that was known or understood about the underlying causes or mechanisms of disease was easy enough to brush aside, clearing a space for the elaboration of a disembodied metaphor, cut loose from human biology. The most disturbing attributes of Soviet nuclear warfare—its global reach, its ability to inflict widespread and long-lasting harm, its element of surprise that exposes the futility of preemptive defense—all could now be identified with cancer, amplifying the terrors already associated with the disease. It was the recognition of an evil intent that now endowed cancer, like communism, with a sinister will to do harm. The disease was, in other words, animated by its malevolence, displaying a human trait rather than a biological one. That, in turn, aroused a more complex human response, converting cancer into an adversary rather than a biological (or genetic) accident. Cold War rhetoric would exploit this error. Designed to terrify, the image would prove to be extremely virulent—and almost impossible to dislodge.
The magical thinking that the Cold War encouraged would inevitably complicate reactions to individual diagnoses. But precisely because it courted the irrational, such responses could not be aired. And, for the most part, they were not.15 Even between a doctor and a patient, a diagnosis typically remained classified information, hovering in silence over the evasions that enabled physicians to avoid the discomfort of disclosure. Elmer Bobst, the chairman of Warner-Lambert Pharmaceuticals in the 1950s, wrote, twenty years later, that cancer “was and is perhaps the most frightening word in medicine, almost unmentionable in private conversation.”16 This perfectly expressed the “cancer phobia” that the surgeon George Crile Jr. decried. “Cancer has become the whipping boy for all diseases, a symbol of the fear of death,” he wrote in 1955. “It is possible that today cancer phobia causes more suffering than cancer itself.”17 In its willingness to milk the disease for propaganda purposes, Cold War culture overlooked its impact on the millions of Americans with cancer histories of their own. Every time the word “metastasis” was used to intensify geopolitical fears, it also intensified the malaise of Americans who had themselves been diagnosed with a malignancy. Their private fears resonated with public incrimination. The fact that the culture tolerated it (and still does) says a great deal about the power of taboos. “Cancer,” “metastasis,” “tumor,” and “malignancy” survive as metaphors because cancer advocates have not objected to them. Although other diseases can also turn lethal without warning, none has been drafted to serve the language of insurrection or social disorder in this way. Cancer imagery is still in common use today, although now it is more likely to describe terrorism than communism. A 2006 report on the worsening terrorist threat in Iraq “asserts that Islamic radicalism, rather than being in retreat, has metastasized and spread across the globe.”18 A reviewer in the New York Times describes a book about Iraq as chronicling “America’s flailing efforts to contain a metastasizing insurgency.”19 In the same paper a week later, a Lebanese American financier, “referring to Hezbollah’s entrenched position in southern Lebanon,” urges us to “get rid of this cancer.”20 These are just a few examples of usage that has become so widespread it is no longer remarkable. Cancer entered the lexicon as a scapegoat well before the advent of citizen health movements or political correctness. Activists who have more recently taken up the cause of individual diseases are now very sensitive to the nuances of media representation and voluble in their criticism of press coverage. AIDS advocates since the early 1980s were adamant about the
language used to describe them: “We condemn attempts to label us as ‘victims’ which implies passivity, helplessness, and dependence upon the care of others. We are ‘people with AIDS.’ ”21 Such an insistence on self-definition reflects an awareness of the politics of illness that did not exist in its current form before AIDS activism; it would have been unimaginable in the 1950s and 1960s. The use of the word “victim” here and throughout this book is a reminder of the very different consciousness that preceded the rise of disease advocacy; it denotes those who stood no chance of moderating their experience of illness. Cancer metaphors did not remain the exclusive property of Cold War rhetoric. As tensions with the Soviets began to ease in the 1960s, such metaphors found their way to other social ills as well, adding to a heightened perception of danger. Citizens were alerted by the media to “the cancerous spread of the pornography ‘business’ ” and to the “cancerous growth of crime.”22 Stewart Alsop, a well-known Cold Warrior, wrote an article for Newsweek called “Smell of Death: Heroin Malignancy in New York.”23 A condemnation of the McCarran Act appeared in the left-leaning weekly, The Nation, under the heading “A New Lease on Malignancy.” The legislation, it argued, “bludgeons the very heart of free expression . . . the quest for communists cannot be continued without hysteria and witch hunting.”24 Cancer metaphors were, in other words, a bipartisan habit, wielded by the Left as well as by the Right. Liberalism, as a set of progressive beliefs, was also frequently tarnished with the cancer brush, inheriting the opprobrium once attached more commonly to the threat of communism or socialism. Inevitably, cancer became a metaphor-for-hire, to be used polemically, more to manufacture a social ill than to amplify ills already existing. Maurice Stans, Eisenhower’s budget director (later indicted in the Watergate scandal), argued in 1960 that the nation’s economy was being threatened by “compulsive spending [and] cancerous taxation.”25 The imagery was useful to the business sector as well. In the late 1950s, food chains fought against the “cancerous evil” of trading stamps that were “eating away the profits of independent supermarkets.”26 On another front, the need for business insurance in dangerous inner city areas had become “a blight more cancerous than violence.”27 The addition of cancer to the descriptive mix was a shorthand alert to danger on a slow fuse and would, it was hoped, attract the critical attention necessary to prompt a public outcry. Used so promiscuously, the disease eventually became an all-purpose signifier, prodding all its targets in the same direction—toward dystopia. “By the 1950s,” wrote Spencer Weart,
author of Nuclear Fear, “the word cancer had come to stand for any kind of insidious and dreadful corruption.” To the list of those tainted by association with the disease, Weart added “prostitutes, bureaucrats, or any other despised group as a social ‘cancer,’ while those people afflicted with real tumors came to feel shamefully invaded and defiled.”28 In this framework, real people with real cancer became unpleasant reminders of the drive toward terminal social disorder. Americans diagnosed with cancer found little public sympathy. Compared to the “real” victims of the “real” war—the 400,000 American heroes who had perished on battlefields—victims of cancer were invisible and undeserving. Throughout the late 1940s and 1950s, the memory of the war dead remained fresh in the minds of all of those personally affected as well as those preoccupied with their official commemoration. The construction of World War II memorials in almost every town in the United States extended well into the 1950s and beyond. Their diversity and ubiquity (plaques and obelisks, parks and playgrounds in the thousands) kept patriotic fervor at a high pitch, at least into the 1960s. They were reminders of sacrifice, of Americans doing their duty, loyal to a cause. Cancer deaths, by comparison, had just the opposite effect. Occurring right under our noses rather than on some foreign field, they served no higher purpose whatsoever, invoking no finer feelings and no uplifting collective memory. Instead, they remained unconnected individual events, tapping into a disquieting if unspoken anxiety, fed by the suspected link between cancer and subversion. Compounding the difficulties for those diagnosed with the disease was the appropriation of God by the American side of the Cold War conflict. The protection of a higher power added a sense of security which the godless Soviets, it was believed, could only envy. But preempting God also served to reinforce the authority of earthly powers, fortifying deference and obedience to elected government as well as to divinity. The addition of the words “under God” to the Pledge of Allegiance, legislated in 1954, brought these realms into an even closer alliance. The secretary of the navy, in a speech at the Boston Naval Shipyard in 1950, celebrated the connection, glorifying the United States as “the repository of the Ark of the Covenant” and the “custodians of the Holy Grail.”29 The presumed collusion of divine authority in Cold War maneuvers would surely intensify the fatalism of those diagnosed with cancer after the start of the nuclear testing program. If God was on our side during this time of national crisis, then, for many, personal misfortune must surely be part of His Plan.
The inveterate status of cancer as a public “problem” continues to defy easy explanations. It is not a subject that attracts much attention, especially now that cancer books aimed at lay readers focus primarily on personal accounts of disease. But even with the proliferation of this literature and, in general, an unprecedented openness about cancer, many people still find it a touchy subject and are uncertain how the revelation of a diagnosis will be received in polite society. No other disease is so caught up with questions of personal responsibility and blame. Even heart disease, which every year kills more Americans—women and men—fails to rouse the passions in the way that cancer does. As the cultural critic Susan Sontag put it, “No one asks ‘Why me?’ who gets cholera or typhus. But ‘Why me?’ . . . is the question of many who learn they have cancer.”30 What accounts for this curious lament? Part of the answer lies in the behavior of the disease itself. Rather than announcing itself to the world through a single characteristic symptom—the swollen glands that marked out victims of the plague or the paralysis of polio—cancer proceeds by stealth. The disease can share symptoms with dozens of other conditions and might not be suspected until it has spread to vital organs and become life threatening. It can set up camp within the body and remain there, hidden away, silently gathering force and then quickly overcoming its defenseless host. Viewed through this lens, a victim was less “afflicted” than “ambushed.” The parallels with the dynamics of Soviet infiltration were obvious. The fear mongering that kept paranoia alive throughout the Cold War, with its repeated warnings of the dangers of letting down one’s guard, would only intensify the commingling of personal and geopolitical terrors. Both fed a sense of doom. This long latency period was also, in the American psyche, unique. The common experience of disease was for the most part limited to chronic conditions that were not usually life threatening or to those that struck whole populations of people at once, overwhelming their victims within days, sometimes even within hours. Cancer was altogether more indolent. Even after a diagnosis, the pace of disease could be slow compared to the wildfire transmission of true epidemics. Cancer does not “pass through” a community or burn itself out; its occupation of our bodies—and our imagination—is endemic if not epidemic. But it is not the natural history of the disease alone that has determined its singular status. Just as responsible has been the social history of cancer over the past half century, and, in particular, its involuntary intimacy with the dominant source of Cold War power, atomic energy. The bombs
dropped over Japan introduced the world to radioactivity—the release of energy in the form of radiation, through the disintegration of atomic nuclei. Harnessed by human ingenuity, controlled fission created the potential for destructive power on an undreamed-of scale. Hiroshima and Nagasaki put that power on display in August 1945. But this was just the beginning. Atomic energy turned out to be alarmingly versatile. Many of the nuclear by-products of the bomb development process created opportunities for the civilian economy as well as for military defense, in applications that were potentially constructive rather than destructive. Intervening in the course of disease was one of them. Cancer had the distinction of being perhaps uniquely involved in human activities that both caused and attempted to cure disease. It was implicated in the fallout controversy that raged throughout the 1950s and 1960s as the disease began to appear, first among the Japanese bomb survivors, and then among residents living downwind of the Nevada Test Site (the so-called downwinders). Cancer also appeared among uranium miners under contract to the U.S. government. On the other hand, it was a disease that could be held in check by X-ray treatment. In other words, the same awesome but poorly understood process lay at the heart of both nuclear weaponry and what came to be called nuclear medicine, encompassing the use of radioactive materials in the diagnosis and treatment of disease. Not surprisingly, an uncomfortable circularity troubled our understanding of the atomic world. This was recognized early—“Atomic energy: cancer cure or cancer cause?” asked a popular science magazine in 1947.31 The Atomic Energy Commission (AEC) itself embodied the difficulty. The legislation that set up the organization in 1946 formalized the lack of a clear demarcation between the promotion of nuclear energy on the one hand and protection from it on the other. The AEC was given responsibility for both—for the development of nuclear power and for its regulation—a fox-and-henhouse arrangement that was open to abuse.32 A parallel lack of clear-cut boundaries in practice between levels of radiation exposure deemed to be harmful and those considered beneficial further complicated the picture. The sharp dichotomy between applications of atomic energy was itself a product of the Cold War. X-rays existed well before that. But generated by electricity rather than by nuclear reactors, their primary use was neither to destroy nor to heal but to serve essentially in a more neutral capacity—that is, to illuminate the inner body, to identify broken bones, tuberculosis,
dental caries, and other disorders invisible to the naked eye. But the diagnostic potential of X-rays that had proved so useful on World War II battlefields—allowing medics to locate bullet wounds quickly—was overshadowed throughout much of the Cold War by the applications of atomic energy that, at their extremes, involved much higher doses of radiation. The advent of cobalt marked this change; its more powerful beams created significant opportunities for the diagnosis and treatment of cancers that hitherto lay beyond the reach of medical management. Radioactive cobalt now provided access to tumors lying deep within the body, bringing many more cancers within the reach of treatment. But if the potential rewards of harnessing higher energy were great, so too were its dangers. In all its applications, medical and military, nuclear power demanded a tolerance for imprecision that could be unnerving. The “yield” of an atomic bomb could not be predicted with any more accuracy than could the weather conditions at the moment of the blast. Unexpected wind shifts could radically alter the extent and intensity of fallout carrying a radioactive cloud (with all its attendant risks) to a densely populated area hundreds or even thousands of miles off course. A dangerous uncertainty hovered over every test shot. A similar sense of risk dogged the early use of radiotherapies. With little uniformity between one machine and another, between one technician and another, and little established data on dose levels proven to be safe and effective, successful treatment outcomes were just as unpredictable. Of course those on the receiving end were unaware of this inconsistency. Patients like Irma Natanson had no more of an idea about the risks of cobalt treatment than Nevada sheep farmers had about the risks of living downwind from the nuclear test sites. Neither was given any warning in advance—or any honest explanation after the fact. Habits of secrecy intervened to keep them in the dark. Irma Natanson and the Nevada downwinders were among the first generation of Americans to be exposed to atomic energy as a by-product of the wartime weapons program. Since then, radioactivity has been incorporated within hundreds of products and processes—at the dentist’s and doctor’s office, the hospital, the research lab, the airport, the farm, the industrial workplace. We have been comprehensively seduced by its protean properties. We have been less consumed by the question of its hazards, which to this day remain incompletely understood. We still cannot document, with any certainty, the carcinogenic consequences of this avalanche of exposure that we have visited upon ourselves. We choose not to keep track of our
radiation exposures over the course of a lifetime. Lacking any working concept of a radiation audit, we lose sight of the cumulative impact of exposure and close down opportunities for research. The fact that cancers induced by radiation (or by any industrial carcinogen) are indistinguishable from those occurring naturally makes it impossible to establish unambiguous cause and effect between exposure and disease. But if the links between suspected carcinogens and cancer cannot be confirmed beyond the shadow of a doubt, neither can they be disproved. In the context of this uncertainty, every diagnosis raises the possibility of a disease that we have, as a society, inflicted on ourselves. It is this disturbing sense of collective responsibility that is missing from the aura of other illnesses. Heart attacks and strokes happen to individuals and are experienced as personal disasters. They conjure up no sense of communal guilt, none of the Cold War malaise that associates cancer with radioactive fallout, with nuclear warfare and its Japanese—and American—victims. In the wake of Hiroshima, cancer lost its innocence and became something of a mutant hybrid—part disease, part weapon, part scourge. At some level well beyond the reach of public discourse, the confounding of the constructive and deadly aspects of radiation may have encouraged the idea of cancer as a disease somehow complicit in its own malignity. It is, of course, fiendishly difficult to reconstruct the consciousness of any earlier time, especially when the perspective and the political framework have changed so radically in the intervening years. Still, it seems reasonable to speculate that the contradictions that have plagued our sensitivity to cancer have left their mark. The split personality of radioactive material, its ability to kill or to cure, is, after all, a creature of our own making, subject, at least theoretically, to our correction. The firewalls we have set up between its two faces may turn out to be only convenient fictions designed to comfort—or trick—us. The application of the same science to produce both the atom bomb and the cobalt “bomb” induces a kind of cognitive dissonance (the consideration, in the early 1950s, of a real cobalt bomb, a radiological weapon that “might lead to the death of everybody in the world,” only magnified the confusion).33 This is one reason the calamity that befell Irma Natanson is so disturbing, even today; it undermines our carefully constructed defenses. In her case, the destructive power of radioactivity made itself visible on the wrong side of the fence. Incontrovertible evidence of bodily harm threatens our learned response to X-rays, undermining a willed belief in the possibility of defining—and enforcing—a clear demarcation between the two faces of radiation.
The contradiction exposed here echoes the larger contradictions of the Atoms for Peace program inaugurated by Eisenhower in his speech to the United Nations in 1953. This was a campaign designed to win support for the development of a “super” (thermonuclear) bomb by highlighting the vast potential of civilian applications of atomic energy. On the one hand, the speech obliquely affirmed the continuing need for a bomb described as “a weapon of genocide” by many of the Atomic Energy Commission’s own physicists, who opposed it; on the other, it expressed the hope that fissionable materials would “serve the needs rather than the fears of mankind.” The potential applications for radioisotopes in insect control, food preservation, manufacturing, medical and biological research were, it was claimed, unlimited. The only prerequisite was peace. The hydrogen bomb would guarantee that. An editorial in the New York Times acknowledged the dizzying contradictions implied by the Eisenhower program:

Mankind needs the power, the healing, the knowledge and other gifts that the divided atom can bestow upon us. . . . At the same time the world still lives in deadly fear of the use of atomic weapons in bombs or rockets or some other form to destroy cities and populations. A part of the human race—or a part of the human personality—is still savage and vindictive, whereas another part works magnanimously for the good of all mankind.34
While Eisenhower believed the H-bomb was necessary to guarantee American invincibility, he also gave his support, for a time, to the idea of placing the control of nuclear weapons in the hands of an international body in order to discourage arms proliferation by individual member states. Clearly these were mutually exclusive objectives and, indeed, military concerns soon trumped pacifist idealism. Nevertheless, the intentions of the early disarmament initiatives continue to this day in the International Atomic Energy Agency (IAEA), first proposed by President Eisenhower. Under the aegis of Mohamed ElBaradei, the agency played a central role in overseeing weapons inspection in Iraq in the period leading up to the U.S. invasion in 2003. The IAEA pursuit of “weapons of mass destruction” evokes many of the same terrors that were aroused by radioactive fallout in the 1950s. Both raise the fear of man-made agents of death in the hands of unpredictable political power. It was just this fear of unbounded power that the Atoms for Peace program was designed to allay. It sought to pacify the anxieties set off by
Figure I.1 The first day cover for the “Atoms for Peace” stamp, issued July 1955, drives home the idea of science—universally represented by an image of Albert Einstein—bringing benefits to civil society, here represented by advances in “cancer therapy at Brookhaven National Laboratory.” The overlapping images exemplify the idea behind the program, expressed in the quotation that runs along three sides of the stamp—“To find the way by which the inventiveness of man shall be consecrated to his life.” The words were taken from Dwight D. Eisenhower’s address to the United Nations on 8 December 1953.
Hiroshima and Nagasaki, not to aggravate them. But no orderly demonstration of its potential benefits could overcome the sense of atomic power as a force that remained out of harness, uncontrollable. Secrecy only reinforced the sense of latent danger. Atoms for Peace went forward with significant government support and public fanfare, across the United States and Europe.35 Though less clearly identified as a Cold War initiative, the underwriting of massive investment in the private sector served an important ideological function as well as an economic one. Its purpose was not only to boost economic recovery but to camouflage the more controversial use of atoms in nuclear weapons. The fact that the program relied on public funds was not seen as a contradiction of any free-market principles but simply as pump priming, necessary to get the ball rolling. To build momentum for the new initiative, Eisenhower offered to pay half the building costs of nuclear reactors abroad, in selected countries. By the end of the 1950s, his administration had signed thirty-eight bilateral agreements and approved the construction of thirty reactors.36 In the shadow of these more controversial projects, thousands of radioactive isotopes (like the cobalt used in the machine to treat Natanson) found their way into technical and often cost-saving innovations (such as material thickness gauges, instruments for quality control, and medical tracers). Collectively, they gave an additional boost to private engineering companies, many in transition from war to peacetime production. Major contractors that had participated in the development of atomic weapons (General Electric, Westinghouse, and Union Carbide, among others) were all beneficiaries of this change in outlook. Though the proliferation of isotopes would always remain in the shadow of public support for massive nuclear power projects, it would nonetheless have a significant impact on the shape of the postwar economy. As the use of nuclear by-products intensified, so too did opportunities for potentially hazardous accidental exposures. Workers responsible for the handling, storing, transporting, and disposing of radioactive materials raised alarms; they understood that the problems facing “atomic workers” at nuclear reactors and other remote government-run installations had now come to Main Street. And though the new processes depended on publicly licensed supplies of the radioactive source materials, regulation of their use was passing quickly from public into private hands. This created an entirely new set of problems for those charged with the protection of the public. Recognition of the wider hazards of radioactive
materials invariably followed the introduction of the materials themselves. Just as the AEC had failed to address the problems of fallout before the start of the nuclear testing program, so too did it ignore the consequences, for densely settled urban areas, of atomic energy in their midst. It fell to municipal and state authorities to make their own accommodations to the new intruders. They had to figure out how to cope with a fire, explosion, or related building collapse that might be caused by radioactive material used or stored on site. They also had to develop their own protocols for the registration and inspection of radioactive materials brought within their jurisdiction. By early 1960, the New York City Department of Health estimated that there were some 20,000 individual sources of radiation in the city, primarily medical and dental X-ray machines.37

In underwriting the transformation of atomic energy from military to peacetime uses, the federal government chose to limit its involvement and, hence, its long-term liability. Though the AEC initially retained control over the distribution of radioactive substances, it soon negotiated the transfer of licensing and regulation requirements to private-sector intermediates. And, with few exceptions, it placed no further restrictions on the environment in which the materials were to be used. Jettisoning what had originally been construed as public health concerns was seen as a necessary precondition for the market development of products incorporating the new isotopes, something the government was eager to promote. Official support for these new products implied official tolerance for nuclear hazards, however diluted, as a permanent feature of the postwar economy. What seemed to matter most was no longer the absolute safety and security of Americans, as it had been during the war, but giving an assist to American industrial recovery and expansion. If the policy ran the risk of exposing some Americans to radioactive hazards, that risk would now be considered insignificant.

This more permissive attitude toward nuclear by-products arrived on the scene just as Americans were becoming more aware of the complex repercussions of atomic bombs. As observers continued to track the fate of Hiroshima victims, it became clear that the effects of atomic energy might turn out to be as deadly in the longer run as they had been on the day the bomb was dropped. Casualties would, it turned out, include those succumbing to radiation-induced cancers as well as those killed immediately by the blast. The bombs had destroyed an abstract “enemy,” but disease struck individuals. While the potential scope of nuclear devastation was literally unimaginable, the personal experience of cancer was all too easy to relate to.
With the appearance of leukemias among Hiroshima survivors came a shift in the perception of the American victory in Japan. Yes, the United States had won the war but the second wave of Japanese deaths, the mounting toll of lives sacrificed to cancer, rekindled ethical questions about the use of atomic weapons. The victimization of the wholly innocent Marshall Islanders in 1954 following the detonation, by the United States, of a hydrogen bomb was yet another reminder that the costs of nuclear war were not always borne by enemies. When the same disease came home to roost in the United States—when cancers began to emerge among the many thousands of American witnesses to the Nevada nuclear tests—it began to acquire more ominous overtones. Sustained Soviet threats raised the prospect of a terrible chastisement. Cancer, in the minds of many, would become an agent of retribution, reenacted in every individual cancer diagnosis, adding the burden of personal guilt to the threat of mortality. Cancer had become a by-product of atomic weaponry, a seed sown in a mushroom cloud that could grow anywhere on earth in any season. Few Americans, however, were yet aware of this. Officials at the AEC hoped to keep them in the dark for as long as possible. If the link between fallout and disease were to become generally known, it would raise alarms that would, in turn, wreak havoc on the weapons testing programs on American soil. To allow testing to continue without undue interference required a public relations campaign that would throw the American public off the scent. The latency period between exposure to carcinogenic fallout and symptoms of malignancy was a gift to those in power. The long lead-time into illness, so characteristic of the disease, bought them time to experiment with human lives under a pretense of ignorance. Meanwhile, every additional weapon that was tested during this period added to the cumulative total of atmospheric radioactivity circling the globe. The first major report to the American public on the biological effects of atomic radiation, published by the National Academy of Sciences, did not appear until 1956, more than ten years after Hiroshima. Even then, it played down the consequences for those who had been directly exposed, taking a long-term view of the bombs’ potential harm.38 The focus was on genetic mutations that might be passed along to future generations by those of reproductive age. There was no public admission of any more immediate danger, no mention of genetic damage that might precipitate cancers within the natural lifespan of those who had been directly exposed. By projecting the emergence of mutations over the slower pace of evolutionary change, remedial intervention might be held at bay, postponed to some indefinite future.
The cultural response to the atomic threat followed the lead of the early reports, hewing closely to the expected genetic consequences of radioactive exposures. Mutants of every sort found their way into films, comics, and books.39 The supernatural powers of comics characters like the Incredible Hulk, Spider-Man, and the Fantastic Four all derived from accidental exposures to heavy doses of radiation.40 Such characters were as often endowed with powers for doing good as they were for doing harm, a phenomenon that reflected the ambivalence that radioactivity provoked. One thing they were all spared was radiation sickness—or cancer. The debilitating effects of disease were of course incompatible with the notion of superpowers, whether wielded by heroes or villains. While there were many scenarios of death and destruction in every form of cultural expression, even the last atomic survivors on isolated shores (in books like Nevil Shute’s On the Beach) remained healthy until the bitter end. Terminal representatives of the human race were going to die physically intact, not ravaged by disease. The absence of cancer from such narratives was a measure of its absence from the public imagination, a willed blind spot in the portrayal of atomic energy.

The evasive conclusions of the National Academy of Sciences report typified the inconspicuous position of cancer in the early investigations of atomic radiation. The report demonstrated what the antinuclear biologist Barry Commoner referred to as “the unequal pace of the development of physics and biology.”41 American scientists knew enough about nuclear energy to release enormous quantities of radioactive materials into the atmosphere but not nearly enough about the biological impact of these experiments on thousands of unsuspecting “subjects.” How much of this disparity was driven by the primacy of military over medical imperatives is hard to say. The biological effects of radiation (as of cancer in general) did prove to be extraordinarily complex. But there was no comparison between the scale of resources devoted to weapons testing and those directed toward scientific research. A year later, the first of many congressional hearings on the hazards of fallout summarized contemporary thinking on the subject in 4,000 pages of evidence.42 Among the report’s findings were estimates that fallout from tests that had already taken place would ultimately be responsible for 25,000 to 100,000 extra deaths from leukemia and bone tumors worldwide. These were big numbers but not big enough to deter atomic testing (a further twenty-six detonations took place in the four months after the hearings, as part of Operation Plumbbob).
Still, publishing the projected cancer figures was at least a move toward informing the public of the very real consequences of exposure to radiation. Americans now understood that the radioactive fallout that had done so much damage to foreigners halfway around the globe could do just as much damage at home. Fallout respected neither state nor national boundaries. If cancer could strike someone living in Troy, New York, where rain brought down a dangerous amount of fallout two days after a nuclear explosion in the Southwest, then no one was immune.43 Cancer from any source was as unwelcome to Cold War strategists as it was to affected individuals and their loved ones. The disease polluted the public image of the American family, universally portrayed as healthy in mind and body. Vitality on the home front was expected to mirror the robustness of the postwar American economy. The two, in tandem, presented the best possible answer to communism. But cancer spoiled that solidarity. It was not well behaved; it did not succumb to medical intervention as polio would by the mid-1950s, earning kudos for American science with the Salk and Sabin vaccines. Cancer offered nothing to boast about. So, like nuclear weapons in the context of the Atoms for Peace program, it was rarely mentioned; better to keep it under wraps, to draw attention away from the rising incidence of the disease and promote the therapeutic potential of nuclear science instead. The American Medical Association, falling in with the plan to accentuate the positive, insisted as early as 1947 that “medically applied atomic science has already saved more lives than were lost in the explosions at Hiroshima and Nagasaki.”44 Here is the upbeat side of cancer. The trumpeting of treatment did for cancer medicine what the nuclear arsenal did for the Cold War; it provided credibility. As an approach to disease, a method of attack, it would long outlive the Cold War itself. The unsavory intimacy between cancer and the Cold War—the intersection of disease and politics—is most starkly revealed in the history of postwar human radiation experiments. Undertaken in the shadow of the nuclear weapons program and running concurrently with the fallout controversy, they reveal the same Cold War pragmatism and the same compromised ethics. Most of these experiments were administered by defense agencies (the Atomic Energy Commission and the Air Force, among others) whose operations had been classified for decades. Knowing that secrecy would protect them, many prospective investigators hoped to seize on the opportunity provided by the availability of cobalt to simulate exposure to radioactivity
in real-life scenarios. The intention was to use whole-body radiation to reproduce conditions on a nuclear battlefield or at the site of a nuclear accident. Controlled exposures would, it was hoped, permit investigators to measure the human tolerance for radiation. It was the availability of cobalt radiotherapy that made some of these experiments possible. Having emerged from the wartime weapons program to begin with, it now found a way back to its roots. As a new therapy, it had been developed in a joint effort between scientists from the world of academic medicine in Texas and nuclear scientists at the AEC’s Oak Ridge facilities. The new department of radiology at the M. D. Anderson Hospital in Houston supplied the prototype equipment design, and the AEC supplied the radioactive isotope. But as the Cold War heated up and its ideologues were given more of a free hand (and access to unlimited funds), they were emboldened to make ever greater demands on the medical resources available to them. The purely scientific interest in the potential of a new cancer therapy soon gave way to the military’s more exigent demand for experimental “volunteers” to supply them with measurable information about the biological response to radiation. It is unlikely that these experiments offered any therapeutic value to their participants although they were almost certainly represented as doing so. In most cases, they relied on compliant subjects enrolled without their consent and with no knowledge of the experiment’s true purpose. Among the vulnerable groups drafted to serve were prisoners, veterans, mentally retarded children, and, of most concern here, terminally ill cancer patients. The boundary between the constructive and destructive uses of radiation was, in this way, breached yet again. Terminally ill patients who looked to radiation as their last hope often died prematurely after exposure to X-rays that exceeded any medically approved dosages then in use. Meanwhile, legitimate efforts to track the outcomes of radiation with the new cobalt were put on hold. The marketing of the new equipment, however, did not wait upon these results. Cobalt found its way into patient care without robust evidence of either its efficacy or its safety. Natanson’s experience would reflect the hazards of this premature transfer.

Details of these secret experiments did not become widely known until after the end of the Cold War. Those conducted with cancer patients turned out to represent just a fraction of a dark universe of experimentation involving thousands of Americans—civilian and military. Eileen Welsome’s The Plutonium Files, published in 1999, brought this larger story to light, identifying, for a popular audience, the full range of activities that made
such a toxic brew of military and medical collusion, stirred up by radioactive isotopes. Equally groundbreaking, Welsome also named many of the individuals caught up in early experiments with plutonium injections and traced their follow-up histories in depth.45 It was her earlier investigative reporting on these experiments that caught the attention of Hazel O’Leary, then U.S. secretary of energy, who authorized a major declassification program designed to reverse the department’s long-standing “culture of secrecy.” A year later, in 1994, President Bill Clinton followed her lead and mandated a specially appointed committee to undertake a review and evaluation of all the surviving evidence (the committee eventually estimated that at least 4,000 experiments had taken place between 1944 and 1974).46

Though kept well hidden, these experiments had a marked impact on the wider culture, and especially on the practice of medicine. Hospital staff, families and friends of the many thousands of participants, and military personnel all had been silent witnesses. How many doctors served both civilian and military masters, coordinating patient treatment on the one hand and nontherapeutic experiments on the other? How did they reconcile the demands of the Hippocratic Oath (“first, do no harm”) with those of national security (avoid at all costs whatever might be “prejudicial to the best interests of the government”)? The practice of withholding the truth from terminally ill patients was already well entrenched before the military mindset came to the cancer wards. Doctors routinely prevaricated—or lied outright—about a patient’s prognosis. The postwar research program now gave its official blessing to the practice, granting it a new lease on life, just when it might otherwise have begun to soften. This no doubt set back the arrival of greater openness by decades.

The human radiation experiments reflected a blinkered concern with the country’s ability to respond to nightmare scenarios affecting potentially millions of people. In the game plans involving Soviet attacks or accidents at American nuclear installations, the emergence of cancer, though officially denied, was, in fact, taken as a given, accepted as the inevitable price to be paid for the continued supremacy of the United States in a post-Hiroshima world. This was an opportunistic response that saw the disease only within the narrow context of nuclear war, where it was viewed primarily as a minor nuisance. In fact, though largely invisible to the public, the incidence of many cancers had risen steadily through the first half of the twentieth century (and the death rate more than doubled over this period).47 In disregarding
the truth (which was in many ways more troubling) and in suppressing the link between radioactivity and cancer, the official research agenda essentially downgraded interest in the causes of the disease. It was a topic best avoided, at least for the time being. Disregarding the disease’s complex causality would come to obstruct lateral thinking about it, discouraging, among alternative approaches, investigations that might have pointed toward prevention rather than treatment. In the context of the Cold War, prevention was in the hands of politicians, not scientists or physicians. The enduring impact of this bias on our approach to cancer has been overlooked. All cancers were equally blighted by this long-term entanglement with the Cold War. Before the 1980s, most Americans did not conceive of the disease as a vast collection of separate illnesses but as a singular evil that could take many forms and turn up anywhere in the body. Shaped by the Cold War imagination, cancer functioned as a monolithic killer whose fortunes were inextricably linked to those of a monolithic enemy. The nature of this relationship is easier to see fifty years later when it can be compared with the more recent intermingling of politics and disease. Today, there is no longer a single disease that is summoned to excite the public imagination just as there is no longer a single enemy. Instead, there are multiple disease perils such as SARS and avian flu, which feed the imagery defining multiple threats of terrorism of many stripes. In the place of the stable one enemy/one disease model (USSR/cancer) have come elusive menaces from no fixed address (and with no state backing). Like shape-shifting viruses, these morph into something new before American intelligence can catch up with them. In other words, because it can be made to resonate so closely with larger political fears, the dread of disease is as useful today in the service of antiterrorism as was the threat of cancer in the service of anticommunism. To the extent they were acknowledged at all, the cancers engineered— or accelerated—by government indifference during the Cold War were laid at the door of the Soviet Union. Geopolitics had itself become carcinogenic. Close to half a million Americans were exposed to excessive levels of radioactivity in the 1950s, as participants in or witnesses to the atmospheric nuclear tests or as “volunteers” in one of the secret experiments or as workers on government contracts (for example, in uranium mines or at nuclear reactors), all in the name of national security.48 It may be the legacy of this bad faith, at least in part, that lies behind the strenuous effort made over the past few decades to offload responsibility for cancer from the government onto individuals. Research turned to the exploration of “risk factors” in an attempt to identify who gets cancer rather
than why they get it. Public health campaigns emphasized personal responsibility for cancer diagnoses, citing the role played by alcohol, exercise, obesity, bad diet—so-called lifestyle factors—in the incidence of disease. They drew heavily on analogies to smoking, an activity whose established link to cancer is several orders of magnitude greater than that of any of the newer risk factors. If government is reluctant to intervene in the case of tobacco (which accounts for four of every five lung cancer diagnoses), it is hardly likely to curtail the production or marketing of any other product whose links to cancer remain much more tenuous. The emphasis on individual rather than collective responsibility successfully diverted attention away from the government’s own contribution to the elevation of cancer rates in the postwar period. In fact, for almost forty years government fought hard to deny culpability, refusing to compensate the victims of its own atomic energy programs. No compensation was ever paid to the families of cancer patients caught up in the secret experiments. But in 1990, after many false starts, Congress finally passed a Radiation Exposure Compensation Act designed to make peace with the downwinders. Fallout is expected to continue killing throughout the twenty-first century. The total number of fatal cancers caused by American nuclear testing has been estimated at 140,000.49 As a former AEC physicist lamented, after the end of the Cold War, “the greatest irony of our atmospheric nuclear testing program is that the only victims of the United States nuclear arms since World War II have been our own people.”50 All of their deaths were preventable, an uncomfortable fact that has inevitably sullied the government’s subsequent relationship to cancer prevention. Radiation-induced cancers were the first malignancies that Americans linked to what are now called “environmental hazards.” They serve here as a proxy for all cancers of suspected environmental origin. The story of radioactive perils together with the official response to them offers a kind of prequel to the rise of environmentalism. In the decades following nuclear weapons testing, revelations about substances like Agent Orange used in Vietnam and accidents like that at Three Mile Island repeatedly raised the same concerns, pointing to government complicity in the rise of cancer rates, its suppression of evidence confirming the etiology of disease, and, ultimately, its fear of financial liability. In every case, accepting responsibility for the health care costs and other damages arising from preventable illnesses would be prohibitively expensive as well as interfering with the country’s defense and energy policies. It would not take a conspiracy theorist to argue that the attempt to shift blame (and attention)
elsewhere might be dictated more by self-interest than by any scientific imperative. Conveniently for the government, the unleashing of literally thousands of untested synthetic chemicals in newly marketed products provided excellent cover. The radiation hazards connected with Cold War defense programs were literally overrun by a veritable tsunami of potentially carcinogenic substances added to the stream of goods and services in the postwar period. The complexity of chemical exposures and interactions to which we are all now subjected (their frequency, duration, doses, and so on) and the difficulties of disentangling them, one from another, have made it almost impossible to isolate individual or linked mechanisms that might be responsible for triggering malignancies—or other serious diseases. Not that anyone is looking. Having underwritten the private development of atomic energy in the first place, the federal government is hardly likely to legitimize research that might jeopardize its profitability. It has no axe to grind with what the former editor of the New England Journal of Medicine Arnold Relman has called the “medical-industrial complex.”51 And even if it did, the powerful lobbies for the chemical and pharmaceutical industries would block any excessive enthusiasm for research into suspected hazards. In fact, government often seems reluctant to intervene at all. It is willing to identify a small number of carcinogens but is loath to take any action to control them.52 It is much easier and much less costly simply to sidestep responsibility and let the “buyer beware.” That way, the solution to polluted air or water is not for government to eliminate them but for individuals to avoid them. Victim blaming, in other words, is cost-effective for the public sector as well as for its corporate backers. It concentrates attention on the consumer of hazardous substances rather than on their producers. The burden of proof shifts correspondingly. We are all, in a sense, Irma Natansons now.
Chapter 1
Double Jeopardy: Cancer and “Cure”
The cancer battlefield is littered with desperate remedies. And, at least until quite recently, every one of them was first tested on those who had little or no idea they were serving as guinea pigs. These men and women were doubly unlucky: first, to be diagnosed with cancer, and second, to have their diagnoses coincide with the introduction of a new and unproven therapy. Whether a genuinely new treatment or a recycled version of an existing one, it would remain at best experimental while patients put their bodies on the line to test the new hypothesis. Until evidence could be gathered and assessed, they were as much experimental “subjects” as they were patients. Their vulnerable status, however, was never formally acknowledged. It didn’t need to be. Before the advent of carefully monitored clinical trials, all decisions to modify prevailing treatment regimens were initiated by individual physicians or hospital departments. There were no national agencies mandating or even recommending treatments and therefore no established protocols for changing them. Doctors were independent agents, neither answerable to any higher authority in principle nor answerable to their patients in practice. Physicians could be exposed to new ideas through a variety of channels. They might attend lectures at scientific meetings of their local medical society or conferences in their specialty area. They might read about a promising new therapy in the medical literature. Or they might be exposed to a new therapeutic mindset by a recent staff appointment at their hospital.
Whatever the original spark of interest, the critical mass of evidence required to “seed” a therapeutic innovation differed from place to place and from time to time. Conservative doctors and institutions might move cautiously, waiting for corroboration from several clinical studies before moving ahead or continuing to resist the innovation even after evidence in its favor became irrefutable. Others might believe that to wait until all the results were in would be to deprive current patients of potential life-saving benefits. Still others might know nothing whatever of changes in the offing. Since medical fiefdoms remained local or regional in nature, there was little cause for friction between them, no matter how divergent their views. Each community could sustain a distinctive medical culture of its own as long as the markets for its services did not overlap with its neighbor’s. The history of breast cancer offers a good illustration of all these forces at work. It was not until 1977 that the National Institutes of Health (NIH) formally recognized the uneven distribution of treatment options across the country. In that year, it set up a Consensus Development Program to help overcome the many disparities in treatment from one place to another and from one type of institution to another. The NIH designated panels of scientists, physicians, and informed laypeople to review the records of medical procedures and technologies. Their brief was to ferret out practices that might still be in common use out of habit but that could no longer be justified by current scientific thinking. By recommending the retirement of ineffective procedures and confirming the effectiveness of alternatives, consensus development panels hoped to bring current treatments more into alignment with state-of-the-art medical knowledge. However, by the time the NIH stepped in, close to a million American women had already died of breast cancer in the twentieth century. Little is known of the individual experience of any of them. How much divergence there was in practice from the standard treatment regime (surgery followed by radiation) is unknown. Statistics covering the period focus narrowly on the measurable connections between individual procedures and rates of survival or death. For the most part, they say almost nothing about the broader pattern of a woman’s life following diagnosis, remaining silent on the compromised quality of her survival, the repeated bouts of therapy alternating with release on probation to “normal life,” the plateaus of remission, the abrupt intrusion of recurrence and the preoccupation with mortality. Before the 1970s, the full story of breast cancer as a chronic disease remains almost wholly undocumented.1 All we know for sure is that
women themselves had little influence over the course of treatments prescribed for them. What happened to them reflected the preferences and prejudices of their doctors. It depended as much on timing and luck—when and where they were treated—as on any more scientifically based criteria. The timing of Irma Natanson’s breast cancer diagnosis could hardly have been worse. She entered into the world of medical treatment in 1955 as a thirty-four-year-old housewife and mother of two in Wichita, Kansas. Her diagnosis coincided almost exactly with the entry of cobalt radiotherapy into the treatment arsenal. Just after the war, control of all atomic materials—including radioactive isotopes like cobalt—passed from military to civilian control, from the Manhattan Engineer District (better known as the bomb-building Manhattan Project) to the Atomic Energy Commission (AEC). The isotopes had been produced at Oak Ridge, Tennessee, by the nuclear reactor that had been built to separate isotopes of uranium for use in the bomb project. But the reactor also had the capacity to induce radioactivity in hundreds of different elements (including cobalt) by bombarding them with neutrons.2 Most of the experimental isotopes that resulted played no further role in the bomb project. But after the war, they would come into their own, undergoing a sea change that would strip them of their associations with destructive weaponry and repackage them as bold avatars of medical progress, targeting disease instead of enemy populations. Accordingly, a year after the peace in 1945, the AEC launched a campaign to put a new face on radioactive isotopes. To prepare the ground for their general acceptance as nuclear by-products, the commission mounted elaborate public relations initiatives that glorified the potential blessings of the atomic age. Exhibitions and road shows—“Main Street Meets the Atom,” “The Atom, Servant of Man”—exalted nuclear science and bred a utopian fervor that got well ahead of itself. Robert Hutchins, the chancellor of the University of Chicago, became an early convert and shared his enthusiasm with the American public. “The atomic city,” he wrote, “will have a central diagnostic laboratory, but only a small hospital, if any at all, for most human ailments will be cured as rapidly as they are diagnosed.”3 Riding this wave of expectation, the AEC moved to attract the interest of scientists and medical clinicians who were actually in a position to provide Americans with the miracle applications of isotopes that they had already been promised. In June 1946, the commission publicly announced that radioactive isotopes were ready and available for scientific research.4
Figure 1.1 In the first photo opportunity for radioactive isotopes, Dr. E. P. Wigner, director of research at the AEC’s Clinton Labs (in the dark suit), hands over a millicurie of carbon-14 to Dr. E. V. Cowdry, director of the Barnard Free Skin and Cancer Hospital in St. Louis (in white). Though just a minute speck, this first transfer of a radioactive isotope from government to private hands was described by the New York Times as “a momentous milestone.” In another break with tradition, reporters were for the first time invited into the building where the nuclear reactor, once involved in bomb-making activity, was now producing radioisotopes. It was here that the public ceremony took place, on 2 August 1946.
National Archives photo no. 111-SC-250295.
The Atomic Energy Act of 1954 hoped to consolidate the promotion of isotopes that had begun eight years earlier. Up until then, government had played the lead role in the development and deployment of all radioactive products. From the late 1930s, for instance, the National Cancer Institute had operated a radium loan program, distributing tiny amounts of the precious element, earmarked exclusively for the treatment of nonpaying patients in selected hospitals around the country. The government always retained ownership of the radium and kept a tight rein on the administration of the program.5 Now it was pursuing a different tack, inviting the private sector to become more involved in setting the research agenda for the
future, and providing significant enticements to attract its interest. The new legislation explicitly encouraged the use of radioactive isotopes “to promote world peace, improve the general welfare . . . and strengthen free competition in free enterprise” (italics added). The congressional directive had the desired effect, unleashing considerable research and development activity among private-sector engineering firms in a scramble to get new products onto the market. The emergence of commercially produced cobalt-60 machines, therefore, not only marks the introduction of a new therapeutic material; it also embodies a shift in the nature of government/industry relations. From now on, the public sector would retain a largely regulatory function, licensing the use of most radioactive materials but otherwise playing more an enabling than a controlling role in the dissemination of atomic energy. Government did what it could to encourage industrial competition, by withdrawing from arenas it had previously dominated. It also helped to spur activity by introducing advantageous pricing policies for materials still produced in government-sponsored nuclear reactors.

Cobalt-60 was one of these favored substances. Originally developed in Canada, it was approved for use in the United States by the Atomic Energy Commission in 1951. The isotope offered several advantages over radium, the primary source of X-ray therapy then in use. First, its beams were much more powerful. They could penetrate more deeply into the body with less scatter and so inflict less injury to skin and surrounding tissue. Cobalt was more convenient and safer to handle than some other isotopes like radon gas because it could be shipped as an encapsulated metal. Equally important, it was significantly less costly and more available than radium. It was also easier and much cheaper to prepare than many other isotopes and could be produced in the quantities necessary to warrant private investment in machines that could not be sold without it. Its one drawback was its relatively short half-life of 5.3 years compared to radium’s 1,600 years.6 Shields Warren, the director of the AEC’s division of Medicine and Biology, said that “compared with cobalt, radium was as ‘outmoded as a Model T Ford’ in the treatment of cancer.”7 The amount of cobalt radiation that could be administered was governed by the tolerance of tissues lying five millimeters below the surface of the skin. But there was no way of determining what those tolerance levels would be without the critical feedback provided—unknowingly—by the first generation of patients. Natanson was a member of this involuntary vanguard, one of the first patients in Kansas, if not in the country,
to undergo cobalt treatment. St. Francis Hospital in Wichita, where Natanson was treated, purchased its first cobalt machine in January 1955.8 Natanson was only the twelfth or thirteenth cancer patient to undergo treatment with the new machine. She had a radical mastectomy on May 29, 1955, performed by Dr. Leo Crumpacker, a general surgeon. Just a week later, she found herself in the office of John R. Kline, the head of the radiology department at St. Francis, whom Crumpacker had recommended. Of course the two men knew each other; they were professional colleagues. Both also served as members of the Cancer Committee at the Sedgwick County Medical Society. Crumpacker recommended treatment with cobalt even though tissue analysis revealed no lymph node involvement (thirteen nodes had been tested). He also recommended the removal of her ovaries and fallopian tubes to reduce the amount of circulating estrogen in her system.9 The results of this procedure showed no spread of cancer to these organs. Natanson, it appeared, was on the road to recovery. Her husband, Edward Natanson, described her condition in early June 1955:

Mrs. Natanson at that particular time was very, very well. She had gone through the two operations and had made a very, very fine recovery. She was able to use her arm because of the therapy; she had almost the complete use of the left arm again. The breast had healed fully. There were actually no scars—just the one large scar but there was a thickness there. We were living a very normal life after the big scare we had.10
In other words, given no indication of any metastasis and the apparent health of the patient, there was no emergency calling for immediate attention, hence, no reason to rush into further treatment. Nevertheless, Natanson’s surgeon recommended cobalt radiation therapy as a “precautionary” measure. And Natanson agreed to it. It’s impossible to know what she understood of this treatment—or of radiation and its effects in general. As a young adult in 1945, she would certainly have been aware of the widespread radiation sickness following the U.S. bombings of Hiroshima and Nagasaki. And she almost as surely knew something about the atomic tests carried out in the early 1950s at the Nevada Proving Ground (there were fourteen such detonations in 1955 alone in Operations Teapot and Wigwam). But like most Americans, Natanson would have known little if anything about the hazards of radioactive fallout associated with those tests. A Gallup poll conducted in 1955 estimated that only one in four Americans knew what fallout was.11
Just a few months earlier, a slew of stories had appeared in the popular press in response to the delayed release of a statement on fallout from the Atomic Energy Commission. Designed to allay the fears of the American public, the statement insisted that radiation exposures remained “far below that needed to produce any detectable effects.”12 All the coverage of this release—in Life, Time, McCall’s, and other national media—accepted the official interpretation and reported accordingly. A story in Newsweek claimed that radioactivity from the atomic tests was “harmless to those now alive.” The New York Times quoted an AEC commissioner who believed that the bomb tests held “no immediate hazard.” There was not a single mention of cancer in any of the stories although Time magazine did raise the prospect of leukemia, a disease that, in the 1950s, the public did not yet associate with cancer.13 Over the course of the decade, Americans, little by little, began to gather some of the pieces for a more complex understanding. Although a fuller picture would not emerge for decades, there was a slowly growing awareness, even in the 1950s, of the potentially far-reaching consequences of atomic power. This was evident in every medium from serious journalism to light fiction (opera would have to wait until the end of the Cold War).14 If Natanson was a fan of Agatha Christie, she might have read her 1954 mystery Destination Unknown, in which two English women discuss the cobalt bomb over knitting. “Cobalt,” one of them muses, “such a lovely color in one’s paint box and I used it a lot as a child; the worst of all, I understand nobody can survive.”15 Radiation, readers from every country were beginning to discover, could travel huge distances, carrying its toxicity with it. One Kansas resident wrote to the AEC to ask whether the tests might explain the elevated levels of radiation that had been found in Wichita. The AEC, in reply, spouted the approved party line; the “increases in activity above background” that had been detected in the Wichita area were “of no significant hazard in terms of health.”16 Did local stories like this raise any alarms with Natanson? Did she make any connection between the radioactive pall that nuclear tests cast into the atmosphere and her own exposure to radioactivity? It is unlikely. But she might well have known something about the use of cobalt as a new treatment for cancer even if she didn’t make the connection with its darker side. Articles heralding the arrival of cobalt began to appear in the late 1940s and early 1950s in popular magazines. “Cobalt for cancer” announced Newsweek in 1948, followed, a few years later, by “C-bomb halts cancer!” in Coronet and similar articles in Reader’s Digest, Look, and other magazines.17
Figure 1.2 Cobalt-60 teletherapy. An early machine used in the treatment of cancer at the AEC’s thirty-bed hospital in Oak Ridge, Tennessee. Photograph provided by National Nuclear Security Administration, Nevada Site Office, Nuclear Testing Archive.
Whatever Natanson understood of her impending treatment, she was given no time to think about it one way or the other, starting cobalt therapy the very next day after meeting with Dr. Kline (her treatment lasted from June 6 to July 22). About halfway along, she began to experience pain in her ribs. An ulcer under her arm at the site of the surgical incision became quite painful and drainage from it increased alarmingly. As time passed, the wound area failed to heal. Her condition grew steadily worse and more painful. Kline told her she would have to undergo skin grafting to heal the wound and handed her on to two local plastic surgeons, Doctors A. E. Hiebert and H. W. Brooks. They were quite familiar with radiation injuries. “In comparison with the huge number of treatments given with roentgen and radium,” they wrote in a professional journal, “the actual total number of injuries is relatively small. To the plastic surgeon who sees these patients, however, the number seems large.”18
In early October, Natanson entered the hospital for what would turn out to be the first of an extensive series of surgical procedures. On the first occasion, the surgeons removed dead tissue and covered the area with a modest patch of skin taken from her hip in a procedure called a split skin graft.19 Three weeks later, Natanson went home in what her husband described as a state of “horrible pain.” The graft very quickly sloughed off. In early December, she reentered the hospital for a major skin graft with a pedicle flap. This time, the surgeon cut a larger and thicker flap of skin from Natanson’s back and, without severing its blood supply, pulled it in place to cover the ulcerous wound. But things did not improve. This second graft also failed. Areas that had previously healed began to break down. In other words, the damaged area had grown larger and more resistant to treatment. Her husband recalled life for his wife as “a constant twenty-four-hour-a-day period of being in agony” with her condition “growing relatively worse day by day.” The Wichita doctors agreed. After several consultations, they decided that Natanson’s care should be transferred to a renowned plastic surgeon 400 miles away in St. Louis, James Barrett Brown. Based at Washington University School of Medicine, Brown had pioneered the provision of high-quality plastic surgery for soldiers injured in World War II. He had also been involved from the very beginning in the treatment of what were called “atomic injuries”—radiation burns suffered by workers involved in the development of atomic weapons. Brown wrote a report of the surgical treatment of what he deemed to be the very first group of patients “with known atomic injury (without thermal injury).” The victims had been stationed at Eniwetok atoll in the Marshall Islands, the Atomic Energy Commission’s proving grounds where nuclear tests had been prepared and detonated in the late 1940s (the islands had been captured from the Japanese in 1944). The work that Brown described did not need to be classified. His subjects were not being exposed to further radiation but seeking treatment for damage already done. He was therefore in a position to publish his results openly in the medical literature. The “Report of Surgical Repair in the First Group of Atomic Radiation Injuries” was just one of several similar articles authored by Brown that appeared in the 1950s.20 Their publication would inevitably enhance his reputation as a surgeon with state-of-the-art knowledge of radiation injuries. Dr. Brown would be an obvious choice for a patient like Natanson in a “First Group” of her own. He would be more familiar than most plastic surgeons with the nature of her trauma.
On January 1, 1956, when Natanson was “ready to die . . . completely out of her head and completely doped,” her husband hired a private single-engine Cessna from the Air Ambulance Service and, accompanied by a registered nurse, flew his wife to Dr. Brown in St. Louis. There she remained until the following September. Brown believed that the dead pieces of cartilage and bone had to be removed before healing of the remaining soft tissues could begin. He called in a chest surgeon who performed five operations to cut away about half of several dead ribs as well as dead muscle, nerve, and cartilage. Brown himself then carried out skin grafts to cover the chest, lung, and heart. These coverings were admittedly thin and left the patient in a highly vulnerable state. Brown estimated that Natanson would need as many as a dozen further procedures to accomplish a heavier and more durable covering over her chest. The nurses and doctors looking after Natanson in St. Louis remembered that despite an “unusually high dosage” of Demerol taken every two hours, she lived in a state of perpetual torment. “As burn cases go,” one of them remarked, “Mrs. Natanson was in great pain at all times. . . . She just had it and it was there constantly, and she endured it better than a lot of people that I have known. . . . The odor was terrific. It was a definite burn odor, and if anybody has ever smelled a bad burn odor and of infected tissue, Mrs. Natanson had it.” When family members or friends were expected, Mrs. Natanson “would ask me not to let the visitors come in because they would make her so nervous, and she would scream and cry when they were here and the visitors would walk out.” Dr. Brown volunteered that “both the husband and the wife had been very cooperative . . . we noted it as being a gallant effort on her part.”

After almost continuous treatment for sixteen months, the aftereffects of the “cobalt bomb” left Natanson with what threatened to become permanent disabilities. These seriously compromised her health and the quality of her life, denying her any hope of a return to normalcy. She had suffered massive burns that caused enormous pain and infection, smelled bad, and refused to heal. She had lost the use of her arm and the use of her left lung, which gave rise to chronic wheezing. The emotional toll of this protracted ordeal on Natanson, her husband, and her young children is, of course, incalculable. What could be calculated was the steadily rising tide of medical bills and related expenses. By 1955, roughly two-thirds of Americans had some form of health insurance. But decades before the advent of comprehensive coverage, reimbursement was commonly limited to hospital and/or surgical costs (with each often
supplied by a different provider, for example, Blue Cross for the former, Blue Shield for the latter). Typically, for those who were insured, benefits covered about a third of all medical expenses. In 1950, Americans spent, on average, $520 (in 2007 dollars) for medical care. Edward Natanson put the out-of-pocket costs of his wife’s nine-month hospital stay in Missouri (including multiple surgeries and round-the-clock nursing care) at roughly $155,000 (in today’s dollars). His travel and hotel expenses added almost $20,000 more to the total.21 Given the open-endedness of his wife’s ordeal and the prospect of further surgery down the road, financial worries could easily have been a constant concern.

It’s not known who first proposed taking legal action to sue for damages. Clearly, the burden of a courtroom drama would fall heaviest on Irma Natanson herself. The taboo against the public acknowledgment of breast cancer had yet to be challenged. Even if no disfigurement or disability had been involved, public exposure would have been a source of great shame. Natanson would be forced to testify, violating the prohibition to suffer in silence. Worse for her, she would have to relive, in heartrending detail, the trauma that still remained at the center of her life. Despite these obstacles, she pressed forward. From some extraordinary reserve of resilience and determination (and perhaps even a sense of unspoken outrage), Irma Natanson, with her husband’s support, made the decision to go to court, filing suit in June 1957. “As the direct and proximate result of the wrongful conduct of defendants,” the petition read, “plaintiff has been subjected to excruciating pain, suffering and discomfort and protracted hospitalization and painful surgical procedures. She has been permanently and dangerously maimed and there is now no way by which her left lung and heart can again be protected by bone. Her left lung is still collapsed and may be permanently useless. She is permanently and irreparably disfigured.” Natanson, the petition further charged, might have been spared this outcome altogether if the defendants had warned her of the treatment’s potential consequences. She had been “entitled to a reasonable disclosure by Dr. Kline so that she could intelligently decide whether to take the cobalt irradiation treatment and assume the risks inherent therein, or in the alternative to decline this form of precautionary treatment and take a chance that the cancerous condition in her left breast had not spread beyond the lesion itself which had been removed by surgery.” Dr. Kline had denied her this choice by failing to inform her of the hazards of cobalt radiation.
Kline and his hospital were, in essence, being charged with two kinds of negligence. The first conformed to the intuitive understanding of the term: the failure to provide reasonable care to another which results in an unintended injury. But the second, the failure to warn a patient of the hazards of a medical procedure (that is, to solicit what would come to be called the patient’s “informed consent”), is more an error of omission than of commission. In the days before the idea of informed consent was widely known or understood, it was hard for most people to grasp that the failure to solicit a patient’s consent could be construed as a kind of negligence, a legitimate injury in itself comparable to actual bodily harm. The jury in the Natanson trial would demonstrate the difficulty that midcentury Americans still had with the idea of thoroughgoing self-determination that was embedded in this important concept.22
Chapter 2
The Court Considers Informed Consent
Less than a year after the introduction of X-ray machines at the end of the nineteenth century, the first reports of radiation injuries began to appear. Of course, at the time, no one had any idea that the new invention posed a hazard of any kind, so no one anticipated the need to regulate its use. Virtually anyone could set up shop with the newfangled machines and offer any treatment likely to attract a clientele. There was no limit to experimentation, most of it aimed at conditions that were essentially benign. Inevitably, as X-rays were applied to an increasing assortment of complaints (acne, ringworm, unwanted hair, among many others), opportunities for injury multiplied. The earliest report of a cancer appearing after prolonged X-ray treatment (for an ulcer) appeared in 1902.1 A decade later, another study documented ninety-four cases of cancers that were believed to be the result of exposure to ionizing radiation.2 Lawsuits for damages soon followed. In the decade preceding the Natanson trial, there were fourteen malpractice suits involving “X-ray and radium burns” that reached the higher courts.3 These fourteen cases represent just a tiny fraction of the total number of lawsuits that were originally filed. If, as has been suggested, only one in every hundred malpractice suits ever goes to appeal, then the fourteen on record may represent more than a thousand others. Perhaps surprisingly, women were the plaintiffs 70 percent of the time. If Natanson was not the first woman to take legal action following radiation injury, she was probably the first (and possibly the only one) to file a lawsuit following cobalt therapy. A few other women joined her in
claiming negligence by doctors treating them for breast cancer. Two had undergone unnecessary mastectomies (one turned out to have had no cancer and the other to have had lymphoma).4 Although these cases were rare and the women who initiated them motivated by personal rather than political concerns, with hindsight they suggest the fitful evolution of a sense of agency among American women. In the 1950s, women had no sense of their special interests being either recognized or represented by any group with the power or influence to help them. Particularly as plaintiffs, the actions they took were individual, not part of any larger campaign. Nevertheless, fifty years on, it is worth revisiting these cases in the light of postwar feminism, still a decade away. The court documents that survive provide a rare glimpse of early struggles to alter the dynamics between women patients and their doctors, however limited the framework or unsatisfactory the outcome. Physicians caught up in malpractice suits often have difficulty with the adversarial approach to justice. The winner in court is the legal team that crafts the more convincing of two competing narratives. To many physicians, this seems a far cry from the scientific method, which, in theory at least, depends more on hard evidence than on plausibility. The habit of stretching the evidence to fit whatever legal framework worked best was not something doctors were at home with. This was the legal equivalent of an empiricist error, that is, working backward from the data to the hypothesis. But doctors serving as medical experts had to be brought into line; their testimony was often critically important to the outcome of a case. So their courtroom appearances had to be carefully rehearsed and their medical mindset temporarily disabled. Like any expert witnesses, doctors had to swallow their objections to the way questions were framed and to learn to give answers that buttressed legal rather than strictly medical arguments. The peculiar nature of malpractice law could create as many problems for members of the jury as for medical experts. Alice Hamilton, a pioneering physician in industrial medicine who had plenty of experience of her own as an expert witness, understood how hard it was to make sense of medical evidence. What jurors needed were coherent stories. What they got was often a jumble of fragments, hard to reassemble into an intelligible narrative. The stylized nature of questioning with all the repetitive strokes, feints, and jabs of medieval jousting could leave witnesses feeling pinned to the wall and bloodied. “The laws of evidence,” Hamilton wrote in Harper’s magazine, “require that the simplest story be interrupted, chopped into bits and messed up till both the witness and jury are confused.
What is essential to the story must be suppressed, for mysterious reasons; what is simple must be made endlessly complicated.”5 The legal case filed by Irma Natanson lived up to all of Hamilton’s expectations. Natanson v. Kline was a landmark in many respects. For perhaps the first time in a public arena, the case exposed, quite literally, the horrific damage done to women’s bodies by prevailing breast cancer treatments. Natanson’s attorneys proposed to reveal to the jury the full extent of her injuries by uncovering the plaintiff herself. The judge closed the courtroom to the public, drew the curtains, and, according to one of the attorneys present, Irma Natanson “bared herself to the waist and all could see the injury . . . the ribs had been destroyed by the radiation and the best that [reconstructive surgery] had been able to accomplish, was to have a layer of skin over the opening. I recall seeing the beat of the heart reflected in movements of this skin flap.”6 The distress this caused Natanson is hard to imagine, but there could have been no more compelling evidence for the jury to consider. What woman would knowingly submit to treatment that would so harrow and mutilate her? Natanson’s legal team understood that radiation injuries were notoriously difficult to prove. The difficulty lay in the nature of the treatment itself. X-rays were expected to destroy cells. That was the point of using them. It was very hard to draw the line between malignant cells that were their intended target and normal cells close by that were not. If radiotherapy was to be permitted to do its job properly, some degree of injury was almost inevitable. Added to this problem was the public’s reluctance to discipline physicians, no matter how egregious their behavior. In an earlier malpractice case brought in Kansas in the 1930s, a doctor had failed to remove radium beads from the uterus of a woman he was treating for cancer. Leaving the radium in place exposed her to dangerous levels of radiation that eventually killed her. The doctor lied about his negligence. Nevertheless, despite acknowledging fraudulent concealment, he was exonerated on the grounds that, by the time his patient had died, the statute of limitations had run out.7 The legal firm defending the physician in this 1936 case served as counsel to St. Francis Hospital in Natanson v. Kline more than twenty years later. The firm, in other words, was not new to the field of medical malpractice.8 It had been counsel to the Sedgwick County Medical Society in Wichita since the early days of the twentieth century. It also represented several insurance companies, including the Medical Protective Company, the first and for many years only medical malpractice insurer in the area
and presumably the source of whatever insurance Kline carried. Mr. William Tinker Sr., the lead counsel for the hospital, was one of the earliest attorneys in Wichita to concentrate his trial practice on medical malpractice defense work.9 In addition to the hospital’s lawyers, Kline’s own attorney, William Kahrs of the firm Kahrs & Nelson, was also an experienced malpractice lawyer. Natanson clearly faced an uphill battle. Kline’s lawyers mounted an effective—and extremely lengthy—attack against the charge of negligence. They marshaled expert witnesses to blind the jury with the “science” of radiotherapy. Their testimony spelled out in agonizing detail the complexity of the process, validating, step by step, every decision Kline had made. Minute descriptions of the abstruse mathematical formulas used to compute doses, angles, and distances were supplied by hospital staff physicists with PhDs. The exactitude of these calculations was expected to dispel any notion that Kline and his team tolerated any but the narrowest margins of error. Testimony was dense with unrelenting, arcane jargon. The overall impact of this assault must surely have been intimidating (if not annoying) to a layperson, especially in an era when cancer treatments were familiar only to those with personal experience of them (had prospective jurors been questioned about their health and those with cancer histories dismissed?). Running concurrently with this trial in the court of public opinion was the larger controversy about the hazards of radioactive fallout. Here, too, scientists hoping to play down the links between radiation and cancer were able to exploit the gap between the lay and professional understanding of an elusive and still poorly understood process. Eminent scientists on both sides of the debate relied upon the same evidence to make their arguments, whether, like Edward Teller, a physicist with the Manhattan Project and a great enthusiast of nuclear weapons, they hoped to minimize the cancer risks or, like Linus Pauling, a chemist at the University of California, they raised alarms about possible long-term effects of exposure. (The dynamics of this debate over the course of the Cold War are discussed in more detail in chapters 5 and 6.) The defense in the Natanson trial further complicated matters for the jury by alternating the line of questioning designed to exculpate Natanson’s treatment with testimony calculated to point the finger at other culprits. They hoped to persuade the jury that Natanson’s injuries were not the direct result of cobalt treatment but rather the result of either a poor blood supply or infection or cancer that had spread to surrounding tissue. Top of
the list of suspects that might have triggered these complications was the radical mastectomy itself. The defense could not call upon Dr. Crumpacker, the surgeon who had performed this procedure on Natanson; he would hardly be likely to criticize his own work. So it was left to Kline, not a surgeon himself but a colleague of Crumpacker, to raise the idea of surgical damage. When he was questioned about the impact of surgery on the blood supply to the affected area, Kline stated that it was “markedly impaired.” Asked whether this result was in any way unusual, Kline stated it was not, adding that, in fact, “the better the mastectomy the more the blood supply is damaged.” If an impaired blood supply could account for the dead bone and cartilage in Natanson’s upper body, then so too could other things. Chronic or runaway infection could do serious damage. Undetected cancer that had spread to the bone could account for tissue necrosis that would ultimately destroy the ribs and cause them to crumble. The more alternatives the lawyers could throw out, the more doubt they could sow. How could jurors know for sure which of these explanations was the correct one? Ultimately, they could not. Although the harm done to Natanson’s body was beyond dispute, the source of that damage could not be verified by lay jurors who lacked the confidence to make the necessary fine distinctions. Natanson’s lawyers could find no way around this difficulty. They could find no expert of their own to refute or at least put a dent in the wall of certainty presented by the defense. Because the cobalt machine was so new, there were few people who were in a position to argue the case. There was little published evidence of tolerated doses gleaned from clinical studies to bring before the jury—and they did not find what little evidence there was, buried as it was in esoteric professional journals. The first cobalt-60 therapy unit in the world had, in fact, been set up for use less than ten years earlier, in October 1951, at the Saskatchewan Cancer Clinic in Canada. A second unit was established a bit later in London, Ontario, by a graduate of the Saskatchewan program. It was to this installation, in the Institute of Radio Therapy, that John Kline, already a board-certified radiologist, was sent for several weeks’ training in 1954. In September 1955, just after Natanson finished her own treatment, the Canadian pioneers reported on the results of the first four years of the cobalt machine in use.10 Included in their study were seventy-four patients with breast cancer. Of the total, almost half experienced noticeable side effects from the treatment, particularly in their lungs; after four years, symptoms had cleared up in only eight of them. As a result, the researchers
revised downward their recommended dosages, to levels below those that had been administered to Natanson. More striking, however, is the difference between Canada and Kansas in the selection of patients. The policy at Saskatchewan was “to irradiate the axillary and supraclavicular areas of the involved side only if metastatic lymph nodes are discovered at radical mastectomy” (italics added).11 Natanson, whose thirteen axillary nodes showed no signs of cancer, would, in other words, never have been a candidate for this experimental therapy in Canada. The fully loaded cobalt-60 machine in its purpose-built underground room represented a substantial investment (an outlay of about $150,000 in 1955, equivalent to more than a million dollars today). The machine’s continued occupation of valuable hospital space and the employment of staff dedicated to its operation and maintenance, beginning with the radiologist himself, could only be justified by regular and demonstrably effective use. This required a steady diet of patients and smooth operational procedures. To demonstrate the new machine’s success to the wider medical community, researchers needed statistically significant patient samples—and results. Radiologists like Kline were very much at the center of the dilemmas posed by the introduction of new cobalt technology. They were operating in a gray zone with very little outside support. And their services were a tougher sell. Understandably, prospective patients were wary. The radiologist could promise that treatment would be painless but could guarantee little else. A decision to go forward had to be made more on faith than on evidence. Given the mystery of the whole process—it could neither be seen nor felt—the identity of the new radioactive material at its heart would be of only minor concern to a patient who had had no prior experience of radiotherapy and no prior associations, positive or negative, with cobalt. The radiologist working with the new machine was, therefore, caught between a rock and a hard place. He had plenty of incentives to be persuasive but few to be truthful. Under pressure from all sides to bring in the patients, the “consumers” of the new therapy, he had little solid evidence to offer them. He might genuinely believe that cobalt therapy offered a breakthrough in treatment but couldn’t be sure unless he had the numbers and results to prove it. In the process of finding out, he might inflict considerable harm on patients who trusted him. But if his scruples got the better of him and slowed down the intake of patients, he might jeopardize his
own employment. He knew his prospective patients were biddable. They arrived in his office in a vulnerable state, still reeling from their cancer diagnosis and recovering from surgery. Irma Natanson may have been among the more self-possessed of Kline’s patients, and even she had been an easy mark. Extending the use of cobalt radiation to patients like Natanson, who had no signs of regional spread, was entering territory for which there was little supporting evidence. With almost no prior experience of cobalt and few experimental results to guide them, staff members were essentially conducting ad hoc clinical trials without control groups. The earliest patients were truly guinea pigs. Their experience of treatment would be fed into studies designed to evaluate whatever significant differences arose between radium and cobalt therapies. Of course, none of these patients was aware of playing this role; none had formally consented to participate in any trial. But even if Natanson’s legal team had known all about the provisional nature of her treatment and had as well been familiar with the results of the early Canadian studies, they still might not have been able to put this evidence to work on behalf of their client. What they needed was live, corroborating evidence from an American radiologist they could question on the stand. But they knew it would be virtually impossible for them to find any physician willing to testify on behalf of their client. The prohibitions against betraying a “brother practitioner” were just too strong. When a “keen young radiologist” wrote to a radiation expert in the 1930s, suggesting that he was about “to testify against one of his competitors” in a malpractice suit, the expert minced no words in his reply. “I feel sure that you think too much of your standing in the various societies,” he wrote, “to even consider appearing against any regular physician in a malpractice suit. We cannot be too careful about this, as we do not know how soon we may have to have similar aid from our fellows.”12 This was no idle threat. Some medical societies actually punished physicians willing to testify against a colleague, revoking their memberships or, worse, their licenses to practice. One physician in Nevada in 1956 was sued for “unprofessional conduct” by his own medical board because he had publicly criticized the competence of some of his medical colleagues. The extremity of this “brotherhood” behavior was well recognized by the courts, condemned as “a shocking unethical reluctance on the part of the medical profession to accept its obligations to society and its profession in an action for malpractice.”13
The prevarications of Natanson’s first plastic surgeon, Dr. Hiebert, on the witness stand, illustrate the dilemma. In a letter to the St. Louis surgeon James Barrett Brown introducing the details of the Natanson case, Hiebert and his partner Dr. Brooks admitted that “this was the first cobalt burn we have treated.” But he also revealed that their problem had been “not only treating the patient” but also “to not emphasize too much the roentgenologist’s [radiologist’s] part in this,” that is, not to admit the true cause of the burns. However indirectly put, the message was clear; the maintenance of professional solidarity required a certain economy with the truth. Even though the letter was entered in evidence at court, Hiebert on the stand managed to squirm his way out of difficulties, disavowing his contribution. When asked for his original diagnosis, he replied, “Ulcer of the left chest.” When pressed further for the cause of this ulcer and its likely prognosis, Hiebert answered with more evasive words: “Well, I could not verify any statement on this because I am not a surgeon and not a pathologist, and so I wouldn’t want to stick my neck out too far on something that is out of my direct field.” Still, the plaintiff’s lawyer pressed on: “Will you look at your file and see what you wrote down on your diagnosis?” Dr. Hiebert: “Well, I didn’t write down a regular diagnosis. I will read what I do have. ‘Ulcer of the left axilla. Said to have followed x-ray therapy post-operative radical breast surgery.’” Said to have followed. As though the diagnosis were corridor gossip that Hiebert had caught wind of second or third hand. But Hiebert’s perseverance paid off. In the absence of any countervailing “expert” evidence, the diagnosis of radiation burns was trampled on whenever it cropped up. Every time a witness for the plaintiff used the phrase “irradiation burns,” the defendant’s lawyers objected and asked that the words be struck from the record. Ultimately, the carefully crafted narrative recounting the meticulous preparation and delivery of Natanson’s radiation treatment grew in stature as it was replayed on the stand by several key witnesses. The cumulative weight of this story eventually pushed aside whatever reservations jury members may have had. Emphasizing the details of actual treatment crowded out the inadequate attention that had been paid to Natanson’s prospective treatment (the failure to warn her of possible risks beforehand). Kline’s defense convinced jurors that the medical team at St. Francis’s had known precisely what it was doing and had gone to painstaking lengths to provide Natanson with exactly the right treatment, not one roentgen more
or less. On December 16, 1958, the jury returned its verdict; it found Kline and St. Francis Hospital not guilty of the charges brought against them. Within three days, Natanson’s lawyers filed a motion for a new trial. The request was denied. An appeal soon followed specifying various trial errors. This was heard in early 1960. The opinion for the Kansas Supreme Court, filed April 9, 1960, was written by Justice Alfred G. Schroeder. A 1941 graduate of Harvard Law School, Schroeder was the last justice elected to the Kansas Supreme Court (in 1956). Two years later, Kansas voters approved a constitutional amendment that established a nonpartisan selection process for appointing justices. Schroeder’s earlier tenure as a District Court judge in Harvey County had been controversial. He had presided over several hearings and motions in litigation involving the right of the City of Wichita to drill water wells in an underlying aquifer. Schroeder made a number of rulings from the bench in favor of the farmers. This did not endear him to the towns that depended on the aquifer for their water supply. Nor was everyone happy with the participation of the same farmers in his subsequent election campaign. Nevertheless, he remained a respected Supreme Court justice for thirty years.14 In the Natanson appeal, Schroeder, in his opinion, ruled that the trial court had, in fact, committed serious procedural errors that warranted another review of the case in court. Mistakes had been committed “in the matter of instructing the jury.” First of all, negligence charges had been brought against Kline alone rather than extending to all the professional and technical staff he supervised in the case. This was critical since Kline, like most other radiologists, had no understanding of nuclear physics himself and was therefore incapable of determining the accuracy of computations prepared by others. Dr. Darter, the physicist who made all the key calculations, had had quite limited experience of cobalt radiotherapy. The machine used in treating Natanson at St. Francis Hospital, installed just four months before her arrival, was the first he had ever worked with. (In fact, the installation was still not complete at the onset of Natanson’s treatment, missing its rotational equipment.) Moreover, Dr. Roys, the “expert” who testified on Darter’s behalf, corroborating the doses he computed, was not himself a physician but an academic with no clinical experience to validate his judgment. Second, the justice ruled that some of Natanson’s allegations of negligence had been improperly excluded from the trial court’s instructions and so had not been considered by the jury. Schroeder noted that while Kline dismissed the idea that his treatment had been responsible for the
damage done to Natanson, he did admit that such treatment could cause serious radiation burns. His testimony, Schroeder opined, revealed that Dr. Kline “knew he was ‘taking a chance’ with the treatment he proposed to administer and that such treatment involved a ‘calculated risk.’” The treatment of a cancer patient with radioactive cobalt is relatively new. . . . [It] is so powerful that the Atomic Energy Commission specifies the construction of the room in which the cobalt unit is to be placed. . . . A periodic report of radiation outside the room must be made to the Atomic Energy Commission in accordance with regulations. These facts were given by Dr. Kline in his testimony. . . . [They] are not commonly known and a patient cannot be expected to know the hazards or the dangers of radiation from radioactive cobalt unless the patient is informed by a radiologist who knows the dangers of injury [from it].
This revived the court’s concern with the issue of informed consent, a concept that had had little impact on the trial court jury. The failure of the idea to make any real impression is not surprising. As a legal concept, informed consent had not been rigorously tested by the American courts (the phrase itself first appeared in a case just a few years earlier).15 As an idea, it had no common currency in the wider culture. In fact, the last time it had attracted widespread notice was half a century earlier, in the case of Schloendorff v. The Society of New York Hospital. That case involved a woman who, in 1908, had agreed to a diagnostic procedure under ether but had not consented to any follow-on surgery. The doctors had performed surgery anyway, removing a uterine fibroid. After the operation, the patient developed gangrene in her left arm and, like Natanson, had to have some fingers amputated. She then sued the hospital. The trial jury turned in a verdict in her favor and the appeals court confirmed the decision. Drafting the opinion for the Court of Appeals in New York was Justice Benjamin Cardozo, eighteen years before his appointment to the U.S. Supreme Court. In the Schloendorff decision, he wrote: “In the case at hand, the wrong complained of is not merely negligence. It is trespass. Every human being of adult years and sound mind has a right to determine what shall be done with his own body; and a surgeon who performs an operation without his patient’s consent, commits an assault, for which he is liable in damages.” This opinion threw down the gauntlet, putting the medical establishment on notice—or so it might seem. But the courts failed to take up the challenge. Written in 1914, Cardozo’s opinion might have blazed a very
different legal trail had not the first and then a second world war intervened. The germ of the idea survived the interval in a dormant state, but the legal debate it touched on was not really taken up again for another fifty years. Schroeder, therefore, in 1960 was essentially charting new territory in his exploration of informed consent. In reviewing Natanson’s medical history, he highlighted the conditions under which treatment recommendations had been made to her, implicitly endorsing a more nuanced approach to the discussions between doctor and patient. In his view, “there was no immediate emergency concerning the administration of cobalt radiation treatment such as would excuse the physician from making a reasonable disclosure to the patient. We think . . . [Dr. Kline] was obligated to make a reasonable disclosure of the dangers within his knowledge which were incident to, or possible in, the treatment he proposed to administer.” Upon the record here presented, Dr. Kline made no disclosures to the appellant whatsoever. He was silent. In fact, Kline could hardly recall the meeting in which the subject of cobalt treatment had first been raised. “I remember in a very vague way . . . we discussed the treatment, about how long it took, the number of areas we would irradiate. . . . Her first treatment occurred the first day she came. I am not sure of that but I think so.” To the defense, this imprecision (not to say indifference) was irrelevant because, they argued, Natanson had chosen to undergo treatment. She had not been in any way coerced. In agreeing to go forward, the lawyers argued, she “assumed the risk and hazard of the treatment” herself and proceeded at her own risk. She was, in other words, a free agent consenting to a free market transaction. Never mind that the theory of economics on which this behavior is based assumes that all parties to such a transaction have access to perfect information (Natanson clearly did not). The maneuver still managed to elevate Natanson’s participation to that of a sovereign consumer. So while in reality she was in thrall to her physician’s decision making, she was presented as someone who acted decisively on her own behalf, who was “her own person.” Schroeder appears to have set this argument aside. Gingerly, he approached the recognition of hazards that physicians needed to address. By suggesting the admission of “dangers which were possible,” Schroeder opened the door to an admission of uncertainty. He recognized the physician’s duty to be more candid about his or her inability to anticipate every possible side effect. Physicians, he wrote, had an obligation to explain
“the probability of success or of alternatives, and perhaps the risks of unfortunate results and unforeseen conditions with the body.” Alas, the issue of whether and how much information a physician should disclose to a patient came smack up against a deeply entrenched paternalism, which held that doctors could be trusted to make decisions in their patients’ best interests. This had always been the sticking point in discussions of informed consent. However at odds with reality, the tradition of medicine, much like that of the law, was premised on an unquestioned identity of interest between professional and client. The similarities between the relationship of lawyers to their clients on the one hand, and that of doctors to patients on the other, were not lost on the judges (in this or any other case) reviewing informed consent. All of them were reluctant to trespass into territory held to be professionally inviolable. The American Medical Association (AMA), witnessing the rise of medical malpractice litigation, did not hesitate to reinforce these parallels, however different the practices of the two professions were in reality. An AMA-backed “Interprofessional Code” confirmed the lawyer-doctor fraternity: “The aims of the two professions are essentially parallel in their services to society and this necessitates a full understanding at all times and full cooperation when that is called for.”16 Justice Schroeder got the message. He was so anxious to avoid stepping on the toes of his fellow professionals that in his written opinion he backed away from his strong move in the direction of patients’ rights, choosing instead to reinforce the ultimate decision-making power of the physician. “The duty of the physician to disclose, however, is limited to those disclosures that a reasonable medical practitioner would make under the same or similar circumstances. How the physician may best discharge his obligation to the patient in this difficult situation involves primarily a question of medical judgment,” he wrote. The court added that “medical judgment” authorized the physician to exercise whatever restraint he deemed necessary. The ruling stated, “There is probably a privilege, on therapeutic grounds, to withhold the specific diagnosis where the disclosure of cancer . . . would seriously jeopardize the recovery of an unstable, temperamental or severely depressed patient.” The latitude this granted the diagnosing physician could hardly be broader since it fell to the doctor to make the determination of his or her patient’s mental status. But while it gave more power to the physician to limit the nature of his disclosures to patients, the so-called therapeutic privilege had
to rely on vague and outdated concepts of “good faith” and “discretion” that would be unlikely to hold up in court. Charting a course through this minefield was a formidable challenge. The court, in the end, could not figure out how to grant more decision-making power to the patient without taking it away from her physician. The ambivalence expressed here would continue to plague the discussion of informed consent for decades. Nonetheless, the decision marked a significant—if limited—step along the path toward patient decision making. The thoughtful opinion filed by Schroeder drew attention to the complexities of informed consent that later courts would attempt to resolve. It introduced a workable standard for information sharing—disclosures that a reasonable medical practitioner would make under the same or similar circumstances. This would be widely cited in related cases over the next several decades, although it too would be subjected to challenges. (Did, for example “same or similar circumstances” restrict the standard for comparison to what other doctors would do in the same county? In the same region? To what other doctors would do in the same specialty? Every state would interpret the concept in its own way, a response that hindered its passage from the esoteric world of the courts into the broader world of social policy.) Fortunately for Natanson, the opinion concluded by reversing the decision of the lower court “with directions to grant a new trial.”17 By this time (April 1960), almost five years had elapsed since her diagnosis. During the interval, her medical costs continued to soar as she submitted to further rounds of surgery. Some of these procedures aggravated her injuries. The blood supply to her arm was cut off, leaving it pinned to the side of her body—and lifeless. Years later, surgeons at the Mayo Clinic would try to restore the use of her arm by separating it from her left side and reactivating its blood supply, but they would fail to achieve the desired results. Over time, gangrene would develop in her hand and she would have some fingers amputated. As a pianist, Natanson would find this an especially cruel blow. The prospect of yet another trial must have been daunting. Dr. Kline and St. Francis Hospital requested a rehearing in early August. In denying their request, Schroeder took great care to clarify the position of the court, recognizing “that this is a case of first impression in Kansas and one establishing judicial precedence of the highest importance to the medical profession.” His opinion sharpened the court’s position on informed consent and its view of the physician’s responsibilities: “A physician violates his duty to his patient and subjects himself to liability for malpractice,
where no immediate emergency exists and . . . if he makes no disclosure of significant facts within his knowledge which are necessary to form the basis of an intelligent consent by the patient to the proposed form of treatment” (italics original). Dr. Kline, it argued, “failed in his legal duty to make a reasonable disclosure to . . . his patient as a matter of law.” Finally, the opinion spelled out the consequences of this failure for Natanson: “While [she] did not directly testify that she would have refused to take the proposed cobalt irradiation treatments had she been properly informed, we think the evidence presented by the record taken as a whole is sufficient and would authorize a jury to infer that had she been properly informed, the appellant would not have taken the cobalt irradiation treatments.” In other words, in choosing from among treatment alternatives, the choice of no treatment was as valid as any other. The court also cited a related decision filed by the Missouri Supreme Court just two days after its own April 9 opinion. The timing of the Missouri decision (in Mitchell v. Robinson) was yet another sign that society was beginning to warm to the issues raised by the emerging doctrine of informed consent. In the Missouri case, a patient suffering from severe emotional distress (but mentally competent) had been subjected to electric shock therapy without being fully aware of the full range of possible consequences. As a result, he went into convulsions that caused the fracture of several vertebrae—and sued his physicians for malpractice. The Missouri court, as Schroeder put it, “reached the same conclusion as this court on the duty of a physician to inform his patient of the hazards of treatment.” How much the opinion denying a rehearing in Natanson influenced the final outcome of the case is open to speculation. Neither side would have relished the prospect of returning to the District Court for another jury trial. But this latest opinion made it clear that a retrial might involve a totally different legal strategy from the one that had worked so well for the defendants the first time out. This time, medical experts would no longer be so likely to save the day since the ground had shifted away from the details of treatment itself and toward the more nuanced—and less predictable—issue of patient self-determination. The Schroeder opinion had, in fact, urged the jury at retrial to take up the issue of informed consent as a matter of first priority. Whatever the thinking of the parties concerned, the two sides finally agreed to a “compromise settlement” a few months later. There would be no further trial. Natanson was awarded an undisclosed sum in damages at
the end of November 1960; Kline and St. Francis Hospital were each ordered to pay half the costs.18 The courageous Irma Natanson survived in her compromised state for another thirty years. Very late in her life she confided to her nephew that she could “still smell her burnt bone and burnt skin. I am still smoldering.” In 1989, she was diagnosed with another cancer of “unknown” origin. This time, her doctors left no paper trail that might link her later cancer to any earlier treatment.19 By then, everyone had forgotten—or had chosen to forget—that thirty years earlier an expectation that Natanson would eventually suffer treatment-related cancers had been entered into the record. In itemizing Natanson’s claim for damages before the appeals court, her lawyer anticipated (and made allowance for) the future medical costs that would arise from further cancers diagnosed years hence: “The excessive radiation administered to plaintiff by defendants will result in carcinogenesis of the head of the humerus, the scapula and the chest wall. Plaintiff must remain under constant medical supervision to detect the onset of such radiation induced cancer and will have to undergo additional surgery for its excision when it occurs” (italics added). In his testimony at the same trial, Dr. Brown, Natanson’s plastic surgeon in St. Louis, confirmed this likelihood. “Chronic irradiation burns, if they last long enough, [if] the patient lives long enough, will frequently change over to carcinoma . . . she would have to be watched carefully off and on for as long as she lives.” This time out, Natanson did not live long enough to undergo much medical treatment. By the time the new cancer was discovered, it had already spread through her body and she died just a few months later, in September 1989, at the age of sixty-eight. What emerges from a retrospective look at the Natanson trial is the sense of its role as a harbinger of change in the lives of American women. Natanson set the stage for the broader transformation of the concept of “choice” down the road. It would be another decade or so before the women’s movement went public with the demand for full reproductive rights. But in the meantime, trials like this one represented preliminary skirmishes in the reshuffling and redefining of women’s roles. They may have been isolated events lacking either consciousness of or coordination with any larger purpose, but they performed an essential function just the same. They should be recognized for what they were, precursors of social change. Natanson was a casualty of a particular moment in American life when the imperatives of postwar industrial expansion simply outran most other
considerations. Social protections for the civilian population attracted little notice at a time when the country’s leaders were still calling for collective sacrifice and vigilance. By contrast, the government was eager to demonstrate its loyalty to the military. It kept the faith with returning servicemen (and those who might be called upon to serve again—in Korea or elsewhere—should the need arise) by actively promoting their reintegration into American life. The GI Bill, federal mortgage insurance, and highway programs, all facilitated the affordable education and suburban development that smoothed the veterans’ way back to the American Dream. Everyone else was on his or her own. Neither women nor cancer patients nor consumers were anywhere close to being recognized as a constituency with distinctive interests. Set against the dramas being played out on the national stage, they were entirely invisible. Identity politics, in a word, did not yet exist. The demand for civil rights was in its infancy; it wasn’t until the end of 1955 that Rosa Parks refused to be sent to the back of the bus. The liberation of women also remained far off, as the feminist and activist Betty Friedan would demonstrate so convincingly almost ten years later in her book The Feminine Mystique. Irma Natanson was not a pioneer like Parks or Friedan. Nor did she become a celebrity. She was nonetheless instrumental in setting in motion a debate on the emerging doctrine of informed consent, one that had wide reverberations through the courts and, eventually, in the culture. We know about her only because she had the grit to step out of the script of housewife and mother and, against all odds, recast herself as litigant, thereby ensuring that her ordeal would be inscribed into the public record. For its discussion of informed consent, Natanson v. Kline became a landmark case, cited as much for the contradictions it raised as for the genuine innovations it introduced to the debate. In the immediate aftermath, the decision was widely reported in the medical and legal press. The Journal of the Kansas Medical Society reported the details of the case without once ever mentioning it by name. The Columbia Law Review noted that a physician who failed to disclose the hazards of a recommended treatment would be “liable for negligence regardless of the skill with which the treatment was administered.” The Harvard Law Review offered up what was perhaps the most optimistic response to the opinion, suggesting: “The fears of the medical profession that this merely opens the door to additional damage suits may prove unfounded. Patients aware of the risks they face are likely to be far
less surprised and angry when unfortunate results do occur, and may well be less inclined to ascribe them to the doctor’s negligence. Increased communication may well result in decreased litigation.”20 Irma Natanson, the woman whose misfortune served as the catalyst for this debate, was herself forgotten. But fifty years on, it is now her side of the story that speaks to us, that has a relevance that goes beyond its spare narrative value in the pages of legal journals. If it is too late to hear directly from Natanson herself, we are now at least in a position to see the circumstances of her situation with much greater clarity and to understand how the disease and the treatment that crippled her came together with such dramatic effect in 1950s Kansas.
Chapter 3
The Rise of Radioactive Cobalt
To the extent that Americans have given any thought to the history of cancer treatment, it is the doctor/patient relationship that frames their understanding and the operating room that provides the setting. For many, in fact, the history of cancer has been more or less synonymous with the history of surgery or, more accurately, with its avatars, the great surgeons. If the names and details of patients are now forgotten or off-limits to researchers, those of the surgeons who treated them are not. Many women with a personal history of breast cancer, for example, now recognize the name of William Stewart Halsted, the father of the radical mastectomy in the United States, or of George Crile Jr., the surgeon who spearheaded the shift toward the use of more conservative surgery. These men belong to a long line of heroic surgeons stretching back to the late nineteenth century. Breasts were always an easy surgical target. Tumors in them were often palpable, sometimes even visible. Breasts were accessible, on the body rather than in it. Unlike a liver or a kidney, they were not vital organs; they could be sacrificed without loss of life, at least in theory. First on the therapeutic scene and unchallenged for centuries, surgeons were well placed to shape the medical response to breast (as to many other types of) cancer. Not only did surgery become the primary treatment; surgeons became the gatekeepers to whatever additional treatments followed. Indeed, in Irma Natanson’s case, it was her surgeon, Dr. Crumpacker, who recommended and introduced her to the radiologist, Dr. Kline. Inevitably, the prominence of surgery colored the common understanding of all cancer treatment. It kept alive the belief that an unmediated or self-contained relationship between doctor and patient alone had the
power to determine the final outcome in any fight against disease. The image of a patient and her surgeon caught up in an isolated heroic struggle survives to this day, despite the intervention on a massive scale of medical technologies and pharmaceuticals that have radically altered the practice of medicine in cancer diagnosis and treatment. We still cling to the human connection at the heart of the therapeutic experience. Of course, this is perfectly understandable and reasonable—emotional guidance and support are every bit as critical in the treatment of a potentially fatal disease as any drug. The more complex and difficult the nature of treatment, the more desperately we need the stability of a relationship with a physician we can trust. But from a historical perspective, the continuing survival of this aura of exclusivity—which saw the battle joined by patient and surgeon alone—obscures very real changes that have taken place in the nature of cancer treatment. The old-fashioned service relationship that once governed cancer care has now been infiltrated—some would say appropriated—by powerful industrial interests representing corporate giants. Widely prescribed cancer drugs, universally used mammography equipment, linear accelerators, MRI and CT machines, all have arrived unheralded in cancer centers across the country, representing billion-dollar industries with marketing imperatives to match. As a society, we seem to care very little about how they got there. Traditional medical history fails to enlighten us. Until fairly recently, it has been dominated by celebrations of surgery.1 This is not the result of any intention to deceive; rather, it reflects the availability of records and the enthusiasms of those in a position to make use of them (biographies of famous surgeons are often written by their acolytes).2 It also reflects the simple fact that most readers are conditioned to prefer reading about the lives of “great men” and their surgical breakthroughs to slogging their way through dry texts on the histories of science and technology. The absence of the human dimension certainly goes some way toward explaining the limited appeal of radiology to the popular imagination. Before the arrival of high-energy radiotherapies after the Second World War, there was no distinct image of the radiologist. The therapeutic use of radium in the years leading up to the war had been managed as much by general practitioners and by surgeons as by radiologists. Before the advent of the radiation oncologist, the specialty was primarily associated with diagnostic imaging. The radiologist was mostly out of sight, at the beck and call of the physician-in-charge. The primary-care doctor would specify his or her suspicions (broken bone? undiagnosed tuberculosis?), and the radiologist would be expected to confirm or disprove them. The important
work of developing and reading X-ray films was carried on off-site. So the radiologist’s connection to the patient was tenuous at best. An aphorism captures this remoteness. “A physician sees a patient at the bedside and imagines the disease. A radiologist sees the disease on the film and imagines the patient.”3 A radiologist had none of the immediate appeal of the early cancer surgeons. He (occasionally, she) was not a heroic figure who could cure cancer with his bare hands. On the contrary, his hands were tied from the start; he relied on machinery and where radium or isotopes were involved, on radioactive material that was tightly regulated, whether through licensing or some other means. Furthermore, he could control but he could not actually demonstrate the source of his power. Though his interpretive skills were considerable, they left no visible traces. Next to his machine, in other words, the radiologist lacked drama. Without the machine, he was nothing, a mere technician. From the perspective of narrative interest then, the radiologist could be featured like the Wizard of Oz, dispensing wisdom from behind a curtain, but he couldn’t steal the scene on center stage, where the surgeon, with marquee billing, could save the life of the leading lady with only a penknife. The radiologist was, by comparison, more a stagehand than a star.4 The messy turf battles for recognition fought by radiologists reinforce the complicated relationship between medical skills and technology that defines their medical specialty. Many radiologists, who saw themselves first and foremost as physicians, wanted to be salaried like surgeons rather than be paid a percentage of what hospitals billed for radiological services. But hospitals often saw radiology as part of a package of services that could be administered by technicians as well as by doctors, and they wanted compensation to reflect this relationship. In 1939, almost half of radiologists were still being paid a percentage of hospital billings rather than a salary.5 This kind of institutional dispute, while of critical importance in the evolution of the profession, does not easily capture the public imagination. But beyond the limitations imposed by the lack of narrative drama is the difficulty posed by the hard science involved. A true appreciation of the power of radioactive energy depends on an understanding of science that is simply beyond the grasp of most people. With no knowledge of particle physics, most of us can make little sense of the invisible rays, whether they are emanating from radium, cobalt, or from an accelerator. Without the excitement brought to the story by scientific understanding, it quickly falls flat. So for a variety of reasons, some of which are connected to its Cold War origins and some not, there have been few histories of radiology in the
popular culture. And it is no accident that the few that have made it into print have concentrated for the most part on the impressive achievements of diagnostic radiology, which, from its early focus on bones and lungs, eventually found ways to visualize almost every part of the body (from the gastrointestinal tract, to the kidney, brain, and heart).6 Being able to “see” what was going on made it possible to identify, treat, and often cure many medical conditions that had, until then, remained beyond the reach of medicine. Radiotherapy rarely appears in the histories of diagnostic radiology. But its obscurity is hardly unique. Most medical technologies now commonly in use in American hospitals are equally unsung. The problem is not the lack of histories per se; only a tiny fraction of any subject ever gets the historical attention it deserves. The problem is that the lack of any popular interest in the rise of medical technologies has been accompanied by a parallel lack of awareness of the impact that technology has had on the practice of medicine. We still want to believe in the determining role of the doctor/patient relationship and so hesitate to acknowledge the scale and influence of the medical-industrial complex, now a full partner in cancer care. Cobalt radiotherapy, like every other advance in medical technology, was a hybrid, a product of research in science and technology financed by both government funding and private capital. Unlike surgery, its development required the commitment of substantial human and physical resources sustained over long periods of time, with no guarantee of success. This represented a significant shift in scale from the investment of manpower and money required to improve cancer surgery. It was a job for institutions, not individuals. And its center of gravity moved out of the hospital. State and federal government and their scientific agencies, academic laboratories and medical facilities, foundations, private contractors, all were essential players in the story. As much business as scientific or medical history, the story draws on every aspect of postwar American enterprise and Cold War culture. This is another reason it has remained beyond the reach of orthodox medical history. The history of cobalt-60 radiotherapy is emblematic of the postwar rise of scientific research funded explicitly by government to fuel industrial development. The spirit of market capitalism that spurred the diffusion of domestic appliances after the war applied with equal fervor to the new market for atomic energy. By encouraging the use of a controlled substance requiring public oversight, government guaranteed itself a role in facilitating the development of the new technology. This put the public sector in a position to underwrite many of the early costs of cobalt research, costs that
would otherwise have been prohibitive for any private manufacturer. By contrast, one technology cobalt was replacing—X-rays supplied through bulky glass tubes—depended on electricity, a universally available public utility that needed no special support from government. Curiously then, the use of radioactive isotopes unleveled the playing field, spurring the rise of what would become a very profitable—and very private—medical equipment industry. For cobalt-60, as for other radiation sources, the path to profitability was given a further boost by interest from the military. The advent of the Cold War raised the fear that the Soviet Union would follow the example set by the United States at Hiroshima and Nagasaki, that is, it would bring the country to its knees through the use of atomic weapons. All variations of the imagined “doomsday” scenario included the mass exposure of both U.S. military and civilian populations to harmful if not lethal doses of radiation. To protect both, the government needed to know much more about the impact of such exposures. There was only one way to find out—through experiments with human beings that would scientifically measure the body’s reaction to, and tolerance of, radioactivity. The secret research program set in motion to achieve this objective eventually sponsored thousands of experiments in hundreds of American institutions. Participation required regular access to radiation facilities. With government support, industry rose to the challenge and supplied the necessary equipment. The intersection of military and medical interests that powered this industrial expansion is visible in miniature in the development of cobalt radiotherapy. The closest thing to an official history can be found in the twentieth anniversary (1964) publication of the M. D. Anderson Hospital in Houston, Texas, one of the earliest state-supported cancer hospitals in the United States.7 Although the hospital first admitted patients for cancer treatment in 1944, it had no permanent staff before the appointment of its first director and surgeon-in-chief, Dr. R. Lee Clark, in 1946. Clark came to the hospital from the air force, where he had been director of surgical research and of the department of surgery at the School of Aviation Medicine. He subsequently made the first appointments of department heads that included, in 1948, the selection of Dr. Gilbert H. Fletcher as head of the department of radiology. Before the introduction of cobalt, Europe had taken the lead in the development of radiotherapy. Supplies of radium were more plentiful there, thanks to rich deposits of uranium ores in what was then the Belgian Congo
as well as in Portugal and in what is now the Czech Republic. Extraction and processing costs were much higher in American mines in Utah and Colorado, discouraging competition with foreign producers. If the element’s greater availability helped to stabilize its price in Europe, the organization of medical care enhanced its use. The historian John Pickstone has pointed out that radiotherapy in Britain became a distinct medical specialty, separate from diagnostic radiology, earlier than it did in the United States. This gave its practitioners a critical advantage—access to beds in the hospital—which opened up opportunities for greater experimentation.8 It put physicians in a position to recommend radiotherapy as a primary rather than a secondary treatment, a practice unheard of in the United States, where surgery remained the gateway to cancer care. Sweden too ignored American orthodoxy, pioneering the joint use of both forms of treatment for gynecological cancers, which were often delivered by the same specialist, in radiotherapy centers rather than in hospitals.9 Equally significant, both Europe and Canada provided cancer treatments under the auspices of state-sponsored medicine. Governments were able to bear the high costs of radium—and of treatment that used it—without fear of reprisals from private-sector physicians. In Canada, for instance, the state exercised virtual monopoly control over the mining, processing and distribution of radium within its borders.10 The fees for radiotherapies that made use of the element (as well as those that did not) were normally paid by one tier of government or another.11 In the 1950s, provincial-federal agencies covered the treatment costs of the great majority of patients. Individual provinces determined their own allocation of cancer resources. British Columbia, for example, set up a network of diagnostic clinics across the province but centralized the provision of treatment. Britain, too, approached the distribution of cancer care strategically. After the creation of its National Health Service (NHS) in 1948, cancer hospitals were “rationalized” into general hospitals. According to the historian David Cantor, radiotherapy was now available at only one site within each NHS region.12 Health care, in other words, remained distinctly service based in systems (like those in Canada and the U.K.) that were subject to planning constraints. These tended to restrict opportunities for market development as well as for the private practice of radiotherapy. Without the spur of a homegrown radioisotope program designed specifically, as the 1954 legislation in the United States had been, to “strengthen free competition in free enterprise,” there was little incentive to encourage the research and development that would facilitate the transfer of radiotherapy to the private sector.
R. Lee Clark had witnessed the innovations in European radiation treatments firsthand, as a student at the American Hospital in Paris just after the end of the war. He was impressed by what he saw. In 1947, he sent Gilbert Fletcher to study the well-established radiology centers in Stockholm, Paris, London, and Manchester. On his travels, Fletcher met an English physicist called Leonard Grimmett who had designed one of the earliest radium-based therapy units in the 1930s. This got Fletcher’s attention. He invited Grimmett to join him at the new radiology department in Houston. Grimmett, apparently eager to return to radiophysics, accepted the offer and moved to Texas in 1949. Both men recognized the potential advantages of cobalt. The therapeutic use of radium was limited to cancers that were relatively small and easy to get to—those on the skin, lip, neck, and breast. Tiny radium “seeds” could be implanted in body cavities (in the cervix and the uterus, for example), but this often required surgical assistance, both to insert the seeds at the launch of treatment and to remove them at its conclusion. The internal use of radium also exposed surrounding normal tissues to injury (the bladder and bowel were commonly damaged by radium treatment to the cervix). The more powerful high-voltage X-ray machines then in use could be positioned farther away from the patient and so, under some circumstances, could reduce the likelihood of harm to the patient. But they too were problematic. They were expensive, difficult to install and operate, and they produced particle beams that could be hard to manage.13 Radioactive cobalt, by comparison, would be far easier to control than either X-rays or radium. It behaved more predictably. It was vastly simpler and cheaper to produce. It offered the chance to treat deep-seated tumors with less risk of radiation leakage to other parts of the body. If successful, cobalt would surely extend the reach—and upgrade the status—of radiotherapy within the realm of cancer medicine. The promise of cobalt also reinforced the need for collaboration between radiology and physics. Physicists had been involved with cancer therapy since the late 1930s when they were called in to prepare radium seeds from radium salts. But with the introduction of higher-energy machines, their input became indispensable. The calibration and close monitoring of equipment, the calculations of dosages, and the maintenance of a safe working environment were all critical to the successful operation of radiotherapy within a hospital setting. Grimmett and Fletcher were eager to collaborate and set to work designing a machine that could safely harness the new radioactive isotope
and direct its energy to a carefully circumscribed target. The first obstacle they came up against was bureaucratic rather than technical. The Atomic Energy Commission (AEC) had been set up by Truman in 1946 to take over control of all nuclear weapons research and development from the Manhattan Project, the federal initiative responsible for the development of the atomic bomb. The AEC was the sole supplier of radioactive cobalt, produced in its nuclear reactor at Oak Ridge, Tennessee. Commission approval was required for any use of the material, which could be obtained only through a licensing agreement. Shields Warren, at the time the director of the AEC’s Division of Biology and Medicine, thought the Fletcher-Grimmett project too ambitious and withheld grant support for it.14 In an end run around official resistance, local philanthropists in Texas organized a charity football game with all-star players and celebrities to raise money for research. This got the ball rolling. Meanwhile, R. Lee Clark, the M. D. Anderson director, found an alternative, if more indirect, route to the AEC. Clark was already a member of the medical panel of the Oak Ridge Institute of Nuclear Studies (ORINS) in Tennessee, a consortium of twenty-four southern universities that had access to the labs at Oak Ridge for purposes of research and training. Like everything else at Oak Ridge, this too was sponsored by the AEC. ORINS served as the academic arm of the Oak Ridge National Laboratory. After the war, the Oak Ridge reactor became the primary producer of more than a dozen radioactive isotopes, including cobalt-60. Membership on the ORINS panel brought Clark into contact with Dr. Marshall Brucer, the director of its medical division. According to the Anderson hospital history, Brucer also “was interested in the idea of using cobalt-60 in a therapy unit.” So in December 1949, Clark arranged and attended meetings at Oak Ridge, where Grimmett’s preliminary designs for a cobalt-60 unit could be discussed with others who were interested, including representatives from ORINS, the AEC, Fletcher, and Grimmett. The designs “were accepted in principle”; so too was the corresponding request for a small amount of cobalt to be shipped to Houston for experimental use. ORINS then invited university and research centers to submit their own competing designs for a cobalt unit. The task was challenging. To house radioactive material safely while at the same time permitting it to deliver energy reliably and flexibly required expertise in materials science as well as in physics. In February 1950, twelve submissions were evaluated at a conference held in Washington, D.C. In the end, the AEC selected Grimmett’s design for development. It then asked General Electric to construct
a prototype of the winning solution. GE agreed to undertake the work, signing a contract in July 1950, just as Fletcher was announcing the production of a cobalt unit to the Fifth International Cancer Congress in Paris. Grimmett never lived to see his design come to fruition; he died less than a year later. When the GE fabrication was complete, the machine was shipped from Wisconsin to Oak Ridge for preliminary testing on animals. In September 1953, a representative from Anderson was dispatched to Tennessee to escort the new machine to Houston; he followed in a car behind the truck carrying the precious (and potentially hazardous) cargo. After careful installation in the hospital over the next six months, the first patients were finally treated with the new cobalt therapy on February 22, 1954. The cozy relationship of General Electric to this project would today raise hackles. It had, after all, been the result of a “no bid” contract, that is, none of GE’s competitors had been invited to submit estimates for the job.15 But in 1950, the fear of favoritism, while real, was not industry’s primary concern. At the time, evidence of a closed selection process signified a great deal more than a simple preference for one particular contractor. What was at stake was the future relationship between the government and the market in the production of new medical technologies. During the war, all nuclear installations were government owned and contract operated (the Clinton Laboratories at Oak Ridge, for example, the first nuclear reactor, had been built to government specifications by DuPont in 1943 and operated subsequently by Monsanto). The relationship between the government and the contractor bore none of the hallmarks of a market transaction but followed the pattern set by wartime procurement practices. Competitive bidding was rare. Every contract was a negotiated agreement between two parties, based on what was called a “cost-plus-fixed-fee” basis. Given the massive scale and complexity of much defense work, it was impossible to pin down exact costs in advance. The cost-plus arrangement guaranteed that the contractor would not lose money on the deal and provided an incentive on top of that in the form of an agreed rate of profit. The goal was not to create speculative opportunities for profit but to contain ordinary business risks. On occasion, contractors took what appeared to be antimarket measures to extremes. DuPont, in considering the construction of the nuclear pile at Oak Ridge, was mindful of the negative public relations that the firm might attract by taking on work that would facilitate the production—and eventual use—of atomic weapons. To preclude the possibility of being
smeared as war profiteers, DuPont decided to set the fixed fee of their cost-plus contract at one dollar and to return any profits that might accrue from allowances for administrative overheads.16 The other big contractors also made a show of this patriotic gesture. But, in reality, few profits were ever rebuffed. Secrecy in the details of AEC contracts made it easy to pad them with additional payments under “general overhead,” “home office expense,” and unexamined expense accounts.17 The negotiated contract may have been a holdover from the war, but it would not have seemed unusual to Americans familiar with the public works projects of Franklin Roosevelt and the New Deal. When the objective was to provide jobs for Americans and generate purchasing power, direct intervention was both warranted and welcome. The general public had no problem with the cozy relationship between government and private contractors. Truman and Eisenhower inherited this way of doing business and did not shy away from adapting it to the needs of the arms race prompted by the Cold and Korean Wars. Though Eisenhower would come to deplore the stranglehold of the military-industrial complex on the American economy, he did what he felt was necessary at the time to expedite immediate defense objectives. Public research and development, therefore, now became as much a tool of postwar industrial recovery as it had been of WPA-type economic recovery or of weapons development. But it was no longer tailored exclusively to the needs of wartime defense programs. Government R & D funds were now also made available for the conversion of weapons by-products into peacetime goods or services. Prospective applications for spin-offs from nuclear reactors were eligible for the same kind of support enjoyed by defense contractors. This gave products that promised to incorporate radioactive isotopes considerable advantages over any new consumption goods or, for that matter, experimental medical treatments originating outside the framework of the Atomic Energy Commission. Innovations arising outside the charmed circle were not eligible for the extensive public pump priming that gave isotopes such a head start. Cobalt radiotherapy was one of the many beneficiaries of the new strategy. In exemplifying the goals of the Atoms for Peace program, it would—or so it was hoped—help to pacify the public’s fear of nuclear energy. The more the public knew about the peaceful applications of radioactive isotopes like cobalt-60, the more quickly the fear of nuclear power—and nuclear weapons—would disappear. The isotopes did, in fact, lend themselves to thousands of applications in both industry and medicine. The
mechanism most likely to expedite their rapid diffusion (and the message that came along with it) was the market. Government would do what it could to speed this process along. In August 1955, at the suggestion of the United States, the United Nations convened an international conference in Geneva to promote the “Peaceful Uses of Atomic Energy.” This was really the first international trade fair in atomic energy. It provided an opportunity for the American exhibitors to showcase their successes with the new technology. It also allowed them to see how their own products stacked up against the competition. At the entrance to the American technical exhibit, the welcoming remarks from President Eisenhower—in four languages—gushed about the “great strides already being made in putting atomic energy to work in industry, agriculture, medicine and research.” Newspaper coverage was equally effusive. Exhibits included reactor models with diagrams and flow charts, a display of radiation-detection instruments, and a large, continuously operating cloud chamber. Exhibitors were drawn from every sector of the economy, though equipment manufacturers (such as Pratt and Whitney, Raytheon, General Electric, and others) inevitably outnumbered hospitals (Sloan Kettering and New England Deaconess) and academic institutions (MIT and Johns Hopkins). The more than one hundred American participants at the Geneva event represented the atomic energy A-list.18 Back at home, engineering firms on this list were eagerly drawn into early discussions of cobalt radiotherapy equipment. Many of them were already in the X-ray equipment business with long-established hospital clients and medical connections. Before the government stepped up to the plate with cobalt, innovations in X-ray applications were more likely to come from close partnerships between manufacturers and hospitals. For instance, the X-ray firm Kelley-Koett, an important supplier of equipment in the early twentieth century, made machines for sale to the Mayo Clinic that incorporated the clinic’s own specifications. In return, Mayo allowed Kelley-Koett to market the same machines to other hospitals without paying royalties.19 Now government was to be injected into the design and production process. This took some getting used to. But it wasn’t long before manufacturers and physicians saw the advantages of the new relationship. Their increasing enthusiasm for cobalt machines would quickly spur market development. The number of radiologists trained to operate the early equipment prototypes grew accordingly, increasing at a rate that far outstripped the rise in the number of physicians.20 New departments of radiology soon
sprang up to accommodate them. If cobalt proved to be a successful form of treatment, every radiology department in the country would want it. The AEC guarantee of a steady and subsidized supply of the cobalt source material made all the difference. Radium for therapeutic uses had been extremely expensive and often hard to obtain.21 Now the government was intervening to overcome what would otherwise have been crippling uncertainty. But some doubt still remained about the extent of its involvement going forward. How much control would it be willing to cede to the manufacturer? After the AEC reverted to its standard practice of selecting GE to work up its own prototypes, industry representatives began to worry that opportunities to market the new medical technology would be more limited than they had been led to believe. Did government intend to continue business as usual, retaining its traditional procurement policies after all? Industry needed reassurance. Marshall Brucer at ORINS gave it to them. Addressing a meeting of X-ray equipment manufacturers, he confirmed that the AEC and ORINS were “not in the X-ray business and do not intend to get into it.” His views were backed up by an AEC colleague who added that the AEC “desires to see free competition develop among the manufacturers of teletherapy [radiotherapy] units” and “to see as many manufacturers get into the teletherapy business as is economically possible.”22 Even Leonard Grimmett, the scientist, understood the potential of the equipment he had designed. In a 1950 presentation on the new machine, he anticipated the transformation of his prototype into a low-maintenance commodity. “It is our eventual hope,” he remarked, “to produce a simple, clean, and reliable machine, needing no servicing or replacements. . . . It would seem to be a sound way of using atomic products, which should bring the benefits of high-voltage radiation within reach of the ordinary hospital.”23 The AEC worked hard to overcome many of the barriers facing would-be players in this emerging field, essentially providing the services of a trade association. It organized and hosted a series of conferences to work through early bottlenecks in design and production. One perennial problem was that of standardization among manufacturers. The design of the source capsule (the cobalt container within the machine) was particularly problematic since, first, cobalt could be shipped to manufacturers in packaging of a wide variety of sizes and shapes, and, second, the relatively short half-life of the material meant that it would have to be changed several times over the life of the machine. By 1953, the manufacturers had thrashed out a solution to this problem under the aegis of the AEC.24
One source of competition the AEC could not control was the emerging cobalt equipment industry in Canada. The Canadians had been the original pioneers of treatment with the new isotope, opening the first noncommercial unit in October 1951. Besides having rich supplies of the element for isotope production in their Chalk River reactor, the government-funded Atomic Energy of Canada Ltd had the power to produce both the radioactive source and the machine. This put them in a position to sell an integrated radiotherapy unit, an advantage it retained until the late 1950s when General Electric and Westinghouse followed their example. The clear demarcation the AEC was trying to establish in the United States—government kick-starting the process by supplying the radioactive material and private industry taking over from there—did not exist in Canada where the government was more actively involved at every stage of the process. Fortunately for the United States, the Canadian operation could not by itself meet the global demand for cobalt equipment, leaving the door open for the more expected pattern of capitalist development to assert itself south of the border and, eventually, almost everywhere else. The Canadians, however, did enjoy a distinct advantage in marketing their cobalt machines in the USSR and in China, both off-limits to American exporters. Canadian sales of cobalt machines to communist countries were, in fact, kept hidden, for fear of alienating American customers who viewed exporting to the USSR as trading with the enemy.25 Cobalt was not to be wasted on those who were immune to the blandishments of “atoms for peace.” If the Soviet Union was driven by Cold War considerations, it felt no need to justify that fact to its citizenry. Nor was it moved to smooth the way for up-and-coming private enterprise. But in withholding the benefits of capitalism from a communist state, the U.S. government demonstrated that it too could allow ideology to trump economic self-interest. The sacrifice of profits was, however, modest. There would be plenty of other opportunities for manufacturers to market the new technology elsewhere, among friendlier nations. To cultivate demand for cobalt at home, ORINS, in 1952, set up a Teletherapy Evaluation Board that joined representatives from twenty of the affiliated medical schools in its southern university consortium to “investigate, develop and evaluate radioisotopes for teletherapy.”26 The idea was to encourage participating institutions to experiment with radioisotopes on their own with a view to pooling their clinical findings several years down the road. Even though they lacked the funds to build or buy the larger 1,000-curie machines, they could still invest in—and experiment with—more
modest equipment (200-curie capacity) that could be used to treat head and neck cancers. This would familiarize them with the potential benefits that the more powerful machines might offer when they became commercially available. Sharing results with other members of the consortium would speed up the experimental process and yield meaningful data much sooner than any single medical school could achieve on its own. The clinician whose appetite for the new technology had been whetted by modest experimentation with the smaller units would, it was hoped, be motivated to pitch the purchase of a 1,000-curie cobalt machine to the hospital board when the larger machines eventually came onto the market. Having addressed both the supply and demand concerns of the emerging product manufacturers, the AEC still faced the problem of regulation. Cobalt was a hazardous substance that, in the wrong hands, could do a great deal of harm. Postwar government licensing of radioactive materials for use by academic researchers and commercial firms raised the specter of potential liability on an unprecedented scale. The possibilities for negligence in the shipping, storing, and day-to-day handling of hundreds of radioactive isotopes were literally infinite. In the first ten years of the program, the AEC made over 90,000 shipments to 4,000 users in every sector of American industry—agriculture, chemicals, food processing, plastics, textiles, as well as medicine. The AEC chose to minimize its role as enforcer. It limited its authority to the allocation of licenses for radioactive substances. Applicants for cobalt from an AEC reactor (like Natanson’s John Kline at St. Francis Hospital) had to be licensed physicians in good standing with their local medical society and with at least three years of experience in radiation therapy. But, the AEC insisted, “the design of the therapy unit, the design of the source and the loading of the teletherapy [radiotherapy] unit will be the manufacturers’ responsibility.”27 In this way, it managed to off-load accountability onto both the radiologist and the manufacturer. The Canadians, eager to penetrate the American market, repeatedly asked for details of the AEC’s approval procedure for their cobalt-60 equipment. They offered to submit blueprints, even to send a representative to Oak Ridge to discuss the drawings. The Isotopes Division of the AEC finally informed them that there was “no procedure at this time for licensing equipment which will contain radioactivity,” only a procedure for “the procurement and the use of radioisotopes.”28 Having cleared that hurdle, Canada proceeded to supply six of the first ten cobalt units installed in American hospitals (the first going to Montefiore Hospital in New York in November 1952).
In handing over control of hazardous materials to the private sector, the AEC made sure that it handed over liability as well. Accordingly, a section of the Atomic Energy Act of 1954 stipulates that “the licensee absolves the government of responsibility for damages ‘resulting from the use or possession of special nuclear material by the licensee.’ ”29 Although the AEC still retained immediate control of materials like cobalt that were produced in their own nuclear reactors, the commission deemed that radiation released by commercial machines that incorporated this material was not its responsibility. As a worried radiologist put it, “Radiation from such sources is not subjected to any legislative control whatsoever. This is a most unfortunate situation, as emphasized by the fact that the only sentence that is in italics in the new Bureau of Standards Handbook . . . reads, and I quote: ‘It is our strong conviction that any radiation control act should provide coverage for all kinds of ionizing radiation, regardless of source; to do otherwise is only to invite confusion and conflict.’ ”30 Those who drafted the new regulations were well aware of the potential harm that public liability could inflict because accidents involving radioactive materials had already occurred among workers handling uranium, plutonium, and other isotopes at the government’s own nuclear installations.31 Alerted to the open-ended nature of the hazards involved, regulators were eager to sever links between the original radioactive source materials and the final products. They preferred a clear “hands-off” policy, one that uncoupled the public sector from the chain of responsibility. From the government’s perspective, civilians like Irma Natanson, the final “consumers” of cobalt produced by Oak Ridge reactors, were twice-removed from any AEC involvement; the physician was responsible for the administration of the cobalt, and the manufacturer for meeting all the safety recommendations of the National Council of Radiation Protection. Accordingly, the government had no intention of interfering in the medical use of radioactive isotopes. The determination of safe and appropriate exposures to X-rays, all agreed, “must not be legislated upon, and the necessity of the procedure must be left to the judgment of the physician.”32 This withdrawal behind the curtain of sovereign immunity was as clear an indication as any of the AEC’s limited commitment to cancer therapy. But it suited the medical profession perfectly. While most physicians energetically resisted any government encroachment into the direct provision of medical care, they welcomed financial support that did not compromise their independence. Cancer doctors were, in fact, among the greatest boosters of the Atoms for Peace program. Radiologists were, after all, among its
beneficiaries. By making isotopes like cobalt generally available, government investment in R & D (with no strings attached) had significantly enhanced the nature of the care that radiologists (and, later, oncologists) were able to offer their patients. As long as these benefits flowed in one direction, with no concessions exacted, doctors were content. Inevitably, many physicians aligned themselves with the corporate interests that also benefited from government largesse, that is, the equipment (and later drug) manufacturers that supplied the medical innovations extending the reach of cancer treatment. The two groups were mutually dependent. Physicians were the essential go-betweens, introducing the latest therapies to their patients, sending critical feedback to manufacturers, administering clinical studies testing new equipment or drugs (with funds often supplied by the companies themselves), and advising hospital purchasing boards on investment in new technologies. This alliance (rare before the postwar period) explains, to some extent, the relative absence of the medical profession from the public debate about the links between radiation and cancer (taken up in chapter 5). Most cancer doctors administering radiotherapy lined up squarely with the Atoms for Peace program, steering well clear of the nuclear weapons controversy. After all, the same companies that manufactured their cobalt equipment (like General Electric) were also building nuclear weapons or managing nuclear sites on government contracts. It would take exceptional courage for a physician to confront this conflict of interests head-on. The lack of controversy surrounding the new therapy was yet another indication of its marginal status within the larger framework of the AEC mission. The agency’s interest in cancer was always peripheral to its main medical research objectives—the study of the effects of radiation exposure on the many thousands of its own or contract workers engaged at its atomic energy installations (nuclear reactors, bomb test sites, or any facility manufacturing components for the bomb). This focused on the short-term problems of radiation sickness as a disease that compromised the efficiency of soldiers and workers. Cancers occurring “naturally” among the civilian population were of little or no interest. Cancer research sponsored by the AEC had been a by-product of war and reflected a military mindset. After the peace of 1945, for a brief moment it may have looked as though the AEC’s civilian interests would constitute a genuine swords-to-ploughshares conversion and command a correspondingly greater portion of its resources. But the dictates of the Cold War would soon prove every bit as demanding as the U.S. military offensives
had been before the summer of 1945. Not surprisingly, the AEC’s $2.6 million devoted to cancer research in 1956 accounted for only about 9 percent of the Division of Biology and Medicine’s $30 million budget (which itself constituted less than 2 percent of the AEC’s total budget of $1.6 billion for the year).33 The AEC was accustomed to testing weapons, not cancer treatments. As preparation for the experimental appraisal of new therapies, weapons testing could hardly have been less appropriate. Experimental bombs and missiles released massive energy that was designed to kill a targeted enemy at once. It was a short-term proposition that construed the “enemy” as an abstract concept rather than a population of individuals. Transferring this imagery to cancer therapies would prove awkward. Though X- and gamma rays were also intended to kill, their target was cancer cells, not thousands of human beings. The metaphor of destruction did not easily accommodate the vast difference in scale between the two types of target. Nor did it support the ultimate objective of treatment, which was, of course, to prolong life, not to eradicate it. The AEC’s handling of potentially hazardous new therapies like cobalt reflects its inexperience with an approach to radiation that was essentially alien. In keeping with its lack of familiarity with the therapeutic process, the commission minimized the importance of clinical assessment. It might have preferred to maintain a clear line between research and treatment, but in practice this proved to be difficult. Any new therapy involving potentially hazardous exposures to radioactivity had to be tested before it could be made generally available—and had to be tested on humans as well as animals. At least partly for this reason, the AEC sponsored the construction of four small-scale cancer centers across the country.34 There, experimental new technologies using a variety of radioactive sources were tested for their practicability and efficacy on a very few (mostly terminally ill) cancer patients, at government expense. Mindful of the hackles this intervention might raise among the opponents of socialized medicine, the hospitals were intentionally modest in scale, limited to thirty beds in the case of the new facility at Oak Ridge. The restricted number of patients at any of these facilities meant that the experiments carried out there had no statistical significance. In any case, the AEC had no concept of the rigorous study methods that would come to govern the management of clinical trials decades later. In the 1950s, there was still no standardized approach to the clinical testing of therapies aimed at cancer patients. In the case of cobalt, though scientists could call upon the cumulative experience of fifty years of X-ray
treatment with radium, this would only take them so far. Aware of their limited clinical facilities, the scientists and physicians staffing the new government-sponsored cancer centers must have realized that reliable evidence about new therapies would come only with time, as data were collected and collated by statisticians in general hospitals with much larger patient loads. Though aware of the controversial nature of their clinical practice, viewed by many as yet another form of government trespass, they knew their days were numbered, that they represented only a transitional stage in the handover of radiotherapies to the private sector. And they were right. Even in the early 1950s, before the market for cobalt equipment had established itself, AEC scientists were raising questions about “the advisability of continuing with further support to teletherapy.” In 1955, the AEC, sponsoring five teletherapy research projects, decided that the program would not be expanded and that existing grants would be wound down after a few more years. At a meeting of the Advisory Committee for the Division of Biology and Medicine, a senior scientist asked whether their function was “to help in research or to help in therapy?” to which the division director gave the clear reply, “We do not support therapy for therapy’s sake.”35 How much pressure was brought to bear by X-ray equipment manufacturers to bring the new product to market as quickly as possible? And how ready and willing was the AEC to comply? By the summer of 1955, when Natanson was treated, eighteen commercially built cobalt units of more than 1,000 curies had already been shipped. With no formal approval process in place, no alert gatekeeper on the lookout for hard evidence, it was almost inevitable that cobalt therapy would be prematurely sprung from the lab to the hospital clinic. In fact, the first five-year follow-up study of the new technology did not appear until 1957, two years after Natanson’s ordeal. The wholesale privatization of the new treatment was given an enormous boost by the passage of the Atomic Energy Act in 1954. For the first time, the new legislation opened up opportunities for the private development and ownership of nuclear power. No longer limited to the role of contractor, private companies could now become a full partner in the construction and management of nuclear power plants. The pursuit of profitability in this industrial undertaking had now been officially legitimized. In the wake of this sea change, small fish like cobalt therapy passed unnoticed into the commercial mainstream. By the late 1950s, the AEC began to wind down its production and distribution of cobalt-60 “as private firms demonstrated capability to supply the market on a competitive basis.”36
A decade later, the AEC ceased operating in this arena altogether.37 There were now private suppliers of cobalt sources as well as equipment manufacturers. The market for the new technology had become thoroughly self-sustaining. Cobalt came to dominate the market for radiotherapy equipment for the next quarter century. Though still in use today, it was eventually eclipsed by an offspring of the early cyclotron, the linear accelerator. The high-water mark for cobalt was the 1970s. An estimated 970 machines were in use in 1975, more than double the number of linear accelerators then in use. By 1990, the number of cobalt machines had fallen to about half its peak level while the number of accelerators had grown almost fivefold, approaching 2,000 machines.38 The newer technology eliminated all the operational problems associated with the relatively short half-life of a radioactive substance like cobalt. In 1961, the Public Health Service sponsored the first large, randomized clinical trial to assess the value of postoperative radiotherapy following mastectomy. The study, conducted under the aegis of the National Surgical Adjuvant Breast Project (NSABP), had been prompted by “therapeutic uncertainty” surrounding the use of the treatment. Evidence to date had been wildly contradictory. Some studies argued that postoperative radiotherapy actually harmed a patient’s chances of survival, others that it showed a marked benefit. Over the course of three to five years, the NSABP trial followed the progress of 1,103 “study patients” who were treated at twenty-five different institutions. All the participants had undergone radical mastectomies before enrolling in the trial. The results, reported in 1970, “failed to demonstrate the advantage of postoperative irradiation as an adjuvant to surgery in the treatment of operable breast cancer.” Five years out, the same percentage of women in both the radiotherapy and control groups were disease-free (50 percent). The treated group, however, did show a decreased incidence of both local and regional recurrences in areas that had been irradiated. But it showed no survival benefit. Later NSABP trials would put postoperative radiotherapy in a better light when combined with less radical surgery. The odds of survival for women undergoing a lumpectomy followed by postoperative radiotherapy would turn out to be essentially the same as for women undergoing more radical surgical procedures. Radiotherapy, in other words, could spare many women the loss of a breast.39 By the early 1960s, cobalt machines were up and running in academic medical centers across the country. (Almost all the participating institutions
in the early clinical trials were university affiliated; five of them had been members of the Oak Ridge Institute of Nuclear Studies academic consortium.) Over the course of the 1950s and beyond, the use of cobalt skyrocketed. By June 1956, sixty cobalt-60 radiotherapy machines were in operation in the United States. A year later, the number had risen to 110. An AEC survey carried out at the time estimated that 5 percent of all U.S. hospitals were, by then, using radioisotopes routinely in either the diagnosis or treatment of patients and/or in research.40 Changes at M. D. Anderson exemplify the trend. In 1956, they took their original machine back to Oak Ridge to have its cobalt source replaced by a new one, at the same time upgrading from a 1,500- to a 2,000-curie machine. The next year, the hospital acquired a second cobalt-60 unit to complement the older one, this time designed and manufactured by Atomic Energy of Canada. The new machine incorporated some modifications, including a rotating suspension mechanism which allowed the machine to be used with greater flexibility—and therapy to be targeted with greater precision. It was first used to treat patients in September 1958. For both manufacturers and hospitals, investment in cobalt equipment was a long-term commitment that carried significant risks. For the manufacturer, R & D was expensive and time consuming, with no guarantee of success. For the hospital, the purchase of a cobalt-60 unit also necessitated the costly construction of a purpose-built room in which to house it and specially trained staff to operate it. A new machine would take years to pay for itself.41 The very scale of the undertaking on both sides encouraged a belief in the technology’s durability. The idea had been daring; the reality was imposing. There was, in other words, an investment dynamic at work here, operating independently of feedback measuring cobalt’s therapeutic effectiveness. The machines were in widespread use well before reliable clinical evidence justified their expense, at least in the context of breast cancer. Manufacturers, even with government support, still had to create their own markets. The financial imperatives that drove the new technology ran ahead of demand but were every bit as important in establishing the permanence of radiotherapy as a treatment for cancer. A spin-off of the military-industrial complex, the success of cobalt admitted corporate America as a full partner in the practice of cancer medicine. The introduction of cobalt into general use in advance of robust evidence demonstrating its effectiveness—and limitations—set a postwar pattern that has endured. Despite the elaborate protections now offered by the many-layered structures of clinical trials, new drugs and new therapies are
still marketed before their long-term consequences are fully revealed. The forty-year history of hormone replacement therapy for menopausal women illustrates the same dynamic at work fifty years after the introduction of cobalt.42 The premature rush to market more or less guarantees that the first few generations of users will be guinea pigs. The well-orchestrated fanfare that accompanies the arrival of any new product plays to a receptive audience that wants to believe in it. The newcomer, once over the hurdle of formal approval, is given the benefit of the doubt, essentially a grace period in which critical judgments are withheld. This creates an opportunity to chase what I would call “time-lag” profits—revenues that accumulate before evidence undercutting the safety or efficacy of any new product becomes too great to ignore. The lag can last anywhere from a few years to decades. In the theoretical world of free market economics, competition would eventually be expected to sort out the winners from the losers. In the world of cancer patients, the evidence prompting a reassessment of costs and benefits derives not from market feedback but from real human beings for whom winning and losing have a more immediate and exigent significance.
Chapter 4
The Cobalt Back Story “A Little of the Buchenwald Touch”
Compressed into a few pages of the history of the M. D. Anderson Hospital, the story of cobalt radiotherapy in the 1950s displays all of the hallmarks of orthodox medical history. It celebrates medical pioneers intuiting a new application of an existing form of treatment, in this case radiotherapy with a new source, cobalt-60. With institutional backing that guaranteed a stable experimental environment, the men pursued the research and the clinical testing they deemed necessary to demonstrate the effectiveness of their innovation. Once the kinks were worked out of the system, the therapy could then be made available to cancer clinicians across the country, prolonging, if not saving, the lives of many cancer victims. Of course this version of the story overlooks the consequences for those like Irma Natanson who paid the price for the postwar rush to privatize and to bring new treatments to market. How many others suffered exactly as she did? How quickly were the results from the early use of these machines disseminated among practitioners across the United States, either through clinical training or through the literature? In the 1950s, the mechanisms that existed for the collection and exchange of data on experimental treatments remained informal and patchy. Moreover, there was little incentive to report on treatment failures, at least not before the statute of limitations on medical malpractice claims had run out. Given these impediments, it’s hard to believe that there weren’t other Irma Natansons, all unsuspecting casualties of a still inadequately understood therapy. Cancer patients like Natanson were the victims of a fast-paced industrial expansion that significantly extended the reach of market economics
into the treatment of cancer. Its boosters would brook no criticism of this transformation. On the contrary, they would argue, the rapid success of cobalt revealed the vibrancy and resilience of American capitalism. As such, it provided the best possible demonstration of the superiority of the American system, the way of life that the Cold War sought to defend. But extolling the virtues of market medicine also served another purpose. The commercial success of cobalt camouflaged a tale of Cold War opportunism that exploited the cover of therapeutic care for decidedly nontherapeutic ends. In a political culture dominated by fear, it was difficult to restrain the military imagination. In 1946, well before the English physicist Leonard Grimmett had ever heard of the M. D. Anderson Hospital, the U.S. Air Force conceived the fantastic idea of developing a nuclear-powered airplane. An on-board nuclear reactor, the air force contended, would be able to supply the energy necessary to power a long-range bomber at high speeds while taking up just a fraction of the space of conventional jet fuel. The airplane would also, it was argued, be much less vulnerable to radar tracking and interception than were high-altitude bombers.1 Air force enthusiasm for the idea was boundless. But it did not take a rocket scientist to raise some very obvious objections. First, uranium used as fuel might be lightweight in itself but required substantial protective shielding that would add several additional tons to the overall weight of the plane. Second and more serious, flying a nuclear reactor around the world would not only expose the crew to potentially lethal doses of radioactivity but, in the case of an accident—especially one occurring over a heavily populated area—might release catastrophic amounts of radiation. Such a scenario did not seem to trouble the airplane’s boosters. But the project, dubbed “NEPA” (for Nuclear Energy for the Propulsion of Aircraft), was not universally popular. Among its many detractors were some very powerful scientists, including Robert Oppenheimer and the president of Harvard, James Conant, both of whom were members of the AEC’s General Advisory Committee at the time the project was announced. President Eisenhower was also dismayed that the commission would spend “so much money on such a fanciful idea.”2 But despite their opposition, none of these critics took action to block it. NEPA won enough funding support from the air force to set up preliminary operations at Oak Ridge with former Manhattan Project contractors (with Monsanto on the design of the power reactors and Fairchild Engineering on the design of the airframe). The AEC would later invest heavily in NEPA as well.
The project came up immediately against the issue of radiation hazards. No one knew what levels of exposure, if any, human beings could safely tolerate. A nuclear-propelled aircraft would clearly place members of its crew perilously close to radioactive material. Crew members would be doubly at risk since the planes would also be carrying nuclear cargo, missiles en route to enemy targets. Were the harmful effects of intermittent exposure (on, say, weekly flights) cumulative, or delayed? Did the human body have the capacity to recover from any harmful effects between exposures? How much radiation could a person absorb and still be able to function (that is, still be able to pilot a plane)? With hard data in hand, the air force would be able to determine “how long a flight and how many flights an individual may take in the course of his career and not be injured by the effects of nuclear power plant.”3 NEPA’s medical advisory committee believed that these questions could only be answered by direct experimentation with human beings.4 NEPA needed persuasive data if it was ever to get its nuclear plane into the air. And it drew up a plan to get it. Specifically, it wanted to use whole-body radiation as a surrogate for in-flight emissions from a nuclear engine and to measure its impact on whatever substitutes for a flight crew that it could find. Whole-body (sometimes called total-body) radiation was first attempted in the 1930s as a treatment for cancer. It was designed to expose the entire body uniformly to X- or gamma rays from an external source (or sources) rather than targeting a particular organ or a limited field. From very early on in its experimental history, however, it became apparent that the technique showed promise only when applied to widely disseminated cancers like lymphoma or leukemia. And even for these cancers, it required high doses of radiation that increased the risk of bone marrow depression, a potentially fatal side effect (in the days before bone marrow transplants were available). For the majority of other cancers, particularly solid tumors, total-body radiation showed “deleterious effects, often resulting in increased tumor growth or rapid death of the patient.”5 For these malignancies, the treatment never became standard practice. After Hiroshima, however, total-body radiation found a new lease on life. This time it was no longer medical scientists but military strategists who saw, in the exposure to radiation from multiple sources, a reasonable approximation to the kind of radioactive exposures that would follow a nuclear attack. Delivered experimentally, total-body radiation would, they hoped, simulate the conditions on a nuclear battlefield and so provide
valuable feedback that could be used to help protect both troops and civilians. What they envisioned was, in effect, a kind of war game in a hospital setting. Humans would be exposed and then monitored. Investigators would search for biological markers to measure the impact and severity of exposure. Ideally, the work would establish reliable indicators (“dosimeters”) that could be measured in the urine or blood. These would enable the military to make an accurate assessment of the level of danger to which combat troops would be exposed. But how would enthusiasts for this work ever get permission to employ real weaponry (specifically, weapons releasing radiation) on real participants? By the late 1940s, human experimentation had become a controversial topic. The Nuremberg trials had drawn the world’s attention to the extreme abuses inflicted by Dr. Josef Mengele and his ilk in the name of medical science.6 Coverage of the trial of forty-two Nazi doctors put the question of medical ethics on the map. The verdict in 1947 was a front-page story—seven of the accused were found guilty and hanged; another eight were sent to jail. The revelations of mistreatment flushed out by the trial stirred public outcry and raised doubts about the integrity of the medical profession. To fill what appeared to be an international void and reset the course of human experimentation going forward, the tribunal issued a set of ten principles that became known as the Nuremberg Code. Of paramount importance and ranked first, “the voluntary consent of the human subject is absolutely essential. This means that the person involved . . . should be so situated as to be able to exercise free power of choice, without the intervention of any element of force, fraud [or] deceit.” Tenet 4 of the Code goes on to add: “The experiment should be so conducted as to avoid all unnecessary physical and mental suffering and injury.” The Nuremberg Code sparked considerable discussion among the management elite in the U.S. defense department. Documents declassified in the 1990s reveal a substantial familiarity with the subject in the late 1940s and early 1950s.7 They show that every aspect of it was explored; various consent forms, for example, were drafted and circulated. The question of what types of human subjects were permissible—prisoners, healthy volunteers, sick patients in VA hospitals—was also hotly debated. But overriding all of this interagency communication was the need for secrecy. An internal memo sent out by the AEC under the subject heading “Medical Experiments on Humans” instructed that “no document be released which refers to experiments with humans and might have adverse effect on public opinion or result in legal suits. Documents covering such work . . . should
be classified secret.”8 The circulation of such a warning suggests a disconnect between the broad but noncommittal discussion of ethical guidelines under consideration and the actual conditions that individual defense agencies found acceptable in experiments they designed and funded on their own (in the absence of any executive mandate, each agency was left to make its own decisions about what policies, if any, to apply within its ranks). Although the record is too incomplete to allow any generalizations to be made, there is little evidence to suggest that any measures to safeguard the interests of human subjects were systematically adopted, either through executive mandate across the board or by any single agency acting on its own. The Nuremberg guidelines were recommendations only, without the force of law; every country was left to respond to them in its own way. In the United States, they served primarily to trigger an inconclusive internal debate. Six years after they were first issued, in 1953, they reemerged, largely intact, in a top-secret memo from Eisenhower’s secretary of defense addressed only to the secretaries of the army, navy, and air force.9 The protections laid out in the memo left open the question of enforcement and remained discretionary. Judging from the wholesale disregard of most of them in experiments carried out over the next two decades, it is safe to conclude that the memo’s impact on the conduct of human experiments was modest at best. As the Advisory Committee on Human Radiation Experiments (ACHRE) Final Report put it in 1995, the “stated positions” that circulated in classified documents “were often developed in isolation from one another, were neither uniform nor comprehensive in their coverage, and were often limited in their effectuation.”10 In that critical six-year interval between the publication of the Nuremberg Code in 1947 and the Department of Defense’s classified response to it in 1953, the Soviets detonated their first atomic bomb and the U.S. entered a war in the Korean peninsula when the Soviet-backed North invaded the American-backed South. Cold War fever flared up. The U.S. military pressed harder for research it hoped would yield a critical advantage against its communist enemies. NEPA supporters hoped to capitalize on the growing sense of urgency. They made human experimentation their highest priority recommendation and lobbied hard for it over the next few years, pursuing “various prominent individuals” to help them persuade “influential agencies” to approve their plans for human experimentation.11 NEPA spokesmen floated the idea of using prisoners as experimental subjects and pitched it to Shields Warren, the head of the AEC’s Division of
Biology and Medicine and, as such, the final arbiter on the NEPA proposal. Though an ardent supporter of nuclear energy, Warren had not long before returned from Japan where, working as a pathologist, he had witnessed the consequences of radiation firsthand as part of a team documenting the health effects of the atom bomb. He rejected NEPA’s proposal: “It’s not very long since we got through trying Germans for doing exactly the same thing.”12 For some of Warren’s colleagues on the AEC committee, the difficulties raised by human experimentation were more than hypothetical. A few committee members had themselves taken part in secret research with human subjects. One of them, the physician Joseph Hamilton, was a veteran of the Manhattan Project. Earlier in his career, he had actively promoted the development of “radiological warfare,” that is, the use of isotopes as weapons of mass destruction; in the form of military poisons, they could inflict long-term damage, destroying municipal water supplies or contaminating the soil and air for decades.13 Hamilton had also given injections of plutonium to a handful of cancer patients in San Francisco, without their consent.14 Now, several years later, he adopted a more circumspect posture in response to the NEPA proposal, recommending the use of large monkeys such as chimpanzees instead of humans. If the researchers were to go ahead and use prisoner volunteers, he wrote to Warren, “I feel that those concerned in the Atomic Energy Commission would be subject to considerable criticism, as admittedly this would have a little of the Buchenwald touch.”15 Although boosters of the nuclear aircraft continued to work hard to bring key decision makers on board (the Public Health Service, for instance, “expressed a willingness to proceed with the NEPA proposal”),16 they couldn’t sway the ultimate gatekeeper, the AEC committee. After several further attempts in the fall of 1950, NEPA finally acknowledged defeat, noting for the record, that their “earnest effort to obtain needed information . . . had failed and that NEPA can offer the Air Force no better data for estimating radiation hazards associated with nuclear powered aircraft than those submitted almost two years ago, at which time the estimates were unsatisfactory.”17 But if the official paper trail ends here, suggesting compliance with the prohibition against human experimentation, the deeper, formerly restricted records tell a different story. Nine months before their formal admission of failure, that is, in March 1950, representatives of the air force met twice with R. Lee Clark, Gilbert Fletcher, and Leonard Grimmett at the Anderson
Hospital, explicitly to discuss their possible participation in a joint experiment with the Aviation School of Medicine involving the use of cancer patients. The link between the two institutions was probably forged by Clark, who had come to the hospital from the air force. He would have been most likely to welcome their attention—and the possibility of grant money that came with it. At the second meeting, Fletcher agreed to submit a project plan “embracing the NEPA requirements.”18 These “requirements” actually specified every detail of the project, including all staffing requirements and an itemized budget. They had been drawn up months before any meeting took place between the air force officers and the Anderson doctors, and there is no evidence that they were ever subsequently modified. NEPA called the experiment the “Psychomotor Performance of Patients Following Total Body Exposure to Penetrating Radiations.”19 The radiation would approximate the exposure that air force pilots of nuclear-powered planes would be likely to experience. Participants would be exposed to “whole body dosages in the range of 10 to 100 roentgens, given over periods of from 30 minutes to several days.”20 Before and after their exposures, the volunteers would undergo tests of motor skills deemed to be critical to flying a plane. The tests involved measurement of manual dexterity, hand-eye coordination, and steadiness, similar to those administered by the air force in the selection of aviation cadets during World War II. The experiments implicitly acknowledged the need to demonstrate the relative safety of radiation exposures in nuclear-powered aircraft. NEPA administrators understood that their project would never go forward without it. But they also believed that the formal rejection of their experimental proposal was just an obstacle in the critical path of development. If they could somehow make a case for the safety of the air crew, they believed the project might still have a chance. In an environment protected by complex rings of secrecy, a well-ordered paper trail would be all that was required. Unwilling to give up its larger objective, NEPA did an end run around the prohibition against human experimentation, reinterpreting it as an injunction against experimentation on healthy subjects. Substituting cancer patients for healthy volunteers allowed them to recast their study as an adjunct to cancer therapy. A careful choice of words transformed radiation that had been purely experimental into exposures that now became therapeutic. “Incurable cancer patients expected to be free of constitutional symptoms for several months” were to be the designated stand-ins for pilots. Cancer patients were the obvious choice because their treatment
already involved radiation. Experimental exposures that used the same equipment in the same place but altered the level and frequency of doses could, therefore, hide in plain sight. An internal NEPA memo illustrates the expedient conversion from one perspective to another. The fact that human experiments had “been forbidden by top military authority” should not deter them, it said. Since the need is pressing, it would appear mandatory to take advantage of investigation opportunities that exist in certain radiology centers by conducting special examinations and measures of patients who are undergoing radiation treatment for disease. While the flexibility of experimental design in a radiological clinic will necessarily be limited, the information that may be gained from the studies of patients is considered potentially invaluable; furthermore, this is currently the sole source of human data.21
The coded message conveyed through “special examinations” that could be conducted in “certain radiology centers” would have been immediately apparent to any authorized reader. On paper, the substitution of “study” for “experiment” would maintain the necessary firewall between what was permissible and what was forbidden. A March 1952 conference summary illustrated the tolerance for cognitive dissonance that the careful use of language permitted. First it announced the prohibition on “human experimentation. This was not carried out although desirable [sic].” Then it recorded the following: “The M. D. Anderson Hospital at Houston, Texas is making a study of human patients who have been exposed to various amounts of x-irradiation. The tests on humans involve coordination and psychomotor responses before and after irradiation.”22 This economy with the truth would be expected from the project’s military boosters. For them, the end justified whatever means they could mobilize. They were unlikely to have been troubled by the ethical difficulties their experiments posed. But how were the participating physicians brought on board? One can only speculate about their response. As civilians, neither Fletcher nor Grimmett would have known anything about the military’s stand on human experimentation, although they would both almost surely have been aware of the Nuremberg Code. Did they make a connection between the Nazi abuses and their own experiments? Or was the prospect of research funding just too seductive to allow any reservations to cloud their enthusiasm? Both men would have been aware that alternative sources of research money were scarce. (In fact, from 1945
through 1960, defense research expenditures accounted for 80 percent or more of the entire federal R & D budget.)23 They would also have appreciated that in developing applications for cobalt-60, they were pioneers, ahead of the pack; they may have been eager to exploit that lead. Colonel John E. Pickering, the air force’s officer on the study, when asked more than forty years later whether and “how much the doctors at M. D. Anderson would have known about the NEPA project,” replied, “I would suspect that Lee Clark was well aware of this. Knowing Gilbert Fletcher, he would ask enough questions to know . . . so I think my answer would be unequivocally yes.”24 The contract got underway in the summer of 1951 under the joint direction of John Pickering and Gilbert Fletcher.25 It remained in effect through several renewals, until 1956. The air force would oversee the administration of psychomotor tests and the Anderson group would administer the radiation. Over the lifetime of the project, 263 “volunteers” were enlisted. All were terminally ill cancer patients who had already undergone radiation treatment for their malignancies, some several rounds. None had cancers that were expected to benefit from whole-body radiation. Most of the volunteers were African American or Hispanic. These were patients who, in 1951, still had to be housed separately, according to Jim Crow laws of the South. Drawn from local hospitals, many were indigent. An exposé of Houston’s charity hospitals written ten years later documents the execrable standard of care still prevalent in the mid-1960s—the “overpowering stench of poverty, sickness, neglect.” Said one of the staff physicians, “We are forced to treat the poor Negroes as another species, test animals, relics of the Stone Age.”26 Not surprisingly, there is no evidence that any of the patients in the Anderson experiments were ever informed of the true purpose of the study in which they took part or asked to give their formal consent in writing. It is much more likely that their involvement was presented to them as simply the next stage in ongoing treatment. Over the six years of the Anderson experiment, those who participated received radiation doses ranging from 15 to 200 roentgens. The lower doses were administered, along with the tests of motor skills, in the early years. Gradually, radiation doses were ratcheted up. The thirty patients in the final group to be tested (including three women) each received a single whole-body exposure of 200 roentgens and were closely monitored over the course of a few weeks. Most suffered the nausea, vomiting, and bone marrow depression that are characteristic of radiation sickness. A year later, only three of them were still alive.
Each of these thirty patients had been hand-picked by the physicians assisting Dr. Fletcher in administering the experiment. The doctors had access to their patient records and were familiar with their medical histories. They probably knew most of them personally. Yet they deliberately selected patients who were suffering from cancers known to be resistant to total-body radiation, specifically, from solid tumors in the breast, head, and neck rather than from disseminated disease like leukemia. This was done to minimize the confounding of radiation's impact on the body with its impact on disease. The doctors, in other words, had chosen patients who were least likely to benefit from total-body radiation in the hope that their reactions would come closer to approximating those in a healthy subject. Theoretically at least, opportunities existed for the physicians to inform their patient-subjects of the possible risks involved in advance. After all, the doctors had already witnessed the symptoms of radiation sickness in subjects exposed to lower doses during earlier rounds of the experiment. It is, however, almost certain that there was minimal, if any, disclosure. Disclosure of risks would have exposed the experimental nature of the undertaking. The fiction of piggy-backing an extracurricular interest onto existing therapy, which had finessed the problem of human experimentation in the contract application, now had to be maintained. Every year, the contract had to be resubmitted for renewal, leaving a paper trail that needed to be consistent even if it was classified. Since M. D. Anderson was exclusively a cancer hospital, it seems likely that all its patients understood something about the diagnosis that had brought them there. But any candid discussion between doctor and patient of his or her likely prognosis was extremely unlikely. Such exchanges were, in the 1950s, rare indeed in any hospital. It's more probable that the extra dose of experimental radiation was held out as a last hope, a lure to win the patient's compliance rather than what it really was, the cynical manipulation of a patient for whom no hope remained. In its explicit disregard for the welfare of its patients, the M. D. Anderson experiments recall perhaps the most notorious of the medical abuses of the time—the Tuskegee studies of black men with tertiary syphilis in Alabama. Here, male subjects were encouraged to undergo spinal taps, which carried a risk of serious headaches. Although the procedure served a diagnostic rather than a therapeutic purpose, it was presented to its patient-subjects as a "special free treatment."27 The M. D. Anderson patients were also being "treated" free of charge. Did the practice of charity medicine increase the passivity of patients, who
were expected to be more compliant, grateful for whatever “care” they received? Did the absence of any fee-based contract between physician and patient weaken the physician’s commitment to the tenets of the Hippocratic Oath, loosening the restraints of accountability as well as the fear of litigation? In fact, no survivor of the Houston experiments (or surviving family member) ever did bring a lawsuit against the Anderson or any of its doctors. The perceived privileges of rank were clearly at work here, as the confessions of a participating physician revealed in an interview decades later: “Before Medicare, Medicaid . . . we medical students and our teachers looked upon ourselves as belonging to another social class from the patients we were taking care of. That they were lucky to be getting care of the best doctors in the community. . . . I really felt that had a little to do with the fact that we felt we were free to test these people and carry out studies on them.”28 The Hippocratic Oath and the ethics of human experimentation were subjects that interested Lee Clark, the M. D. Anderson president. Whether or not his concern had been sparked by the research contract he had helped to procure, Clark pursued the topic, undertaking a historical review of medical ethics for an address he delivered to colleagues in 1960 at the Mayo Clinic, shortly after the air force contract had expired. He called his address “The Ethics of Medical Research in Its Human Application.” In it, Clark drew primarily on literary rather than medical references, intimating that the solution to the ethical challenge was a question of civilized behavior rather than one of power or politics. Citing William Shakespeare, Bernard Shaw, and William James set the discussion within a classical tradition that left the nobility of the medical profession intact while admitting the fallibility of individual practitioners. The closest Clark came to an admission of the subject’s difficulties was to acknowledge what he called “dual ethical complexities that physicians must solve.” “Any professional group,” he wrote, “has an absolute and a relative system of ethics and the demands of each are not identical.” In other words, theory is one thing, the nitty-gritty decisions individual physicians are forced to make in practice quite another. But Clark stops here. He makes no mention of contemporary experiments in the United States nor any reference to his own experience unless one counts his remarks about Nazi medical abuses; these come fairly close to describing the circumstances of the radiation experiments carried out on his watch. As he describes them, “they were not the isolated and casual acts of individual doctors and
researchists working solely on their own responsibility, but were the product of coordinated policy-making and planning at high government military [levels] . . . approved by persons in positions of authority who under all principles of law were under the duty to know about these things and to take steps to terminate or prevent them.”29 This was sailing close to the wind. At some level of consciousness, Clark clearly understood the consequences, for patients, of a secret chain of command that jeopardized their best interests. But he didn’t—or wouldn’t—see himself as complicit in any such “coordinated policy-making.” What helped keep a firewall in place between Clark’s public posture and his private experience was the by-now ingrained habit of secrecy. This was not the defensive seclusion of an oppressed minority trying to stay below the radar but rather an empowering secrecy that, during the Cold War, was a mark of ultimate insider status. Of course Clark’s response was not unique. Gilbert Fletcher found himself in an even more challenging situation. Unlike Clark, who was an administrator, Fletcher worked directly with patients. As the head of the department of radiology, he was more aware than most of the versatility of radiotherapy in the treatment of cancers. He was also aware of its limitations. In a thoughtful article published in 1955 summarizing the current thinking on the use of radiotherapy, he wrote: Ionizing radiation is highly toxic and damaging and should never be used without a specific purpose. Often, the patient is in worse condition after an ill-advised course of radiotherapy than he would be if he had not been treated. Radiation necrosis superimposed on growing cancer is more painful than cancer alone. However tempting it is to the physician to do something for the hopeless patient, the greatest kindness is not always to employ aggressive therapy by the indiscriminate use of a highly damaging agent.30
It is hard to understand how Fletcher reconciled this understanding with the selection, shortly afterward, of the final thirty patients in the NEPA project, all of whom would be subjected to the potentially toxic radiation dose of 200 roentgens and then asked to perform tiresome repetitive tasks that bore no resemblance to treatment. Again, mandated secrecy was the prompt for the willful misrepresentation of “medical” intervention. As a management strategy, it closely mirrored the practice of “compartmentalization,” the administrative policy at the heart of the Manhattan Project.31 To impose fortress conditions on what
was, in reality, a loose and widely dispersed network of operations, General Leslie Groves introduced a form of heavy-handed partitioning. In his mind, “secrecy and security were synonymous.”32 To put this belief into practice, Groves dictated that every constituent undertaking be cordoned off from every other so that almost no one would be in a position to understand the mission or timeline of the overall project. To coordinate the complex strands of activity and keep him informed, a very few “area engineers” were granted special authority to peer over the walls separating the disparate research and production sites. Such a policy would, it was hoped, severely restrict the opportunities for leakage; if enemy agents infiltrated one operation, the damage they caused could be contained. Even scientists were kept in the dark about what their colleagues at another location might be up to. Workers who quit jobs at one Manhattan Project site could not be rehired at another. Curiosity was discouraged; asking questions could get one fired. Most people working in the program did not, in fact, know its true purpose until the bomb was dropped on Hiroshima. Though scientists chafed at these conditions, the success of the bomb project vindicated the policy of compartmentalization, at least among those who supported it. When the Cold War reawakened the same fear of enemy agents and espionage just a few years later, it was inevitable that the same “culture of secrecy” would reassert itself and be applied to every initiative that had implications for national security, including, of course, the radiation experiments. But under the recycled regime of compartmentalization, the physician-investigators did not follow the protocol of their predecessors. Instead, they breached the critical barricade separating classified work from everyday medical pursuits. Physician-researchers could be preoccupied with both at the same time and in the same setting. Given this uncomfortable proximity, what internal “area engineers” were physicians able to call upon to regulate communication between the two spheres and to keep traffic between them to a minimum? It is hard to overlook the influence of the Cold War command structure on physicians’ responses. Doctors were under orders to follow prescribed codes of behavior. The demand for concealment followed research funds into every hospital that signed a defense agency contract. Such conditions displaced—or at least disturbed—the prevailing philosophy of care. The difficulties this created may have been one more reason for physicians to select, for inclusion in their experiments, just those patients who would be most malleable, that is, those least likely to ask challenging questions that
might stir up unresolved ethical issues. Ethnic minorities and veterans both fit the bill since racism and military discipline served equally to reinforce medical authority. As in-patients, both groups also offered another critical advantage: they could be physically segregated from other cancer patients, in either separate wards or separate buildings. The exceptional stresses of laboring with a disease with very high mortality rates added further fuel to the fire. The working environment in a cancer hospital in the 1950s must have been grim indeed. The repeated failure to save lives had to take its toll on the outlook of those charged with administering care. Between 1944 and 1960, the ten-year survival rate of all patients treated at the M. D. Anderson was, in fact, less than 25 percent.33 Physicians inevitably called upon a wide range of coping strategies to keep their spirits up. Such tactics would not begin to shift until survival rates began to improve in the early 1990s.34 Until then, the costs of the rationalizations that protected the medical profession would continue to be borne by cancer patients. In demonstrating their readiness to make whatever concessions were necessary, physicians played their part in sustaining the state of high alert that helped to legitimize the secret experiments. But while senior strategists plotted behind closed doors and dealt, if at all, with a faceless public, physicians caught up in the radiation experiments were far closer to being foot soldiers. And like foot soldiers everywhere, they had the tougher assignment, inflicting damage on individual victims, their own patients, in a form that reflected the Cold War’s special mix of paranoia and aggression. On the other hand, physicians were already primed—by years of medical experience—to equivocate with patients. They knew how to tailor medical consultations to the precise set of circumstances that each case presented. They determined exactly what the patient and/or family needed to know and what might be withheld. So couldn’t the additional burden imposed by official secrecy be construed as just a more extreme form of the standard evasions that already governed exchanges between cancer doctors and their patients? Isn’t this why defense contractors never felt the need to require experimental patients to be isolated from other patients receiving cancer treatments at the same institution? Didn’t they simply presume that a doctor’s authority was still unconditional so that the administration of a classified experiment posed no extra challenges? At the time, any personal experience of the disease would support that assumption. Outside of specialized cancer hospitals where admission more or less confirmed a diagnosis of malignancy, the revelation of cancer was
more often withheld from patients than revealed. If any disclosures were made about the prognosis, they were typically made to family members, not to the patients themselves. The lack of candor characterizing the doctor/patient relationship was more or less the norm. In other words, physicians working with terminally ill patients were already practiced prevaricators. Adding one more deception would not be much of a stretch. But it did exacerbate bad habits. And it would make it harder, down the road, to relinquish a common practice that had been in use for so long. The impetus for change would, in the end, come less from the medical profession than from patients emboldened by the consumer and women's health movements of the 1970s. Back in the 1950s, physicians were still in the driver's seat. A survey carried out in 1961 asked cancer surgeons "What is your usual policy about telling patients? Tell? Don't tell?" Ninety percent said they would regularly withhold the truth.35 What the patients wanted was a different matter. A study of cancer patients and their families undertaken at around the same time reported that almost nine out of ten of those questioned said that they would want to be told if they had a malignancy.36 It would be a dozen years or so before physicians caught up with them. By the late 1970s, the great majority of cancer doctors, responding to a questionnaire much like the one circulated in 1961, now reported that they would choose candor over concealment.37 By then, the AEC had withdrawn from medical research and was making no further demands on the doctor/patient relationship. The AEC had, in fact, disappeared altogether, its surviving functions assigned to other federal agencies in 1974. With its dissolution came a relaxation of the culture of secrecy that had invariably accompanied AEC research funds into the cancer wards. The same attenuation of medical and moral subterfuges accompanied the winding down of radiation experiments funded by all other defense agencies. Their withdrawal from experimentation with the sick must surely have helped set the stage for the greater openness that evolved between cancer doctor and cancer patient, beginning in the late 1970s.
The Aftermath and the Consequences for Cancer
The Anderson researchers did concede that experimental data gathered from terminally ill patients might be significantly different from data supplied by subjects in good health. They knew they had no way to distinguish between the body’s reaction to already advanced disease and its reaction to radiation. The follow-up period was also very short, a matter of weeks,
rather than months or years. Nevertheless, the researchers put a positive spin on the data they collected, estimating that a "safe" dose of radiation, which most people could endure without major complications, lay somewhere between 150 and 200 roentgens. Today, this would be considered a hazardous dose, with possibly serious consequences to some fraction of the people receiving it.38 After the experiment ended, Fletcher and his colleagues published their results in medical journals. The first version appeared in the in-house magazine of the Air Force School of Aviation Medicine, a journal with a highly restricted readership. The article included references to the "terminal" cancer patient and to the "pathology of atomic bomb casualties," references that were deleted from a later version that was aimed at an audience of radiologists.39 Both accounts point out that a "critical evaluation of the procedure as a therapeutic tool was excluded from the report," as though such an evaluation had been undertaken and would be rushed into print as soon as it was ready. This was, of course, a necessary fiction. Neither published paper makes any mention of the NEPA project that had underwritten and programmed the investigation. Nor did the M. D. Anderson Hospital broadcast its own involvement; NEPA is not mentioned once in the pages of its twentieth-anniversary history. The air force's commitment to NEPA proved to be durable. Support from the AEC (the primary cosponsor) continued to wax and wane. By 1960, NEPA had soaked up hundreds of millions in research monies, from both the air force and the AEC. Discussion of the human risks involved never featured overtly in any debate about the project; it was the feasibility of the reactor and shielding design that took most of the heat. By 1960, fourteen years after its inception, NEPA looked no closer to becoming a reality than it had in 1946. By then, it had cost the government more than a billion dollars. The House appropriations subcommittee, responsible for overseeing the endless and massive handouts that kept it afloat, remarked that it had "had a very irregular history." Despite hard lobbying by the industrial contractors on the project—General Electric and Pratt and Whitney40—NEPA had lost key supporters and, with them, its high-priority status. Finally, in March 1961, the recently inaugurated President John F. Kennedy delivered the coup de grace and killed it off for good.41 The experiment conducted on behalf of NEPA at the M. D. Anderson was just the first of several whole-body radiation experiments with cancer patients supported by the Department of Defense (DOD). The army and navy both got into the act, the latter using cobalt-60 therapy units at the
Figure 4.1 “Nuclear Nursing.” In an official photograph released by the Department of Defense in 1958, a navy doctor and his assistant instruct a class of nurses “in techniques for monitoring patients who have received radiotherapy. . . . This device [something resembling a Geiger counter] may also be used in monitoring atomic battlefield casualties.” National Archives photo no. 434–RF–34–710328.
Naval Hospital in Bethesda, Maryland. Nonmilitary hospitals like Sloan Kettering in New York also participated. Most of them were, like the Anderson, caught up in DOD attempts to pin down both the acute and subacute effects of radiation in the hopes that such information would feed into strategies governing the deployment of combat troops on the nuclear battlefield. The last major series of experiments (1960 to 1971) was carried out at the University of Cincinnati Medical School under the direction of Dr. Eugene L. Saenger, a radiologist who had been a consultant to the AEC
and to the Public Health Service before entering academia. Funded by the Department of Defense, the studies had all the markings of the M. D. Anderson experiments concluded just a few years earlier. Here too, the subjects were mostly terminally ill cancer patients who were neither informed of the serious risks involved nor told of the likely side effects of radiation exposure. They were instead allowed to hope for some therapeutic benefit. Fifty-six of the eighty-eight participants were African American. All were treated in a public hospital. As in Houston, here too participants were subjected to psychological testing designed to map any cognitive impairment that the cobalt-60 radiation might induce. Write-ups of the experiments in the medical literature deleted any reference to DOD funding or to its military objectives, portraying them instead as evaluations of the palliative potential of total-body radiation for patients with advanced cancers. Finally, official history once again airbrushed the experiments out of the picture. Like the anniversary history of the M. D. Anderson published thirty years earlier, the 1995 history of the University of Cincinnati omitted any reference to the radiation experiments that had gone on for more than a decade in its medical school. Saenger's experiments were, if anything, more brutal than those carried out in Texas. The patients were sicker and more of them were subjected to radiation doses at the higher end of the range. In an interim report presented to a conference of radiologists in 1964, Saenger revealed that ten of the fifty-three patients who had participated had died within thirty-seven days of "treatment," after the onset of serious radiation sickness.42 What distinguishes the Cincinnati experiments from the others is their notoriety. While the Houston experiments remain virtually unknown to the American public, those carried out by Eugene Saenger were exposed to public scrutiny, locally and nationally. The critical difference between them may well have been a start-date in Cincinnati that was ten years later (1961 versus 1951 in Houston). By the early 1960s, it had become much harder to keep the radiation experiments hermetically sealed, safe from the public's attention. In the intervening decade, the American public had become aware of the controversy surrounding the release of radioactive fallout following nuclear tests in the American Southwest. The well-publicized story of the Japanese fisherman inadvertently exposed to a rain of white ash after the Bravo test in the Pacific Ocean also captured their imagination.43 If the public was not yet entirely savvy on the subject of fallout, it was at least becoming familiar with the complexities of radiation—and of government
involvement in its dissemination. Equally important, the work of HUAC, the House Un-American Activities Committee, declined in importance through the late 1950s, losing some of its power to silence dissent. Journalists may have been emboldened to speak out once again without fear of career-ending reprisals. Leakage to the press about the Cincinnati radiation experiments reflects this softening in the carapace of Cold War paranoia. It came in the form of articles in the Washington Post and Village Voice in October 1971, by Stuart Auerback and Robert Kuttner.44 Both picked up the story of the Cincinnati experiments from Roger Rapaport, who had interviewed Saenger for a book he published in 1971 (The Great American Bomb Machine). This was just a few months before President Richard Nixon signed into law the National Cancer Act, a piece of legislation designed to elevate the fight against the disease into a national crusade. The Cincinnati stories were, essentially, the first dispatches from the secret radiation underworld. The impact of all this unwelcome publicity was considerable. It raised concerns in Congress; Alaskan senator Mike Gravel asked for a comprehensive review of the project from the American College of Radiology (ACR), an organization that, as it happened, counted Eugene Saenger among its members. The ACR's evaluation did not include interviews with any of the surviving participants or their families. In summarizing their conclusions, the ACR president assured the senator that the study had been "validly conceived, stated, executed, controlled and followed up; the process of patient selection conformed with sound medical practice; and procedures for obtaining patient consent were valid [and] thorough."45 Gravel nonetheless found the report "evasive, disorganized, and deficient in almost every piece of relevant information."46 Threatened with a possible congressional investigation by Senator Edward Kennedy and others, the University of Cincinnati finally pulled the plug on the project in 1972, bringing the history of total-body radiation experiments to a close. The careers of the physicians serving as principal investigators in these experiments suffered no ill effects from their involvement. Cold War secrecy continued to shield them from public exposure for the next twenty or thirty years. Physicians like James J. Nickson at Sloan Kettering and Gilbert Fletcher at the Anderson went on to enjoy distinguished careers as heads of department and to remain well-respected members of the cancer elect. Both were invited to serve on a new national committee of eleven radiologists set up by the National Cancer Institute (NCI) at the end of the 1950s to give advice on radiation therapies for cancer. Gilbert Fletcher
served as the committee’s first chairman and James J. Nickson, its first secretary. Their involvement in the secret experiments with cancer patients was easily dwarfed by these later achievements and quietly dropped from their resumés. In a classic illustration of the interwoven strands of radiation history (and of its concentration in the hands of a small elite), Fletcher also played a significant role in the evolution of screening mammography. In the mid-1950s, while the secret air force experiments were under way, he suggested to one of his radiology residents, Robert L. Egan, that he attempt to develop a reliable technique for diagnosing breast cancers with the use of X-rays. Egan took up the challenge, correlating preoperative X-rays with postoperative pathology reports, with surprisingly accurate results. He published his first evaluation of 1,000 cases in 1960.47 But Egan had difficulty winning support for the new idea until Lee Clark intervened on his behalf, lobbying among federal health agencies and others in Washington, including the NCI and the American Cancer Society. The idea finally caught the attention of the Public Health Service's Cancer Control Program, which awarded the M. D. Anderson a grant of $38,000 to demonstrate the reproducibility of Egan's results.48 These were reported in a joint paper co-authored by, among others, Clark and Egan.49 It may have been these very negotiations and the research that came out of them that enhanced Clark's national reputation and raised his visibility in Washington. In 1971, following the passage of the National Cancer Act, he was elevated to the newly established and very powerful three-person panel set up to oversee the administration of the entire National Cancer Institute. The new legislation liberated the NCI from oversight by the NIH and gave the panel direct (bypass) access to the White House. Clark served on the panel from 1972 to 1977. He also served as the president of the American Cancer Society in 1976–77 (his tenure there was best known for his fight against the banning of saccharin). The careers of men like Fletcher and Clark highlight the Cold War intimacy between the AEC and its minions on the one hand and the cancer infrastructure on the other. So many of them moved in and out of both worlds, mixing medical and military cultures. Central to both were radiologists. Those who were first recruited for the Manhattan Project had been selected from the field of cancer therapy rather than from diagnostic radiology because of their experience with high-energy radiation.50 The physics embedded in the World War II weapons programs would, in turn, be as critical to the later development of emerging therapies using radioactive
isotopes as it had been to the design and understanding of atomic weapons. In fact, it was radiologists who brought physics and biology together, in wartime and postwar studies focused on the biological effects of radiation. And according to some, it was their perspective on radiation that made the secret experiments so vulnerable to abuse. John W. Gofman was the quintessential insider. A chemist and a physician, he worked both on the separation of plutonium for the Manhattan Project and on radioisotope therapy at the University of California at Berkeley. Yet despite his success as a valued member of the inner circle, he became a well-informed and voluble critic of the AEC.51 In Gofman's view, the postwar pursuit of radiation was a natural extension of the unbridled experimentation that had taken place throughout the 1920s and 1930s when radiologists went wild with the possibilities that X-rays opened up. Subject to no formal restraints, scientists applied the new technology to medical conditions running the gamut from bronchial asthma to the prevention of Sudden Infant Death Syndrome (the latter involved irradiating the thymus of at-risk babies). Of course they had no idea of the long-term harm their enthusiasms inflicted. There was no consensus about what constituted a high or low dose and none on the cumulative effects of repeated exposures. In the short run, it all looked good. Radiologists, in other words, were themselves victims of what Gofman, interviewed in 1994, called "disaster creep." It was a habit of mind that bred a rather cavalier attitude to the use of radiation. Scientists and physicians never in the early part of this century, never thought of the possibility that what they had to look out for was something 40 years down the line . . . this was lost on people who came in to run the Atomic Energy Commission Biology and Medicine Program after the passage of . . . [the] Atomic Energy Act [1954]. They brought in the whole troops from radiology from all over this country. These people all had this mindset that 200 to 400 [rads] of X-ray or gamma rays can't hurt you. Poo-pooed it.52
This view of the radiologist’s outlook suggests a greater resonance between military and medical enthusiasms for radiation research. On neither side did the complicating factor of risk play much of a moderating or determining role. It was the boldness of vision that mattered and the ability to attack a problem unhampered. Risk aversion, in this framing, was a caution for the faint-of-heart. Looking back from 1979, Science magazine acknowledged the inevitable bias in this cozy relationship. “The radiation research community has lived
almost entirely off the energy and defense establishments. . . . For anyone seeking objective scientific advice it is practically impossible to find someone knowledgeable who was not trained with AEC money.”53 How much did these mutually reinforcing interests and attitudes contribute to the broader response to cancer over the period? Specifically, what influence did they have on what would become “acceptable” levels of risk (or toxicity) in the next generation of cancer treatments? Though impossible to determine, the presence of experienced radiologists in prominent positions must surely have played a role. In the days when they were still generalists, their reach was wide-ranging, especially within the precincts of government. Their wartime and early postwar familiarity with—and understanding of—government science and its military protocols made radiologists ideal candidates for public service. Cold War administrations were eager to exploit their versatility. Many of those who had worked on the Manhattan and other projects during the war were, after 1945, recruited to serve on the civilian AEC, primarily as administrators with relevant scientific experience. Others, through the 1950s and 1960s, continued working in one of the national research labs like Oak Ridge or Brookhaven, or in the Public Health Service or at the National Institutes of Health. Some took academic appointments or served on one of the radiation standards-setting bodies (discussed in more detail in chapter 7). However they varied, all these positions required a fundamental belief in atomic energy as well as a practiced understanding of political expediency. Whether contributing to the national cancer research agenda, allocating funding, serving on review boards, or administering clinical trials, these bureaucrats, scientists, and academic physicians all contributed something to the creation of a consensus on the direction, substance, and permissible limits of contemporary treatment. What degree of pain and suffering was it “reasonable” to expect patients to tolerate? How should toxicity be measured against the expected benefits of treatment? How advanced did disease have to be before it was legitimate to recommend participation in trials of unproven remedies? The answers to these questions are critical yet they depend, in part, on intangibles that are rarely discussed and remain poorly understood, especially by patients. It is not unreasonable to suppose that professional veterans of the secret radiation experiments played some role in their resolution over the course of the postwar period, however indirectly or intermittently. After all, the experiments had no separate identity before they were stigmatized as a group, in the 1990s. Until then, they may have prompted few recriminations in the minds of those
who took part. It is not at all clear what researchers took away from their experience, how it informed their thinking about radiation, cancer treatment and/or patients.54 Alas, we can only speculate about their later impact on the evolution of medical practice. At the highest levels, there were many crossovers, of the sort typified by Lee Clark. In 1948, Harry Truman appointed Leonard Scheele, a radiologist then working at the NCI, to the important post of surgeon general, replacing Thomas Parran.55 When Scheele left government, he became a senior vice president at Warner Lambert Pharmaceuticals. Shields Warren returned to academic medicine in 1952, at Harvard Medical School. But he continued to be involved in radiation policy, serving as a delegate to the Atoms for Peace Conference in Geneva in 1955 and then on the United Nations’ Scientific Committee on the Effects of Atomic Radiation. Stafford Warren (no relation to Shields), had done early work at the University of Rochester on breast X-ray techniques that prefigured mammography. He then served as the Manhattan Project’s chief of radiological safety (as an army colonel). After the war, he returned to civilian life to become the first dean of the UCLA Medical School. Charles Dunham, another physician with experience in nuclear medicine, served as director of the AEC’s Division of Biology and Medicine from 1955 to 1967, and then as chairman of the Division of Medical Sciences of the National Academy of Sciences between 1967 and 1974.56 The presence of radiologists among the higher echelons of national health and cancer policy did the profession no harm. Their rise to prominence was accompanied by an important shift in emphasis—away from cancer prevention (which radiology did not address) and toward treatment. The tradition of high office continues: in 2002, George W. Bush appointed yet another radiologist, Elias Zerhouni, to serve as director of the NIH. By the time President Clinton issued his executive order in January 1994 to investigate the history of human radiation experiments, most of the medical professionals who had administered them were either retired or dead. Clark himself died just a few months after Clinton set the Advisory Committee to work. But even if Clark had lived long enough to attend the committee hearings and be interviewed for the archives (as was Eugene Saenger), he would have been spared public embarrassment. The committee asked for no public apology from any of the cancer doctors nor from anyone else involved in administering the secret experiments. And many responses were less than penitent. Even after the release of incriminating documents from
Energy Department archives, the M. D. Anderson continued to deny any wrongdoing. When interviewed by a local newspaper in 1994, Dr. Lester Peters, the hospital’s head of radiology, insisted, “There’s really nothing I can find that would question the ethics of the study at all. I think in the context of the 1950s, the experiments were fully justified as a therapeutic endeavor for people with hopeless cancer.”57 Peters’s response typifies the latter-day reactions of many of the institutional players compromised by their participation decades earlier. The experiments with cancer patients represent just a tiny fraction of the thousands of government contracts supporting human radiation studies with radium, plutonium, radioactive strontium, polonium, and cobalt between the 1940s and 1970s. For most of them, little documentation survives.58 It is impossible to know how many more of them may have disappeared from the record altogether, leaving no traces behind. From the evidence now available, it appears that the great majority of experiments were conducted in Veterans Administration hospitals across the country, rather than in the academic medical centers mentioned here. It is probable that the superior record keeping of the military accounts for some of the VA’s dominance but certainly not all; the ACHRE Final Report lists literally thousands of experiments at close to a hundred different VA hospitals. Veterans were the perfect subjects. They were already accustomed to military culture with its unquestioned hierarchy, discipline, and secrecy. Many of the soldiers were probably healthy or close to it. Perhaps most importantly, they were captive; the fact that the government paid their medical bills made them particularly vulnerable to whatever experimental proposals were put to them. As for the total-body radiation experiments, there is no evidence that any of the data they produced was ever of much use to anyone. As mentioned earlier, all of the results confounded the effects of disease and the effects of prior exposure to therapeutic radiation with the effects of the experimental total-body exposures. There was no way to disentangle one from the other. But even if all the patients had been of roughly the same age with the same types of cancer at the same stage of development (they were not), the numbers involved would still have been too small to yield meaningful results. Viewed in retrospect, the experiments illustrate all the damaging consequences of secrecy in science. As argued by the American Association for the Advancement of Science, “secrecy almost always impedes scientific progress.” In their 1965 report, the AAAS Committee on Science in the
Promotion of Human Welfare reasoned that “free dissemination of information and open discussion is an essential part of the scientific process. . . . Science gets at the truth by a continuous process of self-examination which remedies omissions and corrects error. This process requires free disclosure of results, general dissemination of findings . . . and widespread verification and criticism of results and conclusions.”59 The radiation experimenters could afford none of these safeguards; they were, in fact, charged with the task of avoiding them. The scientific process was a peacetime luxury. In conditions of war—or Cold War—nonscientific substitutes would serve, however crude or dangerous. In the end, a great deal of public money had been squandered. The actual costs may not be commensurate with the human costs, but they were, nonetheless, significant. Because these experiments remain much less known today than, say, the Tuskegee syphilis experiments, we tend to overlook the stranglehold of defense interests on all medical research funding throughout the Cold War. In 1960, the AEC’s reported expenditure of $49 million on “biology and medical research” was considerably larger than the total value of research grants awarded by the National Cancer Institute in the same year ($34 million). On top of this, every corps of the military operated its own research establishment; the army alone had a medical research budget of just under $16 million in 1960.60 The Department of Defense, the navy, and the air force each managed a sizable medical research establishment as well. The pronounced defense bias of public funding also influenced the direction and timing of cancer research. The vast sums poured into the secret experiments put investment in properly controlled clinical studies on hold, at least throughout the 1950s. The little evaluation of cobalt therapy that did take place had to rely on very small and statistically insignificant samples of individual cancers. An early Canadian study followed eighty-four cancer patients over a six-month period. Based on a small sample studied over a relatively short period, the study was further hampered by the inclusion of patients with a variety of cancers (three with breast cancer, for instance, and five with ovarian cancer). The authors sound a cautionary note in presenting their findings, acknowledging that “the conclusions made concerning the results of treatment are preliminary observations only which may be changed as time passes.”61 Inevitably, most of the troubleshooting and feedback associated with the use of the early machines derived from experience in the field with patients like Irma Natanson, at the mercy of relatively inexperienced practitioners.
If anyone was in a position to help standardize the use of the new technology, it was Gilbert Fletcher. His involvement in the air force experiments ran concurrently with his clinical explorations of cobalt’s therapeutic potential. As a specialist in an academically affiliated cancer hospital, he was no doubt better informed than most about the results that began to trickle in from other hospitals experimenting with cobalt. But he was himself neither constrained nor protected by anything other than his own best judgment, considered together with advice from his physicist and other members of his team. In a limbo between the introduction of the new therapy and any professional or medical confirmation of its value, he was operating very much on his own. The same could be said for John Kline, Natanson’s doctor at St. Francis Hospital in Wichita in 1955. It would be another six years before the need for a nationally coordinated knowledge base was formally acknowledged, at least in the case of breast cancer. That is six years in which cobalt pioneers operated in a kind of therapeutic wilderness, six years treating patients with no consensus (no shared knowledge base) and little feedback to guide them.
The Roads Not Taken
A similar tale to the one recounted here could be told about the development of other medical technologies spurred by the Cold War. The closest to the history of cobalt would be that of particle accelerators like the cyclotron, first developed by Ernest Lawrence and his colleagues at the Berkeley (eventually Livermore) laboratory in California in the 1930s. Just as it had done with cobalt, the AEC invested huge sums of money in the lab to support the design and development of prototype machines. The new technology bypassed the need for nuclear reactors; it could generate isotopes itself. This did away with the need to have radioactive source materials sent back to nuclear reactors to be reirradiated. Like the new cobalt machines, accelerators had significant potential for use in both weapons development and in cancer therapies. And also like them, the early clinical testing of particle prototypes afforded opportunities for secret experimentation. To measure the rate of uptake of various radioactive isotopes, for example, a small number of cancer patients at the University of California at Berkeley were injected with plutonium and other isotopes and their excretions subsequently monitored. They were no better informed than their counterparts in Houston. One of them, a man diagnosed with stomach cancer, turned out to be suffering from an ulcer, not cancer. In general, early accelerator experiments with cancer patients were relatively rare, as
were accelerators at the time, because each one cost much more to design and build than a cobalt machine.62 Given the massive support required to launch these technologies and their close affinity to weapons programs, the link between Cold War imperatives and cancer therapies is hard to ignore. Because their compromising early histories have been suppressed, we have never asked whether the medical applications of cobalt or particle accelerators or any of the later generations of radiotherapy would ever have enjoyed the success they have had without the sustained investment of Cold War funds critical to their development. Evidence suggests that government support was the determining factor even though the motives that galvanized that support turn out to be more complicated—and less expressive of the public good—than once suspected. Radiation did go on to become the undisputed third arm of the cancer treatment triumvirate, joining surgery and chemotherapy. Its influence on the design and administration of cancer therapy has been profound (for the profession of radiation oncology and for the evolution of complex treatment regimes that interweave radiotherapies with the two other modes of treatment). As an approach to disease that treats by external particle bombardment (whether by X-rays, gamma rays or, now, protons), it reinforces a mode of thinking that has itself been influential in shaping our response to cancer. Yet for all its many truly useful applications, radiation is not, on its own, a cure for most forms of the disease. What has the powerful presence of these costly machines crowded out of cancer control? What alternative or supplementary approaches have fallen by the wayside, lacking the kind of heavy-duty institutional backing granted to radiotherapies? Inevitably, every medical specialty jealously guards its own turf and proselytizes on its own behalf. Its success at protecting its own status and privileges varies from time to time and from institution to institution. Though there is no linear story line to trace here, competition between cancer specialties has certainly shaped the history of the medical response to the disease. One of the first demonstrations of the antagonisms at play was the dismissal of an early interest in immunotherapy by a powerful cancer “chief” devoted to the new “science of radiology.” James Ewing was a pathologist by training and the first to describe a rare bone cancer that bears his name (Ewing’s sarcoma). In the 1930s, he became the director of Memorial Hospital in New York (soon to become Memorial Sloan Kettering). In that position, he exerted an influence over the development of his institution
that would be hard to imagine today. Ewing became a passionate (not to say fanatic) booster of radiotherapy. “It is not too much to say,” he declared in 1938, “that radiation treatment of cancer is the outstanding contribution of medicine to humanity in the present century and outweighs all previous progress in this field.”63 Ewing’s unchecked enthusiasm for the new therapy blinded him to the potential benefits of another innovative form of treatment developed by a member of his own staff. William B. Coley, chief of the Bone Sarcoma Unit, had experimentally injected cancer patients with bacterial organisms to stimulate their immune systems, on the hunch that it might lead to a shrinkage or even a disappearance of their tumors. The body’s immune system, he theorized, would fight the infection and the cancer at the same time. Despite some promising results, Coley’s research conflicted with Ewing’s commitment to radiation as the only acceptable treatment for bone cancers. Ewing had no patience for the conceptual thinking that lay behind his colleague’s work and used his influence to block the use of “Coley’s toxins” in his own hospital. With the power to convert a personal prejudice to an institutional bias, he was in a position to derail the development of immunotherapy at one of the nation’s most important cancer centers for decades.64 Behind Ewing’s judgment lay another important factor in the course of treatment history—financial support. When a former mentor became a major donor to the hospital, he stipulated that his gift be used exclusively for radiation equipment and related clinical facilities. This targeting of philanthropic funds was construed less as a restriction than as an opportunity to put Memorial Hospital on the map as an important center for radiotherapy. Here was a windfall that would not only pay the salaries of hospital staff but would give the hospital a competitive advantage over its rivals. Who could quarrel with this? Once absorbed into the system, the philanthropic source of the newly expanded services would come to be understood as confirming a trend already underway. The emphasis on radiotherapy would acquire an air of inevitability, a sense that it was simply building on its own success. This raised the bar for competing cancer therapies down the road. A new idea would only be as good as the friends and influence it could attract. The massive support for radiotherapy set the costs for new entries impossibly high. And so it remains today with new machines locking in ever larger shares of hospital budgets. We no longer question the organizing metaphors they represent (bombardment by particles). We ignore the cumulative
impact of chance, timing, prejudice, or institutional support on the evolution of what we accept as the best quality care now available. We assume that cream rises, that state-of-the-art treatments have proved themselves over time on a level playing field, open to all comers. We are as deaf to the animating spirits of the other modes of treatment as we are to those determining the history of radiation. But a comprehensive history of cancer treatment must consider all therapeutic strands together because the fate of each one has had—and continues to have—demonstrable impacts on all the others. We also still need to ask what other imagery we might mobilize to guide responses to the disease, other than those based narrowly on warfare of one kind or another. The overworked metaphors of battles, enemies, and victims reinforced by the Cold War cancer connections neglect the wider picture, the conditions that give rise to war and alternative pathways to its resolution. The persistence of the same basic treatment modalities for over half a century, in the context of a continuing failure to prevent or cure the disease, makes these questions as relevant now as they were half a century ago.
Chapter 5
Behind the Fallout Controversy: The Public, the Press, and Conflicts of Interest
The secret radiation experiments carried out on terminally ill cancer patients were a clear sign of military interest in the hazards of radiation. The experimenters were not asking whether radioactivity was harmful—they already knew it was—they were hoping instead to define the limits of human tolerance. Cold War weapons planners were equally well informed. Insiders all, they had access to a substantial body of theoretical and empirical evidence that left no one in any doubt. The harmful effects of radiation were well known to the scientific community long before the advent of radioactive fallout. In work that won him a Nobel Prize, the geneticist Hermann Muller demonstrated in the 1920s that X-rays could induce mutations in genetic material.1 He cautioned against the indiscriminate use of X-rays in medicine. The medical community knew about elevated rates of cancers among X-ray technicians and radiologists before the Second World War. Martyrdom among the pioneers of radiation therapies had, by that time, been well documented.2 More importantly, the perils of radioactivity were well known to those involved in the development of the atomic bomb. Twenty years after Hiroshima, Stafford Warren admitted that every step of the "research and manufacturing process in which uranium or plutonium was used was fraught with radiation hazards," from the mining of uranium to the separation of isotopes to the handling of radioactive ashes and waste products.3 All involved the production and circulation of dangerous isotopes that workers could breathe in or ingest, that could settle on their clothing or equipment, that could contaminate ventilation or water-cooling systems.
In an irony that would not be lost on the victims of fallout decades later, defense planners took extraordinary precautions to address these risks. According to one official history, concern for safety was prompted as much by the need to maintain absolute secrecy about the bomb project as by the need to safeguard the health of those working in potentially hazardous environments.4 It would be difficult to hide the serious injury or death of anyone working with radioactive materials at any of the secret sites carrying out research for the project. It would also be hard to explain any noticeable pollution of the air or water supply in surrounding areas if radioactive materials contaminated clothing or otherwise found their way into the outside world. In August 1943, the Manhattan Project established a Medical Section charged with responsibility for defining, measuring, and monitoring the health hazards of project operations and, equally important in light of the later Cold War experience of nuclear testing, with enforcing whatever protective measures were deemed necessary. A retrospective assessment by the Office of the Surgeon General two decades later determined that the program had fulfilled its mission. This overlooked some inconvenient truths in order to showcase security as "the original driving force that made the most hazardous, and probably the single largest, industry under one control in World War II the safest of all wartime enterprises."5 The achievement was, like almost everything else associated with the Manhattan Project, classified. So it was totally lost on the American public, who remained well and truly in the dark. What did average Americans know about radiation hazards—and their possible links to cancer? And when did they know it? If the public's understanding of radiation was, at the start of the 1950s, limited by secrecy, radiation itself was also still an unfamiliar phenomenon. The understanding of cancer, on the other hand, suffered from overfamiliar distortions; the unexamined bundle of terror, fatalism, and old wives' tales that passed for awareness reflected centuries of unhappy accommodation to a disease that remained intractable and mostly fatal.6 Coverage of the disease in the popular press did little to challenge entrenched prejudices. It lacked even the pretense of critical distance. Very few journalists were themselves professional science writers.7 They were not in a position either to verify or to elaborate on the cancer stories they reported. And very few of their readers had much of what is now called science literacy (in the late 1950s, only about one in nine Americans claimed to have taken any science courses at the college level).8 The medical profession still spoke to the press from a commanding height; its authority was as unassailable in an
interview with a reporter as it was in consultation with a patient. Journalists were inclined, like patients, to take physicians at their word. So they passed on to their readers—without comment—the same enthusiasm for unproven treatments and cures that cancer researchers communicated to them. Medical credentials bestowed legitimacy on almost any work. In accepting unconfirmed research as newsworthy, the press functioned more as an equal-opportunity bulletin board than as a critical barometer of medical progress. If there were few suggestions that radiation could cause cancer, there was no dearth of alternative hypotheses. Dr. Frank P. Paloncek of the Roswell Park Memorial Institute, for instance, was reported by the New York Times as "hinting that a disastrous personal loss or demoralizing experience might precipitate the onset of cervical cancer in women."9 The story offered no evidence to back up this assertion. In a similar vein, the newspaper's story "Radiation Found to Aid Longevity" purported to summarize work carried out by two researchers "with Union Carbide." It would have been more accurate to identify them as AEC scientists, which they were.10 It was also misleading to ascribe the report to the Journal of the American Medical Association when, in fact, the journal had simply provided cover for work carried out at the Oak Ridge Institute of Nuclear Studies. "A growing amount of experimental evidence," the Times' story enthused, "indicates that chronic exposure to low-level nuclear radiation is not harmful and may even increase the life span." If this were the case, the piece continued, "there was little justification for limiting the growth of the atomic energy field on the basis of the . . . still unproven concept that all radiation—no matter how little—is damaging." On a lighter note, the New York Times also reported the speculations of a congressional committee that considered the possibility of "sending cancer patients into space for therapy with natural radiation."11 With stories all over the map—and off it—and none backed up with anything close to convincing evidence, how were readers to distinguish between the feasible and the fabulous? The undisciplined media coverage was to some extent a reflection of a fundamental lack of cohesion in the cancer world itself. To the layperson, it was not clear who, if anyone, was in charge. Nor did there seem to be any consensus on the pace and direction of research. The National Cancer Institute had yet to become a household name. There were few comprehensive cancer centers, no nationally coordinated clinical trials, no celebrity survivors raising the visibility of individual cancers. The lines of research that have since come to dominate the official response to the disease were,
fifty years ago, barely visible in the general melee of ideas in play. As a result, there were no shared premises with which to make sense of the barrage of cancer stories that appeared in the press. Nor were there any agreed benchmarks to measure success or failure. Without a common language, there could be no cancer advocacy. The most coherent source of information about the disease, at least until the 1970s, was secondhand reports, in newspapers, of research originally published in medical journals such as the New England Journal of Medicine. These articles, deemed newsworthy in themselves, began to offer the public a glimpse of the ideas and the methodology underlying cancer news headlines. They represented a more selective perspective on the disease, documenting the results of studies that, unlike the JAMA article mentioned above, had been peer-reviewed and/or evaluated by the journal's own editorial committee. This brought more rigor to the study of cancer. In demonstrating a willingness to lay bare the sources and limitations of scientific data, articles in the New England Journal of Medicine and elsewhere often reported findings that were open to interpretation. The idea that research results could be inconclusive, that they could even lead to scientific dead ends, expanded the outlook of a public accustomed to the steady forward march of medical progress. Translated into language readers could easily understand, newspaper reports on cancer studies enhanced the public's understanding of the complexity of the disease, while at the same time increasing its familiarity with the basic concepts and nomenclature. However, reports of such studies, with a few exceptions, remained rare, accounting for only a fraction of the space devoted to cancer in the major dailies. Although they would play an important role in advancing public awareness of the cancer/radioactivity link, they would have little impact before the late 1960s.12 Until then, the bits and pieces that found their way into newspapers remained more a jumble of confusing and unconfirmed fragments than consistent or well-supported arguments. This made Americans easy prey for the evasions of Cold War propaganda. Essentially, the government had at its disposal a submissive press. Cold War discipline was a powerful deterrent to investigative journalism, especially during the McCarthy period. If there were any journalists who harbored suspicions about the secret radiation experiments in the 1950s or 1960s, they were not sharing them with readers. Reporters were all too aware of the fine line between the interests of the American public and those of national security. In 1952, Joseph McCarthy had put the press on notice with a special investigation into the communist infiltration of the
news media. The press understood this as a challenge to "its right to criticize government." The henchman assigned the task, Harvey Matusow, identified "76 hard-core Reds on the editorial research staff of TIME and LIFE magazines" and similar armies of subversives on the staffs of prominent newspapers. McCarthy also subpoenaed two newspaper editors to hearings in Washington.13 Reporters understood how easy it would be to wander off course onto the wrong side of a barbed-wire fence. At a time when anyone could bring charges of disloyalty against coworkers, friends, or relatives, it was better to play it safe. Given this restraint, it may not be surprising to learn that, by 1961, only one in five Americans believed that fallout was dangerous.14 If Americans still knew little about cancer, fallout, or the secret radiation experiments, by the early 1950s they did at least know something about radium poisoning. The case of the women dial painters had brought the subject notoriety. Working at the U.S. Radium Corporation in Orange, New Jersey, before the First World War, women workers had applied radium paint to the faces of clocks and watches to make them glow in the dark—and licked their brushes between applications to keep a sharp point on them. Over time, many of the painters sickened and died of bone and other cancers.15 But their illnesses had often been attributed to many other causes, including heart disease and even, occasionally, syphilis, so the link with cancer was tenuous. The death of Marie Curie in 1934, after a lifetime's exposure to the radium she had discovered in 1898, was attributed by the New York Times to "pernicious anemia."16 But even if the obituary had reported leukemia as the cause of death, readers would not have made the link to cancer. At the time, leukemia was still considered a separate "disease of the blood." So Curie's death would not necessarily have been associated with those of the dial painters or with any other cancer deaths. The boosters of Cold War weapons programs took advantage of the inherent difficulty of the subject to put as much distance as possible between radioactivity of any kind and cancer. An admission that fallout might, in fact, be carcinogenic would have serious implications for atomic tests then underway as well as for those in the pipeline. Stirring up the fear of nuclear weapons would play into the hands of those favoring disarmament. It would also raise questions about the adequacy of safety measures then in place to protect civilians living downwind of the Nevada Test Site (the so-called downwinders). Eventually Americans would discover that the government had taken much more care to protect its own atomic workers at nuclear installations
than it had done to protect its civilian population. If cancer were brought into the mix, the consequences of official negligence—and preferential treatment—would be greatly increased. To ward off that possibility, Cold War planners mounted a campaign of disinformation and denial that would do everything it could to keep the disease out of sight. The coverage of fallout and the coverage of cancer were not to connect. They were to remain parallel but never intersecting perils.
Bad Luck or Bad Faith?
We now know that the military understood the dangers of fallout from the start of the nuclear testing program, just as it had understood the dangers of radioactivity in the many by-products of the atom bomb project. In July 1946, after atomic test bombs were detonated over Bikini Atoll in the Pacific, an officer overseeing the radiological safety of the program sent a memo about it to General Leslie Groves, then head of the Manhattan Project. Forty years later, its contents surfaced in a civil suit in California, where the Justice Department stated that "government officials and scientists were aware of the hazards of radiation since the inception of the nuclear weapons programs . . . specifically that fallout could cause cancer."17 But official denials were handed down from high places almost immediately after the bombs were dropped on Japan. Just a month after Hiroshima, Brigadier General T. F. Farrell, the chief of the War Department's atomic bomb mission, "denied categorically" that the bomb had "produced a dangerous lingering radioactivity in the ruins of the town."18 Three weeks later, Stafford Warren backed him up. Speaking as an army colonel, he claimed that "radioactivity in the Nagasaki area is virtually negligible and constitutes no hazard to health." Warren stressed the point, insisting that "the radioactivity in Nagasaki . . . was only 1000th of that emanating from an ordinary luminous watch dial."19 In the early postwar years, it was easy to maintain this posture, especially with bomb victims half a world away. Cancer's latency period provided a breathing space for those eager to get the nuclear weapons testing program up and running on U.S. soil. But as evidence of the dangers began to leak out, more active strategies were needed to prevent the story from gaining traction. Stonewalling became the AEC's weapon of choice over its forty-year campaign to keep the truth at bay. By temporizing in its dealings with the public, exploiting administrative inertia, withholding critical scientific data, and either censoring or delaying publication of government-sponsored research (sometimes for decades), it gained the advantage of time.
As long as harmful evidence could be suppressed, nuclear testing could continue as planned, releasing ever more radioactive fallout into the environment. Cancer was a casualty of this strategy. It became entangled in a long-running scientific dispute with an agenda. On one side was government-sponsored science that served as an instrument of official policy. Confusingly, this was often presented as part of a virtuous commitment to the war against cancer. A newspaper puff piece—"AEC adds to funds for disease studies"—highlighted the commission's budget allocation "for research in the use of the atom to combat cancer." But even this story had to concede, in its very last sentence, that the AEC report "made no mention of progress in this field."20 On the other side of the fence was an underfunded alternative science supported by independent foundations and academic institutions with only weak (or no) ties to government. Some early fallout studies, for example, were sponsored by the University of Utah or by the Medical Care and Research Foundation of Denver. A few prominent scientists who raised alarms about radiation hazards (the chemists Linus Pauling at Caltech and John Gofman at Berkeley, to name two) were given safe harbor within universities that did not necessarily endorse their politics. Although they were vastly outnumbered, atomic dissidents did attract enough attention to create at least the appearance of a controversy. In the tug-of-war that ensued, cancer inevitably became a political football kicked around by players with very different perspectives, scientific and political. The idea that science was biddable was in itself unsettling. That evidence could be framed in a variety of ways to support widely different conclusions was common knowledge to scientists but news to most Americans. That disease could be commandeered to serve political ends was an inconvenient truth, not easy to digest. The appearance of cancers among atomic test victims in Japan, among Pacific Islanders, and among downwinders in the American Southwest did eventually spark a national debate. But it was a long time coming and drawn out over decades. Because cancer did not generate the public drama of epidemics, it did not attract the same kind of intense media scrutiny or sense of urgency. It was a back-burner disease that found it hard to grab headlines. But if cancer was slow to gather a critical mass of interest, it was also inexorable. The disease was not burning itself out and quietly fading from view. It was doing just the opposite, claiming victims in increasingly remote and unexpected places. Its appearance among the clean-living
Mormon communities of the Southwest, beginning in the early 1960s, ratcheted up awareness of the disease as something more than a personal calamity. From the time the first suspicions of fallout began to circulate among Nevada and Utah residents, linked to news of childhood leukemias among local families, every new diagnosis would raise the same question of causation: Had the disease occurred “spontaneously” or was it the consequence of government policy? Was the diagnosis just bad luck or was it the result of bad faith? These were questions that had never been posed before. Cancer had been attributed to many causes but never to government connivance. The possibility of official complicity could hardly be raised with impunity. As the environmental historian Philip Fradkin cautioned, “it was as if the very mention of radiation as a possible cause of cancer was an institutional taboo.”21 Inevitably, the taboo migrated from the public to the domestic sphere, discouraging even the most private speculations about the source of disease. If suspicion of government negligence was too painful a thought to entertain, fatalism was there to fall back on. Among the staunchly patriotic Mormon communities living downwind of the Nevada Test Site, accepting a diagnosis as the will of God was less dangerous than casting doubt on the goodwill of the American government. It was far safer (and possibly nobler) to keep those treacherous thoughts to oneself, at whatever cost, than to question authority. The naturalist Terry Tempest Williams, herself a Utah Mormon with a cancer history of her own, describes the conflict raised by the suspicion of government complicity. “In Mormon culture,” she writes, “authority is respected, obedience is revered, and independent thinking is not.”22 In her community, “if you were against nuclear testing, you were for a Communist regime.” Drawing attention to cancers among the downwinder population could itself be construed as an expression of disloyalty. Even when, against the odds, doubts were finally raised in public, they set off few alarms. Before the advent of national media, small-town news rarely traveled out of state. The earliest stories of cancers among those living downwind of atomic testing may have cropped up in the pages of the Deseret News in Salt Lake City or even in the Los Angeles Times, but they did not make waves east of the Mississippi. Nor did accounts of the early lawsuits brought against the government by those who had suffered economic losses (after fallout poisoned their sheep).23 Most of these events remained strictly local news. Throughout the Cold War, the AEC continued to insist that radioactive fallout remained “harmless to those now alive.”24 The AEC chairman
himself, Lewis Strauss, took the lead, insisting that "so far as we are aware, no civilian has ever been injured as a result of these tests."25 The government was materially aided in its deception by the inherent difficulty of the subject. The phenomenon of fallout was, after all, new, and the science required to understand it, arcane. The path to perception was obscured by basic concepts of physics that were beyond the grasp—or patience—of most readers. This kept Americans in a state of ignorance, forced to rely on the secondhand reports of journalists who were often no better qualified to make sense of what was happening than their audience. This sense of being left out of the loop has now become a familiar one. Today, we're much more alert to the machinations of political spin-meisters and to manipulation of and by the media. But fifty years ago, the great majority of Americans had little reason to suspect their government of pursuing objectives that were not in their own best interests; this of course excludes the minority victims of racism, especially Native, Japanese, and African Americans, whose cynicism was well founded. For the most part, Americans saw themselves as recovering from a war in which decisive U.S. intervention had not only helped to bring hostilities to a close but had also spared the U.S. mainland direct enemy attack. Like many of the Mormon victims in Utah and Nevada, they were grateful to and proud of their government. Cold War enthusiasts were able to coast on this groundswell of goodwill for several years. In the honeymoon period ushered in by the peace, the bombs dropped on Hiroshima and Nagasaki were seen as having done their job; they brought the enemy to its knees. There was no hint at the time that the nuclear blasts packed an additional punch that would take years to play out. Nor was there reason to suspect that the use of atomic weapons, rather than marking the end of a war, would come to signify the start of another one, this one with no prospect of an end. The first tear in the scrim of postwar complacency was triggered by the detonation of a hydrogen bomb over an island in the Pacific Ocean. This was the Bravo test set off by the United States over Bikini Atoll on March 1, 1954. The ensuing fallout cloud took its planners by surprise. Exceeding all expectations, it covered an area of 7,000 square miles. In the form of white radioactive ash, it rained down on the unsuspecting inhabitants of the Marshall Islands. It also fell on U.S. service personnel administering the test close by and, farther away, on the Japanese fishing trawler, the ill-named Lucky Dragon. The story of the hapless fishermen and their subsequent illnesses—and, in one case, death—was reported in detail in the American
press. So too was the discovery of the contaminated fish in the ship’s hold and, later on, the revelation of widespread pollution in the surrounding ocean.26 Readers took notice. Though carried out at a remote (and presumably safe) location in the far Pacific, the detonation had unleashed radioactive fallout that proved to be lethal to both humans and wildlife. Military experts in charge of the testing program, it turned out, had failed to predict the energy released by the bomb or to circumscribe its area of impact. Fickle winds and weather could ruin their best-laid plans, with disastrous consequences. The Lucky Dragon had been trawling in waters eighty-five miles away from the point of the blast, well outside the designated exclusion zone. When the geographical reach of the Bravo test was superimposed onto a map of the eastern seaboard of the United States, the full potential of a similar explosion occurring over densely populated areas became immediately clear. As Americans began to connect the dots for themselves, they clamored for reassurance. Fears unleashed by the Lucky Dragon affair needed to be addressed, anxieties assuaged. How dangerous was this mysterious white powder? Now was the time for the Atomic Energy Commission to come clean about what they knew and did not know about the perils of fallout. But instead of rising to the occasion, the AEC held back, postponing the release of any formal statement for almost a year. When asked by Tennessee senator Estes Kefauver at a congressional hearing “Why couldn’t there have been some official information about the effect [of fallout] a long time ago?” Willard Libby, an AEC commissioner and a Nobel Prize–winning chemist, replied, “We have been busy collecting information . . . we wanted to be correct . . . [we were] trying to be very careful and accurate in the release.”27 It was hard to force the issue as long as the AEC sat on the classified data surrounding the event. Without key evidence, it was impossible to know how to measure the seriousness of what had happened. At the heart of this controversy, then, was secret science. This was not a sunlit search for a magic bullet like penicillin that could save people’s lives and bring glory to medical research but a darker science that endangered human life. It exploited the fear of Soviet and Chinese aggression to justify a continued reliance, in peacetime, on a military command structure that was, in many respects, antidemocratic. The science of nuclear destruction that placed the ability to annihilate the Earth’s population in the hands of a very few was a reminder of the extreme imbalance in power between the government and its citizens. The advent of fallout (like the human
radiation experiments carried out in Houston and Cincinnati) demonstrated the remoteness of that power, so vast that the deaths of innocent Americans caught up at its edges could be made to appear insignificant.
Missing in Action: The Medical Profession
The government's response to the question of fallout was fronted by scientists employed in the arms race. Primarily physicists and chemists, they spoke for themselves, with government clearance. They were portrayed as men of integrity whose words, it was hoped, would carry an unassailable authority that would preempt further debate. In fact, the esoteric nature of the work, its classified status, and its official backing did deter many skeptics or opponents from speaking out. Challengers would find it difficult to argue the science from an equally well-informed position. Lacking institutional support or a public platform, many were simply shut out. But problems of access were overshadowed by the disincentive of fear. No scientist, no matter how well placed or how highly credentialed, would be immune to potentially career-ending prosecution, as the suspension of Robert Oppenheimer's security clearance in 1954 demonstrated.28 A scientist flagging up the dangers of radiation was, in the eyes of the Cold War–mongers, as guilty of treason as a citizen espousing a left-wing political agenda. The two could not be uncoupled. The diatribes of David Lawrence, the publisher of U.S. News & World Report, exemplified the bluntness of this pairing. "Evidence of a world-wide propaganda is accumulating," he wrote in 1955. "Many persons are innocently being duped by it and some well-meaning scientists and other persons are playing the Communist game unwittingly by exaggerating the importance of radioactive substances known as 'fallout.' "29 The extremism that characterized ideological commentary in the 1950s cast the scientific intimations of cancer as a betrayal of American dignity. Such portents of disease impugned both the integrity and the competence of the American weapons program. But they also hinted at the darker idea of disease as a sign of weakness, growing unseen in the body (and in the body politic). Any Cold War strategy contaminated by cancer would be a source of disgrace and therefore, by definition, inadmissible. Every new diagnosis of the disease was an unbidden reminder of the possibility of something rotten at the heart of the enterprise. Only the imposition of a taboo could keep this thought at bay, render it literally unthinkable. Cancer was already an outcast in civil society; it would be easy to banish it altogether, in the name of national security.
Greatly facilitating this silence was the absence of the medical profession from the fallout debate. Those with the most relevant firsthand knowledge of the disease chose not to contribute to it. There was, in fact, no concerted response from the medical profession at all. In its place was a broad array of reactions reflecting the disparate and sometimes conflicting interests of doctors. At the top of the pile were those select few physicians, a handful of radiologists, who served on the Atomic Energy Commission. These men were political rather than medical appointments, serving as administrators rather than doctors. They were individually recruited to expedite national defense policy. Much further down the food chain were doctors in active practice. Normally the interpreters of disease and gatekeepers to all things medical, for the most part they remained silent in the fallout debate, through the 1950s at least. Their absence deprived the public of a familiar source of guidance and reassurance. Whatever restraining influence might have been brought to bear by the Hippocratic injunction to “do no harm” was lost to view. Exceptionally, an individual physician did speak up to remind the public of the implications of that oath. “It would seem apparent that with our present lack of factual knowledge about the potential . . . carcinogenic properties of radioactivity,” wrote one concerned doctor, “we should suspend large-scale radioactive enterprises until our biological knowledge is more secure than it is at the moment.”30 This articulation of what, much later, would come to be known as the “precautionary principle” was extremely rare. When the concept was finally formalized in 1997, it came not from the medical profession but from the environmental movement.31 At a meeting of the American Medical Association (AMA) in 1957, a family doctor acknowledged the bystander status of his profession: “We seem to be ensnared in an incongruous situation wherein the physical scientist controls the ‘atomic pile of education and knowledge,’ and the physician passively stands aside.” Intimidation tactics, he insisted, kept many scientists and doctors quiet, generating a “fear of involvement in an issue sufficiently controversial to provoke security clearance trouble. Thus we see that much of the work done in nuclear radiation has not been made available to the medical profession in general and cannot have been exposed to the acid test of widespread clinical observation.”32 Science had certainly enjoyed a head start in the fallout controversy. The long latency period of many cancers kept the early discussions focused on the short-term effects of exposure; radiation sickness was understood
and treated as an acute illness, not as cancer. Malignancies had yet to show themselves in significant numbers. As long as they remained out of sight, they could remain out of mind. The fallout debate could, then, be defined as a scientific rather than a medical issue. This permitted talk of mutations rather than malignancies. Discussions of genetic damage remained conveniently theoretical. Mutations wouldn’t show up for another generation by which time, defense strategists fervently hoped, the United States would have won both the Cold War and the arms race. Physicians on the government payroll (at the Public Health Service, NIH, VA hospitals) or on federal research contracts (like Gilbert Fletcher and Eugene Saenger) kept their heads down. As represented by the AMA, their professional interests were fueled by the same ideology that drove the Cold War. Outside of government, there was, in fact, no greater adversary of the Soviet system. If anything, the AMA’s use of anticommunist rhetoric was even more strident than official propaganda. Ever on the lookout for signs of socialist cabals, the organization saw evidence of dangerous infiltration everywhere, including within its own profession. During the Cold War, medical societies introduced loyalty oaths and many of those who refused to sign them were summarily fired.33 The assault on civil liberties— which had an immediate impact on the lives of many physicians—also pushed aside debate on the politics of health care. Doctors who had advocated a greater role for government suddenly found themselves scrambling to protect their livelihood. Cold War ideology proved to be just as useful to the AMA as it was to the AEC. Under cover of a militant anticommunism, the AMA was able to defeat the last important attempt to legislate for a compulsory health insurance program. The physicians’ group viewed the plan as an unwelcome intrusion into the private world of medical practice. Put forward by the Truman administration, the proposal precipitated what was, at the time, the most expensive lobbying effort ever mounted.34 The AMA hired a public relations firm that orchestrated a massive campaign involving local and national organizations of every stripe (chambers of commerce, Lions and Rotary clubs, public libraries, dentists and pharmacists, and many others). Propaganda emphasized the potential domino effect of any public involvement in health care. “Would socialized medicine lead to socialization of other phases of American life?” a widely circulated campaign pamphlet asked rhetorically. “Lenin thought so,” the pamphlet replied. “He declared: ‘Socialized medicine is the keystone to the arch of the Socialist State.’ ”35
The newly formed National Health Service in Britain, inaugurated in 1948 by a popular Labour government, became the AMA's special whipping boy. The organization was eager to cast the British experiment as a dangerous precedent. Suspicious of any organization with leftist affiliations, it even managed to impugn the integrity of British nuclear workers, raising doubts about their complaints of fatigue, skin conditions, and other occupational hazards associated with work at nuclear reactors. As the AMA journal chided its readers: "It has to be remembered that, with a labor government in control of the country, workers have every opportunity to exploit real or alleged grievances."36 The outbreak of hostilities in Korea in 1950 threw Truman's legislative agenda into disarray and brought a sigh of relief to the AMA. But the organization retained the services of its PR firm, Whitaker and Baxter, until the Republican victory in 1952 assured the defeat of any compulsory health insurance program. By then the AMA had spent over $2.5 million (close to $20 million in today's dollars). The election of Eisenhower finally pacified the lobbying group, ushering in a period of relative calm. Having secured their primary objective, the AMA's members were hardly likely to rock the boat by jumping into the fallout fray. The organization's primary concern was, after all, the protection of the medical profession. Its distance from—and insensitivity to—issues of public health raised by the advent of fallout are evident in its bellicose posturing throughout the period. En route to that decisive victory, the president of the AMA urged members to remain on guard. "Certainly we cannot afford to retire to our ivory towers now and politely disparage the work of the surgeons who cut out the cancer of socialized medicine."37 More than a decade later, the organization's opposition to Medicare suggested a similar indifference to the plight of those most at risk for cancer and most in need of "socialized medicine," that is, the elderly on fixed incomes. The AMA's posture intentionally conflated the payment for medical care with the provision of care itself. Although direct intervention in the nature of medical treatment had never been part of any legislative proposal on health insurance, the AMA wanted to make sure that it never would be. In linking the two together, it was serving notice that all future attempts to introduce such ideas would be tarred with the "socialist" brush. With both reimbursement and treatment under the watchful eye of the AMA, the options for direct government intervention in health care were extremely limited. The four modest cancer treatment clinics run by the AEC in conjunction with its own experiments were kept out of the headlines and quietly disappeared well before the end of the Cold War.38
They were not to compete with private hospitals. The AMA’s stranglehold on the regulation of medical practice took its toll on all aspects of cancer policy, influencing the scope of economic and research strategies as well as the delivery of medical care. Opportunities for the more rigorous monitoring and evaluation of costs were squandered. Nationally administered health insurance would have introduced a fiscal discipline that might have radically altered the landscape of medical services. It might, for instance, have prompted greater interest in cancer prevention, redirecting the course of research toward long-term solutions. It might also have set the bar higher for both old and new treatments, a commitment likely to increase efficiencies over the long term. Such shifts in emphasis would have benefited the taxpayer as well as those directly affected by disease. Instead, the privileging of the private sector, riding the wave of Cold War paranoia, led the cancer establishment in a different direction, relaxing institutional constraints on the distribution and costs of care. Though the lobbying group remained on the sidelines during the most active years of the nuclear controversy, not all doctors followed its lead. Responding to the threat of a Soviet attack, many turned their attention to civil defense. Physicians, after all, were expected to take a leading role in caring for the sick and wounded on the front line following a nuclear explosion. But the elaboration of detailed plans was essentially a quixotic task, given the odds against the survival of necessary medical personnel, hospitals, or transport in the wake of a nuclear catastrophe. The casualties of a thermonuclear war were, as the physician and essayist Lewis Thomas described them, “beyond the reach of any health care system.” “Modern medicine,” he lamented, “has nothing whatever to offer, not even a token benefit.”39 Its core mission was to save—not to take—lives. In overriding the interests of the medical profession, the threat of a nuclear war inevitably alienated its practitioners as well. The job of protecting the population was made even harder—for those willing to participate—by the lack of official assistance. A civil defense director complained that the work “was ‘seriously hampered’ by the security requirements of the Government loyalty program.”40 Planners could get little information on fallout data; without it, their medical response made little sense. The frustrations of this work did a lot to politicize the doctors involved. They fed a growing discontent that led, in 1961, to the formation of Physicians for Social Responsibility, an organization that campaigned to end atmospheric nuclear testing.41
Many other doctors were also eager to disseminate information about radiation sickness, an illness that was new to most of them. Doctors needed to learn (and teach others) about its early symptoms (such as burning and tearing of the eyes, nausea, cataracts) and about effective antidotes. Alas, there was nothing new they could bring to what might be radiation-induced cancers; their symptoms were identical to those of cancers from all other causes, and they would be treated similarly. The carcinogenic impact of radiation might be inferred from the greater frequency of certain malignancies, but it could not be identified by any symptoms of the disease itself. Finally, the absence of practicing radiologists from the debate also needs to be acknowledged. Radiologists familiar with the literature in their field would have been well aware, before the mid-1950s, of the existence of what were called "radiation cancers"—cases of disease that bore "irrefutable evidence of the carcinogenic effect of ionizing radiation."42 Information on severe radiation injuries (malignant and nonmalignant) had been gathered and analyzed since the very early years of the twentieth century. The often long latency period between exposure to radiation and the first signs of cancer had also been recognized. None of this evidence, however, made its way from the pages of esoteric journals into the public arena. Radiologists were hardly likely to draw attention to the dangers associated with their emerging specialty. Rather, they had their hands full trying to contain the damage to their profession occasioned by the fallout debate. A closer look at the fate of radiology uncovers something more about the conflicting motives of those dragged into the controversy. For those who took part in it—as well as those who did not—the response was a calculated scramble. In each case, participation or nonparticipation was dictated by narrowly partisan interests, whether military or professional. The government would no doubt have preferred to avoid public disclosure altogether, but it was hard to hide a mushroom cloud. Scientists dependent on government largesse (either directly or indirectly) knew their duty—to provide whatever damage control was necessary. The AMA kept to the sidelines, more concerned with safeguarding the independence of the medical profession than with safeguarding the health of Americans. Physicians in practice, and especially the radiologists among them, were the weakest link in this chain. As such, they inadvertently became the fall guys, sacrificed to the "greater good" of Cold War imperatives. Throughout the debate, there was little representation of what might be termed "the public interest." The sense of urgency manufactured by the Cold War recognized only the most immediate public health concerns and
only when it had to, when, for example, downwinders began to complain about the lack of protection from the nuclear tests. Anything less pressing had to get in line. As long as the link between fallout and cancer remained unproven, it would stay an issue that could be postponed to the indefinite future. It belonged to the realm of preventive medicine which was, ultimately, a peacetime preoccupation and a public health problem distinct from medical care. Under the conditions of red alert that prevailed through the 1950s, the first order of business was to secure the future survival of the country; the future health of the population could wait. Another group with sufficient clout to get the attention of the press was the American Cancer Society, then as now the largest cancer charity in the country. But the ACS was run by physicians of the AMA stripe, and its efforts during the 1950s were directed toward treatment and research. On the subject of the fallout/cancer connection, it had nothing to say. Nor did the Federal Radiation Council, charged in 1959 with the task of setting radiation standards for peacetime nuclear operations. In 1962, it admitted for the first time that “its radiation protection guides did not apply to nuclear fallout.”43 They were “not a dividing line between safety and danger in actual radiation situations; nor are they alone intended to set a limit at which protective action should be taken or to indicate what kind of action should be taken.” Congressman John Lindsay ridiculed this temporizing as “some of the most ingenious administrative double-talk of our time.”44 As important as the silence of the medical establishment in the fallout debate was the defection of the anticommunist Left. They too failed to connect with the issue. But they were not idle. Many became willing warriors in the cultural Cold War, eager to promote the virtues of intellectual freedom in a postwar Europe still vulnerable to the charms of Marx and Lenin. Well-known public figures, from Carson McCullers to Arthur Schlesinger Jr., were recruited to pay tribute to the virtues of democracy, often at expenses-paid conferences in Europe. The Congress for Cultural Freedom, held in Berlin in 1950, set the tone for many to follow. The covert sponsorship of the Central Intelligence Agency not only ensured a steady source of funding for these efforts but helped to keep the attention of participants distracted from the Cold War at home. While American intellectuals abroad insisted that freedom required “the toleration of divergent opinions,” where fallout was concerned, they chose not to exercise that prerogative.45 The dismissal of preventable cancer from the public agenda does not, of course, signify any slackening of the disease itself. On the contrary, incidence rates rose unrelentingly throughout the Cold War along with the
number of deaths (between 1950 and 1960 alone, the total number of cancer deaths rose by 27 percent).46 But there was no one in a position of authority secure enough to draw attention to this trend. Patient advocacy groups were at least two decades away. Until their arrival, cancer patients had no standing in the public debate, no means of expressing their unique perspective. By the time they began to raise their collective voice in the 1980s, the course of the "war on cancer" had already been set by directives legislated in the 1971 National Cancer Act. Since then, attempts to retrofit patient-driven initiatives onto the basic infrastructure have had to cope with all the disadvantages that plague late starters in any race.
X-Rays Take a Hit
Radioactive fallout was not the only source of manmade radiation. Thanks to the postwar promotion of isotopes, there were now hundreds of products that contained radioactive substances. In an attempt to camouflage the singular status of fallout in the public imagination, the advocates of nuclear weapons testing decided to put them all on show in a major public relations initiative. Against a backdrop crowded with radioactive hazards, fallout, it was hoped, would appear as just a minor irritant, one in a large family of acceptable risks. Accordingly, spokesmen for nuclear weapons marshaled statistics that professed to compare the relative significance of all the sources of radioactivity introduced in the twentieth century. Though the numbers were more guesstimates than verifiable data, they were brandished as the central talking points of several promotional interviews and print campaigns. The National Committee on Radiation Protection (NCRP), for example, attributed just 2.5 percent of the total average radiation exposure to fallout and assigned exactly the same share to televisions, watches with luminous dials, and power plants. Against these minor exposures, the NCRP set medical and dental X-rays, which, by its reckoning, accounted for a full 50 percent of all exposures to radioactivity (and this included only diagnostic, not therapeutic exposures).47 The statistics did not draw any distinction between the medical X-ray as a "calculated risk designed to benefit the subject" and exposure to fallout, which was essentially "a gratuitous dose" of radiation.48 They limited the comparison to the common denominator of percentages, which dispensed with the finer distinctions of choice and informed consent. As a public relations tactic, the comparisons did the trick. The ensuing demonization of medical X-rays was, in fact, a measure of their success. At least for a time, the fear of X-rays did seem to eclipse the fear of fallout.
Popular magazines, taking the message to heart, helped spread the alarm—"more peril from X rays than bomb experiments." "Medical and diagnostic use of X-rays is not harmless."49 Though no doubt unintended, the comparison forced some elements of the medical profession into a defensive posture in a debate in which they were only marginal players. Most people, scientists included, still associated the medical use of X-rays almost exclusively with diagnostic tests (used to identify fractures, bullets, shrapnel, tuberculosis, or pneumonia). The same applied to those who carried them out. Before the arrival of cobalt, radiologists were linked almost exclusively with diagnosis.50 As a medical specialty, they belonged to a caste that was clearly inferior to the privileged group of academic research physicians with experience of high-energy radiation under their belts. These were the men who had been drafted into the Cold War managerial elite, men who had no natural affinity with the lowly family physician and the others who bore the brunt of the campaign to discredit medical X-rays. A February 1958 article in the Reader's Digest addressed the controversy head-on. "What's the Truth about Danger in X Rays?" revealed the paucity of regulations in the field—"any physician, osteopath, chiropractor or naturopath, without the slightest radiological training, can obtain and use X-ray equipment for any examination he may choose."51 He was also free to use the machines to treat ringworm, bursitis, and acne or to remove excess hair. In a survey of X-ray machines in dental offices, "units were found to be giving up to a dozen times as much radiation as is necessary when the most modern equipment is used." Finally, there was the infamous fluoroscope machine, designed to enhance the shoe salesman's "expertise" by revealing the contours of the human foot in long and dangerous radiation exposures. In 1958, such machines were still in use in all but one state.52 The cumulative impression created by this recitation of radiation sources was more that of a free-wheeling market with rich opportunities for abuse than that of a carefully monitored medical service. In highlighting the hazards of X-rays, the weapons boosters did not intend to create difficulties for the practice of radiology. The unfavorable comparison between medical X-rays and fallout was simply a casualty of their larger purpose—to protect the nuclear weapons program. Medical X-rays were too good to pass up; they made the perfect scapegoat. References to stray radiation, to the dangers of "incautiously administered X-rays," and to physicians who "willfully neglect the elementary precautions" just happened to be part of the story. The consequences, however, were real. Inevitably,
the negative publicity took its toll. Patients showed a reluctance to come forward for treatment. Physicians complained that "beyond any question, a good many of the 100 million people a year who get X-rays have been frightened."53 At a press conference in 1956 to launch the first major study on the biological effects of radiation, the chairman of one of the report's committees expressed the view that "we have been rather profligate about using X-rays, we have used [them] in many instances where the medical importance was zero." With what, in retrospect, seems a casual indifference to the consequences, chest X-rays were routinely used to screen for tuberculosis; in 1950, an estimated 15,000,000 Americans had one.54 They were required for admission to hospitals, to the armed services, and to many sectors of employment (in much the same way that drug or AIDS tests might be used today). They were also a regular feature of annual physicals, for children as well as for adults. It wasn't until the late 1950s and early 1960s that mass X-ray programs began to disappear, elbowed out by the tuberculin skin test.55 In addition to their widespread use in TB screening, X-rays were also a routine feature of many pregnancies, used to measure the size and position of the fetus in relation to the size of the mother's pelvis. There seemed to be no awareness that the effects of such exposure were additive and permanent. The 1956 committee, acknowledging that some restraint was called for, recommended that "the medical use of X-rays should be reduced as much as is consistent with medical necessity."56 These cautions were clearly warranted. But, for the most part, they addressed only the diagnostic uses of X-rays. Therapeutic applications exposed many cancer patients to doses that were thousands of times higher than those associated with routine chest and dental films. As the new therapies became more widely used and patients began to understand the differences in the risks involved, their anxieties rose accordingly. A diagnosis of cancer was already considered a death sentence. Now it would appear that if cancer didn't immediately kill its victims, then treatment for it would. The ubiquity of the attack on medical X-rays reveals the very limited influence at the time of the fledgling specialty of radiology. In the late 1950s, of 227,000 practicing physicians in the United States, fewer than 5,000 identified themselves as full-time radiologists.57 And the territory they had staked out for themselves still remained provisional. Most other physicians were content to allow radiologists to control the execution of X-rays, but many (surgeons in particular) balked at the idea that radiologists were uniquely qualified to interpret them. The turf wars between the
newcomers and the established specialists help to explain why the AMA did not rush to the radiologists' defense. Radiologists would have to look out for themselves. This they did, launching their own counterattack in campaigns extolling the virtues of the new technologies. In 1955, the year cobalt radiotherapy was introduced (and Irma Natanson was treated), the American College of Radiology circulated 100,000 copies of its first pamphlet aimed at a general audience—"X-rays Protect You!"58 A few years later, radiologists acknowledged that "public fears of radiation have been so inflamed by debate on fall-out and nuclear war that the medical use of X-ray has been impaired." Addressing colleagues at the annual meeting of the American Roentgen Ray Society, a radiologist pointed to the "semi-hysteria" that the subject had whipped up, and claimed that it had "seriously clouded, in the public eye, the legitimate use of radiation for medical purposes."59 It was "time to inject some common sense into our swooning population," warned another radiologist.60 Radiologists were fighting an uphill battle. This would prove particularly challenging fifteen or twenty years later, when the American Cancer Society and the NCI hoped to win the support of American women for the new diagnostic technique of screening mammography. Paradoxically, the Cold War had not only helped to underwrite the medical technology that would make mammography feasible; in its campaign against fallout, it managed to make that technology frightening as well. Here was the classic "atoms for peace" dilemma all over again, the fear of radiation and the fear of cancer commingling and reinforcing each other. In the end, the fear of radiation became so pervasive and so clamorous that it galvanized efforts to drive down the levels of exposure to a fraction of what they had been in the early mammography experiments of the 1960s. Alas, the technical improvements in the procedure's safety were never matched by improvements in its accuracy (with consequences for women and the practice of medicine that are taken up in chapter 7). Inevitably, mammography aroused the same cognitive dissonance that was a feature of almost everything touched by radioactivity and linked to cancer. This explains, in part, its perennial insecurity. Unlike the uncontroversial Pap test for cervical cancer, mammography has had to sell itself anew to every generation of American women. Consensus about it has been—and remains—rare. Unrelenting media coverage displays the extreme volatility surrounding so many aspects of the subject (the earliest age at which the benefits of screening outweigh the risks, for example,
remains controversial). Women have been bombarded with conflicting and confusing advice from the very beginning. Mammography may have become an almost universal procedure, but something of the “provisional” still clings to it. Its failure to project a stable aura of confidence is perhaps best exemplified by the headline of a New York Times feature story in 1976—“Is mammography safe? Yes, No and Maybe.”61
Chapter 6
Cancer and Fallout: Science by Circumvention
With the voice of the medical profession silent or, like the radiologists, otherwise engaged, it was skeptical scientists who, by default, came to represent public opposition to official nuclear policy. The dissidents who entered the fray were members of a small and elite group. Although none of them, by the late 1950s, was still working in an official capacity for any federal agency, many, especially the physicists, had had some experience with the wartime weapons development program. Now back in academic research, they kept in touch with former colleagues at the AEC and elsewhere and were familiar with the outlines of the government research program, if not with its details. Essentially, they were insiders turned outsiders, a status which conferred advantages that no physician enjoyed. And they were fully equipped both to understand and to dispute the official version of the fallout story. Contrary to Cold War propaganda that lumped all disaffected scientists together as communists, the dissenters represented no one fixed set of beliefs—some supported disarmament, some did not—but they all shared doubts about the official science that was being fed to the press. A catalyst for their mobilization was the AEC’s delayed and inadequate response to the threat of radioactive fallout raised by the 1954 Bravo test in the Pacific. The commission did not issue a press release in response to the event for almost a year after the blast. A year after that, the AEC finally issued a formal report, Some Effects of Ionizing Radiation on Human Beings.1 The report did not mention cancer or any other long-term peril, insisting that continued testing at the Nevada Test Site would create no off-site health or
safety hazards. The flurry of coverage that followed in the popular press parroted the government’s optimistic outlook, initially without qualification. The key was the report’s exclusive emphasis on short-term, acute effects. “Medical and biological advisors,” Newsweek informed its readers, “do not believe that this small amount of additional exposure is any basis for serious concern at this time.” An AEC commissioner quoted in the New York Times insisted that the bomb tests held “no immediate hazard.”2 The report steered clear of any reference to the potential significance of enduring effects, that is, to the possibility of the eventual onset of disease years later. The monitoring of radioactivity near the Test Site was not set up with any long-term follow-up in mind. Uneven at best, it lasted just hours or days, not weeks or months. Opportunities for the thorough measurement of fallout that might have revealed the sensitivity of some cancers to radioactivity were, in this way, utterly squandered. Even the spotty evidence of fallout that was collected was withheld from independent researchers. Without access to the data and certainly without official encouragement, concerned scientists had to look elsewhere to make their case. Among the first to come forward were two scientists from the University of Colorado—Ray Lanier, head of the radiology department, and Theodore Puck, head of biophysics. Puck had done research investigating the effects of varying doses of ionizing radiation on tissue cultures, work that began to suggest that X-rays damaged chromosomes.3 The two men, concerned that fallout from tests conducted in the spring of 1955 had “reached a point where it could no longer be safely ignored,” issued a statement to the press asserting their belief that “for the first time in the history of the Nevada tests the upsurge in radioactivity measured here has become appreciable.” Puck and Lanier added a rider, in language familiar to readers today: “We feel strongly that our belief in the principle of public discussion of the problem in our democracy does not constitute grounds for an attack on our patriotism.”4 The governor of Colorado thought otherwise: he called the report “a publicity stunt” and suggested that the two men be arrested. Others pointed to the same fear of speaking out. A report of a meeting of the Federation of American Scientists in the spring of 1955 commented that “great difficulty had been met in finding scientists willing to talk in public about the problem,” citing “security regulations, fear of controversial issues and administrative restrictions.”5 The physicist Ralph Lapp was another thorn in the side of the AEC. Although he had worked at both the Argonne National Laboratory in
Chicago and the Office of Naval Research, in the 1950s he wrote as an independent journalist, often in the pages of the Bulletin of the Atomic Scientists. Lapp took issue with the data on fallout as it had been officially packaged. He lamented the serious gaps in the official investigation following the Bravo test in the Marshall Islands. The AEC report, Lapp claimed, had, first of all, failed to acknowledge the long-term persistence of fallout radioactivity and so made no attempt to estimate—or directly measure—the levels that might be in the atmosphere weeks or months after the blast. Second, it had failed to present data illustrating the variation of the radiation dose rates in various parts of the Marshall Islands. Third, the report failed to state where in the atoll the measurements were taken.6 It was, in other words, a perfunctory piece of work, one that Lapp believed accurately reflected AEC indifference (if not hostility) to the epidemiology of radiation-induced disease. This apparent lack of interest was an enduring response, one used to cover a multitude of sins. And though it wouldn’t become known until twenty-five years later, the same inadequacies brought to light in the Pacific tests marked the monitoring and mapping of fallout following testing on American soil. The same doubts emerged about the accuracy of the AEC’s instruments and fallout maps, together with the same accusations of carelessness in follow-up readings that missed “hot spots” as well as “cold spots.” Lapp also chastised the AEC for its failure to keep the American public informed: “I believe that the national interest demands a better relation, a freer flow of communication between the AEC and the press.”7 It also demands, he urged, “only reasoned and careful estimates of the hazards based upon factual knowledge. Reckless or non-substantiated statements do a disservice to the AEC and to the Nation.” He cited several examples of what he deemed “reckless” statements that had recently appeared in newspapers. In congressional hearings on fallout, he read into the record the remarks of Merril Eisenbud, director of the AEC’s health and safety laboratory in New York, as they appeared in the Sunday News: “The total fallout to date from all tests would have to be multiplied by a million to produce visible, deleterious effects except in areas close to the explosion, itself.” He also quoted Willard F. Libby, at the time the only scientist on the Atomic Energy Commission, who, in a speech in June 1955, claimed that “as far as immediate or somatic damage to the health is concerned, the fallout dosage rate as of January 1 of this year in the United States could be increased 15,000 times without hazard.”8 Hyperbole seemed to be an occupational hazard, no doubt abetted by the vast scale of atomic energy itself.
The popular media made minor celebrities of hitherto unknown AEC spokesmen like Eisenbud and Libby. But they were not the government’s only cheerleaders. Equally passionate about nuclear weapons was the better-known Edward Teller, the Hungarian-born physicist often referred to as the “father of the H-Bomb.”9 His thermonuclear weapon, first tested in 1952, was almost 1,000 times more powerful than the bomb that had been dropped on Hiroshima. As a refugee from Nazi Germany, he harbored what could reasonably be described as a pathological hatred of the Soviet Union. Teller played an important role in raising the temperature of the fallout debate. With a demonstration of the grandiloquence that the subject provoked, the February 10, 1958, issue of Life magazine announced a cover story—“Dr. Teller Refutes 9,000 Scientists”—as though Teller were a courageous David holding out against overwhelming odds rather than one of the most powerful scientists in the country. The “9,000 scientists” Teller rebuked were actually names on paper, signatories from almost every country in the world (including the United States) to a petition submitted to the United Nations by the activist Linus Pauling. Pauling, a professor of chemistry at the California Institute of Technology, was one of the most outspoken and the most unrelenting proponents of nuclear disarmament. In 1951, he had been accused by the House Un-American Activities Committee of demonstrating “a pattern of loyalty to the Communist cause.”10 His U.N. petition called for a halt to the nuclear tests, a gesture he saw as a first step toward “the ultimate effective abolition of nuclear weapons.” A book—No More War!—followed in the same year as the petition. Pauling also sued the Atomic Energy Commission, alleging, among other charges, that the Atomic Energy Act of 1954 did not authorize atomic testing and that the tests violated the freedom of the seas. The court found that he had no standing to sue and dismissed the case. Teller thought the scientists’ U.N. petition was peddling “at best half truths.” Its statements were, he wrote, “misleading and dangerous. If acted upon they could bring disaster to the free world.” He argued that “worldwide fallout is as dangerous to human health as being one ounce overweight.” Pauling took the argument seriously and dismantled it scientifically in an extended reply in The Nation.11 At the end of his polemic in Life, Teller reveals the Promethean vision that lay at the heart of the nuclear enterprise: “The spectacular developments of the last centuries in science, in technology and in our everyday life, have been produced by a spirit of adventure, by a fearless exploitation
of the unknown. . . . It is possible to follow this tradition without running any serious risk that radioactivity, carelessly dispersed, will interfere with human life.” Radioactivity, in other words, is a spoiler, a nuisance messing up the reductive purity of this boldly creative pursuit. This is mythology in the service of ideology, with science at the mercy of both. The subjugation of science to policy is not, of course, something unique to the Cold War. More recent opposition to climate science, stem cell research, and Darwinism is just as dismissive of science as was the Cold War rejection of the hazards of fallout. The organizing principle of opposition may now be religious faith rather than classical myth, but the controlling imagery is just as powerful. A critic has written: “What we are seeing is the empowerment of ideologues who have the ability to influence the course of science far more than ever before. They say, ‘I don’t like the science, I don’t like what it is showing,’ and therefore they ignore it. And we are at a place in this country today where that can work. The basic integrity of science is under siege.” These are not the words of the maverick scientists in 1956 but of Alan Leshner, chief executive officer of the American Association for the Advancement of Science, half a century later.12 Five years after the Teller/Pauling exchange, in 1963, the United States finally signed the Limited Test Ban Treaty, agreeing, with the Soviet Union and Great Britain, to end above-ground nuclear testing.13 Shortly afterward, Linus Pauling was awarded the Nobel Peace Prize (his second Nobel—the first, in 1954, recognized his scientific accomplishments). It’s hard to miss the parallels between the international political climate in which the Nobel Peace Prize Committee made its choice in 1963 and that of 2005, when it selected Mohamed ElBaradei and the International Atomic Energy Agency (IAEA) he headed as the Peace Prize recipients. Created in 1957 as an outgrowth of Eisenhower’s “Atoms for Peace” program, the IAEA was charged with inspecting nuclear installations worldwide, a mandate it continues to fulfill to this day. The agency had completed the inspection of over 200 sites in Iraq at the time of the U.S. invasion in 2003 but had found no indication of any prohibited activities related to the production of weapons of mass destruction. Two years later, the Nobel Committee recognized the IAEA’s efforts and tacitly applauded its mission just as it had earlier applauded Pauling’s efforts to ban nuclear testing. But in 2005, the work of an international agency (with all its resources) was shown to have had no more restraining power against the will of the United States government than the protests of an individual American half a century earlier.
Circumventing Secrecy
Then as now, scientists out of step with the prevailing political perspective faced considerable obstacles. Throughout the Cold War, research funds were cut, employment terminated, publication delayed, dissemination of results thwarted. But scientific dissidents faced an additional hurdle—that of secrecy. It was not just the evidence on fallout collected after the Nevada tests that was withheld from interested researchers; the evidence following the bombings in Japan and the American nuclear tests in the Pacific also remained under the control of the Atomic Energy Commission. The AEC released only what it chose, only when it chose. Those who believed that nuclear weapons were potentially more dangerous than government press releases suggested could neither confirm nor disprove the authorized version. Nor could they undertake experiments of their own. They were forced to turn to evidence unrelated to the nuclear tests to make the case—however indirectly—for a link between fallout and cancer. Since radiation had been widely used to treat nonmalignant conditions for years before anyone raised the alarm, it was possible to go back and trace the long-term health status of patients who had undergone these treatments. As surrogates for fallout victims, they were far from perfect, but follow-up information about them was at least available and accessible. There were three main groups of surrogate populations. First was a cohort of 15,000 men in Great Britain who had been treated with radiation for a rheumatic disease of the spine called ankylosing spondylitis; second was a group of 1,400 American children who had been treated with X-rays in infancy for an enlarged thymus gland; and third was the population of radiologists themselves, who, along with their technicians, were regularly exposed to radiation as an occupational hazard. Excess cancers were discovered among all three of these groups and reported in the medical literature in the 1940s and 1950s.14 But were the findings significant? The actual numbers involved were small because diseases like leukemia were rare. To demonstrate a significant increase in their incidence required extremely large sample populations. Without them, it would be impossible to argue convincingly that a handful of extra cancers was, in fact, the result of fallout rather than of chance. The other drawback of these studies was their dependence on a pattern of radiation exposure that was quite different from that associated with fallout. Therapeutic exposures involved the administration of fairly high, constant doses of radiation repeated at regular intervals over a finite period. Fallout, by contrast, involved the continuous exposure to low-level radiation
of varying but largely diminishing intensities. It was a new phenomenon, unexplored before the advent of the atom bomb. How closely it paralleled the behavior of planned radiation exposures in a clinical setting remained unknowable. This left the door open for interpretations that betrayed the same political prejudices that characterized the rest of the fallout debate. Those seeking to minimize the potential harm of ionizing radiation postulated the existence of what was called a “threshold” dose. This was a level of exposure below which radiation was presumed to be harmless, either because it was too low to do any harm or because the body had the ability to repair the minimal damage that might be inflicted and could therefore forestall the initiation of disease. In this formulation, the question of exposure becomes “how much?” which implies that some exposure, however defined, is likely to be safe. Set against this is a belief in what is called a “linear” relationship between exposure and harm. It argues that any exposure to ionizing radiation has consequences, no matter how small or how difficult to measure. Although greater exposure may lead to greater harm, there is, in this framing, no inconsequential dose at any level.15 The first of these two hypothesized relationships leans heavily—and repeatedly—on efforts to nail the numbers. As long as the scientific wheels are spinning, threshold doses remain in contention. Sometimes the numbers shift upwards, sometimes they hold steady; as long as they are actively under investigation, the basic presumption of a safe exposure level remains in play. Years may go by, research contracts renewed, results refined, but still no conclusive doses are agreed upon. Meanwhile, the release of radiation continues undiminished. The alternative “linear” approach, which postulates no safe dose, does not require the scientific precision of the first. Credible evidence of harm, it argues, should be all that is required to galvanize intervention. What is needed is not more science but the political will to withdraw any suspected toxin from circulation, once and for all, before it does further harm. The Cold War debate over fallout marked the rise to dominance of the first approach over the second. Government set the terms of the controversy, mobilizing its superior resources to promote a set of beliefs that protected its own interests. By keeping the conversation pegged to the question of safety as it chose to define it, government science essentially transferred the burden of proof onto those wishing to eliminate hazards rather than those choosing to tolerate them. It was very hard, after all, to disprove the existence of a threshold dose. The scientific evidence was simply too inconclusive. At least for the time being, this was enough to
hold off dissent. Soon enough, however, reports from other quarters would pose problems for official prevarication.
Epidemiology: Evidence of Harm
For the first decade of the Cold War, Americans saw the victims of Hiroshima and Nagasaki as enemy war dead. The numbers were hard to digest, an estimated 90,000 to 120,000 killed in the aftermath of the blast at Hiroshima and a further 60,000 to 70,000 dead at Nagasaki.16 The devastation was unprecedented but it brought the war to an end. For those Japanese lucky enough to have survived the bombings, the war was over, too. Though their lives might be in ruins, it was time now to concentrate on rebuilding, on getting on with the peace. This was, in short, the prevailing American view of the Japanese bombings. The emergence of the bombs’ aftereffects would seriously complicate this picture. Both the short-term and long-term consequences of exposure to ionizing radiation would prove to be significant and often fatal. Acute radiation sickness (nausea, fever, vomiting) took its toll early. But delayed effects were just as disturbing, from particles released by the bombs themselves or from the decay of radioactive materials in the fireball. Fallout was an unexpected side effect, something the military had not planned for and knew little about. Until the Bravo test in the Marshall Islands in 1954 (almost ten years after the Japanese bombings), the survivors of Hiroshima and Nagasaki (the hibakusha) were the world’s first and only victims of nuclear weapons. As such, they would become one of the most closely watched populations in history. In 1947, the brand-new AEC funded the Atomic Bomb Casualty Commission to monitor the health effects of atomic weapons on the Japanese survivors. And beginning in the early 1950s, the commission’s findings were regularly reported in the medical literature. In 1952, monitoring picked up the first signs of excess leukemias.17 As time went by, evidence of other cancers (with longer latency periods) began to emerge. The first solid tumor to show up in greater numbers than expected was thyroid cancer, in 1959.18 This was followed by breast cancer, which first showed itself between 1955 and 1960 but was not reported until 1967.19 Later still, unexpected cancers of the lung, stomach, colon, bladder, and esophagus also began to appear, among others. In general, the earlier the age at which radiation exposure took place, the greater the risk of cancer down the road; and the longer the data are studied (and refined), the greater the risks appear to be. With every updated assessment, the
lifetime risk values for most of the radiation-related cancers are revised upward.20 Until the last of this unique population dies, the data will continually be subject to revision. In the 1950s, American civilians knew little of these revelations. Nor were they aware of similar evidence emerging on their own soil. Certainly no one drew attention to the parallels between the citizens of Hiroshima and Nagasaki in 1945 and the Mormon communities living downwind of the Nevada Test Site in the early 1950s. What could a remote enemy population and patriotic U.S. citizens possibly have in common? Americans still believed that most Japanese victims had died almost immediately from the bombs’ heat and blast. They had no reason to link them to cancer or to any longer perspective. Military strategists were eager to prolong this state of ignorance; negative publicity of any kind might jeopardize the weapons testing schedule. As it turned out, they had little to fear. In the early 1950s, there were few complaints, even about the inadequacy of the early warning and monitoring systems in the vicinity of the Test Site. As an AEC commissioner admitted candidly in 1954, “the nuclear testing program resulted in a lack of balance between safety requirements and the requirements of the program. When cancer research studies came into conflict, the balance was apt to tip on the side of the military program.”21 The struggle to redress this imbalance would last for half a century. For much of that time, the military was able to restrict the radiation debate to scientific considerations that were purely theoretical. Calculations estimating exposures were abstract and had no names or faces attached to them. It was easy to package information in a framework that excluded anything that might prove controversial. Propaganda could, for instance, lean on the concept of an average radiation exposure for all Americans rather than singling out the most dangerous “hot spots” where readings were much higher. Incorporating radiation exposures in areas of the country where fallout was either minimal or non-existent naturally brought the overall average down to a relatively harmless level. Most Americans were not equipped to see through the ruse. Given the intentional obfuscation of the true state of scientific knowledge, there is some irony in the post-Sputnik panic about the low level of scientific literacy in the country. The Soviet launching of the first satellite in 1957 caught the United States off guard and prompted all manner of reproach and lamentations among the scientific community. How had American scientists allowed their Soviet counterparts to get ahead of them?
Sputnik put them to shame. Not only did it trigger the space race, which became another formidable component of Cold War competition; it also spurred an interest in improving the quality of science education in schools.22 But, as is still evident today, science literacy on its own would never be a match for political expediency. What began to level the playing field in the fallout debate was evidence from the field of epidemiology, a branch of science that asked questions physics did not. The early epidemiological investigations concerning fallout were observational rather than theoretical. They focused attention explicitly on the populations that had been most heavily exposed, that is, on those living downwind of the Nevada Test Site in the 1950s. They were designed to compare differences in the incidence or mortality rates of cancer between two specific groups, downwinders on the one hand and, on the other, control groups living far from the nuclear test area. Right away, this added a human dimension to what had been a vague and remote scientific debate. Americans at risk were still nameless, but they had at least been distinguished by their geographical location, on American soil. This was a start. The victims of Hiroshima were of course nameless, too. But gradually, as the work of the Atomic Bomb Casualty Commission came to light, attention shifted from Japan’s war dead to its atomic survivors. Many Americans read about the visit to New York, in 1955, of the twenty-five disfigured “Hiroshima Maidens,” victims of the atomic bomb who were flown over to have their faces reconstructed by volunteer plastic surgeons at Mount Sinai Hospital.23 They, like other Japanese survivors, were not soldiers or kamikaze pilots but civilians, eager to return to their former lives, as far as they could. Their predicament was easily understood by American readers. The revelations of sickness and disease that cropped up in U.S. newspapers began to endow the Japanese survivors with a presence that, as time went by, undercut the “enemy” status that had set them apart. The appearance of leukemias and, then, of other cancers raised the temperature of this concern, rekindling the ethical ambivalence that is still attached to the United States’ first-strike use of atomic weapons. Had it been absolutely necessary to drop those bombs in order to end the war? Or had it done more harm than good, unleashing a new kind of warfare of unlimited destructiveness and potentially global consequences? The rising toll of cancers kept these questions alive. Eventually, parallels would be drawn between the time-lagged revelations of disease in Japan and those in the American Southwest. Both would generate a deep distrust of government.
To postpone this day of reckoning, the AEC took preemptive action on several fronts. Where independent research challenged the official perspective, the AEC relied upon well-placed henchmen to criticize and undermine any findings that came into conflict with its own position. The commission also exploited its vastly superior media connections to keep the “correct” view before the public. In 1960, for example, the New England Journal of Medicine (NEJM) published the first summary of the findings of the Atomic Bomb Casualty Commission.24 Among other discoveries, the commission noted the rise in leukemia among Japanese survivors, beginning in 1948. It also reported the first signs of an increase in other cancers, following the establishment of tumor registries, and promised to provide further updates as data accumulated. The alarm, in other words, had been sounded. Two years later, however, with the prospect of a nuclear test ban looming large, the NEJM published a lead article that sought to undermine the drift of the Bomb Commission’s findings.25 In yet another demonstration of the influence of Cold War policy on the practice of medicine, the journal inadvertently showcased the position of the AEC by reproducing a lecture delivered by one of its backroom operators, speaking in his capacity as a medical doctor rather than as a spokesman for atomic energy. The lecturer was Shields Warren, the former director of the AEC’s Division of Biology and Medicine, who returned to work at Harvard and the Deaconess Hospital in 1952. He set out to explain the challenges of radiation to his fellow physicians in a talk entitled “You, Your Patients and Radioactive Fallout.” Warren offered a useful overview of the complex technical issues involved, highlighting the variable effects of radiation depending on the dose, the frequency, and the duration of exposure. But after a lengthy lead-in, he finally got down to the business at hand—the need to endorse the AEC’s nuclear testing program. Understanding that doctors must be able to reassure their patients, Warren insisted that the dangers of fallout from atomic testing were “minimal . . . the possible danger from fallout from all tests to date,” he asserted, was equivalent to “one tenth the risk received from increased radiation encountered by a person moving [flying] from Boston to Denver.” Framed in this way, the message disregarded all the disturbing evidence gathered from the Japanese bomb survivors—which Warren himself had helped to collect. In its place was a benchmark that dispensed with human agency altogether, substituting exposure to natural “cosmic” radiation experienced on a routine domestic flight. Exposure to fallout, the comparison implied, was every bit as innocuous.
In addition to exercising influence over the media, and especially over the medical media, government agencies were in a position to control the national research agenda as well. For much of the Cold War, the AEC kept a tight rein on epidemiological studies of fallout. Where it could, it used its considerable influence to block proposals and/or methodologies it did not favor. It also imposed a paralyzing review process on research the AEC sponsored itself. Two early studies illustrate the commission’s intention to keep a close watch on emerging epidemiological findings.26 The first was carried out by Edward Weiss at the Public Health Service in 1961. Weiss found a higher than expected number of deaths from leukemia in two Utah counties. His study was never published. The second, undertaken by Harold Knapp at the Atomic Energy Commission in 1963, investigated the contamination of the local milk supply by radioactive iodine. The AEC, apprehensive about public reaction, feared that disseminating Knapp’s results would have “potentially detrimental effects upon the government’s nuclear weapons testing program.”27 The chairman of a special committee set up to review the report acknowledged that “if Knapp publishes his findings, the public will know that they haven’t been told the truth up to now.”28 Despite the odds against them, some independent scientists did undertake epidemiological studies to trace the effects of radioactive fallout on the exposed populations of Utah and Nevada. They faced considerable obstacles in attracting funds but fewer problems of censorship. Like all independent research in the field, studies funded by private foundations or academic institutions were hampered by the same lack of data and by the limited statistical power of the small numbers of cancers that were detected. But the independent studies were at least able to point openly to a positive relationship between exposures to fallout and disease. According to the historian Howard Ball, the 1974 Rallison Study, an investigation of excess thyroid cancers in children living near the Nevada Test Site, was the first major nongovernmental report on cancer and fallout, undertaken by a medical researcher at the University of Utah.29 A few years later, Joseph Lyon, an epidemiologist at the same university, started work on what became a landmark study in the field. He found levels of childhood leukemia that were significantly higher among downwinders during the 1950s. Lyon highlighted a methodological habit of mind that has been a feature of the government’s response to fallout from the very beginning and has colored epidemiological research in the field ever since. This is the
marked preference for study designs that keep their distance from the actual health effects of fallout in exposed human populations. Although such effects are directly observable, funded studies tend to concentrate on the elaboration of theoretical risk assessments, which are not. The creation of a parallel universe of abstract hypotheses keeps the inconvenient facts of lived history well out of sight. The basic tool for this work is a dose reconstruction. This is an estimate of the radiation exposure that an individual might have experienced if he or she had been at a particular location at a particular time. It is a disembodied measurement, hard to get a grip on because it is describing the experience of a phantom presence, not that of a real human being. A reconstructed exposure to radiation doesn’t mean that anyone actually received the estimated dose at the specified time and place, only that, had anyone been there, this is the dose of radiation they would most likely have received. (The tortuous use of the conditional tense illustrates the hedging of bets that hangs over the enterprise.) In this remodeling of events, real human beings appear only as ghostly figures slotted into the drama retrospectively. In fact, it is their very intangibility that made dose reconstructions so appealing and that allowed them to become a popular tool of public health policy. But not everyone was willing to allow their drawbacks to pass unnoticed, as Joseph Lyon demonstrated to a congressional committee in 1981: Given the uncertainties of dose reconstruction 20–30 years after the events, I feel that a study of adverse health effects must be done rather than to rely on imprecise and inadequate dose measurements for inference about human health effects of fallout in this population. The approach presently being pursued by the Federal Government is reminiscent of the 1950s, when inadequate measurements of radiation exposure were made and the population was assured that those levels could not produce disease. The appropriate method of determining health effects is to measure the health of the population, not to guess at the level of exposure and use uncertain extrapolations from other studies to predict the health effects that would be produced.30
Very few epidemiological studies ever followed Lyon’s advice to examine the actual health of those flesh-and-blood individuals who really had been exposed to fallout. No studies sought them out with a view to documenting the adverse health effects they might have suffered. Nor did any
begin with a group of cancer victims and attempt to work backwards, to look for biological markers of radiation exposure in their blood, urine, or elsewhere. They moved instead in the other direction, hypothesizing exposure levels at specific times and places without identifying (or seeking out) those who might have been subjected to them. In this way, dose reconstructions embody the defensive posture so characteristic of the government’s response to fallout. The figures do not in themselves suggest any cause-and-effect links between radiation and cancer. Theoretically, they could be used for that purpose, in conjunction with other data. But the government fought hard to keep a firewall between the data and the diseased. It understood that the framing of the research, much like the framing of the public debate, functioned as a critical feature of damage control. Harold Knapp, the author of the 1962 paper on radioactive iodine in milk supplies, threatened to go beyond the permissible limits in his own work and was duly chastised for it. When he said that his calculations “had particular relevance to the persons residing around the Nevada Test Site because the levels of fallout had been so much higher there,” a critical reviewer from the Federal Radiation Council “objected to what he called the tone of the thing on the grounds that it tended to personalize the problem.”31 The arm’s-length perspective of Cold War dose reconstruction persists to this day. It is at least partly responsible for our still-limited understanding of the long-term impacts of fallout on the health of U.S. citizens. “It still amazes me,” Joseph Lyon wrote in 2006, “that 55 years after the beginning of aboveground testing, no comprehensive assessment of the overall mortality of Washington County, Utah, one of the most heavily exposed counties in the U.S., has ever been done. We completed about 80% of the scientific work necessary to do such an assessment, and then our funds were stopped.”32 Dose reconstructions became the essential building blocks of a new research paradigm that implicitly accepted widespread exposure to environmental carcinogens. They uncoupled the traditional cause-and-effect hypotheses associated with epidemiology. The customary line of attack had been to identify the causes of disease (bacteria in the drinking water, disease-bearing mosquitoes) in order to alter the conditions that led to outbreaks of epidemic disease (cholera, yellow fever). The search for corrective action was, in fact, the pragmatic motivation behind much epidemiological work. But the derivation of dose reconstructions following the nuclear tests focused more narrowly on the measurement of exposures as end results in themselves rather than as a catalyst for further action.
Dose reconstructions could be ethically and politically neutral. They were descriptions only and they applied to individual rather than to community exposures. They neither accused radiation of causing cancer nor exculpated it. But they served a clear political purpose. The steady funding of research to derive ever more carefully crafted estimates also had the effect of normalizing the original exposures, building up a tolerance for atmospheric contamination. If fine distinctions mattered, it must follow that some level of risk was acceptable, below the threshold of harm. The presumption of such a threshold, inherent in the dose reconstruction approach, undermined the argument for “zero tolerance.” “Risk management” took its place, demonstrating that data, properly formulated, could itself dictate policy. “Management” suggested tinkering from a distance rather than radical overhauls on the ground. In other words, it obviated the need to reduce the actual prevalence of risk, to go back to the source itself with a view to eliminating the cause of harm altogether. Just as “effective” would be the comprehensive calibration of risks, a set of measurements that recognized and accepted the persistence of a wide spectrum of hazards. This would provide policy makers with the tools they required to pin down and, on occasion, to shift the location of the permissible levels of exposure along the length of a broad continuum. The concept of “acceptable risk” would lead policy makers out of the wilderness of uncertainty—and liability—by stabilizing the debate. Dose reconstructions would join forces with quantitative risk assessments (calculations of cancer risks from other sources) to yield recommended radiation standards. The existence of standards essentially naturalized the presence of radiation in everyday life (and set the pattern for almost all environmental hazards that followed). Although this has not laid the matter to rest, it has gone to ground within an institutional framework that is both powerful and remote (this is explored further in chapter 10). There are curious parallels between the contradictions of fallout research and those raised by the secret radiation experiments running concurrently but out of public view. Behind the pursuit of dose reconstructions was the presumption of a minimal, “threshold” dose, a level of exposure below which radiation would prove to be harmless. Behind the secret experiments was the presumption of a maximum dose of radiation, above which human life would be seriously threatened. Their efforts were, of course, not coordinated but, taken together, they could be described as reaching for an intermediate zone of tolerance, a spread over which exposures might become acceptable, ultimately even routine. Both pursuits, by
implication, took the presence of ionizing radiation as a given, a permanent feature of the landscape. They accepted man-made radioactivity in much the same way they accepted radon, a naturally occurring radioactive gas found in the soil and in the stone foundations of many homes. Radiation as a by-product of human engineering was to be considered, like radon, as something inevitable, an emanation with potentially harmful side effects that needed to be managed. This way of thinking stood in sharp contrast to the view of radioactivity as a potentially toxic environmental pollutant that should be controlled if not entirely eliminated. The latter view was no more evident in Cold War epidemiology than it had been in the secret radiation experiments. Together, the two strands of radiation research constitute a kind of mapping exercise, more an effort to understand an unfamiliar process than any attempt to contain it. In pursuing their limited objectives, both approaches set aside any immediate concern for the health of the study populations under investigation. In both cases, the hands-off posture cost human lives. Most of the casualties were cancer-related deaths for which government was responsible. They were not victims of wartime “friendly fire”; the military knew they were out there and did nothing to protect them. This uncomfortable fact raises serious doubts about the government’s avowed commitment to cancer prevention. Disregard for the health of the downwinders, as for the experimental subjects, reflects the primacy of national security concerns. The military saw their mission as keeping Americans alive. It was someone else’s job to keep them healthy. Governed by short-term rather than long-term considerations, the Cold War ordered government priorities and dictated the allocation of government research funding. It was visible in every lab and every academic institution in receipt of public monies. An army colonel advocating for the experimental use of human subjects put it bluntly in 1950: “We are very much interested in long-term effects but when you start thinking militarily of this, if men are going out on these missions anyway, a high percentage is not coming back, the fact that you may get cancer twenty years later is just of no significance to us.”33
Chapter 7
Paradise Lost: Radiation Enters the Mainstream
Nothing so effectively confirmed the arrival of radiation as a permanent feature of the postwar landscape as the agreement to regulate it. As Shields Warren, former chief of the AEC’s Division of Biology and Medicine, insisted, “We must look at the paradise of no radiation exposure as utterly unachievable in this life.”1 The wide-ranging activities that generated radioactive hazards were much too valuable politically and economically to ever consider renouncing them. Formal recognition of radiation’s potentially dangerous side effects would be a small price to pay to smooth its path to legitimacy. Recommended exposure limits, carefully set by experts in the field, would demonstrate a commitment to safety on the part of all those eager to protect their access to atomic energy. The biggest postwar push for standard setting came not from the medical profession but from the AEC.2 Though the commission dictated nuclear weapons policy, it depended upon private contractors to build and manage the nuclear installations it designed. Contractors, in turn, relied for the most part upon organized labor to get the job done. For workers, radiation hazards represented a serious occupational risk. But just how serious was a question that neither contractors nor unions were in a position to answer. Although the AEC had a statutory responsibility for nuclear safety, it could not, in this instance, set its own standards. That would constitute meddling in the collective bargaining process, inviting the charge that they were setting “radiation levels to suit their operating needs rather than the safety of the worker.”3 But if the nuclear industries were to enjoy a bright future, radiation hazards had to be
addressed and reconfigured as a controllable “input” in the production process. What was needed was an “independent” authority to supply recommendations for permissible exposures at the workplace in a simplified format that all the parties could understand and agree to work with. To achieve this, argues the historian of science Gilbert Whittemore, the AEC latched on to the National Committee on Radiation Protection (NCRP), setting it up as the prime standards-setting authority and helping to finance its operations.4 The NCRP was chosen as a neutral go-between that would, at least on paper, favor neither management nor workers. As an advisory committee with no statutory powers, its recommendations could never be legally enforced. They would, nonetheless, come to carry the weight of law. This gave the AEC the best of both worlds, allowing them to demonstrate their clear support for radiation safety but without tying their hands. It would also give a boost to the emergent cancer industries that incorporated radioactive isotopes in production, helping them to establish more of a level playing field with other sectors like pharmaceuticals, whose development of cancer treatments was, at the time, unencumbered by this additional layer of known risk. Debating, setting, and revising notional limits to exposures might demonstrate a willingness to grapple with the issue of hazards. But denying statutory powers to the standards-setting bodies was an equally forceful statement about the status of these risks. Opting to recommend rather than to regulate may have served the best interests of the committees themselves, insulating them from the vagaries of shifting political winds and so guaranteeing their own survival. But it also reinforced the idea that radiation was safe enough to allow individual users or government agencies to make their own decisions about what standards to adopt, depending upon their own circumstances and requirements. This sent the message that radiation exposures were manageable. Lauriston Taylor, the first president of the NCRP, demonstrated his commitment to that belief and an untroubled acceptance of its logical consequences. Once a nuclear operation had been carefully designed, taking recommended exposure limits into account, Taylor saw “no alternative but to assume the operation is safe until it is proven unsafe.” As Gilbert Whittemore remarks, “In theory, this placed the burden of proof on the unions; in practice it treated workers as experimental subjects” (italics added).5 The Cold War’s ethical stance, in other words, would now be formally enshrined in the solution to radiation protection. The same thinking that permitted the sacrifice of
cancer patients in the AEC’s secret radiation experiments was now summoned again to forge the NCRP’s philosophy, succinctly conveyed by Lauriston Taylor: “It is recognized that in order to demonstrate an unsafe condition you may have to sacrifice someone. This does not seem fair on one hand and yet I see no alternative. You certainly cannot penalize research and industry on the suspicion of someone who doesn’t know by assuming that all installations are unsafe until proven safe.”6 In the late 1950s, the AEC had to fight off a move to have its statutory control over radiation protection transferred to the Public Health Service. The Bureau of the Budget, charged with reviewing the roles of federal agencies in the area, argued against the transfer, explicitly on the grounds that “the whole future use of radiation would depend on the decisions of officials whose major mission and experience is public health.”7 Aired at the height of popular media interest in fallout, this was a debate understood by the public. The filmmaker Pare Lorentz put the case in an article he wrote for McCall’s magazine, throwing his weight behind the proposed transfer of responsibility to some other agency “not concerned with weapons but with health.”8 Its critics understood the drawbacks of the fox-and-henhouse arrangement at the AEC. As one of them put it, in serving the dual role of procuring nuclear weapons and regulating their risks, “the government is easily cast in the role of Typhoid Mary playing health commissioner.”9
Radiation Standards and Their Cold War Connections
The AEC’s interests could hardly have been better served than they were by the National Committee on Radiation Protection. Despite the legal niceties that separated them, the two shared a close identity of interests. And these were influential not just within the United States but on the world stage as well; the International Commission on Radiological Protection (ICRP) promoted a very similar agenda.10 The national and international operations were closely connected. Americans served in key positions in both and almost all of them had deep Cold War connections. The United States delegation to the ICRP Main Committee, for example, included not just Lauriston Taylor, president of the NCRP and an ex-AEC physicist, but also G. Failla, chair of the AEC’s Advisory Committee on Biology and Medicine, and the radiologist Robert S. Stone, a veteran of the Manhattan Project and author of the “Buchenwald” memo cited earlier. Also serving as a member of the NCRP’s Main Committee was Eugene Saenger, whose radiation experiments reflect precisely the priorities evoked in Taylor’s attitude.
From the very beginning, health concerns took a back seat to the imperatives of active weapons programs and commercial nuclear power. The clearest demonstration of the committee’s bias was its failure to intervene in any productive way in the fallout controversy. The NCRP raised no objections to above-ground weapons testing and proposed no countermeasures specifically to improve the safety of those at risk.11 Committee delegates were invited to attend a conference on fallout in 1962, organized by the Federal Radiation Council. They reported back that “in spite of the relatively high levels of iodine-131 resulting from fallout, the situation did not appear to be critical enough to warrant the introduction of control measures.”12 A letter from one of the delegates circulated to NCRP members expressed the view that “it may occasionally be necessary to permit radiation levels that represent a larger risk, simply because of overriding considerations of national interest.” The fallout controversy did at least prompt the standard-setting bodies to lower the prevailing levels of permissible exposures. Under pressure from the National Academy of Sciences whose recent report on fallout received widespread media coverage, they took action. By the late 1950s, both the NCRP and the ICRP agreed on five rem per year as the maximum occupational exposure to whole-body radiation (for those working in potentially radioactive environments like uranium mines, nuclear power plants, nuclear reprocessing plants). For the population-at-large, permissible exposures were much lower. The maximum was set at 0.5 rem per year, one-tenth the recommended maximum for radiation workers.13 The dose limit, however, did not include either exposures to background radiation (such as radon) or to medical X-rays; both were considered “uncontrollable” and therefore unquantifiable. To put this in context—one-half of a rem (0.5) is roughly equivalent to 1 percent of the minimum dose that cancer patients at the M. D. Anderson Hospital received during the first years of the secret air force experiments. By the late 1950s, at the height of the fallout controversy, the NCRP came under increasing pressure to revise its permissible doses downwards. At exactly the same time, the M. D. Anderson experiments moved in the opposite direction, bombarding their patient-subjects with ever increasing amounts of whole-body radiation, reaching levels up to 400 times higher than the NCRP’s recommended maximum exposure for members of the general population. Robert Stone may have been the only member of the NCRP with direct experience of secret experiments but, as a radiologist in receipt of defense
agency research funds, he typified the committee membership. From the start, his colleagues were primarily physicists and radiologists rather than biologists. The committee culture that evolved was, therefore, one of atomic energy enthusiasts. Members were, for the most part, tolerant of atoms for war and eager to promote atoms for peace. In the early 1970s, at least twenty-five of the NCRP’s sixty-four members were employed by the AEC, another six were on grants from the Department of Defense or working directly for manufacturers of nuclear reactors, such as Westinghouse or General Electric.14 With these credentials, members naturally saw themselves as facilitators rather than watchdogs. They viewed the elaboration of radiation standards as a necessary precondition for the efficient application of nuclear energy. It was not expected to place obstacles in the path of atomic progress. This was, of course, a mindset that tended to minimize the potential harm that radiation could inflict. Nowhere, in fact, do the standard-setting bodies claim that the levels of exposure they have set are safe. Although nonspecialists could be forgiven for thinking so, permissible dose levels were not designed to eliminate risks, merely to contain them at a level that still left the nuclear power industry room to maneuver. In the commission’s own words, the primary objective of their recommendations was “to provide an appropriate standard of protection for man without unduly limiting the beneficial practices giving rise to radiation exposure.”15 The self-serving orientation of the standards-setting bodies did not escape criticism. John Gofman, the quintessential renegade (he had worked on the separation of plutonium for the Manhattan Project), thought it “very difficult to conceive of an organization with a greater vested interest in the preservation of high levels of radiation.”16 The lawyer Harold P. Green, also speaking as a former AEC staff member, maintained that “risk-benefit decisions are not scientific problems. They’re political concerns and should be debated in the rough-and-tumble of the political process. What benefits does the public want and what risks is it willing to assume? The NCRP, in effect, has been saying to the public: ‘You are going to have to assume these risks in order to have the benefits we say you want.’” 17 That the control of cancer and other radiation-induced illnesses was not of central concern to the decision-making bodies is evident from their membership. With rare exceptions, there was no one fighting this corner. Oncology did not exist as a separate medical specialty until the 1970s; by that time, the early bias of the Main Committee was set in stone. From 1970 on, standards were set by legislation under the auspices of the Environmental Protection
Agency (EPA) and administered by the Nuclear Regulatory Commission (NRC). The EPA’s ability to implement effective radiation protection was hampered from the start by the same forces that had obstructed research into fallout a decade earlier. If anything, the lobbying organizations protecting huge government contracts in arms and nuclear power development had grown even stronger in the intervening years. Lobbyists had better access to politicians on key committees who could help safeguard their interests, keeping interference to a bare minimum. As their influence rose, funding and staffing for radiation research began to diminish noticeably. The EPA came under attack for ineptitude. A 1978 report by the General Accounting Office (GAO) bore the working title “Failure to Adequately Protect the American People from the Hazards of Radiation.” In congressional hearings examining the EPA’s difficulties, a California congressman reminded his audience that stakeholders in the use of nuclear and radioactive materials consistently underestimated the hazards of radiation. A reporter for BioScience summarized the (by now) familiar predicament: “Critics charge that political considerations have led to active suppression of research on radiation hazards and that scientists working in this area have abruptly lost their funding or their jobs when their findings displeased the government.”18 It is not that the radiation protection agencies took no interest in the health effects of radiation. On the contrary, both the NCRP and ICRP issued a veritable avalanche of carefully detailed reports on the impacts of exposure, estimating levels at which damage may occur to every organ in the body from every possible type of exposure in every conceivable context, setting separate standards for children, pregnant women, radiation workers, astronauts, and so on. (The list of publications is in itself a sobering reminder of the penetration of radioactivity into almost every cranny of industry, agriculture, and medicine—as well as the home.) The problem is rather that from the very beginning, the committees agreed to steer clear of the single most important source of exposure, that is, the medical use of radiation. They would not interfere in the practice of medicine. Accordingly, their concept of a “controllable dose” (from sources that could be regulated) explicitly excludes radiation associated with therapeutic exposures. Such accommodation should not be surprising. The ICRP was, after all, established under the auspices of the International Congress of Radiology. The commission’s enduring commitment to medical independence is entirely consistent with its heritage. Shields Warren, himself a pathologist, gratefully acknowledged the hands-off policy at a meeting of
the Health Physics Society in 1960. The NCRP recommendations, he said, “have been so framed as to not infringe on the judgment of the physician in arriving at the diagnosis of the lesion from which his patient is suffering nor to hamper him in the use of ionizing radiation for purposes of treatment.”19 One member of the ICRP was troubled by the medical profession’s resistance to regulation. This was Karl Z. Morgan, a physicist at the AEC’s Oak Ridge facility. Morgan had fathered the discipline of health physics, devoted to the study of the health effects of radiation; he was also the founding editor of the journal that gave voice to its concerns. A true crossover, with direct experience on both sides of the aisle, Morgan was ideally suited to address the problems of radiation protection. He served on the Main Committee of the ICRP from 1950 to 1971. In response to the discovery of increased cancer incidence among children exposed to X-rays while in their mothers’ wombs, Morgan, in 1964, was able to mobilize enough support among the thirteen ICRP Main Committee members to win approval of what was called the Ten-Day Rule. This delayed pelvic and abdominal X-ray exposure in women of childbearing age until the first ten days following the onset of menstruation, unless such a delay would cause harm to the mother. The regulation was adopted by the commission. Almost immediately afterward, however, it was censured, in print, by two of its own Main Committee members—Robert S. Stone and Lauriston Taylor. The dismayed Morgan, looking back on this episode years later, found it “ironical and very incongruous that through their publications, the chairman of NCRP and most of its members have consistently deprecated the risk of exposure to low levels of radiation. . . . Conflict of interest seems to be a contagious and virulent disease.”20 This was not an isolated occurrence. The committees were, it seemed, often more interested in protecting the professional interests of their members than those of the general population. Most Americans were unaware of the lack of formal restrictions on the medical use of X-rays. The regulatory framework was so crowded with federal agencies that it would seem to have covered every contingency. But, in fact, the jurisdiction of each agency was limited. The FDA, for instance, could control the manufacture of drugs, and it could set performance standards governing the manufacture of X-ray machines. But in neither case could it dictate the final uses of these products. That was left to the discretion of the doctor. The fact that patients were unaware of (or indifferent to) the lack of statutory protections in this arena is another demonstration of the inaptness
of the term “informed” consumer. This is one reason it was still possible, twenty years after Natanson’s treatment, for almost four hundred patients in Columbus, Ohio, to suffer similar overexposures to the same cobalt-60 therapy that she endured in Wichita in 1955. At least two and possibly as many as ten of the patients in Ohio died from radiation poisoning following treatment. A machine had been improperly calibrated, leading to doses that were up to 40 percent higher than those prescribed. By early February 1978, over one hundred lawsuits for damages had been filed against the hospital where the patients were treated.21 The NRC quickly intervened to administer damage control. But more was needed to assure the public that such a mishap could not happen again. To provide that guarantee, the Nuclear Regulatory Commission drafted revisions to its own regulations, recommending that future “misadministrations” be reported to the patient as well as to the NRC. The medical community balked at this proposal. In words evocative of the radiologist’s defense at the Natanson trial, the Office of the General Counsel, according to a history of the case, “cited legal barriers to informing patients and asserted that such a requirement ‘would insert the NRC into the physician-patient relationship without a demonstrated benefit.’” 22 But a demonstrated benefit to whom? By the late 1970s, informing the patient had been formally and widely acknowledged as a physician’s responsibility. But where it was severed from consent, it apparently lost its urgency, opening up a space for the old professional prejudices to reassert themselves. For the Ohio victims, it was of course too late for consent. In 1980, the NRC did finally issue a ruling that mandated the reporting of therapeutic misadministrations to both the NRC and to the referring physician.23 But though it also suggested reporting any such event to the patient or to the patient’s family, it let physicians off the hook by adding an escape clause—unless the referring physician objected. This came close to the hedging in the Natanson ruling that saw the physician’s “obligation to the patient” as “primarily a question of medical judgment.” These were, in any case, very minor adjustments and the public knew nothing about them. Media coverage of hospital negligence was, in any case, fairly rare. Grubby medical mistakes had none of the radioactive glow that lit up stories about fallout and, later on, about nuclear power plants. “Misadministrations,” by contrast, were considered dull stuff, of only local interest. Their invisibility, however, highlights a more serious omission. From very early on, it was clear to scientists that, for the vast majority of Americans,
the cumulative exposure to medical radiation was far greater than that accruing from exposure to all other forms of nuclear energy. Government scientists had said as much in the 1950s and 1960s, in their efforts to play down the impact of fallout. Fingers had been pointed at the overuse of X-rays from before the Second World War. J. Samuel Walker, a historian of radiation protection, cites the 1972 BEIR report which estimated that diagnostic tests alone “accounted for about 90% of the ‘total man-made radiation dose to which the U.S. population [was] exposed’ and 35 percent of the total from all sources. Nuclear power, by contrast, contributed less than one-half of 1 percent from all sources of radiation, including natural background.”24 But this disparity—between the levels of radiation associated with medical procedures and those associated with all other manufactured sources—runs directly counter to popular perception. The fear of radioactivity seems, rather, to be inversely proportional to the actual level of risk. Americans remain largely indifferent to the hazards of medical radiation but extremely anxious about emissions from nuclear power plants. Arguably, both responses are manipulated outcomes, the result of maneuvering that is more cultural and political than medical. The topsy-turvy relationship between perceived and actual risk can be attributed, at least in part, to the portrayal of radioactivity in the media. For more than a decade, fallout was the media star, drawing passionate (and often intemperate) responses from celebrity advocates on both sides of the aisle. Publicly aired antagonisms made good copy. Readers thrilled to images of mushroom clouds that suggested destruction (and hubris) on a scale hitherto unimagined. Medical X-rays, on the other hand, were decidedly humdrum and did not fire the imagination. But when they became the scapegoat for fallout during the early controversies about the dangers of radiation, the medical establishment fought back—too much was at stake to let it go.

From Unplanned to Planned Exposure: The Advent of Mammography
From the mid-1960s on, the crusade to promote screening mammography became a major plank in a larger campaign to restore confidence in X-ray technology. It was, perhaps, the cancer establishment’s most ambitious attempt to create a national demand for an annual service that, unlike the Pap test—first widely used in the 1950s—was not entirely risk free. The advocates of the new technique faced an uphill battle. First, they would have to make breast cancer part of the public discourse. Women
could not simply be commandeered into screening centers. They had to be convinced of the necessity to undergo what was still an experimental procedure. The disease had to be let into the national conversation, if only to generate a sense of urgency among American women. In this way, the need to create a demand for mammography played a major role in hastening the normalization of breast cancer. Second, to make their case, the early backers would have to undo the collateral damage that had been inflicted on X-rays in the late 1950s and early 1960s (recounted in chapter 5). The timing of its arrival—just after the introduction of cobalt radiotherapy—did not help. The massive new cobalt machines, showcased in stories celebrating the Atoms for Peace program, could arouse as much terror as awe. They were, after all, designed to exploit the destructive power of radiation—and they looked the part. Mammography would need to distance itself from this imagery, to cultivate instead the more “magical” side of X-rays, their ability to “see” inside the body, to catch potential tumors before they could be felt by the human hand. The need to detach itself from wider concerns about radiation exposures became more compelling in the mid-1960s, just as the first articles about mammography started to appear in women’s magazines. The press began to report a resistance, in the popular culture, to the growth of nuclear energy as a source of electric power. (The 1960s turned out to be a peak decade for the industry; utility companies placed orders for twenty plants in 1966, and, in 1967, for thirty-one more.) Once again, it was the AEC that managed the public relations angle, extolling the virtues of nuclear energy and playing down accompanying fears. But despite its best efforts, apprehensions grew, as the public became aware of the more disturbing ramifications of nuclear power. The problems of the disposal of spent fuels conjured up visions of a permanently poisoned water and food supply. The fear of nuclear accidents—or sabotage—rekindled all the old anxieties about fallout. It didn’t help that the media occasionally found the drama of weapons imagery irresistible. “Atomic Bomb in the Land of Coal” was the title of an article in Fortune magazine describing the TVA’s decision to build a nuclear power plant in Alabama.25 Such metaphors undermined the official representation of atomic energy as a clean, safe, and reliable alternative to coal, reinforcing instead its unpredictable destructive potential. Many remained unconvinced by the industry’s sunny reassurances. Women, in particular, were identified as “lacking in nuclear enthusiasm.”26 Public relations campaigns acknowledged the need to address the problem,
but instead of comparing the costs and benefits of coal and nuclear-powered electricity, they showcased women in the familiar role of consumer, linking their insatiable demand for domestic appliances to the need for energy to run them. A promotional film called The Atom and Eve, produced by Connecticut Yankee Power in 1965, featured a delirious postwar housewife in her dream home, infatuated by her self-cleaning stove, her freezer, and her spray iron. As a demonstration of “women’s intoxicated dependence on nuclear power,”27 the scene has more than a whiff of finger-pointing below its sexist imagery. There is a suggestion that the new domestic technologies have been bestowed upon women as gifts rather than as products that would be available only for as long as they remained profitable. The restive response of women who did not fall into line with the marketing plan was a worrying sign. Mammography, for its own protection, had to be cordoned off from these wider apprehensions. Relabeling X-rays as “mammograms” signaled the beginning of a separation process that would eventually redefine the common perception of the new technique. Cut off from the longer history of “unnatural” radiation, public debates about mammography would, after setbacks in the 1970s, make little connection between the radiation involved in annual screening and radiation from any other source. The first test of the new screening technique followed after several decades of piecemeal developments by lone researchers at many different institutions (including important early contributions in the 1930s by Stafford Warren, a radiologist at Rochester Memorial Hospital before he moved on to the Manhattan Project and the AEC). In the mid-1960s, Dr. Philip Strax, at the Guttman Institute in New York, pulled together the latest technical improvements and, with a study design provided by the statistician Sam Shapiro, mounted the first randomized clinical trial designed to measure the impact of mammography on mortality. He enrolled 62,000 women members of the Health Insurance Plan (HIP) in New York, dividing them into a group that received screening mammography and a group that, for the most part, did not. The results pointed to a reduction in the number of deaths from breast cancer in women over fifty who had received regular screening. For women under fifty, however, the results were equivocal. Shortly afterward, the National Cancer Institute joined forces with the American Cancer Society to launch the first national trial—the Breast Cancer Detection Demonstration Projects (BCDDP)—to test the safety and effectiveness of the new technique.28 The appeal went out to women across the country to participate; they were to be screened free of charge for two
years. Their mammograms would, it was hoped, identify tumors well before they showed any clinical symptoms (at a stage when, it was argued, they were still “curable”). Where before, the fear of medical X-rays had been used to trump the fear of fallout, now it was the fear of cancer that was used to trump the fear of X-rays. Advocates of the new screening technique had their work cut out for them. Not only did they have to overcome well-established fears already associated with a cancer diagnosis—of hospitals, surgery, losing a breast, pain, disfigurement, death (not to mention runaway medical expenses). They also had to grapple with the fear that mammography itself might cause cancer, as fallout had done. A few years after the start of the NCI/ACS demonstration project, John Bailar, the editor of the NCI journal at the time, raised the possibility that the risks of mammography for women under fifty might outweigh its benefits. Although he acknowledged that a great deal of uncertainty clouded the subject, he maintained that “abundant experimental evidence exists that ionizing radiation can cause breast cancer.”29 He estimated the additional breast cancers that mammography might produce over time. Others joined in. An NCI committee chaired by the radiobiologist Arthur Upton estimated that mammography increased a woman’s lifetime risk of breast cancer by one percent (equivalent to six preventable breast cancers per million women, after a latency period of ten years).30 But Upton’s estimate was based on the assumption that the maximum radiation exposure was limited to one rad. In actual practice, a commentator in Science acknowledged in 1976, that is seldom the case. X-ray equipment varies; so does the amount of irradiation necessary to get a good picture in different circumstances. Even at the 27 NCI-ACS centers, where equipment is said to be carefully calibrated and well-monitored, exposure ranges from 0.3 to 6.5 rads per examination. There is little doubt that radiologists may use more. All of which is to say that, for the majority of women, the 1-rad maximum does not apply, and the amount of radiation exposure, especially for the woman who has annual mammograms, may well come close to dosages that are known to be hazardous.31
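To give a sense of the cumulative exposures at issue, the sketch below simply multiplies the per-examination range quoted above by an assumed twenty years of annual screening; the twenty-year span is an assumption introduced here purely for illustration, not a figure drawn from the studies themselves.

# Illustrative arithmetic only: cumulative exposure from annual mammograms,
# using the 0.3-6.5 rad per-examination range quoted above and an assumed
# twenty-year span of yearly screening (the span is an assumption of this sketch).
years = 20
low_per_exam, high_per_exam = 0.3, 6.5   # rads of skin exposure per examination
print(low_per_exam * years, "to", high_per_exam * years, "rads")   # 6.0 to 130.0 rads

The point of the sketch is simply that even modest per-examination doses compound over decades of routine screening, which is what the warning about the woman who has annual mammograms turns on.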
The hue and cry raised by Bailar, Upton, and a few others took its toll. The press fanned the flames of alarm, asking questions—“Breast Cancer: Does One Exam Cause a Tumor?”32—that had no clear answers. The possibility that the new procedure was more likely to cause cancers than to save lives among women under fifty hit a nerve already primed to respond. The
number of women willing to come forward for their annual mammogram dropped markedly. As one cancer historian noted, “Hospitals from coast to coast reported overnight drops in mammography of as much as 70 percent.”33 Screening boosters had to redouble their efforts, leaning hard on primary care physicians as well as on public channels of communication. The prospect of mass defection by American women added a sense of urgency to efforts already under way to drive down the levels of radiation involved. The BCDDP hoped to accelerate the use of technical innovations (such as increasing the amount of filtration in the radiation beam, using more sensitive film, and other improvements) that could reduce overall exposures. Physicists brought in to measure exposures at participating BCDDP institutions reported that between May 1975 and December 1976, the average exposure to the skin surface fell from 3.6 rads to 1.4 rads.34 Directors of the NCI screening program could then claim that “radiation doses now used have been reduced to about a third of their old level.”35 But hard questions about the radiation risks of mammography remained unanswered, primarily because they were never properly asked. This has made it impossible to verify claims about reductions in radiation exposures over time or to make meaningful comparisons between one generation and the next. Some researchers have argued, for instance, that improvements in the quality of mammography introduced between the HIP and BCDDP studies achieved a “ten- to twenty-fold decrease in radiation dose.”36 Later commentators have asserted that innovations incorporated into screening equipment since the 1980s have “reduced radiation dosage by 50 percent.”37 There is no rigorous science to back up either claim. Radiation doses in mammography might have been truly hazardous or they might have been relatively harmless; we cannot say which. The BCDDP, which gave mammography a national identity, was not designed to measure its risks, only to assess its effectiveness as a diagnostic tool. This was in keeping with the broader history of exposures to man-made radiation recounted in this book. Safety concerns, where they were raised at all, were of secondary importance and of primarily retrospective interest. The failure to anticipate the risks of screening, to go after them prospectively, is another legacy of the Cold War. Demonstrating a pattern of response that is now familiar, short-term pragmatism won out over longer-term risk just as it had in the nuclear testing program. In both cases, expressions of interest in the true health consequences of exposure are fobbed off onto some unspecified future far down the road. Both are failures to address the possible long-term risks up front. They do not require any
active conspiracy on the part of policy makers, only a sustained indifference. The resulting voids are in themselves significant, shaping the public discourse by their absence, going unremarked, sometimes for decades, sometimes indefinitely. In the early days of mammography, besides the objections raised by Bailar and Upton, there was little evidence of concern. There was certainly no clamor for research on radiation risks coming from those in the best position to demand it. In the brief debate about the safety of mammography, neither the Federal Radiation Council nor the NCRP had anything to offer. This was clearly an issue they saw as beyond their jurisdiction. They were not going to give succor to the skeptics who, like John Bailar, raised exactly the sort of concern that should have engaged their interest. The fact that mammography, at its safest, might expose American women every year to a dose of radiation that was, on average, two or three times the NCRP’s own permissible maximum of 0.5 rad was not, apparently, enough to raise the alarm. If practicing physicians had steered clear of the fallout debate in the 1950s, physicists would now return the favor. Though they had the potential to inflict serious damage on the public perception of mammography, they chose to step aside. The question of radiation standards would not get in the way of a promising new technology. With the field cleared of scientific adversaries, the boosters of mammography found themselves in a favorable position. They could focus their attention on getting women to enroll in the demonstration project, in numbers sufficient to provide statistically meaningful results. To that end, the American Cancer Society mounted national outreach campaigns through local newspapers (and magazines like Consumer Reports) encouraging women to participate, even providing a convenient list of the screening centers that were open for business. A measure of the eventual success of these campaigns is the gradual shift in tone in the representation of radiation in women’s magazines. A question asked in the April 1962 issue of Good Housekeeping—“Must You Fear Medical X-Rays?”—gave way, more than a decade later in the same magazine, to the more assertive “Don’t Be Scared of Breast X-rays.” At roughly the same time, an article in Harper’s Bazaar insisted that “mammography [is] now safer than ever.”38 And indeed, screening mammography techniques had improved over the dozen or so years since the first large-scale trials began (even if radiation exposures still remained high). But making inroads on the question of safety was only half the battle. Proponents of screening mammography also had to prove that the
new procedure was effective, that it could improve the odds of surviving breast cancer. The standard-bearer for this part of the campaign would be “early detection.” Women were to be drawn into the medical fold with the promise that the earlier their disease could be discovered and treated, the better their chances of a cure. If cancers could be caught before the onset of symptoms, while women were still healthy, then the biological distinction between being cancer-free and harboring early-stage disease would, for practical purposes, disappear. Once they understood the advantages of periodic vigilance, women would be sure to submit to the screening test. Eventually, their anxieties about radiation would abate and they would come to accept the procedure as a routine part of their annual checkup, if not its centerpiece. “Early detection” had been the mantra of the American Cancer Society since its very first cancer control campaigns. Well before the Second World War, the ACS had mobilized volunteers across the country to serve in a “Women’s Field Army.” Their assignment was to familiarize women with the recommended protocol, that is, to get them to report, like military recruits, for regular medical check-ups that might detect cancers at the very earliest stage. With the advent of mammography, the ACS adopted the same blanket approach. Never mind that the new technology was flawed, that mammography would miss up to 20 percent of tumors and that many breast cancers would turn out to be fatal no matter when they were detected. This was a case of putting the best face forward, at whatever cost. By June 1976, the joint NCI/ACS Breast Cancer Demonstration Project had screened 263,000 women. Among them, mammography identified 1,597 breast cancers. But it also missed another 236 malignancies (about 13 percent of the total).39 The overlooked tumors had appeared between screenings, suggesting either fast-growing cancers or cancers that radiologists had, for one reason or another, failed to spot earlier. The technique, in other words, was not infallible; it had revealed its limitations, even in the Demonstration Project. But no admission of any shortcoming would be made. The public outreach program would make no concessions to accuracy that might give women a reason to hesitate. Nothing would be allowed to interrupt the gathering momentum that drove women to the threshold of cancer medicine and that would eventually draw many of them inside, where they would become compliant patients. Did the strategists of the mammography campaigns ever consider the consequences of these evasions? Or did they feel that they were fighting
such an uphill battle against the fear of X-rays and of nuclear power in general that to present anything less than unqualified support for the new technique would be to doom its chances? In the mix with paternalism, was there also a perceived need to oversell mammography in order to give it a fighting chance at success? Whatever the true motivations, the reluctance to tell the truth about the limitations of the technique has taken its toll on many thousands of American women—and on their radiologists and insurance companies. All have been caught up in extensive medical malpractice litigation. What was, at the time of the Natanson trial, exceptionally rare (a breast cancer malpractice suit involving a radiologist) has now become an all too frequent occurrence.40 In fact, breast cancer has become the single most common cause of medical malpractice claims in the United States. And radiologists are the specialists cited most often as defendants. This time, however, they are charged not with mistreating their patients but with misdiagnosing them. Most of the claims are based on delays in the diagnosis of breast cancer and the start of treatment, arising from the misreading or mishandling of mammograms.41 And three of every four claims brought against physicians involved now result in an award for the plaintiff. Despite the crippling burden this imposes on the practices of many radiologists, little has been done to correct women’s unrealistic expectations of mammography. Diagnostic radiologists are still being sacrificed to the greater good, that is, by the need to safeguard the reputation of the underlying technology. They are not to blame for its inadequacies. Yes, they can make mistakes in reading the films. But there is a fine line between human error and the finite ability of mammography to reveal to the naked eye every kind of suspect breast tissue. Once again, it is diagnostic radiologists who must take the hit for the technology’s shortcomings just as they had done half a century earlier when fears of medical X-rays were manipulated to provide cover for the dangers of radioactive fallout. Today’s concern may not involve safety but the need to maintain the position (and reputation) of mammography within the cancer hierarchy. The technique remains controversial; there are still some researchers who believe it does more harm than good.42 But the promotion of screening brooks no doubts. It continues to rely on the same mantra of early detection without adding any caveats about the technique’s fallibility. (Curiously, early detection never put in an appearance in the Natanson case. Natanson illustrates the difficulties of getting out of treatment rather than getting into it. And her ordeal could not have started any earlier than it did. Even if
screening mammography had been readily available, Natanson was much too young at the time of her diagnosis [thirty-four] for anyone to have recommended it.) In applying for “gold standard” status in the cancer establishment, mammography had clear advantages. It had friends in high places. Radiology was by then a fixed arm of the treatment triumvirate. Radiologists had well-oiled links with the X-ray and other nuclear equipment manufacturers; firms like GE and Picker (which supplied hospitals with their early cobalt machines) would now become suppliers of mammography machines. Their interests were well represented in Washington. Again, the question has to be asked, what alternative approaches to cancer diagnosis have been crowded out by the dominance of a technique that was more or less grandfathered in? Though mammography currently has no real rivals, other solutions have been pursued that take a very different approach to diagnosis. For example, in the 1970s, a surgeon and researcher named Otto Sartorius attempted to find a method of identifying malignancies by analyzing fluid taken from ducts in the breast (interest in his work has since passed to a handful of other researchers, including Dr. Nicholas Petrakis and Dr. Susan Love). The approach has subsequently broadened to consider a more comprehensive mapping of the breast—and its behavior—as a tool in diagnosis. It sets aside the ruling image of cancer as a foreign invader, an assumption so critical to our understanding of the disease that we hardly notice it. In its place is a more open-ended exploratory perspective that begins with the breast rather than with disease and that may lead to diagnoses and even treatments that are less invasive than those currently available. Although breast cell analysis poses considerable difficulties (especially in shaping it into an easily administered and universally applicable procedure), the approach has received little of the attention and research support that accompanied the early development of mammography. As long as it depends more on human skill than on expensive hardware, it cannot be so easily packaged as a commodity. Until then, it can never hope to win the critical support of the medical instrument and equipment sector that was so quick—and so primed—to exploit the potential it saw in mammography.
Chapter 8
Subdued by the System Cancer in the Courts, Compensation, and the Changing Concept of Risk
Radioactive fallout may have been the first environmental hazard to capture the public’s attention after the Second World War, but it was soon joined by a virtual tidal wave of others.1 The 1960s and 1970s witnessed revelations of massive exposures to chemical carcinogens, primarily at the workplace. The emergence of cancer among industrial workers reflected the injection of thousands of new chemical compounds into postwar industrial processes. Only a fraction of them had ever been tested for carcinogenicity.2 By 1968, the use of new organic chemicals had grown to 120 billion pounds a year, an increase of more than 160 percent over the previous decade. Among those chemicals was, for example, vinyl chloride, used extensively in the manufacture of plastics for plumbing and sewer pipes, insulation on electric wires, soft toys, shower curtains, and intravenous (I.V.) bags for fluids, among many other applications. Because it was so widely used, revelations of rare liver cancers that were traced to the compound reverberated across the country, hitting home at thousands of manufacturing plants.3 The exposure of industrial carcinogens like vinyl chloride altered the focus of the public debate on environmental hazards. Industry now became a target of suspicion and blame, as attention to new chemical hazards began to eclipse an interest in radiation. The Limited Test Ban Treaty of 1963, which sent nuclear testing underground, took the fallout debate along with it, at least for the time being. The columnist Stewart Alsop noted in 1967 that “in recent years there has been something like a conspiracy of silence about the threat of nuclear war.”4 Anxieties about a real war—Vietnam—displaced those
conjured up by a war on the horizon. With the relegation of a nuclear Armageddon to the back burner came a corresponding slackening of interest in the hazards of nuclear detonations. The tests, however, did not abate; by 1992, four times as many of them had been carried out underground as had been detonated in the atmosphere before 1963.5 Fallout may have acclimated the public to hazards with unintended consequences. But the global contamination that now worried the public most was environmental pollution. The fear of radioactivity that had emerged in the fallout debate before 1963 now attached itself to nuclear power, in particular to the threat of emissions from commercial power plants and their disposal of radioactive wastes. These concerns had been galvanized by, among other driving forces, the publication of Rachel Carson’s Silent Spring in 1962. The book brought home, to many for the first time, both the global consequences of human interventions in the natural world and the interconnectedness of all human, plant, and animal ecologies. Silent Spring contributed to what, by the late 1960s, had become a critical mass of citizen concern. The still unresolved question of fallout, aggravated by the spread of commercial nuclear power, now fed a gathering storm of anxiety about a host of industrial carcinogens that made workers sick and contaminated communities nearby. Finally, the federal government took action. In fact, the 1970s witnessed the emergence of a broad-based institutional response to the threat that environmental hazards represented. Calling upon legislation, the courts, regulatory agencies, and research, successive administrations brought the collective powers of government to bear on efforts to define and, wherever possible, to contain the risks that so many unnatural hazards seemed to pose. The mapping of what I consider a zone of tolerance for radiation paved the way for similar forms of accommodation with other hazards down the road. The solutions, in other words, did not aim at eliminating risks but only at domesticating them, bringing them into line with the overall objectives of national security and the American economy. However they were packaged, such measures remained, at heart, attempts to normalize the increasingly pervasive presence of toxins in our midst. The campaign got under way in 1970, with the establishment of the Environmental Protection Agency (EPA). The new agency formally recognized the widespread damage that human meddling inflicted on the planet and claimed it would address this head-on, taking on “responsibility for research, standard-setting, monitoring and enforcement with regard to five environmental hazards; air and water pollution, solid waste disposal,
radiation, and pesticides.”6 The passage, in the same year, of the Occupational Safety and Health Act, also marked a new interest on the part of the federal government in protecting the health of its workers and, at the same time, pointed to an erosion of the long-held belief in the workplace as exclusively private property, off-limits to government oversight. Running parallel with the heightened awareness of both environmental and occupational risks was a growing sense of the inability of postwar medical science to make any headway against the cancers associated with them. Lyndon Johnson, conscious of this dilemma, brought Cold War imagery to bear on the fight. The successful development of the atom bomb had demonstrated the superiority of American science put to work in defense of the nation. Now it should demonstrate that same superiority in defense of the nation’s health. When comparing the position of American and Soviet science, Johnson found the United States wanting, facing a “medical Sputnik.” In 1965, he announced “a worldwide war on disease.” A few years later, Kenneth Endicott, the director of the NCI, proposed an attack on cancer that would “emulate the Manhattan Project.”7 The cancer establishment duly reinvented itself with the passage, in 1971, of the National Cancer Act. As cancer mortality rates continued to climb, the new legislation sought to enhance the credibility of official efforts to eradicate the disease. It also gave the NCI greater independence from the National Institutes of Health—and direct access to the White House. President Nixon saw the War on Cancer as perhaps the last plank in Eisenhower’s “Atoms for Peace” program. “The time has come,” he announced, “when the same kind of concentrated effort that split the atom . . . should be turned toward conquering this dread disease.”8 By the end of the decade, Congress had approved an almost threefold rise in research money allocated to the NCI. As important as legislative changes were social trends that altered the public’s relationship to authority—of every kind. The rise of social activism among women, consumers, health advocates, and, not least, environmentalists began to chip away at the entrenched privileges of power. The unquestioned loyalty and obedience that Cold War conformity had demanded now gave way to a host of fragmented special interests, each with its own social and political agenda. Across-the-board grievances were publicly aired, many for the first time. Cancer was itself a beneficiary of this move toward greater openness. It finally renounced its outlaw status and entered the public domain. In 1975, Rogers Morton, the U.S. secretary of the interior, went public with his own diagnosis of prostate cancer, becoming the most
highly placed government official ever to do so. Rose Kushner became a nationally known breast cancer advocate, publishing her landmark book Breast Cancer: A Personal History and an Investigative Report. The actress Lee Grant hosted a television documentary about breast cancer called “Why Me?” All three events were reported in the New York Times. Finally, with the election of Jimmy Carter in 1976 came an administration more receptive to the problems of environmental exposures. Though he did not represent a radical shift in outlook, Carter’s willingness at least to examine the evidence in public opened the door, however briefly, to a fuller disclosure of the world of environmental hazards. So pervasive was the assault on industrial carcinogens on his watch that, for a brief moment, many believed society would make real gains in cancer prevention. Reflecting the spirit of the times, the surgeon general expressed the view that up to 20 percent of all cancer deaths might be linked to chemicals and other hazards on the job.9 This would turn out to be the high-water mark for both intervention—and prevention. Emblematic of the change in outlook was the long-running saga of the mineral fiber asbestos, which caught both manufacturers and federal agencies in its net.10 Like vinyl chloride, asbestos made its way into hundreds of products—military, industrial, and domestic. As a fire-proofing and insulating material, it proved to be invaluable, used in everything from acoustic tiles to brake linings. Between the 1940s and the early 1970s, the fiber was installed in thousands of navy ships. It was also commonly used in building construction and high school chemistry labs. Manufacturers supplied the same material to public and private clients alike. Exposure to asbestos caused lung cancers as well as mesothelioma, a rare cancer of the lining of the lungs. By the early 1970s, it was clear that asbestos workers were dying at a rate that was seven times higher than expected. Beginning in 1966, what would become an avalanche of litigation against producers of the material led to revelations of corporate negligence and deception that recalled the government’s own handling of fallout a decade earlier. The hazards of asbestos, like those of radiation, had, in fact, been known to the industry for decades (it was first suspected to be carcinogenic in 1935). The litigants’ charge that their employers had failed to warn them of risks or to take action to protect them from harm has a familiar ring. What distinguished these claims from those filed by downwinders was their backing by organized labor. Unions could draw upon financial, legal, and administrative resources that were well beyond the reach of the resident victims of nuclear testing in the Southwest. And they were already
primed for the task by their experience with radiation; a few key union leaders understood the assault of postwar chemicals as an extension of the earlier infiltration of radioactivity into the workplace.11 They also understood that the number of litigants among industrial workers was potentially huge. Occupational hazards were now geographically widespread, integral to a wide range of industrial processes, reflecting the versatility of many suspected carcinogens. Farm workers were as much at risk from arsenic as were copper and lead smelters. Painters and printers were as likely to get cancer from exposure to benzene as were rubber and petroleum workers. Radioactivity had shown a similar versatility, finding, in short order, thousands of applications in industry and agriculture as well as in medicine. Toxic chemicals were on their way to becoming every bit as ubiquitous. They too were often hidden from view, embedded in products that betrayed no sign of them. Industries facing major compensation claims for illnesses and deaths soon turned to the same argument that Cold War strategists had relied upon in their own dismissal of fallout two decades earlier. The old debate on low-level exposures was given a second wind by the “outing” of cancers among industrial workers. Risk was relative, the chemical manufacturers insisted, just as AEC physicists had done before them. Cancer-causing chemicals were poisons like any other. It was all a matter of determining the dose at which they became toxic; below that dose, no harm done. Of course, there was no convincing evidence to back up this claim. The complexity of exposures coupled with the long latency periods of most cancers made that impossible. But once again, scientific indeterminacy became a useful delaying tactic and smokescreen, allowing the underlying conflicts to remain safely in the shadows. Attention turned to the business of setting acceptable exposure levels, a process that became a critical feature of collective bargaining. Industrial workers, after all, did not want to jeopardize their livelihoods any more than their health; the former, understandably, often had the more immediate claim on their attention. Inevitably, the practice of negotiating acceptable levels of exposure, although undertaken by labor with an imperfect grasp of the true risks involved, helped establish arbitration as a legitimate response to the threat of hazards, underwriting support for the idea of “tolerable risks.” This would dilute and delay the demands for safe alternatives. Complicating the surge of industrial protests over the use of dangerous chemicals was the slow realization that the hazards experienced by employees at their worksites were not, as had once been thought, safely
cordoned off from the general public. Toxic waste materials were routinely released into rivers and streams abutting industrial production sites and eventually found their way into the local soil and air as well, with often deadly consequences (the saga of the Love Canal in New York State is the classic example of the long-run history of environmental contamination). In many cases, traces of industrial toxins found their way into the finished products themselves. Once sent out into the world, dispatched to market, they became Trojan horses, carriers of harmful exposures waiting to happen. Plastic products made with PVC, for instance, trapped vinyl chloride gas inside them; if accidentally released while in use or when damaged, the gas could be harmful. PVC also produced toxic fumes when burned.12 There was no way to protect the consumer from the myriad accidental ruptures, fires, and breakages that plastics were heir to. The reach of potential harm was, it turned out, limitless. Americans were all in the same boat. At the heart of this conflict was the contested role of government. How should it mediate between the competing interests of industries and citizens? How far could it intervene in corporate investment strategies, dictating the terms of production and altering basic cost/benefit assumptions? Or, from the people’s perspective, how much protection in advance or compensation in retrospect could citizens legitimately expect? The government’s own actions supplied the answer. When up against the same challenge in the 1950s and beyond, it had shown its disregard for the public trust. At every turn, its own definition of “the greater good” condoned the sacrifice of innocent civilians. If the guardians of national security could not be held to higher ethical standards, was it reasonable to demand more from industry? With the interests of corporate America increasingly represented by well-heeled lobbyists with direct access to legislators, where would such demands come from? The political alliances that first emerged with the rise of the military-industrial complex in the 1950s and ’60s had continued to deepen. The odds for significant change in the status quo were, therefore, poor. But working against this trend was the arrival on the scene of another critical player in the cancer debates—the press. By the late 1970s, after the work of investigative journalists had uncovered massive corruption in the Nixon White House, reporters began to take a much greater interest in the politics of fallout. By 1979, their engagement with the subject was unmistakable. In January of that year, the Washington Post kicked things off with a story revealing that federal health officials had known about the excess leukemia deaths among Utah residents as early as 1965 (the article
cited the unpublished evidence gathered by Edward Weiss).13 Further Freedom of Information requests from the Post yielded more documents and revelations of official hypocrisy. In February, the New England Journal of Medicine published the study undertaken by Joseph Lyon at the University of Utah, the first independent report from the field and the first published account to contradict the official position on fallout. The study pointed to excess leukemias among downwinders during and immediately following the most active testing period. Its publication anticipated by just a month the core meltdown at Three Mile Island nuclear power plant near Harrisburg, Pennsylvania. The accident alarmed a public that was now more sensitized than ever to the threat of cancer; inevitably it rekindled anxieties about nuclear energy that had lain dormant for most of a decade. In July, the New York Times ran an extensive five-part front-page series on the hazards of radiation, introducing the public to the real complexity of the issues. The series took on the controversial subject of low-level radiation and discussed the increasing use of medical X-rays and the exposure of hundreds of thousands of Americans to radiation at work.14

Irene H. Allen et al. v. United States
A month later, in August 1979, a group of twenty-four claimants filed a major lawsuit against the government. All the plaintiffs in Irene H. Allen v. United States had lived downwind of the Nevada Test Site through the 1950s. They described themselves as “victims of leukemia, cancer or other radiation-caused diseases or illnesses.” In their suit, they charged the government with negligence, with failing to take “reasonable care” in warning them of the hazards of radioactive fallout. Coverage of the trial proceedings, which lasted for two and a half years, helped to spread awareness of radiation as a highly contested subject on which there was no scientific consensus. The adversarial nature of courtroom testimony added drama to an already awkward set of circumstances. Here was the government apparently willing to fight its own citizens, questioning their motives, minimizing their suffering, and discounting their stories of official betrayal, all on the record. The callousness of its legal posture must have shocked more than a few onlookers. But the government had been in this position before and was, by the time of the Allen trial, already a seasoned defendant. An estimated quarter of a million soldiers had been exposed to nuclear radiation at least six years before the domestic testing program was launched. Thousands of troops had been assigned to extensive cleanup operations in
Hiroshima and Nagasaki immediately after the bombings there in August 1945. Hundreds more had been sent, at the same time, to man the nuclear test stations in the Pacific Ocean. None had received safety instructions or been given any protective gear or monitoring badges to wear while on duty. Their cancers began to appear in the late 1960s and early 1970s, some years ahead of those that emerged in Utah and Nevada. Among the victims were several who applied to the Veterans’ Administration for service-related benefits to cover mounting medical bills and loss of income. They were all turned down.15 The official response, delivered in a 1979 White House letter to veterans and widows, was that the maximum radiation dose that any serviceman could have received in Japan between September 1945 and July 1946, was just one rem.16 The claim is extraordinary, given what was known by 1979 from evidence gathered by the Atomic Bomb Casualty Commission. But its duplicity is even more disturbing in the context of the estimated 2,000 radiation experiments carried out in VA hospitals, in secret, through the 1970s. Some of these involved veteran/subjects with multiple myeloma, an uncommon cancer found among several GIs stationed at Hiroshima and Nagasaki.17 The VA response—a blanket denial—demonstrated a bureaucracy that was hardened to the task. But until the filing of the Allen suit, the VA had dealt with veterans submitting claims one at a time. Now the government faced a more concerted challenge: the collective grievances of a group of civilians that included a quartet of innocent American children, all of them victims of cancer. Handpicked from a larger group of 1,200 potential plaintiffs, the 24 who were chosen—half male and half female—represented many of the malignancies that had appeared with greater frequency since above-ground testing had begun in 1951. Accordingly, leukemias, the cancers with the shortest latency period, dominated the group. Every other plaintiff stood for one of a dozen other cancers (including breast, lung, kidney, prostate, and thyroid). Only five of the plaintiffs were still living at the time of the trial; the others were represented by surviving family members. Almost everything about the bellwether case was out of the ordinary. Those bringing a tort action against the government were denied a trial by jury. The case was brought before a single judge, Bruce S. Jenkins of the U.S. District Court for the District of Utah. He called a total of ninety-eight witnesses to testify over a three-month period, yielding a transcript of almost 7,000 pages. Dispatches from the courtroom showcased the work of the cancer epidemiologists, introducing many Americans to the issues
Figure 8.1 Attorneys for the plaintiff downwinders in the case of Irene Allen v. United States. From left to right: Stewart L. Udall, Ralph E. Hunsaker, David M. Bell, J. MacArthur Wright, and Dale Haralson. In pretrial hearings, Haralson remarked: “One has to wonder about the justice of a position that says, ‘We convinced the people they were safe; now bar their death claims because they believed it.’” Photograph: Joe Bauman, Salt Lake City, Utah.
involved and to the observational science that was attempting to make sense of them. Among the witnesses called (or subpoenaed) to the stand were many of the authors of fallout research papers. In exacting detail, scientists on both sides of the fence presented voluminous evidence supporting—or rejecting—the existence of a clear link between exposure to radioactive fallout and a wide range of malignancies. To present the evidence for such a link, the prosecution called, among many others, Joseph Lyon from the University of Utah and John Gofman, by then an emeritus professor of cell biology at the University of California who had worked on the Manhattan Project. Rebuttal witnesses (arguing that no link had been conclusively proved) represented the collective might of government science and included Charles Land, an epidemiologist at the NCI; Glyn Caldwell, chief of the Cancer Branch at the Centers for Disease Control; Clark Heath from the Public Health Service; and Gordon Dunning from the AEC Division of Biology and Medicine.
Also appearing for the government was Eugene Saenger, whose experiments with whole-body radiation on cancer patients had been brought to a halt a decade earlier. His professional standing unimpaired, Saenger argued in court that the doses of fallout radiation received by the plaintiffs were too small to have caused their cancers. The government estimated that the Allen plaintiff with thyroid cancer had been exposed to 30 rads. When Saenger was asked whether he would change his mind about the toxicity of the dose if it had been ten times higher, he said “no.”18 But he knew from his own experience that a dose of 300 rads could, in fact, be fatal, causing death from acute radiation sickness if not from thyroid cancer. In entering misleading testimony into the public record, Saenger served as a conduit for the philosophy of the secret experiments, allowing it to color the mainstream representation of cancer. His cavalier approach to radiation doses reflected an old experimental habit. If a subject survived exposure at 150 rads, then the dose in the next round would be ratcheted up a notch to 200 rads, and so on until the limits of survival were reached. It was an attitude that bred a tolerance for hazardous exposures that would, in time, erode into complacence. Despite the preponderance of government witnesses like Saenger, the courtroom in the Allen trial managed, to a remarkable degree, to neutralize the vast disparity of resources behind scientists on the two sides. The limitations of the scientific evidence created opportunities for both sides. For example, Caldwell’s study of leukemias among military observers of the 1957 Smoky Test could draw upon measured exposures to radiation because the soldiers had been wearing film badges, for exactly this purpose. But, as the prosecuting attorneys pointed out, the badges did not register the contribution of what were called “internal emitters,” that is, radiation lodged in the body that might have been ingested by inhaling radioactive particles or by drinking milk from contaminated cows or by eating contaminated food. There was, accordingly, no way to know how much radiation had escaped measurement altogether. The problem of small numbers added further scope for speculation. The Caldwell study found nine leukemias among its sample of 3,300 veterans, set against an expected 3.5 leukemias for the group as a whole. The findings in the Lyon study were of a similar order—thirty-two deaths where thirteen had been expected. Though the differences might have been statistically significant, the relative rarity of leukemias made it harder to state with certainty that radiation had been responsible for the additional malignancies.
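To make the “problem of small numbers” concrete, the rough calculation below, which is not drawn from the trial record or from the studies themselves, treats the observed leukemia counts as chance fluctuations around the expected number of cases under a simple Poisson model; the experts on both sides would have attached many qualifications to so bare an assumption.

# Illustrative only: how often chance alone would produce excesses as large as
# those reported in the Caldwell and Lyon studies, assuming a simple Poisson
# model of the expected case counts.
from math import exp, factorial

def poisson_tail(observed, expected):
    # Probability of seeing at least `observed` cases when `expected` are predicted by chance.
    below = sum(expected ** k * exp(-expected) / factorial(k) for k in range(observed))
    return 1.0 - below

# Caldwell study: 9 leukemias observed among 3,300 veterans against 3.5 expected.
print(poisson_tail(9, 3.5))     # roughly 0.01
# Lyon study: 32 leukemia deaths observed among downwinders against 13 expected.
print(poisson_tail(32, 13.0))   # well under one in ten thousand

On this crude reckoning, excesses of that size would only rarely arise from chance alone; what the small absolute numbers could not settle was whether radiation had in fact caused any particular plaintiff’s malignancy.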
Charles Land from the NCI argued that the excess cancers could just as easily have been the result of chance. Inevitably, the inherent indeterminacy of cancer causality drove much of the testimony. As one of the expert witnesses reminded the court, when a cancer is induced by ionizing radiation, the structural and functional features of the cancer cells, and the gross cancer itself, show nothing specific to ionizing radiation. Once established, a radiation-induced cancer cannot be distinguished from a cancer of the same organ arising from the unknown causes we so commonly lump together as “spontaneous.” Spontaneous is an elegant term for describing our ignorance of the cause (italics in original). The fact that radiation-induced cancers cannot be distinguished from other cancers itself indicates that there are profound common features among cancers, likely far more important than the differences.19
The speculative nature of causality opened the door in the courtroom to exhaustive presentations of the scientific evidence on the subject from all perspectives. The transcript, in fact, probably constitutes the single most comprehensive representation in one place of all the key players and the state-of-the-art science on the subject in the early 1980s. It also embodies the state of fallout politics with its parade of government scientists toeing the party line. Karl Morgan was a physicist for over thirty years with the Oak Ridge National Laboratory and a founding editor of the journal Health Physics. Like John Gofman, he was an insider-turned-outsider who agreed to appear for the plaintiffs at the Allen trial. He knew all about political correctness. He understood that protecting employees and members of the public from the harmful effects of exposure to ionizing radiation constituted only a secondary objective of the nuclear-industrial complex: “In exchange for the generous economic support given to our profession, we were expected to present favorable testimony in court and congressional hearings. It was assumed we would depreciate radiation injury. We became obligated to serve as convincing expert witnesses to prevent employees and members of the public who suffered radiation injury from receiving just compensation.”20
In the end, what influenced the judge’s opinion may have been less the weight of scientific evidence than the repeated expression of official indifference that leached out of almost every discussion. The judge found that
the information provided to residents living off-site had been “woefully deficient.” The local population had not been informed of the risks associated with fallout; they had not been given advice about preventive measures they might have taken to protect themselves. And the few warnings they were given had failed to provide enough information to be useful and effective. Had the government “accurately monitored the individual exposures in off-site communities at the time of the tests,” Judge Jenkins noted in his opinion, “accurate estimates of actual dosage to individual persons could have been achieved. The need for particular precautions could have been evaluated with confidence.”21 Further, had the existing monitoring programs continued for weeks or months, rather than days, they would have provided much-needed information about the chronic hazards of radiation, whose contamination outlasted by years any efforts to measure it. But even now, he lamented, “we have more direct data concerning the amount of strontium-90 deposited in the bones of the people of Nepal, Norway or Australia than we have concerning residents of St. George, Cedar City or Fredonia.” Equally damning, from early on in the testing program, official operatives “drew distinctions between the radiation exposure standards applicable to workers at the Test Site and the radiation levels deemed acceptable in the off-site communities.” Government had, in other words, implicitly acknowledged the hazards of radiation by taking precautions to protect its own atomic workers (including its scientists) while abandoning the civilian population to its own fate.

The circumstances recalled by Norma Jean Pollitt, the one plaintiff with breast cancer in the Allen trial, typified the downwinder experience. She had witnessed the atmospheric tests as a child in Utah, between the ages of nine and sixteen, and was diagnosed with cancer as a young mother and teacher, in 1978, at the age of thirty-six. (Every detail of her treatment ordeal—and every corresponding medical bill—is included among the trial papers.)22 Pollitt testified that her local radio station announced every atomic test shortly before it took place but did not warn listeners to stay inside during the blast or to remove and destroy any clothing exposed to fallout or to avoid drinking milk produced by cows grazing at the time of the blasts (many rural families still depended on milk from “backyard cows”). On the contrary, Pollitt and her family looked forward to each test with the anticipation one might feel for an approaching comet or shooting star. As she described it, “My father would wake us up . . . about four o’clock in the morning and we would all go into the living room which was the window
facing west and we would watch for the flash. . . . It was on the radio at the time, they would give an actual countdown for it. . . . Another time we were on a trip . . . and there was an atomic test that morning and as we laid in our sleeping bags, we watched the atomic cloud drift over us.” When asked whether she was ever told that there was any danger associated with these clouds she replied, “No, in fact we were assured that they were safe. That’s why we got up and watched them. It was quite an exciting thing for the quiet life of southern Utah to have something like that happen.”23

Pollitt was one of ten plaintiffs awarded damages in the judgment handed down by Judge Jenkins in May 1984. The others were the eight victims of leukemia and the one plaintiff with thyroid cancer. In making his determination, the judge leaned heavily on evidence of the long-term health effects of radiation on the survivors of Hiroshima and Nagasaki. Gathered in Japan by the Atomic Bomb Casualty Commission, the extensive data had been codified, evaluated, and published in what came to be known as the BEIR (Biological Effects of Ionizing Radiation) report.24 This was the reference Jenkins cited. But while his opinion makes no explicit comparisons between the Japanese and American victims of nuclear weapons, the implication is clear. It was legitimate to view the fate of the Japanese bomb victims as a harbinger of what would befall their American counterparts. The BEIR report published in 1980 pointed to the emergence of leukemias, thyroid, and breast cancers. Jenkins’s awards followed accordingly.

Norma Pollitt didn’t survive long enough after the trial to receive any compensation. Sadly, none of her co-plaintiffs fared any better. The government, undaunted by the adverse publicity surrounding the judgment, appealed the decision immediately. Two years later, the Tenth Circuit Court of Appeals reversed the judgment on the grounds that actions of the federal government were protected, as a king’s would be, by a sovereign immunity that granted it wide discretionary powers beyond the reach of legal challenge. In one last attempt to get another hearing, the plaintiffs appealed to the Supreme Court. In 1988, the high court justices turned them down, refusing to revisit the case, thereby exhausting the plaintiffs’ legal options.

But though the issue of compensation died in the courts, it remained alive in Congress. Over the next several years, various committee hearings rehashed the evidence and, intermittently, drafted compensation bills that enjoyed brief flurries of attention. Formal recognition of federal responsibility finally arrived in 1990, with the passage of the Radiation Exposure Compensation Act (RECA).25 The new legislation authorized “compassionate
payments to individuals who contracted certain cancers and other serious diseases as a result of their exposure to radiation.” The awards were fixed, set at $50,000 for resident downwinders, $75,000 for workers who participated in atmospheric nuclear weapons tests, and $100,000 for uranium miners. In determining which cancers would be eligible for compensation under the act, the Department of Justice, administering the program, made use of the most recent BEIR estimates, just as Judge Jenkins had done. A fifth edition of the BEIR report coincided with the passage of RECA in 1990. BEIR V had the advantage of another ten years of follow-up among the Japanese bomb survivors. The cancer risk estimates it published were several times higher than those published a decade earlier in the edition that Judge Jenkins had consulted (BEIR III). The increases it reported were attributed to three main sources. First, estimates of the original doses received by the bomb victims were revised upwards. Second, with the passage of time, the number of excess cancer deaths among atomic bomb survivors had continued to rise, especially among those who had been exposed at a young age. And third, deaths from cancers other than leukemia had also increased at a steeper rate than had been anticipated by the earlier BEIR report.26 Partly as a result of these revisions, RECA’s 1990 list of eligible diseases included thirteen separate cancers. Legislation passed ten years later added a further seven malignancies to the list. RECA, inaugurated at the end of the Cold War, demonstrated that a compensation program could be cost-effective. A modest payout could offer reparations while at the same time setting clear limits to government liability. In carefully circumscribing its eligibility criteria and its awards, RECA essentially followed the pattern set by the Allen lawsuit. The court case sought restitution for damages already inflicted. It was looking for retrospective acknowledgment of harm done in the past, not pressing for radical change in the administration’s approach to cancer going forward. The plaintiffs did not demand free medical screening or medical care, either for themselves or for those with similar radiation-exposure histories who might face cancer diagnoses down the line. Nor did they seek compensation for economic losses. Their demands were, in fact, quite limited. Though most of the testimony establishing government negligence and the credibility of the science applied to all the plaintiffs, the judge’s decisions were rendered individually. This had the effect of playing down the collective interests—and the collective power—of the plaintiffs as a group. They did not speak with one voice to demand the control of radiation nor
its elimination from the environment. The judgments they sought were strictly monetary. Each represented a separate transaction between a citizen and his or her government. RECA incorporated all these restrictions and added some more of its own. Most of them served to cordon off groups of cancer victims, one from another. Although downwinders and uranium miners were both eligible under the original RECA criteria, for the most part each identifiable subgroup had to implement its own grievance procedures. “Atomic” veterans, those nuclear guinea pigs who had served as spectators to blasts in the Nevada desert, were forced to apply for compensation to the Veterans Administration rather than to the Department of Justice. Keeping these groups at arm’s length from each other helped to stagger—and dilute— media coverage. Most stories about them attracted only local rather than national attention, limiting public outrage to those communities that were directly affected. Adding to the atomic discontents were the hundreds of thousands of workers at nuclear weapons facilities across the country. At the close of the Cold War, they too began to mobilize their own demands for compensation. As a group, they represented a different kind of threat. Because they were formally employed by private contractors, not by the government itself, their claims could not simply be waved aside under the banner of sovereign immunity. Unlike veterans, they were legally in a position to file lawsuits against the government, potentially thousands of them. So their claims had to be taken seriously if the nightmare of endless litigation was to be avoided. A 1999 congressional investigation lamented the long history of official indifference to their plight: “The men and women who have worked in this facility helped the United States win the cold war and now help us keep the peace. We recognize and won’t forget our obligation to them.”27 A year later, Congress passed the Energy Employees Occupational Illness Compensation Program Act (EEOICPA). The legislation entitled each successful claimant to $150,000 plus medical costs. RECA and EEOICPA were expected to achieve what litigation could not, that is, to bring the challenges of nuclear fallout under control. In accepting the terms of the compensation program, applicants renounced all further claims against the government. There were to be no more expensive ordeals like Irene Allen v. United States. Compensation bought immunity. It might also help to win back the public’s trust.28 There was a role for science to play here as well. Properly applied, it could pacify an indignant public by providing, at long last, an uncensored
look at all the available evidence. Accordingly, in the early 1980s, while Judge Jenkins was hearing testimony in the Allen trial, Congress mandated an investigation of radioactive iodine released by the above-ground tests thirty years earlier. This was to be a major report on the effects of radiation which, with guaranteed public funding, would be able to resolve many outstanding issues once and for all. Alas, the NCI responded with a study design that, in taking care to limit government liability, left many questions unanswered. As with so many earlier efforts, this one also took the form of a search for hypothetical dose estimates rather than attempting to identify the actual health effects in the actual exposed population. Expected by many to supply broad, possibly conclusive findings on the health risks of fallout, the new study focused instead on exposures to iodine-131 only; this was an isotope associated with thyroid cancer, a disease with an exceptionally low mortality rate. The study turned its back on the more lethal radioactive isotopes released in fallout—and on the more fatal cancers associated with them. Its goals were, in a word, extremely modest.

But even this politically cautious, risk-averse approach to the subject did not save the project from spectacular foot dragging. Publication of its final results was delayed so often and for so long that Richard Klausner, by 1995 the director of the NCI, was forced to issue a public apology. Fifteen years after the project’s start, in 1997, “the sense was,” according to Dr. Bruce Wachholz, chief of the radiation effects branch at the NCI, “that nobody was really terribly interested in this.”29 Finally published almost a decade after the end of the Cold War, the study findings did not include any estimates of the numbers of thyroid cancers linked to radioactive iodine. Nor did they include any estimates of the risks associated with exposures to iodine-131, something that Congress had explicitly asked for. The report did nothing that might strengthen the perceived link between government and thyroid cancers going forward. To the public, it offered only a lame word of advice: “Persons concerned about fallout exposure should consult a health professional.” The results were indicative of the larger inhibitions shaping the research program—the fear of lawsuits and open-ended liability and the fear of direct public involvement in the provision of medical care. One reviewer wrote of the larger failure: “No satisfactory report has been made of the true extent of environmental contamination of the continental United States or of the health effects attributable to fallout from atmospheric nuclear testing. Given social, legal, scientific, and political realities, such a report may
never emerge.”30 Another critic put it this way: “Not one dime has been spent conducting research or medical follow-up on any of the 458,290 Americans that the Department of Energy lists as having been present at one or more of the atmospheric bomb tests.”31 The NCI results had, nevertheless, at least mapped the extent of contamination. Given the documented exposures, it was hard, even for Richard Klausner, to avoid the inference in 1997 that the atomic tests would “very likely” be responsible for as many as 75,000 additional thyroid cancers, with up to 70 percent of them still to be diagnosed.32 By 1997, however, the world’s attention had been diverted to the 1986 Chernobyl accident and to the drama of its aftereffects. The twentieth anniversary of this explosion in 2006 was a newsworthy event and prompted extensive coverage of the thyroid disease that plagued its Ukrainian victims. Few Americans were ever aware that the fallout intentionally released over the Nevada desert in the 1950s had contained three to four times more radioactive iodine than had the accidental explosion in the Ukraine. Even fewer Americans understand that many more of these cancers are yet to be diagnosed and that all of them were preventable.

Smoking and the Rise of Victim Blaming
Cold War weapons policy was indifferent to cancer prevention. That its nuclear testing program had caused thousands of unnecessary deaths was unfortunate. But these deaths would not be allowed to interfere with the ongoing requirements of defense strategy or, of increasing importance (at least until the accident at Three Mile Island), with those of commercial nuclear power plants. Whether carcinogenic or not, radioactivity was here to stay, a cost of doing business in the nuclear age (RECA and EEOICPA were evidence that these costs could be contained). Cancer research, therefore, would do nothing to undermine it. If that meant turning its back on environmental contamination, so be it. There were other ways to get at cancer prevention. The smoking controversy pointed the way. It opened up a new perspective on the subservience of science to politics.33 The well-funded and wide-ranging strategies mobilized by the tobacco industry would be of great interest to the Atomic Energy Commission, which would face similar challenges of its own. Beginning in 1950, large-scale studies carried out in both Britain and the United States demonstrated a strong correlation between smoking and lung cancer. In work that put cancer epidemiology on the map, the findings
were impressive, reproducible, and hard to counter; smoking was responsible for the great majority of lung cancers.34 Of particular relevance here is the fact that several radioactive isotopes, such as polonium-210, feature on the list of suspected carcinogens in tobacco smoke. Whatever the specific agents at work, science had pinned down a toxic activity that was directly and irrefutably linked to cancer. And smoking affected a substantial proportion of the American population, cutting short thousands of working lives and imposing significant health care costs on the economy. But science, as the experience of radiation research has shown, was only as powerful as the interests that espoused it. And the moral high ground occupied by public health advocates was soon starved of oxygen without the critical backing of influential insiders. The epidemiological evidence was, after all, up against the combined interests of both the government and the tobacco industry. Tax revenues from the sale of cigarettes—and from the incomes of tobacco workers—were a godsend to both federal and state treasuries. Tobacco elected representatives to both houses of Congress (and both parties) from every leaf-growing state in the South— and intended to keep on doing so. Lobbyists with massive resources at their disposal were well placed to remind legislators of the blessings of tobacco. The hard evidence linking smoking to lung cancer was potentially a great spoiler. It threatened to break up an entrenched alliance whose renewable bounty bred an addiction of its own. With too many politicians beholden to tobacco and a government dependent upon the revenue streams that tobacco taxes generated, there was little likelihood that Congress would impose any radical change on the status quo. The probability that it would consider an outright ban on tobacco—something within its power to legislate—was close to zero. No, the official response would have to leave the basic structure of the industry intact. Instead of directly intervening to regulate the product itself, it would regulate information about the product (and to some degree, access to it). What might have been construed by an interventionist government as a public health crisis was, therefore, reconfigured as a market imperfection. The problem could be solved by eliminating the flaw rather than the hazard. The warning message from the surgeon general, the caveat that, after 1965, appeared on cigarette packs (“Caution: Cigarette smoking may be hazardous to your health”) would alert prospective buyers to the dangers of smoking. At a stroke, the gesture would transform smokers from potential cancer victims to informed consumers. They would be left to make their own risk-benefit calculations, weighing the fear of disease in the distant
future against the pleasures of immediate gratification. They could, in other words, choose to be victims. It would be their decision—and, consequently, their responsibility. In this formulation, future “product liability” suits brought by smokers with lung cancer against tobacco companies would be hard to win; no one had forced them to buy cigarettes. This change in the status of smokers represents a watershed moment, I believe, when private interests succeed in driving a permanent wedge between cancer prevention and public accountability. The transfer of responsibility from government to individuals occurs just as the AEC is beginning to grapple with inconvenient epidemiological evidence of its own, generated from within the Public Health Service and the AEC itself, pointing to elevated rates of cancers among downwinders. These findings would require special handling, too. From the 1960s on, the tobacco lobby never challenged the diagnoses of lung cancer among smokers. The evidence was just too overwhelming. What it did question was the underlying mechanism at work in the etiology of disease. Until science could satisfactorily explain the exact nature of the causal agent that linked them, the connection remained alleged, not proven. Once again, the attenuation of the smoking/cancer link by long years of latency between exposure and disease brought to light the vulnerability of epidemiology, whose research findings could not be explicitly demonstrated but could only be inferred. Statistical significance, however compelling to scientists, was harder to sell to the lay public. It created an opportunity for all kinds of mischief, an opportunity that was exploited as ruthlessly by the AEC in its denial of radiation hazards as by the tobacco industry in its own suppression of scientific evidence. Both had extensive access to the media, which they vigorously exploited. Their “scientific” broadsides were designed to override—or at the very least to cast doubt on—the more disinterested results of independent research. As a temporizing tactic, the subterfuge bought time, postponing interference to a far-off future. If the government took some tips from the tobacco industry’s playbook, tobacco occasionally returned the favor. Borrowing the idea of a “threshold” dose from radiation research (and, before that, from toxicology), some scientists took up the search for a “safe” cigarette or a “safe” level of smoking that would incur no risk of cancer. They too relied on “dose-response” studies that tried to identify “the smoke intake doses at which the risk of disease in smokers approaches that in nonsmokers.”35 The impetus—and funding—for this work came not from the tobacco industry but from inside the NCI, some of whose scientists believed that any reduction in the harm
inflicted by smoking was justified, given the difficulties of quitting. Other NCI scientists attacked the idea as a chimerical search for a “one-fanged rattlesnake.”36 The fact that the project was funded for almost a decade says a great deal about the grip of “relative risk” on the scientific imagination—and official policy. The “less hazardous” cigarette may not have succeeded but the broader idea of “safe” exposures to environmental toxins proved to be extremely serviceable. The need to accommodate first radiation and then smoking bred a tolerance for risk that has now become an accepted feature of modern life. Environmental hazards have become innocent until proven guilty, subjected to risk assessments that set the bar high for demonstrable harm. The result is, typically, inertia, backed up by a mountain of research. Smoking became a self-destructive behavior, an addiction fueled by faulty psychology. The public health response to it took this as a given. Airbrushed out of the picture was the fact that the tobacco industry did everything it could to manufacture demand for its products, setting up smokers to fail and encouraging them to maintain long and costly habits. With these machinations banished from the story, the complex phenomenon of smoking is reduced to a habit. And addiction can be treated as either a malleable character flaw or as a “spontaneous” carcinogen, an internalized “environmental” hazard as elusive and impossible to regulate as radon. C. Everett Koop, former U.S. surgeon general and head of the Public Health Service, chose the latter interpretation to frame his own approach to addiction. “Despite our best efforts,” he wrote, tobacco users “will be unable to completely abstain from tobacco. . . . After all, their volitional control over their tobacco use may be little different than their volitional control over the expression of cancer in their bodies.”37 Such an equivalence removes at a stroke all the complicating factors beyond the reach of individuals that contribute to the creation of a smoking habit. The transformation of smoking from a profitable pollutant to a personal addiction greatly expanded the horizons of cancer researchers. It opened the door to a brave new world of human traits, biological markers, and behaviors, all ripe for the picking. The new orientation dovetailed neatly with the government’s own compelling need to find scapegoats for the cancers it had generated. If the tobacco industry could avert liability in the face of the most compelling evidence ever marshaled against a manufactured carcinogen, the government could certainly follow suit. If that derailed traditional public responsibilities and substituted more of a market response
to disease, so be it. The public and corporate sectors would move in tandem here, reinforcing a model of government greased by mutual self-interest.

The Cardio Connection
The withering away of social activism in response to both smoking and radiation represents a clear reordering of the public health mandate. In part, the adjustment reflects the broader political swing to the right after the start of the Korean War. The anticommunist witch hunt carried out by Joseph McCarthy rose on the ashes of the last campaign for a compulsory national health insurance plan. The government, from now on, would curtail its support for the public provision of health care. The new perspective was reflected in the change in command at the top of the Public Health Service: Surgeon General Thomas Parran, who had supported the push for “socialized medicine,” was replaced in 1948 by Leonard Scheele, who did not. Ratifying this shift in outlook was a new approach to disease prevention, one that, down the road, would have serious consequences for cancer prevention in particular. The poster child for the new strategy was an innovative study of coronary heart disease. By the middle of the twentieth century, cardiovascular disease accounted for more than half of all deaths in the United States.38 Though not strictly speaking an epidemic disease, it had reached what were considered to be “epidemic” proportions. This helped to mobilize a public health response that was explicitly directed toward prevention. What emerged was the Framingham Heart Study, one of the most influential epidemiological investigations ever undertaken. Originally sponsored by the Public Health Service and the newly formed National Heart Institute, it was designed to follow some 6,500 disease-free volunteers in Framingham, Massachusetts. At two-year intervals, starting in 1950 and continuing for twenty years, their heart health would be evaluated through interviews, clinical exams, and lab tests. The study was designed to test the importance of an array of hypothesized risk factors by measuring their association with evidence of heart disease as it arose over time. Among the factors to be tested were the use of tobacco and/or alcohol, hypertension, high blood cholesterol, and weight gain, markers that are all now thought of as “lifestyle” choices, that is, behaviors under individual control. Those that showed significant correlations with heart disease pointed the way to risk reduction through “recommendations for the modification of personal habits and environment.”39 The fact that the selected list of potential risks covered such a wide range of markers obscures the one feature they all share. That is their ability to be
easily quantified—by the physicians who were to carry out the physical exams and the lab technicians who processed the blood tests. The proposed study was, in a word, in their comfort zone. Medical science was already familiar with the measurement of cholesterol and blood pressure. It was not familiar with the measurement of more elusive risks to human health that originated in the wider environment. In the late 1940s, when this study was designed, practicing physicians had little or no idea that such hazards even existed. Epidemiologists were another matter. They were used to casting a wide net in pursuit of possible culprits responsible for the transmission of infectious diseases. The Framingham model elbowed this open-endedness aside, substituting a narrower medical framework for one that was more classically epidemiological. If practicing physicians had felt threatened by the rise of epidemiology with its spectacular revelations of the smoking/lung cancer connection, the central role they had been cast to play at the heart of disease prevention put them back in charge. What this meant in practice was a redefinition of the search. While cholesterol and blood pressure could be considered as possible risk factors, air and/or water quality were excluded from consideration. The causes of disease, like the evidence of disease, were to be found in the same place, that is, within the human body. This preference did not reflect any conscious conspiracy on the part of physicians or anyone else. In the context of research methodology, it made perfect sense. The Framingham Study was already hugely ambitious, labor intensive, and costly. There was no need to add to its complexity and no pressure from any quarter to widen its scope. It’s only with hindsight that its bias becomes visible. Almost sixty years on, the assumptions behind the study design seem clearer (and no less justifiable for that). They showcase the centrality of the doctor/patient relationship. If physicians could pin down the risks of disease in their own patients, they were also in a position to help them reduce or even control those risks. That made them important players in the promotion of prevention. The Framingham study results more than justified its design, demonstrating that many of the hypothesized risk factors were indeed associated with higher rates of heart disease. They have subsequently been used to underwrite broad health policy recommendations that target changes in smoking habits, diet, and exercise as the strategies deemed most likely to reduce the incidence and severity of disease. The rates of heart disease have, indeed, declined markedly.40 The Framingham study broke new ground for the Public Health Service. Heart disease, like cancer, lay beyond its traditional sphere of
influence, which was the world of infectious diseases. But diseases like yellow fever and typhoid were no longer all-consuming concerns. Vaccination programs, antibiotics and sulfa drugs, water purification, and other infrastructure investments now made it possible to keep disease outbreaks to a minimum. That provided public health agencies with an opportunity to redefine their policy objectives going forward, adding long-term research strategies to their more usual short-term expedients. As a paradigm of disease prevention, the model set in motion by the Framingham study would prove to be hugely influential.

Before the advent of these changes, public health responses had often been dictated by medical emergencies that had justified the use of far-reaching measures. Traditionally, any corrective agent that showed the slightest promise, no matter how poorly understood, was put to use immediately in the hopes that it might moderate the impact of an epidemic (the Public Health Service was, in fact, criticized for rushing the new polio vaccine into use in 1955 before it had been adequately tested). It was an act-first-and-ask-questions-later mentality. If a contaminated well was suspected of spreading disease, it was shut down before its waters could be thoroughly tested, not after. The disappearance of this attitude from the public health arena did not pass unnoticed, as the report of the Royal College of Physicians on smoking and health acknowledged in 1962: “The great sanitary movement in the mid nineteenth century began to bring infective diseases such as cholera and typhoid under control long before the germs that caused these diseases were discovered. The movement was based on observations such as that drinking polluted water was associated with the disease. If the provision of clean water had had to await the discovery of bacteria, preventable deaths, numbered in thousands, would have continued to occur for many years.”41

The resolution of the dilemmas posed by the smoking/lung cancer connection and by the promise of the “lifestyle”/heart disease link marked the abandonment of this activist tradition in the public sector.42 Rash interventions into the wider environment would now be much harder to justify. Heart disease, like cancer, crept up on its victims slowly and struck them one at a time rather than cutting a broad swath through the community. It did not justify any precipitous action. Barring exceptional circumstances, intervention would now be attenuated accordingly and would proceed at a more cautious pace. Carefully planned education campaigns—and a responsive public—would, it was believed, over the long haul, subdue disease just as effectively as decisive corrective action had once done.
Cancer was poised to fall in with this ready-made agenda. Given the appeal of the new model for heart disease, the nation’s number one killer, it was natural to hope that the same approach could achieve similar results when applied to what had become the country’s second most common cause of death. The bridge between them was smoking, a behavior that was shaping up as a primary risk factor for both diseases. If smoking could account for such a high percentage of both heart disease and cancer, surely it was reasonable to anticipate that other bad behaviors implicated in the former (especially those associated with diet and exercise) might turn out to be just as fateful in the latter.

Were prevention truly driven by the nature of disease alone, the paths to it would vary considerably from one disorder to the next. But, as we’ve seen, the response to illness is mediated by a wide range of social and political considerations; these can sometimes carry more weight than any clues the disease itself offers up. Applying the new risk factor methodology to cancer prevention swept away most of the disease’s distinguishing features and cut it off from the environments in which it arose. But it did more than discourage the wider pursuit of prevention. It also brought a politically troubling disease into close alignment with a disease that was innocent of both Cold War entanglements and industrial pollution. Heart disease was well behaved; it did not point the finger at either government or industry. It demanded neither a public apology nor compensation. If it could set an example for its more intractable cousin, so much the better.

In practice, bringing cancer prevention into line with the model set for heart disease severely constrained the hunt for the causes of cancer. The primary targets in both cases became “lifestyle” factors, behaviors that individuals were thought to be able to control themselves, if properly encouraged to do so. Investigators did not, for the most part, go after suspected carcinogens in the wider environment—industrial or nuclear wastes, groundwater contamination, air pollution, among others—whose impacts would be much harder to measure, let alone control. Research presumed instead that the driving force behind risk reduction would be the individual’s own incentive to improve his or her chances of survival. By implication, motivation was the key to success.

Sir Richard Doll and the Arc of Environmentalism
The shift in perspective over the past fifty years is captured nicely in the career trajectory of perhaps the most influential cancer expert of the postwar period, Sir Richard Doll (1912–2005).
Figure 8.2 Sir Richard Doll, on the occasion of his receiving an honorary degree from Harvard University in June 1988 (he would receive thirteen honorary degrees in all). In 1962, he won the United Nations Award for Cancer Research and, in 1971, he was knighted. Photograph: Harvard University News Office.
An English epidemiologist, Doll was knighted in 1971 and received just about every kind of accolade that the scientific community could bestow, in both England and the United States. In the 1950s, with his colleague Bradford Hill, Doll published one of the first studies that convincingly linked smoking to lung cancer.43 But he didn’t stop there. For the next few decades, he widened his net and drew attention to the potential carcinogenic effects of a wide range of chemicals and environmental exposures—asbestos, radiation, even oral contraceptives—and he threw his weight behind prevention.44 But by the early 1980s, Doll seemed to turn away from research into the consequences of environmental carcinogens and aligned himself instead with the emergent “lifestyle” approach to cancer causality. In a major 1981 study of “avoidable risks” sponsored by the American Academy of Sciences, Doll and his coauthor Richard Peto concluded that diet topped the list, responsible for a full third of all cancer deaths, followed by smoking, while pollution and industrial exposures took up the rear, accounting for just a tiny 4 percent.45 At the same time, Doll came to the defense of chemical companies,
adding his considerable authority to depositions playing down the toxicity of chemical and radioactive hazards. What he did not do was disclose his substantial financial links to companies whose products he was called upon to evaluate under oath. Doll was a well-paid consultant to Monsanto, manufacturers of vinyl chloride and the dioxin-contaminated defoliant Agent Orange, used extensively in Vietnam. But he failed to acknowledge this relationship when he testified to the relative safety of vinyl chloride in a court case brought by industrial workers with cancer.46 Doll saw no conflict of interest here, and no need either to broadcast or hide his connections to industry. Most of his colleagues, however, were, like the public, unaware of this relationship. Doll’s international prestige remained intact and his views continued to carry enormous weight. Inevitably, his widely publicized tolerance for known carcinogens added momentum to an industry-led charge against the regulation of environmental hazards.

Running parallel with this was a change in the tone of his work on ionizing radiation. Early in his career, Doll looked at the increase in cancer deaths following radiotherapy for ankylosing spondylitis, a chronic inflammation of the spine (the work is mentioned in chapter 6). Always careful not to push the data further than they could safely go, in this case he concluded that “the increased mortality rate from cancer is largely an effect of radiotherapy,” that is, the additional deaths following heavy exposures to radiation were “due to the treatment.”47 In a similar vein, in investigating elevated rates of leukemia in some parts of Scotland, he was willing to entertain “the possibility that the excess may have been contributed to by an above-average exposure to ionizing radiations.”48 Doll’s stance left the door open. If further research confirmed the connection between radiotherapy and cancer, that might then invite some form of government regulation or control. At least it did not rule it out.

Twenty years on, his position hardened. Typical of the later, more circumspect Doll is his interpretation of the evidence pointing to increased mortality among men from the United Kingdom who had participated in atmospheric nuclear weapons tests in the 1950s and 1960s. Here, after repeated reviews of the evidence, he concluded that “participation in nuclear weapons tests has not had a detectable effect on participants’ expectation of life or on their risk of developing cancer or other fatal diseases.”49 The initial findings that suggested otherwise were, he now thought, more likely to be due to chance, just a statistical quirk. (This was a reading that has been widely challenged.)50
The timing of Doll’s apparent change of heart is noteworthy because it traces, within an individual career path, the arc of the larger response to environmental hazards over the course of the Cold War. As long as the communist threat hovered, the need for a strong national defense continued to justify government interventions into both the economic and political life of the country. Successive Cold War régimes manifested an impressive capacity for ambitious undertakings driven by clear policy objectives. Perhaps inadvertently, public activism in pursuit of Cold War objectives helped to keep alive the hope for social change, encouraged by memories of the New Deal (or, in England, by the election of a popular Labour government in 1945 and the establishment of a National Health Service in 1948). The creation of an extensive regulatory infrastructure in the early 1970s would have reinforced such expectations, reviving hopes that the state would stand up to industry if that was what was required to bring environmental toxins to heel. Independent cancer researchers like Doll might be forgiven for imagining, early on, that publicly sponsored interventions might be mobilized against disease as well as against the Soviets. For a brief moment, perhaps, there was even a hope for mobilizing political will on behalf of true cancer prevention. The changing geopolitics of the 1980s leading up to the fall of the Berlin Wall would soon put an end to such heady expectations, taking with them the justification for public vigilance and engagement on a massive scale. In its place came a reordering of public-interest priorities, including the official response to cancer, which narrowed in scope as opportunities for private initiatives widened. With the expansion in private sector involvement came, inevitably, the accommodation of science to free-market imperatives. In this context, Richard Doll’s late career path, which led to a preference for “avoidable” risks and to serious conflicts of interest, was a clear harbinger of the shift in focus and influence that have come to characterize the contemporary management of cancer.
Chapter 9
Hidden Assassin
The Individual at Fault
. . . Cancer’s a funny thing.
Nobody knows what the cause is,
Though some pretend they do;
It’s like some hidden assassin
Waiting to strike at you.
Childless women get it
And men when they retire;
It’s as if there had to be some outlet
For their foiled creative fire.
—W. H. Auden, “Miss Gee”
The notion that individuals are responsible for bringing cancer on themselves is, of course, not new. As a disease stretching back to antiquity, cancer has been rationalized by every system of belief, incorporated into every variety of folklore, mythology, and religion. All cultural responses to it have shared the sense that disease is a manifestation of sin, that individuals are personally responsible for the ills that befall them. Both the Old and New Testaments of the Bible make an explicit connection between the wrath of God and disease. Cancer, in this context, is an affliction or punishment, as much spiritual as physical in nature.
As a response to disease, the notion of individual complicity has proved distressingly difficult to dislodge. Modern science seems to have made little headway against a set of beliefs that seem to be almost hardwired in the human psyche. All diseases have, at one time or another, been held up as signs of God’s displeasure, as retribution for moral laxity or some other transgression. It was not until recently that “bad things” could happen to “good people.” Until then, the moral order required an unequivocal link between cause and effect. Bad behavior had to have bad consequences; the converse must, therefore, also be true. To break that bond was, indirectly, to open the door to chaos, to be left without a convincing explanation for the apparent randomness of fate. Until relatively recently in recorded history, all diseases were likely to be interpreted in this way. But, from the nineteenth century on, science began to uncouple this relationship. Sanitary reformers discovered that virus-infected mosquitoes or waterborne bacteria could transmit disease. People got sick not because they had sinned but because they chanced to be in the wrong place at the wrong time. They could be spared sickness simply by exterminating the breeding grounds of mosquitoes or shutting down a contaminated well. Victims of epidemic diseases, it turned out, were just unlucky, not unworthy. The public could now learn to fear contaminated water rather than the wrath of God. This was a revelation that would reverberate throughout society. Human agency emerged as a power that could raise doubts about the fixed order of things, on earth as in heaven. The victories of public health could be liberating. But cancer did not share in this enlightenment. Still incurable, it was stuck in a time warp, triggering the same archaic reactions as it had always done. Not an epidemic disease, it did not spread indiscriminately from one person to another but seemed to handpick its victims individually, reinforcing the idea that each one of them had been singled out for a purpose. Cancer’s very diversity, its ability to turn up anywhere—to change its course, appearance, lethality—has left its ultimate mysteries intact. It continues to be baffling and intractable, a rebuke to our faith in science and, consequently, a source of perennial anxiety. All this may help to explain why cancer, almost alone among diseases, still feeds whatever guilt-inducing explanations gain a purchase on the culture. The propensity for self-blame that epidemic outbreaks kept alive for centuries is now gratified almost single-handedly by this one disease (and a few related diseases such as AIDS). This dubious distinction has been further aggravated by the retirement, at the end of the twentieth century, of many of the traditional explanations for chronic conditions that we might describe as victim-blaming by proxy.
These helped to spread the burden of guilt. But illnesses such as autism and schizophrenia, once attributed exclusively to bad mothering, were now laid at the door of genetic inheritance. Saddling mothers with responsibility for their children’s mental illnesses points to the prominent role of women in the annals of victim blaming. Breast cancer, in particular, has been a popular target. Unlike many other types of malignancy, this is a tumor that is visible on the body. That made it a perfect mark for more widely held anxieties about the role of women in society. In the nineteenth century, for instance, most female maladies, including malignancies, were attributed to reproductive disorders of one kind or another. A Victorian health manual described cancers as “especially liable to arise in those women who have suffered several abortions or unnatural labors.” A contemporary medical textbook argued that tumors were caused by “a derangement in the uterine functions producing a vascular determination that extends to the breast.” The concept of “derangement” was exceptionally generous, encompassing “disturbed rest, exposure to cold, late hours, fatigue,” as well as “great mental grief.” A sense of breast cancer as punishment for pushing the limits of conventional female behavior (staying out late, dancing, participating in sports) was already quite evident. The disease was also linked to the practice of birth control.1 Many of these old wives’ tales have been around for centuries. The Cold War can’t be blamed for introducing their punitive logic. But the susceptibility to entrenched superstitions and irrational terrors was one that was easy for the Cold War to exploit. The ideology of anticommunism was, after all, just another faith-based system with its own demonology and its own definitions of guilt and innocence. It would not be hard to bend cancer to its will, to impose its own set of beliefs on a credulous population especially at a time when the authority of government was paramount and evoked fears of its own. The unwelcome appearance of the disease among those living downwind of the Nevada Test Site was not to be blamed on the nuclear weapons program. Other demons would have to take the hit, to crowd out the suspicion that fallout caused cancer. In the fifties and sixties, medical X-rays, radon, and “cosmic rays” played that role, summoned to divert the public’s attention from anything that might be more damaging to national security. They were the sources of radiation most likely to trigger human cancers, opined government spokesmen at the time. But conjuring up one set of hazards to serve as a smokescreen for another created problems of its own. Medical x-rays and the others were not themselves safe. To cite them as
possible causes of cancer did not solve any problems or let government off the hook; it merely shifted attention and responsibility from one federal agency to another (from, say, the Department of Defense to the EPA or the FDA). What was needed was an explanation that eclipsed the environment entirely, blotting out the earth and the atmosphere. The angle of vision had to be narrowed, to exclude finger-pointing that could expose government to potentially unlimited liability. This was finessed by lifting individuals out of their habitats, shaking off the roots and connective tissues that bound them to their surrounding ecosystems, and putting them under the microscope. The human body became the exclusive focal point of interest, shutting out the wider world. With basic science in charge of the research agenda, the search for the causes of cancer (as well as cures) would remain fixed on the body; it would not wander dangerously outside the precincts of human biology. This anthropocentric view of the cancer universe was at odds with the philosophy of the incipient environmental movement. The idea at the heart of Rachel Carson’s Silent Spring, that humans constitute just one link in a complex chain of ecological interdependence, would not find much of a welcome at the lab bench. Carson argued that animal and plant life were equally jeopardized by “man’s habitual tampering with Nature’s balance” and that any disruption in the relationship of organisms to their environment reverberated along the entire chain of being. To focus exclusively on the human links in this chain was, she believed, to denature our understanding of cause and effect. It made no more sense to blame humans for contracting cancer than to blame birds for eating berries from a tree that had been sprayed with pesticides. Accusations of any kind were simply a distraction from the larger forces at work. Those larger forces included powerful corporate interests like the chemical industry—manufacturers of pesticides and many other toxic products—whose unfettered profit maximizing threatened environmental health. “It is one of the ironies of our time,” Carson wrote at the height of McCarthyism, “that, while concentrating on the defense of our country against enemies from without, we should be so heedless of those who would destroy it from within.”2 The National Cancer Institute (NCI) was not troubled by this irony. It saw its mission as more narrowly circumscribed than that implied by Carson’s vision. Cancer within the individual was already challenging enough. Given the tight focus, the extension of interest from the human body to human behavior was perhaps inevitable. Once the body becomes the center of attention, the caretaker of that body rises in importance as well.
It is not long before the caretaker becomes the gatekeeper, saddled with responsibilities—and the threat of failure and guilt that comes with them. The shift in perspective was heralded by Frank Rauscher, appointed director of the NCI by President Nixon in 1972 to lead the newly launched “War on Cancer.” Addressing a New Jersey audience fearful of chemical contamination, Rauscher had an axe to grind: “People are talking about a cancer hot spot here. They are blaming industry. They are blaming everybody but themselves.”3 In responding to anxiety with recrimination, Rauscher was simply passing on the view of the president, who had made his own position on the subject clear a year earlier. “In the final analysis,” Nixon maintained, “each individual bears the major responsibility for his own health. Unfortunately, too many of us fail to meet that responsibility. . . . Through tax payments and through insurance premiums, the careful subsidize the careless; the non-smokers subsidize those who smoke; the physically fit subsidize the rundown and overweight.”4 In ratifying the president’s stance, Rauscher may have set the tone of his own agency going forward, but this was less a new directive than the confirmation of an approach already in play.

The Allen trial, which finally reached the courtroom in 1982, provided the Justice Department with an opportunity to put these prejudices on public display. Accordingly, government lawyers in the case tried pointing the finger of blame at the plaintiffs themselves. Their opportunities, however, were limited; most of the plaintiffs were dead by the time the case went to trial. Norma Pollitt, the litigant with breast cancer, was one of the very few who survived long enough to be deposed. In their cross-examination of her, government lawyers included the following questions:

“Have you ever smoked cigarettes?”

“Have you ever used any other form of tobacco?”

“Have you ever consumed any beverage or product containing artificial sweeteners?”

“Were you ever prescribed birth control medication or any other birth control device and/or hormone medication?”

(The attorneys apparently drew the line at asking about abortions.)
By the early 1980s, then, the process of shifting the blame for cancer was already well underway. The individual was no longer an innocent victim but a custodian, perpetually on the alert for possibly pathogenic intruders, harmful substances that could penetrate her defenses and transform her body into a breeding ground for disease. Public health campaigns
shifted attention away from toxins that were “let out” (radioactivity released into the atmosphere) to those that were “let in” (those that got by the guard). Unhealthy lifestyles were, in this reading, evidence of a failure of vigilance or of some other lapse of sound judgment. As such, they came to occupy the place formerly occupied by “sin,” with guilt-inducing consequences that were every bit as damaging. These associations didn’t drop from the sky. They have been carefully cultivated. Well-funded research has burrowed its way into every nook and cranny of personal behavior, looking for significant connections between aspects of personality, diet, exercise, reproductive history—and cancer.

The hunt for links between diet and cancer has perhaps consumed the lion’s share of attention. It has a very long pedigree. Over eighty years ago, the claim could already be made that “a bibliography on the subject of diet in relation to cancer would extend to many hundred titles. It would indeed be difficult to think of any article of consumption, the use of which has increased, which has not been impugned.”5 All of the usual suspects still in circulation today were already in contention well before the Second World War. There were advocates for vegetarian, high protein, and starvation diets as a means of inhibiting tumor growth.6 There were scientists who pointed to vitamin deficiencies as a cause of cancer, others who blamed vitamin excess. There were also, as always, cranks operating at the margins, promoting, for instance, an oil diet or “fossil earth nutrition.” But there was little consensus. In 1937, an extensive review of the work of almost two hundred authorities in the field revealed “an amazing amount of contradictory theories and results.”7

By the end of the twentieth century, almost every component of diet had been considered for closer scrutiny (either categorically as fats, carbohydrates, or fiber, or individually as red meat, red wine, or margarine). And almost every study ever undertaken (and there have been thousands of them) has proved to be controversial. The sheer complexity of our diets—the diversity of ingredients, the variety of growing conditions (the fertilizers and pesticides used in production), the composition and sizes of meals, the synergistic effects of foods in combination with one another and with chemicals manufactured by the body itself, the changing response to foods as the body ages, the delayed effects of small amounts of toxins consumed over long periods, and so forth—makes it incredibly difficult to isolate the impact of any one active agent from all the others. Cancer researchers have not been put off by these obstacles. On the contrary, they have exploited them. The search for diet/cancer connections has,
of late, continued unabated; it still receives generous funding. As of April 2007, the database of currently active research funded by the National Cancer Institute listed more than 200 projects that address the connection between diet and cancer.8 While there are a few studies that look at food categories generically (that is, at plant-based, low-fat, or Mediterranean diets), most emphasize the search for new anticancer agents, that is, for substances with tumor-fighting properties that could be added to the diet. They are not, in other words, attempting to identify individual carcinogenic substances already in our food that would, if taken away, reduce the incidence of various cancers. They are looking instead for new forms of what is now called chemoprevention. Nutritional ingredients currently under consideration include vitamins D and E, green tea, selenium, flaxseed, tomato-soy juice, and fish oil supplements, among others. Whatever the outcomes of these investigations, the pursuit in itself serves a useful purpose. It keeps alive and before the public the idea that simple causal links can be established between individual consumer choices and the incidence of cancer. Of course, that assumption is not limited to diet but can, potentially, be applied to any aspect of human behavior. This is a fundamental premise that is pressed repeatedly. Every report on a diet/cancer link reminds us of it, no matter how discouraging its particular findings may be. The vector underlying this research (that points from the individual out rather than from the environment in) tends to discourage research that moves in the opposite direction. There are, for instance, no major studies underway exploring the carcinogenic effects of genetically engineered recombinant bovine growth hormone (rBGH) added to milk. This is a hormone given to dairy cows to increase their milk yields and so increase the profitability of milk production. Its potential impact on human metabolism has raised alarm in many quarters—it has been banned in Europe, Japan, and Canada. But in the United States, the FDA decided that rBGH posed no risk to human health and approved its use in 1993. Since then, Monsanto, the agribusiness giant that markets the hormone (as Posilac), has been fighting off efforts by organic milk producers to label their own product as “hormone-free.” Monsanto claims that these marketing ploys “have unfairly damaged its business.”9 If statistically incontrovertible proof of harm were brought to the FDA’s attention, that might force the agency to intervene directly in dairy industry production. The continuing absence of such evidence (or any incentive to gather it) renders such a strategy moot. At least for the time being, the
burden remains with consumers. They have been voting with their feet: demand for organic milk has been increasing by about 20 percent a year.10 The current disagreement over rBGH is just the tip of the iceberg. Its eruption into public consciousness reminds us of the vast world of chemical contamination that remains largely below the surface of awareness and largely unexplored.11 It also keeps alive the wider political controversy surrounding the use of possible carcinogens and the public health. On one side is the argument that government can and should intervene to protect the public from potential harm—from no matter what source—while on the other, the level of risk is played down and the argument made that the public must learn how to protect itself. One side believes that cancer can be prevented by collective action, by the categorical prohibition of suspected carcinogens; the other rejects government regulation, insisting that responsibility should rest with the informed consumer. The clearest representation of this contradiction—and of its resolution—lies in the history of the Delaney Clause. This was part of a 1958 amendment to the federal Food, Drug, and Cosmetic Act and is perhaps the most unlikely piece of environmental regulation ever to win legislative support. The clause prohibited the use of any food additive in processed foods that had been shown to be carcinogenic to any degree in laboratory tests with animals. For example, the discovery of residues of the pesticide amitrole in cranberries in the fall of 1959 created an outcry just before Thanksgiving and led to the gradual elimination of the toxin in the early 1960s. Diethylstilbestrol (DES) was another chemical known to produce cancer in lab animals. Until Delaney, it had been widely used as an additive in chicken feed. It was subsequently banned from that use.12 The blanket prohibition of suspected carcinogens was to be the only legislative strike for “zero tolerance” that ever made its way through Congress. It overrode the concept of “relative risk” that governed the use of all other toxins. In fact, it overrode the measurement of risk altogether. Carcinogens in food were now in a class by themselves and put beyond the reach of negotiated tolerances and cost-benefit calculations. Under Delaney, there would be no acceptable levels for residues of suspected carcinogens. In allowing passage of this amendment, Congress made a political choice to uphold the caretaker responsibilities of government. Regulation to protect the public was still permissible, however imperfectly applied and however disruptive to corporate interests. This was certainly going against the grain, especially in the late 1950s. Essentially an anomaly, the Delaney clause was subjected to unrelenting attacks from the chemical
manufacturing industry from the moment of its passage. It was criticized as unscientific and unworkable and blamed for introducing a double standard of risk since it applied only to processed foods (some scientists believed that raw food harbored natural pesticides that were just as carcinogenic as those introduced by manufactured chemicals). “Risk is a natural part of life,” the chemical industry protested.13 But, as everyone knew, in the absence of hard scientific evidence linking risk to disease, to accept any level of risk was to open the door to accommodation. Who would be able to resist the enticements of lobbyists? As the years passed, improvements in technology made it possible to detect pesticide residues at ever smaller levels in processed foods. But even these tiny amounts constituted adulteration under the terms of the Delaney clause. Increasingly, the industry chafed against the restrictions Delaney imposed. Lobbying against it intensified and gradually succeeded in watering down the terms of the original legislation. Finally, the concept of “negligible risk” was allowed to replace “zero tolerance”—and there was no turning back. Eventually, in 1996, the Delaney clause was effectively gutted: pesticide residues were eliminated from the category of “food additives” and a new standard was applied to them, based on a “reasonable certainty of no harm.” This was something the chemical industry could work with. The Delaney saga provides a useful counterweight to the perspective on the cancer/diet connection that now governs thinking on the subject. Today, we are much less aware of or concerned with the harmful additives that are put into foods as they are grown and processed than we are with the composition or nutritional value of foods after they have reached the market shelves. As with the regulation of smoking, responsibility for carcinogenic consequences has been passed along the food chain from the producers to the consumers. Public attention has followed suit. If food causes cancer, it is no longer because food processing has poisoned it but because we have chosen to eat it. With this change of emphasis has come a change in the suspected carcinogens under scrutiny. Where once they were pesticides, preservatives, colorants, or artificial sweeteners, now they are animal fats or alcohol. That is, suspicion now falls on as yet unspecified properties inherent in food rather than on substances that have been added to increase its profitability (by increasing its appeal or shelf life). The shift in perspective recalls the diversionary tactic used by the defenders of fallout: Americans were much more at risk from radiation associated with medical X-rays, they argued, than they were from the fallout residues following atomic
tests. In both cases, exposures that were admitted to be risks were limited to what individuals could control. In theory (if rarely in practice), individuals could refuse permission to be x-rayed just as they could choose not to buy cigarettes or milk with bovine growth hormone in it. But they could not refuse to be exposed to atmospheric fallout any more than they could refuse to breathe polluted air or drink contaminated water from a municipal supply. Against risks that affected communities or society as a whole, individuals remained helpless. So to play up the personal responsibility for risk management, these more pervasive hazards had to be played down. The privatization of cancer prevention that is evident in our approach to diet is completely consistent with the fortress mentality ushered in by the Cold War. Both trade on the idea of the American family under siege, ultimately responsible for its own survival. Beyond the garden fence lay an untold number of potential threats. Especially in the 1950s, the call for perpetual vigilance infected every aspect of domestic life, disease and defense alike. Fears whipped up by the promotion of backyard fallout shelters reinforced anxieties about alien forces “out there.” Inside that fear was a growing realization that American families were on their own. The government was asking them to defend themselves, to construct their own fallout shelters on their own property. This too was an abdication of collective accountability. It’s an attitude that survives to this day. After 9/11, the Department of Homeland Security suggested that Americans could protect themselves from any future terrorist attack at home by keeping a supply of duct tape and flashlights at the ready. The off-loading of public responsibilities onto individuals or their families is, in other words, not new. The transfer of accountability for disease prevention simply endorsed a general devolution of care that was already well under way. But it did require some adjustments. With everyone now in charge of his or her own health came the need to pay attention. Citizenship now imposed more onerous tasks. Being well informed became a survival skill, one that might make the difference between life and death. It might also make the difference between solvency and bankruptcy. Cancer was costly.14 Unlike most infectious diseases after the introduction of sulfa drugs and antibiotics, cancer could require multiple bouts of expensive treatment over many years if not decades. With national health insurance another casualty of Cold War anticommunism, the costs of treating a disease that was both chronic and potentially fatal became much more problematic. Insurance coverage had to be fought for, one treatment and often one patient at a time. The threat of financial ruin (which forced the Natansons
to go to court) gave a significant boost to all products and lifestyle strategies that promised prevention. Americans were predisposed to believe in them; they were consumers-in-waiting. Inevitably, the population was drawn into the public debate on the cancer/diet connection, alert to every new bulletin leaked to the outside world from the centers of scientific research. Newspapers and magazines took full advantage of this hunger by supplying readers with a steady diet of study findings. The subject has, in fact, never been far from the headlines or the science pages of most newspapers. At regular intervals over the past few decades, the press has heralded the confirmation of a carcinogenic eating habit and then, with equal fanfare months or years later, gone on to debunk it, in a process that Barbara Brenner, executive director of Breast Cancer Action, has called “science by press release.”15 Many dietary theories are cyclical, falling into disfavor only to rise again years later with a new lease on life. This state of prolonged indeterminacy has been quite profitable. The lack of conclusive results has not deterred the media or the food industry from investing heavily in the premise of a connection. On the contrary, it has served to broaden market opportunities rather than to restrict them. By the early 1990s, the food industry had already shown itself to be quite adept at exploiting the awareness of dietary health risks. It had taken the presumed link between fats and heart disease and run with it, especially after Reagan administration policy in 1987 permitted the food industry to make “disease prevention claims.”16 In 1993, “new food products made 847 claims for reduced or low fat, 609 claims for reduced or low calorie and 543 for no additives or preservatives.”17 Most of these claims were neither proved nor disproved. But, as the food industry knows better than most, an unproven idea can survive for generations as “accepted wisdom.” In the same vein, the shelf life of any hypothesized diet/cancer link is almost unlimited, given the difficulties inherent in providing irrefutable evidence to confirm or dismiss it (the debate on the dangers of low-level radiation, after all, continues to this day). In the open-ended “meantime,” markets are free to promote whatever ideas they can, by whatever means. The idea that diet is the secret to controlling cancer has an obvious appeal. It sells thousands of food products (incorporating complex carbohydrates, high fiber content, low fat or no fat). It also sells women’s magazines as well as myriad books with titles like The Breast Cancer Prevention Diet: The Powerful Foods, Supplements, and Drugs That Can Save Your Life. A medical imprimatur can add significantly
to sales, as television doctor Bob Arnot’s book The Breast Cancer Prevention Cookbook: The Doctor’s Anti-Breast Cancer Diet attests. All of them exploit the sunny side of victim blaming (or, more precisely, victim avoidance). If bad behaviors promote disease, then it must follow that good behaviors, manifested in the consumption of approved products, can prevent it. This encourages women to believe that they are taking charge of their health. By taking preemptive action, they believe they can improve their odds of escaping the dread disease.18 But one form of preemptive action that the food industry has not rigorously promoted is the restriction of overall caloric intake. Though science first established a connection between starvation diets and low rates of cancer more than sixty years ago, the message has clearly not gotten through to the majority of Americans, who grow increasingly overweight every year. Their interest in weight loss, where it exists, is rarely driven by the fear of cancer. The food industry does not broadcast this connection. It cannot, after all, profit from an overall reduction in food consumption—that would be self-defeating. But it can profit from marketing products that address risk factors of its own choosing, selling low-fat and nonfat foods “as part of a calorie-controlled diet.” Why and how that control should be exercised is not its concern. In February 2006, the Journal of the American Medical Association finally reported on the long-anticipated outcome of a full-scale randomized clinical trial investigating the impact of low-fat diets on the incidence of breast cancer among postmenopausal women.19 Conducted at forty clinical centers across the United States between 1993 and 2005, the Women’s Health Initiative was the largest study of its kind ever mounted, enrolling close to 50,000 women in its dietary modification trial. Cancer watchers of every persuasion had high expectations of conclusive findings. The results, however, dealt them all a serious blow. After eight years, the study could report no statistically significant reduction in the risk for invasive breast cancer. But if hopes were not gratified, neither were they dashed. Immediately after the announcement of results, a slew of articles appeared detailing the limitations of the study design and promising further work in the area (“Women’s Health Initiative: Not Over Yet,” announced the Harvard Women’s Health Watch). Critics highlighted shortcomings ranging from the study’s failure to discriminate between “good” and “bad” fats to its relatively limited duration (in the context of cancer’s potentially long latency periods). The drawbacks of the study also highlighted the difficulties of epidemiological work of this kind. Human beings are fallible; the more extreme the
dietary modifications of the study design, the harder it is for enrollees to stick to the agreed plan for the agreed length of time. Completed studies, in other words, have not begun to exhaust the possibilities. Research into the cancer/diet connection will continue to be funded, both here and abroad. The largest and most ambitious nutrition and cancer study ever mounted has been running in Europe for over a decade with more than half a million participants recruited from ten countries. The European Prospective Investigation into Cancer and Nutrition (EPIC) admits that the challenge is daunting, acknowledging from the outset that “in spite of several decades of research, comparatively few nutrition-related factors have been established unequivocally as playing a causal role in human cancer occurrence.”20 Another popular corner of cancer prevention studies has been occupied by the pursuit of the mind-body connection. A veritable mountain of research has postulated a link of some kind or other between psychological states and disease. The imputation of blame is sharpest here because human beings are presumed to be more in control of their thoughts than of their bodies. If bad thoughts cause cancer, the logic goes, then good thoughts will prevent it. If good thoughts fail to prevent it, then those thoughts were just not good enough. This is a recipe for despair that dismisses biology altogether. James Patterson, in his groundbreaking social history of cancer, points to some of the behavioral traits linked to cancer in the 1950s: These were “masochistic character structure,” “inhibited sexuality,” “inhibited motherhood,” “the inability to discharge or deal appropriately with anger, aggressiveness, or hostility, covered over by a façade of pleasantness,” “unresolved hostile conflict with the mother.” . . . Among those who supposedly died this way were . . . Grant, who grew tense after being swindled; Eva Peron, who had an inordinate drive for power; George Gershwin who was over-ambitious; Senator Taft, whose cancer stemmed from political frustration; and Ruth, who had desperately wanted to manage the Yankees.21
Such assignments of blame demonstrate the purchase of magical thinking on our response to disease. The simple-minded one-to-one associations fly in the face of everything we otherwise know about the complexity of human motivation and behavior. When it comes to cancer, it seems we are willing to jettison our hard-won tolerance for difficult, even dangerous ideas, preferring instead the comfort of a flat-earth response. Attributing disease to a singular life experience or personality trait sets limits to the terrors of
an otherwise unbounded universe of possible causes. The “environment” writ large may be a stand-in for that infinity of fears. It may be safer to look closer to home. In 2004, a major study summarized the findings of seventy studies carried out over the previous thirty years investigating the impact of psychological factors on both the causation and progression of cancers.22 The author included only those studies with the most methodologically rigorous design, that is, prospective longitudinal studies. After exhaustively reviewing all the findings, he concludes that “there is no psychological factor whatsoever for which an influence on the initiation or progression of cancer has been convincingly demonstrated in a series of studies. . . . The influences of life events (other than loss events), negative emotional states, fighting spirit, stoic acceptance/fatalism, active coping, personality factors, and locus of control are minor or absent.” But, as with the ongoing pursuit of a diet/breast cancer link, the failure to establish a connection between psychological states and cancer is viewed more as a temporary setback than as the end of the road for this line of research. To the author of the study, it simply means that investigators have been asking the wrong questions. He insists that “there are too many studies with promising findings to conclude that psychological factors are of no importance at all. Moreover, the possibility that psychological factors, which do not have any predictive power if studied in isolation, may have an effect in interaction with demographic and medical factors, is only seldom studied. . . . In a way, the most important studies are still to be done.” Psychological factors, it seems, are not likely to be retired any time soon. Research into their impact will continue. And its findings will continue to be reported in the popular press, where they are perceived as more reader-friendly than the more challenging updates dispatched from the world of basic science. Interest in the role of emotions in preventing disease is also boosted by the popularity of alternative therapies that patients sometimes turn to after diagnosis. These often address the role that mood and states of mind might play in attenuating the pain and fear of disease or the side effects of more orthodox treatments. Despite the preponderance of equivocal results linking “lifestyle” to cancer, there are some well-established risk factors for the disease. Most, however, cannot properly be described as “behaviors” in the sense that they can be controlled or modified by human volition. For breast cancer, for instance, known risk factors include the age of menarche and of menopause (the earlier the former and the later the latter, the greater the
risk). Historically, the timing of these events has been determined primarily by genetic inheritance rather than by choice. But even these factors may be confounded by broader environmental influences such as hormones in bottled and breast milk and in red meat. Research proposals, however, do not typically accommodate such complexities. So what appears to be a genetically determined risk may actually include an environmental component that remains well out of view. The more lifestyle factors are granted respectability through their inclusion in cancer research, the more they bolster the basic premise that individuals can intervene in the disease process and stave off cancer on their own. Many influential voices now endorse this view, including the American Cancer Society and the Harvard Center for Cancer Prevention at its School of Public Health. The Harvard Web site enjoins viewers to think positively: “Healthy choices could prevent over half of all new cancer cases in the United States.”23 The other half of new cancer cases, however, is never addressed. These are the cancers that may be construed as collateral damage, the unintended (and unacknowledged) consequences of deliberate political, economic, or industrial strategies. Victims of these cancers are not just blameless; they are also powerless. Unlike organized workers or downwinders, they are, for the most part, unable to make common cause with others in their situation. Their cancers may not be rare enough to point the finger at a specific source (as mesothelioma and liver angiosarcoma incriminate asbestos and vinyl chloride). Lacking distinction, they remain barely visible, absorbed into the greater pool of cancers nationwide. Now that victim-blaming strategies have come to dominate cancer prevention, alternative strategies that might look elsewhere (to the rigorous control of industrial pollution, for example) have virtually disappeared. So too has public awareness of the rising incidence of many infant and childhood cancers, malignancies that strike their victims well before they can have any understanding of what an unhealthy lifestyle is, let alone indulge it.24 Of course, parents can be blamed for their children’s bad habits, but few, I suspect, would wish to level such charges against the parents of babies and toddlers with cancer. More widely publicized has been the improvement in survival rates for teenage cancers between the late 1970s and the mid-1990s (an increase of about 17 percent). This is certainly a significant advance, but it has masked the equally significant rise, over the same period, in the numbers of young people diagnosed with cancer (an increase in the incidence rates of about 15 percent). That these cancers
have become relatively invisible in the public discussion of the disease is a sign of their outsider status. They do not conform to the model of cancer prevention currently in favor because they cannot be called upon to reinforce the virtues of personal responsibility. Even genetic risk factors have now been adapted to the prejudices of the prevailing regime. Although there is obviously a great deal more to learn about the role played by genes in cancer causation, what we already know about the breast cancer genes (BRCA1 and BRCA2) suggests that they contribute only modestly to a woman’s overall risk. Nevertheless, their existence has been used to leverage another form of behavioral modification. Women considered to be at high risk for the disease (those with a family history of breast cancer) are now encouraged to undergo expensive genetic testing. If they agree to be tested and the results are positive, they face the option of radical surgery as a preventive measure. Prophylactic mastectomies assume that where there is no organ, there can be no organ disease (this is obviously not a viable solution with vital organs).25 But the approach is a good illustration of the hazards—and the limits—of a strategy that seeks to fend off cancer in the individual rather than in the population at large. How this basic premise came to be fully exploited by the cancer industry is taken up in the next chapter.
Chapter 10
Experiments by Other Means Clinical Trials and the Primacy of Treatment over Prevention
In the spring of 1986, the New England Journal of Medicine published a controversial article assessing the progress that had been made against cancer between 1950 and 1985 (that is, covering most of the Cold War period).1 Its lead author was John C. Bailar III, a biostatistician with more than twenty years’ research experience at the National Cancer Institute. Having left the NCI in 1980, Bailar was writing from a position at the Harvard School of Public Health. To evaluate the impact of the “war against cancer,” Bailar and Elaine Smith chose to focus on a single measurement—the age-adjusted mortality rate for all cancers combined, corrected for both the growth and the aging of the American population over thirty-five years. What they found was a moderate increase in overall mortality driven by a steady rise in cancer mortality among males (white and nonwhite) and a less pronounced rise among women. The all-in death rate masked several encouraging trends in some site-specific or age-specific cancers. There had, for instance, been declines in the incidence of stomach and cervical cancers and in the mortality rates for cancers in patients under thirty years old. But overall, the authors argued, “we have slowly lost ground.” They concluded that “some 35 years of intense effort focused largely on improving treatment must be judged a qualified failure. . . . Why is cancer the only major cause of death for which age-adjusted mortality rates are still increasing?” The way forward, they said, lay in “cancer prevention rather than in treatment.” The article struck a nerve. The strongest response, not surprisingly, came from the director of the NCI, Vincent DeVita Jr. He lambasted the
report as “the most irresponsible article I have ever read” and claimed that it was “purposely misleading.”2 DeVita took issue with the article’s methodology, condemning its use of “the age-adjusted mortality rate as the sole measure of progress” and its failure “even to mention the types and magnitude of prevention now in progress.” To substantiate that charge, DeVita cited twenty-six clinical trials in prevention then underway, highlighting the low-fat diet and breast cancer linkage studies to which the NCI had committed $100 million. Other scientists of equal stature applauded Bailar and Smith’s efforts. Michael Shimkin, a pioneering cancer researcher (and the field’s preeminent historian), thought the authors had done “a public service with their thoughtful analysis” and had “put their confidence in the ability of physicians and the general public to reach correct conclusions when they are confronted with facts that are uncomfortable.”3 Among those facts was the doubling of the absolute number of deaths from cancer over the postwar period. Whatever the esoteric methodological arguments among scientists, the disease was now reaching—and killing—more Americans than ever. And more of their friends and relatives were now aware of the limited efficacy of available treatments. The cancer establishment understood this too and knew that it had to work harder to convince newly diagnosed Americans to submit to treatment even if the outcome was equivocal. Why would anyone agree to therapy that might itself be painful and even more dangerous than the disease? The cancer establishment needed Americans to value treatment. By the late 1970s, there was a growing awareness that medical intervention had to be not only effective but also palatable to the patient (the radical mastectomy, the gold standard of breast cancer treatment for almost a century, had been neither). Inevitably, the search for better and better-tolerated therapies required extensive research with human beings. There was no other way to identify improvements that were both effective and safe. So all the difficulties raised by the secret experiments of the 1950s and 1960s were to be stirred up again in renewed efforts to halt the disease. The experimental instrument of choice this time would be the clinical trial, a method that allowed a standard treatment to be compared with an as-yet-unproven one when the latter showed promise of being at least as effective and safe as the more traditional therapy.4 That the best judgment of medical science could not confirm the superiority of one treatment over the other was inscribed in the clinical trial’s commitment to randomization, a process that allowed participants to be randomly assigned to any
one of the trial treatments. That perceived differences between treatments could be small necessitated a radical shift in the scale of investigational studies, fueling the rise of cooperative clinical trials that drew on the resources of cancer institutions across the country. “A single hospital,” according to the National Cancer Institute, “can rarely make enough observations in . . . highly selected patients to give adequate data in a reasonable time; hence, collaborative research becomes essential.”5 The modern trial was first used in the early investigations of chemotherapeutic agents for the treatment of acute leukemia, in children and adults. In 1955, prompted by recent successes with sulfa drugs and antibiotics, the National Cancer Institute set up an innovative program, the Cancer Chemotherapy National Service Center, at the National Institutes of Health. Congress gave the program its blessing with $5 million in startup funding. The earliest trials were designed to test the efficacy of various chemical compounds used in different combinations as anticancer agents. Promising substances that had emerged from animal studies were ready to be considered for human use. The hope was to find chemicals that, alone or in combination, might induce remissions if not cures in children and adult patients with a uniformly fatal disease.6 The use of the clinical trial itself embodied the contradiction at the heart of the therapeutic enterprise. At least for the first few decades, when trials focused exclusively on therapies, most of the participants were patients whose cancers had not been cured or sufficiently controlled by treatments then available. The great majority suffered from late-stage disease; many were terminally ill. The need for clinical trials was, therefore, in itself an admission of therapeutic failure. In other words, treatment had to fail for treatment to improve. This contradiction hovered over those caught up in the design and planning of the early trials. The cool scientific language of the new protocols seemed at odds with the desperation of many prospective participants who would perhaps be facing their most difficult treatment decision ever. The new studies construed experimental results as “data,” feeding fears of the regimentation that statistical rigor seemed to imply. What was at stake, to some, was “the replacement of human and clinical values by mathematical formulae” and the degradation of patients “from human beings to bricks in a column, dots in a field or tadpoles in a pool.”7 Though they began in the 1950s, modern cancer trials were a postwar rather than a Cold War phenomenon. That is, they carried forward work from the chemical warfare program that had investigated the weapons potential
of agents like mustard gas for use against the Germans before 1945.8 After the war, the same compounds attracted the interest of the pharmaceutical industry operating in the postwar civilian economy. The trials that emerged from the joint efforts of drug companies and investigators marked a real break from what had gone before. They pointed the way to the rise of a new market culture that would eventually come to replace the old military mindset governing cancer research. Importantly, the new trials came out of the NCI, not the AEC. They were driven by medical rather than by national security objectives; their goals were explicitly therapeutic, adapted to the characteristics of disease rather than to exigent military demands. Accordingly, research methods were specified by scientists rather than by defense strategists. The contrast between the two approaches was striking. The first randomized clinical trial (RCT) got off the ground just a few years after the start of the radiation experiments carried out at the M. D. Anderson Hospital. So for some years, they were running concurrently. While the methods adopted by the Anderson investigators were crude, incorporating no control group and focusing on short-term results, the NCI trial implemented an impressively thoughtful study design. In addition to stratifying the participants by age, type of disease, treatment history, and other variables, the NCI protocol also specified eligibility and exclusion criteria, instructions for randomization, a description of anticipated toxicities, and recommendations for measures to offset them.9 Equally significant, the cooperative involvement of many institutions marked an openness that was itself a rebuke to the secrecy governing the radiation experiments. By their very design, then, the new protocols exposed many of the scientific weaknesses of Cold War research. What the two sets of experiments did share was, first, generous government support. Both radiotherapy and chemotherapy were kick-started by massive public funding. In the first chemotherapy trial, the great majority of the participants (41 out of 65) were patients being treated at public expense at a new NCI in-patient facility. Funding rose sharply in subsequent years. But beyond public pump priming, the two treatment modalities shared another attribute in the 1950s: like the radiation experiments, the first RCT also failed to build in formal protections for its patient participants—it too ignored the need for informed consent.10 RCTs evolved over time, addressing an ever-widening range of malignancies and drawing upon a vast cooperative network of participating institutions. Continuous feedback from one generation of trials fed the
design of the next. Eventually, the standard trial was broken down into three distinct phases: Phase 1 was designed to test the safety of the experimental treatment; Phase 2, to begin to evaluate its effectiveness; and Phase 3, to compare it to the then-standard treatment for that particular cancer. Only Phase 3 trials would necessitate the randomization of large numbers of participants. Under the aegis of the NCI, testing of all phases would be scrupulously monitored and would leave a comprehensive paper trail. Of most importance here, participation by cancer patients would be voluntary. Starting in the 1970s, just after the secret radiation experiments had been brought to a close (after the Saenger experiments in Cincinnati had been widely publicized), federal agencies wishing to sponsor research with human subjects began legislating for changes in its regulation.11 The groundwork was laid by the Department of Health, Education and Welfare (DHEW) in 1974, when it instituted a system of review for all human-subjects research receiving DHEW funding. Oversight was to be provided by review boards set up at each participating institution and given the power to grant or withhold approval, to specify changes in proposed research protocols, and to suspend any research suspected of violating requirements. These DHEW safeguards became the model for a variety of refinements and variations adapted by each federal agency over the next decade or so. Finally, in 1991, at the end of the Cold War, fifteen federal agencies adopted what became known as the Common Rule, which mandated protections for subjects in all federally sponsored research, whether carried out in-house or by outside institutions. Far from being ignored as it had been in the radiation experiments, informed consent now became the centerpiece of clinical trials, “a cornerstone of modern research ethics,” as the ACHRE Report put it. Twenty years after Natanson struggled to legitimize the concept, informed consent had become an essential feature of every therapeutic conversation. Those eligible for clinical trials now had to be given good reasons to participate in them. Patients had become players, with opinions and attitudes of their own. The reproductive and other health movements of the 1970s had given them a voice. Women especially were newly empowered by the formal acknowledgment of the “right to choose” embedded in the Supreme Court decision in Roe v. Wade (1973). They entered their doctors’ offices fortified by a growing sense of solidarity with other women. Their experience as postwar consumers also weighed in here. With more women earning incomes of their own and having access to credit independent of their husbands’, they were now courted directly by product manufacturers of all
kinds. There was no longer any doubt about their status as “informed” consumers. Emboldened by both a newly acquired feminist sensibility and an enhanced sense of her economic power, a woman was bound to see her relationship with her physician change. Though what evolved in the 1970s and 1980s could hardly be called a partnership, it did at least reflect a different balance of power. The doctor’s professional authority was no longer absolute; blind compliance with “doctor’s orders” became largely a thing of the past. Patients now wanted to be informed of the risks and benefits of recommended therapies, and they wanted to make the final choice for themselves about whether or not to accept any doctor’s advice. This would eventually be as true of the choice of cancer treatments as it was of reproductive choices (and, of course, was as true for men as for women). The women’s and consumer movements that helped to recalibrate the power relations between doctors and patients had themselves been animated, in part, by the attenuation of Cold War ideology. The incipient transfer of medical research dollars from defense to disease pointed to a slackening of concern for the impact of atomic warfare on exposed combat troops, especially after the Test Ban Treaty of 1963. When anti-Soviet propaganda became less shrill and air raid drills less frequent, the sense of imminent danger began to fade. Further disarmament treaties with the Soviet Union in 1974 and 1976 enhanced the growing sense of national security.12 With it came a relaxation of the command structures that had held traditional social hierarchies in place and unaware of one another. The loosening of the doctor/patient relationship owes something to this easing of the prevailing crisis mentality just as it does to the rise of feminist consciousness. As one grew weaker, the other grew stronger. The feeling of stepping back from the brink permitted Americans to speak more openly, to pursue pent-up grievances closer to home without the fear of reprisals. Authority in any guise (whether professional, corporate, elected, or male) could now be challenged with greater impunity. Many activist groups were set to try. The resurgence of informed consent benefited from all these trends. The designers of clinical trials understood the sense of empowerment that social change had bestowed on individuals, whether as wives, workers, consumers, or patients. They knew that they could not win the trust of cancer patients without extensive information sharing—and that this had to include the full disclosure of potential risks. Physicians also knew that trials would not go forward unless and until patients agreed to give their
informed consent. Without the trials, there would be no improved treatments. Ironically, what had been construed as an impediment to the diffusion of innovative treatment in Natanson’s day now became an obligatory part of the same process.
Drawbacks—and Diminishing Returns?
The 1960s and 1970s also witnessed the rise of more rigorous biostatistics with which to evaluate the efficacy of trials. In the context of more tightly controlled protocols and more demanding measures of significance, the research methodology of the secret experiments looked downright incompetent. All of them had involved very small groups of participants and had lumped together patients of different ages and gender, with different types of cancer and different treatment histories. The NCI trials began to disaggregate all these variables, sharpening the focus of what was under study. They were, typically, comparing two forms of treatment that were often more alike than not. The smaller the expected differences, the larger the numbers of patient participants required to give significance to those differences. Neither VA hospital patients nor, in the days before Medicaid and Medicare, “charity” or “cancer ward” patients (the guinea pigs in the radiation experiments) were going to satisfy this demand. Clinical trials needed to woo the middle classes. Informed consent certainly greased the wheels. The appeal to articulate middle-class patients was as good a bulwark as there was against the charge of high-handed secrecy. After all, the new generation of experimental subjects was neither illiterate nor marginal as many of the subjects in the M. D. Anderson or Cincinnati experiments had been. But the inclusion of educated subjects brought with it the threat of litigation, something that had only rarely raised its head during the era of secret experimentation. To avoid the pitfalls of liability, the cancer establishment worked hard to make cancer trials well behaved and respectable. Legislating informed consent, however, could not in itself promise patients either a safe or a fully informed experience. Despite the infiltration of market economics into the organization of medicine, patients, especially very sick patients, were not really consumers after all. In the world of experimental medicine, there were no guarantees, refunds, or returns. Patients were ultimately on their own and their ability to protect themselves was definitely limited. What took place in practice could, alas, stray far from the mandated guidelines. Abuses were inevitable. Monitoring and/or oversight could falter, with sometimes disastrous results.
In 1981, a four-part front-page series in the Washington Post chronicled many of the shortcomings of the trials the NCI had carried out in the 1970s.13 The articles highlighted 620 deaths that were directly attributable to experimental cancer treatment. Many of the drugs being tested were, in fact, “derived from a list of highly toxic industrial chemicals including pesticides, herbicides and dyes.”14 The reporters uncovered a “nightmarish list of serious adverse reactions, including kidney failure, liver failure, heart failure, respiratory distress, destruction of bone marrow so the body can no longer make blood, brain damage, paralysis, seizure, coma, and visual hallucinations.” How likely was it that any of these possible side effects were spelled out to the trial participants in advance? A former chief of the NCI’s experimental drug branch, Vincent Bono, “likened many of the studies to ‘donating someone’s body to science while they are still alive.’ ”15 Vincent DeVita, director of the NCI, found the Post’s view of cancer research “slanted and distorted.”16 He reminded the newspaper’s readers that the 620 deaths from experimental drugs had to be viewed within the context of the 46,000 other patients who were “cured every year because of anticancer drugs.” (At a congressional hearing ten days later, he put this slightly differently, conceding that only 9.5 percent of cancer patients who had participated in preliminary chemotherapy trials had been significantly helped by the drugs.)17 DeVita was concerned that the horror stories documented by the Post would discourage patients from seeking proper help. “It would be tragic,” he wrote, “if cancer patients who read [these articles] . . . turn away from their treatment.” He made no comment on the discussions between doctors and patients that led to their participation in the NCI trials. The episode demonstrates the inexorable difficulties of human experimentation in the context of a lethal and intractable disease. The inclusion of informed consent that, in theory, distinguished clinical trials from their secret experimental counterparts did not resolve the ethical dilemmas of participating physicians. The goal of such trials was not, ultimately, to benefit individual patients but to test hypotheses and draw conclusions about innovative therapies that might benefit the next generation of cancer patients. If trials managed to provide some benefit to individual participants it would happen “by good fortune, not by design.”18 Physicians were aware of the awkward position they were in. Once again, they were caught in the middle, pulled between their loyalty to the best interests of their patients on the one hand and their loyalty to the objectives of the trial on the other. Those doctors who agreed to participate
were expected to follow a protocol that took a great deal of decision making out of their hands. To allow patients to be randomized to one treatment arm or another, they had to set aside their own therapeutic prejudices and to accept that chance, rather than their accumulated medical wisdom, would be allowed to determine which of two (or sometimes three) different treatments a patient would receive. Doctors also had to convince themselves that their patients were capable of understanding what was at stake. For the most part, patients who were eligible, particularly those who had exhausted all available remedies, were desperately dependent on their physicians’ judgment. Doctors understood this vulnerability all too well. Not surprisingly, many of them balked at the difficulties. The tricky position of physicians in both the radiation experiments and, later, in clinical trials, points to a change in their status within the cancer hierarchy. In both contexts, they became something closer to intermediaries charged with the task of arbitrating between patients and some larger purpose which they, as doctors, no longer determined or controlled themselves. Offsetting this loss of status, however, was the fact that doctors still remained the gatekeepers to all cancer treatments and, as such, extremely powerful. New drugs and new procedures had to win their approval in order to reach their intended markets. Many patients were as hesitant to participate in clinical trials as their doctors were to recommend them.19 The process was fraught with uncertainty, coming at a time when patients were already having to grapple with the emotional consequences of treatment failure. The decision to accept yet more treatment required a leap of faith. No physician describing the risks and benefits of a trial was in a position to give a patient a firm idea of the “relative ratio of toxicity and benefit.”20 It was in the nature of the business that such a trade-off would remain elusive. Real patient comprehension remained equally hard to pin down. In one study of cancer patients, almost everyone who had given their informed consent said they understood all or most of the information about the trial they had agreed to join, yet only a third could actually describe the purpose of the trial.21 Some patients did express clear therapeutic preferences. Others were mindful of the coercion involved in past experiments. Minorities, in particular, remembered the abuse of African Americans in the Tuskegee experiments and, possibly, in the Cincinnati radiation experiments. Their distrust of the medical establishment was hard to overcome. Even in the best of circumstances, when patients were guided by sympathetic and well-informed doctors, they were vulnerable to many different kinds of
reasoning, ranging from the common expectation that a trial might improve their own chances of survival to the less commonly expressed hope that their participation would improve the chances for survival in the next generation of patients. There was also the concern not to disappoint family members or friends who believed it was important to keep trying, whatever the consequences. Adding to the difficulties was the overriding issue of costs. Who would pay for treatments that were not “standard,” that might not provide any therapeutic value but would almost certainly increase medical expenses? Health insurers were not philanthropists. As businesses driven by cost-containment strategies, they were hardly likely to welcome the therapeutic uncertainty underlying clinical trials. Ironically, some of the more promising treatments being tested—but refused coverage by insurers—could well turn out to be both more effective and less expensive than “standard” treatments that insurers were willing to pay for. But the conservative short-run perspective of private health insurance did not always permit investment in what were deemed high-risk experiments. Clinical trials were often seen more as a threat to the status quo than as a lifeline to improved results. Health care providers took defensive action to protect themselves, creating tremendous obstacles to enrollment. What might have been a fairly rational process of accommodation under a single health insurance plan became, under the prevailing balkanized system of health care, a fitful and uncoordinated patchwork of coverage, often won at great cost by extremely sick but determined individuals. There was no consensus on which new procedures or drugs coming through the pipeline deserved to be covered and which did not. Every health provider and managed care organization decided for itself, choosing its own menu of investigational drugs and procedures to support. In its determined resistance and incoherent response to clinical trials, the insurance industry exposed yet another serious weakness in the existing health care system. Reimbursement policies were a significant deterrent to an experimental agenda that was already seriously challenged. Inevitably, many prospective trials failed to achieve meaningful results.22 Only a small percentage of cancer patients—less than 5 percent—ever enroll in them. A review of the NCI program in 1997 stated that “lack of third-party reimbursement . . . may be one of the most critical barriers to patient participation.”23 Many bills were introduced in Congress in the 1990s to mandate coverage of clinical trial costs (by private health insurance companies), but none ever passed.
In the end, government had to step in to make clinical trials economically viable. As it had done with the original development of both radio- and chemotherapy, the public sector would need to underwrite the costs of new cancer treatments. To that end, in the late 1990s, it facilitated the greater participation of VA medical centers in NCI trials. Soon after, it agreed to extend coverage of the medical costs of trials to eight million military beneficiaries and their families. Once again, it was those who received medical care at public expense who were being pushed to the front of the queue. Despite all the new patient protections in place, it’s hard not to see some echo of Cold War expedience here—it was veterans, after all, who constituted the single largest group of guinea pigs in the secret experiments. The elderly, another population with subsidized health care, also needed encouragement to participate. In 2000, President Clinton directed the Medicare program to begin to “reimburse providers for the cost of routine patient care associated with participation in clinical trials, and to . . . promote the participation of Medicare beneficiaries in clinical trials for all diseases.”24 As the press release pointed out, “Too few seniors participate in clinical trials . . . 63 percent of cancer patients are older than 65, but they constitute only 33 percent of those enrolled in clinical trials. The disparity is greater for breast cancer patients—elderly women make up 44 percent of breast cancer patients, but only 1.6 percent of women over the age of 65 are in clinical trials for the disease.” But even with the support of the VA and Medicare, the numbers were not sufficient. Eventually, pharmaceutical companies began to play a more active role in recruiting patients for clinical trials. They also began to shoulder more of the costs. Between 1980 and 2000, industry’s share of investment in biomedical research virtually doubled, rising from 32 percent to 62 percent.25 It has now become the dominant player in the field. The passage of the Bayh-Dole Act in 1980 expedited the shift from public to private investment by facilitating the commercial development of new products arising from federally funded research. (The 1954 Atomic Energy Act had provided a similar spur to the early makers of equipment and other products incorporating radioactive isotopes from the government’s own reactors.) Private involvement has inevitably opened a Pandora’s box of its own, creating serious conflicts of interest between academic research standards and investment imperatives. The details of these conflicts lie beyond the scope of this book, but they have been impressively documented in the work of Sheldon Krimsky and Marcia Angell, among others.26 The conflicts they describe draw attention
once more to the extreme vulnerability of patient participants in human experimentation. Once private industry has a stake in the process—and, more importantly, in the outcome—it inevitably brings its influence to bear wherever it can.27 The rush to get new drugs and other products out of the pipeline and onto the market injects a dynamic into the trial process that may not always further the best interests of patients. Corners may be cut. Trials may be halted prematurely, before they have had a chance to demonstrate a long-term survival benefit or long-term side effects that might be toxic. Boosters of an experimental drug under investigation may turn out to be on the payroll of the company manufacturing that drug. Despite all the additional precautions, in other words, many subjects in human experiments are still at risk and at the mercy of considerations other than their immediate welfare. The carefully crafted protocols now governing the administration of trials are designed precisely to level the playing field between the interests of scientists and those of cancer patients. But no matter how imposing and watertight they appear to be on paper, nor how scrupulously physicians administer them, they cannot always compete with a science that is now backed by the overwhelming resources of private industry. Financial incentives, in the end, may prove to be too powerful. The distortions they introduce to the clinical trial process are different from those that deformed the secret radiation experiments. But in both cases, they are set in motion by interests that are extrinsic to the stated medical objectives of the research but that nevertheless interfere directly with its protocol and with its outcomes. Human experimentation remains an inherently risky business, no matter who is in charge. The protections offered by informed consent may be no match when up against the imperatives of global capital, with its drive to expedite new product development. With the rise of private sector involvement over the past decade, government officials have expressed concerns about “the lack of direct Federal oversight and the reliance on largely unregulated private institutional review boards.”28 An inspector general’s report in 2000 highlighted the inadequacy of trial inspections, noting that they “mostly focused on whether study information was accurate and not on whether human subjects were protected.”29 A more recent report by the Department of Health and Human Services reveals how very meager federal supervisory resources actually are. The FDA relies upon just 200 inspectors to audit an estimated 350,000 testing sites. Furthermore, the trials it oversees (which involve only drugs seeking market approval) use a different set of rules from those regulating trials financed by the federal government. For trials
that are privately financed, there is no federal oversight at all. With no central collection and analysis of feedback from any of these trials, there is no way to gauge how safe they really are. "In many ways," says the ethicist Arthur Caplan, "rats and mice get greater protection as research subjects in the United States than do humans."30 Despite these drawbacks, investment in trials has grown steadily and mightily, sweeping an increasing number of physicians into their net. Between 1988 and 1998, the number of participating physicians rose 600 percent, to more than 30,000.31 Trials are essentially the only game in town for those in the hunt for improved treatments. And there is no doubt that, over the past half century, they have yielded some impressive results. These include a few true cures. Imatinib (trade name Gleevec) has had very good results in treating chronic myeloid leukemia and a rare stomach cancer. Cis-Platinum (cisplatin) is able to cure testicular cancer, and Rituxan is an effective new adjunct for B-cell lymphomas. Trials have also led to some significant extensions of disease-free survival. Tamoxifen trials, though not uncontroversial, have demonstrated that after five years on the drug, recurrence among women with estrogen-receptor-positive breast cancer fell by almost half.32 Trials have also identified treatments that are just as effective but less toxic and/or debilitating than prevailing gold standards. The National Surgical Adjuvant Breast and Bowel Project trials of the 1970s and 1980s, for instance, confirmed that radical surgery for breast cancer (mastectomies) offered no survival benefits over conservative surgery (lumpectomies) followed by radiation.33 Clinical trials have also vastly extended the range of drugs that can be safely tolerated while offering some benefit. This has given the oncologist many more therapies to turn to if and when any one treatment (or treatment combination) fails. Looked at as a fifty-year multibillion-dollar investment,34 however, clinical trials for cancer have not fulfilled the promise expected of them. For most cancers there are still no cures. Yes, some have come close to being curable (cervical cancer, childhood leukemias). And yes, survival can be extended by the earlier diagnoses and greater range of remedies now available. But metastatic disease remains a killer. Trials now chase smaller and smaller differences between existing and experimental treatments, at greater and greater expense. And it's not just the trials that are expensive. The drugs that come out of the process showing marginal improvements on existing pharmaceuticals can sometimes have astronomical price tags as well. Clinical trials of Abraxane, for example, a new drug for the treatment
of late-stage breast cancer, showed no survival advantage over the drug Taxol, which it closely resembles (both are based on the same compound, paclitaxel, from the Pacific yew tree). The side effects of the two drugs were also similar (both kill white blood cells, leaving patients vulnerable to infection). Abraxane, however, simplified and improved the delivery of the drug, reducing the likelihood of allergic reaction. For this advantage, its makers have been able to charge $4,200 a dose, compared to $150 for a dose of the generic paclitaxel. Where insurance companies are willing to foot the bill, there is little incentive for either doctors or patients to choose the older treatment rather than the new, despite the extreme disparity in costs. The model of the "informed consumer" has little meaning here since it is society at large rather than the individual patient that suffers the economic consequences of unregulated drug pricing policies.35 The number of new drugs approved for cancer is on the decline. An article in the New York Times in December 2005 summarized the situation: "Although every field has suffered, cancer has had the greatest chasm between hope and reality. One in 20 prospective cancer cures used in human tests reaches the market, the worst record of any medical category. Among those that gained approval in the last 20 years, fewer than one in five have been shown to extend lives, life extensions usually measured in weeks or months, not years."36 The public has rarely expressed impatience with the limited returns on clinical trials. This is partly because it confounds the recent breakthroughs in cancer science with those of cancer medicine. Advances in the understanding of the role of genes in carcinogenesis have indeed been impressive, but their translation into usable treatments remains, for the most part, uncertain and very far off. Curiously, the mode of attack of current cancer science seems to turn on its head the traditional approach to life-threatening disease. In taking action to control epidemics, public health pioneers followed their hunches—or epidemiological clues—without any understanding of the underlying mechanisms of the disease process. Now science is preoccupied with just those mechanisms, leaving the question of preemptive intervention moot. We are of course desperately grateful for the "smart drugs" that have appeared on the scene, promising to achieve the same or better results than traditional cancer chemotherapies while reducing many of their unwelcome side effects. No one is arguing against them. But isn't it important to keep alive the idea of eliminating cancer altogether, to obviate the need for clinical trials entirely? Where has that idea gone?
The public pays little attention to the economics of trials because it has no way of assessing them. There are no well-funded alternative strategies whose impact on mortality or incidence rates can be compared with those linked to trial results. We are never asked to consider their opportunity costs, the alternative investment strategies we might have pursued if clinical trials had not crowded them out. (What if the cost of a hundred unproductive trials for treating liver cancer had been invested instead in prevention of the disease? What would that look like?) As long as there are no answers to these questions, the shortcomings of trials will go unremarked. Yes, they are hugely expensive—but compared to what? The emphasis on cancer control rather than prevention displays a curious logic at work. To yield statistically significant results, clinical trials must enroll large numbers of patients. This inevitably favors the most common forms of the disease, like breast cancer, which now annually strikes 182,000 American women, and prostate cancer, which afflicts about 186,000 American men. One might argue that the greater the incidence of the disease, the greater the incentive to pursue prevention rather than treatment. Breast and prostate cancer together account for a third of new cancers diagnosed every year. A significant reduction in the incidence of either disease would save a great many lives and have a huge impact on cumulative medical costs. It would also spare many people the pain and suffering that accompany a diagnosis of cancer and any subsequent recurrence. But none of these potential gains seems to influence the calculus of those setting research priorities. The substantial numbers of cancer patients requiring multiple rounds of treatment may look like a major medical failure to some, but to those in the cancer industry they represent a market. And it is one that is more or less guaranteed to grow, given the rising incidence rates of many cancers, the lack of cures, and the growth and aging of the American population. Even where incidence rates may be falling (as for prostate cancer), the absolute number of those diagnosed with disease will keep rising because the impact of the other factors will outweigh any drop in incidence; the population will continue to grow and will continue to age. The markets for cancer treatments are, in other words, remarkably stable, as markets go. Since the early 1970s, the total number of new cancer diagnoses has doubled. Spending on cancer treatment has grown even faster, quadrupling over the same period. The NCI puts the direct costs of treatment in 2005 at $74 billion.37 This means that in 2005, every new cancer patient spent twice as much on treatment, in real terms, as he or she would have done in
1972. Underpinning the growth in treatment costs has been government investment in clinical trials. This has doubled in just the last fifteen years. In 2004, NCI funding for trials reached $800 million.38 This is a well that is hardly likely to run dry. In 1950, before the advent of most diagnostic tests and therapies commonly in use today, about 200,000 Americans died of cancer. In 2000, half a century later, after the introduction of new screening procedures—Pap smears, mammography, PSA tests, colonoscopies—and after chemo- and radiotherapies became equal partners with surgery, the number of annual cancer deaths exceeded 500,000.39 Improvements in mortality rates, if and when they materialize, will not necessarily hurt the cancer industry. Those with disease histories who survive longer are, in fact, likely to undergo more treatment over the course of their lifetimes (the rise in treatment costs may already reflect this). Adding to these recidivists are many survivors of childhood cancers who face secondary cancers decades later. The numbers are especially high among those diagnosed before the 1980s, when treatments like whole-body radiation for leukemia or high-dose radiation for Hodgkin's lymphoma were in common use. These treatments, the price of survival, are thought to trigger other cancers and serious conditions down the road that will themselves require treatment.40 Here the circularity of radiation comes into play once again, its cure-or-kill logic manifested in a dynamic of self-perpetuating demand. The only thing likely to mitigate the trend toward larger treatment markets would be significant reductions in the number of cancer diagnoses. But under the current regime, diagnoses have tended to grow rather than to diminish. Improvements in diagnostic techniques, like the substitution of CT scans for X-rays in the detection of lung cancer, have turned up many more early tumors.41 But while the new approach may identify cancers earlier than before, it seems to have had little impact on the mortality rates of the disease. People live longer knowing they have cancer but die at roughly the same rates as they did before. And many of the tiny tumors that are now picked up by the more sophisticated imaging systems may turn out to be biologically harmless, unlikely ever to spread or kill. In the meantime, many more people have been swept up into the costly health care system. Capitalism by its nature chases new market opportunities wherever it can. Not content with promoting the treatment and early detection of cancer, it has now moved into prevention as well. Here, the pharmaceutical industry has taken the lead. By a semantic sleight-of-hand, drug companies have recast pharmacological intervention as a kind of chemoprevention. They offer drugs that promise to ward off disease. Reconfigured as preemptive
agents, the new drugs trade on an elastic concept of prevention, one that provides cover for a much deeper penetration of what is essentially treatment into the lives of healthy Americans. Oncologists have distinguished between "primary" prevention (intervention for relatively healthy individuals with no invasive cancer and an average risk for developing cancer), "secondary" prevention (intervention for patients determined by early detection to have asymptomatic, subclinical cancer), and "tertiary" prevention (symptom control, rehabilitation, or other issues in patients with clinical cancer).42 Like changes in exercise or diet, all variants of this definition target the individual rather than the environment. "Primary" prevention could potentially apply to the great majority of Americans (healthy, at average risk for disease). It would take the form of a drug regime, medicalizing an otherwise healthy population en masse. All drugs come with side effects and serious risks of their own. Many of these "preventive" drugs must be taken for long periods of time, if not for life. Some can, in rare cases, be as life-threatening as the diseases they are trying to prevent (which makes the term chemoprevention something of an oxymoron).43 The number of people likely to experience adverse side effects or illness could well exceed the number of cancers actually averted. As the advocacy group Breast Cancer Action, among others, has argued, the push for chemoprevention treats risk as a medical condition and may simply "result in disease substitution rather than disease 'prevention.' "44 Oncologists readily acknowledge "the convergence of prevention with therapy."45 They are not alone in exploiting this trend. Surgeons too have colonized the territory opened up by the new "prevention." Prophylactic mastectomies performed on women at high risk for breast cancer are a more radical form of surgery than most women actually diagnosed with the disease are now likely to undergo. For the rising proportion of those who opt for breast reconstruction at the same time, the "preventive" component of the combined procedures is overwhelmed by the cosmetic. Prophylactic mastectomies are, in fact, a good illustration of the way that treatment more generally has overtaken prevention. From the perspective of society, this represents a most inefficient way of eliminating disease. It is tackling the enemy one individual at a time, year in, year out, with little prospect of any significant reduction in the overall numbers of those who are truly vulnerable. But from the perspective of the cancer industry, things look very different. Preservation of the status quo guarantees stable (and continuously renewable) markets. Chemoprevention extends those
markets, bringing a much greater segment of the American population within reach. If a semantic subterfuge could bring prevention into line with medical practice, it could do the same for "environmental hazard," another concept that remained out of step with postwar medicine. Originally, the term was used to describe involuntary exposures to both natural and unnatural toxins, such as radon in the first instance, fallout and industrial wastes in the second. These were all inflicted on human beings, mostly without their knowledge or consent. There was little or nothing individuals could do to protect themselves from polluted air or water. Solutions rested with public health authorities; they alone commanded the resources and authority to compel improvements and the regulation of contaminants. With the shift of emphasis from public to private cancer risks, that is, the transfer of attention from atmospheric and industrial pollution to unhealthy ("lifestyle") habits, came a radical dilution of the term "environmental hazard." Now voluntary exposures to harmful substances (such as cigarette smoke and alcohol) were added to the list of involuntary hazards. The critical distinctions between them were suddenly blurred. Any cancer risk that did not arise "spontaneously" within the body could now be called environmental. That meant that gross estimates of cancers considered to be "environmental" in origin became essentially meaningless or, worse, camouflage for the deeper failure to understand where cancers come from. The word "environmental" had itself been detoxified. Since it could refer to almost anything, there was no longer a reason to fear it, in the context of cancer prevention at least. With all risks lumped together, it was no longer necessary to assign any single agency the job of controlling them. Why should the government bear any more responsibility than its citizens for the regulation of smoking when individuals made their own decision to smoke? The messier these questions of causation, the easier it became to take refuge in victim blaming, to place the individual at the center of cancer prevention.

Is Prevention Un-American?
The sidelining of both true prevention and true environmental hazards and the foregrounding of the informed “consumer” are all consistent with the larger shift towards an increasingly deregulated market economy. The pursuit of prevention was never going to be a profitable undertaking. But the development of new cancer treatments dovetailed nicely with the larger expansionist ambitions of the postwar economy. With its support for
radiotherapies, government had signaled its willingness to help underwrite the costs of market medicine. Its early pump priming anticipated the later profitability of treatments like radio- and chemotherapies. Unlike surgery, which had been the dominant mode of treatment through the Second World War, the newer cancer treatments could be packaged and marketed as commodities. With government supplying the start-up costs (through legislation as well as funding), medical technologies and medical science could begin to reorient the national response to the disease, shifting it from something akin to a loose collection of services to a full-fledged industrial operation. Once on its feet, every branch of the new industry would be driven by the same financial imperatives as any other—and each would become self-regulating. There would be an ever-diminishing role for government and an ever-increasing role for the cancer patient, who would become an active player in medical decision making. The economics of prevention, by contrast, set aside the model of individual behavior underpinning this picture of free-market medicine. The search for (and investment in) true prevention is designed to lift the burden of decision making from both the cancer patient and her physician. For images of consumption between so-called free agents, it substitutes images of control and regulation. Violating basic tenets of market economics, such a pursuit would sanction unwelcome intervention into the operations and investment decisions of private enterprise, with potentially crippling effects on productivity. Markets could be lost and profits foregone as suspected carcinogens used in production or embedded in final products were identified and outlawed. The costs would no longer be spread across the entire population of cancer patients; they would target the producers and suppliers of identifiable products and services. And even if the threat of intervention remained just that, it would be enough, in itself, to inhibit research and development going forward; it would certainly discourage prospective investors. But a commitment to prevention could, in fact, coexist with the current arrangement without unduly disturbing the great majority of ongoing cancer research. As a distinctive attitude of mind, it might shift the emphasis of some work already underway, shining more of a light on those aspects with more pronounced implications for prevention. It would create a more receptive climate for many new research proposals that, under the present regime, stand little chance of ever being considered, let alone funded. Perhaps most importantly, it would create an alternative framework for evaluating overall progress against the disease, another way of thinking about endpoints
and results. Benchmarks for reductions in the incidence of disease would become as common—and as eagerly anticipated—as those justifying the use of new drugs that extend survival by three months. But, in reality, the pursuit of prevention would have to contend with a free-market system that is considerably more powerful today than it was fifty years ago and has considerably more at stake. The economic climate is much more favorable to deregulation of every kind, including the relaxation of controls on environmental hazards. The secrecy mandated by government contracts during the Cold War has now been replaced by industrial secrecy, which is every bit as effective at controlling and/or releasing information on a strictly need-to-know basis. Investment decisions are no more accountable to the public than were Cold War research objectives before them. The prospect for a change in direction, therefore, is gloomy. Of course, even with multilateral support from government and private enterprise, the pursuit of primary prevention is no more guaranteed of success than the continuing pursuit of cancer cures or improved treatments. Prevention might, in fact, prove to be even more elusive and even more expensive. The point is, we don’t know because we have never cared to know. Interest in prevention peaked in the late 1970s and early 1980s. Since then, there has been no sustained public support for it in the United States, as there has been in Europe where government intervention is not nearly so problematic. Environmentalists there have promoted an approach to prevention that is unequivocal in its reassignment of risk. The precautionary principle, formally articulated in 1998, elaborates an idea first aired by Rachel Carson in the early 1960s.46 It states: “When an activity raises the threat of harm to human health or the environment, precautionary measures should be taken even if some cause and effect relationships are not fully established scientifically. In this context, the proponent of an activity, rather than the public, should bear the burden of proof.”47 In this formulation, prevention preempts science. Or more exactly, it turns the Cold War exploitation of science on its head. The difficulties inherent in proving cause and effect (between fallout and cancer) were cynically manipulated for decades, to buy time and provide cover for the nuclear weapons program. Now the same science is to be used defensively, to buy time to protect the public from potential harm. It would be up to “the proponent of an activity, rather than the public” to demonstrate that a new chemical or biological compound was safe before it could be used or marketed. In postulating this inversion of prevailing practices, the
precautionary principle essentially restores to the communal realm the traditional public health responsibilities that the Cold War so eagerly discarded. Unfortunately, the peremptory ordering of Cold War priorities gave much too much power to the "proponents" of what turned out to be hazardous activities. Decades later, that power has become too thoroughly entrenched to dislodge. It is propped up by a massively balkanized and bureaucratized approach to regulation (the responsibility for radiation alone is shared by more than half a dozen federal agencies and several jurisdictions).48 The net effect of this decentralized oversight is, inevitably, to play down the aggregated effects of environmental hazards as a whole. Not only does the current arrangement discourage awareness of the compound effects of different carcinogens on our biological health; it also dumbs down our understanding of the cumulative impact of repeated exposures to any one hazard over the course of a lifetime. The radiation standards-setting bodies might have caught up with these concerns, but they have not. Today, the primary interests of the nuclear energy industry still prevail, just as they did in the Cold War. Anxiety about the development of nuclear weapons by what are deemed "rogue" states (such as Iran and North Korea) tops the agenda, locating radiation issues within the familiar framework of defense objectives that are tied, as they were in the Cold War, to fears of foreign aggression. The status of radiation as an official health concern remains as much in the shadows today as it did fifty years ago when the appearance of fallout first raised the alarm (sporadic attempts to rouse public indignation have all failed to gain critical mass).49 Physicists still make up about half the membership of the ICRP's Main Committee, the organization's decision-making forum. There are still no oncologists serving as members. Nor, according to the environmental epidemiologist Rosalie Bertell, are there any specialists in public or occupational health or pediatricians or radiobiologists or even nuclear engineers who might improve the designs of nuclear reactors.50 To the list of the excluded we might add environmental activists like Bertell herself. We might also consider the inclusion of economists, political scientists, and ethicists. But to suggest such a dilution of the scientific membership would be to imply that the process of standards-setting relies on value judgments as much as on science. It would also expose the wider ramifications of the decisions that the Main Committee approved, subjecting them—and the ICRP itself—to public scrutiny and debate. The ICRP has resisted this. It remains an atomic club, setting its own agenda and appointing its own self-perpetuating membership. Its survival
as a powerful elite points to the enduring success of Cold War initiatives that were able to bypass democratic protocols under the aegis of national security concerns—and never looked back. Though operating independently and formally accountable to no higher authority, the ICRP is enormously influential, advising governments around the world. The International Atomic Energy Agency (IAEA), set up in 1957 as the world's Atoms for Peace organization within the United Nations, turns to it for advice rather than to its own sister operation, the World Health Organization. In fact, almost all countries adopt its radiation protection standards. The most notable exception is the United States, whose standards since 1970 have been set by legislation under the auspices of the Environmental Protection Agency (EPA) and administered by the Nuclear Regulatory Commission. (At the moment, U.S. guidelines are less stringent than those advocated by the international body.)51 The painstaking and comprehensive recommendations mapped and monitored by radiation standards bodies (both national and international) create the impression that the early dangers of radiation have now been neutralized. Committee reports seem to have anticipated every setting in which radioactivity may be released, acknowledging very fine distinctions in the tolerances of each vital organ, and in the emissions associated with every diagnostic or medical procedure and every occupational exposure. The thoroughness of coverage conveys the impression that the management of radioactive emissions is now fully understood and therefore under control.52 But the medical literature is full of disturbing findings that suggest otherwise. Some studies point to elevated rates of a second cancer following treatment with either chemotherapy or radiotherapy. Others suggest the possibility of magnified risks associated with compound exposures to more than one source of radioactivity and repeated exposures to the same source. Radiation protection standards do not adequately take this complexity into account. Their failure to formally acknowledge the state of jeopardy that hangs over a lifetime of exposures says a great deal about the prevailing official perspective. In the meantime, radioactivity has crept into more marketed products, more industrial and agricultural processes, more medical and dental procedures than ever.53 Newer diagnostic imaging techniques like computed tomography (CT) expose patients to much higher doses of radiation. Few Americans are aware, for example, that a CT scan of the chest and a barium-swallow X-ray study expose a patient to doses that are, respectively, thirteen and twenty-five times higher than that associated with a screening
mammogram.54 In 2007, a study from the National Council on Radiation Protection and Measurements reported that nuclear medicine exams have risen almost threefold over the past quarter century (from 6.4 million exams in 1980 to 18.1 million in 2006). And while CT scans account for just 12 percent of the total number of exams, "they deliver almost half of the estimated collective dose of radiation exposure in the United States."55 A Public Health Service evaluation puts the share of worldwide radiation exposure attributable to medical diagnosis and treatment at 55 percent, with all other manufactured sources accounting for just 2 percent of the total.56 The use of radiotherapies in the treatment of cancer has kept pace with these other developments. Between 1974 and 1990, the number of new cancer patients receiving radiation treatment of some kind rose by 60 percent, and the number of radiation oncology facilities grew by almost 30 percent.57 Over the same period, the word "fallout" has shed its radioactive overtones and is now widely used to connote unanticipated side effects of any kind; today it is much more likely to imply financial, social, or psychological consequences than anything physically threatening. Radiation as a "problem" introduced by atomic fallout has essentially faded from view. A good indication of the public's lack of engagement with the issue is the almost universal ignorance of the very concept of radiation standards. To some extent, this reflects the more pervasive indifference to nuclear energy at the start of the twenty-first century after a long moratorium on the construction of nuclear power plants.58 Apart from those working in high-risk environments, most people have little or no idea of the complex maze of regulations and recommendations that together create the universe of permissible doses. Only a very few have ever heard of rads. It hasn't helped that the units used to measure radiation have been redefined many times over. Though the changes have allowed emissions and exposures to be calculated with ever greater precision, the switch from the original roentgens, first to rems, then to rads, then to sieverts and grays, has disrupted the continuity of the story. Without a stable concept of measurement to guide us, we remain clueless about secular trends in the levels of risk. Compare our ignorance here with the sometimes obsessive zeal with which we casually throw around the concepts of nutrition—grams of fat or cholesterol, calories, recommended daily allowances, international units, and so on. In reality, our familiarity with the components of diet may reflect as much brainwashing by advertising as genuine scientific understanding, but at least it provides a lexicon that opens up discussion and
argument. Radiation, by contrast, sells nothing and has zero marketing appeal. It has to be brought to consciousness by other means. One antidote to our continued sleepwalking is to consider the use of radiation audits. No standards-setting body currently recommends that we keep a running tally of our exposures.59 At an annual checkup, no doctor or dentist automatically asks about our radiation history. In theory, such an assessment would keep track of all measurable exposures over a lifetime. This would deepen the meaning of informed consent when applied to decisions about radiotherapy and diagnostic procedures involving radiation. Like a history of vaccinations, audits would bring another dimension of health history into play, imposing a longer perspective on every medical encounter and repositioning the patient within it. Every recommendation for a diagnostic test or therapy involving radiation would be discussed within this larger context. Such an intervention may look like just another attempt to offload responsibility onto individuals and, as such, another sign of the withering away of true public health activism. But it is at least a way of broadening the idea of accountability, acknowledging the role of factors beyond "lifestyle" considerations that contribute to one's biological health history. To the simple tally of calories consumed, miles run, drinks or cigarettes forsworn, would be added another column tracking the doses of radiation associated with every diagnostic test, X-ray, treatment, or occupational exposure. More hopefully, the maintenance of an integrated record of all exposures might also foment an engaged environmental awareness that, eventually, could transform life histories into catalysts for change. Given the growing awareness that global environmental damage may not, in all cases, be reversible, that the depletion of some natural resources may be permanent and consequential, isn't it reasonable to hope that a parallel mindfulness of environmental harm to the body will follow? The concept of the body burden already captures the idea of accumulated harm.60 Its measurement of toxic residues—chemicals and other substances that build up in the body over time—is itself a condemnation of public health and prevention policy. It also makes a mockery of victim blaming since individuals are not only unaware of these toxins but powerless to block them. So it is no longer just exposure to radiation that leaves potentially damaging aftereffects but exposure to hundreds of other substances, many of whose long-term consequences are already well known.61 When will the idea of body burdens catch up with that of carbon footprints? If we wait for market economics to sort this out, we may be courting
extinction. There can be no human equivalent to the carbon “offsets” that allow pollution to continue undiminished through the use of compensating investment in green initiatives elsewhere. The natural lifespan of the planet, however compromised by human intervention, still offers vast opportunities for exploitation. These are denied the individual, whose period of existence in geological time is trivial, and whose resources are distinctly nonrenewable. The importance of a lifetime perspective on environmental harm to the body was, in fact, anticipated fifty years ago, at a time when the fear of radiation was still rampant, before it was subdued within official bureaucracies. The National Academy of Sciences’ 1956 report on the effects of radiation suggested that “steps should be taken to institute a national system of record-keeping, under which every individual would have a complete history of his exposure to X-rays and other gamma radiation.”62 Implicit in this proposal was a more rigorous application of the doctrine of informed consent since individuals would of necessity be drawn into a process that, with time, would broaden their understanding of the hazards involved while sharpening awareness of their own role in shaping their medical destiny. Those putting forth such a plan knew that implementation posed serious difficulties but thought they were outweighed by the “real protection” that radiation records would confer. What happened to this idea? It was quickly rejected by those who knew most about the dangers of overexposure. When the United Nations’ own committee on radiation asked the ICRP to consider the desirability of radiation record cards, Lauriston Taylor, head of the United States’ delegation to the international commission, voted against it. The ICRP rejected “the proposal of individual radiation records as excessively costly, difficult to administer and of doubtful accuracy and value.”63 Abstract recommendations on paper were, no doubt, a lot less problematic. Hypothetical exposures, in other words, were always going to be preferable to actual exposures, just as they would be in the dose reconstruction studies of fallout carried out by epidemiologists later on. In the early years of the weapons testing program, at a time when the government was doing all it could to suppress concerns about radioactivity, the prospect of an informed population aware of the risks of radiation in advance must have struck terror in the heart of the AEC. Record cards faded quickly from the public agenda, never to resurface.64 Alas, such a tactic could not even be conjectured for most of the thousands of other synthetic carcinogens that currently fly under the radar. They are just too deeply embedded in industrial processes or marketed
products for consumers even to be aware of them, let alone measure or keep track of them. The biochemical interactions between them present even more formidable challenges. The compound risk of exposures to everything from household and commercial cleaning solvents to pharmaceuticals to fertilizers and pesticides would be, literally, incalculable. Add to this an unknown number and volume of toxins in products imported into the United States from every country around the world. Too numerous and variable—and much too costly—either to identify or evaluate, their collective impact on the etiology of cancer remains essentially beyond our reach. It is no longer medical science that can sort this out, even if it chooses to. It requires sustained political commitment and the determination to intervene directly—and forcefully—in the comprehensive phasing out of any and all substances suspected of doing harm. With sufficient resources and perseverance, safe substitutes for almost every one of them can almost certainly be found. The official manipulation of radioactivity for political purposes was perhaps the first clear indication of what would become a comprehensive accommodation with environmental carcinogens. It has been singled out here because its history set a path for many others to follow. Public outrage about its dangers peaked a quarter of a century ago when the New York Times could report that "From New York to Washington State, from Texas to Montana, people are edgy, if not downright angry, over radiation."65 Since then, we have become habituated to it and to most of the many thousands of other manufactured hazards that have come along after it. Without hesitation, Americans now buy and use products that expose them every day to a slew of known carcinogens.66 We know very little about the risks they pose. Our apparent indifference should itself be seen as a legacy of almost a half century of misinformation and of our surrender to the political and economic system that set it in motion. In this sense, the Cold War has scored a lasting victory. The boundaries it worked so hard to undermine—between the safe and harmful uses of radioactivity, between atoms for peace and atoms for war—have been well and truly blurred. The current balance of power among them, though always subject to change, points to a long-term accommodation within civil society of distinctly Cold War objectives. The postwar history of radioactivity animates this dynamic, documenting its transformation, over the past sixty years, from a phenomenon inducing awe and terror to one that has become largely invisible, hiding in plain sight. Unlike many other aspects of American culture that the end of the Cold War rendered harmless or subject to ridicule (the language and
imagery of anticommunism in literature, films, and propaganda, for example), the influence of nuclear energy remains both profound and pervasive. It remains, however, a cultural undercurrent, rumbling beneath us but rarely directly observed. Its impact on the response to cancer over the past half century has had to be teased out of other stories. But without question, though the evidence remains fragmentary and refuses to submit to a simple story line, it is clear that the Cold War's response to radioactivity has left its mark, not least on the bodies of the millions of Americans exposed to it, whether intentionally or not. The subversive overtones of cancer that still hover over the disease today owe quite a lot to its forced association with Cold War connivance. So too, I would argue, does the orientation of much of today's research. The search for true prevention has been a casualty of this relationship, a victim of the protracted struggle over fallout as much as an obstacle to unfettered industrial expansion. The long-term hope of eradicating cancer once and for all does survive as an ethical and political commitment, even if not much in evidence on American soil. But in the United States, where support for unregulated markets is much stronger than in Europe and where, correspondingly, more of the national response to cancer has been privatized, there is no dynamic at work to hasten the demise of any profitable market, no matter how toxic its products (tobacco remains a lucrative industry to this day). If anything, the tendency is to move in the opposite direction, to expand opportunities for private capital, wherever possible. For radiotherapy, that has meant moving into nuclear particle accelerators. These hugely expensive machines (costing more than $100 million) shoot protons into tumors with greater speed and precision, and with fewer side effects, than more traditional forms of radiation. Used originally to treat rare eye and brain tumors, they have now been adopted for the treatment of much more common prostate cancers, at almost twice the cost of conventional X-ray therapy. But according to some radiation oncologists, "there are no solid clinical data that protons are better."67 Nevertheless, a handful of hospitals like the M. D. Anderson have recently opened proton centers (in the case of the Anderson, its new $125 million center is run as a for-profit entity).68 The vulnerability of cancer patients to the lure of improved therapies and their willingness to pay for them feeds a system that is open to abuse. The government's reluctance to serve as an effective watchdog, to rein in commercial enthusiasms or excessive costs, harks back to its hands-off ideology of fifty years ago. The AMA's campaign against socialized medicine
would not have packed the wallop it did without the Cold War to animate it, adding visions of "Big Brother" hovering over the autoclave and the prescription pad to its arsenal of anti-Soviet imagery. Not only did the campaign kill off the idea of a national health insurance plan, it also deprived the country of a debate on the role of government in health care. Any attempt to raise the issue was shot down by extremist propaganda from the AMA and other medical lobbying groups. The shibboleth of "socialist medicine" may have been more a paper tiger than a real threat, but it succeeded in truncating the public conversation about responses to disease. Americans have grown to be more familiar with arguments against public intervention in the provision of health care than they are with arguments documenting the economic—and human—advantages of a more engaged response. They are rarely reminded of the huge loss in fiscal discipline that has accompanied the rise of private sector dominance. The difficult births of piecemeal correctives like Medicaid and Medicare demonstrate the defensive struggle against a default position that is no longer a fundamental concern for the public's health but a desire to ease the operations of capital, wherever possible. Public financing of research and development remains acceptable; beyond that, danger lies. The conversion of cancer from a crisis to a chronic disease inevitably feeds this market-driven approach, drawing more and more Americans with their own or family history of cancer into markets for risk-reducing, life-prolonging, and/or pain-controlling treatments. Unregulated capitalism will never get to the heart of the matter. It is unlikely to cure cancer on its own. Only when harnessed to a political system with the power to address its shortcomings and to reorder its priorities will the vast resources of the cancer establishment be redirected toward the necessary task of putting itself out of business.
Notes
Introduction
1. See Angela N. H. Creager, "Nuclear Energy in the Service of Biomedicine: The U.S. Atomic Energy Commission's Radioisotope Program, 1946–1950," Journal of the History of Biology 39 (November 2006): 649–684.
2. Robert Proctor's Cancer Wars: How Politics Shapes What We Know and Don't Know About Cancer (New York: Basic Books, 1995). Gerald Kutcher also makes the leap between military objectives and medical practice in his "Cancer Therapy and Military Cold-War Research: Crossing Epistemological and Ethical Boundaries," History Workshop Journal 56 (2003): 105–130.
3. A substantial literature on breast cancer, for example, followed the lead of the earlier literature on AIDS, acknowledging cancer as a hotly contested political issue. See Roberta Altman's early Waking Up/Fighting Back: The Politics of Breast Cancer (Boston: Little, Brown, 1996). On the empowerment of the breast cancer patient see, for example, Marcy Jane Knopf Newman, Beyond Slash, Burn, and Poison: Transforming Breast Cancer Stories into Action (New Brunswick, N.J.: Rutgers University Press, 2004). On the role of environmental hazards as a precipitating factor in the development of the disease, see Sandra Steingraber, Living Downstream: A Scientist's Personal Investigation of Breast Cancer and the Environment (Reading, Mass.: Addison-Wesley, 1998).
4. For example, government programs gave house building a tremendous boost by offering very generous loans to prospective homeowners and by guaranteeing home mortgages (both through Title VI of the National Housing Act). Suburban development was facilitated by federal highway building, which guaranteed that workers newly displaced to the outlying suburbs would have easy access to their jobs.
5. One of the earliest and most valuable contributions to the history of "nuclear consciousness" is Paul Boyer's By the Bomb's Early Light (New York: Pantheon Books, 1985). For more recent work, see American Cold War Culture, ed. Douglas Field (Edinburgh: Edinburgh University Press, 2005); and Robert Griffith, "The Cultural Turn in Cold War Studies," Reviews in American History 29 (2001): 150–157. Interest in the cultural and social history of cancer has, by comparison, been much more modest. The first and in many ways still the most comprehensive cultural history of the disease is James T. Patterson's The Dread Disease: Cancer and Modern American Culture (Cambridge, Mass.: Harvard University Press, 1987). The publication, in the spring of 2007, of a special issue of the Bulletin of the History of Medicine 81(1), devoted to "Cancer in the Twentieth Century" with contributions from scholars on both sides of the Atlantic, points to an encouraging broadening of interest.
6. Rearmament following the Korean War pushed military purchases as a share of Gross National Product to an unprecedented "peacetime" high of 7.5 percent, where it remained. Before World War II, this share had typically been closer to 1 percent. Robert Higgs, Depression, War and Cold War (New York: Oxford University Press, 2006), 143.
7. "A Plot to Steal the World," Work & Unity Group, 1948, 16 pp.
8. Cyndy Hendershot, Anti-Communism and Popular Culture in Mid-Century America (Jefferson, N.C.: McFarland, 2003), 20.
9. David Caute, The Great Fear: The Anti-Communist Purge under Truman and Eisenhower (New York: Simon and Schuster, 1978), 22.
10. Clifford Geertz notes the power of metaphor to "symbolically coerce" discordant meanings into "a unitary conceptual framework." The Interpretation of Cultures (London: Fontana, 1993), 211. Cold War–forged cancer metaphors, for example, had to disregard the links between toxic workplace exposures and later malignancies, despite the fact that such connections had been made since late in the eighteenth century when chimney sweepers (exposed to tar and soot) fell victim to scrotal cancers at very high rates.
11. Lynn Boyd Hinds and Theodore Otto Windt Jr., The Cold War as Rhetoric: The Beginnings, 1945–1950 (New York: Praeger, 1991), 40, 96.
12. Ibid., 81. The word "malignant" did not originate as a descriptor of cancer. It first denoted something "evil in nature" or "gravely injurious" such as "malignant narcissism," but not necessarily life threatening. It also was used to describe someone "keenly desirous of the suffering or misfortune of others" (according to the Oxford English Dictionary). But since the onset of the Cold War, all these meanings, like those of "tumor" and "metastasis," have crystallized around cancer and are now almost exclusively associated with this disease.
13. Republican Party Platform. The first day of the national nominating convention was August 20, 1956.
14. In fact, the cancer ward portrayed in Solzhenitsyn's novel proves to be far different from the gulag that most readers might expect. In many respects the ward offers a safe haven from the terrors of the outside world, where the common concerns of the patients, confined together over weeks if not months, generate surprisingly open discussions of politics, philosophy, and human relationships. It is also an environment in which the rigid hierarchies of rank so characteristic of Soviet society—between patient and staff, party member and nonparty member, apologist and dissident—are often relaxed.
15. Compared to the public rhetoric, the private discourse about cancer is hard to pin down with any certainty. There is copious evidence of letter writing by individuals with cancer histories (or their relatives), addressed to government bodies involved with the disease in one way or another. When the FDA, for instance, tried to curtail the activities of Harry M. Hoxsey, whom the agency believed to be a cancer quack, they received letters from Americans protesting or defending the FDA's position. See David Cantor, "Cancer, Quackery, and the Vernacular Meanings of Hope in 1950s America," Journal of the History of Medicine and Allied Sciences 61 (2006): 324–368. Many Americans also wrote to the Cancer Research Program at the Oak Ridge Institute for Nuclear Studies (ORINS) when word of its own thirty-bed cancer clinic hit the press. They too were patients (or their relatives) desperately seeking treatment or advice. So there is evidence that some Americans were comfortable enough with the disease to confront it head-on. But their letters had a purpose: they sought an immediate response from the experts. They were not written to be shared with a larger audience as are Letters to the Editor of a major newspaper. The outing of one's own cancer experience for the purpose of reaching other unknown individuals with a similar history seems to be a much later phenomenon.
16. Elmer Holmes Bobst, Bobst: The Autobiography of a Pharmaceutical Pioneer (New York: David McKay, 1973), 229.
17. George Crile Jr., Cancer and Common Sense (New York: Viking, 1955), 9, 8.
18. "Spy Agencies Say Iraq War Worsens Terror Threat," New York Times, 24 September 2006, 1.
19. Michiko Kakutani, "From Planning to Warfare to Occupation: How Iraq Went Wrong," review of Fiasco: The American Military Adventure in Iraq by Thomas E. Ricks, New York Times, 25 July 2006.
20. "Lebanese-American Financiers Differ on How to End War," New York Times, 1 August 2006, C1. Occasionally, the metaphors are reversed, with Cold War imagery summoned to illuminate cancer rather than vice versa. A man chronicling his own encounter with disease writes in New York that "cancer is a nuclear bomb, not a tactical weapon, and so there is fallout." Jon Gluck, "The Radioactive Dad," New York Magazine, 28 May 2007, 107. "Radioactive" continues to be used to denote something dangerous or controversial.
21. Susan M. Chambré, Fighting for Our Lives: New York's AIDS Community and the Politics of Disease (New Brunswick, N.J.: Rutgers University Press, 2006), 35.
22. Henry Allen, "America Faces a New 'Black Plague' of Filth," Los Angeles Times, 19 October 1959, B4; Howard Kennedy and Paul Beck, "Cancerous Growth of Crime Feared in Pan Card Parlors," Los Angeles Times, 30 August 1964, E1.
23. Newsweek, 1 February 1971, 76.
24. Arval A. Morris, "A New Lease on Malignancy," The Nation, 22 March 1965, 299.
25. "The Shape of Things We Hope Won't Come," Chicago Daily Tribune, 25 June 1960, 14.
26. George J. Church, "Trading Stamp Surge: A 'Cancerous Evil' or a Sales Stimulant?" Wall Street Journal, 24 May 1956, 1.
27. Eliot Janeway, "Insurance Termed Business Necessity," Chicago Tribune, 7 August 1967, C7.
28. Spencer R. Weart, Nuclear Fear: A History of Images (Cambridge, Mass.: Harvard University Press, 1988), 189–190.
29. Peter H. Irons, "American Business and the Origins of McCarthyism: The Cold War Crusade of the United States Chamber of Commerce," in The Specter: Original Essays on the Cold War and the Origins of McCarthyism, ed. Robert Griffith and Athan Theoharis, 79 and n17, 303 (New York: New Viewpoints, 1974).
30. Illness as Metaphor (New York: Farrar, Straus & Giroux, 1977), 38. Sontag framed the question—but did not provide an answer to it. In this book, she is more interested in the uses of metaphors applied to cancer while I am more interested in the uses of cancer metaphors applied to the Cold War.
31. Paul Stewart Henshaw, "Atomic Energy: Cancer Cure or Cancer Cause?" Science Illustrated 2 (November 1947): 46–47+.
32. The Atomic Energy Commission was set up just after the end of World War II to establish civilian control over atomic research and development. Its responsibilities included the manufacture and testing of nuclear weapons, the development of nuclear reactors for military and civilian use, and research in biological and medical sciences. Although the bulk of the AEC's work was in the field of atomic weaponry, it took on the development and regulation of the commercial nuclear power industry after the passage of the Atomic Energy Act in 1954. The agency was dissolved in 1974 and its responsibilities handed to two new agencies—the Energy Research and Development Administration, charged with the management of nuclear weapons, and the Nuclear Regulatory Commission, charged with the regulation of the nuclear power industry. In 1977, both agencies were folded into a new Department of Energy.
33. “End of World Seen with a Cobalt Bomb,” New York Times, 21 February 1955, 12. The “real” cobalt bomb involved encapsulating a hydrogen bomb in a cobalt casing which, upon detonation, would inflict radiation on its victims far and wide. Newspaper coverage of cobalt for peace (“Three More ‘Bombs’ Ready,” New York Times, 9 February 1953, 16) was easily confounded with stories of cobalt for war. 34. “Atoms in Peacetime,” New York Times, 13 November 1957, 34. 35. For an international view of the impact of atomic energy on the development of radiobiology, see Angela N. H. Creager and Mariá Jesús Santesmases, “Radiobiology in the Atomic Age: Changing Research Practices and Policies in Comparative Perspective,” Journal of the History of Biology 39 (November 2006): 637–647. 36. Kenneth Osgood, Total Cold War: Eisenhower’s Secret Propaganda Battle at Home and Abroad (Lawrence: University Press of Kansas, 2006), 169. If the U.S. could establish a toehold in the countries interested in buying reactors, Osgood points out, they would come to rely on American suppliers for every aspect of the power plant’s operations—design, construction, repair and maintenance, etc. 37. In addition to the 16,000 X-ray operations, the city’s sources of radiation also included “214 deep therapy installations, 70 veterinary installations, 416 radioisotope users, and 140 radium users.” Leona Baumgartner and Hanson Blatz, “Control of Common Radiation Hazards in New York City,” Public Health Reports 76 (July 1961): 584. 38. National Academy of Sciences, National Research Council, The Biological Effects of Atomic Radiation: A Report to the Public (Washington, D.C., 1956). 39. Radioactive mutants excited the public imagination—killer ants and giant squids were a great hit at the box office. See Weart, Nuclear Fears, 191–195. 40. The Incredible Hulk got too close to a detonating bomb of gamma rays, Spider-Man was bitten by a radioactive spider, and the Fantastic Four were bombarded by a radioactive solar flare while traveling in a spaceship. There were hundreds of other characters similarly transformed, including a Chinese nuclear physicist who exposed himself to massive doses of the mysterious energy to emerge as Radioactive Man. See The Marvel Comics Encyclopedia: The Complete Guide to the Characters of the Marvel Universe (New York: DK Adult, 2006). 41. Barry Commoner, “The Fallout Problem,” Science 127 (2 May 1958), 1024. 42. “The Nature of Radioactive Fallout and Its Effects on Man,” Hearings before the Special Subcommittee on Radiation of the Joint Committee on Atomic Energy, Congress of the United States, Eighty-fifth Congress, first session (Washington, D.C.: Government Printing Office, 1957), Parts 1 and 2. The hearings were held 27 May–7 June. 43. Bernie Gottschalk was an undergraduate physicist at Rensselaer Polytechnic Institute in Troy, New York, in 1953 when Geiger counters registered levels of background radiation that were 20 to 100 times higher than normal. He pointed out a study showing a local incidence of leukemia that was twice as high for children born in the two-year period 1953–1954 than for children born in any other two-year interval. “But,” he cautioned, “because of the small number of cases, the statistical significance is debatable.” Personal communication with the author, 17 September 2007. See Ernest J. Sternglass, Secret Fallout: Low-level Radiation from Hiroshima to Three Mile Island (New York: McGraw-Hill, 1972), chapter 5. 44. James Patterson, Dread Disease, 193. 
Citing the AMA’s health magazine Hygeia, December 1947, 916.
45. Welsome puts a human face on these experiments, documenting their consequences for seventeen individual patients identified in the files by code name only. None of the participants had been told that the injections offered no therapeutic benefit. One of them, Albert Stevens, diagnosed with stomach cancer, turned out to be suffering from an ulcer, not a malignancy. See The Plutonium Files, 90–96.
46. Clinton created the Advisory Committee on Human Radiation Experiments (hereafter ACHRE) in January 1994 in response to “the growing number of reports describing possibly unethical conduct of the U.S. government, and institutions funded by the government, in the use of, or exposure to, ionizing radiation in human beings at the height of the Cold War.” The 4,000 experiments the committee found were carried out at close to a thousand institutions. See Advisory Committee on Human Radiation Experiments Final Report (ACHRE Report) (Washington, D.C.: U.S. Government Printing Office, 1995). The report includes a lengthy discussion of the ethics of research using human subjects and an extensive review of case studies. A second volume reproduces some key documents and a third volume serves as a useful guide to the experiments themselves (lists of the principal investigators, participating institutions, relevant published materials, etc.). There is no claim to exhaustive coverage—and, alas, no index.
47. The cancer death rate rose from 66.4 per 100,000 of population in 1901 to 138.7 in 1949. “Cancer Gaining as Death Cause thruout World,” Chicago Daily Tribune, 16 July 1952, 16.
48. “Statement of Frank Conahan,” National Security and International Affairs Division, Human Experimentation: An Overview on Cold War Era Programs (Washington, D.C.: General Accounting Office, 1994), 3. Over the period 1945 to 1962, Conahan estimates that “approximately 210,000 DOD-affiliated personnel, including civilian employees of DOD contractors, scientists, technicians, maneuver and training troops, and support personnel, participated in 235 atmospheric nuclear tests.” A further 199,000 were exposed to radiation at work. The participants in the four thousand experiments identified in the ACHRE Report must be added to this. Their number is unknown but in the thousands.
49. By the end of the twenty-first century, fallout will have caused an estimated 430,000 fatal cancers worldwide. The 140,000 attributed to American atmospheric testing represent 33 percent of the total, corresponding to the United States’ release of 33 percent of the total quantities of fissile materials. Stephen I. Schwartz, ed., Atomic Audit: The Costs and Consequences of U.S. Nuclear Weapons since 1940 (Washington, D.C.: Brookings Institution Press, 1998), 428.
50. Karl Z. Morgan, The Angry Genie: One Man’s Walk through the Nuclear Age (Norman: University of Oklahoma Press, 1999), 102. The statement overlooks the Pacific Islanders and Japanese fishermen who were also directly harmed by U.S. weapons testing in the Pacific Ocean.
51. Arnold S. Relman, “The New Medical-Industrial Complex,” New England Journal of Medicine 303 (1980): 963–970.
52. For information purposes only, the National Toxicology Program (Public Health Service) has published a biennial Report on Carcinogens since 1978. The most recent (eleventh) edition includes fifty-eight known carcinogens, including, for the first time, X-ray and gamma radiation. It also provides a longer list of potential carcinogens-in-waiting, substances “reasonably anticipated to be a human carcinogen,” with summaries of the existing scientific evidence backing their inclusion.
Chapter 1. Double Jeopardy
1. The first personal chronicles of breast cancer began to appear in the 1970s with the publication of Rosalind Campion, The Invisible Worm (1972); Rose Kushner, Breast Cancer: A Personal History and Investigative Report (1975); and Betty Rollin, First You Cry (1976).
2. Cobalt-60 is produced by bombarding nonradioactive cobalt-59 with neutrons. It decays over a half-life of 5.3 years by emitting energy as beta and gamma rays, eventually returning to a nonradioactive state. Other common isotopes produced at the same time and also investigated for their potential role in cancer treatment were iodine-131, phosphorus-32, and carbon-14.
3. “The Bomb Secret Is Out!” American Magazine, December 1947, 137. Quoted in Paul Boyer, “‘Some Sort of Peace’: President Truman, the American People, and the Atomic Bomb,” in The Truman Presidency, ed. Michael J. Lacey (New York: Cambridge University Press, 1989), 190.
4. “Availability of Radioactive Isotopes. Announcement from Headquarters, Manhattan Project, Washington, DC,” Science, 14 June 1946, 697–705. The article invites interested researchers from accredited institutions to apply to the Isotopes Branch in Oak Ridge, Tennessee, and sets out the procurement protocol.
5. David Cantor, “Radium and the Origins of the National Cancer Institute,” in Biomedicine in the Twentieth Century: Practices, Policies, and Politics, ed. Caroline Hannaway (Amsterdam: IOS Press, 2008), 95–146. Coincidentally, the program made its first loan to a hospital in Wichita, Kansas, in 1938.
6. As a radioactive source material, cobalt-60 grew continually weaker, requiring adjustments to the equipment that housed it as well as fairly frequent returns to the source reactor to be reirradiated.
7. “Cobalt Put Above Radium in Cancer,” New York Times, 2 April 1950.
8. St. Francis was just the twelfth hospital in the country to gain approval from the AEC to install and operate a cobalt machine with a source larger than 1,000 curies. St. Francis purchased the cobalt directly from the Oak Ridge National Laboratory for $10–11,000. The machine itself, a Picker C-1500 Cobalt Unit, cost a further $55,000, and the specially built underground room to house it (with concrete walls 40 inches thick) an additional $81,000—all together, an investment of close to $150,000 (or more than a million dollars in today’s money).
9. In the mid-1950s, Picker X-ray was one of just a few companies licensed by the AEC to receive shipments of cobalt-60 for R & D purposes in the field of radiotherapy. The others included Westinghouse and Kelley-Koett. Companies interested in radiotherapy machines accounted for a tiny proportion of the more than 1,000 firms licensed as part of the radioisotope distribution program between 1946 and 1956.
10. These so-called endocrine ablative procedures have been commonly used since the late nineteenth century, when the Scot George Beatson first demonstrated the benefits of removing the ovaries of a premenopausal woman with advanced breast cancer.
11. All quotations, except where otherwise noted, are taken from the case of Natanson v. Kline, 186 Kan. 393; 350 P.2d 1093 (primarily from the Appellant’s Abstract of the Record, filed in the Supreme Court of the State of Kansas, 3 June 1959, and from opinions filed 9 April 1960 and 5 August 1960).
12. George Gallup, The Gallup Poll: Public Opinion, 1935–1971, 3 vols. (New York: Random House, 1972), Vol. 2, 1322.
13. Gordon L. Dunning, “The Effects of Nuclear Weapons Testing,” Scientific Monthly 81 (December 1955): 265–270. Newsweek, 21 March 1955, 62; New York Times, 4 June 1955, 6; Time, 7 March 1955, 89. Time’s intimation of fallout’s long-term hazards was rare at the time. A more detailed history of the slow emergence of public awareness of the cancer risks posed by fallout is included in chapter 5.
14. Dr. Atomic, an opera about the Manhattan Project by John Adams and Peter Sellars, debuted in 2005.
15. William Grimes, “Sci-Fi Dream Turns World’s Worst Nightmare,” review of Doomsday Men by P. D. Smith, New York Times, 28 December 2007.
16. Letter to Mr. Robert C. McGain from the Chief of the Radiation Effects of Weapons Branch, Atomic Energy Commission, 8 August 1957. Department of Energy, OpenNet NV0070785.
17. Newsweek, 3 May 1948, 41; Coronet, July 1952, 22–26; Reader’s Digest, October 1952, 19–22; Look, 24 March 1953, 51–52.
18. A. E. Hiebert and H. W. Brooks, “Surgical Repair of Radiation Injuries,” American Surgeon 23 (1957): 1149–1151.
19. A split skin graft involves some but not all layers of skin. A graft with a pedicle flap uses the full thickness of skin with the underlying fat. This is moved from one part of the body to another and left attached to both its origin (“donor site”) and its ultimate destination (“recipient area”) until a sufficient blood supply establishes itself at the latter.
20. James Barrett Brown and Minot P. Fryer, “Report of Surgical Repair in the First Group of Atomic Radiation Injuries,” Surgery, Gynecology & Obstetrics 103 (July 1956): 1–4.
21. David M. Cutler, Your Money or Your Life: Strong Medicine for America’s Health Care System (New York: Oxford University Press, 2004), 4. Today’s average annual medical spending, $5,000 per person, is 10 times higher than it was in 1950.
22. The jury selection chart from the Natanson trial does not survive, so it is impossible to know whether any (or how many) women served at her trial. At the time, women in Kansas were legally empowered to serve. Nevertheless, it was an all-male jury that, in 1960, convicted the two murderers of the Clutter family in neighboring Finney County, in a drama made famous by Truman Capote’s In Cold Blood.
Chapter 2. The Court Considers Informed Consent
1. A study published in 1902 mentions more than fifty cases of X-ray injury in 1896. These included injuries to operators of X-ray machines as well as patients. A. E. Codman, “A Study of the Cases of Accidental X-ray Burns Hitherto Recorded,” Philadelphia Medical Journal 9 (8 March 1902): 438. By the mid-1950s, there were roughly 125,000 X-ray machines in use in the United States. 2. Robert S. Stone, “The Concept of a Maximum Permissible Exposure,” Radiology 58 (May 1952): 641, 659, citing the work of O. Hesse in 1911. The paper includes a table, “Historical Landmarks: Radiation Injuries,” 640. 3. Andrew A. Sandor, “The History of Professional Liability Suits in the United States,” Journal of the American Medical Association (hereafter JAMA) 163 (9 February 1957): 466. 4. The two cases were Corn v. French (1955) and Valdez v. Percy (1939), respectively. 5. The only explanation Hamilton could find for “this clouding of the clear waters of truth is that our courts are still working under laws which were framed when men were tortured to make them confess, and merciful judges tried to protect them, not by overthrowing the system—lawyers never do that—but by clever shifts which would do something while seeming something else.” Alice Hamilton, “What About the Lawyers?” Harper’s Magazine 163 (October 1931): 542–549. Cited in L. C. Allen, “The Physician’s Testimony,” Journal of the Kansas Medical Society 33 (1932): 353.
6. See Jay Katz, The Silent World of Doctor and Patient (Baltimore: Johns Hopkins University Press, 2002), 65–66. 7. Graham v. Updegraph, No. 32,853, Supreme Court of Kansas, 144 Kan. 45; 58 P.2d 475; 1936 Kan. LEXIS 184, January 1936, Decided; 6 June 1936, Filed. 8. The firm representing St. Francis Hospital was McDonald, Tinker, Skaer, Quinn & Porter of Wichita (now McDonald, Tinker, Skaer, Quinn & Herrington P.A.). A year or two before the trial, William Tinker been invited to address a scientific meeting of Wichita doctors on the rise of medical malpractice suits. He was at the time an attorney for the Medical Protective Insurance Company. Dr. Kline or an administrator from St. Francis Hospital could well have been in attendance for this talk. 9. Personal communication with Richard T. Foster, Esq. 10. C. C. Burkell and T. A. Watson, “Some Observations on the Clinical Effects of Cobalt-60 Telecurie Therapy,” American Journal of Roentgenology and Radiation Therapy 76 (November 1956): 895–904. 11. Ibid., 896. Reports of other experimental studies with cobalt began to appear in medical journals around the same time. Montefiore in New York claimed to be the very first hospital in the United States to own and operate one of the new units. The hospital’s 1955 report of its first three years’ experience included only three breast cancer patients, all with advanced disease. These women received cobalt as a primary therapy followed by surgery. See J. R. Freid, H. Goldberg, W. Tenzel, et al., “Cobalt 60 Beam Therapy: Three Years Experience at Montefiore Hospital (New York),” Radiology 67 (August 1956): 200–209. The first five-year follow-up study of cobalt radiotherapy (based on 942 patients) was not published until two years after Natanson’s treatment. See T. A. Watson, “Co60 Telecurietherapy—After Five Years,” Journal of the Canadian Association of Radiologists 8 (June 1957): 22–30. 12. I. S. Trostler, “Some Lawsuits I Have Met and Some of the Lessons to Be Learned from Them,” Radiology 25 (September 1935): 333. 13. Neal C. Hogan, Unhealed Wounds: Medical Malpractice in the Twentieth Century (New York: LFB Scholarly Publishing LCC, 2003), 100–101. The Nevada case Hogan cites is Boswell v. Board of Medical Examiners 72 Nev. 20, 293 P. 2d. 424 (1956). See ibid., 177, n240, for a list of cases in which courts explicitly recognize the conspiracy of silence among doctors. 14. The fact that Schroeder was born and raised in Harvey County, and was graduated from Kansas State University with a degree in agricultural economics and administration, may have led him to empathize with the farmers. Judge Schroeder’s opponent in his 1956 campaign for the Kansas Supreme Court was Paul Wilson, who, as a deputy attorney general for the state of Kansas, had participated in the oral arguments to the U.S. Supreme Court in the case of Brown v. Board of Education (personal communication, from Richard T. Foster, Esq.). 15. Salgo v. Leland Stanford Jr. University Board of Trustees 154 Cal. App. 2d 560; 317 P.2d 170; 1957 Cal. App. LEXIS 1667, 22 October 1957. 16. Medical Bulletin of the Sedgwick County Medical Society (Kansas) 26, 10 (1956): 13–14. 17. Chief Justice Parker and Justice Price dissented without opinion. 18. No one can remember the exact amount of the settlement. A lawyer connected with one of the legal firms involved believed the agreed damages came to $60,000. Another said it was almost certainly less than $100,000. 19. 
Curiously, at the time Natanson was treated, cobalt remained the only radioactive isotope that had not yet been tested for carcinogenicity. This was because it was “used only in sealed containers for external radiation.” W. C. Hueper, then a senior scientist at the National Cancer Institute, took a dark view of all these isotopes. “Mankind,” he wrote, “has entered an artificial carcinogenic environment, in which exposures to ionizing radiation of various types and numerous sources will play an increasingly important role in the production of cancers.” Jacob Furth and John L. Tullis, “Carcinogenesis by Radioactive Substances,” Cancer Research 16 (1956): 9, 18.
20. Harvard Law Review 75 (1962): 1449.
Chapter 3. The Rise of Radioactive Cobalt
1. See, for example, Jurgen Thorwald, The Triumph of Surgery (1960). 2. See, for example, W. G. MacCullum, William Stewart Halsted Surgeon (1930), and Harvey Cushing, The Life of Sir William Osler (1925); both are books about teachers written by their students. All four were for a time at Johns Hopkins. Cushing’s book won a Pulitzer Prize. 3. Lawrence K. Altman, “Radiology Was Young and So Was I,” New York Times, 19 June 2007, D1, D6. 4. No radiologists have become pop icons. The current physician-scribes (writers like Atul Gawande and Sherwin Nuland) are much more likely to be or to have been surgeons than radiologists. 5. Otha W. Linton, The American College of Radiotherapy—First 75 Years (Reston, Va.: American College of Radiology, 1997), 21–27. 6. See, for instance, Ruth and Edward Brecher, The Rays: A History of Radiation in the United States and Canada (Baltimore: Williams and Wilkins, 1969), and Bettyann Holtzmann Kevles, Naked to the Bone: Medical Imaging in the Twentieth Century (New Brunswick, N.J.: Rutgers University Press, 1997). For a comprehensive history of diagnostic radiology written by and for radiologists, see Raymond A. Gagliardi and Bruce L. McClennan, A History of the Biological Sciences: Diagnosis (Reston, Va.: Radiology Centennial, Inc., 1996). 7. The First Twenty Years of the University of Texas M. D. Anderson Hospital and Tumor Institute (University of Texas, Houston, 1964). See pages 212–223 for the story of cobalt. The Memorial Center for Cancer and Allied Diseases in New York (now Memorial Sloan Kettering) was the first American cancer hospital, set up in 1884. The first two state-supported cancer hospitals were Roswell Park Memorial Institute in Buffalo, New York, and the Pondville Hospital in Walpole, Massachusetts. M. D. Anderson was the third in this tradition. Unusual for its time, the 1964 hospital history takes an inclusive approach to its subject, acknowledging the contributions of administrative and clerical staff as well as medical. Under the National Cancer Act of 1971, M. D. Anderson was named one of the first three comprehensive cancer centers. 8. John V. Pickstone, “Contested Cumulations: Configurations of Cancer Treatments through the Twentieth Century,” Bulletin of the History of Medicine 81 (Spring 2007): 178. 9. Ibid., 175. 10. The existence, in Canada, of state-controlled enterprises (“crown corporations”) allowed the government to share responsibility for radium and radioactive isotopes with partners of its own choosing. Chief among them was Eldorado, which mined both uranium and cobalt and also, for a time, was given exclusive control of the commercial distribution of isotopes produced in Canada. In 1952, these activities were passed along to another crown company, Atomic Energy of Canada Ltd. Both companies enjoyed the advantages of government backing, especially in marketing
their products abroad. Paul Litt, Isotopes and Innovation: MDS Nordion’s First Fifty Years, 1946–1996 (published for MDS Nordion by McGill-Queen’s University Press, 2000), 51–55.
11. Payments were often based on the ability to pay. In the late 1950s, 65 percent of radiotherapy patients in British Columbia were assigned to the no-pay category. Stewart M. Jackson, Radiation as a Cure for Cancer: The History of Radiation Treatment in British Columbia (Vancouver: BC Cancer Agency, 2002), 74–75.
12. David Cantor, “Cancer,” in vol. 1, Companion Encyclopedia of the History of Medicine, ed. W. F. Bynum and Roy Porter (London: Routledge, 1993), 550. Cantor adds that “after the Second World War, the focus of cancer research shifted to the USA.”
13. Litt, Isotopes and Innovation, 62.
14. The First Twenty Years, 214.
15. Ibid., 216–217.
16. Richard G. Hewlett and Oscar E. Anderson Jr., The New World, 1939/1946, vol. 1, A History of the United States Atomic Energy Commission (University Park: Pennsylvania State University Press, 1962), 187. This is the first of three volumes that represent the official history of the AEC, including its operations at Oak Ridge. They cover the period 1939 to 1961.
17. Victor Perlo, Militarism and Industry (New York: International Publishers, 1963), 46–49.
18. “Appendix IV—U.S. Technical Exhibit at Geneva.” Report of the United States Delegation to the International Conference on the Peaceful Uses of Atomic Energy Held by the United Nations; with Appendices and Selected Documents (New York: United Nations, 1956), 254–256. The conference took place 8–20 August 1955.
19. John Boh, “An International Edge: The Kelley-Koett Company 1903–1956,” Northern Kentucky Heritage Magazine, Kenton County Historical Society 3 (2) (Spring/Summer 1996).
20. According to a study reported in the New Physician in December 1961, the number of radiologists in the United States rose by 170 percent between 1946 and 1961, while the number of physicians rose by only 40 percent. Cited in Ruth and Edward Brecher, The Rays, 213.
21. To address the problem of radium scarcity, the National Cancer Institute (NCI) Act of 1937 directed the Surgeon General to “procure, use and lend radium.” Accordingly, half of the NCI’s first appropriation ($400,000 altogether) was applied to the purchase of radium. In 1940, the Cancer Institute set up a loan program that allowed hospitals to borrow discrete amounts of the radioactive substance, at a much reduced cost, for use in the free treatment of indigent cancer patients. Radium’s very long half-life (1,600 years) made this feasible. Six months into the program, loans had been made to forty-seven hospitals. The program was still running in the late 1950s. Journal of the National Cancer Institute 19 (1957): 156.
22. “Minutes of the Joint Meeting of the Oak Ridge Institute of Nuclear Studies—Isotopes Division—and the X-Ray Industry on Teletherapy and Human Radiographic Problems with Isotopes,” 3 October 1953, 1, 14. Department of Energy, OpenNet (DOE OpenNet) NV0011959.
23. L. G. Grimmett, “A 1000-Curie Cobalt-60 Irradiator.” Texas Reports on Biology and Medicine 8 (Winter 1950): 480–490.
24. See “Minutes of the Joint Meeting of the Oak Ridge Institute,” 11–13.
25. Canadian salesmen used two sets of passports to keep their visits to China and the USSR hidden. Litt, Isotopes and Innovation, 99. After the U.S. entry into the
Korean War, the Mutual Defense Assistance Control Act (Battle Act) of 1951 threatened to withdraw economic or military assistance from any ally found to be trading with the USSR. Between 1951 and 1952, shipments of strategic goods to the Soviet bloc dropped from $7.5 million to less than $400,000. See Philip J. Funigiello, American-Soviet Trade in the Cold War (Chapel Hill: University of North Carolina Press, 1988), 72–74.
26. “Teletherapy Evaluation Board.” Radiology 60 (May 1953): 738–739.
27. “Minutes of the Joint Meeting of the Oak Ridge Institute,” 15.
28. Stephen P. Cobb Jr., Export-Import Branch, Isotopes Division, AEC, to R. F. Errington, Sales Manager, Eldorado Mining & Refining, Ltd., Ottawa, Ontario, 2 August 1951. DOE OpenNet NV0726760. The next year, Errington took over the Commercial Products Division of Atomic Energy of Canada Ltd.
29. Section 53 (e) 8 of the Atomic Energy Act of 1954. See Gerald L. Hutton, “Evidentiary Problems in Proving Radiation Injury.” Georgetown Law Journal 46 (1957): 74.
30. Wendell G. Scott, “Legislative and Professional Controls of the Use of Radiation,” 426, in Roentgens, Rads and Riddles: A Symposium on Supervoltage Radiation Therapy, ed. Milton Friedman, Marshall Brucer, and Elizabeth Anderson. U.S. Atomic Energy Commission (Washington, D.C.: U.S. Government Printing Office, 1959).
31. For example, two Manhattan Project chemists had been killed in an explosion that produced a lethal mixture of hydrogen fluoride, in September 1944, at the Naval Research Laboratory in Philadelphia. Nine others were injured in the accident.
32. Scott, “Legislative and Professional Controls,” 427.
33. United States Atomic Energy Commission, Atomic Energy FACTS (Washington, D.C.: U.S. Government Printing Office, 1957), 52. The biggest item in the budget, the production of enriched uranium and plutonium, cost $729 million, 45 percent of the total. The figure of $1.6 billion applies to the fiscal year ending 30 June 1956 and excludes the construction costs of new reactors and other projects under way during that period.
34. Besides the hospital at Oak Ridge, the three other AEC-supported cancer centers were the Argonne Cancer Research Hospital, Chicago; Brookhaven National Laboratory, Long Island; and the Radiation Laboratory at the University of California Medical Center of San Francisco.
35. 2 December 1955 Minutes of the Advisory Committee for the Division of Biology and Medicine (AEC), 24–25. DOE OpenNet NV411747.
36. “AEC Withdraws from Production and Distribution of Cobalt 60.” US AEC Press Release, 14 March 1968.
37. The AEC’s gradual withdrawal exemplifies a broader decline in government support for industrial research and development (R & D). From a high-water mark of 59 percent in 1959, the public share of industrial R & D dropped to 38 percent in 1975. See Henry R. Nau, Technology Transfer and U.S. Foreign Policy (New York: Praeger, 1976), 67.
38. Jean B. Owen, Lawrence R. Coia, and Gerald E. Hanks, “Recent Patterns of Growth in Radiation Therapy Facilities in the United States,” International Journal of Radiation: Oncology, Biology, Physics 24 (1992): 984.
39. The trial, the largest of its kind to date, was conducted by the National Surgical Adjuvant Breast Project (NSABP) under the direction of Dr. Bernard Fisher. The literature published before the trial got underway was evenly divided between those who found some positive benefit to postoperative radiation and those who did not. See Bernard Fisher, Nelson H. Slack, Patrick J. Cavanaugh, et al., “Postoperative Radiotherapy in the Treatment of Breast Cancer: Results of the NSABP Clinical Trial,” Annals of Surgery 172 (October 1970): 711–729. The long-term results of the continuing series of NSABP trials are reviewed in “Twenty-five-Year Follow-up of a Randomized Trial Comparing Radical Mastectomy, Total Mastectomy, and Total Mastectomy Followed by Irradiation,” New England Journal of Medicine 347 (22 August 2002): 567–575, and 347 (17 October 2002): 1233–1241. Seventy-five percent of the treated patients (352 women) in the NSABP trial had been exposed to radiation in the form of high-voltage cobalt-60. Significantly, the maximum tissue dose given to any of them was considerably lower than that given to Irma Natanson a decade earlier.
40. Marshall Brucer, J. H. Harmon, William D. Gude, United States Atomic Energy Commission, “Radioactive Isotopes in the United States Hospitals: A Survey of Hospital Administrators’ Problems up to 1957,” 6.
41. Ibid., 19. In four out of five hospitals where cobalt was being used for treatment, costs were being passed on to the patient, allowing the use of the new equipment to become, in many cases, a profitable operation for the hospital. Patients were charged for the radioactive material, staff time, depreciation of equipment, and even for the associated costs of waste disposal.
42. In 2002, a randomized clinical trial revealed that HRT, in addition to increasing the risk of breast cancer, also increased the risk of heart disease. A multidisciplinary group discussing these findings asked, “Why, for four decades, since the mid-1960s, were millions of women prescribed powerful pharmacological agents already demonstrated, three decades earlier, to be carcinogenic?” See Nancy Krieger, Ilana Lowy, Robert Aronowitz, et al., “Hormone Replacement Therapy, Cancer, Controversies, and Women’s Health: Historical, Epidemiological, Biological, Clinical, and Advocacy Perspectives,” Journal of Epidemiology and Community Health 59 (2005): 740–748.
Chapter 4. The Cobalt Back Story
1. U.S. Department of Energy OpenNet, Nevada (hereafter DOE OpenNet) NV0065979, 2. 2. Richard G. Hewlett and Jack M. Holl, Atoms for Peace and War 1953–1961 (Berkeley: University of California Press, 1989), 14. 3. H. M. Sweeney, Minutes, Research Council Meeting, 14 January 1954, 7. DOE OpenNet NV0750553. 4. Animal experimentation had been shown to be of only limited use since every species, as well as various strains of the same species, responded differently to radiation. 5. The technique was first used experimentally in Europe in the 1920s on a small group of patients with Hodgkin’s disease and leukemia. See F. Medinger and L. Craver, “Total Body Irradiation, with Review of Cases,” American Journal of Roentgenology and Radium Therapy 48 (November 1942): 651–671 (quotation in text p. 651). 6. See William R. Lafleur, ed., Dark Medicine: Rationalizing Unethical Medical Research (Bloomington: Indiana University Press, 2007). 7. See Advisory Committee on Human Radiation Experiments Final Report (hereafter ACHRE Report) (Washington, D.C.: U.S. Government Printing Office, 1995), chapter 1. 8. Memo to Dr. Fidler, 17 April 1947 from O. G. Haywood Jr., Colonel, Corps of Engineers, Atomic Energy Commission. DOE OpenNet NV0707493. 9. Reinterpreted by the secretary of defense at the time, Charles Wilson, the memo preserves the basic principles of the Nuremberg Code but limits its application to
“experimental research in the fields of atomic, biological and/or chemical warfare.” “Use of Human Volunteers in Experimental Research” (known as the Wilson Memorandum), 26 February 1953.
10. ACHRE Report, 115.
11. W. A. Selle, “Chronological Review of Important Events in the History of NEPA’s Effort to Secure Support for Its Recommendation on Human Experimentation,” NEPA Research Guidance Committee, January 1951, 4. DOE OpenNet NV0750791.
12. ACHRE Report, 99. Warren served as the head of the AEC’s Division of Biology and Medicine from 1947 to 1952.
13. Ibid., 520–521.
14. Ibid., 249–251.
15. Ibid., 99. For the cited document itself, see J. G. Hamilton, University of California, to Shields Warren, Division of Biology and Medicine, AEC, 28 November 1950, reproduced in ACHRE Report, Supplemental Volume 1, 214. Hamilton displayed many of the features of the “mad scientist.” His zeal for radioactive isotopes was unrestrained by any care for their hazards. He frequently experimented upon himself—showboating with radioactive cocktails to make a point. The results were perhaps predictable—he died of leukemia in 1957.
16. W. A. Selle, Secretary, “Minutes of the NEPA Research Guidance Committee Meeting, NEPA 1765, December 12, 1950,” Taylor Papers (8), Box 81–13, File: “NEPA 1948–1951, correspondence.” Quoted in Gilbert Whittemore, “A Crystal Ball in the Shadows of Nuremberg and Hiroshima,” in Science, Technology and the Military, ed. Everett Mendelsohn, Merritt Roe Smith, and Peter Weingart (Dordrecht: Kluwer Academic Publishers, 1988), 2: 448.
17. Selle, “Chronological Review,” DOE OpenNet NV0750791, 6.
18. John M. Talbot, Lt. Colonel, U.S. Air Force, “Trip Report,” to Commandant, USAF School of Aviation Medicine, 3 April 1950, referring to visits to M. D. Anderson on 2 and 27 March 1950. DOE OpenNet NV0751566.
19. Radiation Biology Relative to Nuclear Energy Powered Aircraft, Recommendations to NEPA by the NEPA Medical Advisory Committee, 5 January 1950, 20. DOE OpenNet NV0750789.
20. The roentgen (R) was the first unit of radiation measurement to be standardized. Named for the discoverer of X-rays, Wilhelm Roentgen (1845–1923), and adopted in 1928, it was used to measure the amount of ionization that took place in air under specified conditions. This was superseded in 1953 by the “rad”—a unit that got closer to measuring the energy actually absorbed by human tissue (energy absorbed per unit mass) rather than the energy produced in air. Although there is no strict equivalence between the two units, one roentgen of gamma radiation results in roughly one rad of absorbed dose. The “rem” added a further refinement by adding to the rad a factor that took into account the fact that different types of radiation (alpha, gamma, X-ray, neutron, proton) cause different amounts of cell damage. The “rem,” in other words, incorporates an estimate of biological effectiveness. For gamma and X-rays, for example, the absorbed dose in rads is the same in rems, while for neutrons, an absorbed dose of one rad is equivalent to five or more rems. More recently, to establish international consistency, the Sievert (Sv) has replaced the rem for dose equivalent measurements: 1 Sv = 100 rems. And, since 1976, the “gray” has superseded the rad: 1 gray (Gy) = 100 rads.
21. Lieutenant Lando Haddock, 1st Lieutenant, USAF, to Commanding General, Air Materiel Command, Wright-Patterson Air Force Base, Dayton, Ohio, 19 October 1950 (re “Negotiation of Cost-Reimbursement,” ACHRE No. DOD-062194-B-3) cited in ACHRE Report, 411, and Supplemental Volume 2a, E11.
22. Cited in Staff Memorandum to Members of Advisory Committee on Human Radiation Experiments, 18 July 1994, Re: Draft Summary of History of Ethics Policies Relating to Human Radiation Experiments, DOD and AEC, 1942–1954, DOE OpenNet NV0750243.
23. Daniel J. Kevles, “R & D and the Arms Race: An Analytical Look,” in Mendelsohn, Science, Technology and the Military, 466.
24. “Interview with Colonel John Pickering,” DOE OpenNet NV0751048, 44.
25. Air Force School of Aviation Medicine (SAM) contract AF-18 (600)-926.
26. Joe and Cynthia Adcock, “The Smell of Charity,” The Nation, 4 January 1965, 6, 8. The article discusses The Hospital by Jan de Hartog (1964).
27. Worse for the Tuskegee victims, from the 1940s on, investigators knew about the potential benefits of penicillin as a new treatment for syphilis but, in the interests of scientific research, withheld both the knowledge and the drug from the patients under their care. See James H. Jones, Bad Blood: The Tuskegee Syphilis Experiment (New York: Free Press, 1993), 70. See also Susan M. Reverby, Tuskegee’s Truths: Rethinking the Tuskegee Syphilis Study (Chapel Hill: University of North Carolina Press, 2000).
28. Paul Beeson interviewed by Susan Lederer, Seattle, 20 November 1994, 39. Cited by Eileen Welsome, The Plutonium Files (New York: Dial Press, 1999), 215–216.
29. The remarks were part of an address delivered to the Annual Meeting of the Mayo Foundation at the Mayo Clinic in 1960. R. Lee Clark, “The Ethics of Medical Research in Its Human Application,” Cancer Bulletin 29 (4): 91–99 (July–August 1977).
30. Gilbert H. Fletcher, “Radiotherapy in the Management of the Cancer Patient,” Postgraduate Medicine 17 (June 1955): 493.
31. Peter Bacon Hales, Atomic Spaces: Living on the Manhattan Project (Urbana: University of Illinois Press, 1997), 118–119.
32. “The Culture of Secrecy and the Nuclear Age,” Reaching Critical Will, http://www.reachingcriticalwill.org/technical/factsheets/secrecy.html, accessed 9 September 2007.
33. The survival rate is based on 15,501 cancers treated at the M. D. Anderson over the fifteen-year period. It excludes skin cancers. The First Twenty Years of the University of Texas M. D. Anderson Hospital and Tumor Institute (Houston: University of Texas, 1964), 381.
34. The death rate for all primary cancers combined grew from 195.4 per 100,000 people in 1950 to 202.3 in 1976. By 2003, the mortality rate had fallen back down to 190.1 per 100,000, that is, to a level below that prevailing in 1950. See SEER Cancer Statistics Review, 1975–2003: http://seer.cancer.gov/csr/1975_2003/, accessed 21 February 2008.
35. Donald Okun, “What to Tell Cancer Patients? A Study of Medical Attitudes.” Letter, Journal of the American Medical Association (JAMA) 175 (1961): 1120–1128.
36. Robert J. Samp and Anthony R. Curreri, “A Questionnaire Survey on Public Cancer Education Obtained from Cancer Patients and Their Families,” Cancer 10 (March/April 1957): 383.
37. In 1977, the results of a questionnaire almost identical to Donald Okun’s in 1961 (see note 35 above) found an almost complete reversal of attitudes, with 97 percent of the physicians now expressing a preference for telling a patient his or her true diagnosis. Dennis H. Novack, Robin Plumer, Raymond L. Smith, et al., “Changes in Physicians’ Attitudes toward Telling the Cancer Patient.” JAMA 241 (1979): 897–900.
38. One hundred roentgen is roughly equivalent to about 100 rad whole-body radiation or 1 gray (a gray is a measure of absorbed dose). “A dose to the human body of 0.5–1.5 grays will cause radiation sickness.” Arthur C. Upton, “Health Effects of Low-level Ionizing Radiation,” Physics Today, August 1991, 34–35. The Atomic Bomb Casualty Commission, which monitored the health of Japanese bomb survivors, found a high relative risk for leukemia among those who had been exposed to 200 rads or more. Susan Lindee, Suffering Made Real: American Science and the Survivors at Hiroshima (Chicago: University of Chicago Press, 1994), 245. For the controversy about whether a safe dose of radiation exists, see chapter 6. 39. The first-published article was Lowell S. Miller, M.D., Gilbert H. Fletcher, M.D., and Herbert B. Gerstner, M.D., “Systemic and Clinical Effects Induced in 263 Cancer Patients by Whole-Body X-Irradiation with Nominal Air Doses of 15 to 200 R,” Air University, School of Medicine, USAF, May 1957. The more carefully edited version of the same paper by the same authors appeared as “Radiobiologic Observations on Cancer Patients Treated with Whole-Body X-Radiation,” in Radiation Research 4 (1958): 150–165. 40. Daniel J. Kevles, “R & D and the Arms Race: An Analytical Look,” in Mendelsohn, Science, Technology, 471. 41. Gilbert Whittemore attributes the longevity of the NEPA project to the need to support an aircraft industry operating at only 3 percent of its capacity after the war. It also faced competition from other branches of the armed services, particularly from the navy, whose status, especially during the Cold War, was greatly enhanced by the development of the nuclear submarine. The long-range nuclear-powered bomber was the air force’s matching bid for prominence among Cold War weapons systems. “A Crystal Ball in the Shadows of Nuremberg and Hiroshima: The Ethical Debate over Human Experimentation to Develop a Nuclear-Powered Bomber, 1946–1951,” in Science, Technology and the Military, 450–452. 42. E. L. Saenger, B. I. Friedman, J. G. Kereiakes, and H. Perry, “Effects of Whole and Half Body Irradiation in Human Beings with Cancer,” presented at Third International Congress of Radiation Research, Cortina d’Ampezzo, Italy, 1964, 5. DOE OpenNet NV0760748. See also ACHRE Report, chapter 8. 43. See, for example, K. Salisbury, “Tragic Results of Fallout among Pacific Victims,” Newsweek, 25 June 1956, 70; Ralph E. Lapp, “The Voyage of the Lucky Dragon,” Reader’s Digest, May 1958, 114–120. 44. The Washington Post article, by Stuart Auerback, appeared on 8 October 1971. It was followed soon after by Robert Kuttner’s “An Experiment in Death,” Village Voice, 14 October 1971. Twenty-three years later, in 1994, the Cincinnati experiments were the subject of a congressional hearing in Cincinnati. In the same year, a group of victims and their families also brought a class-action lawsuit against the doctors administering the experiments and won modest settlements, a sum roughly equivalent to the $50,000 paid today through the Radiation Exposure Compensation Act (RECA) to the victims of the Nevada Tests in the 1950s. The Cincinnati experiments are the only total-body radiation experiments, as far as I know, to have become the subject of a full-length study; see Martha Stephens, The Treatment: The Story of Those Who Died in the Cincinnati Radiation Tests (Durham N.C.: Duke University Press, 2002). 
See also David Egilman, Wes Wallace, Cassandra Stubbs, et al., “A Little Too Much of the Buchenwald Touch? Military Radiation Research at the University of Cincinnati, 1960–1972,” Accountability in Research 6 (1998): 63–102.
45. Letter from Robert W. McConnell, M.D., president, ACR, to the Honorable Mike Gravel, 3 January, 1972, 1. Quoted in Department of Defense, Report on Search for Human Radiation Experiment Records 1944–1994, vol. 1, June 1997, 32. 46. Stephens, The Treatment, 13. 47. Robert L. Egan, “Experience with Mammography in a Tumor Institution: Evaluation of 1,000 Studies,” Radiology 75 (December 1960): 894–900. 48. Otha W. Linton, The American College of Radiotherapy—First 75 Years (Reston, Va.: American College of Radiology, 1997), 97–98. 49. R. L. Clark, M. M. Copeland, R. L. Egan, et al., “Reproducibility of the Technique of Mammography (Egan) for Cancer of the Breast,” American Journal of Surgery 109 (1965): 127–133. 50. “Procurement of Medical Personnel,” Radiology in World War II, ed. Arnold Lorentz Ahnfeldt (Washington, D.C.: Office of the Surgeon General, Department of the Army, 1966), 846. 51. By the late 1960s, Gofman and his colleague Arthur Tamplin came to believe that exposure to ionizing radiation was, in fact, much more dangerous than the official position on it acknowledged. Together, the two scientists took an unpopular stand against several AEC initiatives that involved the intentional release of radioactive materials (Project Plowshare, for instance, was designed to use nuclear explosions to carry out heavy-duty excavation work in infrastructure projects.) The AEC made various efforts to silence both men. Gofman went on to write several books drawing attention to the role played by low-level radiation in the development of cancer. See for example, Radiation-Induced Cancer from Low Dose Exposure: An Independent Analysis (San Francisco: Committee for Nuclear Responsibility, 1990). He died in 2007. 52. “Human Radiation Studies: Remembering the Early Years, Oral History of Dr. John W. Gofman, M.D., Ph.D.” Conducted 20 December 1994. United States Department of Energy, Office of Human Radiation Experiments, June 1995, 34. Gofman continued, “You had the whole scene dominated by the people who’d come up through radiology. You know, if somebody in Tennessee [Oak Ridge] gave somebody something, some iron experiments or calcium experiments, I can see these people saying, ‘Hey look, what are you making a fuss about, we used to give people 200 rad from the thymus [in] the chest’” (36). 53. Constance Holden, “Low-Level Radiation: A High-Level Concern,” Science 13 April 1979, 156. In 1978, the Department of Energy (successor to the AEC) controlled 78 percent of the government’s $17 million research budget allocated to the human health effects of radiation. Only 10 percent of the total was awarded to university and nongovernmental researchers. 54. In 1994, the Advisory Committee on Human Radiation Experiments did conduct a significant number of interviews with surviving scientists and others with firsthand experience. But since the objective was to assemble and archive an oral history, questions inevitably focused on details of the past rather than on more personal interpretations of the experiments’ long-term significance. See n. 24. 55. Parran was an advocate of national health insurance. Scheele was not. Scheele was, however, drawn into the many controversies surrounding the early use of polio vaccines in the early 1950s. He resurfaced again in 1962 during the abortive Bay of Pigs invasion of Cuba. President Kennedy, drawing on Scheele’s expertise in public health, asked for his help in negotiating the release of captives in exchange for the food and medical supplies that Castro demanded.
56. See C. L. Dunham, “Radioactive Fallout: Its Significance for the Practitioner,” JAMA 183 (1963): 136–139. 57. “M. D. Anderson Played Role in Radiation Testing,” Houston Chronicle, 28 June 1994, 1, 8A. 58. A small group of experiments was documented and discussed a decade before the ACHRE Commission got underway. Congressional hearings in 1986 described in detail thirty-one experiments in which 695 people had been subjected to radiation with no promise of therapeutic benefit. They included, for example, a dozen patients with terminal brain tumors at the Massachusetts General Hospital in the mid-1950s who had been injected with uranium for the purpose of determining the dose at which damage to the kidneys began to occur. Most of the patients were comatose at the time. See American Nuclear Guinea Pigs: Three Decades of Radiation Experiments on U.S. Citizens, Committee on Energy and Commerce, U.S. House of Representatives, 99th Congress, 2d Session, November 1986, 20–21. Though its scope was limited, the report (which came to be known as the “Markey Report”) prepared the ground for the later work of ACHRE. 59. John T. Edsall, Scientific Freedom and Responsibility: A Report of the AAAS Committee on Scientific Freedom and Responsibility (Washington, D.C.: American Association for the Advancement of Science, 1975), 20–21. 60. Hewlett and Holl, Atoms for Peace and War, Appendix 2: AEC Ten-Year Summary of Financial Data, 576–577. “The Army Research Medical Program,” prepared for presentation to the Committee on Consultants on Medical Research to the Subcommittee on Labor and Health, Education and Welfare of the Senate Appropriations Committee, 18 January 1960, 24. DOE OpenNet NV0757869. Of the army’s total medical research budget, $3 million was allocated to research into the medical problems of ionizing radiation. 61. A. M. Evans, R. G. Moffat, R. D. Nash, et al., “Cobalt 60 Beam Therapy,” Journal of the Faculty of Radiologists 5 (April 1954): 248–260. Another early study with three breast cancer patients came from Montefiore Hospital in New York. See Jacob R. Fried, Henry Goldberg, William Tenze, et al., “Cobalt Beam Therapy: Three Years Experience at Montefiore Hospital, New York,” Radiology 67 (August 1956): 200–209. 62. Unlike the trials runs with cobalt, experimentation with particle therapies associated with the cyclotron predated the Cold War, but the early forays with neutron therapy, carried out by Robert Stone, did not yield promising results. 63. James Ewing, “The Public and the Cancer Problem,” Science, 6 May 1938, 399–407. 64. Coley discovered that a man with an inoperable neck tumor appeared to recover from his cancer after developing the streptococcal infection erysipelas. So he searched the medical literature for other remissions that might be explained by concurrent infections. His own blend of treatment organisms, called “Coley’s toxins,” was marketed by Parke Davis & Company and widely used for thirty years, despite the criticisms of the treatment in the medical press. JAMA condemned its use as early as 1894, depriving the research of a critical forum for debate. Forty years later, JAMA recanted, suggesting that Coley’s approach might be of some value after all. See Edward F. McCarthy, “The Toxins of William B. Coley and the Treatment of Bone and Soft-Tissue Sarcomas,” Iowa Orthopedic Journal 26 (2006): 154–158. For more recent commentary on Coley and immunotherapy, see “Panel Endorses New Anti-Tumor Treatment,” Andrew Pollack, New York Times, 30 March 2007.
Chapter 5. Behind the Fallout Controversy
1. See Elof Axel Carlson, Genes, Radiation and Society: The Life and Work of H. J. Muller (Ithaca, N.Y.: Cornell University Press, 1981), 336–367.
2. See Percy Brown, American Martyrs to Science through the Roentgen Rays (Springfield, Ill.: C. C. Thomas, 1936). Dr. Brown himself died of X-ray-induced cancer in 1950. Joseph Hamilton met a similar fate not long afterward.
3. Radiology in World War II, ed. Arnold Lorentz Ahnfeldt (Washington, D.C.: Office of the Surgeon General, Department of the Army, 1966), 832.
4. Ibid., 847–848.
5. Ibid., 848. The official history overlooks the accidents that took place in various labs while carrying out work that was part of the bomb project. In addition, Eileen Welsome describes three experiments using total body radiation on cancer patients that took place in the 1940s under the sponsorship of what was called the Met Lab at the University of Chicago, that is, before 1946, when the AEC took over the control of nuclear energy (including radioactive isotopes) from the Manhattan Project. See The Plutonium Files, 51–54.
6. The causes of cancer were pursued with religious if unscientific fervor. See, for example, Herbert Snow, The Proclivity of Women to Cancerous Diseases (1891).
7. In 1960, the National Association of Science Writers had 372 members. In 2006, membership had risen to 2,500.
8. Stephen B. Withey, “Public Opinion about Science and Scientists,” Public Opinion Quarterly 23 (3) (Autumn 1959): 383.
9. New York Times, 2 October 1959, 8.
10. New York Times, 28 May 1961, 22.
11. Although Dr. Hugh F. Henry was the named author of the JAMA article—“Is All Nuclear Radiation Harmful?”—the published version had actually been shortened and prepared for publication by Dr. Marshall Brucer, the first chairman of the Medical Division at the Oak Ridge Institute of Nuclear Studies (ORINS), appointed in 1948. Both men were proponents of what has been called hormesis, the idea that low-level exposures to radiation (or any toxin) produce effects contrary to those associated with high-level exposures. In this case, Henry and Brucer argue, such exposures confer positive health benefits. The evidence for this hypothesis derives exclusively from animal studies. See JAMA 176 (27 May 1961): 671.
12. Harold M. Schmeck Jr., “Tiny Clips Avert Brain Blowout,” New York Times, 2 October 1959, 8; “Science Envisions Hospitals in Space,” New York Times, 10 October 1960, 37.
13. Victor Cohn, “Radiation, Breast Cancer Are Linked,” Washington Post, 28 September 1968, E1. The Post story summarized the NEJM’s report on the increased incidence of breast cancer among women exposed to heavy radiation during the bombing of Hiroshima and Nagasaki; Jane E. Brody, “Uniformly ‘Safe’ Levels of Radiation Questioned,” New York Times, 21 July 1972, 32. Brody discussed the claim, put forward in an article newly released by the NEJM, that “current procedures for setting ‘safe’ levels of exposure to radiation provided no guarantee of protection from radiation-induced cancer.” In 2003, the Times cited 37 articles from the NEJM and 28 from JAMA as well as articles from twenty other health or science journals. James Aronson, The Press and the Cold War (New York: Monthly Review Press, 1970), 91, 94. Both James Wechsler, editor of the New York Post, and Cedric Belfrage, editor of the National Guardian, were summoned by McCarthy to hearings in Washington and attacked for their putative communist views.
14. George Gallup, The Gallup Poll: Public Opinion, 1935–1971 (New York: Random House, 1972), 3 vols., vol. 3, 1745. 15. See Claudia Clark, Radium Girls: Women and Industrial Health Reform (Chapel Hill: University of North Carolina Press, 1997). 16. “Mme Curie Is dead; Martyr to Science,” New York Times, 5 July 1934. Seventythree years later, the newspaper updated Curie’s cause of death. Its obituary of her daughter Eve (Margalit Fox, “Eve Curie Labouisse, 102, Mother’s Biographer, Dies,” 25 October 2007) acknowledged that Marie Curie had “died of leukemia, which was believed to have been caused by her prolonged exposure to radioactive material.” 17. Howard Ball, Justice Downwind: America’s Atomic Testing Program in the 1950s (New York: Oxford University Press, 1986), 204, quoting Alice P. Broudy v. United States, et al., U.S. District Court, C.D. CA, Civil No. 79–2626-LEW, Memorandum in Support of Defendant’s Motion for Summary Judgment, 12 November 1985, Attachment 4, 8–9. Ball’s book provides a thorough and thoughtful examination of the issues raised by the mass exposure of the American population to radioactive fallout over the period 1951–1963. 18. W. H. Lawrence, “No Radioactivity in Hiroshima Ruin,” New York Times, 13 September 1945, 4. 19. George E. Jones, “Survey Rules Out Nagasaki Dangers,” New York Times, 7 October 1945, 28. 20. New York Times, 11 December 1956, 41. 21. Philip L. Fradkin, Fallout: An American Tragedy (Tucson: University of Arizona Press, 1989), 204. Fradkin gives a lively account of the political machinations and personalities caught up in the “science” of fallout studies. 22. Terry Tempest Williams, Refuge: An Unnatural History of Family and Place (New York: Pantheon, October 1991), 285. Williams watched her mother, aunts, and both grandmothers die of breast cancer. “I cannot prove that . . . [they] developed cancer from nuclear fallout in Utah. But I can’t prove they didn’t. . . . Tolerating blind obedience in the name of patriotism or religion ultimately takes our lives” (286). 23. In 1956, ranchers filed a suit against the government claiming compensation for the loss of thousands of dead lambs born to sheep that had been grazing in the vicinity of nuclear tests in the spring of 1953 (Bulloch v. United States, 145 Supp. 824). The AEC had found massive amounts of radiation (20,000 to 40,000 rads) in the thyroids of these lambs but kept this evidence out of court. AEC scientists testified instead that the amount of radiation to which the sheep had been exposed could not have caused the damage. The judge saw no reason to question their integrity and dismissed the case on the merits. Howard Ball, Justice Downwind, 206–207. 24. Newsweek, 21 March 1955, 62. 25. Quoted in Robert A. Divine, Blowing on the Wind: The Nuclear Test Ban Debate (New York: Oxford University Press, 1978), 44. 26. See Ralph E. Lapp, The Voyage of the Lucky Dragon (New York: Harpers, 1958). 27. Michael Straight, “The Ten-Month Silence,” New Republic, 7 March 1955, 9. 28. See Kai Bird and Martin J. Sherwin, American Prometheus: The Triumph and Tragedy of Robert Oppenheimer (New York: Knopf, 2005), Part Five. 29. Harvey Wasserman and Norman Solomon, Killing Our Own: The Disaster of America’s Experience with Atomic Radiation (New York: Delta, 1982), 94, quoting an article in the Washington Post, March 1955. 30. Dr. W. G. Cahan, Letter to the Editor, New York Times, 31 October 1956, 32.
31. See Precautionary Tools for Reshaping Environmental Policy, ed. Nancy J. Myers and Carolyn Raffensperger (Cambridge, Mass.: MIT Press, 2006). 32. I. Phillips Frohman, M.D., “Role of the General Physician in the Atomic Age,” JAMA 162 (3 November 1956): 962–966. 33. Jane Pacht Brickman, “ ‘Medical McCarthyism’: The Physicians Forum and the Cold War,” Journal of the History of Medicine and Allied Sciences 49 (3) (1994): 380–418. 34. Monte M. Poen, Harry S Truman versus the Medical Lobby: The Genesis of Medicare (Columbia: University of Missouri Press, 1979), 187. 35. “The Voluntary Way Is the American Way.” Quoted in Poen, Harry S Truman, 148. According to the historian Paul Starr, the Library of Congress has been unable to find a source for this attribution among Lenin’s writings. The Social Transformation of American Medicine (New York: Basic Books, 1982), 285. 36. “Alleged Injury to Workers at an Atomic Research Station, Foreign Letters,” JAMA 134: 1039 (1947). Quoted in Paul Boyer, “Physicians Confront the Apocalypse,” JAMA 254 (2 August 1985): 634 (633–643). The U.K. National Health Service remains as much a target as ever. After the revelation in the summer of 2007 that British doctors were involved in a terrorist plot in London and Glasgow, Fox News, exploiting familiar imagery, ran a story with the on-screen banner headline, “National Healthcare: Breeding Ground for Terror?” 37. “The President’s Page,” JAMA 145 (24 February 1951): 567. Quoted in Poen, Harry S Truman, 188. 38. After repeated AEC budget cuts and discontinuing its use of “drastic radiotherapy” for patients with leukemia, the cancer hospital at Oak Ridge finally closed its doors in November 1974 (“Care for ORAU Patients after Closing Assured,” The Oak Ridger, 4 October 1974). The only cancer therapy center that survived the Cold War and that was funded exclusively with public money was the Clinical Center in Bethesda, Maryland, opened by the NIH in 1953, with 140 beds. 39. Lewis Thomas, “On Medicine and the Bomb,” Discover, October 1981, 33. 40. S. L. Fishbein, “Doctors Curbed by A-Secrecy,” Washington Post, 19 March 1955, 2. 41. The New England Journal of Medicine introduced the new organization by devoting an issue to the concerns that mobilized its original members, such as “the medical consequences of thermonuclear war,” and “the physician’s role in the post attack period.” See 266 (31 May 1962): 1126–1155. Physicians for Social Responsibility was set up in Boston in 1961 and by 1979 had 5,000 members in more than forty chapters in the United States. 42. Olaf Petersen, “Radiation Cancer: Report of 21 Cases,” Acta Radiologica 42 (1954): 221. E. A. Codman’s “A Study of the Cases of Accidental X-Ray Burns Hitherto Recorded,” the first recorded review of severe radiation injuries, appeared in 1902 in the Philadelphia Medical Journal 9: 438, 499. See also C. D. Haagensen, “Occupational Neoplastic Disease,” American Journal of Cancer 15 (1931): 214. 43. Marjorie Hunter, “U.S. Doubts Peril in A-Test Fallout,” New York Times, 18 September 1962, 18. 44. Ibid. John Lindsay went on to serve as mayor of New York City from 1966 to 1973. 45. Frances Stonor Saunders, The Cultural Cold War: The CIA and the World of Arts and Letters (New York: New Press, 1999), 83. 46. The total number of recorded cancer deaths in the U.S. was 210,733 in 1950 and 267,627 in 1960. See Phyllis A. Wingo, Cheryll J. Cardinez, Sarah H. Landis, et al., “Long-Term Trends in Cancer Mortality in the United States, 1930–1998,” Cancer 97 (11 Suppl.) 
(2003): 3133–3275.
47. Lauriston S. Taylor, “Is Fallout a False Scare?” U.S. News & World Report 51 (November 1961): 72–79. More than twenty-five years later, a National Academy of Sciences report, using 1987 data, estimates that medical X-rays and nuclear medicine together account for about 79 percent of the public’s exposure to all engineered sources of radiation whose production, at least theoretically, could be controlled; in other words, everything but natural sources such as radon and “cosmic” particles. Health Risks from Exposure to Low Levels of Ionizing Radiation: BEIR VII—Phase 2 (Washington, D.C.: National Academies Press, 2006), Public Summary, 5. 48. Constance Holden, “Low-Level Radiation: A High-Level Concern,” Science, 13 April 1979, 158. 49. “What Will Radioactivity Do to Our Children? Interview with Dr. H. J. Muller, Nobel Prize Winner in Genetics,” U.S. News & World Report, 13 May 1955, 72–78; Consumers’ Research Bulletin 25 (June 1950), 19. 50. A study of hospitals in Minneapolis carried out shortly before the advent of cobalt therapy revealed that only 1 in 6 of all X-ray exposures was related to therapeutic treatment. It also showed that the radiation delivered therapeutically was only a tiny fraction of what cobalt would transmit. See S. W. Donaldson, “The Practice of Radiology in the United States: Facts and Figures,” American Journal of Roentgenology 66 (December 1951), table 32, 945. 51. Albert Q. Maisel, “What’s the Truth about Danger in X Rays?” Reader’s Digest, February 1958, 29. 52. Radiation dosages from these machines could range from 16 to 75 roentgens per minute. In January 1957, Pennsylvania became the first state to ban their use. By 1960, thirty-four states had taken similar legislative action. See Jacalyn Duffin and Charles R. R. Hayter, “Baring the Sole: The Rise and Fall of the Shoe-Fitting Fluoroscope,” Isis 91 (2000): 274, 278. 53. A. E. Hotchner, “The Truth about the X-ray Scare,” This Week, 23 February 1958, 8–9. Quoted in J. Samuel Walker, Permissible Dose: A History of Radiation Protection in the Twentieth Century (Berkeley: University of California Press, 2000), 22. 54. Each X-ray delivered, on average, a radiation dose of one roentgen. Dade W. Moeller, James G. Terrill, and Samuel C. Ingraham, “Radiation Exposure in the United States,” Public Health Reports 68 (January 1953), 59. A study of mass X-ray programs in upstate New York over the period 1952 to 1958 showed that they yielded, on average, one new case of TB for every thousand X-rays. See Andrew C. Fleck, Herman E. Hilleboe, and George E. Smith, “Evaluation of Tuberculosis: Casefinding by Mass Small Film Radiography,” Public Health Reports 75 (September 1960), table 1, 808. 55. See “Skin TB Tests Urged Over Schools’ X-Rays,” New York Times, 26 June 1958, 29. 56. U.S. News & World Report, 22 June 1956, 63–64. Warren Weaver chaired the Committee on Genetic Effects of Atomic Radiation, one of six committees whose recommendations were included in the National Academy of Sciences’ 1956 report, “Biological Effects of Radiation.” A few months later, an article in Time magazine (1 October 1956, 67) noted that the “average dental x-ray now delivers 5 r (roentgen) of radiation,” a dose the American Dental Association deemed unnecessary. A radiologist suggested ways of reducing this exposure—by using machines with higher voltages, better filters, faster films, and shorter exposures. 57. American Medical Directory (Chicago: American Medical Association, 1958), 12, 17. 58. Otha W. Linton, The American College of Radiology—First 75 Years (Reston, Va.: American College of Radiology, 1997), 50.
59. Walter Sullivan, “Radiologists See Danger in Debate,” New York Times, 1 October 1958, 5. 60. Ibid. Dr. Robert R. Newell, professor of radiology at Stanford, spoke at the same conference. 61. Genell Subak-Sharpe, “X-Ray Danger,” New York Times Magazine, 24 October 1976, 42–44, 46.
Chapter 6. Cancer and Fallout
1. U.S. Atomic Energy Commission, Division of Biology and Medicine, Some Effects of Ionizing Radiation on Human Beings: A Report on the Marshallese and Americans Accidentally Exposed to Radiation from Fallout and a Discussion of Radiation Injury in the Human Being (Washington, D.C., 1956).
2. Newsweek, 28 February 1955, 20; New York Times, 4 June 1955, 6.
3. T. T. Puck, P. Marcus, and S. J. Cieciura, “Clonal Growth of Mammalian Cells in Vitro,” Journal of Experimental Medicine 103 (1956): 273–284; “Action of X-Rays on Mammalian Cells. II. Survival Curves of Cells from Normal Human Tissues,” Journal of Experimental Medicine 106 (1957): 485–500.
4. The Nation, 9 April 1955, 302.
5. Science, 10 June 1955, 10.
6. Jack Schubert and Ralph E. Lapp, Radiation: What It Is and How It Affects You (New York: Viking, 1957), 242–243.
7. Congress of the United States, Eighty-fifth Congress, The Nature of Radioactive Fallout and Its Effects on Man: Hearings before the Special Subcommittee on Radiation of the Joint Committee on Atomic Energy, 27–29 May, 3–7 June, 1957, vol. 2, 1264–1265.
8. Ibid., 1279. The remarks of Merril Eisenbud cited by Lapp were taken from “Man Who Measures A-Fallout Belittles Danger” in the [New York] Sunday News, 20 March 1955.
9. Peter Goodchild, Edward Teller: The Real Dr. Strangelove (Cambridge, Mass.: Harvard University Press, 2004).
10. Robert A. Divine, Blowing on the Wind: The Nuclear Test Ban Debate, 1954–1960 (New York: Oxford University Press, 1978), 126.
11. Linus Pauling, “Fact and Fable of Fallout,” The Nation, 14 June 1958, 537–542. Pauling also disputed Teller’s claim that a wristwatch with a luminous dial was ten times more dangerous than radiation from fallout.
12. Michael Specter, “Political Science: The Bush Administration’s War on the Laboratory,” New Yorker, 13 March 2006, 68.
13. The treaty put an end to atmospheric testing, but, between 1961 and 1992, more than 800 additional tests were carried out underground.
14. W. M. Court Brown and R. Doll, “Leukemia and Aplastic Anemia in Patients Irradiated for Ankylosing Spondylitis,” Medical Research Council Special Report Series, no. 295 (London: HMSO, 1957). The study revealed a 1.8-fold excess of lung cancer among the men treated for the condition; C. L. Simpson, L. H. Hempelmann, and L. M. Fuller, “Neoplasia in Children Treated with X-Rays in Infancy for Thymic Enlargement,” Radiology 64 (1955): 840–845. In a follow-up study of 1,400 children, the authors found 6 thyroid carcinomas where much less than one (0.08) was expected; H. C. March, “Leukemia in Radiologists,” Radiology 43 (1944): 275–278. March found the frequency of leukemia deaths among radiologists to be 4.7 percent, compared with just 0.5 percent among all physicians.
15. Robert Proctor offers a thorough review of the different policy implications of the various dose-response models in “The Political Morphology of Dose-Response Curves,” in Cancer Wars: How Politics Shapes What We Know and Don’t Know about Cancer (New York: Basic Books, 1995), 153–173. 16. M. Susan Lindee, Suffering Made Real: American Science and the Survivors at Hiroshima (Chicago: University of Chicago Press, 1994), 6. 17. J. H. Folley, W. Borges, and T. Yamawaki, “Incidence of Leukemia in Survivors of Atomic Bomb in Hiroshima and Nagasaki, Japan,” American Journal of Medicine 13 (1952): 311–321. 18. Dorothy R. Hollingsworth, Howard B. Hamilton, Hideo Tamagaki, and Gilbert W. Beebe, “Thyroid Disease: A Study in Hiroshima, Japan,” Medicine 42 (1963): 47–71. 19. William J. Schull, Effects of Atomic Radiation: A Half-Century of Studies from Hiroshima and Nagasaki (New York: Wiley-Liss, 1995), 154. See C. K. Wanebo, K. G. Johnson, K. Sato, and T. W. Thorsland, “Breast Cancer after Exposure to the Atomic Bombings of Hiroshima and Nagasaki,” New England Journal of Medicine 279 (26 September 1968): 667–671. The paper first appeared as a technical report of the Atomic Bomb Casualty Commission in 1967. 20. For instance, a 1990 study found a lifetime risk for leukemia that was three times higher than an estimate made ten years earlier. See Yukiko Shimizu, Hiroo Kato, and William J. Schull, “Studies of the Mortality of A-Bomb Survivors,” Radiation Research 121 (1990): 136. 21. Howard Ball, Justice Downwind: America’s Atomic Testing Program in the 1950s (New York: Oxford University Press, 1986), 39, citing Eugene Zuckert in Michael Uhl and Tod Ensign, GI Guinea Pigs: How the Pentagon Exposed Our Troops to Dangers More Deadly Than War (Chicago: Westview, 1980), 24. 22. John C. Burnham, How Superstition Won and Science Lost: Popularizing Science and Health in the United States (New Brunswick, N.J.: Rutgers University Press, 1987), 176. 23. This was a goodwill mission organized by Norman Cousins, editor of Saturday Review. 24. J. W. Hollingsworth, “Delayed Radiation Effects in Survivors of the Atomic Bombings: A Summary of the Findings of the Atomic Bomb Casualty Commission, 1947–1959,” New England Journal of Medicine 263 (8 September 1960): 481–487. 25. Shields Warren, “Shattuck Lecture: You, Your Patients, and Radioactive Fallout,” New England Journal of Medicine 266 (31 May 1962): 1123–1125. 26. Ball gives a brief account of these and other major epidemiological studies carried out between the early 1960s and the 1980s (chapter 5). 27. Howard Ball, Justice Downwind, 109, quoting U.S. Congress, House Committee on Interstate and Foreign Commerce, Subcommittee on Oversight and Investigations, The Forgotten Guinea Pigs: A Report on Health Effects of Low-Level Radiation Sustained as a Result of the Nuclear Weapons Testing Program Conducted by the United States Government (Washington, D.C.: Government Printing Office, 1980). 28. Philip L. Fradkin, Fallout: An American Tragedy (Tucson: University of Arizona Press, 1989), 195. 29. Marvin Rallison, Blown M. Dobbyns, F. Raymond Keating, Joseph E. Rall, and Frank H. Tyler, “Thyroid Disease in Children,” American Journal of Medicine 56 (April 1974): 457–463. 30. Testimony of Joseph L. Lyon, University of Utah, on Senate bill 1483, the Radiation Exposure Compensation Act of 1981, 23 October 1981. DOE OpenNet NV67276,
4–5. Twenty-five years later, Lyon said he would modify his 1981 remarks to acknowledge the higher quality of dose reconstructions that have been carried out since then, given the availability of ever more sophisticated computer power. In one of two more recent studies he conducted himself, he found that “using these more precise dosimetry models to identify those with exposure versus those without increased the strength of the association between exposure to fallout and risk of subsequent leukemia and thyroid disease.” Personal communication with the author, 1 May 2006. 31. Allen v. United States, trial transcript, 2473. Harold Knapp is citing the remarks of Dr. Paul Tompkins, head of the Federal Radiation Council. 32. Personal communication with author, 1 May 2006. The Centers for Disease Control, sponsors of Lyon’s work, cut off funding in August 2005. 33. The words of Colonel Stone are taken from the transcript of the Committee on Medical Sciences of the Department of Defense Research and Development Board, 23 May 1950. Quoted in Jonathan D. Moreno, Undue Risk: Secret State Experiments on Humans (New York: W. H. Freeman, 1999), 148–149.
Chapter 7. Paradise Lost
1. Shields Warren, “Symposium on the Effect of the NCRP Recommendations on National Life: III. Effects on Medicine,” Health Physics 4 (1961): 217. 2. Rudimentary standards-setting bodies existed before the war. See note 10 below. 3. Taylor Memo on the history of the NCRP, October 1952, Taylor Papers, Box 31, File: NCRP-1952. Quoted in Gilbert F. Whittemore, “The National Committee on Radiation Protection, 1928–1960: From Professional Guidelines to Government Regulation” (PhD diss., Harvard University, 1986), 268. 4. Whittemore, “The National Committee,” 268. The “Committee” established in 1946 became the “Council” in 1964. 5. Ibid., 441. 6. Letter from Taylor to Lewis Strauss, 10 November 1948, Taylor Papers, Countway Library, Box 31, File: NCRP-1948. Quoted in Whittemore, “The National Committee,” 441–442. 7. Bureau of the Budget paper on organizational responsibilities for radiation protection, 28 May 1959. Quoted in George T. Mazuzan and J. Samuel Walker, Controlling the Atom: The Beginnings of Nuclear Regulation 1946–1962 (Berkeley: University of California Press, 1984), 257, 472 n21. 8. Pare Lorentz, “Fight for Survival,” McCall’s, January 1957, 29, 73–74. 9. D. S. Greenberg, “PHS Radiation Report: Administration Finds That Delay in Publication Can Lead to All Sorts of Conclusions,” Science 136 (15 June 1962), 970. 10. The International Commission on Radiological Protection (ICRP) started life as the International X-ray and Radium Protection Committee in 1928 under the auspices of the Second International Congress of Radiology. A year later, the United States set up its own operation, the Advisory Committee on X-ray and Radium Protection, which drew its members from professional radiology societies, the American Medical Association, and X-ray equipment manufacturers. In 1946, the American organization was reconstituted, adding new subcommittees to address the multiplying applications of radioactive materials. Congress transferred the committee’s regulatory duties to the Federal Radiation Council (FRC) in 1959, leaving it as just one of several consultative bodies providing the new council with scientific expertise and guidance. Five years later, it was formally chartered as the National Council on Radiation Protection and Measurements (NCRP), an independent,
nonprofit body with responsibility for providing scientific guidance and recommendations on radiation protection. The FRC’s responsibilities passed over to the newly formed Environmental Protection Agency in 1971. At the same time, EPA assumed responsibility for radiation protection of the general population. Responsibility for occupational exposures at government nuclear installations, however, remained with the AEC. Control of radiation protection at civilian nuclear power plants remained a gray area contested by the EPA and the AEC as well as by the Public Health Service. Adding to the balkanization of control, individual states were empowered to assume many of the federal government’s powers to license and regulate nuclear by-product materials (radioisotopes) and source materials like uranium (33 states operate with such agreements today). When the AEC was abolished in 1974, its nuclear regulatory duties were taken over by the newly formed Nuclear Regulatory Commission (NRC). For a detailed history of the controversies involved, see Mazuzan and Walker, Controlling the Atom.
11. Karl Z. Morgan, himself an ICRP Main Committee member, claims that, from 1960 to 1965, “most members of the ICRP either worked directly with the nuclear weapons industry or indirectly received most of the funding for their research from this industry.” Karl Z. Morgan, “Changes in International Radiation Protection Standards,” American Journal of Industrial Medicine 25 (1994): 303.
12. Lauriston S. Taylor, Organization for Radiation Protection, The Operations of the ICRP and NCRP 1928–1974, Office of Technical Information, U.S. Department of Energy, 1979, 9–093, 9–094. The letter writer was Harold H. Rossi, a radiation physicist who had worked on the Manhattan Project.
13. For members of the general public, the average (as opposed to maximum) dose was set at 0.17 rems or rads. This is equivalent to 170 millirems or millirads (for X- and gamma rays, rads and rems are interchangeable).
14. Philip M. Boffey, “Radiation Standards: Are the Right People Making Decisions?” Science, 26 February 1971, 782. More than half of the Main Committee members were physicists.
15. Bo Lindell, H. John Dunster, and Jack Valentin, “International Commission on Radiological Protection: History, Policies, Procedures,” ICRP Web site, 3. See http://www.icrp.org/docs/Histpol.pdf, accessed 27 August 2007.
16. Boffey, “Radiation Standards,” 783.
17. Ibid.
18. Nancy K. Eskridge, “EPA under Fire on Radiation Protection,” BioScience 28 (July 1978): 369–371. The congressman was Leo Ryan (D-California), who, six months later, was murdered by one of Jim Jones’s disciples at the airstrip as the visiting congressional delegation was preparing to leave Georgetown, Guyana.
19. Shields Warren, “Symposium,” 216.
20. Karl Z. Morgan, “Changes in International Radiation Protection Standards,” 303. Morgan cites the work of Alice Stewart on the impact of radiation on developing fetuses.
21. The story of the patients overtreated at the Riverside Methodist Hospital between March 1975 and January 1976 is told in “The Riverside Radiation Tragedy,” Columbus Monthly, April 1978, 52–66. One of the victims of the hospital’s carelessness was Ronald Salyer, who had been treated for testicular cancer. Twenty years later, when he was diagnosed with diabetes, X-rays revealed that he had suffered significant bone disintegration in his lower back and hip, the result of earlier overexposure to radiation. When Salyer sued for damages, the court ruled that he should have sued when he first knew that he faced potential problems—“he knew he had been hurt and knew who had inflicted the injury as early as 1978. Constructive knowledge of facts, rather than actual knowledge of their legal significance, is enough to start the statute of limitations running under the discovery rule.” By 1996, the court opined, he was out of luck (Ronald Salyer et al. v. Riverside United Methodist Hospital, Court of Appeals Ohio, Tenth Appellate District, Opinion Rendered 20 June 2002). This is another reminder of the exceptional burdens imposed by a disease—and a treatment—that did their damage by stealth. The law did not and would not accommodate its idiosyncrasies.
22. J. Samuel Walker, Permissible Dose, 89. For more details on the NRC’s response to the negligence at the Riverside United Methodist Hospital, see pages 85–90.
23. In the four years following the introduction of the new reporting requirements in 1980, 27 therapy misadministrations were reported to the NRC. In 1992, the NRC introduced a more detailed performance-based rule that specified five steps that had to be taken before every administration of radiotherapy. Four years later, the Institute of Medicine’s Committee for Review and Evaluation of the Medical Use Program of the Nuclear Regulatory Commission recommended that “Congress eliminate all aspects of the NRC’s Medical Use Program,” including the regulatory activities spelled out in the Misadministration Reporting Requirements, rules which the committee believed “intrude excessively into the patient-physician relationship” (Institute of Medicine, Radiation in Medicine: A Need for Regulatory Reform (Washington, D.C.: National Academy Press, 1996): 174–177, 294).
24. J. Samuel Walker, Permissible Dose, 48.
25. Tom O’Hanlon, “An Atomic Bomb in the Land of Coal,” Fortune 74 (September 1966): 132–133. Even articles on mammography sometimes made the same mistake, with titles like M. Weber’s “Seek and Destroy: Techniques against Breast Cancer,” Vogue 166 (June 1976): 76.
26. Lin Nelson, “Promise Her Everything: The Nuclear Power Industry’s Agenda for Women,” Feminist Studies 10 (Summer 1984): 292.
27. Ibid., 291.
28. For a useful history of both the HIP and the BCDDP trials and the controversies they spawned, see Barron H. Lerner, “To See Today with the Eyes of Tomorrow: A History of Screening Mammography,” Canadian Bulletin of Medical History 20 (2003): 299–321.
29. John C. Bailar, III, “Mammography: A Contrary View,” Annals of Internal Medicine 84 (January 1976): 80. Bailar estimated the number of additional cancers that might appear if the HIP study that was carried out in the mid-1960s were to be carried out ten years later when average doses of radiation had been reduced. Using an estimated “average depth dose of 2 rads to the breast tissue for each set of mammograms” and 3.2 sets of mammographic studies for each of 20,000 women, Bailar came up with eight additional breast cancers “for each ten years of follow up after the first 10 years.” If the same protocol is applied to 25 percent of American women aged 40–64 in 1976 (about seven million), Bailar’s estimate would yield another 2,800 cases of accidental breast cancers. (I have used 25 percent here as an educated guess because the Centers for Disease Control did not track the use of mammography until 1987; for that year, it reported that 37 percent of American women over forty had had a mammogram).
30. Barbara J. Culliton, “Breast Cancer: Second Thoughts about Routine Mammography,” Science, 13 August 1976, 555–558.
31. Culliton, “Breast Cancer,” 557. The Strax study carried out in the 1960s had involved exposures of 6.4 rads. Walter S. Ross, Crusade: The Official History of the American Cancer Society (New York: Arbor House, 1987), 103. 32. San Francisco Chronicle, 29 July 1976, 17. 33. Walter S. Ross, “What Every Woman Should Know about Breast X-Ray,” Reader’s Digest 110 (March 1977): 117–118. 34. Robert Shalek, “Measuring the Radiation Dose in Mammography,” Cancer Bulletin 30 (January–February 1978): 13–14. 35. “Mammography Muddle,” Time, 2 August 1976, 42. According to Genell Subak-Sharpe, writing in the New York Times Magazine (24 October 1976, 42–44, 46), the average exposure to radiation associated with a mammographic exam in the mid-1970s ranged from 2–3 rads at the low end to as high as 6 or more rads (depending on the number of exposures as well as the equipment used). It is the minimum risk here that Bailar uses in his estimation of additional cases of breast cancer caused by mammography; see above. Thirty years later, the average radiation exposure is a fraction of what it was earlier. Since the mid-1980s, mammography has been undertaken with dedicated machines rather than having to rely on multi-purpose X-ray equipment. The mean absorbed dose, per view, is now between 100 and 200 millirad (0.1 to 0.2 rads). The “effective dose,” a measure that takes the sensitivity of breast tissue into account, is even lower. Running parallel with the decline in radiation doses has been legislation regulating other aspects of mammography. In 1992, Congress passed the Mammography Quality Standards Act. Administered by the FDA, the law sets wide-ranging standards for screening facilities and the equipment used in them. It stipulates that a mammography unit cannot exceed a dose of 300 millirad (0.3 rads) per exposure and, if found to exceed this limit, must be repaired before it can be used again. Running against this trend is the increase in radiation risk associated with screening mammography among women who have undergone breast augmentation. In 2005, 365,000 such procedures were carried out, more than half on women under thirty-five. Screening after breast augmentation typically requires four rather than two images for each breast, thereby doubling radiation exposure at a stroke for every woman with a breast implant every time she undergoes mammography. See Ralph L. Smathers, John M. Boone, Lisa J. Lee, et al., “Radiation Dose Reduction for Augmentation Mammography,” AJR 188 (May 2007): 1414. My thanks to Barbara Brenner for bringing this to my attention. 36. Gerald D. Dodd and Richard H. Gold, “Mammography,” in A History of the Radiological Sciences: Diagnosis, ed. Bruce L. McClennan (Reston, Va.: Radiology Centennial, 1996), 331. 37. Bettyann Kevles, Naked to the Bone: Medical Imaging in the Twentieth Century (New Brunswick, N.J.: Rutgers University Press), 256–257. 38. M. L. Schildkraut, “Don’t Be Scared of Breast X-Rays,” Good Housekeeping, March 1978, 237; M. Markham, “Mammography: Now Safer than Ever,” Harper’s Bazaar, September 1979, 263. 39. Richard Costlow, “Breast Cancer Detection Demonstration Projects: Status Report and the Rationale for the Policy Statement on Mammography by the NCI and ACS,” Cancer Bulletin 30 (January–February 1978): 19–20. 40. In a large study of malpractice claims settled in 1970, i.e., just before the advent of screening mammography, radiologists were involved in less than 5 percent of the cases. Melvin H. Rudov, Thomas I. Myers, and Angelo Mirabella, “Medical
Malpractice Insurance Claims Files Closed in 1970,” 16. In U.S. Department of Health, Education and Welfare, Report of the Secretary’s Commission on Medical Malpractice (Washington, D.C.: U.S. Government Printing Office, 1973), 73–88. 41. In a study of delays in breast cancer diagnosis carried out by the Physicians Insurers Association of America in 2001, radiologists accounted for 33 percent of claims and had an average indemnity of $346,247. PIAA Breast Cancer Study (Rockville, Md.: Physicians Insurers Association of America, 2002). 42. See, for example, Peter C. Gøtzsche and Ole Olsen, “Is Screening for Breast Cancer with Mammography Justifiable?” The Lancet 355 (8 January 2000): 129–134, and the many responses to it in The Lancet 355 (26 February 2000): 747–752. Because the debates hinge on complex discussions of methodologies and meta-analyses, they can be heavy going for lay readers and hard to evaluate. However interpreted, they are a reminder that, despite the ubiquity of screening programs, mammography remains, for some, as unproven and as problematic today as it was when it was first introduced.
Chapter 8. Subdued by the System
1. There had been controversies over hazardous chemicals at the workplace well before the war (epitomized, perhaps, by the commotion surrounding the addition of tetraethyl lead to gasoline in the 1920s). See David Rosner and Gerald Markowitz, eds., Dying for Work: Workers’ Safety and Health in Twentieth-Century America (Bloomington: Indiana University Press, 1987). The Public Health Service, through its Division of Industrial Hygiene, carried out its own investigations of a wide range of suspected toxins. For a history of its work, see Christopher Sellers, “The Public Health Service’s Office of Industrial Hygiene and the Transformation of Industrial Medicine,” Bulletin of the History of Medicine 65 (1991): 42–73. 2. The NCI’s Umberto Saffiotti estimated that no more than 3,000 of the nearly 2 million known chemical compounds had been adequately tested. About 1,000 of these showed some sign of being carcinogenic. Of these, only a few hundred were “clearly established” as carcinogens. Robert Gillette, “Cancer and the Environment (II): Groping for New Remedies,” Science, 18 October 1974, 242–245. 3. Joe Klein reported on the death, from angiosarcoma of the liver, of four workers at the B. F. Goodrich plant in Louisville. “The Plastic Coffin of Charlie Arthur,” Rolling Stone, 15 January 1978. Quoted in Gerald Markowitz and David Rosner, Deceit and Denial: The Deadly Politics of Industrial Pollution (Berkeley: University of California Press, 2002), 191–192. 4. Stewart Alsop, “MIRV and FOBS Spell DEATH,” Reader’s Digest, July 1968, 134. Quoted in Paul Boyer, “From Activism to Apathy: The American People and Nuclear Weapons 1963–1980,” Journal of American History 70 (March 1984): 821–844. 5. There were 815 nuclear tests carried out in craters, shafts, or tunnels underground and a total of 210 atmospheric tests (detonated from a tower, barge, or airdrop). Of the total, 904 were carried out at the Nevada Test Site (804 underground and 100 in the atmosphere); 106 in the Pacific (Bikini, Christmas, Enewetak, and other islands); and a few in each of a handful of other locations in the United States (Hattiesburg, Mississippi; Grand Valley and Rifle, Colorado; central Nevada; and elsewhere). United States Nuclear Tests July 1945 through September 1992 (Las Vegas: United States Department of Energy, Nevada Operations Branch, 2000). 6. EPA press release, 16 December 1970. 7. Michael S. Sherry, In the Shadow of War: The United States since the 1930s (New Haven: Yale University Press, 1995), 263.
8. James T. Patterson, The Dread Disease: Cancer and Modern American Culture (Cambridge, Mass.: Harvard University Press, 1987), 249. 9. New York Times, 29 July 1979, 22. The surgeon general is probably citing the estimate of a report submitted to the Occupational Safety and Health Administration (OSHA) by scientists at the Department of Health, Education, and Welfare (DHEW) in 1978. The 20 percent figure attributes 14 percent of occupational cancers to asbestos and the remaining 6 percent collectively to arsenic, benzene, chromium, nickel, and petroleum products. See David G. Hoel, “Carcinogens in the Workplace,” Science, 5 November 1982, 560–561. The estimate of the share of cancers attributable to workplace exposures soon dropped down to single digits. In 1991, scientists at the University of Southern California Medical School believed that occupational factors were “not likely to account for more than 4% of cancers in the United States. The actual percentage may be substantially lower.” Brian E. Henderson, Ronald K. Ross, and Malcolm C. Pike, “Toward the Primary Prevention of Cancer,” Science, 22 November 1991, 1137. 10. See Geoffrey Tweedale and Philip Hansen, Magic Mineral to Killer Dust: Turner & Newall and the Asbestos Hazard (Oxford: Oxford University Press, 2000). 11. Anthony Mazzocchi, of the Oil, Chemical and Atomic Workers International Union had arranged for his local members to contribute their children’s baby teeth to a study organized by Barry Commoner of the Committee for a SANE Nuclear Policy (SANE), to document the presence, in the teeth, of strontium-90 from fallout. Markowitz and Rosner, Deceit and Denial, 157–158. The book offers an exhaustive study of the arduous and unending struggle to rein in occupational hazards over the second half of the twentieth century, with special emphasis on the histories of lead and vinyl chloride. 12. Ibid., 193. 13. Bill Curry, “U.S. Ignored Atomic Test Leukemia Link, PHS Ignored Leukemia Link in Western A-Tests,” Washington Post, 8 January 1979, A1. 14. It estimated, for example, that 1,000,000 to 1,500,000 American workers were exposed to low-level radiation, including 215,000 in medical jobs and 85,700 in government and military research and development. It also reviewed the debates between Teller and Pauling and between theories of “threshold” versus linear dose responses. The front-page series ran from 1 to 5 July 1979. 15. Harvey Wasserman and Norman Solomon, Killing Our Own: The Disaster of America’s Experience with Atomic Radiation (New York: Delta, 1982), 13–14. 16. Ibid., 26–27. 17. See, for example, “Radiophosphorus Treatment of Multiple Myeloma,” Final Report: Advisory Committee on Human Radiation Experiments, Supplemental Volume 2A, Appendix D, D75. Much more is known about the experiments which involved troops at the Nevada Test and other weapons testing sites than about the secret radiation experiments conducted over three decades in VA hospitals across the country. Images of “atomic veterans” staring directly at mushroom clouds at close range were widely circulated in popular magazines at the time. 18. Joseph Bauman, “Radiation Doses Too Small to Have Caused Cancer, Government Says,” Deseret News, 11 October 1982. 19. John Gofman, Radiation and Human Health (San Francisco: Sierra Club Books, 1981), 59. Cited in Irene H. Allen et al. v. United States, 588 F. Supp. 247; 1984 U.S. Dist., 10 May 1984. 20. Milton Terris, review of Karl Z. Morgan and Ken M. 
Peterson, The Angry Genie: One Man’s Walk through the Nuclear Age (1999) in Journal of Public Health Policy 23 (2002): 243. Three years before Allen, Morgan also testified in the Karen Silkwood
case against her employer Kerr-McGee, a manufacturer of plutonium fuel rods. Silkwood died under suspicious circumstances after being contaminated by plutonium and threatening to expose the lax safety conditions at her worksite. In his autobiography, Morgan wrote: “The real victory in the Silkwood case is that it brought to the forefront one of the worst fears of the nuclear industry: educating the public that there is no such thing as a ‘safe dose’ of radiation” (The Angry Genie, 145).
21. Opinion by Judge Jenkins, Irene H. Allen et al. v. United States, Civil No. C 79-0515J, 10 May 1984. Two comprehensive accounts of the trial are Howard Ball, Justice Downwind: America’s Atomic Testing Program in the 1950s (New York: Oxford University Press, 1986), and Philip L. Fradkin, Fallout: An American Tragedy (Tucson: University of Arizona Press, 1989).
22. The court documents provide an insight into the additional burdens that breast cancer places on those living in remote rural areas. Pollitt began weekly chemotherapy sessions six weeks after her surgery. For the first two months, she drove six hours every Thursday night from her home in Panguitch, Utah, to Salt Lake City and then, after treatment, drove another six hours back home again on Sunday. Finally, her local doctor was able to give her the chemo himself, sparing her the twelve-hour journey. But as her disease ran its course, it necessitated more and more trips to Salt Lake City, for consultations, further tests, further treatments, while her husband and school-age children remained at home. Allen trial transcript, docket document 199.
23. Allen trial transcript, 1533–1534.
24. The Committee on the Biological Effects of Atomic Radiation (BEIR) was convened by the National Academy of Sciences in 1955 and published its first report in 1972. The reports attempt to quantify the relationship between radiation exposures and cancers, estimating the increase in relative risk for every type of radiation-sensitive malignancy. The controversies stirred up by the reports still reflect unresolved conflicts about the exact nature of the relationship between exposure and carcinogenesis. Advocates of the “safe threshold dose” still defend their position against advocates of zero tolerance. And there is still no consensus about what mathematical formula most accurately describes the dose-response relationship.
25. As of 2007, the compensation program had paid out over a billion dollars to more than 11,000 downwinders and 7,000 other claimants. The RECA Web site regularly updates information on the program. See http://www.usdoj.gov/civil/omp/omi/Tre_SysClaimsToDateSum.pdf, accessed 24 February 2008.
26. Arthur C. Upton, “Health Effects of Low-Level Ionizing Radiation,” Physics Today, August 1991, 39.
27. Energy Secretary Bill Richardson, quoted in Joby Warwick, “Radiation Risks Long Concealed: Paducah Plant Memos Show Fear of Public Outcry,” Washington Post, 21 September 1999, A1. And see Mark J. Parascandola, “Compensating for Cold War Cancers,” Environmental Health Perspectives 110 (July 2002): 405–407.
28. This might have been wishful thinking, given the way some compensation programs have played out. The atomic workers’ program remains especially controversial. The success rate for the downwinders applying for compensation to RECA is more than 77 percent as of February 2008; for the nuclear workers applying to EEOICPA, it has been much lower, about 20 percent. As recently as June 2007, a federal advisory panel denied the claims of thousands of former workers at the Rocky Flats, Colorado, plant where, between 1952 and 1989, the production of detonating devices for hydrogen bombs had involved exposure to both uranium and plutonium (Dan Frosch, “Setback for Ill Workers at Nuclear Bomb Plant,” New York Times, 13 June 2007, A13). For a discussion of the enduring legacies of nuclear testing, especially on populations most often overlooked (Navajo uranium miners and Marshall Islanders), see Barbara Rose Johnston, Half-Lives & Half-Truths: Confronting the Radioactive Legacies of the Cold War (Santa Fe, N.M.: School for Advanced Research, 2007).
29. Tony Batt, “Health Officials: Fallout Study Took Too Long,” Las Vegas Review-Journal, 17 September 1998, 1B, 5B.
30. Scott A. Hundahl, “Perspective: National Cancer Institute Summary Report about Estimated Exposures and Thyroid Doses Received from Iodine in Fallout after Nevada Atmospheric Nuclear Bomb Tests,” CA: A Cancer Journal for Clinicians 48 (1998): 285–298.
31. Michael Uhl and Tod Ensign, GI Guinea Pigs: How the Pentagon Exposed Our Troops to Dangers More Deadly than War: Agent Orange and Radiation (New York: Playboy Press, 1980), 97.
32. Meredith Wadman, “NCI Apologizes for Fallout Study Delay,” quoting Richard Klausner, Nature, 9 October 1997, 534. Later studies made their own estimates of excess cases of thyroid cancers resulting from exposures to iodine-131. A 2002 review of the data puts the range from 11,000 to about 220,000 excess cases. F. Owen Hoffman, A. Iulian Apostoaei, and Brian A. Thomas, “A Perspective on Public Concerns about Exposure to Fallout from the Production and Testing of Nuclear Weapons,” Health Physics 82 (2002): 736–748.
33. For the most recent and perhaps most comprehensive attempt to explain the tenacity of tobacco, see Allan M. Brandt, The Cigarette Century: The Rise, Fall and Deadly Persistence of the Product That Defined America (New York: Basic Books, 2007).
34. See back-to-back articles by Ernest L. Wynder and Evarts A. Graham, “Tobacco Smoking as a Possible Etiologic Factor in Bronchiogenic Carcinoma: A Study of Six Hundred and Eighty-four Proved Cases,” 329–336, and M. L. Levin, H. Goldstein, and P. R. Gerhardt, “Cancer and Tobacco; a Preliminary Report,” 336–338, both JAMA 143 (27 May 1950). See also Richard Doll and A. Bradford Hill, “Smoking and Carcinoma of the Lung: Preliminary Report,” British Medical Journal, 30 September 1950, 739–748; Michael J. Thun and Jane Henley, “The Great Studies of Smoking and Disease in the Twentieth Century,” in Tobacco and Public Health: Science and Policy (Oxford: Oxford University Press, 2004), 53–92.
35. Gio B. Gori, “Low Risk Cigarettes: A Prescription,” Science, 17 December 1976, 1245. At the time this was published, Gori was serving as director of the NCI’s Smoking and Health Program.
36. Richard Kluger, Ashes to Ashes: America’s Hundred-Year Cigarette War (New York: Knopf, 1996), 421. For a detailed history of this episode, see 421–434.
37. C. Everett Koop, “Tobacco: The Public Health Disaster of the Twentieth Century,” in Thun, Tobacco and Public Health, xiii.
38. Eugene Braunwald, “Shattuck Lecture: Cardiovascular Medicine at the Turn of the Millennium,” New England Journal of Medicine 337 (6 November 1997): 1360.
39. G. Meadors Correspondence to B. Boone, 19 July 1947, in Papers of the National Heart, Lung, and Blood Institute, Epidemiology Correspondence Folder. Quoted in G. M. Oppenheimer, “Public Health Then and Now: Becoming the Framingham Study 1947–1950,” American Journal of Public Health 95 (2005): 602–610. The public resources required to carry out the proposed study were significant enough to raise the alarms of physicians, ever on the alert for backdoor (read: government) infiltration of their profession. To win their endorsement, Vlado Getting, commissioner of the PHS, assured them that “there is nothing in this program that smacks of either state or socialized medicine. We are not contemplating medical care. This is nothing more than case finding and education.” New England Journal of Medicine 239 (22 July 1948): 130–131.
40. William Rothstein argues that changes in personal risk factors do not, in fact, account for the significant decline in coronary heart disease that began in the 1960s. He believes that the contribution of social factors to this trend has been inadequately explored. See William G. Rothstein, Public Health and the Risk Factor: A History of an Uneven Medical Revolution (Rochester, N.Y.: University of Rochester Press, 2003), chapters 19–20.
41. “Smoking and Health,” Report of the Royal College of Physicians on smoking in relation to cancer of the lung and other diseases (London: Pitman Medical, 1962). Quoted in Peter Taylor, Smoke Ring: The Politics of Tobacco (London: Bodley Head, 1984), 26.
42. It survives today in the precautionary principle, espoused by environmentalists in the nonprofit sector but not by the U.S. government. See chapter 10.
43. See n. 34 above.
44. Martin Walker, “Sir Richard Doll: A Questionable Pillar of the Cancer Establishment,” The Ecologist 28 (March/April 1998): 83.
45. Unknown factors fared even worse, accounting for a vanishing 3 percent of the total. Richard Doll and Richard Peto, “The Causes of Cancer: Quantitative Estimates of Avoidable Risks of Cancer in the United States Today,” Journal of the National Cancer Institute 66 (1981): 1191–1308.
46. Roger Dobson, “Professor Doll Failed to Declare Interests When Working on Vinyl Chloride,” British Medical Journal 333 (2 December 2006): 1142. Not only did Monsanto pay Doll a handsome consultancy fee; the prepublication peer reviewers were also paid. A fee of close to $30,000 was paid partly “by the CMA [U.S. Chemical Manufacturers Association], partly by ICI, the biggest producer of vinyl chloride in the UK, and partly by Dow, another big producer of vinyl chloride.”
47. W. M. Court Brown and Richard Doll, “Mortality from Cancer and Other Causes after Radiotherapy for Ankylosing Spondylitis,” British Medical Journal (4 December 1965): 1332.
48. W. M. Court Brown and Richard Doll, “Geographical Variation in Leukemia Mortality in Relation to Background Radiation—Epidemiological Studies,” Proceedings of the Royal Society of Medicine 53 (September 1960): 763. In a paper written by Doll in 1970, entitled “Cancer Following Therapeutic External Radiation,” he remarked that “the data are consistent in showing excess incidence (or mortality) for cancer in irradiated sites and no significant excess for cancers in other sites.” Wellcome Collection, Wellcome Trust (London), PP/DOL/D1/5, typescript, 5.
49. S. C. Darby, G. M. Kendall, T. P. Fell, R. Doll, et al., “Further Follow-up of Mortality and Incidence of Cancer in Men from the United Kingdom Who Participated in the United Kingdom’s Atmospheric Nuclear Weapon Tests and Experimental Programs,” British Medical Journal 307 (11 December 1993): 1535.
50. See, for instance, British Medical Journal 308 (29 January 1994): 339, and “Richard Doll: An Epidemiologist Gone Awry,” on the Cancer Prevention Coalition Web site http://www.preventcancer.com/losing/other/doll.htm, accessed 24 February 2008.
Chapter 9. Hidden Assassin
1. Ellen Leopold, A Darker Ribbon: Breast Cancer, Women and Their Doctors in the Twentieth Century (Boston: Beacon Press, 1999), 33–34. 2. Rachel Carson, Lost Woods: The Discovered Writing of Rachel Carson (Boston: Beacon Press, 1998), 100.
3. Cited in Samuel Epstein, The Politics of Cancer (San Francisco: Sierra Club Books, 1978), 427. 4. Report of the President’s Committee on Health Education (New York: Public Affairs Institute, 1973), 2. Quoted in William G. Rothstein, Public Health and the Risk Factor: A History of an Uneven Medical Revolution (Rochester, N.Y.: University of Rochester Press, 2003), 363. 5. S. Monckton Copeman and Major Greenwood, Diet and Cancer with Special Reference to the Incidence of Cancer upon Members of Certain Religious Orders. Reports on Public Health and Medical Subjects No. 36, Ministry of Health (London: HMSO, 1926), 1. 6. Early evidence supporting the idea that restricted caloric intake offered some protection against cancer was reported in Albert Tannenbaum, “Initiation and Growth of Tumors. Introduction: Effects of Underfeeding,” American Journal of Cancer 38 (March 1940): 335–350. 7. Frederick L. Hoffman, Cancer and Diet (Baltimore: Williams and Wilkins, 1937), 116. Hoffman, on the basis of his extensive review, concluded that cancer was “profoundly affected by dietary and nutritional factors.” But, to his credit, he also reported the opposing view, expressed by the British surgeon Percy Lockhart-Mummery, that “there is no evidence whatever to support such an idea, and a very great deal of evidence to refute it” (117). 8. See http://researchportfolio.cancer.gov/projectlist.jsp?result=true&strSearchID=278166, accessed 24 February 2008. 9. Stephen J. Hedges, “Monsanto Having a Cow in Milk Label Dispute,” Chicago Tribune, 15 April 2007. Organic dairy producers, including Ben and Jerry’s and Organic Valley Farm, have been fighting labeling restrictions since the late 1990s. 10. Andrew Martin, “Organic Milk Supply Expected to Surge as Farmers Pursue a Payoff,” New York Times, 20 April 2007, C1. 11. Advocates for intervention have recently become more visible and the evidence they have mustered—with foundation support—more compelling. A study carried out under the auspices of Environmental Working Group (EWG) tested the umbilical cord blood of ten babies born in U.S. hospitals in 2004. The blood samples revealed the presence of 287 chemicals, including “pesticides, consumer product ingredients, and wastes from burning coal, gasoline, and garbage . . . we know that 180 [of the chemicals] cause cancer in humans or animals, 217 are toxic to the brain and nervous system, and 208 cause birth defects or abnormal development in animal tests. The dangers of pre- or post-natal exposure to this complex mixture of carcinogens, developmental toxins and neurotoxins have never been studied” (italics added). From the Executive Summary of the 2005 report Body Burden—The Pollution in Newborns. See http://www.ewg.org, accessed 12 August 2007. 12. “Ms. Browner Meets Mr. Delaney,” Rachel’s Hazardous Waste News #324, February 10, 1993. DES was later found to have caused reproductive cancers in the daughters of women who took the chemical to prevent miscarriage during pregnancies. See “Stilbestrol and Vaginal Cancer in Young Women,” CA: A Cancer Journal for Clinicians 22 (September 1972): 292–295. See also Cynthia Laitman Orenberg, DES: The Complete Story (New York: St. Martin’s Press, 1981). 13. Barbara Sattler, Review of Toxic Deception: How the Chemical Industry Manipulates Science, Bends the Law, and Endangers Your Health, by Dan Fagin, Marianne Lavelle, and the Center for Public Integrity, Journal of Public Health Policy 22 (2001): 240.
14. A study of bankruptcies caused by illness in 2001 found that people with cancer had average medical debts of $35,878. The research was carried out jointly at the Harvard Law and Harvard Medical Schools; see http://www.consumeraffairs.com/news04/2005/bankruptcy_study.html, accessed 7 January 2008.
15. April Dembosky, “A Fearless Crusader,” Smith Alumnae Quarterly Online, Summer 2007, http://saqonline.smith.edu/article.epl?issue_id=18&article_id=1710, accessed 13 August 2007.
16. Quoted in Rothstein, Public Health, 332. Rothstein supplies a useful history of dietary recommendations over the second half of the twentieth century and includes the sparring between science and the food industry.
17. Ibid., 330.
18. The same victim-blaming philosophy haunts postdiagnosis books like Jane Brody and Art Holleb’s You Can Fight Cancer and Win. Already feeling guilty for having failed to prevent their cancers, readers now face the real possibility of failing to cure them. This adds an additional layer of suffering to what may already be an extremely painful experience.
19. Ross L. Prentice, Bette Caan, Rowan T. Chlebowski, Ruth Patterson, et al., “Low-Fat Dietary Pattern and Risk of Invasive Breast Cancer: The Women’s Health Initiative Randomized Controlled Dietary Modification Trial,” JAMA 295 (8 February 2006): 629–642. As established risk factors in 1997, the study cites obesity and alcohol consumption.
20. E. Riboli, K. J. Hunt, N. Slimani et al., “European Prospective Investigation into Cancer and Nutrition (EPIC): Study Populations and Data Collection,” Public Health Nutrition 5 (6B) (December 2002): 1113–1124.
21. Patterson, The Dread Disease, 186. Quoting Harold Simmons, The Psychogenic Theory of Disease: A New Approach to Cancer Research (Sacramento: General Welfare Publications, 1966). See also Samuel Kowal, “Emotions as a Cause of Cancer: Eighteenth and Nineteenth Century Contributions,” Psychoanalytic Review 42 (July 1955): 217–227; “Personality and Cancer,” Scientific American 186 (June 1952): 34, 36.
22. Interestingly, twenty-nine of these seventy studies (42 percent) focused on breast cancer alone. Bert Garssen, “Psychological Factors and Cancer Development: Evidence after 30 Years of Research,” Clinical Psychology Review 24 (July 2004): 315–338.
23. http://www.hsph.harvard.edu/cancer/risk/index.htm, accessed 12 August 2007.
24. There is no consensus about this 50 percent; it is just one of many estimates. The numbers are all over the place—and all are speculative. According to one analyst, “experimental, clinical, and epidemiological research indicate that approximately three-fourths of all cancer deaths are attributable to lifestyle factors.” G. M. Williams, “Causes and Prevention of Cancer,” Statistical Bulletin Metropolitan Life Insurance Company 72 (April–June 1991): 6–10. Another pundit claims that “about 70–90% of cancer is thought to be environmentally related, with about 40–60% attributed to lifestyle or dietary practices.” Raymond J. Shamberger, Nutrition and Cancer (New York: Plenum Press, 1984), 1.
25. A medical advance will always draw more media attention than a medical impasse. Between 1975–1984 and 1985–1994, the survival rate for all cancers among 15–19-year-olds increased from 69 percent to 77 percent, but over a similar interval, the incidence rate for all cancers among 15–19-year-olds rose from 183 to 203.8 per million. National Cancer Institute, Cancer Incidence and Survival among Children and Adolescents: United States SEER Program, 1975–1995, tables 13.4, 13.5. Recently, however, about twenty people with a very strong family history of pancreatic cancer (an especially deadly form of the disease) have been offered—and have taken—the chance to have their pancreas removed prophylactically even though this immediately makes them diabetic and dependent on insulin for the rest of their lives. Denise Grady, “Deadly Inheritance, Desperate Trade-Off,” New York Times, 7 August 2007.
Chapter 10. Experiments by Other Means
1. John C. Bailar III and Elaine M. Smith, “Progress against Cancer,” New England Journal of Medicine 314 (1986): 1226–1232. 2. The Cancer Letter 12 (May 23, 1986), 1–2. 3. Letters to the Editor from Vincent T. DeVita Jr. and David Korn, New England Journal of Medicine 314 (1986): 964; Michael B. Shimkin, New England Journal of Medicine 314 (1986): 965. The World Health Organization also weighed in, suggesting that the decline in stomach cancer in all countries was probably attributable to “changes in diet and food preparation and not improved therapy” (965). And a physician from Maryland argued that the rise in cancer mortality actually represented progress of a sort, that falling mortality from cardiovascular disease “has ‘unmasked’ cancer mortality, and persons who formerly died with cancer now died of cancer” (966) (italics added). 4. For an introduction to the history of clinical trials, see Harry M. Marks, The Progress of Experiment: Science and Therapeutic Reform in the United States, 1900–1990 (Cambridge: Cambridge University Press, 2000). 5. National Cancer Institute, Annual Report of Program Activities, 1958, 50, n39. Quoted in Peter Keating and Alberto Cambrosio, “From Screening to Clinical Research: The Cure of Leukemia and the Early Development of the Cooperative Oncology Groups, 1955–1966,” Bulletin of the History of Medicine 76 (2002): 310. 6. The first trial compared the effect on rates of remission of delivering experimental combinations of chemotherapies continuously (on a daily basis) rather than intermittently (every third day) with equivalent total doses in both groups. The results indicated that those in the “continuous” group had longer remissions than those in the “intermittent.” Children enrolled in the trial also fared better than adults (36 percent of children had remissions versus 19 percent of adults), but children who benefited had different types of leukemia from adults who did. See Emil Frei, James F. Holland, Marvin A. Schneiderman, et al., “A Comparative Study of Two Regimens of Combination Chemotherapy in Acute Leukemia,” Blood 13 (1958): 1144. 7. A. Bradford Hill, “The Clinical Trial,” New England Journal of Medicine 247 (24 July 1952): 118. Hill is quoting the author of “Infectious Diseases and Vital Statistics,” British Medical Journal 2 (1951): 1088–1090. 8. Francis H. Adler, W. E. Fry, and I. H. Leopold, “Pathologic Study of Ocular Lesions Due to Lewisite (Beta-Chlorovinyldichloroarsine),” Archives of Ophthalmology 38 (1947): 89–108. 9. Richard L. Schilsky, O. Ross McIntyre, James F. Holland, et al., “A Concise History of the Cancer and Leukemia Group B,” Clinical Cancer Research 12 (11 Pt 2) (1 June 2006): 3553s–3555s. The first randomized clinical trial enrolled eighty-four leukemia patients, of whom sixty-five eventually participated. 10. Schilsky, A Concise History, “Notably, the [first] protocol contained . . . no formal statistical section, no adverse event reporting, and no model-informed consent document.” 11. For a detailed history of the evolution of informed consent at federal agencies, including the FDA and NIH, see Ruth R. Faden and Tom L. Beauchamp, A History and Theory of Informed Consent (New York: Oxford University Press, 1986), chapter 6.
12. The Threshold Test Ban Treaty of 1974 and the Peaceful Nuclear Explosions Treaty of 1976 restricted nuclear test explosions to yields that could not exceed 150 kilotons.

13. The four articles were written by staff writers Ted Gup and Jonathan Neumann and appeared on 18, 19, 20, and 21 October 1981, in a series entitled "The War on Cancer." The introduction to the series claimed that it was based on "detailed reports on each of the more than 150 chemicals used in human cancer experiments in the past decade (and that) on-the-record interviews were conducted with more than 600 doctors, patients, nurses, scientists and researchers."

14. Ted Gup and Jonathan Neumann, "Experimental Drugs: Death in the Search for Cures," Washington Post, 18 October 1981, A1. "The chemicals generally come from industry, including such firms as Clairol, Dow Chemical, the 3M Company, Gulf Oil, Uniroyal, U.S. Naval Weapons, Proctor and Gamble, Eastman Kodak." Gup and Neumann, "A Long, Hit-and-Miss War against Cancer," Washington Post, 18 October 1981, A16.

15. Gup and Neumann, "Experimental Drugs."

16. Vincent DeVita, "DeVita Responds to Post Article," Washington Post, 19 October 1981, A27.

17. Howie Kurtz, "Cancer Drug Programs Rapped at Hill Hearing," Washington Post, 28 October 1981, A3.

18. E. Haavi Morreim, "Litigation in Clinical Research: Malpractice Doctrines versus Research Realities," Journal of Law, Medicine and Ethics (Fall 2004): 475.

19. In the mid-1970s, a National Surgical Adjuvant Breast and Bowel Project (NSABP) trial designed to measure the relative effectiveness of simple versus radical mastectomy failed to enroll an adequate number of subjects within the allotted time frame. More than a third of the principal investigators had refused to enter any of their patients. As barriers to accrual, participating surgeons commonly cited the difficulty of seeking their patients' informed consent before knowing to which arm of the trial they had been assigned. Randomization was "an unfamiliar and disquieting process." See Kathryn M. Taylor, Richard G. Margolese, and Colin L. Soskolne, "Physicians' Reasons for Not Entering Eligible Patients in a Randomized Clinical Trial of Surgery for Breast Cancer," New England Journal of Medicine 310 (1984): 1363–1367. The NSABP subsequently modified the protocol, permitting patients to be prerandomized to one treatment arm or the other before their consent was obtained, rather than after. After this change, in June 1978, the accrual rate rose by 600 percent. This did not entirely resolve the ethical difficulties involved. See Marcia Angell, "Patients' Preferences in Randomized Clinical Trials," New England Journal of Medicine 310 (1984): 1385–1387.

20. Christopher K. Daugherty, "Impact of Therapeutic Research on Informed Consent and the Ethics of Clinical Trials: A Medical Oncology Perspective," Journal of Clinical Oncology 17 (1999): 1606.

21. Ibid., 1607. Twenty-seven cancer patients had agreed to participate in a Phase 1 trial. Twenty-three of them said they had decided to participate for "reasons of possible therapeutic benefit," three "because of the advice or trust of physicians," and one because of family pressure.

22. Trials for cancer therapies are not the only ones to experience the consequences of inadequate enrollment. Early trials of Vioxx were not large enough to catch problems before the drug went to market as an anti-inflammatory arthritis medicine.

23. C. P. Gross, V. Murthy, Y. Li, et al., "Cancer Trial Enrollment after State-Mandated Reimbursement," Journal of the National Cancer Institute 96 (2004): 1063.
24. "President Clinton Takes New Action to Encourage Participation in Clinical Trials," White House Press Release, 7 June 2000.

25. Justin E. Bekelman, Yan Li, and Cary P. Gross, "Scope and Impact of Financial Conflicts of Interest in Biomedical Research: A Systematic Review," JAMA 289 (2003): 454.

26. See Sheldon Krimsky, Science in the Private Interest: Has the Lure of Profits Corrupted Biomedical Research? (Lanham, Md.: Rowman and Littlefield, 2003); and Marcia Angell, The Truth about the Drug Companies: How They Deceive Us and What to Do about It (New York: Random House, 2004).

27. The failure to achieve effective oversight of trials has been recognized by the Office of the Inspector General, which attributes it to "increased commercialization, the increase in multicenter trials, and a significant increase in workload." Existing protections are laxly enforced. Broad federal regulations on the books that may apply to the research environment have not yet been comprehensively tested. The federal antikickback statute, for instance, prohibits health care providers from "knowingly and willfully paying for the referral of patients covered by federal health care programs." Since 2000, this has applied to Medicare-supported clinical trials. Potential conflicts of interest get more complicated when a manufacturer contributing research funds to an institution (e.g., in support of a clinical trial) also sells products to the same institution that are reimbursable by federal health programs. See Paul E. Kalb and Kristin Graham Koehler, "Legal Issues in Scientific Research," JAMA 287 (2002): 85–91.

28. Letter to Daniel R. Levinson, Inspector General, Department of Health and Human Services, from Senator Charles E. Grassley, Chairman, Committee on Finance, 8 November 2005.

29. Gardiner Harris, "Report Assails F.D.A. Oversight of Clinical Trials," New York Times, 28 September 2007, 1.

30. Ibid.

31. Karine Morin, Herbert Rakatansky, Frank A. Riddick Jr., et al., "Managing Conflicts of Interest in the Conduct of Clinical Trials," JAMA 287 (2002): 78.

32. "Effects of Chemotherapy and Hormonal Therapy for Early Breast Cancer on Recurrence and 15-Year Survival: An Overview of the Randomized Trials," The Lancet 365 (14 May 2005): 1703.

33. Related trials also demonstrated that women opting for radical mastectomies received no additional benefit from postoperative radiation. In other words, no woman need ever again submit to the treatment regime that so damaged Irma Natanson. See Bernard Fisher, Nelson H. Slack, Patrick J. Cavanaugh, et al., "Postoperative Radiotherapy in the Treatment of Breast Cancer: Results of the NSABP Clinical Trial," Annals of Surgery 172 (October 1970): 711–732.

34. A U.K. pharmaceutical trade group advertising its report "The Clinical Trials Market 2006" online claims that "the global clinical trials industry is currently worth an estimated $10 billion and has the potential for considerable growth in the future." The report sells for £1,499 (about $3,000). See http://www.leaddiscovery.co.uk/reports/The_Clinical_Trials_Market_2006.html, accessed 13 September 2007.

35. Alex Berenson, "Hope, at $4,200 a Dose," New York Times, 1 October 2006, 3, 7. Berenson estimates that worldwide sales of cancer medicines will rise to $55 billion in 2009. Runaway cancer drug costs help explain why the share of medical care in gross domestic product (GDP) has risen from 4 percent in 1950 to almost 15 percent today. David M. Cutler, Your Money or Your Life: Strong Medicine for America's Health Care System (New York: Oxford University Press, 2004), 4.
36. Gardiner Harris, "New Drug Points up Problems in Developing Cancer Cures," New York Times, 21 December 2005, A23.

37. In 1972 there were about 650,000 new cancer diagnoses; in 2005 the comparable figure was 1,372,910. The cost of treatment was $3.87 billion in 1972 and $74 billion in 2005; adjusted for inflation, that yields a ratio of 1 to 4. National Cancer Institute, National Institutes of Health, Fact Book, 2005, table C-3.

38. NCI Fact Book, 1997, 46, and NCI Fact Book, 2004, B8. Since 1982, NCI spending on clinical trials has risen more than threefold, from $246.3 million to $800 million (both figures in 2005 dollars).

39. The stark contrast masks many changes. Among the most important are (1) the aging of the American population over the past fifty years, which in itself alters the distribution of specific cancer deaths, since some cancers (lung, melanomas, non-Hodgkin's lymphoma) more commonly kill people over sixty-five; (2) declines in the death rates of some cancers, such as cancers of the cervix, testes, and stomach, and Hodgkin's lymphoma; (3) increases in the death rates of others, such as lung cancer (among women), liver cancer, and multiple myeloma. See "Cancer Mortality Rates: Changes from 1973 to 1997 Ages under 65" and "Cancer Mortality Rates: Changes from 1973 to 1997 Ages over 65," NCI Fact Book 2000, C-6, C-7.

40. In a study of childhood cancer survivors (average age about twenty-seven), more than a quarter had a severe or life-threatening condition, including cancer. See Kevin C. Oeffinger, Ann C. Mertens, Charles A. Sklar, et al., "Chronic Health Conditions in Adult Survivors of Childhood Cancer," New England Journal of Medicine 355 (12 October 2006): 1572–1582.

41. H. Gilbert Welch, Steven Woloshin, and Lisa M. Schwartz, "How Two Studies on Cancer Screening Led to Two Results," New York Times, 13 March 2007, F5.

42. Scott Lippman, Bernard Levin, Dean E. Brenner, et al., "Cancer Prevention and the American Society of Clinical Oncology," Journal of Clinical Oncology 22 (2004): 3848.

43. Tamoxifen is perhaps the best known of these drugs. It is designed to limit the incidence of breast cancer among high-risk populations and to reduce the recurrence of disease among those already diagnosed. But it can also raise the risk of uterine cancer, blood clots, and cataracts among a small but significant number of those taking the drug.

44. See Breast Cancer Action, "Policy on Pills for Prevention," available on their Web site, http://www.bcaction.org/Pages/LearnAboutUs/PillsForPreventionPolicy.html, accessed 28 February 2008. See also Adriane Fugh-Berman and Samuel Epstein, "Tamoxifen: Disease Prevention or Disease Substitution?" The Lancet 340 (7 November 1992): 1143–1145.

45. Lippman et al., "Cancer Prevention," 3848.

46. In her last public appearance, Carson argued that "the burden of proof is on those who would use these chemicals to prove the procedures are safe." See Ellen Leopold, "Seeing the Forest and the Trees: The Politics of Rachel Carson," Monthly Review 52 (May 2000): 48–54.

47. Quoted in Rita Arditti, "Ten Years After: rBGH and Cancer," Women's Community Cancer Project Newsletter, Summer 2006, 4.

48. Besides the EPA and the FDA, these include the Department of Energy, the Nuclear Regulatory Commission, the Department of Defense, and the Department of Labor. See "Appendix B. Federal Agency Radiation Responsibilities," Radiation Protection at EPA: The First 30 Years, United States Environmental Protection Agency, 2000.
49. Several good histories documenting the ubiquity of radiation and its perils appeared in the 1980s and early 1990s. See Rosalie Bertell, No Immediate Danger: Prognosis for a Radioactive Earth (London: Women's Press, 1985); Catherine Caufield, Multiple Exposures: Chronicles of the Radiation Age (Chicago: University of Chicago Press, 1989); Jay M. Gould and Benjamin A. Goldman, with Kate Millpointer, Deadly Deceit: Low-Level Radiation, High-Level Cover-Up (New York: Four Walls Eight Windows, 1990).

50. Rosalie Bertell, retired president of the International Institute of Concern for Public Health, personal communication with the author, 10 April 2006. She currently serves on the Board of Regents of the International Physicians for Humanitarian Medicine.

51. This divergence in recommended standards is not new. In 1954, the NCRP dropped the requirement for occasional blood counts that might have picked up diseases like leukemias among those at risk. The ICRP, however, kept it. "Blood counts," argues Gilbert Whittemore, "were a dramatic reminder that the ultimate concern was with biology, not physics" (466). In 1990, when the ICRP lowered its recommendation for occupational exposures from 5 rems to 2 rems per year, the NRC did not follow suit.

52. For a full list and synopsis of all the reports, see the ICRP Web site.

53. Industrial radioisotopes are now used to activate fire alarms, sterilize instruments, locate flaws in materials like steel and in welds used in manufacturing, authenticate art works, eliminate dust from CDs, and discover whether a well has the potential to produce oil (nuclear well-logging). Radiation is also used to splice foreign genetic material into crops (genetically modified food) and, using cobalt-60, to produce mutant crops with improved taste, yield, resistance to disease, and so on (in a process called "radiation breeding").

54. A CT urographic study delivers 75 times the radiation of a mammogram, an effective dose of 44.1 millisieverts (4.41 rem). See Daniel Lockwood, David Einstein, and William Davros, "Diagnostic Imaging: Radiation Dose and Patients' Concern," Cleveland Clinic Journal of Medicine 73 (June 2006): table 1, 584.

55. The NCRP report is due to be published in 2008. Roni Caryn Rabin, "With Rise in Radiation Exposure, Experts Urge Caution on Tests," New York Times, 19 June 2007, D5.

56. Report on Carcinogens, Eleventh Edition; U.S. Department of Health and Human Services, Public Health Service, National Toxicology Program, released January 2005, available online.

57. In 1974, there were 304,680 new patients (recorded for the previous year). In 1990, the number of new patients reached 492,120 (including a modest number from Hawaii and Alaska who had not been counted before). The number of radiation treatment facilities grew over the same interval from 1,013 in 1974 to 1,310 in 1990. Jean B. Owen, Lawrence R. Coia, and Gerald E. Hanks, "Recent Patterns of Growth in Radiation Therapy Facilities in the United States: A Patterns of Care Study Report," International Journal of Radiation Oncology, Biology, Physics 24 (1992): 983–986. By 2007, the number of Americans undergoing some form of radiotherapy reached an estimated 800,000, almost double the number in 1990 (Andrew Pollack, "Hospitals Chase a Nuclear Tool to Fight Cancer," New York Times, 26 December 2007, A22).

58. No new civilian nuclear power plant has been constructed since the 1979 accident at Three Mile Island. But recently there have been signs of a resurgence of interest. According to the Department of Energy, 16 U.S. power companies have indicated their intention "to submit applications for a combined Construction and Operating License to the Nuclear Regulatory Commission (NRC) between 2007 and 2009" (direct communication with the author, 31 July 2007). Accompanying this revival, although not explicitly linked to it, has been the reappearance (though still rare) of the popular article on the dangers of medical radiation exposures. See Rabin, "With Rise in Radiation Exposure." See also "Report Links Increased Cancer Risk to CT Scans," New York Times, 29 November 2007, 17, which points to the rise in CT scans in the United States from 3 million in 1980 to about 62 million scans in 2006.

59. The American College of Radiology Web site does include a recommendation to keep a tally of exposures: "If you have had frequent x-ray exams and change healthcare providers, it is a good idea to keep a record of your x-ray history for yourself." But how many patients are likely to visit this Web site? See http://www.radiologyinfo.org/en/safety/index.cfm?pg=sfty_xray, accessed 27 August 2007. In a similar vein, the FDA also recommends the use of a radiation report card. But not one American in a million, I would guess, has ever put this recommendation into practice.

60. A typical body burden now includes literally hundreds of toxins. It makes the concept of "dose reconstruction," based on an estimated exposure to a single hazard at a single time and place, seem obsolete. Of course it also renders moot any question of compensation.

61. The Environmental Working Group has undertaken a comprehensive mapping of what is known about the constituents of the body burden in its Human Toxome Project. Among the many carcinogenic compounds on its list are inorganic arsenic, cadmium, chromium, organochlorine pesticides, and toxic metals. See www.bodyburden.org (accessed 21 September 2007).

62. "Radiation Data Faces U.S. Limit," New York Times, 7 April 1957, 22.

63. John W. Finney, "2 Groups to Urge Check on X-Rays," New York Times, 9 June 1957, 10.

64. There have been some citizen-based initiatives. In 2000, the Committee for Nuclear Responsibility (chaired since 1971 by John Gofman) launched the "X-rays and Health Project." Its Patients' Right-to-Know Policy Statement argued that "medical and dental patients have the right (a) to know their radiation exposures from x-ray imaging procedures and to possess a reliable dose-record, and (b) to know that the medical and dental communities are actively seeking the most effective ways to reduce dosage during x-ray imaging procedures." See also Carrie Spector, "X-ray Visionary and the Patients' Right-to-Know Project," Breast Cancer Action Newsletter 64 (March/April 2001).

65. New York Times, 1 July 1979. Quoted in J. Samuel Walker, Permissible Dose, 92.

66. Very few Americans are aware that X-rays have been included on the official list of "known carcinogens," in the select company of just over fifty other substances. "Part A. Known to be Human Carcinogens," Report on Carcinogens, Eleventh Edition. This is a biennial report published for informational purposes only. Included in the most recent edition, in addition to "x-radiation and gamma radiation," are the first viruses, hepatitis B and C and the human papillomavirus. Tamoxifen and steroidal estrogens are also on the list.

67. Andrew Pollack, "Hospitals Chase a Nuclear Tool." The article quotes Dr. Theodore S. Lawrence, the chairman of radiation oncology at the University of Michigan.

68. Ibid. Before 2000, there was only one proton center in the United States. Now there are five, with plans to build more than a dozen more.
Index
abraxane, 219–220 ACHRE. See Advisory Committee on Human Radiation Experiments Advisory Committee on Human Radiation Experiments (ACHRE), 26, 239n46, 250n54 Advisory Committee on Human Radiation Experiments Final Report (ACHRE Report), 84, 103, 211, 239n46 AEC. See Atomic Energy Commission African Americans, 117, 215; as subjects in medical experiments, 88, 97, 215 AIDS, 12, 192 air force. See U.S. Air Force Air Force School of Aviation Medicine, 86, 95 Allen, Irene, et al. v. United States, 170–178; plaintiffs, 170, 171; and radiation science, 171–174; and suggestions of victimblaming, 195–196. See also under Gofman, John W.; Morgan, Karl Z.; Pollitt, Norma; Saenger, Eugene Alsop, Stewart, 12, 164 AMA. See American Medical Association American Association for the Advancement of Science, 103–104 American Cancer Society, 5, 129, 160, 205; co-sponsors BCDDP, 157, 160; and early detection, 161; on the sidelines in fallout debates, 125 American Medical Association (AMA), 24, 53, 120, 121, 123, 234; exploits Cold War to defeat health insurance, 121–122; on the sidelines in fallout debates, 122, 124, 129 Angell, Marcia, 217, 270n19 anticommunism, 5, 27, 121, 193, 200, 232–233. See also under Cold War asbestos, 167, 188 Atom and Eve, The (film), 157 Atomic Bomb Casualty Commission, 138, 140, 141, 176, 249n38
atomic energy: fear of, 155, 156–157; risks of, 16, 20–21; split personality of, 15, 17–18. Atomic Energy Act (1954), 134; promotes private development of radioisotopes, 33–34, 73, 76 Atomic Energy Commission (AEC), 24, 25, 32, 36, 66, 95, 99, 100, 101, 105, 111, 118, 120, 122, 182, 210, 231; budget, 75, 245n33; and cancer research, 74–75, 104, 115; conflicting obligations of, 15, 149; denies hazards of fallout, 36, 116–117, 133, 141, 253n23; Division of Biology and Medicine, 34, 66, 75, 76, 84–85, 100, 102, 141, 147; history, 66, 94, 237n32, 258n10; keeps a tight rein on fallout research, 136, 142, 144, 152, 250n51; and liability, 72, 73; military mission at odds with cancer concerns, 74–75, 139; and privatization of postwar technologies, 21, 70, 76–77; promotes radioisotopes, 32; and radiation protection standards, 147–149, 151; temporizes to protect weapons tests, 22, 114, 131–132, 139, 142, 144, 156; underwrites costs of cobalt radiotherapy, 68, 70. See also under Bravo test; human experimentation Atomic Energy of Canada, Ltd., 71, 78, 243n10, 245n28 atomic tests. See nuclear weapons testing atomic workers. See Energy Employees Occupational Illness Compensation Program Atoms for Peace, 18–20, 24, 68, 71, 73–74, 228; conference in Geneva (1955), 69, 102 Auerback, Stuart, 98 Bailar, John C.: questions safety of mammography, 158, 160, 260n29, 261n35; questions success of war against cancer, 207–208 Ball, Howard, 142, 253n17
BCDDP. See Breast Cancer Detection Demonstration Project BEIR reports. See Biological Effects of Ionizing Radiation (BEIR) reports Belfrage, Cedric, 252n13 Bertell, Rosalie, 227, 273n50 Biological Effects of Ionizing Radiation (BEIR) reports, 155, 176, 177, 264n24 Bobst, Elmer, 11 body burden, 230–231 Bono, Vincent, 214 bovine growth hormones (rBGH), 197–198 Boyer, Paul, 235n5 Bravo test, Marshall Islands (1954), 97, 117; AEC delays response to, 118, 131–132 breast cancer, 217, 221, 223, 268n22; individual experience of (see Natanson, Irma; Pollitt, Norma); literature of, 235n3, 239n1; screening (see mammography); treatments, 47, 59, 208, 219–220, 242n11. See also under Japanese bomb survivors; National Surgical Adjuvant Breast Project Breast Cancer Action, 223 Breast Cancer Detection Demonstration Project (BCDDP), 157–158, 159 Brenner, Barbara, 201, 261n35 Brody, Jane, 252n12, 268n18 Brooks, A. E., 37 Brown, James Barrett, 49, 56; treats Natanson as burn victim, 38–39 Brown, Percy, 252n2 Brucer, Marshall, 66, 70, 252n10 Bulloch v. United States, 253n23 Caldwell, Glyn, 172, 173 Canada: covers costs of cancer treatments, 47, 64; crown corporations, 243n10; pioneers development of cobalt radiotherapy, 46, 72; trades with USSR, 71, 244n25 cancer: arising from occupational hazards, 263n9; as byproduct of atomic weapons, 22; and Cold War, 26–27; distinctive features of, 14, 17, 22, 120, 187, 192; enduring impact on language, 11, 236n12, 237n20; malaise, 11, 17; media coverage in 1950s, 110–113; as metaphor of communism, 8–10, 11, 236n10, 236n12; paired with heart disease, 187; personal
responsibility for, 14, 22, 27–28; as pollution, 24; private perceptions of, 10, 236n15; specific types of (see breast cancer; leukemia; lung cancer; prostate cancer; thyroid cancer); as un-American, 9–10, 119. See also clinical trials cancer deaths: changing pattern of, 269n3, 272n39; over the Cold War period, 207–208, 248n34; compared with battlefield deaths, 13; current levels, 222; over first half of the twentieth century, 26, 239n47; linked to chemicals, 167; linked to fallout, 23, 28, 239n49; between 1950 and 1960, 125–126, 254n46; over second half of the twentieth century, 222, 272n39 cancer hospitals, 243n7, 245n34, 254n38. See also M. D. Anderson Hospital cancer patients: face dilemmas of clinical trials, 215–216, 218; increase in numbers, 229, 273n57; marginal status as human subjects, 88, 213; not consumers, 213; in secret experiments, 88–90. See also doctor/patient relationship cancer survival rates, 268n24; between 1944 and 1960, 93 cancer treatments: as markets, 221–224; rising costs of, 221–222, 272n37. See also cobalt radiotherapy; mastectomy; radiotherapy Cantor, David, 64, 236n15, 244n12 carbon footprint, 230–231 carcinogens: identified, 239n52; difficulty in disentangling effects of, 231–232; industrial, 164, 167–168; shift in compounds under scrutiny, 199–200; small percentage tested, 262n2; in tobacco smoke, 181; tolerance for, 165, 183, 232; versatility of, 168. See also asbestos; fallout; vinyl chloride Cardozo, Benjamin, 51 Carson, Rachel, xi, 165, 194, 226, 272n46 Carter, Jimmy, 167 chemotherapy, early development in NCI trials, 209, 210, 269n6 Chernobyl nuclear accident, 180 Christie, Agatha, 36 Churchill, Winston, 8–9 CIA (Central Intelligence Agency), 125
Clark, R. Lee, 63, 65, 66, 88, 101; and ethics of medical research, 90–91; facilitates air force contract at M. D. Anderson, 85–86; later career of, 99 clinical trials: costs, 216, 217; developed by NCI, 208–211, 269n6; enrollment, 216, 270n19, 270n22; harm inflicted by, 214; insurance coverage for, 216; lack of federal oversight, 218–219, 271n27; limited returns on, 219–220; NCI funding, 222, 272n38; need for, 208; opportunity costs of, 221; and secret radiation experiments, 210, 218; successes, 219. See also under cancer patients; informed consent Clinton, William J., 217; mandates ACHRE, 26, 239n46 cobalt-60 (radioactive cobalt), 2, 34; growing awareness of, 36; production of, 71, 240n2; as suspected carcinogen, 242n19 cobalt radiotherapy: advantages over radium, 34, 65; development of, 62–63; early clinical studies, 46–47, 77, 104, 242n11; as poster child for Atoms for Peace Program, 68; success and later decline of, 77–78; used to treat patients prematurely 25, 76, 78–79 cognitive dissonance, 87, 129; induced by radioactivity, 17 Cold War: and anticommunist rhetoric, 7–8; enduring legacies of, 5, 200, 217, 227, 228, 232–233; and God, 13; governed by short-term priorities, 146, 159; and multiple links to cancer, 5, 15, 27, 106; relaxation of, 212; rise of, 5–6. See also under cancer; secret radiation experiments Coley, William B., 107, 251n64 Columbia Law Review, 57 Columbus, Ohio, “misadministrations” of cobalt-60 at Riverside Methodist Hospital, 153–154, 259n21 comics, 23, 238n40 Committee for Nuclear Responsibility, 274n64 Commoner, Barry, 23, 263n11 communism, 6, 10, 12, 24; and disease, 8–9. See also under cancer
compartmentalization, as Manhattan Project strategy, 91–92 compensation programs. See Energy Employees Occupational Illness Compensation Program; Radiation Exposure Compensation Act computed tomography. See CT Congress for Cultural Freedom, 125 congressional hearings: on experiments, 251n58; on fallout, 23, 133, 257n27 Connecticut Yankee Power, 157 contracts, wartime procurement practices applied to Cold War, 67–68 Crile, George, Jr., 11, 59 Crumpacker, Leo, 35, 46, 59 CT (computed tomography), 222; and high radiation exposures, 228, 273n54; increasing use of, 229 Curie, Marie, 113, 253n16 Delaney Clause, 198–199 Department of Energy, research budget of, 250n53 Department of Health Education and Welfare (DHEW), introduces safeguards for experimental subjects, 211 DES. See diethylstilbestrol Deseret News (Salt Lake City), 116 DeVita, Vincent, Jr.: responds to Bailar and Smith, 207–208; responds to Washington Post exposé, 214 DHEW. See Department of Health Education and Welfare diet and cancer research: bias of, 197, 199; complexity of, 196; exploited by food industry, 201–202; long pedigree, 196; recent results, 202–203 diethylstilbestrol (DES), 198, 267n12 doctor/patient relationship, 3, 11, 154, 260n23, 270n19; adjusts to rise of feminism and decline of Cold War, 212; compromised by secret radiation experiments, 89–90, 92–93; at the heart of cancer treatment, 59–60; lack of candor in, 26, 89, 93–94; in the prevention of heart disease, 185. See also physicians Doll, Richard, 187–190, 265n34; and conflict of interest, 189; and emblematic career, 190, 266n46, 266n48
dose reconstructions, 143–145, 231; as official response to fallout, 144 downwinders, 15, 113, 140, 142; disregard for health of, 146; in Lyon study, 170; plaintiffs in Allen v. United States, 170, 171. See also Mormon communities in Nevada Dunham, Charles, 102 Dunning, Gordon, 172 DuPont, 67–68 early detection, 3; and Irma Natanson, 162–163. See also American Cancer Society; mammography EEOICP. See Energy Employees Occupational Illness Compensation Program Egan, Robert L., and mammography research, 99 Eisenbud, Merril, 133 Eisenhower, Dwight D., 68, 122, 166. See also International Atomic Energy Agency ElBaradei, Mohamed, 18, 135 Eldorado Mining and Refining Ltd., 243n10 Endicott, Kenneth, 166 Energy Employees Occupational Illness Compensation Program (EEOICP), 178, 180, 264n28 environmental pollution, 165, 168–169 Environmental Protection Agency (EPA), 151, 165; attacked for ineptitude, 152; and radiation standards, 228, 258n10 Environmental Working Group (EWG): and Human Toxome Project, 274n61; umbilical cord blood study, 267n11 EPA. See Environmental Protection Agency epidemiology, 182, 192, 202–203; apportions incidence of occupational cancers, 263n9; and infectious diseases, 185, 186; and smoking, 180, 181; and studies of fallout, 133, 138, 140, 142–144, 146, 171. See also Doll, Richard; dose reconstructions; Framingham Heart Study; Lyon, Joseph Ewing, James, 106–107 EWG. See Environmental Working Group
Failla, G., 149 fallout, nuclear, 15, 16, 17, 18, 21, 23, 24, 119, 121, 123, 125, 126, 155, 175, 180; compared with other sources of radioactivity, 126, 127; early awareness of carcinogenic properties, 109–110; and government failure to warn of risks, 174–175; as harbinger of wide-ranging environmental contamination, 164; hyperbolic claims about, 114, 133; loses original meaning, 229; renewed interest in, in late 1970s, 169–170; short- vs. longterm perspective on, 74, 120–121, 132; as unexpected byproduct of atomic weaponry, 22, 117. See also under Atomic Energy Commission; cancer deaths Farrell, T. F., denies dangers of fallout, 114 FBI (Federal Bureau of Investigation), 7 FDA. See Food and Drug Administration Federation of American Scientists, 132 Federal Radiation Council, 125, 144, 150, 160, 258n10 Fisher, Bernard, 245n39 Fletcher, Gilbert H., 63, 65, 85, 86, 87, 88, 91, 95, 98–99, 105 Food and Drug Administration (FDA), 153, 197, 218, 236n15 Fradkin, Phillip, 116, 253n21 Friedan, Betty (The Feminine Mystique), 57 Framingham Heart Study, 184–186; impact of methodology on cancer prevention, 186–187 General Electric, 20, 66–67, 71, 74, 151, 163 Gofman, John W, 115, 172, 174, 250n51, 274n64; on NCRP, 151; on radiologists’ perception of risk, 100, 250n52 Gottschalk, Bernie, 238n43 Grant, Lee, 167 Gravel, Mike (D-Alaska), 98 Green, Harold P., on NCRP, 151 Grimmett, Leonard, 65, 66, 67, 70, 81, 85, 87 Groves, Leslie, 92, 114 Gup, Ted, 270n13, 270n14 Halsted, William Stewart, 59 Hamilton, Alice, 43–44, 241n5 Hamilton, Joseph, 247n15; as author of “Buchenwald” memo, 85
Hartog, Jan de, 248n26 Harvard Center for Cancer Prevention, 205 Harvard Law Review, 57–58 Health Insurance Plan (HIP), 157 Heath, Clark, 172 Henry, Hugh F., 252n10 Hiebert, A. E., 37, 49 Hill, Bradford, 188 HIP (Health Insurance Plan), 157 Hiroshima, 15, 17, 22, 24, 35, 114, 117, 170, 171, 176. See also Japanese bomb survivors “Hiroshima Maidens,” 140 Hoffman, Frederick L., 267n7 hormone replacement therapy (HRT), 79, 246n42 Hoxley, Harry M., 236n15 HRT. See hormone replacement therapy Hueper, W. C., 242n19 human experimentation: and clinical trials, 213; discussed in AEC memo, 83–84, 85–87; inherent hazards of, 218. See also Nuclear Energy for the Propulsion of Aircraft; Nuremburg Code; secret radiation experiments Hutchins, Robert, 32
IAEA. See International Atomic Energy Agency ICRP. See International Commission on Radiological Protection immunotherapy, early research discouraged, 106–107 informed consent, 2, 41, 218, 230, 231; abused in secret radiation experiments, 89; "choice" and, 4–5; in clinical trials, 211, 212–213, 270n19; inherent conflict of interests aired in Natanson trial, 52–54. See also Schloendorff v. The Society of New York Hospital International Atomic Energy Agency (IAEA), 135, 228; established by Eisenhower, 18 International Commission on Radiological Protection (ICRP), 152, 153; membership bias, 227–228, 259n11; and NCRP, 149, 150, 273n51; origins, 258n10; rejects radiation audit proposal, 231 iodine-131, 179, 265n32
JAMA (Journal of the American Medical Association), 111, 202; stories cited in the New York Times, 252n12 Japanese Americans, 117 Japanese bomb survivors (hibakusha), 15, 22, 138–139, 140, 141, 176, 177; breast cancer among, 138, 252n12, 257n19; leukemia among, 138, 140 Jenkins, Bruce S., 171, 174–175, 177 Johnson, Lyndon, 166
Kaplan, Arthur, 219 Kefauver, Estes (D-Tenn), 118 Kennan, George, 9 Kennedy, Edward (D-Mass), 98 Kennedy, John F., 95, 250n55 Kevles, Bettyann, 243n6 Klausner, Richard, 179, 180 Kline, John R., 35, 37, 40, 46, 51, 52, 55, 56, 59, 105 Knapp, Harold, 142, 144 Koop, C. Everett, 183 Krimsky, Sheldon, 217 Kushner, Rose, 167, 239n1 Kutcher, Gerald, 235n2 Kuttner, Robert, 98
Land, Charles, 172, 174 Lanier, Ray, 132–133 Lapp, Ralph, criticizes AEC, 132–133 Lawrence, David, 119 Leopold, I. H., 269n8 Lerner, Barron, 260n28 Leshner, Alan, 135 leukemia, 23, 36, 116, 138, 142, 171, 189, 238n43, 249n38, 253n16, 256n14, 257n20; target of early trials, 209, 269n6, 269n9; unexpected rise in incidence among downwinders, 169, 170, 173. See also Japanese bomb survivors Libby, Willard F., 118, 133 Life, 134 "lifestyle" factors in prevention policy, 28, 186, 187, 188, 196, 204, 205 Limited Test Ban Treaty (1963), 135, 141, 164, 212, 256n13 Lindsay, John, 125 Lorentz, Pare, 149 Love, Susan, 163
Lucky Dragon (ship), 117, 118 lung cancer, 138, 167, 185, 222 Lyon, Joseph, 142–143, 144, 170, 172, 173, 257n30 mammography: and breast augmentation, 261n35; and early detection, 161; limitations of, 161–162; linked to fear of cancer, 129, 158; and medical malpractice suits, 162; radiation associated with, 158–159, 261n35. See also Breast Cancer Detection Demonstration Project Mammography Quality Standards Act, 261n35 Manhattan Project (Manhattan Engineer District), 32, 81, 85, 91, 92, 99, 100, 101, 102, 110, 166 Markowitz, Gerald, 262n3, 263n11 Marshall Islanders, 22, 38, 264n28 mastectomy: discussed in Natanson trial, 45–46; prophylactic, 223. See also under National Surgical Adjuvant Breast Project; radiotherapy Matusow, Harvey, 113 Mazzochi, Anthony, 263n11 McCarran Act (1950), 9–10, 12 McCarthy, Joseph, 7–8, 112–113 M. D. Anderson Hospital, 25, 63, 78, 87, 89, 93, 95, 97, 99, 103, 150, 233, 243n7 medical malpractice, early lawsuits, 42–43. See also under mammography; Natanson v. Kline medical X-rays, 170, 229; overuse of, 128, 155; as scapegoat in fallout controversies, 126–129, 193–194, 199; unregulated, 73, 153. See also X-rays Medicare, 122, 234; coverage of clinical trials, 217 military expenditures, postwar, 235n6 mind-body hypotheses in theories of cancer causation, 203–204 Monsanto, 67, 189, 266n46; and rBGH, 197 Morgan, Karl Z.: and dilemma of government scientists in fallout debate, 174; difficulties with ICRP, 153, 259n11; and Karen Silkwood case, 263n20 Mormon communities in Nevada, as downwinders, 115–116, 139
Morton, Rogers, 166 Muller, Herman J., 109 mutants in Cold War culture, 23 Nagasaki, 15, 22, 24, 35, 114, 117, 138, 170, 171, 176. See also Japanese bomb survivors Natanson, Edward, 35, 40 Natanson, Irma, vii, 1–2, 47, 48, 57, 58, 73, 104, 80, 245n39; appearance in court, 44; files lawsuit, 40; final illness and death, 56; initial treatment for cancer, 38; pain and suffering, 39; physical injuries, 40, 44, 54 Natanson v. Kline (Irma Natanson v. John R. Kline and St. Francis Hospital and School of Nursing, Inc.), 2, 154; appeal court reverses decision of lower court, 54; difficulties of proving injury from radiation, 45–46; response to, 57–58; reluctant to challenge medical authority, 53. See also informed consent Nation, The, 134 National Academy of Sciences, 150; report (1956) on effects of atomic radiation, 22, 23, 231 National Cancer Act (1971), 126, 166 National Cancer Institute (NCI), 5, 98, 99, 102, 111, 194, 207, 208, 216; applies “threshold dose” to design of “safe” cigarette, 182–183; co-sponsors BCDDP, 157; and mandated fallout study, 179; research budget compared with AEC’s, 104; sponsors diet/cancer research, 197, 208. See also under clinical trials; radium; thyroid cancer National Committee on Radiation Protection. See National Council on Radiation Protection National Council on Radiation Protection (NCRP), 73, 148, 149, 153, 229, 258n4, 273n51; highlights risks of CTs, 229; highlights risks of medical X-rays, 126–128; membership bias, 149, 150–151, 259n14; origins, 258n10; sets standards, 150, 152; silent on fallout hazards, 150; silent on mammography, 160; steers clear of regulating radiation in medicine, 152, 153
National Health Service (UK): and postwar cancer services, 64; as whipping boy for AMA, 122, 254n36 National Institutes of Health (NIH), 101, 209; initiates Consensus Development Program, 31 National Surgical Adjuvant Breast Project (NSABP), evaluates breast cancer treatment, 77, 219, 245n39, 270n19, 271n33 NCI. See National Cancer Institute NCRP. See National Council on Radiation Protection NEPA. See Nuclear Energy for the Propulsion of Aircraft Neumann, Jonathan, 270n13, 270n14 Nevada Test Site, 15, 131, 132, 139, 140, 142, 144, 193. See also atomic testing program New England Journal of Medicine, 141, 254n41; cancer studies reported in popular media, 112, 252n12 Newman, Marcy Jane Knopf, 235n3 Newsweek, 12, 36, 132 New York Times, 8, 11, 18, 36, 111, 113, 130, 220, 232, 252n12, 253n16; on fear of fallout, 232; runs five-part series on radiation, 170 NIH. See National Institutes of Health Nixon, Richard, 98; on individual responsibility for disease, 195; and war on cancer, 166 Nobel Peace Prize, 135 NRC. See Nuclear Regulatory Commission NSABP. See National Surgical Adjuvant Breast Project Nuclear Energy for the Propulsion of Aircraft (NEPA), 81–82, 95; cancelled, 95; lobbies for permission to conduct human radiation experiments, 84–85; recasts experiments as therapeutic studies, 86–87. See also under radiation experiments nuclear power plants: fears of, in 1960s, 165; resurgent interest in, 273n58 Nuclear Regulatory Commission (NRC), 152, 154, 228, 260n23 nuclear weapons testing, 6, 21, 22, 23, 28, 35, 132, 133, 135, 136, 139, 141, 164–165,
173, 175–176, 179, 180; legacy of, 179–180, 264n28; summarized, 262n5. See also Bravo test; fallout; Limited Test Ban Treaty Nuremberg Code, 83, 84 Oak Ridge Institute of Nuclear Studies (ORINS), 66, 70, 71, 236n15 Oak Ridge National Laboratory (ORNL), 66, 174, 240n8 Occupational Safety and Health: Act (1970), 166; Administration, 263n9 O’Leary, Hazel, 26 Oppenheimer, Robert, 119 ORINS. See Oak Ridge Institute of Nuclear Studies ORNL. See Oak Ridge National Laboratory Orwell, George (1984), 7 Pacific Islanders, 239n50 Paloncek, Frank P., 111 Parks, Rosa, 57 Parran, Thomas, 101, 184, 250n55 Patterson, James, 203 Pauling, Linus, 45, 115, 135, 256n11; petitions United Nations, 134 Peters, Lester, defends M. D. Anderson experiments, 103 Peto, Richard, 188 Petrakis, Nicholas, 163 pharmaceutical companies, as active players in clinical trials, 217–218 PHS. See Public Health Service physicians: benefit from government largesse, 73–74; and civil defense, 123; divided loyalties, 214–215; on government payroll, 120, 121; involved in Cold War experiments, 92–93 (see also Clark, R. Lee; Fletcher, Gilbert; Saenger, Eugene; Stone, Robert S.); missing from fallout debate, 74, 120–121; participation in clinical trials, 219; unwilling to testify against colleagues in court, 48–49. See also doctor/patient relationship; Natanson v. Kline; radiologists Physicians for Social Responsibility, 123, 254n41 Picker. See X-ray equipment manufacturers
Pickering, John E., 88 Pickstone, John, 64 Pollitt, Norma: as atomic witness, 175–176; interrogated in Allen trial, 195; and treatment ordeal, 264n22 polyvinyl chloride (PVC), 169 precautionary principle, 120, 186, 226–227, 266n42 prevention, 27, 28, 187; of heart disease (see Framingham Study); incentive to pursue, 221; privatization of, 200; prospects for, in a market economy; 224–227; recast as chemo-prevention, 222–223; as untried strategy, 207; See also precautionary principle Proctor, Robert, 257n15 prostate cancer, 221 public health, 124–125, 181, 183, 184, 220, 224, 227, 230; disappearance of nineteenth-century interventions, 186; successes of, 192, 198 Public Health Service (PHS), 85, 97, 99, 101, 149, 172, 182, 183, 184, 185, 186, 229, 262n1 Puck, Theodore, 132 PVC (polyvinyl chloride), 169 radiation audits, 16–17; advantages of, 230; first proposed, 231 radiation experiments: administered by air force and M. D. Anderson Hospital, 88–90; administered by Eugene Saenger in Cincinnati, 96–98, 249n44; exposed to public scrutiny, 97–98; impact on practice of cancer medicine, 101; investigated by Clinton committee (see Advisory Committee on Human Radiation Experiments); and later careers of participating physicians, 98–99, 102; parallels with fallout research, 145–146; retrospective evaluation of, 94–95, 103–104, 213 Radiation Exposure Compensation Act (RECA), 28, 176, 180, 249n44, 264n25; awards compared with those of EEOICPA, 264n28; and BEIR reports, 177; limitations of, 177–178 radiation exposures: estimates of engineered vs. spontaneous, 255n47; links with
cancer, 124, numbers subjected to, in Cold War initiatives, 239n48; occupational, 263n14; recent increase in, 228–229; units of measurement, 229, 247n20; unregulated, 127. See also under mammography; whole-body radiation radiation protection standards: and limited health concerns, 227; and permissible levels of exposure, 150; as recommended rather than mandated limits, 148. See also International Commission on Radiological Protection; National Council on Radiation Protection radioactive isotopes, 78: as byproducts of nuclear weapons program, 2–3, 32; uses of, 273n53. See also cobalt-60; iodine-131 radioactivity. See atomic energy radiologists: early turf battles with other specialists, 61, 128–129; face dilemmas of untried therapy, 47–48, 105; in high-level health policy positions during the Cold War, 102, 120, 127; lack heroic stature, 60–61; links with defense interests, 100–101; in medical malpractice suits, 162, 262n41; mount campaigns to overcome fear of X-rays, 129; outnumbered in Cold War skirmishes, 128; silent in fallout debate, 124. See also Gofman, John W.; Morgan, Karl Z. radiotherapy, 62: and excess cancers among patients treated for nonmalignant diseases, 136, 256n14; following radical mastectomy, 270n19, 271n33; increasing use of, 229. See also cobalt radiotherapy radium, 34, 35, 37, 70; greater availability in Europe, 63–64; limited use in cancer therapy, 65; and NCI loan program, 33, 244n21; poisoning of women dial painters, 113; and state-sponsored medicine, 64 Rallison Study, 142 Rapaport, Roger, 98 Rauscher, Frank, 195 Reader’s Digest, 127 RECA. See Radiation Exposure Compensation Act Relman, Arnold, 29 Report on Carcinogens (National Toxicology Program), 239n52
research: agenda, 27–28; budgets, 250n53, 251n60; independent, 142. See also Atomic Energy Commission; epidemiology; Knapp, Harold; Lyon, Joseph; secret radiation experiments; Weiss, Edward risk factors, 27–28, 184–185, 187, 206, 268n20; for breast cancer, 204–205 Roe v. Wade, 211 Rosner, David, 262n3, 263n11 Rothstein, William, 266n40, 268n16 Royal College of Physicians, 186 Saenger, Eugene, 102; government witness in Allen trial, 173; member of NCRP Committee, 149. See also radiation experiments Saffiotti, Umberto, 262n2 Sartorius, Otto, 163 Saskatchewan Cancer Clinic, 46 Scheele, Leonard, 102, 184, 250n55 Schloendorff v. The Society of New York Hospital, 51 Schroeder, Alfred G., 50–51, 52–53, 54, 242n14. See also informed consent secrecy, 68, 83–84, 110, 118; impedes scientific progress, 103–104; relaxation of, 94. See also compartmentalization Shapiro, Sam, 157 Shimkin, Michael, 208 shoe-fitting fluoroscope, 127 Shute, Neville (On the Beach), 23 Silent Spring. See Carson, Rachel Smith, Elaine, 207 smoking: as addiction, 183; as consumer choice, 182–183; impact on cancer prevention policy, 183–184; and links with cancer, 180–181 Solzhenitsyn, Alexandr, 10, 236n14 Sontag, Susan, 14, 237n30 Soviet Union (USSR), 7, 27 Sputnik, 139–140 Stans, Maurice, 12 Steingraber, Sandra, 235n3 Stewart, Alice, 259n20 Stevens, Albert, 238n45 Stone, Robert S., as member ICRP, 149, 150–151, 153 Strauss, Lewis, 117
Strax, Philip, 157 Subak-Sharpe, Genell, 261n35 surgeons, as stars of traditional medical histories, 61 survival rates. See cancer survival rates tamoxifen, 219, 272n43 Tamplin, Arthur, 250n51 Taylor, Lauriston, 148, 153, 231; and NCRP mindset, 149 Teller, Edward, 45, 134–135 Test Ban Treaty (1963). See Limited Test Ban Treaty Thomas, Lewis, 123 threshold dose, 137–138, 145, 263n14, 264n24; and “safe” cigarette, 182–183 thyroid cancer, 138, 171, 173, 256n14; estimated excess cases of, 180, 265n32; and NCI study, 179 Time, 240n13 total-body radiation. See whole-body radiation Truman, Harry, introduces loyalty program, 7 Tuskegee experiments, 89, 248n27 U.S. Air Force, 81, 82, 84, 85, 86, 88, 95, 104, 249n41 Upton, Arthur, 158, 160 veterans: and atomic exposures, 170–171, 173, 178, 263n17; in secret experiments, 103, 217; as volunteers in clinical trials, 217 Veterans Administration, 103, 171, 178, 217 victim blaming, 29; and breast cancer, 193; and Cold War, 193; long history of, 191–192; in policy response to smoking/cancer link, 181–182. See also under Allen; diet and cancer research; women Village Voice, 98 vinyl chloride, 164, 167, 169, 205 Vioxx, 270n22 Wachholz, Bruce, 179 Walker, J. Samuel, 155 Warren, Shields, 34, 66, 84–85, 101, 141, 147, 152; reassures physicians about hazards of fallout, 141
Warren, Stafford, 102, 114; admits hazards of radioactivity, 109; denies hazards of radioactivity, 114; and mammography, 157 Washington Post: exposes Cincinnati radiation experiments, 98; exposes coverup of fallout study, 169; investigates shortcomings of NCI trials, 214 Weart, Spencer, 12 Wechsler, James, 252n13 Weiss, Edward, 142, 170 Welsome, Eileen (The Plutonium Files), 25, 238n45 Westinghouse, 20, 71, 151, 240n8 Whittemore, Gilbert, 148, 273n51; on NEPA, 249n41 Williams, Terry Tempest, 116, 253n22 World Health Organization, 228, 269n3 whole-body radiation: first used, 82–83; levels administered in secret experiments, 88
women: and changing relationship with doctors, 211–212; as plaintiffs in early medical malpractice suits, 42; as postwar consumers, 3–5; and victim blaming, 193 Women’s Field Army, 161 X-rays, 17, 21, 25, 61, 65, 75, 200, 222, 231; as carcinogen, 239n52; fear of, 128; generated by electricity, 15–16, 63; implicated in early malpractice suits, 42; injuries, 42; risks of, 109; unregulated, 42, 100, 127; uses of, 42, 127, 128. See also medical X-rays; National Council on Radiation Protection; X-ray equipment manufacturers X-ray equipment manufacturers, 69, 70, 76, 78, 163; Canadian, 71; Kelley-Koett, 69, 240n8; Picker X-ray, 163, 240n8 Zerhouni, Elias, 101
About the Author
Ellen Leopold is the author of A Darker Ribbon: Breast Cancer, Women and Their Doctors in the Twentieth Century (Beacon Press, 1999). She has written about the politics of health care for The Nation, The American Prospect, and the Boston Globe, among others. She lives in Cambridge, Massachusetts.