Safe Enough?
The publisher and the University of California Press Foundation gratefully acknowledge the support of the Nuclear Regulatory Commission in making this book possible.
Safe Enough? A History of Nuclear Power and Accident Risk
Thomas R. Wellock
UNIVERSITY OF CALIFORNIA PRESS
University of California Press
Oakland, California

Published in 2021 by University of California Press in association with the US Nuclear Regulatory Commission (NRC).

Cataloging-in-Publication Data is on file at the Library of Congress.

ISBN 978-0-520-38115-5 (cloth : alk. paper)
ISBN 978-0-520-38116-2 (ebook)

Manufactured in the United States of America

29 28 27 26 25 24 23 22 21
10 9 8 7 6 5 4 3 2 1
For Sam
Contents
List of Illustrations / ix
Acknowledgments / xi
Preface / xiii
1. When Is a Reactor Safe? The Design Basis Accident / 1
2. The Design Basis in Crisis / 11
3. Beyond the Design Basis: The Reactor Safety Study / 40
4. Putting a Number on “Safe Enough” / 82
5. Beyond Design: Toward Risk-Informed Regulation / 102
6. Risk Assessment Beyond the NRC / 146
7. Risk-Informed Regulation and the Fukushima Accident / 188
Abbreviations / 221
Notes / 225
Bibliography / 287
Index / 331
Illustrations
1. Hanford Exclusion Areas / 6
2. Hanford N, K, and B Reactors / 9
3. Big Rock Point Consumers Power Plant, 1962 / 15
4. Bell Hot Water Heater Fault Tree / 22
5. Farmer Curve / 24
6. Browns Ferry Unit 1 Under Construction / 28
7. Boiling-Water Reactor (BWR) / 30
8. Pressurized-Water Reactor (PWR) / 30
9. Loss-of-Fluid Test / 38
10. Norman Rasmussen / 58
11. Saul Levine / 58
12. Sample PRA Flowchart / 61
13. Daniel Ford / 63
14. Henry Kendall / 63
15. Executive Summary—Man Caused Events / 68
16. Executive Summary—Natural Events / 69
17. Three Mile Island Control Room / 117
18. NRC-Russian Ceremony / 160
19. Linear No-Threshold Model / 176
20. Davis-Besse Vessel Head Diagrams / 196
21. Davis-Besse Vessel Head Erosion / 197
Acknowledgments
My career has spanned stints as a submarine reactor test engineer, an engineer at a commercial nuclear power plant, and fifteen years as a history professor. In the latter role, I wrote two books on the history of the antinuclear and environmental movements.1 It has been a privilege to serve an agency, the Nuclear Regulatory Commission (NRC), welcoming of my eclectic life experiences and committed to independent historical research. As such, this book reflects my experiences, views, and judgments alone. It is a product of my training as a historian. It does not represent in any way an official position of the NRC. I enjoyed complete independence in selecting this topic, as well as in researching and writing it. The NRC only expects that I meet the high standards of the history profession.

I am indebted to the advice and assistance of many people. Colleagues at the NRC, some now retired, read parts of the manuscript or simply offered useful perspectives, including Andy Bates, Gary Holahan, Nathan Siu, Don Marksberry, Cynthia Jones, Allen Hiser, Karen Henderson, Jack Ramsey, Jenny Weil, and Scott Morris. I am grateful to the generations of regulatory staff who eased my pursuit of history by being historians themselves. They kept an astonishing array of records and wrote their policy papers and technical reports with painstakingly chronicled background sections, a breadcrumb trail I could not have done without.

My colleagues in the Office of the Secretary have been a daily source of reinforcement, especially Rochelle Bavol, Rich Laufer, and Annette Vietti-Cook, the most supportive boss I could ask for. Kristy Remsburg has the heart of an archivist. Like the monks of medieval Ireland, she believes deeply in preserving records and never failed to help me find them. In the Office of Public Affairs, Ivonne Couret, Holly Harrington, and Eliot Brenner took me on as a special project, believing that if the NRC historian was going to communicate with the public, he ought to get better at it. They helped me write simply, develop video presentations, and do a credible job reading a teleprompter. Bebe Rhodes, Lee Wittenstein, Anne Goel, Zuberi Sardar, and Kathleen Dunsavage of the NRC technical library supported my numerous interlibrary loan requests and renewed my very overdue books without complaint. Woody Machalek and Eddie Colon guided me through the federal contracting system.

The NRC retiree community was a constant source of wisdom. Most of all, Sam Walker, my predecessor, was a source of inspiration and encouragement, and a careful reader of the manuscript. Tom Murley, a deep thinker and valuable resource, deserves special thanks. Others include Ashok Thadani, Frank Miraglia, Dennis Rathbun, and Roger Mattson. I am grateful to several who have since passed away, including Manning Muntzing, Marc Rowden, Norm Lauben, John Austin, William O. Doub, and Harold Denton.

At the University of California Press, Senior Editor Niels Hooper was enormously supportive. He and Robin Manley went the extra distance in getting the book into production. Steven Jenkins at the UC Press Foundation gamely navigated the complexities of the federal contracting system. I would also like to thank copyeditor Catherine Osborne and production editors Emilia Thiuri and Francisco Reinking.

This book is the sixth in a series of volumes on the history of nuclear regulation sponsored by the US Nuclear Regulatory Commission and published by the University of California Press. The first volume, Controlling the Atom: The Beginnings of Nuclear Regulation, 1946–1962 (1984), was coauthored by George T. Mazuzan and J. Samuel Walker. Walker was the author of the next four volumes, Containing the Atom: Nuclear Regulation in a Changing Environment, 1963–1971 (1992); Permissible Dose: A History of Radiation Protection in the Twentieth Century (2000); Three Mile Island: A Nuclear Crisis in Historical Perspective (2004); and The Road to Yucca Mountain: The Development of Radioactive Waste Policy in the United States (2009). Portions of chapters 1, 2, 3, and 6 of this book were previously published in Nuclear Technology, Technology and Culture, and History and Technology.2
Preface
In 1965, the “bandwagon market” for nuclear power finally took off. After a decade of struggle, there was a rush of orders for reactor plants that did not abate for eight years. For the nuclear industry and the US Atomic Energy Commission (AEC), these were hectic but rewarding times. By the late 1960s, utility companies contracted for power reactors at a rate of about two dozen a year. In the early 1970s, the orders for gigantic thousand-megawatt units doubled. Nuclear power was projected to take the lion’s share of new orders from coal plants—about 150 thousand-megawatt plants by 1980 and more than five hundred by 1990. The AEC’s regulatory division had a bumpy road to developing a licensing process amid a flood of construction permit applications. Each application raised new safety questions, but by the late 1960s, permits started to roll out with regularity, and the shortage of regulatory staff was the good kind of problem that came with a flourishing industry.1

Still, supporters of nuclear power were nervous. More plants meant more local opponents. In 1970, residents of Eugene, Oregon, voted to place a moratorium on their municipal electric utility’s nuclear power projects. National antinuclear organizations formed. Press coverage of the industry turned critical. Even typically apolitical magazines, such as Life and Sports Illustrated, put out exposés. There were brisk sales of books claiming the nation was playing “nuclear roulette.” AEC officials rushed from one controversy to another. Seismic hazards gave way to opposition to urban reactor sites, then thermal pollution drew unexpected criticism, followed by protests of routine low-level radioactive emissions. It did not stop there.
In early 1971, AEC Chairman Glenn Seaborg worried in his journal about a new strategy: “Anti-nuclear power forces seem to be shifting from low level radiation dangers to reactor safety.”2

The usual assurances of reactor safety by AEC officials stopped working. Public opinion was supportive of nuclear power, but the opposition was scoring points. Despite a substantial counteroffensive of educational material, radio spots, film presentations, and open debates with opponents, an aide to Seaborg concluded that “the AEC is in deep trouble with the public.” The atom’s advocates dismissed their critics as kooks and professional “stirrer-uppers” who spread “misinformation, half-truths, and hearsay,” but the new crop of opponents was smart and articulate. After one sponsored debate between AEC officials and critics, Commissioner Theos Thompson called the AEC’s performance an “utter disaster.” In another debate, audiences openly rooted for the critics, who scored what one industry weekly called a “technical knockout.” Reassurances that a major accident was “extremely unlikely” fell flat.3

In fact, the AEC did not know the true probability of a reactor disaster, and its claim that accidents were rare was tested by surprising safety problems. Research and operating experience raised questions about the effectiveness of a key reactor shutdown system and the Emergency Core Cooling System (ECCS). In early 1972, the AEC had to conduct a major hearing on ECCS safety that turned into an embarrassment. On cross-examination by antinuclear participants, AEC experts received poor marks. News reports claimed agency leadership had intimidated and demoted its dissenting staff witnesses.4

The public and Congress wanted the AEC to regain credibility and provide certainty that reactors were safe. The AEC tried to reassure the public by answering what had been so far an unanswerable technical question: What is the probability of a major reactor accident? It was a tall order. How could engineers quantify the probability of an accident that had never happened in a technology as complex as nuclear power? Any study that offered an answer was sure to be expensive, complex, and, if wrong, a fiasco. “Do we dare undertake such a study till we really know how?” wondered the AEC’s leading technical advisor, Stephen Hanauer.5 He already knew the answer. In March 1972, he met with Norman Rasmussen, an engineering professor at the Massachusetts Institute of Technology (MIT), and won from him a commitment to lead the AEC’s accident study, a first-of-its-kind probabilistic risk assessment (PRA).
Rasmussen’s job was to develop a “figure of merit” on accident risk—the product of an accident’s probability and its consequences—presented in terms easily understood and convincing to the public. It became a three-year, multimillion-dollar report, titled the Reactor Safety Study, typically called the “Rasmussen Report” or “WASH-1400” for its report number.6

Hanauer’s question touched on an uncomfortable reality. The AEC was risking its technical reputation on a study with important public policy implications that no one yet knew how to do. All previous estimates of accident probability had been expert guesswork or calculations that often produced absurd results. “Unfortunately, the probabilities [of accidents] have so far been extremely difficult to predict,” Hanauer once confessed, “whereas the worst possible consequences are all too readily predictable.”7

The solution proposed by Rasmussen was to calculate the probabilities for chains of safety-component failures and other factors necessary to produce a disaster. The task was mind-boggling. A nuclear power plant’s approximately twenty thousand safety components have a Rube Goldberg quality. Like dominoes, numerous pumps, valves, and switches must operate in the required sequence simply to pump cooling water or shut down the plant. There were innumerable unlikely combinations of failures that could cause an accident. Calculating each failure chain and aggregating their probabilities into one number required thousands of hours of labor. On the bright side, advancements in computing power, better data, and “fault-tree” analytical methodology had made the task feasible. But the potential for error was vast, as was the uncertainty over whether the final estimate could capture all important paths to failure. If the estimate did not withstand scrutiny, the subsequent humiliation could have grave political implications. The agency was already criticized for a pronuclear bias, and there was open congressional discussion about splitting it up. Nevertheless, it took the risk.

Forty years on, the daring that so worried Hanauer has paid off. The AEC did not survive, but WASH-1400 did. At first, the report suffered heavy criticisms of its calculations and an embarrassing partial rejection in 1979 by the AEC’s regulatory replacement, the Nuclear Regulatory Commission (NRC). Its prescience in identifying some of the safety weaknesses that led to the Three Mile Island accident rehabilitated it. A disaster at a chemical plant in Bhopal, India, and the Challenger space shuttle explosion suggested WASH-1400 had non-nuclear applications, too. WASH-1400 set in motion what one expert described as a “paradigm shift” to an “entirely new way of thinking about reactor safety . . . and regulatory processes.” With new ways of thinking about risk came new regulations.
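The arithmetic at the core of such a study is easy to state, however hard it proved to execute. A minimal sketch of the sequence logic, using invented failure probabilities and an invented consequence figure in place of the thousands of sequences a real PRA models, might look like this:

```python
# Sketch of a WASH-1400-style "figure of merit": risk is the sum over
# accident sequences of (sequence probability x consequence).
# Every number below is hypothetical, chosen only to show the mechanics.

def and_gate(*probs):
    # All independent failures must occur: probabilities multiply.
    product = 1.0
    for p in probs:
        product *= p
    return product

def or_gate(*probs):
    # Any one failure suffices: one minus the chance that none occurs.
    none_fail = 1.0
    for p in probs:
        none_fail *= 1.0 - p
    return 1.0 - none_fail

pipe_break = 1e-4                 # initiating event, per year
cooling_fails = or_gate(
    and_gate(1e-2, 1e-2),         # both redundant pumps fail
    1e-3,                         # shared power supply fails
)
containment_fails = 1e-2          # last barrier breached

sequence_probability = and_gate(pipe_break, cooling_fails, containment_fails)
consequence = 1000.0              # stand-in consequence measure

print(f"sequence probability per year: {sequence_probability:.1e}")
print(f"risk contribution: {sequence_probability * consequence:.1e}")
```

Every multiplication in such a chain demands a defensible failure rate, which is one reason the study consumed years of labor and why critics could later attack it number by number.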
PRA made possible the NRC’s conversion toward risk-informed regulation, which has influenced international safety regulation.8

WASH-1400 is the acknowledged urtext of probabilistic risk assessment, serving as a catalyst to the fields of risk analysis and risk management. It opened for study the risk of rare and poorly understood technological disasters. Insurance actuaries could assess the risk of common misfortunes—airplane crashes, auto accidents, and heart attacks. The Rasmussen Report made knowable the unknown risks of rare catastrophic events and shaped how a modern “risk society” evaluates and manages them.9 Much work has been done to improve WASH-1400’s weaknesses, but its original methodological framework survives. Engineering histories celebrate the Report’s technical origins and accomplishments, although they tend to overlook its political controversies and the unfulfilled ambitions its supporters had for it.10

PRA became an essential tool in evaluating the hazards of numerous technologies and substances. Rasmussen’s disciples have colonized other government agencies and the aircraft, aerospace, and chemical industries; PRA has been applied to dam safety analysis, medicine, medical devices, and even bioterrorist risk assessments. By the early 1980s, risk quantification and analysis had come to dominate government regulation and policymaking in environmental and health agencies.11

There has been limited historical research on probabilistic risk assessment in nuclear power or any other application. While PRA’s influence has been broad, its history is deepest in nuclear power. Many of the critical methodological advancements of PRA were made by the AEC, NRC, and the nuclear industry. Other technical fields such as aerospace made important contributions, but at times abandoned it after discouraging results. Nuclear experts persisted. They brought probabilistic risk assessment to life and applied it to safety regulation. This history focuses on their work. I detail the endeavor to establish a figure of merit for nuclear reactor risk from its origins in the late 1940s through the 2011 accident at the Fukushima Daiichi facility in Japan.12

Since the beginning of the Atomic Age, assessing accident risk has been the central task in determining when a reactor was “safe enough” for the public. The seven-decade pursuit of risk assessment provides the narrative framework for this exploration of how the very notion of safety evolved through the forces of political change and technical discovery. Over the decades, a safety philosophy rooted in descriptive qualitative risk assessment and expert judgment made room for quantitative risk assessments with greater use of numerical estimates of the probabilities and consequences of reactor mishaps.
The motives behind this shift were multifaceted. The engineers in the AEC and NRC bureaucracy responded to external pressure from the nuclear industry and the public in ways familiar from other histories of technical bureaucracies. Like Theodore Porter’s cost-benefit experts in the Army Corps of Engineers, nuclear engineers placed their “trust in numbers” to bring stability to their regulatory activities and control the play of politics. In supplementing qualitative engineering judgment with PRA-based regulations, quantification aimed to reduce public conflict and interagency disputes. It was “a strategy of impersonality” that helped bureaucrats ground their decisions in purportedly objective measures of safety.13

From the beginning of the Cold War, there was also an internal drive to quantify risk inherent in engineering practice, particularly when new safety questions emerged. The quantitative impulse started while AEC engineers were insulated from public scrutiny at the nation’s top-secret weapons production reactors. They turned to probabilistic methods as they became aware of new unanalyzed accident scenarios that, while unlikely, carried potentially catastrophic consequences. Risk quantification could allow them to meaningfully evaluate a hazard and measure the value of safety upgrades. Quantification placed a reactor’s unconventional hazards into the perspective of conventional risks engineers understood. It might answer a simple question: “How safe is ‘safe enough’?” Burdened by primitive computers, undeveloped models, and limited data, their early efforts were mostly unsuccessful, and the AEC made a virtue of its qualitative assessments of risk as a conservative, reliable approach to safety. Nevertheless, research on risk assessment continued because it offered an intellectual technology to compensate for and discover weaknesses in a physical technology.

Interest in the quantification of risk followed nuclear energy from an era of production reactors to commercial power reactors and from almost complete secrecy to unprecedented openness. The shift began with President Dwight Eisenhower’s “Atoms for Peace” address to the United Nations in 1953 and the passage of the Atomic Energy Act of 1954. With nuclear power no longer stamped “secret,” risk quantification followed commercial plants into an increasingly polarized debate over plant safety. The impulse to quantify the risk of emergent safety issues at production reactors repeated in the public world of the peaceful atom. Internal and external motivators provided a mutually reinforcing rationale for the AEC to launch WASH-1400 to resolve safety issues and allay public concern.
Although a valuable analytical tool, WASH-1400 proved to be an imperfect regulatory and political weapon. Skeptics inside and outside the industry were dubious of its methodology, its accuracy, and the motives of the experts who promoted it. Nuclear power had an enviable safety record established by conservative design margins, engineering judgment, and qualitative tools, and some regulators believed PRA was a risky departure from a proven safety philosophy. Critics outside the industry feared it might bestow tremendous political power on its practitioners by excluding the public from a highly technical debate. They subjected the study to withering attack, arguing its quantitative estimates were too inaccurate to permit their use in policy or regulatory decisions.

The WASH-1400 debate was a setback for PRA, but these critiques played a consequential role in its success as a regulatory tool. Experts responded by making PRA models more powerful and drawing on insights from multiple technical and social-science fields to express numerically the flaws in hardware, human performance, and organizational culture. The intellectual technology of PRA, it was hoped, could adapt to the modern understanding of nuclear power plant operations as a complex sociotechnical system. From the 1980s through the Fukushima accident, the NRC sought what had eluded the AEC: “risk-informed” regulation that was safe, cost-effective, transparent, and publicly acceptable.

Safe Enough? offers a broad account of the successes and limitations of risk assessment in regulation in response to technical developments, reactor mishaps, the economic travails of the nuclear industry, and political controversies. The reach of risk assessment extended beyond US reactor regulation. I include a chapter on the participation of the NRC and nuclear-industry risk experts in the application of quantified risk assessment methodology in three new areas. Risk assessment had mixed success when the NRC worked with the Environmental Protection Agency (EPA) and the National Aeronautics and Space Administration (NASA) to apply quantitative risk assessment to land contamination policy and the safety of the nation’s space program. In negotiations with the EPA, putting a number on risk backfired. A minor disagreement over low-level radiation exposures turned into a decade-long bureaucratic conflict over tiny differences in risk estimates. NASA, however, came to embrace risk assessment after the Challenger and Columbia accidents overcame internal resistance. Risk assessment also won success in the international arena as a technical and diplomatic tool in evaluating the safety of Soviet-designed nuclear power plants and smoothing the integration of safety regulation approaches among former Communist-bloc nations, the European Union, and the international nuclear community.
The book closes with a chapter on how risk assessment has fared in the twenty-first century. New safety issues, changes to the electric power industry, the shocks of the September 11, 2001, terrorist attacks, and the accident at the Fukushima nuclear power station in Japan buffeted PRA development as the NRC devoted thousands of hours to risk-informing regulations. Many wondered if it was worth the effort. Even after Fukushima raised a host of new questions about PRA, the NRC’s answer is still “yes.” Its supporters believe it is the best tool for balancing robust safety standards and regulatory stability with demands for operational efficiency in a competitive power market. Consistently, regulators have turned to it when unmoored by unpredictable events, accidents, demands for efficient regulation, and criticisms of safety standards. It beckons as an attractive technical solution to political and regulatory problems. Often it has succeeded.

Often, too, quantifying risk has disappointed its supporters. By distilling the risk of nuclear reactors into easy-to-grasp and purportedly objective numbers, engineers expected a well-crafted PRA would serve safety and allay public concerns. They were surprised at the intractability of the task: their numbers were sometimes fraught with errors; they sharpened disputes, revealed new safety problems, and did not consistently persuade the public that nuclear power was “safe enough.” But they kept at it. This history explains why they persisted and what they accomplished.
1
When is a Reactor Safe? The Design Basis Accident
For the first twenty-five years of the Atomic Age, engineers and technicians operated reactors uncertain of the probability of a major accident. Automobile and aircraft safety regulations grew from the grisly accumulation of accident data, but there had been no reactor accidents and, thus, no data. Nuclear experts constructed an alternative safety approach. From the start-up of the first wartime plutonium production reactors at the Hanford Engineering Works in Eastern Washington State, safety assurance relied on the “Three Ds,” as I will call them—Determinism, Design Basis Accidents, and Defense in Depth. They relied not on determining and limiting the known probabilities of accidents, but on imagining far-fetched catastrophes and developing conservative designs.

The first D—deterministic design—differed from probabilistic safety in the way it addressed three safety questions that came to be known as the “risk triplet”: 1) What can go wrong? 2) How likely is it to go wrong? 3) What are the consequences? In brief, what are the possibilities, probabilities, and consequences of reactor accidents? A probabilistic design had to address all three questions for a broad range of accidents.1 With no history of reactor accidents, nuclear engineers could not answer question 2 except in a qualitative way, by subjectively judging that some accidents were “incredible” and not worth considering, such as a meteor striking a reactor, though even that remote probability was estimated in the 1970s.2 Worst-case thinking was a mainstay of reactor safety.

Deterministic design compensated for this ignorance of probabilities by addressing questions 1 and 3 in a very conservative way. For question 1, engineers developed “imaginatively postulated” or “stylized” accidents judged to be extreme but credible that would result in “the most hazardous release of fission products.” They further calculated the consequences of question 3 by assuming pessimistic conditions during an accident, such as weather conditions that might concentrate an escaping radiation cloud over a nearby population center. Terms for these accidents circulated without careful definition in defense and civilian reactor applications. Terms like Maximum Hypothetical Accident and Maximum Probable Incident were similar to Maximum Probable Flood, used previously by flood control engineers. The winner, Maximum Credible Accident (MCA), gained common usage in the late 1950s.3 A decade later the AEC switched again, to Design Basis Accident (DBA), a name that captured the purpose of these accidents as safety design standards. DBA remains in use today, and the term will be used throughout this book.4

Reactor designers analyzed DBAs to “determine” the safety features necessary to prevent these extreme accidents or mitigate their consequences. Typically, designers used a combination of qualitative factors, such as remote reactor siting, careful system design to ensure there was enough component redundancy (for example, backup pumps and power supplies), and extra margins of material strength and quality. This deterministic design approach to a few DBAs, engineers reasoned, would cover many lesser accidents, too. It set a conservative outer boundary of safety that spared designers from exploring the many paths to failure in complex reactor systems.

When the DuPont Corporation designed the plutonium production reactors at Hanford during World War II, the design basis accident was extreme and simple: an explosive reactor power surge that spread radioactivity about 1.5 miles away. Protecting the public was also simple: isolation. The first reactors were spaced several miles apart and well inside Hanford’s expansive borders, out among the sagebrush and rattlesnakes of eastern Washington. The reactors and workers were protected with shielding and redundant shutdown and cooling systems. A deterministic approach aligned with DuPont’s chemical engineering culture, stressing large safety design margins.5
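The contrast between the two philosophies can be put schematically. In my notation, not the period’s: a probabilistic design weighs every accident sequence i by its probability, while a deterministic design bounds the problem with the consequences of a few postulated worst cases and generous margins:

\[
\mathrm{Risk}_{\text{probabilistic}} = \sum_i p_i \, c_i ,
\qquad
\mathrm{Risk}_{\text{deterministic}} \le \max_{i \in \mathrm{DBAs}} c_i .
\]

The deterministic designer never needs the individual probabilities p_i; he needs only confidence that the chosen design basis accidents bound everything credible.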
THE REACTOR SAFEGUARD COMMITTEE
The concept of defense in depth—the second D of safety—was also articulated early in the postwar period by the Reactor Safeguard Committee, an AEC panel of eminent experts chaired by physicist Edward Teller. The committee’s task was to analyze the design safety of existing and proposed reactors at AEC and contractor facilities. Teller held nuclear safety to a high standard. He and the committee worried about the impact of an accident on public opinion and sought to make reactors safer than conventional technologies. The committee enjoyed a reputation for excessive caution. “The committee was about as popular—and also as necessary—as a traffic cop,” Teller recalled. The committee was unpopular but influential, and its judgments carried weight.6

In 1949, it spelled out its understanding of reactor hazards and safety in a report with the AEC identifier WASH-3 (WASH stood for Washington, DC, the AEC’s headquarters). Although the term “defense in depth” did not come into usage for another decade, WASH-3 contained its key elements. From the physical properties of their fuel to shutdown systems, emergency pumps, auxiliary power, shielding, and location, AEC reactors were to be designed with multiple lines of defense to prevent an accident or mitigate its consequences. While all the lines of defense were important, the Safeguard Committee believed some were more reliable and important than others. For example, the committee favored designs with “inherent” safety features that could make certain accidents nearly impossible. Inherent features were self-correcting mechanisms built into the plant’s physical properties, such as a reactor fuel with a “negative coefficient of reactivity.” If power rose in a reactor with a negative coefficient, the extra heat generated naturally slowed down the chain reaction by reducing the neutrons available to split atoms. As a reactor started up and rose to operating temperature, reactor operators worked to keep a chain reaction going by turning a “shim switch” that pulled neutron-absorbing control rods further out of the fuel. Keeping a reaction going was hard, but safe, work. By contrast, a positive coefficient meant a reactor had its own gas pedal. Once power and temperature started rising, the reaction fed itself, creating more and more neutrons and fissions until there was a “runaway,” or even an explosion similar in force to what might happen at a chemical plant, as later happened at the Soviet Union’s Chernobyl power plant in 1986. Operators or automatic shutdown systems had to insert the control rods to keep the reactor under control.
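A toy calculation shows why the sign of that single coefficient mattered so much to the committee. The numbers here are invented solely to display the direction of the feedback, not to model real reactor physics:

```python
# Toy feedback loop: reactivity = an initial perturbation plus
# (temperature coefficient x temperature). A negative coefficient damps
# a power rise; a positive one amplifies it into a runaway.

def run(coefficient, steps=30):
    power, temperature = 1.0, 0.0
    for _ in range(steps):
        reactivity = 0.05 + coefficient * temperature
        power *= 1.0 + reactivity
        temperature += 0.1 * power  # heat-up roughly tracks power
    return power

print(run(-0.05))  # negative coefficient: the power rise levels off and reverses
print(run(+0.05))  # positive coefficient: the rise feeds on itself
```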
It was no coincidence that the Hanford reactors with their positive coefficients were sited in remote Eastern Washington.7

Defense in depth consisted of other, less reliable lines of defense that offered compensating safety advantages. Physical or “static” barriers such as shielding and airtight containment buildings could be important for runaways or coolant leaks. Static barriers were highly reliable if not perfect. Least reliable were “active” safety systems, such as emergency cooling systems or the reactor “scram” system that shut down a reactor by inserting control rods into the fuel. Active systems could quickly bring a troubled reactor under control. But the committee warned that “such systems are liable to failure in operation.” Pumps had to start, relays had to actuate, switches could not jam, valves had to close, and operators could not make mistakes. Yet some of those failures were almost certain to happen in a plant’s lifetime. The varied advantages of inherent, static, and active systems forced the AEC to rely on all layers together, some slow but certain, others fast but a bit fickle.8

The committee established a general priority for the lines in defense in depth that would not be seriously questioned for the next fifteen years: (1) isolation and inherent features; (2) static barriers; and (3) active systems. Not all reactor designs had the ideal arrangement of defense in depth, and each line was supposed to compensate for the weaknesses of the others. The Hanford reactors had a positive coefficient, but their isolated location provided acceptable safety given the Cold War need for their plutonium.

The days of safety certainty at Hanford were brief. After World War II, General Electric Company (GE) took over Hanford’s management from DuPont. The reactors were worse for the wear of wartime production. DuPont’s own internal history of Hanford observed that “the production facilities at Hanford that DuPont turned over to General Electric had major operational problems” so severe that it expected them to have short production lives.9 GE concluded the probability and consequences of an accident were growing. It was understood that Hanford reactors had positive reactivity coefficients, but the aging graphite bricks that surrounded the uranium fuel created a new problem. By 1948, a Hanford supervisor wrote, the “appalling prospect” of a runaway from the bricks’ stored energy “is immediately conceivable.”10 It was possible the heat could be inadvertently released to the fuel, and the positive reactivity coefficient could cause a runaway.
While operational changes and further research by GE later reduced concern about this problem, the Safeguard Committee worried that a runaway was credible, and radiation could be “suddenly released in a single catastrophe.”11 It prodded Hanford staff to study a range of conceivable runaway initiators, such as sabotage, earthquakes, and even the almost inconceivable failure of the Grand Coulee Dam upstream on the Columbia River.12 The committee also believed the consequences of a runaway were “far more disastrous” than wartime estimates. Fission product contamination from isotopes of iodine and strontium was likely to spread well beyond the original 1.5-mile radius.13

At the Hanford reactors, the Safeguard Committee concluded the existing isolation standard was insufficient. It recommended expanding the exclusion radius around the reactors to about five miles, but that pushed it outside Hanford’s boundaries and encompassed small communities nearby. Worse, the exclusion area would have to grow larger as the AEC responded to Cold War tensions by building even larger reactors and raising power levels of existing ones. Lacking containment buildings, Hanford reactors also had a weak second line of defense. Public safety depended on active systems, the least reliable line in defense in depth. The Safeguard Committee thought a safer reactor design was possible, but “the present Hanford type pile is definitely not in this category.”14 Even their designer, DuPont, agreed that Hanford’s reactors were less safe than the new heavy-water production reactors it was building at Savannah River, South Carolina. The latter reactors, DuPont bragged, enjoyed greater inherent safety. Even a decade later in the early 1960s, Hanford engineers admitted that it was “obvious that the Hanford reactor safety systems cannot measure up” to current standards of safety.15

FIGURE 1. In 1949, the Reactor Safeguard Committee developed a formula for the isolation distance needed to protect the public from a reactor accident, based on the square root of its power output. In this 1952 illustration, engineers applied the formula to Hanford’s reactors, drawing isolation boundary circles around each reactor. It showed that Hanford’s property line—the stair-stepped line north of the Columbia River—was too small. Lacking proof of isolation safety, Hanford’s staff attempted to calculate the probability of catastrophic accidents. Source: US AEC/DOE (G. M. Roy, E. L. Armstrong, G. L. Locke, and C. Sege, “Summary Report of Reactor Hazards for Twin 100-K Area Reactors, HW-25892,” October 10, 1951, D8442387, p. 116, DOE PRRC).

Shuttering Hanford, however, was unthinkable while the Korean War and tensions with Russia kept the nation on a wartime footing. The expense of plutonium production made impracticable costly redesign that had unproven value. The Safeguard Committee implored GE to find creative ways to make Hanford reactors safe enough to operate.16

But how safe was “safe enough”? Ideally, the answer required risk quantification—the product of accident probabilities and consequences. GE already had conceived of several worst-case scenarios. For Question 3—consequences—they had the benefit of data from weapons testing and Hanford’s secret “green run,” where, for over six hours, the facility released and monitored the dispersion of fission products from very radioactive fuel, including about four thousand curies of the dangerous isotope Iodine-131. By comparison, the accident at Three Mile Island released less than twenty curies of Iodine-131, while the Fukushima accident released 47 million curies. GE plugged the green-run data into a study of the consequences of a reactor disaster at Hanford’s proposed K reactors. Herbert Parker, director of the radiological sciences department, noted the “remarkable agreement” among the various accident scenarios considered. Whether the release of radiation was a sudden explosion or from a slow meltdown, the accident would not kill more than 250 civilians. Property damage estimates of $600 million were surprisingly high for a rural location. Satisfied with the results, Parker concluded further research on consequences was unnecessary. The next step should “be to establish more closely whether a disaster can occur at all, and if so, what is the relative probability of [radiation] release.”17 In 1956, Parker and another Hanford expert, J. W. Healy, presented a declassified version of their “theoretical” findings on accident consequences at an international conference, and their work contributed to later studies of civilian power plant disasters.18

The probabilities Parker sought were a difficult challenge. Experts were not sanguine that reasonable calculations were even feasible. Without quantification, probability estimates could only be stated in an unsatisfying qualitative way, such as saying that the probability of accident was “low” or “remote.” The Safeguard Committee confessed frustration that probabilities could not inform a safety philosophy:

Yet probability is the usual measure of safety of such operations as airline travel, as described for example in fatalities per hundred million passenger miles. If a certain reactor were estimated to have a chance of one in a hundred of blowing up in the course of a year, certainly much thought, energy, and money would quickly be devoted to decreasing this evidently great hazard. On the other hand, a probability of disaster as low as one chance in a million per year might be considered to impose on populations outside the control area a potential hazard small compared to that due to flood, earthquake, and fire. Where the actual accident probability lies in relation to these two extreme figures is thus an important question, with definite practical consequences.19
One in a million—10⁻⁶ written as an exponent—became a timeless, intuitively appealing safety threshold. Looking back several decades later, one NRC regulator noted that 10⁻⁶ started as a “saw used to illustrate a low level of risk,” but gained “the appearance of gospel” as an acceptable standard of safety. The gospel of 10⁻⁶ was soon preached in many lands. From rocket launches to DDT, experts often invoked a probability around 10⁻⁶ when they subjectively guessed an acceptable risk of death.20
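A back-of-the-envelope multiplication shows why the committee’s two bounding figures felt so different. For a fleet of one hundred reactors operating for forty years (illustrative round numbers of mine, not the committee’s), the expected number of disasters is the product of fleet size, years, and annual probability:

\[
100 \times 40 \times 10^{-2} = 40 ,
\qquad
100 \times 40 \times 10^{-6} = 0.004 .
\]

At one chance in a hundred per year, disasters would be routine; at one in a million, a disaster would probably never be seen at all.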
Nuclear experts used probabilistic estimates in the rare instances where they could find data and looked for engineered certainty when they could not. In 1953, the Safeguard Committee was combined with another advisory committee and renamed the Advisory Committee on Reactor Safeguards (ACRS). The ACRS pressed GE to develop “foolproof” safety features that made runaways “impossible.” GE considered some options but found no magic safety device.21 The committee’s unease was evident in its sparring with GE Hanford staff. GE claimed design enhancements made an accident from sabotage, earthquakes, and Grand Coulee flooding at its new large K reactors “improbable.” The ACRS was unsatisfied with improbability. Through the 1950s, it grew more alarmed as reactor power output increased and hazards became “progressively more serious.” Hanford reactors posed “a degree of risk which, in the opinion of the Committee is greater than in any other existing reactor plant.”22

THE PROBABILITY OF DISASTER?
Proving to the ACRS that Hanford was safe enough, GE staff recognized, depended on a novel probabilistic proof that active safety systems were reliable. In 1953, statisticians at Hanford proposed doing the first-ever probabilistic risk assessment, with a bottom-up methodology to calculate the “probability of disaster” through an analysis of accident chains. A disaster, they reasoned, was the culmination of small malfunctions and mistakes. “While there have been no disasters, there have been incidents which, in the absence of mechanical safety devices and/or the alertness of other personnel, could have led to disasters. . . . A disaster will consist of a chain of events. It may be possible to evaluate more specifically the individual probabilities in the chain, and then amalgamate these results to obtain the probability desired.”23 Accident-chain analysis became a fundamental component of later risk assessments.

GE’s first foray into quantified risk assessment proved “disappointing” for reasons familiar to risk experts years later—inadequate data and an inability to model accident complexity. Data on Hanford component failures were not adequate for probabilistic analysis, and the paths to disaster seemed infinite.24 GE abandoned its ambition to totalize risk, but it persisted with a more modest probabilistic goal of estimating component and system reliability. As an electrical engineering company, it brought to Hanford a systems engineering approach that was more sophisticated than the “cut-and-try” methods that characterized the DuPont era.

FIGURE 2. In this aerial view of the Hanford reservation and Columbia River, three generations of plutonium production reactors are visible, as is their key safety feature—isolation from population centers. The N Reactor (1963–87) is in the foreground; the plumes of the twin KE/KW Reactors (1955–71) are visible in the center. Upstream in the upper right is the historic B Reactor (1944–68). While the B Reactor was built according to deterministic design principles, the succeeding generations made greater use of probabilistic methods. Source: US AEC/DOE Flickr, Image N1D0069267.
By the 1950s, reliability engineering had become a recognized profession. It had developed in the 1930s in the electrical and aircraft industries. The fragile nature of vacuum tubes, a critical innovation of World War II electronics, made reliability and systems analysis second nature to GE. Statistical reliability and reliability prediction methods became popular after the war, and electrical engineering committees worked with the Department of Defense to establish reliability engineering as a formal discipline in 1957. Reliability studies focus on the probability that a component or system will perform its intended function over a given time interval, and they provide essential data for broader risk studies.25

Hanford engineers faced a reliability problem more complex than tube failure rates, and they moved beyond the simple component reliability studies common at the time.
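The discipline’s basic quantity can be written compactly. In the standard textbook form (not a formula quoted from Hanford documents), a component with a constant failure rate \(\lambda\) is still working at time \(t\) with probability

\[
R(t) = e^{-\lambda t}, \qquad \mathrm{MTBF} = \frac{1}{\lambda} ,
\]

so a goal such as one failure in a million years corresponds to \(\lambda = 10^{-6}\) per year and a one-year failure probability of \(1 - e^{-10^{-6}} \approx 10^{-6}\).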
By the end of the decade, they were analyzing safety system reliability and developing quantified system reliability goals, such as “one system failure in a million years of operation.” Accident probabilities also appeared in their design of a new Hanford reactor, where they concluded that the “Maximum Probable Incident” was a small pipe rupture that might happen once in two thousand years of operation. The ability of a probabilistic approach to identify and prioritize the most important safety improvements, GE insisted, increased reactor safety without the expensive and impracticable design changes sought by the ACRS. Hanford reactors were an experiment in supplementing qualitative safety goals with quantified reliability measures, an approach that kept the risk of a reactivity accident acceptably low. GE’s probabilistic inclinations influenced its later safety approach to civilian nuclear power plants.26

Of necessity, GE’s quantified strategy made it a proponent of probabilistic risk assessment as more transparent than the qualitative Three Ds that relied on expert judgment. Quantified risk assessment, a Hanford engineer argued, exposed “the framework of risk decisions to enable critical review by any interested party.”27 But as the civilian nuclear power industry took off in the 1960s, GE’s probabilistic methods had not coalesced into a reliable estimate of risk. In 1964, a GE Hanford staffer admitted, “considerable effort has been expended over the past ten years in trying to develop a failure model which would make use of minor incident statistics which would, through appropriate combinations, culminate in a major type incident. These studies did not prove successful.” Commercial plants had to adhere to qualitative safety rooted in the Three Ds.28

Nevertheless, as GE became a leading civilian reactor vendor, it became the primary advocate of quantitative safety. GE’s probabilistic turn and the rise of PRA in nuclear energy have been attributed to its quantification-friendly electrical engineering culture that displaced DuPont’s chemical engineers. Cultural explanations need to consider technical context. It was not so much a different engineering culture as Hanford’s unique engineering problems that compelled GE to move beyond deterministic safety. New problems in civilian reactors would do the same.29
2
The Design Basis in Crisis
As the commercial nuclear industry grew in the late 1950s, AEC regulators and the ACRS applied the safety lessons of production reactors to civilian nuclear power plants to avoid the hazards of Hanford’s design in the civilian world. Reactors were almost always expected to have the inherent safety of negative temperature coefficients. Having addressed the threat of reactor runaways, the design basis accident that occupied designers and regulators was a loss-of-coolant accident—a major leak of the reactor coolant system—caused by a large pipe break. Defense in depth required three reliable physical barriers to prevent the escape of radioactive steam, including robust containment buildings.

A simple application of safety principles was complicated in practice. There was much expected of nuclear power, particularly after Dwight Eisenhower launched his Atoms for Peace program in 1953. The Atomic Energy Act of 1954 burdened the AEC with a conflicted “dual mandate” to promote nuclear power and regulate its safety. It was, as one critic of the law noted, a “schizophrenia which is inflicted upon the AEC by law.”1 It simultaneously pulled the regulatory staff toward more and less regulation. Maintaining a neat line between promotion and regulation was difficult. The small regulatory staff under Harold Price, the Director of Regulation, served as design reviewers, inspectors, hearings examiners, and enforcers. The key technical knowledge of reactor safety, however, resided in research and development laboratories on the AEC’s promotional side. Regulators often had to consult with other AEC divisions that actively promoted nuclear power with subsidies, educational media, and development research at AEC national laboratories.
Even research done in the name of reactor safety came from the promotional Division of Reactor Development. The AEC tried to maximize the independence of the regulatory staff. Price reported directly to the AEC’s five commissioners rather than the general manager. His staff occupied offices in Bethesda, Maryland, separated from the AEC’s large complex fifteen miles up the road in Germantown. The AEC also established two levels of licensing hearing panels to insulate licensing rulings from commission influence. Nevertheless, the staff worked under AEC commissioners who were usually pronuclear. It did not help that the regulatory division was such an agency backwater that some commissioners opposed it being split off into an independent regulatory commission because they feared its staff lacked the talent to go it alone.2

Safety regulations reflected the tension of the dual mandate. To stimulate industry initiative and creativity, early regulations were broad and limited. There were few hard-and-fast safety criteria, design guidelines, and rules for reviewing applications. The Atomic Energy Act of 1954 and related regulations spelled out a flexible standard of safety: that there be reasonable assurance of “adequate protection” to the public.3 Providing adequate protection against “credible accidents” indicated the AEC had to judge in a qualitative way whether accident risk was low. The 1954 act’s flexible language of “adequate” safety implied a “good enough” attitude toward safety. Yet the AEC pursued a standard of safety it believed was higher than that of other technologies. As the ACRS explained in 1960, nuclear power was not yet an economic necessity, and the dearth of accident probability data argued against taking conventional risks with a still unconventional technology. Conservative safety margins (that is, designs that had more than enough capacity to handle a design basis accident) provided superior safety and compensated for limited quantitative data, operating experience, and rudimentary computer capability. In addition to its design safety, the AEC added margin in its siting criteria. To account for the remote probability of the worst conceivable accident, in which the containment building failed completely, the AEC used a formula that required that plants be set back an extra distance from cities larger than twenty-five thousand inhabitants.4

The foundation of the AEC’s qualitative approach to safety was expert judgment. Without extensive operating experience “for a statistical analysis of the risk,” Price admitted, “we will have to act on judgment—the judgment of technical people . . . that the risk of a reactor catastrophe is exceedingly low.”5
The AEC’s evaluation of hazards based on judgment sometimes looked like guesswork. David Okrent, a long-time ACRS member, compared the AEC’s reliance on judgments of credible accidents favorably to the practice in other technologies. The inability to quantify accident risk, he admitted, forced the AEC to rely on judgment as its safety arbiter. “That such problems are reviewed and judged in advance of the occurrence of an accident . . . is relatively unique in the regulatory field. Most technological ventures have approached safety empirically, with corrections made after the occurrence of one or more bad accidents. So, if the process has been imperfect, at least it has existed.”6

The AEC did not rely on judgment alone. It invested heavily in research and development to confirm elements of its safety approach. From 1951 to 1974, the federal government spent $23 billion (1990 dollars) on research related to light-water reactors, the bulk of which was conducted by the AEC’s national laboratories, particularly Oak Ridge and Argonne. Argonne oversaw the National Reactor Testing Station (later renamed Idaho National Laboratory), where fifty-two test and prototype reactors were built for civilian, navy, army, and space applications, as well as other facilities. For example, the AEC built many test reactors to confirm that civilian reactor concepts would not have the potential for the reactivity excursions that were possible at the Hanford reactors. Beginning in 1953, the Boiling Water Reactor Experiment (BORAX) series ran hundreds of experiments across five generations of BORAX reactors. Researchers demonstrated the inherent tendency of water reactors to shut themselves down during simulated equipment malfunctions and operator errors that might inject a large pulse of reactivity into the chain reaction. The success of the BORAX series led to other safety programs. Test facilities with acronyms such as STEP and SPERT dotted the Idaho landscape. Researchers often pushed the reactors to destruction to establish the outer boundary of safe operation. The AEC also studied reactor fuel and component material properties in special facilities such as the Materials Test Reactor (MTR). This massive program informed reactor vendors, such as GE and Westinghouse, as they developed boiling-water (BWR) and pressurized-water (PWR) reactor designs that met the AEC’s adequate protection standard.7

Satisfied that water reactors were inherently stable, the AEC turned its research to the loss-of-coolant accident (LOCA) as the maximum credible accident of greatest concern. There was little evidence that large piping could suffer an instantaneous break, but the AEC staff and the ACRS judged that such a break could not be ruled out as incredible.
It became the reigning design basis accident for virtually every light water reactor worldwide. It was a useful standard for the design of robust containment buildings and cooling systems. In the 1960s, the AEC developed plans to explore a LOCA in a test series known as the Loss-of-Fluid Test (LOFT) at an Idaho test reactor.

As the civilian nuclear power industry gained traction in the early 1960s, the AEC remained wedded to the Three Ds. The ACRS, originally an ad hoc committee, became a statutory committee in 1957. Its independent evaluations became an important check on the regulatory staff’s review of civilian reactor applications. As in its earlier evaluations of AEC research and plutonium production reactors, the ACRS was usually a conservative advocate for civilian designs with some measure of inherent safety, defense in depth, and sites on the outskirts of urban centers. This created tension with utilities seeking to maximize production efficiency with sites close to their customers downtown. The ACRS also pressed for robust containment buildings that could handle the worst loss-of-coolant accident.8

With the LOCA as the standard design basis accident, regulators demanded safety features from vendors that held radiation levels below public exposure limits during an accident. While this process seemed straightforward, reactor behavior during a LOCA was poorly understood, and the LOFT tests were not expected to begin for years. The AEC accounted for these uncertainties deterministically, by assuming the worst conditions at each step in an accident’s progression. Static containment buildings were the capstone of defense in depth. They were made with a reliable steel shell and usually a layer of reinforced concrete. They had few moving parts, almost no need for power sources, and were virtually certain to work. No one had to turn on a containment building, whereas pumps could fail or be mistakenly turned off, as later happened during the Three Mile Island accident.9 Containment was the last barrier, the last line in defense in depth. These layers of defense in depth, said the chairman of the ACRS, had to “be extraordinarily reliable and consistent with the best engineering practices as used for applications where failures can be catastrophic.”10

THE BANDWAGON MARKET
Even as the AEC sought to maintain regulatory stability and simplicity, a burst of reactor orders and rapid changes in designs presented new challenges to the Three Ds of safety.
FIGURE 3. The spherical containment building of the Big Rock Point nuclear power plant exemplified the role played in reactor design by escaping high-pressure steam from a loss-of-coolant accident. Later containment buildings added a layer of reinforced concrete for additional support, shielding, and resistance to projectiles. Source: US AEC/DOE Flickr, HD.6D.167.

The fledgling nuclear industry quickly found itself the preferred power source over smoky coal facilities. In 1963, Jersey Central Power & Light Company announced the purchase of the first “turnkey contract” from General Electric Company to build the 515-megawatt Oyster Creek nuclear power plant for just $66 million, an astonishingly low price. Since the vendor took on responsibility for designing and building the plant, a turnkey contract was like buying a car: the new utility owner did not get involved until it symbolically took the keys for the new plant and started it up for the first time. Although Oyster Creek was sold as a loss leader, nuclear plants were soon competing with coal plants even in locations close to coal mines. GE and Westinghouse engaged in an intense competition for contracts, and by 1967 a “great bandwagon market” for nuclear plants took off. The Tennessee Valley Authority (TVA) ordered two thousand-megawatt plants from GE despite its access to low-cost coal deposits.
Fortune magazine called the order “An Atomic Bomb in the Land of Coal.” Philip Sporn, president of the American Electric Power Company, was skeptical and complained of “a bandwagon effect, with many utilities rushing ahead . . . on the basis of only nebulous analysis.” Sporn was prescient, but optimism was everywhere. GE vice president James Young reported the buyers exceeded “even the most optimistic estimates” with thirty-one orders in 1967.11

The bandwagon market placed a heavy workload on the AEC regulatory staff to process requests for construction permits, but it also raised new safety questions as vendors designed ever-larger plants. Over the course of the twentieth century, designers had scaled up fossil power plants in search of economies of scale, and they expected the same of nuclear plants. By 1967, just four years after the Oyster Creek order, vendors had doubled the output of their baseload plants. The new plants were not just larger versions of the early plants. Vendors sought economic advantages and shifted plant designs. The constantly evolving designs produced unanticipated safety questions and a feeling among the AEC staff that they were being asked to rubber-stamp new applications at sites with questionable characteristics.12

Some applicants strayed a bit too far from the AEC’s conservative approach. Pacific Gas & Electric Company proposed a nuclear power plant at Bodega Bay, California, close to the San Andreas Fault. Under fire from the public and sharp questioning about seismic safety from the AEC staff, PG&E withdrew the application. It was the same story for the Los Angeles Department of Water and Power when it proposed a seismically questionable facility near Malibu beach. In New York, Consolidated Edison applied for a plant just across the river from Manhattan in Queens. Public outcry and a skeptical ACRS assessment forced Con Ed to retreat.13

Even proposals in less controversial locations ran into trouble. In December 1964, an AEC licensing board granted only a conditional construction permit for the Oyster Creek plant. The licensing board wanted more information on the plant’s new, innovative Mark I pressure-suppression pool containment and other design features. The board ruled that the applicant, Jersey Central Power & Light, had not provided enough data to assure the board that it could operate the reactor “without undue risk” to the public. The integrity of the containment over the forty-year lifetime of the plant, the board noted, depended on “the potential fallibility of active systems.” The increased power and reduction in safety equipment costs created “uncertainties” that prevented the issuance of a full construction permit.
issuance of a full construction permit. The board called for a more formal, careful review of the reactor safety systems, fuel design, and design criteria for the plant. Regulators could not ascertain even in a qualitative sense that the risk of a major accident was acceptably low.14

The nuclear industry, however, believed its safety designs were already mature and safe enough. Its leadership objected to the Oyster Creek ruling, claiming that AEC safety standards were moving targets. One industry journal called it a troubling “precedent-setting decision” that created more uncertainty about the level of detail required for construction permit applications.15

Oyster Creek supported what became a chronic complaint: the AEC was excessively conservative and unpredictable. From one application to the next, the AEC demanded new, expensive, redundant safety systems without a clear justification. Kenneth Davis of Bechtel Corporation said these regulatory surprises were like having “a set of precise instructions for getting to B from A without knowing where either A or B are.”16 The AEC believed there was no consistency in designs; the vendors believed there was no consistency in regulation. Both points of view were true. It was taking longer to issue construction permits. The AEC often forced redesign and backfits on plants already under construction. AEC regulators acknowledged that the agency’s regulations were getting more complex, but, they insisted, so was the technology. Nuclear power design was not mature, but the industry believed it was. Utilities wanted economies of scale, and nuclear vendors raced to meet demand by extrapolating from smaller to larger plants without gaining operating experience with new design innovations, experience that often revealed problems and disappointing performance. TVA director S. David Freeman said of the extrapolation period, “American utilities were trying to save nickels and lost dollars.” In 1973, the AEC finally put a stop to it by capping maximum reactor power output for new orders. It pointed out, “continual increase in the size of these plants has resulted in many plant design modifications and in a large expenditure of AEC staff review effort to assure the maintenance of a consistent level of safety.”17

As the Three Ds proved simple in concept but complex in practice, there was plenty of second-guessing of the AEC’s safety philosophy. As one engineer with Atomics International explained, the design basis accident left designers in the dark about the magnitude of the hazard, and it made the dividing line between credible and incredible accidents a “purely subjective” determination. Moreover, by focusing on spectacular accidents, the DBA “may obscure subtle possibilities and lead to neglect of
less severe accidents which may be more important because they occur more frequently.” Very unlikely incredible accidents might be important, he argued, because their disastrous consequences might be intolerable to a society. He called for sophisticated tools to consider a broad range of accidents through a “probabilistic approach” that employed the latest statistical methods developed in the ballistic missile program.18

The international nuclear community also favored risk quantification. Nations such as Canada, Japan, West Germany, and Great Britain took their safety cue from the AEC, but they often dealt with different reactor designs and siting challenges. Canada’s reactors had slightly positive reactivity coefficients, which made their active shutdown systems, and a probabilistic estimate of safety, crucially important. Japan and European nations contended with higher population densities than the United States, which made remote siting difficult. They advocated probabilistic approaches they believed offered greater siting flexibility.19 The AEC was not persuaded. Its regulators defended the Three Ds as the only reliable approach to safety. The AEC’s leading technical advisor, Clifford Beck, acknowledged quantification of risk was “intuitively attractive,” but argued it was “not possible to obtain a confident answer” given the “subjective judgments” of experts.20

ACCIDENT CONSEQUENCES: THE WASH-740 STUDY
Beck’s skepticism of a probabilistic approach was based on hard experience. In 1957, the AEC issued a report on the theoretical consequences of a major reactor accident, known as WASH-740. WASH-740 produced an alarming and, some argued, unrealistic estimate that a worst-case accident might kill 3,400 people and cause $7 billion in property damage. In the report, the AEC tried to counter this alarming hypothetical accident with some estimate of its “exceedingly low” probability but could elicit only unreliable expert judgments ranging from one in one hundred thousand to one in a billion reactor-years of operation. These numbers were reassuring, but the AEC acknowledged that some experts it consulted refused to answer the question, believing that such a probability was “unknowable.” The authors of WASH-740 despaired that “no one knows now or will ever know the exact magnitude of this low probability.” Without a clear number for risk, the AEC’s qualitative approach to safety made for a muddled, incoherent message to the public, as was evident in the qualification-laden title selected for WASH-740. Its concise working title, The Reactor Disaster Study, turned into
Theoretical Possibilities and Consequences of Major Accidents in Large Nuclear Power Plants: A Study of Possible Consequences if Certain Assumed Accidents, Theoretically Possible but Highly Improbable, Were to Occur in Large Nuclear Power Plants. The AEC could not explain qualitative safety in simple language.21

The legacy of WASH-740 haunted the AEC for years. It was the worst half of a risk assessment—dire, unrealistic consequences without the perspective of their improbability. The report made for favorite reading by antinuclear activists, was easy to caricature, and was an “albatross around our necks,” AEC Chairman Dixy Lee Ray later noted. Refuting it became a priority for friends of nuclear power. The first attempt to do so was mostly forced on the AEC. In February 1964, Frank Pittman, director of the AEC’s Division of Reactor Development, testified before the Joint Committee on Atomic Energy. Its powerful chairman, Congressman Chet Holifield, pressed him to update WASH-740 in anticipation of a more complete picture of risk. Pittman demurred. Given the industry’s still limited operating experience, the AEC had little new information to revise the original conclusions, he told them. Safety studies and experiments were years from completion. A new study could backfire, Pittman warned. “If you make a new analysis and it doesn’t improve the situation, it might make the situation worse.” Holifield reasoned differently: new safety improvements the AEC had mandated would certainly improve the study’s conclusions. At a Joint Committee hearing held two months later he asked Commissioner James Ramey for an updated study.22

It was a request the AEC could not refuse. Holifield spent almost all of his thirty-two years in Congress on the JCAE, and he pursued the promise of nuclear power with zeal. Toward the end of his career, his promotional commitment intensified. In 1970, he confessed to a Westinghouse official that the looming energy crisis “is so clear to me, and it is so urgent that, for the first time in my life, I am aware of the meaning of my age and the shortness of probable time which I have to work on the problem.” With the passion of a man seeking to define his legacy, the California congressman placed allies in key AEC positions. Milton Shaw, a key assistant to Admiral Hyman Rickover, the father of the nuclear navy, headed the AEC’s research and development program. The JCAE’s staff director, James Ramey, became an AEC commissioner in 1962 despite a lack of White House enthusiasm for a nominee who would obviously increase Holifield’s power. One industry official described him as “industry’s best friend on the Commission.”23
As Pittman predicted, Holifield’s request to update WASH-740 was a major mistake. The update produced consequences worse than the original study for the simple reason that the study’s basic assumptions did not change, save one: reactors were larger and produced more radiation. A major hypothetical accident would simply produce more hypothetical deaths. And while the AEC hoped the updated study would include estimates of “probabilities and consequences of major reactor accidents,” Brookhaven National Laboratory refused to develop any probability estimate.24 Assigned to study a worst-case accident, its scientists assumed that the probability of a major accident was one hundred percent, and that new safety features, including containment, would fail. To AEC staff, Brookhaven’s assumptions seemed “unwarranted” and ignored the fact that no reactor would be built without first proving “the effectiveness and reliability of engineered safeguards.” Brookhaven’s approach presented a major public relations problem. “No matter what statements are made on the incredibility of the upper limit accidents,” wrote one AEC chief, “these will be ignored by reactor siting antagonists, including the coal lobby.” He was particularly nettled by the assumption that containment would fail, and he called for scrapping Brookhaven’s pessimistic assumptions in favor of “a consideration of reasonable accidents which are still considered incredible by most standards (failure of every safeguard except the containment structure).” Brookhaven refused.25 Confronted with another public relations debacle, AEC staff tried to prove that engineered safeguards were already so reliable that the Brookhaven assumptions could be discarded. Clifford Beck took the lead in developing a section on accident probability estimates. In recent years, the AEC had contracted with Holmes & Narver to quantify nuclear power reliability and the Planning Research Corporation for accident probability studies. The agency enlisted Planning Research to attempt the probability estimate.26 Planning Research pursued the estimate in two ways. One approach developed probabilities based on operating experience. The other estimate took a similar approach to GE’s at Hanford in 1953, developing some rough accident chains from random component failure-rate data and expert judgment. Neither approach worked well. Reactor operating experience was still limited, and Planning Research could only state with 95 percent confidence that a major accident’s probability was no worse than one in five hundred years of operation. That was hardly a comforting number. The industry expected to build one thousand nuclear power
plants by the year 2000, which meant two major accidents a year were possible. Its second estimate, Planning Research admitted, was an uncertain “quasi-quantitative” approach that relied on a mix of component failure data, expert judgment, and accident sequence block diagrams. It produced wildly optimistic numbers. Even the most pessimistic end of the range predicted one major accident in one hundred million years of operation; a reactor operating since the age of the dinosaurs might have just one accident. The most optimistic end of the range predicted one accident in ten quadrillion years of operation—seven hundred thousand times longer than the age of the universe.27

The AEC concluded the Planning Research estimates were useless. The five-hundred-year probability estimate simply reflected the paucity of operating experience, and the component-failure estimates missed obvious ways a reactor might fail. John Garrick, leader of Holmes & Narver’s risk assessment group, recalled that such “age of the earth” estimates were typical of 1960s accident models, even his own dissertation, because it was hard to model common-cause failures that swept away primary and backup safety systems, such as common manufacturing defects, fires, and floods. The definitions of probability and risk were also not standardized, nor was there agreement on how to account for uncertainties. Lacking probabilities, again, the AEC was stuck with a larger WASH-740. The commission opted to suppress the results. For the next eight years, the unfinished update sat like a tumor in remission in AEC filing cabinets, waiting to metastasize. In hope of better results, the agency continued funding reliability and probabilistic work through Holmes & Narver on basic piping failure probabilities.28 In the meantime, the AEC had no adequate risk assessment methodology and no way to put nuclear catastrophe in probabilistic perspective for the public. That would require help from the defense and the aerospace industries.
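The arithmetic behind such operating-experience bounds is simple enough to sketch. The snippet below assumes, purely for illustration, that zero major accidents had been observed in about 1,500 reactor-years; the report's actual inputs are not given here. It applies the standard statistical "rule of three" for a 95 percent upper bound on the rate of an event that has never been observed.

```python
# Illustrative only: the inputs are assumed, not Planning Research's data.
reactor_years = 1500  # hypothetical operating experience, zero accidents seen

# "Rule of three": with zero events observed in N trials, the 95% upper
# confidence bound on the event rate is roughly 3/N per trial.
upper_bound = 3.0 / reactor_years   # accidents per reactor-year
print(f"95% upper bound: one accident per {1 / upper_bound:.0f} reactor-years")

# Scale that bound to the fleet of 1,000 plants projected by the year 2000.
fleet = 1000
print(f"Accidents per year at that bound: {upper_bound * fleet:.1f}")  # ~2.0
```

On those assumptions the bound works out to one accident per five hundred reactor-years, which is why a thousand-plant fleet implied as many as two major accidents a year.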
SEEING THE FOREST WITH TREES: FAULT-TREE METHODOLOGY AND RISK QUANTIFICATION

Part of the problem in developing probabilities was that nuclear engineers could not see all the paths to disaster. They needed a common visual representation of component failure chains. “Decision trees” came to the rescue. In 1959, a British statistician suggested that the binary nature of computers could calculate the probability of biological mutations. Business scholars, such as Howard Raiffa, borrowed the idea to develop decision trees that mapped chains of financial decisions using
Bayesian probability estimates. Military think tanks such as the RAND Corporation sought to meld systems analysis with the new probabilistic methods.29

FIGURE 4. Fault trees were the key innovation developed by Bell Laboratories for the US missile and aerospace programs. In this 1963 illustration, Bell presented a simplified fault tree depicting the possible failure paths of a household hot water heater. Source: C. R. Eckberg, WS133B Fault Tree Analysis Program Plan, Seattle, WA: Boeing Corporation, March 16, 1963.

Like nuclear reactors, weapons systems often could not be easily tested for catastrophic failure, and, as GE had done at Hanford, engineers in the US ballistic missile program looked to break down the risk of malfunction by analyzing individual system reliability and then combining the individual probabilities of failure into a final assessment. In 1962, Bell Labs provided the key innovation by adapting tree theory for more sophisticated use in the missile program. It created “fault trees,” diagrams that expressed failure logic in Boolean algebra. Symbols called “gates” depicted the logic and binary behavior (operate successfully/fail) of system components. AND gates could represent a power supply that failed only when its primary and backup supplies were lost. OR gates could symbolize a power-operated valve that failed if it stuck mechanically or suffered a broken motor. Fault trees reduced power supplies, valves, and pumps to universal symbols, a visual lingua franca of catastrophe. If fluent in fault trees, any engineer, even one unfamiliar with a technology, could see the paths to a disaster. By adding in component failure data at each gate, he could manipulate the design to minimize the probability of failure.30 Fault trees made designs and their accidents knowable.

Hailed for enhancing the reliability of Minuteman missiles, fault trees found civilian applications, including Boeing’s new 747 passenger
plane.31 Nuclear experts were eager to use the new methodology, and decision and fault trees spread like a contagion into all areas of nuclear technology. Experts who moved between nuclear power and aerospace work used them to assess the risk of the SNAP reactors and generators used in satellites. Holmes & Narver developed failure-rate data collection techniques and a fault-tree computer model that was used in the late 1960s to analyze Hanford reactors.32 Civilian reactor designers, such as GE and Westinghouse, used them to assess reactor safety systems in water- and non-water-cooled reactor designs and the probability of station blackouts during a LOCA.33 Nevertheless, roadblocks persisted with data gathering, accounting for major accident initiators, and estimating common-mode failures. Risk analysts recognized these uncertainties and typically did not claim they had determined the total risk a plant design posed. They used numbers in a relative way to compare system design variations.34
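The gate logic described above reduces to a few lines of arithmetic. The toy tree below is invented for illustration and corresponds to no real study: an AND gate multiplies the failure probabilities of independent inputs, while an OR gate fails if any input fails.

```python
# A toy fault tree illustrating the gate logic described in the text.
# All failure probabilities are invented for illustration only.

def and_gate(*probs):
    """AND gate: the output fails only if every input fails
    (probabilities multiply, assuming independent failures)."""
    result = 1.0
    for p in probs:
        result *= p
    return result

def or_gate(*probs):
    """OR gate: the output fails if any input fails
    (complement of all inputs surviving, assuming independence)."""
    survive = 1.0
    for p in probs:
        survive *= 1.0 - p
    return 1.0 - survive

# Hypothetical per-demand failure probabilities:
primary_power = 1e-3
backup_power = 1e-2
valve_sticks = 1e-4
valve_motor = 1e-3

# Power is lost only when primary AND backup both fail.
power_lost = and_gate(primary_power, backup_power)   # 1e-5

# The valve fails if it sticks OR its motor breaks.
valve_fails = or_gate(valve_sticks, valve_motor)     # ~1.1e-3

# Top event: cooling fails if power is lost OR the valve fails.
cooling_fails = or_gate(power_lost, valve_fails)
print(f"P(cooling fails) ~ {cooling_fails:.2e}")
```

The multiplication in the AND gate is also the method's soft spot: a fire that cuts power to both supplies at once violates the independence assumption, the common-mode problem noted above.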
Most of these efforts were limited analyses of individual safety systems, much as GE had done at Hanford in the 1950s. But some experts envisioned something much more complex and ambitious, coupling these bottom-up estimates with top-down quantitative safety goals that would set safety design standards. In 1967, F. R. Farmer, head of safety for Great Britain’s Atomic Energy Authority, made a landmark proposal for reactor siting criteria along probabilistic lines. The US design basis accident, Farmer thought, was arbitrary, and Britain faced population densities that made US siting regulations impracticable. He called for assessing “in a quantity-related manner . . . a spectrum of events with associated probabilities and associated consequences.” He based his maximum exposure criteria on international standards for health impacts from Iodine-131, a particularly dangerous isotope, and plotted a curve of increasing radiation exposure to declining accident probability. Farmer incorporated the psychology of disaster into his curve. He believed the public feared single large accidents more than many small ones. So, he bent the limit curve, making it “risk averse.”35 A major accident with many fatalities had to be disproportionately less likely than many small accidents that killed the same number of people.

FIGURE 5. In 1967, Great Britain’s F. R. Farmer proposed one of the earliest and most influential accident risk curves for reactor safety design. The “Farmer Curve” set a maximum radioactive release criterion for an accident based on its probability. On the x-axis, Farmer placed release values for the dangerous fission product Iodine-131. The y-axis represented in reverse order the years of reactor operation. The limit line runs from relatively common small releases in the upper left corner to rare, disastrous releases in the lower right. A large accident releasing about a million curies of I-131 (10⁶) should not occur more than once in ten million years of reactor operation (10⁷). Believing that the public would not tolerate large disasters, Farmer bent the curve to make the risk criterion “risk averse” for catastrophic accidents. They were to be far less probable than many small accidents that might add up to the same number of deaths. Source: F. R. Farmer, “Siting criteria—a new approach,” in Containment and Siting of Nuclear Power Plants: Proceedings of a Symposium Held in Vienna, Austria, 3–7 April 1967 (Vienna: IAEA, 1967), 315. By permission of the International Atomic Energy Agency.
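The caption's anchor point can be turned into a small screening rule. The sketch below encodes a Farmer-style limit line of the general shape he described, pinned at roughly 10⁶ curies of I-131 once per 10⁷ reactor-years; the slope of 1.5 and the test cases are illustrative assumptions, not Farmer's published values.

```python
# A sketch of a Farmer-style limit line (illustrative, not Farmer's
# published criterion). Anchor point from the caption: a release of
# ~1e6 curies of I-131 allowed no more than once per 1e7 reactor-years.
ANCHOR_RELEASE = 1e6   # curies of I-131
ANCHOR_FREQ = 1e-7     # events per reactor-year
SLOPE = 1.5            # assumed; any slope > 1 makes the line "risk averse"

def allowed_frequency(release_curies):
    """Maximum tolerable frequency for a release of a given size.
    With SLOPE > 1, a tenfold larger release must be more than
    ten times rarer -- large accidents are penalized extra."""
    return ANCHOR_FREQ * (ANCHOR_RELEASE / release_curies) ** SLOPE

def acceptable(release_curies, estimated_freq):
    return estimated_freq <= allowed_frequency(release_curies)

# Illustrative screening of two hypothetical accident estimates:
print(acceptable(1e2, 1e-2))  # small, relatively frequent release -> True
print(acceptable(1e6, 1e-6))  # large release, rarer but above line -> False
```

Because the assumed slope exceeds one, the curve embodies exactly the risk aversion described in the caption: scaling up the consequences scales down the tolerable frequency more than proportionally.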
The “Farmer Curve” became very influential. GE staffer Ian Wall, for example, integrated Farmer’s work into the company’s already substantial probabilistic efforts. In 1969, a study led by Wall used quantitative risk estimates to conclude that siting reactors in urban locations was an unacceptable risk to the company, and it spelled out quantitative safety goals for plant design, including the reliability of safety components. While the AEC approached safety deterministically, GE was thinking probabilistically. Wall later brought his quantitative skills to the AEC and worked on the WASH-1400 report.36

In 1969, Chauncey Starr, the dean of engineering at the University of California–Los Angeles, attempted a universal model of acceptable
risk with policy application. He had spent several years considering nuclear power’s public relations problem, which, he believed, was caused by the excessive caution of nuclear experts as epitomized in the improbably severe accidents postulated in WASH-740. Evaluating nuclear power this way was like evaluating airline travel by postulating a plane crash into Yankee Stadium during the seventh game of the World Series. Such scenarios only confused and frightened the public. Above-ground weapons testing, and “the vacillation of self-nominated science-statesmen in their mixing of fact, value judgments and policy,” he contended, had produced a generation of “nuclear hypochondriacs” with “irrational anxiety.” It was time to persuade the public with numbers through a formal analysis. The risk of a new technology should be understood quantitatively as “the gambler’s chance” of enjoying new benefits at some risk of personal damage. Quantifying risks and benefits allowed for comparisons with other technologies. The key was to discern the public’s tolerance for various kinds of risk.37

Starr believed public risk tolerance was stable across generations, and he used historical data on accidents and disease to calculate its “revealed preference” for certain kinds of risk. This data on accepted risk for automobile accidents, plane crashes, and the health impact of coal power could be used by policymakers to establish safety standards for new technologies such as nuclear power. Starr concluded the public readily accepted—by a factor of one thousand—voluntary risks, such as driving a car or climbing a ladder at home, over involuntarily imposed risks, such as air pollution and nuclear power accidents. He then divided up voluntary and involuntary hazards, used insurance data to estimate the economic value of a lost life, and quantified risks and benefits of each technology. In this way, he could compare any risk and determine whether it was safe enough for public acceptance. By his calculations, nuclear power was already much safer than almost any other voluntary or involuntary risk. Starr’s work intrigued the AEC. It might finally be possible to eliminate the incomprehensible language of qualitative safety and define “adequate protection” in numbers that could be compared with ease to risks in everyday life.38
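Starr's comparison can be caricatured in a few lines. The exposure rates below are invented round numbers, not Starr's 1969 data; only the factor of one thousand comes from the text.

```python
# A caricature of Starr's "revealed preference" comparison.
# The fatality rates are invented round numbers for illustration.

driving = 1e-6          # voluntary risk per person-hour (illustrative)
nuclear_nearby = 1e-9   # involuntary risk per person-hour (illustrative)

# Starr's finding: the public tolerates roughly 1,000x more risk
# when the exposure is voluntary rather than imposed.
VOLUNTARY_FACTOR = 1000

# Judge the involuntary hazard against the voluntary benchmark,
# discounted by the factor of one thousand.
involuntary_tolerance = driving / VOLUNTARY_FACTOR   # 1e-9
verdict = "acceptable" if nuclear_nearby <= involuntary_tolerance else "too risky"
print(f"Involuntary tolerance: {involuntary_tolerance:.0e}/person-hour -> {verdict}")
```

On numbers like these, nuclear power clears the bar, which was roughly Starr's conclusion; the risk-perception research described below explains why the public remained unmoved by such comparisons.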
Starr believed that the problems created by physical technologies could be solved by the intellectual technology of risk assessment. His revealed preference proposal was an engineer’s formula to solve a public policy question: “How safe is safe enough?” “To a technologist,” Starr later admitted, “it appeared to be a logical approach to societal decisions.” Starr relied on what was called the “deficit model” of public awareness of science and technology issues. He thought that improving knowledge of a scientific or technological issue would persuade the typical citizen to see risk as Starr did and eliminate public distrust of nuclear power. His was an Enlightenment faith that the public was composed of rational actors who dispassionately choose their risks according to data.39

Starr helped inspire the new field of risk analysis, and scholars tested his assumption that the public acted rationally based on informed choices. The work of Amos Tversky and Daniel Kahneman on emotion, bias, and perceptions of risk—part of a body of work for which Kahneman later won a Nobel Prize—created an alternative understanding of the role of emotion and the illogic of public attitudes toward risk. Scholars applied Tversky and Kahneman’s insights and showed that while Starr was correct that the public generally tolerated voluntary more than involuntary risk, numerous biases among the public, decision-makers, and even experts made a mess of any quantification of “safe enough.” Individuals were easily swayed by their familiarity with a hazard, “dread” of catastrophic events, and “confirmation bias” that led them to seek only information that supported preexisting beliefs. Starr was also wrong in thinking the public’s tolerance for risk did not change over time. People may tolerate certain risks that seem inevitable, but that does not mean they accept them. From seat belts to smoking, the public was becoming more safety conscious. And they did not see nuclear power as offering the same benefits as other sources of electricity, even when comparing a coal and nuclear plant that produced the same power. While these views seemed irrational, other research indicated that the public perceived nuclear power risk in more complex ways than could be expressed in simple calculations. People feared nuclear power because of its possible genetic threat to the next generation and the great uncertainty of an accident’s probability and consequences. Ideology mattered, too. Many did not trust data from industry and AEC technical experts because they represented the military-industrial complex. These insights into risk perception, however, were years away. In the meantime, Starr’s risk assessment model gained favor as a technical tool and a means of public persuasion.40

By the late 1960s, the pieces of risk assessment seemed to be coming together, including the elusive pursuit of accident probabilities. In 1967, John Garrick’s dissertation developed one of the first sophisticated fault-tree diagrams and accident estimates and spelled out an early version of what became known as the risk triplet, described in chapter 1. It was
work that, in the 1970s, would lead to Garrick’s commercial success at developing industry PRAs. He suggested that advancements in the field might permit experts to quantify enough of the total probabilities and consequences of reactor accidents to “arrive at the goal of a figure of merit [i.e., a numerical expression of total risk] to quantify safety.”41
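The risk triplet (what can go wrong, how likely is it, and what are the consequences) maps naturally onto a small data structure, and a "figure of merit" falls out as a frequency-weighted sum. The scenarios and numbers below are invented for illustration and are not from Garrick's dissertation.

```python
# The risk triplet as a data structure: (scenario, frequency, consequence).
# All entries are invented for illustration.
scenarios = [
    # (what can go wrong,        events/yr, consequence score)
    ("small pipe leak, isolated",   1e-2,    1.0),
    ("LOCA, ECCS works",            1e-4,   10.0),
    ("LOCA, ECCS fails",            1e-7, 1000.0),
]

# One possible "figure of merit" for total risk: frequency-weighted
# consequence, summed over the whole scenario set.
total_risk = sum(freq * consequence for _, freq, consequence in scenarios)
print(f"Total risk (consequence units per year): {total_risk:.4f}")

# The same structure shows which scenario dominates the total, the kind
# of insight later PRAs were prized for.
dominant = max(scenarios, key=lambda s: s[1] * s[2])
print(f"Dominant contributor: {dominant[0]}")
```

Note that on these invented numbers the frequent small leak, not the rare meltdown, dominates the total, the kind of result the Atomics International engineer anticipated when he warned that a fixation on spectacular accidents could neglect less severe but more frequent ones.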
In 1969, an AEC task force noted the limits to quantification, but agreed that a “total risk evaluation may be feasible.” The British were already using rough estimates of risk in their power plant siting decisions. There were technical questions to resolve, but a new age of probabilistic safety loomed, one that might solve nuclear power’s political problems, too.42

AEC regulators were curious but unconvinced. There had been advances in risk assessment, they agreed, but neither the British nor American models addressed all significant accidents. Stephen Hanauer responded to a British counterpart: “We have not yet arrived at the point where probability analysis techniques give adequate assurance that all failure modes are indeed considered, that the probabilistic model for severe accidents corresponds to the actual failures that will occur, and that adequate failure rate data are available for prediction.”43 The British had done extensive work to estimate probabilities through decision trees, but their model’s simplicity often sacrificed detail and accuracy, and their results still relied on extensive use of expert judgment.44

The AEC’s staff was also skeptical that active safety systems were reliable enough to count on. This produced a catch-22. The AEC wanted high-quality active systems like Emergency Core Cooling Systems (ECCS) but was of two minds about giving vendors credit for them in a reactor design. In 1964, the ACRS deemed ECCS essential for accident prevention, but said ECCS systems “cannot be relied upon as the sole engineered safeguards” and should be assumed to fail when postulating a maximum credible accident. The ACRS’s unwillingness to give ECCS systems much credit gave vendors little incentive to improve them. Nevertheless, the vendors and licensees continued to push for greater flexibility in siting nuclear power plants near cities by getting credit for a variety of new safety features. In many applications, the AEC had to pass judgment on these tradeoffs.45

Balancing tradeoffs was complex. By 1965, reactor vendors tried to develop inexpensive containment designs. General Electric proposed an economical Mark I containment system. The Mark I’s trademark inverted light bulb containment building was much smaller than typical dry containment buildings due, in part, to the lower operating pressure of the coolant in GE BWRs, as compared to PWRs. The Mark I also deployed an ingenious pressure suppression system. The escaping steam during a LOCA was funneled out of the drywell and into submerged pipes feeding into an enormous donut-shaped pool of water called a torus. As the steam bubbled up through the pool, it condensed back to water and was scrubbed clean of most of the radioactive particles. Less hazardous, non-condensable gases collected in the top half of the torus where operators could vent them outside the building.46

FIGURE 6. This image of the construction of a Mark I containment at the Browns Ferry Nuclear Power Station in Alabama reveals its safety logic. The capsule-shaped reactor vessel sits inside the steel “inverted light bulb” section. In the event of a leak from the reactor, the escaping steam is directed through the spider-leg piping to the donut-shaped torus at the base. Filled with water, the torus condensed the steam and scrubbed it of radioactive particulates. More versatile than dry containments in some accident scenarios, the Mark I was more susceptible to failure during a core meltdown. Source: Tennessee Valley Authority.
The design had a weakness. If the fuel heated up too much during the accident, the zirconium metal tubes—cladding—holding the fuel pellets would react with the steam and create explosive hydrogen. The problem was particularly acute for BWRs, since their fuel design had more zirconium than a comparable PWR. The AEC regulatory staff reported that a pressure suppression “containment would not withstand the pressure of a metal-water reaction and the ensuing hydrogen release.” They concluded “that dry containments are, from this standpoint, in better shape than pressure suppression.”47
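The chemistry behind the staff's worry is easy to put in rough numbers. The zirconium-steam reaction (Zr + 2H₂O → ZrO₂ + 2H₂) is strongly exothermic and yields two molecules of hydrogen per atom of zirconium consumed. The sketch below uses standard molar masses; the one-kilogram cladding quantity is chosen only for illustration.

```python
# Back-of-the-envelope for the zirconium-steam reaction:
#   Zr + 2 H2O -> ZrO2 + 2 H2   (strongly exothermic)
M_ZR = 91.22         # g/mol, zirconium
M_H2 = 2.016         # g/mol, hydrogen gas
MOLAR_VOLUME = 22.4  # liters/mol of gas at standard temperature and pressure

cladding_kg = 1.0  # consider 1 kg of zirconium cladding fully oxidized

mol_zr = cladding_kg * 1000 / M_ZR    # ~11 mol Zr
mol_h2 = 2 * mol_zr                   # 2 mol H2 per mol Zr
h2_liters_stp = mol_h2 * MOLAR_VOLUME # ~490 liters at STP

print(f"{cladding_kg} kg Zr -> {mol_h2 * M_H2 / 1000:.3f} kg H2, "
      f"~{h2_liters_stp:.0f} L at STP")
# A large BWR core held tens of thousands of kilograms of zirconium, so
# even partial oxidation meant an enormous hydrogen inventory, plus the
# reaction's own heat driving the fuel hotter still.
```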
Rather than change its containment design, GE called for reordering the priorities of defense in depth. GE executives argued for giving credit for the effectiveness of its ECCS system, because it had redundant power sources, water, and pumps to prevent a metal-water reaction. The staff complained that GE was changing the design basis accident “to fit what the containment can take.” At a meeting in July 1965, the staff argued that “for containment design, credit should not be taken for active systems.”48

While some members of the AEC staff may have thought GE was playing fast and loose with safety, GE officials believed their BWR compared well in terms of safety with its competitor, the Westinghouse Pressurized Water Reactor. Solomon Levy, a leading GE engineer who later headed the company’s entire nuclear division, believed that the AEC’s singular fixation with a large LOCA overlooked the superior safety aspects of the BWR design. BWRs operated with a single cooling loop whose simple design and lower operating pressure made it less susceptible to accidents and pipe failures. The Westinghouse dry containment, he granted, was superior in dealing with accidents with significant core damage, but the BWR and its pressure suppression containment were more likely to prevent a core meltdown in the first place. With more water available in an emergency and many more ways of getting it into the core than a Westinghouse reactor, the risks balanced out. GE engineers joked that they had so much water on hand they could make the reactor float during an accident.49

FIGURES 7 AND 8. The Boiling Water Reactor (BWR) and Pressurized Water Reactor (PWR) designs dominate nuclear power operations worldwide. The BWR offered a simple, relatively inexpensive design where a single loop of water served to cool the reactor and run the turbine generator. The PWR’s use of a two-loop design allows it to isolate virtually all radiation to the containment building and operate at higher pressures and temperatures. The advantages and disadvantages of each design are asymmetrical, making comparisons difficult. The BWR’s simplicity made it less likely to suffer a core-damaging accident, but the consequences were likely to be more severe if a core meltdown occurred. Source: US NRC.

There were few standards or data that allowed a clear-cut determination on the best designs for reactor safety. Static systems might seem to be better than active ones, but what about an active system with multiple backup features and redundant systems? The AEC groped for ways to compare the safety of different designs. Given these uncertainties, the AEC was divided on prioritizing static containment. Some staff and ACRS members argued that accidents were inevitable and “the containment should be designed to withstand the worst consequences of any credible accident.” But a powerful dissenting camp emerged around Milton Shaw, director of the Division of Reactor Development and Technology (RDT). RDT was a microcosm of the AEC’s conflicted dual mandate. It supervised safety testing for the regulatory staff but was controlled by the agency’s promotional wing. Its critics claimed it favored development projects over safety research, particularly the development of the breeder reactor, a favorite project of Congressman Holifield.50 Over the years, Shaw disagreed with the regulatory staff and ACRS over priorities in safety research. Some members of the staff believed Shaw simply ignored their safety concerns, but Shaw’s differences with the staff were also philosophical. A protégé of Admiral Rickover, Shaw was a passionate advocate for the Navy’s emphasis on accident prevention and high quality in design and fabrication. He was confident that quality control could ensure reactor safety. He and his division “wholeheartedly” believed that AEC safety research was unnecessary and should be turned over to reactor vendors. The AEC, he believed, should focus on development of advanced reactor concepts. He steered most safety research resources to quality assurance studies that, he believed, could make a large pipe break impossible and mitigation features unnecessary. ACRS members thought that quality assurance could not eliminate the chances of an accident entirely; mitigation systems and research were essential.51

THE CHINA SYNDROME: CHANGING SAFETY APPROACHES
In 1966, advocates who favored containment as the last line of defense were undermined by, as ACRS member David Okrent recalled, “a revolution in [light-water reactor] safety.”52 During a LOCA, the core could melt down quickly due to the production of “decay heat,” energy from fission products—the radioactive fragments of the uranium atoms that had split apart during the chain reaction. Fission products release energy even after a reactor is shut down. Decay heat can produce 7 percent of the reactor’s full power right after a scram, enough energy to raise the fuel past its five-thousand-degree (Fahrenheit) melting point. The AEC thought a melting core could not escape the reactor’s massive pressure vessel, a big steel pot with walls six to eight inches thick that held the fuel rods. Between 1958 and 1964, however, research indicated that the fuel could melt, slump to the vessel bottom, and melt through it, too, landing on the containment building floor. It would then attack the concrete until it broke through the bottom. Once outside the containment building, the fission products might enter the water table or escape into the atmosphere. A joke about the glowing hot blob melting all the way to China led to the phenomenon being dubbed “The China syndrome.”53
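The decay-heat figures in this paragraph can be made concrete with the Way-Wigner textbook approximation, which is not the AEC's licensing model but captures the essential behavior: power falls off quickly after shutdown yet stays enormous in absolute terms. The 3,400-megawatt thermal rating below is an assumed round number for a large plant.

```python
# Decay heat after shutdown, via the Way-Wigner textbook approximation:
#   P(t)/P0 = 0.066 * (t**-0.2 - (t + T)**-0.2)
# where t = seconds since shutdown and T = seconds of prior operation.
# A standard approximation, not the AEC's model; inputs are assumed.

def decay_heat_fraction(t_sec, t_operating_sec=3.15e7):  # ~1 year at power
    return 0.066 * (t_sec**-0.2 - (t_sec + t_operating_sec)**-0.2)

P0_MW = 3400  # assumed full thermal power of a large plant (round number)

for t in [1, 60, 3600, 86400]:  # 1 s, 1 min, 1 h, 1 day after scram
    frac = decay_heat_fraction(t)
    print(f"t = {t:>6} s: {100 * frac:4.1f}% of full power "
          f"= {P0_MW * frac:6.0f} MW")
# About 6-7% (a couple hundred megawatts) in the first seconds after
# scram, and still roughly 16 MW a full day later -- heat that has to
# go somewhere, whether or not the coolant is still in the vessel.
```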
The China syndrome disrupted the AEC’s Three Ds safety approach. A breached containment and a fatal release of fission products during a design basis accident could be inevitable unless ECCS worked. GE and Westinghouse proposed two very different solutions. Westinghouse preserved static safety by beefing up its containment with a “core catcher,” a crucible of magnesium oxide bricks (often used in blast furnaces) positioned below the reactor vessel to catch and hold the molten core inside the containment building. GE’s small Mark I design could not handle such a redesign, and the company refused to alter or abandon it. GE officials argued that safety should rely on ECCS rather than containment, since ECCS was now so reliable that a core meltdown was not a credible accident.54

The debate over the GE proposal hinged on three key safety principles: redundancy, diversity, and defense in depth. The AEC staff and many members of the ACRS accepted GE’s design as safe enough because a LOCA was improbable and GE’s ECCS contained two separate redundant systems.55 Members Stephen Hanauer and David Okrent argued that redundancy was not enough. Hanauer, an engineering professor and researcher at Oak Ridge National Laboratory, wanted safety systems to be diversified with two systems “of a different species” using different principles of operation that the GE design lacked. Safety systems that relied on the same principles of operation, he argued, were vulnerable to common cause failures, such as a fire that cut all power to ECCS pumps or a common manufacturing defect in the same model of valves. If a single event can disable multiple systems, they are not independent of each other. The static Westinghouse core catcher, he noted, was independent and diverse from ECCS. GE’s two active ECCS systems were not. Okrent agreed, adding that GE had abandoned the historical primacy of static systems over active ones in defense in depth. “The old criterion [stated] that containment, which is a static engineering safeguard, is more reliable than active measures such as core cooling. . . . The proposals for reliable core cooling contradict this accepted point of view.” Okrent scribbled in his meeting notes, “Can’t go along with writing this off.” Often in the minority, Okrent persisted, and the ACRS debate dragged on inconclusively through the summer of 1966.56
To industry, the ECCS debate confirmed the ACRS’s reputation for unpredictability and excess caution. Okrent thought the ACRS tended to be more conservative than the AEC staff, but there were cases, such as seismic safety, where the staff took a stronger position. As members of a part-time advisory body, ACRS members had the power to act independently, and there was a strong desire to arrive at something close to consensus on important issues. On primary system integrity and the China syndrome, the ACRS debate was careful and protracted. For an anxious power industry, it seemed more like dithering. In 1965, Consolidated Edison of New York was almost frantic to get its Indian Point 2 reactor approved in anticipation of skyrocketing electricity demand in New York City. Yet the ACRS recommended additional improvements in its reactor pressure vessel to avert a catastrophic rupture. Reactor vendors responded to the new requirements with “incredulity,” reported Nucleonics Week. The industry asserted that large pipe and vessel failures were already incredible events. The ACRS was engaged in “academic” speculation. One industry observer complained, “You can analyze things forever and never get anything built. . . . I don’t know if it is coming to a ridiculous point.”57

The AEC’s five commissioners and Congress’s powerful Joint Committee on Atomic Energy tended to sympathize with the industry and worried about the financial burdens of the ACRS position. ACRS members protested that their primary duty was to safety, but Chairman Glenn Seaborg complained they were “piling one safety system on the back of another.” Craig Hosmer, a long-time member of the JCAE, attacked the ACRS before an appreciative industry audience as the “Advisory Committee on Reactor Redundancy. . . . I cannot help but wonder if ACRS has outlived its usefulness—if it now serves less as a protective boon than it does as an anachronistic burden.” The ACRS was influential but politically vulnerable.58

In August, the ACRS found a compromise. It supported the most pressing applications at Indian Point and Dresden with improved, redundant ECCS. But members wanted a different long-term solution. The group wrote a draft letter calling for an “independent means” of protection besides ECCS in future applications. The committee warned that current designs were “suitable only for rural or remote sites” away from cities.59 The AEC’s commissioners intervened to halt the letter’s issuance. Seaborg warned that if it became public, the letter’s impact “on the industry might be serious,” and proposed instead that a task force
explore the ECCS problem. Harold Price, head of the regulatory staff, agreed with Seaborg that the AEC should avoid “public attention to the problem.” In an executive session marked by hand-wringing over political retaliation against the ACRS and worries that the task force would be loaded with pro-industry members, the committee agreed to Seaborg’s plan.60 The composition of the “Task Force on the ‘Chinese Syndrome’ ” did not hearten the worriers. Many members represented industry and the rest came from AEC laboratories, including the task-force chairman, William Ergen of Oak Ridge National Laboratory.61 Ergen and other laboratory members emphasized the uncertainty of preventing pipe ruptures and the lack of proof regarding ECCS effectiveness. Industry representatives argued that core catchers were too uncertain to count on and too expensive to build. Eric Beckjord of Westinghouse reported industry did not favor containment design changes to accommodate an ECCS failure “since such a failure cannot be allowed to happen in the first place because of the melt-through problem”—that is, the China syndrome. Warren Nyer, a Manhattan Project veteran who worked at Idaho’s National Reactor Testing Station, disagreed. He thought the task force could not assume “that all routes leading to trouble are knowable; you can never answer the question as to whether you have covered all bets.”62 The industry position carried the day. In the minority, Ergen toyed with resigning as chairman, but satisfied himself with warning the ACRS that the forthcoming task force report was too optimistic and its “conclusions in many cases represent judgments rather than solid fact.”63

While the task force deliberated, GE campaigned for a probabilistic approach and active systems like ECCS. GE officials even suggested that their more reliable ECCS should allow them to reduce the safety margin in their containment designs. Glenn Seaborg noted in his journal that GE representatives met with all AEC commissioners to discuss their “concept of relying solely on reactor safeguards to prevent accidents (and not consequence limiting devices [such as containment buildings]).” GE officials, including GE Hanford veterans, met with AEC staff to recommend they become more familiar with a “probability approach” to accident analysis and accept a reliance on ECCS rather than containment. GE’s Robert Richards argued that a careful probabilistic evaluation of a spectrum of accidents could “buy” more safety than the AEC’s focus on design basis accidents. He dismissed the “almost mystical belief that the containment provided protection” and thought a DBA and core meltdown were only possible during a natural
disaster, such as an earthquake. He warned that GE might quit the industry if AEC requirements were not liberalized.64

GE also pressed its case before Congress. In oral and written testimony before the JCAE, Richards expanded on his belief that the AEC’s “emphasis on the consequences of unchecked accident situations” produced less safety than a probability approach. He made a direct appeal for new fault-tree and probabilistic methods to be deployed to provide greater realism to accident analysis. They could, GE felt, “assess the relative importance of various postulated accident situations. We believe that probabilistic methods can be successfully applied to safeguards evaluations for this purpose. Techniques of decision theory can then be applied to determine what course of action will result in optimum safety system designs. This approach will also indicate those areas of the safety program where the most worthwhile improvement in safety technology can be made.”65 In sum, GE argued, a probability approach would save money and improve safety.

GE took its campaign to international forums, where it found a receptive audience. By the late 1960s, Canada, Britain, and West Germany had taken the lead in advocating for the probabilistic “Canadian regulatory approach.” In early 1967, Solomon Levy laid out GE’s philosophy at an International Atomic Energy Agency conference in Vienna, Austria. He criticized the AEC for emphasizing design basis accidents and placing “the entire burden of protection upon the containment.” Presciently, he argued that the AEC’s focus on a few extreme accidents could make a reactor vulnerable to other ones that needed different safety systems and operator training, as was later the case during the Three Mile Island accident.66 Levy called for the consideration of a broad range of accidents and the optimization of plant safety systems to respond to scenarios in proportion to their risk rather than focus on just a few DBAs. This “systems approach” tied together a reactor’s complex safety features in one mutually reinforcing unit. Containment alone was not to be relied upon to halt a core meltdown; it simply had to be strong enough to deal with a LOCA where ECCS worked enough to halt a meltdown. Levy was confident in the reliability of GE’s new ECCS design because it had three separate active systems. He dismissed large core meltdowns as “unbelievably severe accidents.” Drawing on company fault-tree analysis, he offered a probabilistic estimate that GE’s ECCS design had a reliability of 0.99999—just one chance in one hundred thousand of failing during a LOCA, which implied a probability for a major reactor accident much less than one in one million. Levy’s estimate was met with skepticism by some attendees, but others voiced support for a reliability approach to safety.67
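The arithmetic implicit in Levy's claim is a single multiplication. The ECCS figure below is his; the LOCA frequency is an assumed, deliberately pessimistic illustration, since the text does not report GE's value.

```python
# The arithmetic implicit in Levy's estimate. The ECCS reliability is
# Levy's figure; the LOCA frequency is an illustrative assumption.
eccs_reliability = 0.99999             # Levy: 1-in-100,000 failure chance
p_eccs_fails = 1.0 - eccs_reliability  # 1e-5 per demand

loca_per_reactor_year = 1e-1  # assumed: one large LOCA per ten
                              # reactor-years, pessimistic on purpose

p_major_accident = loca_per_reactor_year * p_eccs_fails
print(f"P(major accident) ~ {p_major_accident:.0e} per reactor-year")
# -> ~1e-6 even with the pessimistic LOCA frequency; a rarer LOCA
# would push the product "much less than one in one million."
```

The product only means something if the pipe break and the ECCS failure are statistically independent, which is precisely the common-cause assumption that Hanauer and Okrent had attacked.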
The AEC would not reduce its containment requirements. Clifford Beck responded that GE did not understand the philosophy of defense in depth and the value of “independent lines of defense which differ in nature and in objective, if one is improved or revised, it does not follow that corresponding alteration in the other should be made.” An improved ECCS could not eliminate the probability of an accident altogether, he noted. The staff pointed out that it was not yet feasible to establish numerical probabilities for large accidents or probabilistic safety standards for individual plants.68

GE’s probabilistic approach won a partial victory, however. The publication of the Ergen task force report elevated ECCS to the first among equals in defense in depth. Assigned by the ACRS to look at new ECCS designs and ways to cope with a meltdown, the task force gave itself a different assignment, much to the consternation of ACRS members. It simply determined whether existing ECCS designs were adequate or might be improved. Framed this way, the report leaned toward GE’s emphasis on preventive systems rather than containment. The Ergen report was the first public admission that the China syndrome was possible, but it considered ECCS reliable enough to prevent it. Core catchers were unproven, unnecessary, and expensive. In keeping with Levy’s systems approach, containment buildings only needed to limit the consequences of a LOCA where ECCS operated sufficiently to prevent significant core melting. The acceptability of ECCS should be determined “on a disciplined and quantitative basis.” The report also recommended that the AEC’s safety research program focus more on analyzing ECCS performance and less on unlikely meltdowns.69

GE proved persuasive for political, economic, and technical reasons. Levy’s systems approach provided coherence to a tangle of safety systems tacked on to designs higgledy-piggledy over the years. Designing a system to pump water, as ECCS did, seemed easy. Adding another system, the core catcher, was a journey into the unknown fraught with unanswerable questions. Would it work? Or would the white-hot molten core cause a containment-bursting steam explosion when it dropped to the flooded floor? The only way to find out was a large, potentially dangerous test. It was easy to imagine an expensive R&D program that failed to develop a workable design. Even Westinghouse, which proposed the core catcher, abandoned the proposal.70
The China syndrome also upended safety research. From the vendors’ perspective, the AEC wasted time and resources researching highly improbable accidents. The ductility of reactor piping, they asserted, precluded the chance of a sudden pipe break. Vendor computer models indicated that ECCS designs had ample capacity to prevent a meltdown. As a result, GE officials had a hard time taking AEC safety concerns seriously. At a disagreeable meeting in early 1968, the AEC staff tried to coax reactor vendors into doing more safety research. Philip Bray, a leading engineer at GE, asked with exasperation “if the AEC would ever have enough information to relax its conservative requirements.” There was little chance of “unexpected behavior,” and he drew a graph depicting the “real world” at one end and a design basis accident at the other.71

The industry lobbied to recast the AEC’s safety research program in the Ergen report’s image. GE’s Richards complained to Milton Shaw that the agency’s safety research had done little on accident prevention and too much on the “consequences of unchecked accident situations.” Richards was referring to the LOFT program, a small reactor at the National Reactor Testing Station in Idaho slated to conduct a complete large-break LOCA and meltdown test. LOFT was where much of the meltdown trouble started. Modeling of the LOFT test performed in 1963–64 provided an early indication that the China syndrome was possible. Richards wanted LOFT to focus instead on practical engineering research aimed at demonstrating ECCS effectiveness—a proof-test typical in engineering disciplines. The Atomic Industrial Forum (AIF), an industry organization, chimed in, criticizing the AEC’s penchant for studying accidents “which have only a remote probability of occurrence.” It wanted regulators to learn about and use probabilistic analysis to study more common events with lower consequences. Research on core meltdowns was unnecessary since “a major meltdown would not be permitted to occur.”72 That the industry believed it had the power to forbid a meltdown spoke to the confidence among nuclear experts that such an accident was almost impossible.

Nevertheless, the industry prevailed. LOFT as a core meltdown test was out; LOFT as a proof of ECCS effectiveness was in. Although the ACRS and regulatory staff favored research on molten core behavior, mitigation systems, and the potential generation of explosive hydrogen from damaged fuel, both industry and Shaw’s division had other priorities. The AEC’s new safety philosophy and research program was a messy combination of engineering judgment and a political balance of power within the agency that mostly favored accident prevention research over severe accident behavior. The neglect of research into the behavior of molten cores persisted until 1979, when the Three Mile Island accident opened funding for such experiments.73

FIGURE 9. In the early 1960s, the AEC conceived of the Loss-of-Fluid Test (LOFT) as a reactor meltdown test following a full loss-of-coolant accident. With recognition of the potential for a “China syndrome” breach of containment, the agency reoriented LOFT to test the effectiveness of emergency core cooling systems. The test reactor was mounted on a rail car and rolled into the containment building. After the test, it was to be rolled back into a “hot shop” for inspection. Source: US AEC/DOE Flickr, HD.6D.051.

The Ergen report capped off a destabilizing shift from certainty to uncertainty. Until 1967, the Three Ds—design basis accidents, deterministic design, and defense in depth—had provided safety certainty with simple robust designs. More complex designs that leaned on ECCS for safety created new regulatory and public relations headaches. Looking back on 1967, Alvin Weinberg, the director of Oak Ridge National Laboratory, recognized the China syndrome debate as having “profound repercussions.” With a fully functional containment, the AEC had been able to argue that “the consequence of even the worst accident was zero. . . . [Instead] we had to argue that, yes, a severe accident was possible, but the probability of it happening was so small that reactors must still be regarded as ‘safe.’ ”74
Quantifying the probability of a reactor accident became more urgent even as the AEC struggled to avoid doing so. While the pivot to ECCS provided an immediate answer to the China syndrome and allowed licensing of new reactors to proceed, it raised complex questions: Would upgraded ECCS systems work as designed? And, even if they did, what was the probability that the system might fail due to some malfunction, error, or external event?75 The AEC was already under fire from a growing environmental movement on multiple safety issues. As it searched for answers to these questions, the AEC confronted disturbing results from ECCS research and was caught off guard by new credible accident scenarios. To keep up with the industry and reassure the public, regulators needed to get smarter with new experiments, computer accident models, and, finally, probabilistic risk assessment.
3
Beyond the Design Basis: The Reactor Safety Study
As the Emergency Core Cooling System (ECCS) became the last line of defense in depth, the AEC launched a massive research program through its national laboratories. Oak Ridge took on experiments with fuel element behavior at high temperatures. The Idaho National Reactor Testing Station conducted tests and computer modeling of a loss-of-coolant accident (LOCA), of ECCS performance, and of a reactor’s thermal and hydraulic behavior. One vital question dealt not with probabilities but with whether ECCS designs would supply cooling water to the fuel fast enough to prevent it from overheating. Unlike the AEC’s thorough testing of reactor core stability before vendors developed commercial designs, regulators settled on the ECCS as a safety solution first and tested it later. ECCS design seemed like little more than a plumbing problem. As AEC staffer Roger Mattson recalled, proving that an ECCS worked would be simple. After a pipe break, the coolant would all escape as steam during the “blowdown phase” until the reactor depressurized. An ECCS need only pump water back in fast enough during the “reflood phase” to keep the zirconium cladding from heating up and melting at about 3,300°F. “What’s so hard about that? Of course, it would work,” Mattson recalled. “Ha! Little did we know all the problems that would portend.”1

Portents came from all directions. Tests at Oak Ridge and Argonne National Laboratory showed that the long tubes containing uranium fuel pellets—called fuel rods or cladding—would deform or disintegrate
hundreds of degrees below their melting point, and the cladding might bulge outward in spots, forming small balloon shapes. The balloons could shatter into dust or pack tight against each other, blocking the flow of cooling water. The cladding might also embrittle during the reflood phase when cool ECCS water hit the fuel rods, causing them to shatter “catastrophically” like a scorching hot Pyrex baking dish thrown into frigid water. With the cladding destroyed, fuel pellets inside the tubes would tumble to the reactor vessel bottom in a big uncoolable pile and melt. These results placed AEC staff at odds with the industry, which doubted the plausibility of Oak Ridge’s results. At a meeting in early 1968, an AEC representative sarcastically asked vendors what might be “an acceptable pile of rubble at the bottom of the reactor vessel.” The AEC’s Morris Rosen thought they “may have to ask whether ECCS really works.” In early 1968, AEC engineers concluded that the “margin of safety for present [ECCS] designs is not large, at best.”2

There was double trouble in Idaho. Scaled test rigs of the Loss-of-Fluid Test (LOFT) reactor coolant system indicated ECCS water might not make it to the fuel in time to cool it. AEC-funded computer accident modeling programs, called codes, found other problems. Idaho’s Reactor Loss-of-Coolant Accident Program (RELAP) code for a LOCA showed that the core might be starved of cooling water for a long time at the beginning of an accident. Alarmed by the results, code development leader Larry Ybarrondo noted in his professional journal that the margins of conservatism “are not there as analyses have improved.”3

Vendors had their own codes, which they believed disproved Idaho’s findings. In 1970, Westinghouse unveiled its advanced System Accident and Transient Analysis of Nuclear Plants (SATAN) computer program to prove the temperature of the cladding would not exceed 2,000°F, well below the informal 2,300°F limit set by AEC staff. AEC staffers Morris Rosen and Robert Colmar discovered mistakes in Westinghouse’s calculations. Their superior, R. C. Young, reported, “it has become evident that the SATAN code not only does not provide assurance of lower core temperatures but raises serious questions about [its] ability to reliably calculate the response of the core during the initial blowdown phase. . . . The consensus of those who have reviewed the situation is that a serious problem has been uncovered for all PWR plants.”4 The staff reported that SATAN had lessened their “confidence in the ability of emergency core cooling systems to provide adequate heat removal following a loss of coolant.”5 Glenn Seaborg warned JCAE Chairman Senator John Pastore that licensing delays were possible.6
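A crude energy balance conveys what was at stake in these code fights: once the core is uncovered, decay heat drives the temperature up at a rate set by the core's thermal mass. Every number below is an assumed round value for illustration; real codes such as RELAP and SATAN modeled two-phase thermal-hydraulics, not a single lump.

```python
# Crude adiabatic heat-up of an uncovered core. All numbers are assumed
# round values; this is an illustration, not a real LOCA calculation.
decay_heat_w = 1.5e8    # ~1.5% of a 3,400-MWt core, minutes after scram
core_mass_kg = 1.5e5    # assumed fuel-plus-cladding thermal mass
heat_capacity = 300.0   # J/(kg*K), rough average for UO2 and zircaloy

# With no cooling, temperature rises at P / (m * c).
heatup_rate = decay_heat_w / (core_mass_kg * heat_capacity)  # K/s
print(f"Heat-up rate with no cooling: ~{heatup_rate:.1f} K/s")

start_f, limit_f = 600.0, 2300.0           # degrees F: hot standby to limit
delta_k = (limit_f - start_f) * 5.0 / 9.0  # convert the F interval to kelvins
print(f"Time to reach the 2,300 F cladding limit: "
      f"~{delta_k / heatup_rate / 60:.0f} minutes")
```

Even on generous assumptions the margin is minutes, which is why a code error that overstated blowdown or delayed reflood mattered so much.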
As uncertainty increased, the AEC's search for reliable knowledge grew urgent. The regulatory staff needed better data, more varied expertise, and new computer modeling tools. Their reviews of reactor license applications had been relatively crude checks for inconsistencies. Mattson recalled that he tacked up on the wall of his office a handcrafted spreadsheet of key technical data for similar reactor plant proposals. When some value in a new application did not match previous ones of similar design, he asked the vendor why the numbers changed. The new codes at Idaho, however, could get inside a design, do an independent assessment, and model different plant conditions.7

Staff access to the codes was constrained by Milton Shaw's division of research and development. In Shaw's view, his resources were devoted to developing accident codes for his LOFT program, not ongoing licensing issues. Roger Mattson and fellow AEC staffer Saul Levine resorted to sneaking to the Idaho lab for a secret night meeting with experts there to perform computer analysis on new reactor license applications "off-the-books."8 Idaho's computer codes revealed weaknesses in vendor calculations, and the move created a storm within the agency. Idaho supervisor George Brockett reported to Ybarrondo that the AEC had been "castigated [by industry] for using LOFT to test [industry] codes and standards." Nevertheless, the value of the codes had been demonstrated, and regulatory staff reliance on Idaho increased, as its knowledge and tools put them on a more equal footing with vendor experts.9

These unsettling research results soon became public. A flourishing antinuclear movement took advantage of the internal AEC conflict. The movement included well-funded, capable groups such as the Union of Concerned Scientists and lawyers such as Myron Cherry. As one Idaho supervisor warned his subordinates, antinuclear activists were "getting stronger and smarter" just as research and code results delivered disquieting news on ECCS systems.10 One Idaho researcher confessed, "The more we worked this problem the more it fell apart in our laps. Everything we did to analyze [accident scenarios] pushed our predicted temperatures higher toward melting and the margins of fuel safety lower."11 Activists hoped to use the new ECCS results in plant licensing hearings where they were "intervenors" in the proceedings and could oppose the issuance of a license until ECCS was proven effective.12 Up to fifty-six applications might be delayed, the press reported, a possibility that "sent a shock wave" through the industry.13

Under pressure from industry and antinuclear activists, the AEC opted to draft uniform acceptance criteria for ECCS systems. It created
a task force under ACRS member turned AEC staffer Stephen Hanauer.14 Hanauer was one of the more interesting personalities at the AEC. The most common words AEC veterans used to describe him were "brilliant" and "narcolepsy." The latter disorder afflicted him often. He routinely fell asleep in the middle of conversations, major meetings, and even on the witness stand during hearings. Yet he could suddenly rouse himself from his slumber and fire off the most trenchant question in the room, and he fired often. He kept a "nugget file" of plant incidents that he thought raised safety questions and forwarded them to other staff with comments like, "This one was too good to pass up," and "Someday we all will wake up." Roger Mattson recalled that one supervisor kept a file drawer of these missives labeled "shots from Steve." "He was an enigma," Mattson said.15

During the ECCS controversy, most of the shots were fired not from, but at, Hanauer. His task force made a controversial decision to modify the AEC's traditional practice of imposing the worst credible values in calculations, referred to as "conservatisms." Hanauer argued that the AEC's traditional practice resulted in a "cascading of conservatisms" that produced simulations with unrealistically dire results. In the new Interim Acceptance Criteria, the task force opted for "suitably conservative" calculations that used some less-than-conservative data and assumptions.16

Not everyone accepted the new "realistic" interim criteria, and some questioned the task force's failure to consult experts in the national laboratories. They suspected Shaw had cut the laboratories out because their research delivered bad news. True or not, the suspicions were indicative of the poor relationship between Shaw and the national labs due to conflicts over research on advanced reactor designs, and Shaw's sacking of lab management and rejection of their safety concerns.17 Within the AEC staff, Robert Colmar and Morris Rosen attacked the interim criteria as unjustified until better data were available.18 The Hanauer task force demurred, accepting some technical uncertainty if there was no strong contrary evidence. Rosen and Colmar did not accept uncertainty when safety margins seemed small.19

Unless the new criteria were codified in AEC regulations, they would be open to question at every licensing hearing. To avoid this, the AEC opted for rulemaking hearings to develop evidence for having the ECCS criteria placed in the Code of Federal Regulations. There were twenty-three operating plants, fifty-four under construction, and forty-nine planned. With power shortages forecast in the Northeast, putting the ECCS issue to bed was crucial.20
THE ECCS HEARINGS
Antinuclear activists pooled their resources in the ad hoc Consolidated National Intervenors and became a party to the hearings. Daniel Ford of the Union of Concerned Scientists and lawyer Myron Cherry aimed to expose what they believed was the AEC's cover-up of negative research results on ECCS. They warned that a major accident might bring "a peacetime catastrophe whose scale might well exceed anything this nation has ever known."21 Cherry recalled that he aimed to put the AEC on trial to "allow [The New York Times] to write a headline. I didn't care at that point about getting an answer to prove we were right or we were wrong [about ECCS]. I was out to stop the agency because I thought they were dishonest."22

Ford and Cherry had no technical training, but they did not need it. Ford became the technical expert by learning the ropes from dissident AEC insiders. Ford recalled he drew on the wisdom of dozens of experts at Oak Ridge, Idaho, and inside the AEC's Washington headquarters staff. "I spent days being tutored. It was like a spy novel." To obtain assistance and documents, Ford took steps to elude suspected surveillance by taking multiple taxi rides and ducking into malls. Anonymous AEC staffers gave Ford and Cherry a line-by-line critique of the Hanauer task force's written testimony for use during cross-examination. "[Morris] Rosen was one of our biggest sources," Ford noted. In one phone conversation, Ford talked to him for "four or five hours. My ear was aching. I had cramps in my fingers from taking notes."23 Rosen and Robert Colmar were so helpful that Cherry and Ford gave them secret superhero names, "Batman and Robin."24

Cherry was the antinuclear movement's most effective lawyer. He was so good that he earned a front-page profile in the Wall Street Journal. One utility official said of Cherry: "He's an obnoxious SOB, but unfortunately, he's also the best trial lawyer I've ever seen."25 The duo carried out effective, damaging cross-examinations of expert witnesses. Outside the hearing room, they released non-public records and research that indicated the AEC may have squelched staff dissent, demoted Rosen and Colmar from ECCS responsibilities, and pressured witnesses. The AEC contested these claims, but the disputes received the extensive coverage Cherry hoped for in the nuclear industry trade press, Science magazine, and the New York Times.26

In addition to their challenge to the AEC's integrity, Ford and Cherry raised doubt about its technical competence. Industry observers were
startled at Ford’s effective interrogations, particularly his dismantling of Milton Shaw on the witness stand. Shaw’s prowess at congressional testimony was legendary. He dazzled representatives with his command of policy and details. Yet when Ford asked him simple technical questions about his written testimony, he “literally wilted,” a reporter wrote, and was “verbally floored.”27 The AEC was so divided that even some regulatory staff were rooting for Ford. Roger Mattson confessed he watched Ford work Shaw over with “joy in my heart.” “There was no great love for Uncle Milt” among the staff, Mattson noted, “because his testimony supported the industry’s position not those of the regulatory staff.”28 Shaw’s humiliation was a battlefield victory, but the AEC won the regulatory war over ECCS. Cherry, Ford, and the dissenting experts poked holes in the acceptance criteria, but from the intense technical probing emerged defensible regulatory criteria that endured for the next forty years. To really halt licensing, the intervenors needed to show that the criteria were unsalvageable, not just in need of more analysis and research. The AEC simply opted to do both. New resources flowed to ECCS research, allowing the agency to conclude it was reasonably effective. Overall, industry officials reported they were pleased with the hearing results. However, the new criteria and required upgrades to existing plants were so expensive that Consolidated Edison of New York decided to close its relatively small Indian Point 1 reactor. It offered several operational alternatives to reduce risk, but the AEC remained firm on required design modifications. The AEC’s safety criteria, in effect, regulated one plant out of existence.29 For the AEC, the hearings were a public relations setback. Physicist Ralph Lapp called them the agency’s “technolegal Vietnam.”30 Revelations of its internal divisions over safety reinforced doubts about the AEC’s objectivity and competence. The ECCS hearings added to the agency’s embarrassing 1971 federal court loss in the Calvert Cliffs nuclear power plant licensing case. In a harshly critical decision, a federal circuit court compelled the AEC to write extensive Environmental Impact Statements (EIS) on new power plants. District court judge Skelly Wright ruled that the AEC’s refusal to write a full EIS on the facility “makes a mockery” of the National Environmental Policy Act (NEPA). Already criticized as a secretive, pronuclear agency, the Calvert Cliffs case forced the AEC to engage its critics in meaningful dialogue. The AEC’s regulatory program was only 1 percent of the AEC budget, but regulatory problems dominated headlines. The “almost prostrate” regulatory division, as one industry newsletter described it, was the AEC’s
biggest problem. The Nixon White House was also concerned about the too-close relationship between the JCAE and the AEC.31 The solution was to appoint outsiders to a commission whose membership typically came from scientists and lawyers within the nuclear community. In 1971, President Richard Nixon appointed James Schlesinger, a RAND Corporation economist, to replace Glenn Seaborg as chairman. Nixon followed with the appointment of William O. Doub, a lawyer and chairman of the Maryland Public Utilities Commission, and, a year later, Dixy Lee Ray, a biologist. Doub recalled that they came with a mandate to launch "a big house cleaning job."32 Schlesinger told one staff member that the AEC staff was "living in Lotus Land," and the regulatory side needed "major surgery."33 An internal analysis of the agency described its organization as "inbred, traditionally oriented, and non-adaptive."34 Doub quickly turned to "putting the screws" to AEC management, which resulted in a "meteoric growth" of the regulatory budget. Doub recruited other outsiders to the regulatory staff, such as Manning Muntzing, a telephone industry lawyer, who became director of regulation. "Muntzing didn't know anything about nuclear power," Doub said, which made him "free of any kind of bias or prejudice."35

The new guard quickly came into conflict with the old as Schlesinger and Doub crossed swords with commissioner James Ramey. Ramey was an ardent New Deal Democrat and anticommunist who was married in the apartment of his hero, theologian Reinhold Niebuhr. After working as a lawyer in the Tennessee Valley Authority and the AEC, he served as executive director of the Joint Committee before receiving his appointment to the AEC from President John F. Kennedy in 1962. The New York Times described him as the "single most influential member of the Commission in the past decade."36 After intense discussions, Schlesinger and Doub prevailed in setting a new course; the AEC would comply with the Calvert Cliffs decision. For the remainder of his term, Ramey's avid pronuclear views put him at odds with the other commissioners even as he retained considerable influence.

Schlesinger and Doub surprised an industry gathering by announcing that the AEC would no longer fight the industry's battles with its critics. But they also resisted talk that the AEC be broken up. They hoped internal reforms would be enough. The commission feared a loss of quality among the regulatory staff and believed the existing separation of the promotional and regulatory staff made the move unnecessary.37

In 1972, the tide turned toward an agency split. In January, as the ECCS hearings began, the JCAE chairman, Senator John Pastore, wrote
Schlesinger to suggest that a committee address the perception of an "inherent conflict" within the agency.38 Six months later, Pastore took a firmer position that "for the long run benefits of the peaceful atom" the AEC's promotional and regulatory roles "should be separated."39

The move toward a split indicated pronuclear forces were losing power, which spilled over to safety research. The triumvirate of James Ramey, Milton Shaw, and Representative Chet Holifield supported turning most safety research over to the vendors and compelling the national laboratories to focus on developing the breeder reactor, Holifield's favored project. At various times, Shaw forced out directors who dissatisfied him at the Idaho National Reactor Testing Station, Argonne, and even Oak Ridge's highly regarded head, Alvin Weinberg, who fell from Holifield's good graces over reactor safety. In 1972, Holifield stunned Weinberg by informing him, "Alvin, if you are concerned about the safety of reactors, then I think it may be time for you to leave nuclear energy." At the end of the year, Weinberg stepped down.40 Within the agency, Shaw was the tip of Holifield's spear. Tom Murley, an AEC staffer and future NRC director, recalled Shaw as an untouchable "bigfoot."41 Nobody challenged him.

Until somebody did. In 1973, Shaw resigned, and James Ramey was not reappointed as commissioner. Holifield retired a year later, and there was serious discussion of breaking up the JCAE. The arrival of new AEC commissioners from outside the nuclear industry had shifted the balance of power. The unlikely force behind the shift was Dixy Lee Ray. A University of Washington professor and head of Seattle's Pacific Science Center, Ray joined the commission in 1972 with the support of President Nixon's domestic affairs advisor John Ehrlichman. Greeting her appointment with unconcealed sexism, Nucleonics Week noted that Nixon only appointed the "spinster" after "scouring the rolls of distaff academia" and that "industry reaction was more amused than serious." Observers predicted that Ray's presence would "not seriously affect the work of the Commission."42

The prognosticators were mistaken. When, in 1973, Nixon nominated Schlesinger to head the CIA, Ray was promoted to chairman. Ray's academic background disposed her favorably toward the national laboratories in their conflicts with Shaw. Shaw, she concluded, was out to "destroy" Oak Ridge. She was equally "determined that that fine institution should live forever."43 Stripping away some of Shaw's power, she announced the formation of a new office of safety research, giving regulators more control over the safety research agenda. One source told an
industry publication, "They're dancing in the streets of Idaho."44 Rebuffed by the commissioners, Shaw left at the end of June.45

Ray went after Ramey next, winning the support of the other commissioners for a letter to Nixon requesting that he not be reappointed. Ramey, the four commissioners wrote, was a man bogged down by "ties to a world that no longer exists" and was incapable of the "independent thinking" needed to "harmonize the conflicting demands of energy sources on the one hand and the demands of safety and environmental quality on the other."46 Ramey, the longest-serving commissioner in AEC history, was not reappointed.

Ray's move had broken the hold of the JCAE on the agency and elevated the importance of reactor safety research. While an angry Holifield and Craig Hosmer called Ray on the carpet, the JCAE generally considered the reorganization internal agency business. Seventy-two-year-old Senator Stuart Symington of Missouri told the fifty-nine-year-old Ray, "You stick to your guns, young lady." As the New York Times reported, Ray had done what her male predecessors never did, "establishing the commission's independence from the domineering Congressional committee." Ray's move was a power play, but it also addressed technical needs made obvious by the ECCS hearings. Ray seemed to admit that Shaw's critics were right that safety research had not been independent. The new safety division, she said, would provide regulators with "greater emphasis and effectiveness in our safety research programs" and give "new directions and a renewed dedication to safety research [which] will help to speed resolution of the still unanswered questions in this rapidly developing technology." The division's new director dismantled many of Shaw's mandates for the labs.47

For activists and a growing minority of the public, these moves were too little, too late. Shortly before he left the AEC in 1974, Commissioner William Doub assessed the damage. The AEC-industry alliance to sell nuclear power had created public distrust. "No one seemed to realize that these slanted 'hard sell' activities were counterproductive; that in fact they appeared antithetical to the AEC's responsibility for regulating the use of nuclear energy in the public interest; and that a too intimate association with industry could easily lend itself to a charge of compromising the public health and safety." While he believed the agency was striving to make amends, he feared that the "misjudgments and inappropriate actions of the past" would haunt future regulators.48

The success of UCS in raising questions about the AEC's integrity and its aura of technical expertise energized the antinuclear movement. UCS's
most significant coup was recruiting consumer activist Ralph Nader to antinuclear activism. Nader praised the group's influence: "The Union of Concerned Scientists has performed a public service which will go down in history."49 Pronuclear forces inside and outside the agency were substantially weakened, and nuclear regulators saw their credibility damaged.

Establishment of an office of safety research set the AEC up for a relatively painless split and the creation of the Nuclear Regulatory Commission. In addition to its conflicts of interest, the AEC's singular focus on nuclear energy was anachronistic after the Arab oil embargo. Congress wanted a federal agency that developed a broad array of energy sources, and it passed the Energy Reorganization Act in 1974 to split the AEC into the Energy Research and Development Administration (ERDA) and the independent Nuclear Regulatory Commission to oversee nuclear safety regulation. In 1977, Congress dissolved the Joint Committee on Atomic Energy, and ERDA was rolled into the new Department of Energy. With a statutorily protected Office of Nuclear Regulatory Research, safety research gained a prominent role within the NRC, especially in resolving questions raised during the ECCS controversy and in developing risk assessment.

While antinuclear activists dismissed the NRC as the AEC with a new name and stationery, they overlooked the dramatic improvement in the staff's technical capability made necessary by safety controversies. It was true, as critics noted, that the NRC simply absorbed the old AEC staff and regulations unchanged, but it was a far more capable regulatory agency than its predecessor. By January 1975, when the NRC began operations, it had grown to almost 1,500 personnel.

The ECCS controversy committed the regulatory staff to a substantial research agenda, including work on risk assessment. The large research budget was dubbed "the ECCS mortgage" to fulfill the implicit promise made at the ECCS hearings that new research would reduce remaining areas of uncertainty. By the late 1980s, the NRC alone had spent a cumulative $700 million to improve ECCS systems by studying their behavior and creating sophisticated accident models. The AEC-sponsored LOFT work at Idaho on thermal-hydraulic modeling became a basis for the field of computational fluid dynamics and established some of its most important codes, such as RELAP-5, with applications in nuclear power, the chemical industry, and even bioengineering. By ensuring that there was adequate protection against the consequences of a LOCA, the ECCS rulemaking, code development, and
research had restored stability to the licensing process, expanded understanding of accident progression, and created new science and computer modeling to solve emergent safety issues. The NRC created what the AEC could not: a stable base of knowledge from which to regulate. Roger Mattson believed that the NRC became committed to maximizing regulation through research: "That agency and its intensity for technical truth are unparalleled in my experience in government anywhere in the world."50

As controversy raged in 1972–73, meanwhile, some regulatory staff had been quietly at work on a new safety model based on probabilistic risk assessment: a major report that would quantify realistic accident probabilities and consequences. Closure of the ECCS issue and the new report, these officials hoped, would prove nuclear power was safe enough and reassert the AEC's technical authority.

THE REGULATORY NEED FOR RISK ASSESSMENT
In the late 1960s, reactor vendors, led by GE, were well ahead of the regulatory staff in developing fault-tree expertise for individual safety systems. Fixated on a predetermined set of credible accidents, regulators were not prepared to deal with incredible accidents that suddenly gained credibility.

One such accident went by the inelegant name Anticipated Transient Without Scram (ATWS). Anticipated transients are common plant events, typically caused by human error or equipment malfunctions, and plants are designed to handle them. For example, the feedwater system pumps water that cools the nuclear reactor and boils to steam to operate a turbine generator producing electricity. If a feedwater pump fails, plant instruments detect the transient and typically send an automatic signal for a reactor "scram," in which neutron-absorbing control rods are inserted into the fuel to stop the chain reaction. But what if the rods do not automatically scram? In some unlikely situations, a failed scram could lead to a violent, containment-bursting power surge, particularly in GE's BWRs.

Reactor vendors had considered scram system failures highly improbable events given their redundant (backup) channels and numerous control rods. A failure to scram required the simultaneous failures of multiple components.51
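The power of that redundancy logic, and its blind spot, is easy to see in a few lines of arithmetic. The sketch below is illustrative only: the channel count and probabilities are hypothetical, and the "beta factor" treatment of common-cause failure is a standard device of later probabilistic risk assessment rather than anything the vendors used at the time.

```python
# Why redundancy arithmetic produced such tiny numbers, and why
# common-cause failures undermine it. All probabilities are hypothetical.

p_channel = 1e-3   # assumed failure probability of one scram channel per demand
n_channels = 4     # assumed number of redundant channels

# If channels fail independently, all must fail together:
p_independent = p_channel ** n_channels            # 1e-12: vanishingly small

# A beta-factor model instead ties a fraction of each channel's failures to
# a shared cause (a fire, a common manufacturing defect) that defeats every
# channel at once:
beta = 0.1                                         # assumed common-cause fraction
p_common = beta * p_channel                        # 1e-4
p_indep_part = ((1 - beta) * p_channel) ** n_channels
p_total = p_indep_part + p_common                  # dominated by the common cause

print(f"independent-only estimate: {p_independent:.1e}")
print(f"with a common-cause term:  {p_total:.1e}")
```

With even a modest common-cause fraction, the system failure probability is set almost entirely by the shared vulnerability, not by the number of redundant channels, which is precisely the kind of failure the GE relay defect described below represented.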
In 1969, E. P. Epler, a consultant to the ACRS, made scram failure a credible event. "The industry by not attempting to mitigate the 'China Syndrome,'" he argued, "has placed the entire burden of protecting the public on the reactor shutdown system." Without a stout containment, he continued, only two events separated a plant from catastrophe: an anticipated transient and a failed scram. Epler backed up his case with probabilities. An anticipated transient, he pointed out, was likely to occur every year at a typical plant, and experience showed that scram systems could fail at a rate of about one in a thousand scrams. If the United States built one thousand reactors, a scram failure could happen about every year.52

Much higher than previous estimates, Epler's probability figure included an estimate of common-cause failures that could disable all the redundant features of a shutdown system, such as an electrical fire or a common manufacturing defect that disabled all redundant scram systems at once. Such a defect had disabled the scram relays at a GE reactor in West Germany, a condition not discovered for two weeks. Similar ATWS-like episodes occurred in 1970 at Hanford and a test reactor in Arkansas. While the public was led to think a major accident was a one in one million chance, Epler said, experts knew better. The AEC had painted itself into a corner, "having claimed the [large radiation] release to be incredible, having failed to provide for civil defense against a massive contamination, and having failed to support an R&D program to combat the 'China Syndrome.'"53

If a failed scram really was a credible accident, it had the potential to halt reactor licensing. The purpose of the AEC's Three Ds was to ensure there were safety features for every credible accident. As Clifford Beck observed in 1959, "In the plants finally approved for operation, there are no really credible potential accidents remaining against which safeguards have not been provided to such extent that the calculated consequences to the public would be unacceptable."54 ATWS might push a reactor beyond the outer boundary of design safety. It was the first of a new category of "beyond design basis accidents" that regulators confronted in the 1970s and 1980s, such as "loss of electric power events" (that is, station blackouts) and large fires, such as the one that badly damaged the Browns Ferry plant in 1975. The staff concluded, "Current scram systems will not provide adequate protection against anticipated transients when there are a large number of reactors (e.g. five hundred to one thousand power reactors) in the U. S. On this basis, we recommend that either the reliability of reactivity reduction be substantially improved or the consequences of ATWS be shown to be acceptable."55
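The arithmetic behind both Epler's warning and the staff's fleet-size logic is compact. Using the round numbers quoted above, the expected frequency of a failed scram across a large fleet is

\[
\underbrace{1000\ \text{reactors}}_{\text{projected fleet}}
\times
\underbrace{1\ \frac{\text{transient}}{\text{reactor-year}}}_{\text{anticipated transients}}
\times
\underbrace{10^{-3}\ \frac{\text{failures}}{\text{scram demand}}}_{\text{scram unreliability}}
\approx 1\ \text{ATWS per year}.
\]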
If the staff concluded a licensed reactor lacked adequate protection from an ATWS, it would have to be shut down for a "backfit." Backfitting was the tool regulators used to require licensed plants to upgrade their safety systems to meet new minimum standards of adequate protection or for "enhancements" above the standard if it was deemed safety and cost beneficial. The nuclear industry complained that backfits were too common and of limited safety value. For example, the Humboldt Bay Unit 3 plant in Eureka, California, began operations in 1962, and by 1976 the AEC had required it to make twenty-two safety upgrades related to generic issues and to address another forty-two safety issues specific to its unique design.56 ATWS backfits might be very expensive, and licensees questioned the regulatory staff's safety case.

More than any previous safety issue, ATWS forced the AEC to think probabilistically. At an ACRS meeting in 1970, numbers flew. GE, the most probabilistically minded vendor, claimed the odds of an ATWS were less than one in four hundred trillion. AEC staff reported that all four vendors "consider the probability of failure to scram on anticipated transients to be so low that no additional protection is required."57 Hanauer, however, called the GE estimate "nonsense," and told his superiors that the vendor was using "fake probabilities."58 Oak Ridge estimated a scram failure probability might be as high as one in one hundred years. Eight consultants to the ACRS agreed that the best estimate was between one in one thousand and one in ten thousand years. British and Canadian experts offered estimates closer to one in one thousand, and the British thought it would take too many years of operating experience to demonstrate a reliability estimate of one in ten thousand years. One ACRS consultant criticized GE's analysis for missing numerous common mode failures, such as "human error, sabotage, or environmental states such as fires, floods, earthquakes, tornadoes, etc. . . . The AEC Staff figures of 10⁻³ [one in one thousand] for the unreliability of reactor scram systems is entirely reasonable. Certainly the GE value of 2.4 × 10⁻¹⁵ [~one in four hundred trillion] is entirely unreasonable." As the divergent failure probability estimates attest, ATWS was among a class of potential accidents, such as fires, pipe breaks, and sabotage, that defied easy quantification.59

Whatever the true probability of a failure to scram, staff decided to articulate an informal probability they believed provided adequate protection for a major accident. They debated probabilities between one in one million per reactor year of operation (10⁻⁶) and one in ten million (10⁻⁷). A 10⁻⁷ probability, they believed, was certainly acceptable. With a thousand reactors that would be just one accident every ten thousand years. "We believe that most people would consider this an acceptable probability. We find it difficult to decide whether a 1000-year interval
might be acceptable. However, we believe that it would be hard to obtain public acceptance for a 100-year mean interval between serious accidents." Any individual accident scenario, such as a failure to scram or damage from a major tornado, should be one-tenth of the overall accident probability. These goals were never formally adopted by the AEC, but as one staffer recalled, such informal probabilistic standards had always influenced AEC thinking and, as will be discussed later, found their way into court rulings on AEC environmental impact statements in the 1970s.60

The staff's informal goal for ATWS was unattainable. Even if the AEC ordered reactor plants to be outfitted with two diverse scram systems, it was not feasible to demonstrate a one in ten million (10⁻⁷) probability of scram failure. An entire generation of regulatory staff and their children would be long dead before there was enough operating history to gain confidence that the plants had achieved such a safety goal. Stephen Hanauer warned, "Our present position cannot be defended if challenged. There is no apparent way to close the gap between the unreliability we have and the unreliability that is needed. An interim solution is needed now."61

AEC leaders worried that ATWS might stall licensing or make them look incompetent and biased if the issue was mishandled. The reliance on engineering judgment was proving inadequate to the task. Better probability estimates, however, required new staff expertise, better data, and new tools like computer modeling and fault trees. AEC consultants urged the agency to improve its analysis of scram failures by developing its own fault-tree expertise.62 The AEC and NRC wrestled with industry for fifteen years on an ATWS regulation.
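Restated numerically (a sketch using the figures quoted above), the informal goal and its allocation were

\[
10^{-7}\ \frac{\text{accidents}}{\text{reactor-year}} \times 1000\ \text{reactors} = 10^{-4}\ \frac{\text{accidents}}{\text{year}},
\]

that is, a mean interval of ten thousand years between serious accidents, with each individual scenario, such as ATWS, allotted one-tenth of the total, or \(10^{-8}\) per reactor-year. The demonstration problem follows directly: confirming a failure rate that low from experience alone would require on the order of ten million reactor-years of operating history, hence Hanauer's warning.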
THE POLITICAL NECESSITY OF RISK ASSESSMENT

The 1965 update to WASH-740 was withheld from the public, but it was never quite a secret. The antinuclear movement's persistent requests for the update's release, and its growing influence, made it harder to say no. By the early 1970s, the antinuclear movement had gathered strength in states such as California and Massachusetts, and it picked up the support of scientists with considerable credibility, such as MIT physicist Henry Kendall, a future Nobel Prize winner, who founded the Union of Concerned Scientists. The movement boasted congressional allies, too, such as Alaska Senator Mike Gravel. In 1970, Gravel requested the release of WASH-740's update. Glenn Seaborg recorded in his journal that the
commission debated "the difficult question of [Gravel's] request."63 Releasing the update's grim assessment of worst-case consequences was no more appealing in 1970 than it was in 1965. The commission turned Gravel down but conceded to "an entirely new revision" of WASH-740 "on the basis of present information and experience." The third time around, the AEC hoped it might finally do WASH-740 right.64

In early 1971, the AEC expanded the proposed study's scope and split it in two. Milton Shaw's promotional Division of Reactor Development and Technology would write a report to explain the AEC's defense-in-depth safety philosophy. The AEC regulatory staff was to study "realistic accident probabilities and the consequences" for major reactor events, a WASH-740 with probabilities. It would "assess, on a quantitative basis, the consequences of accidents" to develop a staff position on whether the Price-Anderson Act was still necessary. The act came up for review and renewal in Congress every ten years on the assumption that enough operating experience and experimental data would eventually allow the nuclear industry and insurance companies to carry liability insurance alone. The new report's estimates, it was hoped, would also allow risk comparisons, as Chauncey Starr advocated, and provide "a proper perspective for those persons who must make social judgments between the benefits associated with nuclear power balanced against the benefits of fossil fueled power and their respective risks."65 This quantitative approach, the AEC anticipated, would be "understood and assessed by the informed public."66 Even though initial estimates for the study were a modest $250,000, workload constraints limited progress.

The AEC got a probabilistic nudge from another federal agency. NEPA and Calvert Cliffs opened the regulation of radiation to scrutiny from the Environmental Protection Agency (EPA). Numerous responsibilities were consolidated in the EPA, including setting safety standards for hazardous substances like radioactive materials. NEPA required a cost-benefit analysis of the environmental risks associated with government decision making, and the EPA was keenly interested in developing risk assessment expertise of its own. It questioned the AEC's reluctance to use probabilities in its Environmental Impact Statements (EIS) to quantify the risk of a major reactor accident, what the AEC called a Class 9 accident. The AEC had to develop environmental impact statements that included a cost-benefit analysis and consideration of probabilities and consequences of postulated accidents. The AEC did not include consideration of Class 9 accidents, under the circular logic that even though it could not quantify
the probability of a Class 9, such analysis was not needed because these accidents were highly improbable.67 Evolving NEPA case law supported the AEC's exclusion of Class 9 accidents under the "rule of reason," a court standard that allowed federal agencies to exclude consideration of impacts and alternatives considered remote and speculative. Courts accepted the exclusion of Class 9 accidents as reasonable based in part on the accident probability estimates offered in the 1957 WASH-740 report and the informal staff estimate of one in ten million. For example, the DC Circuit held that "there is a point at which the probability of an occurrence may be so low as to render it almost totally unworthy of consideration."68

Nevertheless, other federal agencies pressed the AEC to justify its exclusion of Class 9 accidents through probabilistic methods. In reviews of draft EISs, agencies such as the Department of the Interior pressed the AEC to consider Class 9 accidents. At a meeting with the President's Council on Environmental Quality, staffer Tom Murley recorded in his notes that CEQ wanted a better answer, too. The "AEC is telling the world that Class 9 accidents are incredible—they should tell the world why they think so."69 The AEC learned at the meeting that the EPA might invade its turf by crafting its own "Farmer Curve" and establishing an adequate protection standard. Eventually, the AEC inserted a statement in the EIS for proposed plants that the question of Class 9 accident probabilities would be addressed by the Reactor Safety Study.70

The AEC's jousting with the EPA and other federal agencies paralleled similar skirmishes its congressional patron, the Joint Committee on Atomic Energy, waged with competing congressional committees. From its inception, the JCAE enjoyed unrivaled control over atomic weapons, energy legislation, and AEC oversight. Environmentalism empowered related congressional committees to challenge the JCAE's monopoly on environmental grounds. Adding to the JCAE's woes, Gravel proposed the repeal of the Price-Anderson Act's insurance protection for the nuclear industry. To take back the initiative, Saul Levine, an AEC staffer on loan to the Joint Committee, recommended requesting an AEC study on its safety approach and the probabilities and consequences of major accidents. In October, the Joint Committee requested that Reactor Development and Technology begin its study, WASH-1250. In December, committee staff also pressed regulatory leadership to get moving on their accident study to stave off Gravel's bill.71 By early 1972, then, there was enough political and regulatory interest to launch a major probabilistic risk assessment.
THE REACTOR SAFETY STUDY
Given the AEC’s growing credibility problems, the regulatory report on risk needed an outside expert to lead it. In March 1972, the AEC appointed Norman Rasmussen, an associate professor of nuclear engineering at MIT. Originally an instructor in the MIT physics department, Rasmussen was recruited to the nuclear engineering department, where he specialized in gamma ray spectroscopy and nuclear fuel experiments in MIT’s research reactor. In 1969, Rasmussen became more deeply involved in teaching reactor systems engineering and had considerable contact with AEC engineers who took summer classes at MIT.72 Rasmussen met with Hanauer to map out the report’s tasks. Several covered the traditional ground of consequence studies like WASH740—estimations of fission product release, dispersion, and health consequences. The hard part would be two groundbreaking tasks: Rasmussen proposed to develop component failure data and construct fault trees for major accident sequences to quantify accident probabilities. He warned, “There will be a significant lack of precision in our final result.”73 Hanauer admitted the report team might have to “learn by trying.” “Are we willing to be told that the task is impossible of achievement with presently available resources?” he wondered. “We want the whole package. Doing [accident consequences without probabilities] would be another WASH-740 with the risk still unquantified. We might have to settle for that, but want to try to do probabilities.”74 Hanauer recognized the AEC needed cross-disciplinary help, and by the early 1970s, there was plenty of it. There was a host of experts attached to the defense industries and think-tanks like the RAND Corporation that were working to apply risk quantification to decision making. Hanauer wrote Howard Raiffa, a professor at the Harvard Business School and a pioneer in decision analysis and decision tree theory. Raiffa’s 1968 book Introductory Lectures in Decision Analysis was influential in business, military, and the emerging profession of public policy. Hanauer thought Raiffa’s ideas could help solve the probabilistic puzzle that bedeviled the AEC. He told Raiffa that he and the AEC had been “toiling in the same vineyard,” and he hoped to consult with him on the project.75 Raiffa’s research proved critical to the Rasmussen Report.76 Rasmussen was hired, but the study had a long way to go to win full support from a divided agency. As regulatory staff and contractors began work in 1972, the specter of WASH-740 hung over discussions. Tom Murley, a commission staffer, was tasked with keeping track of
the new study. His informal notes on the early meetings reveal a staff that approached the report with hope and foreboding. Murley summarized AEC General Manager R. E. Hollingsworth's goal that the new study would "bury WASH-740."77 Chairman James Schlesinger, a former economist with the RAND Corporation, favored the probabilistic study and believed it would reveal that current designs were already too conservative. Others worried WASH-740's problems would bury the new report. Commissioner James Ramey remembered the WASH-740 struggle well, and he tended to resist moving forward with the report, fearing the same flawed accident probability results that doomed the 1965 update. Among other moves, Ramey forbade Rasmussen from making any accident consequence estimates until he produced defensible probability estimates. With this restriction in mind, the AEC initially presented WASH-1400 to Congress only as a study of accident probabilities.

Even as Ramey fretted about the study, he was attracted by its public relations possibilities. He gave the report a pronuclear orientation by requesting that a section be added comparing nuclear to nonnuclear risks, as Chauncey Starr had proposed. In justifying the new section, the staff paper noted, "The public daily accepts risks to its health and safety when it uses automobiles, airplanes, subways, elevators, and so on. Many of these activities have risks that are precisely known or can be computed. . . . The risks associated with nuclear plants would then be placed in the context of other risks of the modern world."78 Inserting a comparative section on accident probabilities might be persuasive to the public, but it made the accuracy of Rasmussen's estimates crucial and controversial.

"Get Saul Levine full time!" AEC Chairman James Schlesinger commanded Murley.79 Work on the report began sucking in more resources. The $250,000 price tag grew by more than an order of magnitude. Rasmussen only served the AEC on a part-time basis, and Schlesinger recognized the study needed a strong hand. Levine brought to the task a temper so "ferocious" that some staffers summoned to his office remained standing to dodge the desk items he might throw at them. But he possessed navy discipline and organization, and he attracted loyalty from staffers who adapted to his temper and aim. William Vesely, a fault-tree expert recruited from the Idaho test station, said of his boss, "Saul was the heart of the Reactor Safety Study." Rasmussen agreed: "His input was greater than mine in many ways." He thought it should have been known as the Rasmussen-Levine Report.80

Like Starr, Levine believed quantification could change how society understood risk.
FIGURES 10 AND 11.
MIT Professor Norman Rasmussen and AEC/NRC staff member Saul Levine were the main authors of WASH-1400—the “Rasmussen Report.” Source for Rasmussen: MIT Museum. Source for Levine: US NRC.
The problem was that society could not "quantify the actual risk" of low-probability, high-consequence hazards of new technologies. The report would propel society "into facing this matter [of risk] explicitly as opposed to implicitly. . . . We have already accepted implicitly many low probability large risks in many areas. We are now changing direction such that these risks will begin to be considered explicitly."81 Levine also brought to the study a conviction that it could have practical application to regulatory questions, such as the ATWS controversy.

Meanwhile, Rasmussen toured Great Britain and West Germany to consult with risk experts. Dissatisfaction in Europe with the US design basis accident had grown, and there was considerable work being done to develop a probabilistic model. Rasmussen discussed risk assessment with F. R. Farmer over viciously combative games of ping-pong at the British regulator's home. He came away impressed with their insights, including event trees, which worked well for the British in modeling "top events," the major system failures leading to an accident. The critical innovation for Rasmussen and Levine was to combine event and fault trees into one complementary model.82

Over the next year and a half, regulatory staff, national laboratory experts, and outside consultants assembled the data and the new hybrid
methodology. In early 1973, following a calculation approach advocated by Vesely, the report team opted to deal with uncertainty by presenting failure rate estimates within "a reasonable error band." By contextualizing its estimates within bands of uncertainty, Rasmussen's team thought its numbers could be applied to both technical analysis and policy.83
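The error-band idea can be illustrated with a small Monte Carlo sketch. Nothing below reproduces the study's actual method or data: the lognormal form, the error factors, and the two-component system are hypothetical stand-ins, chosen only to show how uncertainty in component failure rates propagates into a band around a system-level estimate.

```python
# Propagating failure-rate uncertainty into an "error band": sample each
# component's failure probability from a lognormal distribution, push the
# samples through a simple system model, and report a median with
# 5th/95th percentile bounds. All numbers are hypothetical.
import math
import random

def lognormal_samples(median, error_factor, n):
    """Draw n samples; error_factor is the ratio of the 95th percentile
    to the median (a common way of stating a lognormal error band)."""
    sigma = math.log(error_factor) / 1.645   # 1.645 = 95th pct of standard normal
    return [median * math.exp(random.gauss(0.0, sigma)) for _ in range(n)]

random.seed(1)
N = 100_000
pump_fails = lognormal_samples(median=1e-3, error_factor=3.0, n=N)
valve_fails = lognormal_samples(median=1e-4, error_factor=10.0, n=N)

# System fails if either component fails; for rare events p1 + p2 is a
# good approximation of 1 - (1 - p1)(1 - p2).
system = sorted(p + v for p, v in zip(pump_fails, valve_fails))
print(f"median estimate: {system[N // 2]:.2e}")
print(f"90% band: [{system[int(0.05 * N)]:.2e}, {system[int(0.95 * N)]:.2e}]")
```

The point of such a calculation is the one the team made: a single number overstates what is known, while a median bracketed by percentiles lets an analyst or policymaker see how much the answer could move.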
And maybe regulation, too. Safety questions had become more complex and, as Saul Levine observed, the report offered an avenue to improve regulatory capability. Murley recalled that some within the agency doubted whether the regulatory branch had the competence to handle tough technical issues. For that reason, Levine and Hanauer argued that the regulators had to use the report to gain competence beyond design basis accidents. If the regulatory staff could develop an expertise in the report's methodology, they could get ahead of new safety questions. The AEC created an office of safety analysis "organized around a cadre from the Rasmussen Study" to investigate risk assessment for regulation.84

A year after the AEC brought Rasmussen on board, Ramey remained wary of a full study. At a meeting in early 1973 Ramey quipped, "If it shows just one human life [lost], I'm against it." Pillars of the atomic establishment like Ramey were losing influence to the new arrivals, like Schlesinger and Manning Muntzing, who wanted the report regardless of its public impact. It was time to get all potential hazards out in the open, Muntzing thought. He was "inclined to let the chips fall where they may," Murley noted. "But we should recognize what we are getting into. It may take an organization the size of Sandia," a laboratory with seven thousand staffers, "to react to the implications [of the study]."85

In mid-May 1973, new events mooted Ramey's efforts to keep Rasmussen's probability estimates separate from consequence analysis. Armed with a more powerful Freedom of Information Act, Myron Cherry threatened a lawsuit if the AEC did not release records of the 1965 update to WASH-740. Whether Ramey liked it or not, a worst-case accident study was about to make headlines with an estimate of perhaps forty-five thousand deaths. Manning Muntzing worried the update's release would stoke antinuclear fires. The Rasmussen Report began to look more like a solution than a problem.86

In a decisive meeting in May 1973, Rasmussen carried the day. WASH-740, he reported, had overestimated accident consequences. His own preliminary probability estimates indicated a core-melt accident with serious health consequences was a one in a million proposition. A worst-case scenario of 1,400 acute (early) fatalities was about a one in ten billion years proposition. Drawing on a new study of radiation health effects, Rasmussen estimated that long-term premature deaths from cancer would probably be about the same as acute fatalities. All in all, these were limited consequences for such an improbable accident. His findings, he pointed out, had the concurrence of the probabilistically minded British. Ramey agreed. WASH-1400 would estimate both probabilities and consequences. "OK to go ahead!" Murley wrote. The Rasmussen Report seemed to have threaded the needle. It would be technically sound, support nuclear power, and be acceptable to the commission.87 His term at an end, Ramey left the commission a month later.

Over the next year, Rasmussen's team of almost sixty fleshed out WASH-1400, and, as it produced positive conclusions on a host of possible accident sequences, it quickly overtook in importance most other AEC reactor activity. The AEC began to deploy Rasmussen and WASH-1400 to make the case for reactor safety. In September 1973 at a JCAE hearing, Rasmussen made a show-stealing appearance where he deftly minimized the bad news on the WASH-740 update. He dismissed its "upper-limit" calculations as "far from reality" by at least a factor of ten. His report would be "fairly favorable" to nuclear power. Delighted, Congressman Craig Hosmer said the report was "one of the most significant things that we have been presented in a number of years in reactor safety."88 Industry press reported, "If one thing is clear . . . the Atomic Energy Commission is counting rather heavily on the results of the Rasmussen risk quantification study to confirm . . . that the operation of nuclear reactors poses no undue risk to the health and safety of the public."89 Another called it "a powerful new weapon . . . to reassure the public on the safety question."90

In January 1974, the AEC began to advertise Rasmussen's initial probability estimates. In presentations at the National Press Club and before Congress, Chairman Dixy Lee Ray presented the report's preliminary results with "great relish," as one trade publication observed. Significant accident consequences were no worse than a major airline accident and had a probability of one in one hundred million reactor years. Even a less severe meltdown's probability was only one in a million years: "It compares with, for instance, the chance of drawing two poker hands in a row of four of a kind while playing 5-card draw." Ray's presentation was a hit, but her poker analogy was a portent of mistakes to come. She presented accident probabilities as a single point estimate and did not explain that the report had large potential error bands. Her poker hand, presumably, had none.91
[Figure 12 appears here. It pairs an event tree with a fault tree for a parachute jump: the initiating event (jump from airplane) branches on the top events (main chute works; reserve chute works) to the outcomes "float to ground" or, if both chutes fail, "jumper casualty." A fault tree then resolves "reserve chute fails" through an OR gate (chute not deployed; chute tangled), with "chute not deployed" an AND gate of a failed auto activation device (altimeter malfunctions OR battery is dead) and a broken rip cord.]

FIGURE 12. Sample PRA. A key innovation of WASH-1400 was to combine the advantages of event and fault trees. In this example of a parachute failure, an event tree depicts the major sequences—top events—leading to a complete chute failure. A fault tree details how a reserve chute failure could occur. Adding in component-failure probabilities at each gate would allow for an estimate of overall failure probabilities. Source: US NRC.
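The figure's logic translates directly into a calculation. Below is a minimal sketch of Figure 12 as code; the gate structure follows the diagram as read above, but every numerical probability is hypothetical, invented only to show how component failures roll up through AND/OR gates into an overall casualty estimate.

```python
# Figure 12 as a calculation: a fault tree feeding an event tree.
# All component probabilities are hypothetical (per jump).

def gate_or(*probs):
    """Probability that at least one independent input event occurs."""
    p_none = 1.0
    for p in probs:
        p_none *= 1.0 - p
    return 1.0 - p_none

def gate_and(*probs):
    """Probability that all independent input events occur."""
    p_all = 1.0
    for p in probs:
        p_all *= p
    return p_all

P_MAIN_CHUTE_FAILS = 1e-3
P_RIP_CORD_BREAKS = 1e-4
P_ALTIMETER_MALFUNCTIONS = 1e-4
P_BATTERY_DEAD = 1e-3
P_CHUTE_TANGLED = 1e-4

# Fault tree, evaluated bottom-up.
p_aad_fails = gate_or(P_ALTIMETER_MALFUNCTIONS, P_BATTERY_DEAD)
p_not_deployed = gate_and(p_aad_fails, P_RIP_CORD_BREAKS)
p_reserve_fails = gate_or(p_not_deployed, P_CHUTE_TANGLED)

# Event tree: a casualty requires the main chute AND the reserve to fail.
p_casualty = P_MAIN_CHUTE_FAILS * p_reserve_fails

print(f"P(reserve chute fails) ~ {p_reserve_fails:.2e}")
print(f"P(jumper casualty per jump) ~ {p_casualty:.2e}")
```

Scaled up from five basic events to the thousands in a reactor model, this is essentially the bottom-up arithmetic the caption describes for WASH-1400's hybrid method.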
WASH-1400 UNDER FIRE
Ray’s presentation galvanized critics of nuclear power. Under the leadership of MIT physicist Henry Kendall and executive director Daniel Ford, the Union of Concerned Scientists became nuclear power’s most effective critic. Their success at the ECCS hearings gave the UCS the credibility to pick up more high-level AEC staff assistance. As Ford later recounted, John O’Leary, the AEC director of licensing, secretly met with Ford and Kendall over lunch at the Cosmos Club in Washington, DC, to hand them a briefcase full of unreleased records about reactor vessel safety issues.92 UCS established a stable of experts willing to take on the nuclear establishment in public. To counter Rasmussen on risk assessment, Kendall recruited non-nuclear aerospace expertise. William Bryan, a former space program fault-tree expert, testified before a California legislative hearing that nuclear power plants, like rockets, were too complex and needed too much data for fault trees to accurately estimate accident probabilities. The multiple judgments required in developing a fault tree made “the absolute value of the number . . . totally meaningless.”93 Fault trees could be important safety design tools, but not to quantify absolute risk.94 Rasmussen countered Bryan’s critique, pointing out that he had overlooked substantial advances the Rasmussen team had made by combining fault and event trees into a unified model.95 The disagreement between the two experts signaled some of the wisdom and flaws in the differing paths chosen by the AEC and NASA. In WASH-1400, the AEC produced a pioneering risk assessment but with the large uncertainties and potential error that Bryan predicted. By contrast, NASA was so skeptical of the weaknesses in probabilistic risk assessment that it did not use it much in the Apollo and space shuttle programs. Only after the 1986 Challenger disaster did it concede that PRA techniques might play a useful role in assessing shuttle risk.96 Released in 1974, the draft of the Rasmussen Report contained unsettling revelations. An analysis of a broad spectrum of accidents revealed that the chance of any kind of core-damaging accident, including minor ones with few health consequences, was much higher than previous estimates—one in seventeen thousand reactor years (typically rounded to one in twenty thousand).97 It also revealed important flaws in the AEC’s regulatory assumptions. It did not consider significant core-melt or containment-failure accidents as credible, but Rasmussen found that such accidents dominated the overall risk to the public. The
FIGURES 13 AND 14.
Despite its methodological advances, the Reactor Safety Study’s numerical risk estimates came under heavy criticism. The study’s most effective antinuclear critic was the Union of Concerned Scientists, led by MIT physicist and future Nobel winner Henry Kendall and Executive Director Daniel Ford. The UCS was effective in raising questions about the potential for error and uncertainty in risk estimates. Source: Daniel Ford; Kendall, by permission of the Norfolk Charitable Trust, Sharon, MA, Amherst College Archives and Special Collections.
The greatest risks were not the large coolant pipe breaks postulated in design basis accidents, but more common mishaps, such as small pipe breaks, common-mode failures, station blackouts, and accidents made worse by human error and poor maintenance. This estimate elicited skepticism in the industry and among regulatory staff. Robert Budnitz, who later joined the NRC as a director, observed that the staff remained anchored in the belief that an accident was a one in a million probability. They thought Rasmussen's probability estimate was high by a factor of ten, an optimism common among experts untrained in probabilistic methods.98

Mostly, though, the report offered good news for the nuclear industry. While his accident probability estimates were higher than previously assumed, Rasmussen's consequence estimates were much lower. Privately, Henry Kendall scoffed: "Is it more than a coincidence that, with so much public controversy, ultimately the AEC should discover that accidents have much diminished consequences?"99 Nevertheless, Rasmussen's results were good news for those wishing to promote nuclear power.
Prominently depicted in the executive summary, reactor accident risk to an individual was two orders of magnitude below airline crashes and comparable to freakish deaths from meteors. It provided confirmation for the conviction long held by the industry and regulators that their adherence to extremely conservative safety standards had produced vastly superior safety.100 For the nuclear industry, the report offered the liberating possibilities of rolling back some safety regulations and dispensing with the much-maligned DBA and numerous unresolved safety issues. Philip Bray of GE strongly endorsed the study in hopes it would replace the industry's "proverbial albatross," the design basis accident.101

The report could not have come at a better time. Nuclear power seemed more necessary than ever. The nation was suffering through an energy crisis launched by the 1973 Yom Kippur War. The utility industry expected oil shortages would require a stunning 150 nuclear orders in 1974 across the four major vendors. John Dickman of Exxon Nuclear said, "I take an extreme view here, but, practically speaking, I believe central-station power plants will be essentially all nuclear from here on out both in the US and the highly industrialized nations abroad. There is no longer an economic decision to be made, in any location in the US." Another industry official expected a "small vocal minority" would continue to oppose nuclear power, "but public acceptance of nuclear plants will be helped by continued good operating experience, expanded public education efforts by all segments of the industry, and increased public awareness of power shortages."102

The industry was caught off guard when the spike in energy costs and inflation led to almost no growth in electricity consumption in 1974 and only 2 percent growth in 1975. Utilities had based their nuclear construction plans on the assumption of 5 to 7 percent demand growth every year. Nuclear plant construction costs, meanwhile, rose quickly. With flat demand, utilities began canceling reactor orders.

In the midst of this tumult, WASH-1400 offered a calming message in the mid-1970s that reactors were safe enough. The executive summary's comparisons of nuclear power to other risks implicitly established a standard of acceptable risk the public took in other areas of life. The report noted the "surprising degree" to which public tolerance for risk could be expressed in simple quantitative terms: a one in a million fatality risk was considered negligible, a number nuclear power easily met.103 WASH-1400 was also reassuring on specific regulatory issues like ATWS. One industry lobbyist predicted WASH-1400 would "be seen as
The report’s ATWS estimates seemed to confirm the industry view that it was a “nonproblem”; citing WASH-1400’s estimates, one industry publication asked, “Is ATWS Real?”105 Utility companies lobbied to suspend consideration of the issue in licensing proceedings. Saul Levine disagreed that the report justified lowering safety standards, but he suggested the AEC might stand pat since nuclear power was already safer than other common hazards.106 The report complicated a regulatory solution to the failure-to-scram question. The agency’s ATWS team, led by Ashok Thadani, produced an estimate ten times more pessimistic than WASH-1400, and it was greeted with “general disbelief” among regulatory staff, who favored the Rasmussen Report’s estimate.107 With conflicting probability assessments, the issue dragged on inconclusively for years.

Despite its successful rollout, the report came in for rough treatment due, in part, to its technical flaws, but also to the rapid negative shift in public confidence in US leadership generally. The social unrest of the 1960s, the Vietnam War, and the economic turbulence of the 1970s led to a well-documented disillusionment with leadership across a broad array of public and private institutions. The Rasmussen Report was conceived by the pronuclear AEC during the bandwagon market and in a period of majority public support for nuclear power. The report’s implicit mission to educate the public about nuclear power’s benefits assumed that a well-grounded message of reassurance from experts would be trusted. By the time the final draft appeared in 1975, however, the plunge in public trust in the nation’s institutional leadership was, as one pollster described it, “simply massive.” This attitude magnified nuclear power’s stumbles. Canceled plants vastly outnumbered new orders; the antinuclear movement thrived; and the new NRC needed to rid itself of the AEC’s promotional legacy and demonstrate that it was an independent regulator.108

The NRC quickly came under fire. It pleased neither the nuclear industry, which wanted a more efficient regulatory process, nor critics who believed the agency was no different from the AEC. When the agency announced it would move expeditiously in dealing with safety questions related to spent fuel reprocessing, critics complained the NRC was “more interested in reassuring the nuclear industry than in reassuring the public.”109 The agency was also caught off guard when one of its engineers, Robert Pollard, announced on the television show 60 Minutes that he was leaving the agency to protest what he believed were the NRC’s lax safety standards.
He joined the Union of Concerned Scientists, where he became its lead technical expert on nuclear power issues. Intervenors also found success in producing significant delays to the construction and licensing of new reactors. One utility executive complained that the NRC’s regulatory process would “strangle the nuclear power industry.” Even if WASH-1400 had been flawlessly executed, nuclear power’s economic, political, and regulatory problems were bigger than any report could solve.110

And it was not flawless. WASH-1400 met detractors everywhere, even within the AEC/NRC. The ACRS was mostly positive about the draft report, praising its methodology, but expressed some reservations about potential error in its estimates. Many regulatory staff were skeptical and wary. The report team operated independently of the staff, and the latter was under no obligation to adopt a complex new tool like PRA. By 1974, the Three Ds had hardened into orthodoxy. The triad had served nuclear safety well; why change? Executive Director of Operations Lee Gossick pointed out that staff had little expertise in risk assessment and needed extensive training. “We do not foresee, as either necessary or desirable, the complete replacement of current licensing practices by an assessment of aggregate risk for individual plants as was performed in the [report],” Gossick argued. It would neither improve regulatory efficiency “nor would it be likely to significantly affect the level of safety achieved in reactors.”111

WASH-1400 also came under unexpected criticism from physicists outside the agency. Cold War weapons development, Vietnam, and 1960s political activism spurred the rise of a new breed of activist scientists interested in influencing political debates on technology. Frank von Hippel, a Princeton physicist and antiwar activist, took an interest in the role scientists could play in domestic science and technology policy. He called on scientists to “bring about more open and democratic controls on the uses of technology.”112 Inspired by Kendall at the Union of Concerned Scientists, he encouraged the American Physical Society (APS) to evaluate the social implications of science and technology, particularly nuclear power. “With the current energy crisis and general political turmoil, physicists want to become involved,” he wrote.113 Stanford physicist and APS president Wolfgang Panofsky agreed that physicists had “social and political obligations. . . . Increasingly the many problems facing society have come to have important scientific and technological dimensions.”114 The AEC agreed to fund an APS review team, including von Hippel, to evaluate the report. Led by physicist Harold Lewis, a professor at the University of California–Santa Barbara, the APS team confirmed criticism that the report had underestimated the consequences of accidents and needed a better data base.
It also doubted accident probabilities could express the absolute risk of an accident. Nevertheless, it praised the report’s methodology and called for quantification of accidents supported by improved data and a substantial safety research program.115 Critical, but not too critical, the APS review did not impede the report’s progress.

The Rasmussen team published the final revision of WASH-1400 in October 1975. The NRC was only nine months old, and it inherited the report under vastly different circumstances than those under which it was conceived. A study generated by the AEC in part to promote nuclear power would have to be defended by a new independent agency with no promotional responsibilities. The nuclear industry had started the new NRC era, as Nucleonics Week put it, in “utter chaos” and at the lowest point in its history. The breakup of the AEC and the looming dissolution of the Joint Committee on Atomic Energy left the industry directionless. “It is scarcely an exaggeration to say that utilities have no idea how to finance nuclear plants . . . and no one has any idea whether there is going to be any Price-Anderson liability indemnification available. . . . The new Congress is an unknown quantity as far as its attitude to nuclear power is concerned, and the industry’s super salesmen Reps. Chet Holifield and Craig Hosmer are gone.” The antinuclear movement had the nuclear industry “on the run.” Things were only going to get worse.116

The Rasmussen Report became more a problem than a solution. The NRC commissioners voiced support for the study, but questions grew about its flaws and biases. Following the publication of the draft in 1974, a fire at the Browns Ferry nuclear power plant raised questions about WASH-1400’s apparent underestimation of fire risk. Rasmussen countered that greater consideration of fire did not materially alter the study’s reassuring absolute numerical estimates. But there were other problems. The executive summary’s favorable comparisons to other hazards presented reactor risk as a precise curve without displaying the potential error bands found in the estimates themselves. It contrasted those risks to well-established risk data for airplane crashes and natural disasters. Von Hippel pressed the NRC to rescind the summary.117 For nuclear proponents, the executive summary’s comparisons had been the whole point of the report: to prove to the public that nuclear power was safer than other common risks. Yet, the summary had no discernible influence on the negative trend in public opinion.118 As one friendly critic noted, it “will be a sad day indeed” if the report became known for its absolute risk estimates rather than its more important contribution to creating an engineering and safety tool.
[FIGURES 15 AND 16. The two most controversial figures in the WASH-1400 report compared its estimates of reactor accident risk to natural and human-made hazards. The graph on the right indicated that the risk of death from a reactor accident was comparable to being struck by a meteor. Source: Reactor Safety Study, NUREG-75/014, Executive Summary, p. 2.]
Having endorsed the report, the NRC could not easily back away from the summary. Several commissioners had cited the summary favorably in speeches, including the now ridiculed comparison of nuclear accidents to meteor strikes.119 Von Hippel and other critics successfully lobbied Arizona Congressman Morris Udall’s Subcommittee on Energy and the Environment to hold hearings on the report. Udall’s open door to nuclear critics signaled a shift in the bipartisan support nuclear energy had enjoyed. As Democrats in Congress became more closely identified with the environmental movement, staunch allies like Chet Holifield were harder to find across party lines. As the hearings raised questions about WASH-1400, Udall persuaded the NRC to create a second review committee, once again headed by Harold Lewis.120

While the Lewis Committee’s first review of the draft report was brief, its second review took a year. The Rasmussen Report was a massive, and massively complex, multivolume document that daunted many a reviewer with convoluted prose and obscure data references. By 1977, the Lewis Committee benefitted from the work of outside experts who had probed the report’s calculations for weaknesses. In addition, the NRC’s own staff analysis uncovered flaws in WASH-1400 and in industry’s sanguine assessment of ATWS.121

Published in September 1978, the Lewis Committee review of WASH-1400 was a head-spinning combination of high praise and damning criticism. It lauded the report’s value in creating a logical framework to assess reactor safety and its “pioneering” fault/event tree methodology. It upbraided the NRC for not applying the report’s insights to research and regulations, particularly its findings on the relative importance of a full spectrum of events and actions not covered by design basis accidents, such as human error, small pipe breaks, and transients.122

In setting out WASH-1400’s shortcomings, the Lewis Committee was merciless. It promised to review the report’s sins “in grisly detail,” and it did. “WASH-1400 is defective in many important ways,” it concluded. Its objections boiled down to three categories: poor writing, poor peer review, and poor probability calculations. The report was “inscrutable,” a “major failing” that impaired its usefulness and the conduct of peer review.123 In briefing the NRC commissioners, Harold Lewis joked, “Anyone who had tried to work with the thing [report] and learned how any given calculation was really done, comes away drinking heavily.”124 While the report’s staff cooperated with the Lewis Committee, it had such a “siege mentality” about public criticism that it proved “stubborn” in conceding even basic problems with the report.125
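The fault/event-tree arithmetic the committee praised is simple to state, even if WASH-1400’s execution of it was not. The following is a minimal event-tree quantification in the WASH-1400 spirit; the initiating-event frequency and branch failure probabilities are invented for illustration and are not the report’s values.

```python
# Minimal event-tree quantification in the WASH-1400 style.
# All numbers are invented for illustration; none are the report's values.

INITIATOR_FREQ = 1e-3  # assumed small-pipe-break frequency, per reactor-year

# Assumed per-demand failure probabilities of the mitigating systems
# queried, in order, along the branches of the event tree.
BRANCHES = {
    "reactor_trip": 1e-4,
    "emergency_core_cooling": 1e-3,
    "containment_integrity": 1e-2,
}

def sequence_frequency(initiator_freq: float, failed: set) -> float:
    """Frequency of one accident sequence: the initiator frequency times
    the probability of each branch failing (if in `failed`) or succeeding."""
    freq = initiator_freq
    for system, p_fail in BRANCHES.items():
        freq *= p_fail if system in failed else (1.0 - p_fail)
    return freq

# A WASH-1400-style insight: the sequence in which only core cooling fails
# dominates the far rarer sequence in which every system fails at once.
print(f"{sequence_frequency(INITIATOR_FREQ, {'emergency_core_cooling'}):.2e} per reactor-year")
print(f"{sequence_frequency(INITIATOR_FREQ, set(BRANCHES)):.2e} per reactor-year")
```

In a real PRA, each branch probability is itself the output of a fault tree, and common-mode failures break the independence this sketch assumes, which is exactly where the report’s critics concentrated their fire.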
The Lewis Committee excoriated the report’s “deficient” calculational techniques, used to compensate for the still nagging problem of limited plant component failure data. Some techniques were inappropriate but inconsequential. For example, when the limited data did not fit a log-normal distribution (a bell curve on a logarithmic scale), WASH-1400’s authors drew the bell curve around the data anyway.126 The committee concluded this error likely made little difference in the final estimates.

More problematic was WASH-1400’s use of a technique that, to many observers, seemed like cheating. Many of its estimates were, ironically, an amalgamation of expert opinion. Rasmussen’s team had aimed to replace the AEC’s qualitative assessments of risk with hard numbers but used “subjective probabilities,” an approach that had been used in Howard Raiffa’s decision theory. The Rasmussen team used it when data was lacking, model complexity overwhelmed computing power, or common mode failures dominated an accident model. The Lewis Committee accepted the subjective probabilities as necessary, but objected to the arbitrary techniques WASH-1400 used to arrive at final estimates.127

Estimates of very improbable events, such as pipe breaks and scram failures (ATWS), proved the most troublesome. Experts offered probabilities that were often optimistic and could differ from those of other experts by a thousand times or more. The report had estimated ATWS as a one-in-one-million probability with a narrow upper and lower bound of potential error. The Lewis Committee called the model’s estimate “absurd” and the calculation so arbitrary that it “boggles the mind.” It pointed to a more acceptable approach, pursued by other NRC staff, that produced an estimate four to five times higher than WASH-1400’s.128 Harold Lewis later noted that, while he sympathized with “the Rasmussen team in their compulsion to quantify the probability of something that has never happened, . . . there may somewhere be a statistician who believes this is a valid procedure [to estimate ATWS], but he has yet to make himself known.”129 The Lewis Committee accepted the use of expert estimates, but their use and limits had to be clearly spelled out.

The Lewis Committee also panned WASH-1400 for presenting its results as single point estimates when they rested on large uncertainties and limited data. Collectively, these flaws meant the error bands were so large that absolute risk estimates were not feasible, and that meant comparing nuclear power risk to other hazards was not feasible either.130
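The committee’s complaint about error bands is easy to reproduce. A minimal sketch, using invented expert estimates of a rare failure rate spread over the thousandfold ranges described above: fitting a log-normal distribution the way WASH-1400 did yields a tidy median, but the 90 percent band implied by the same fit spans several orders of magnitude, which is what a single point estimate conceals.

```python
import math
import statistics

# Invented expert estimates of a rare failure rate (per demand); spreads of
# a thousandfold or more were typical of the events at issue.
expert_estimates = [1e-3, 5e-5, 2e-4, 1e-6, 8e-4]

logs = [math.log(x) for x in expert_estimates]
mu, sigma = statistics.mean(logs), statistics.stdev(logs)

median = math.exp(mu)              # the tidy point estimate
# 5th-95th percentile band of the fitted log-normal (z = 1.645)
lo = math.exp(mu - 1.645 * sigma)
hi = math.exp(mu + 1.645 * sigma)

print(f"median:   {median:.1e} per demand")
print(f"90% band: {lo:.1e} to {hi:.1e}")  # spans roughly four decades
```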
The executive summary’s “soothing” comparisons of nuclear power risk to other well-established hazards were an “unfortunate” effort to persuade the public of nuclear power’s safety. Lewis said the report had “overstepped the state of the art. . . . They tried to find numbers with greater precision than the data base available, the information available, and the statistical tools that they had available would permit.”131

The study’s clumsy use of expert opinion highlighted a growing body of psychological research into the biases and perceptions of risk held by the public and experts. It was not surprising that the public’s untrained perception of risk might be biased, but by 1979, it was clear that experts had their biases, too. They were too optimistic in assessing the probability of improbable events. The experts consulted by the Rasmussen team behaved a lot like the geotechnical engineers, professional auto mechanics, and dam builders who, studies found, were often at odds with each other in assessing accident probabilities, too confident in the precision of their estimates, and unwilling to admit the substantial uncertainties in those estimates. Confidence could be disastrous. In 1976, the tragic failure of the brand-new Teton Dam underlined the little-known fact that one in three hundred new dams failed.132

The Lewis Committee conclusions forced the NRC commissioners to clarify their position on WASH-1400 and PRA.133 In a January 1979 press release, the commissioners noted that the report had advanced the state of PRA; they supported “the extended use of probabilistic risk assessment in regulatory decisionmaking” and licensing where “data and analytical techniques permit.” But the commission concluded that WASH-1400’s probabilities and consequence models needed “significant refinements” and should not be used as the principal basis for regulatory decisions. It withdrew its endorsement of the executive summary, warning that the report’s absolute estimates of overall risk could not be “used uncritically either in the regulatory process or for public policy purposes.” Norman Rasmussen got a call late in the evening about the NRC’s impending announcement. He later told an MIT colleague that he lay awake that night worrying. “He was truthfully upset, as you’d expect. He was about to be embarrassed nationally.”134

Reaction to the announcement did not bode well for the study. Morris Udall applauded the NRC statement as a sign that by distancing itself from the report, the agency was shedding the AEC’s promotional mandate: “they’re beginning to be the referee, the impartial referee that we all hoped they would be.”135 Some news outlets incorrectly portrayed the commission as having disavowed the entire report rather than the executive summary. The New York Times called it a “repudiation.”
Times columnist Tom Wicker wrote, “the official rejection of the Rasmussen Report is one more piece of evidence that—aside from the basic questions of nuclear safety—both Government and industry have been too careless in their safety studies, too committed as advocates of nuclear energy, occasionally deceptive or misleading in that advocacy, and consistently over-optimistic both in their safety estimates and their judgment of the extent to which the public would accept questionable assurances.”136 Henry Kendall boasted that the report had been “demolished.”137 Daniel Ford characterized the AEC’s motives for WASH-1400 as a “religion in need of a Bible.”138

WASH-1400 was launched in part to win over the public to nuclear safety, but the report and the industry were in deep trouble. By 1978, new plant orders had collapsed completely. Numerous licensing hearings around the country were mired in delay, while construction suffered numerous cost overruns and quality assurance problems. Daniel Ford celebrated: “We’ve stopped nuclear power from being the miracle energy cure, and shown it to be a controversial, problem-ridden power source.” Utility executives openly questioned whether nuclear power had a future. John Selby, head of Consumers Power and builder of the ill-fated Midland nuclear power plant, admitted, “I’d be very reluctant to put shareholders’ money into another nuclear plant in the future.”139

Mixing promotional and safety missions in the same report had been a mistake. Risk estimates for policy purposes required broad expert support and public acceptance. WASH-1400 was already swimming against a broad tide of public mistrust in government and private institutions, and public support for nuclear power was about to turn negative for the first time in history. Coming from a respected group of physicists, the Lewis Committee’s peer review inflicted considerable damage on the report and made the NRC wary of PRA until much more work was done on it. Two decades passed after the report’s 1975 publication before the NRC issued a policy statement favoring greater use of PRA.140

As a public relations document, WASH-1400 was likely doomed from the start. Risk communication remains an extraordinarily difficult challenge. Stumbles like WASH-1400 were so common that one scholar described this phase of failure as a necessary evolutionary step for experts. They had to learn that numbers were not persuasive. Numbers did not level the comparative playing field among hazards, especially when the public viewed expert opinion skeptically and believed nuclear power carried unique, hard-to-quantify risks from terrorism or genetic defects.141 Moreover, public perceptions of risk were not fixed, as Chauncey Starr thought.
By the early 1980s, the public opposed the construction of a local nuclear power plant by a two-to-one margin, a complete reversal from the numbers a decade earlier. This shift in hazard perception was common across an array of environmental pollutants. Open debate among reputable experts did not help inspire public confidence, as the WASH-1400 debate demonstrated. For committed nuclear opponents, too, the Rasmussen Report’s controversy confirmed their belief that risk experts were “shamans” hawking a pronuclear agenda. The same confidence gap resurfaced many years later after the 2011 Fukushima accident. PRA’s return to acceptance faced many obstacles.142

For the time being, it looked like the NRC was finished with risk assessment.143 The commission instructed the staff to review the report’s influence on previous agency decisions. Chairman Joseph Hendrie insisted the commission’s announcement was not “a political book burning,” but the staff distanced themselves from the report.144 They reported back that WASH-1400 had seen little use in safety decisions. Saul Levine scoffed at the staff’s claim: “When suddenly asked by the Commission whether he had ever used it [PRA], everyone suddenly became a virgin. He’d never used it in any way, shape or form.”145 Frank Miraglia, an NRC supervisor, recalled a co-worker calling the staff review “the WASH-1400 enema.”146 Lewis Committee member Robert Budnitz thought the NRC staff had overreacted to the committee’s assessment. Budnitz went on to head the NRC’s research program for a time and found the staff’s attitude “toward risk assessment was generally negative or worse. They generally did not want to use the techniques, avoided the use of the methods and made it difficult for us to develop programs that they could use. . . . [The staff’s] denial of the validity or usefulness of probabilistic methods are shortsighted and backward.”147

RISK ASSESSMENT AND THREE MILE ISLAND
PRA was dead. For two months. The March 1979 accident at the Three Mile Island facility destroyed a reactor, but it saved PRA. It was not the dramatic design basis accident the AEC and NRC had anticipated. The Unit 2 reactor near Middletown, Pennsylvania, had no large cooling pipe rupture, no catastrophic seismic event. The cause was more prosaic: maintenance. A pressurized water reactor has a “primary” piping loop that circulates cooling water through the reactor core to remove its heat (fig. 8). The primary loop transfers its heat to a non-radioactive
secondary loop as the two loops interface in a “steam generator.” The secondary water boils, and the steam drives the plant’s turbine generator to make electricity. The steam is then condensed back to water, and feedwater pumps send it back to the steam generator to be boiled again. At Three Mile Island, this flow in the secondary loop stopped when maintenance workers caused an inadvertent shutdown of the feedwater pumps. This event was a relatively routine mishap, an anticipated transient, and the reactor scrammed as designed.148

After that, however, nothing went as designed. A relief valve in the primary loop stuck open, and radioactive water leaked out into the containment and an auxiliary building. No pipe broke, but the stuck valve was equivalent to a small-break loss-of-coolant accident. One control panel indicator of the relief valve’s position misled the operators into thinking the valve was closed, and they did not realize they had a leak. This malfunction had happened at other plants, but the operators at Three Mile Island were never alerted to the problem. Unaware of the stuck-open valve, the operators were confused, as primary coolant pressure fell no matter what they did. A temperature indicator that could have helped the distracted operators diagnose the problem was practically hidden behind a tall set of panels. As a result, operators misdiagnosed the problem and limited the supply of cooling water to the overheating reactor core, leading to a partial fuel meltdown. For several days, the licensee and the NRC struggled to understand and control the accident while the nation anxiously looked on.

The accident had negligible health effects, but the damage to the NRC and the nuclear industry was substantial. Many of the residents who evacuated the area never again trusted nuclear experts. There was agreement that the event had exposed weaknesses in the NRC’s emergency response capability, but interpreting the accident’s broader meaning for safety split the nuclear community. Some saw a silver lining: there were no measurable health consequences despite a significant core meltdown. Radioactive isotopes and hydrogen leaked into the containment building, where the hydrogen ignited. Yet, the containment building held up and prevented the escape of all but a negligible amount of radiation, so negligible that radiation readings taken after the accident were just a third of levels measured at the site during the 1986 Chernobyl disaster some five thousand miles away. Defense in depth had worked. If the Three Ds had proved their worth, was there a place for PRA?
Saul Levine did not think so. “Before the Lewis Report and TMI, I would have said the Commission was receptive [to risk assessment] and things were improving in the right direction. But I think for the moment, we’ve suffered a setback.”149 But it was a turning point. “Suddenly,” an NRC report noted, “the potential value of PRA as a regulatory tool—and of the insights of the [Rasmussen Report] itself—became apparent to the reactor-safety community.”150 As early as 1973, Rasmussen had pointed to the risk posed by human factors in testing, operations, and maintenance. He had even recommended that control room panels be redesigned “so that operators can’t make mistakes as easily.” The report’s prescience was hard to ignore.151

Harsh post-accident assessments made PRA impossible to ignore. A presidential commission led by John Kemeny, president of Dartmouth College, excoriated the NRC for an antiquated approach to safety that did not promote learning, and pointed to the Rasmussen Report as a solution. Kemeny called the NRC “a total disaster.” It was an agency “hypnotized by equipment . . . there was literally no systematic way of learning from experience. The NRC believed their equipment was foolproof. TMI showed that no equipment is foolproof.” The accident could have been anticipated if pre-accident reports had been circulated and analyzed, but the training of operators was a “program for button pushing” that did not prepare them to deal with multiple equipment failures.152 A second report, commissioned by the NRC and led by attorney Mitchell Rogovin, faulted a complacency with the Three Ds that encouraged ignorance. The NRC had “all but ignored” the human error and “nonsafety related” systems that contributed to the accident.153

Three Mile Island seemed tailor-made to prove WASH-1400’s point that there was more to safety than design basis accidents. Operator errors posed a different kind of risk that had received little attention from the industry and regulators. So little attention had been paid to human factors that another NRC-contracted report concluded, “The human errors experienced during the TMI incident were not due to operator deficiencies but rather to inadequacies in equipment design, information presentation, emergency procedures and training.”154 There had been little operator training on the best response to symptoms of the smaller, more common loss-of-coolant accidents like Three Mile Island’s. Small accidents required a different configuration of safety systems and operator responses than large ones. “We have come far beyond the point at which the existing, stylized design basis accident review approach is sufficient,” Rogovin wrote. The NRC needed to change “by relying in a major way upon quantitative risk analyses and by emphasizing those accident sequences that contribute significantly to risk.”155
The post-accident validation of the Rasmussen Report, one NRC official told a reporter, signaled the “rebirth of WASH-1400.”156 The NRC’s director of probabilistic analysis, Robert Bernero, noted, “The Three Mile Island accident seems to have converted many people to be believers in probabilistic risk assessment. Many people believe that if we had listened to WASH-1400 we would have been concentrating on transients, small breaks, and human error since 1975—and would probably have prevented TMI.”157 The AEC had launched WASH-1400, in part, to prove what experts thought they already knew—reactors were safe enough—but it revealed what they did not know.158 For all its warts, the Rasmussen Report really did make risk assessment more realistic and transparent. And, while it seemed ironic that the study used subjective expert estimates, the elicitation of expert judgments became more common. Experts were biased, but predictably so, and their judgments were potentially useful. A recommitment to PRA helped drive more safety research. Beyond failure-rate data, PRA needed insights from multiple disciplines in technical and social science fields. The NRC spent several million dollars annually on human-factors research, including quantifying error and performance measures for PRA.159

Three Mile Island also added an economic rationale to risk assessment. Defense in depth may have proven its worth in protecting the public, but not in protecting a billion-dollar investment. TMI-2 was a financial disaster because there had been too much focus on unlikely design basis accidents and not enough on the banality of small mishaps. Had small precursor events received attention, the operators at Three Mile Island might have known about a nearly identical event at the Davis-Besse nuclear power plant in Ohio, where operators correctly diagnosed the stuck-open valve missed in Pennsylvania.

THE END OF THE FIRST NUCLEAR ERA
The PRA controversy played out amid a tsunami of cancellations for new reactors. The collapse of orders had an immediate impact on the utility industry and a far-reaching influence on the application of PRA. For a time, no nuclear construction project seemed safe. First, new plant orders dried up in the mid-1970s. Next, utilities canceled “paper” plants already ordered but not started; fifty-two were terminated before March 1979—clear evidence that the industry’s troubles started before Three Mile Island. After the accident, however, cuts turned into a rout.
Plants deep into construction were swept away, leaving behind concrete skeletons and debt. After seventeen years of construction problems, delays, and a $4.3 billion investment, Michigan’s Consumers Power Company announced it would convert the Midland nuclear facility to a combined-cycle gas facility, even though the plant was 85 percent complete. The William Zimmer nuclear power plant in Ohio was 97 percent finished when its owners gave up and converted the facility to coal. Others were halted simply because utilities ran out of money. In total, at least 120 plants were canceled, and no more than 112 ever held operating licenses in the same year, a far cry from the one thousand once predicted. By the mid-1980s, utility default and bankruptcy loomed. The Washington Public Power Supply System had planned on twenty nuclear power plants and finished just one. It defaulted on $2.25 billion in debt. Electricity rate increases were so steep in some regions that industries considered relocating to low-cost power regions or tried to switch power providers, a rare option in the regulated power markets of that era.160

There were plenty of postmortems and recriminations. In a memorable assessment in Forbes magazine, James Cook declared the nuclear industry “the largest managerial disaster in business history. . . . For the U.S., nuclear power is dead—dead in the near term as a hedge against rising oil prices and dead in the long run as a source of future energy.” Cook, as well as others, broadly apportioned blame among the industry, the NRC, and state regulators, but much of the wrath fell on industry “mismanagement in the first degree.”161

Nuclear power was undone, for the most part, by utility leadership that did not look skeptically at the claimed inevitability of this new technology or doubt its need even though fossil fuels were abundant. Utilities did not question their projections for robust demand growth of 7 percent, the maturity of reactor designs, or the wisdom of the rapid scale-up in power output. True costs were masked by heavy federal subsidies and the loss-leader deals with Westinghouse and GE in the 1960s turnkey era. When the turnkey era ended, prices predictably doubled, and then rose another fourfold between 1971 and 1987—much faster than coal. Lead times between the receipt of a construction permit and the commencement of commercial operation stretched from six years to ten. While vendors eventually standardized some plant designs, these came just in time to see orders evaporate in the mid-1970s.162

For many utilities, the selection of a nuclear power plant was not just ill-timed for the slumping power market of the 1970s; it was ill-suited to their modest capabilities.
Some built nuclear plants the way they built fossil plants, with limited attention to construction management, quality assurance, and safety. For example, the ill-fated Zimmer plant had so many quality control issues that the NRC halted construction in 1982. Estimates indicated it would take $1.5 billion to fix them and build the last 3 percent. Managing the construction maze of vendors, architect-engineering firms, and subcontractors required detailed day-to-day utility attention, but an NRC study found cases where “no one was managing the project.” It concluded “poor utility and project management” were the root of quality problems. State utility commissions sometimes conducted “prudence reviews” and determined utilities could not recover costs from ratepayers for mismanaged projects.163

Cook’s judgment was scathing, but for the surviving licensees, he offered no roadmap to the future. The plants that escaped the carnage carried hundreds of millions in construction debt, and an extra helping of millions for post-Three Mile Island upgrades, into their operating phase. Had the new plants performed as advertised, licensees might have managed, but the post-Three Mile Island fleetwide capacity factor of about 55 percent was well below early forecasts. Some plants operated for years at less than 40 percent. Maintenance headaches drove operating costs to almost ten times original forecasts. There was obvious economic wisdom in licensees maintaining quality operations that minimized the probability of accident-inducing mishaps such as the one at Three Mile Island, but such a long-term view of operations, the NRC feared, competed with a persistent “fossil-fuel mentality” that shaved expenses for short-term profits. Further economic shocks to the power industry in the 1990s only heightened this concern. With licensees squeezed between an increasingly aggressive regulator and strained finances, the table was set for NRC-licensee conflict. Operators needed new tools to reconcile the apparent contradiction between safe and efficient operation. Increasingly, the industry and the NRC believed PRA applications could deliver both.164

Whether the NRC contributed to the debacle was a question that came in for scrutiny. Nuclear supporters blamed the NRC for creating an overly complex regulatory system that piled on expensive new rules simply to assuage public opinion. This was not entirely fair, since rising construction costs transcended time and place. Internationally, realized construction costs always exceeded initial cost projections. Most US utilities did not blame the NRC, either. In a study of cancellations between 1974 and 1982, the Department of Energy found that in only thirty-eight of one hundred cancellations did utility executives cite regulatory burden as even a partial contributor to their decision.
majority cited the industry’s optimistic demand expectations and the difficult construction financing climate. In some cases, regulation did contribute to delay and costs. As the NRC’s own studies showed, the complexity of the US regulatory process was exceeded only by West Germany’s, and Germany’s nuclear industry was not as complex as in the United States. Almost all the plants with the largest construction overruns and delays worldwide occurred in the United States. Estimates indicate that mandatory upgrades to safety and environmental features may have contributed as much as half to rising costs.165 True or not, were these complaints of regulatory burden even relevant to the NRC? It was not the AEC. Its mission was adequate protection of the public and environment, not promotion. If its upgrades bought more safety, they were worth it. Some industry supporters disagreed. One pronuclear author noted a common industry belief that the NRC’s “regulatory ratcheting” was expensive, arbitrary, and “has bought nothing” in the way of safety.166 By adding to plant complexity and requiring the allocation of licensee resources to minor safety issues, NRC backfits may have even reduced safety. Post-accident risk analysis indicated that some post-Three Mile Island upgrades certainly made plants safer, but the industry’s line of questioning revealed a discomfiting insight. No one could be sure whether NRC backfits made plants safer or not. In a 1985 report by the General Accounting Office, NRC officials admitted staff did not analyze whether the safety enhancements they ordered provided “substantial additional protection,” as its backfit rule required. In some cases, the NRC ordered modifications without knowing if they added much to safety or were technically feasible. The GAO referenced a Department of Energy report that claimed NRC backfitting was “out of control.” The flood of modifications came so fast that licensees sometimes delayed maintenance important to plant safety to attend to them. A PRA could have quantified a backfit’s contribution to safety and prioritized the most risk-significant ones, but the agency confessed it did not have enough trained staff for backfit analysis. Between 1983 and 1985, the NRC began to apply risk analysis to general safety issues and in its revised backfit rule.167 Nuclear critics feared PRA would be abused in cost-benefit analysis to block backfits that were expensive but benefited safety. Supporters countered that PRA’s methodology could make risk knowable and bring stability and transparency to NRC regulatory decisions. With PRA, one risk expert said, there was “nowhere to hide.” PRA might not renew public trust, but it would be open and direct attention to important contributors to accident risk.168
There was much to be done to make PRA better, but there was now time to do it. Lots of time. With the bandwagon market a memory, the NRC turned to the multi-decade task of reforming its regulation of operating reactors, deploying tools like PRA with an eye to balancing safety and efficiency. For the authors of WASH-1400, the pivot back to risk assessment was gratifying. Norman Rasmussen observed that skeptics at the NRC “have now strongly embraced quantitative risk assessment and insist on people doing it for all kinds of problems.”169 He was confident that critics had overstated the uncertainty in his risk estimates. Saul Levine, too, glimpsed vindication. The NRC’s attitude on PRA, he wrote, “has now come almost full circle. . . . It seems that the United States nuclear power community is finally taking to heart the words of Cicero (circa 40 BC): ‘Probabilities direct the conduct of wise men.’ ”170
4
Putting a Number on “Safe Enough”
After the Three Mile Island accident, the NRC’s complex regulatory structure got more complicated. An agency task force called the NRC’s regulations a “quilt work” of requirements, and a new layer of “beyond design basis” requirements was added on top of the traditional Three Ds approach. The NRC also added new requirements for operator training, control-room design, and emergency planning. By 1983, upgrades had cost each plant about $55 million. As noted earlier, many doubted whether the new regulations improved safety.1 To control this growing complexity, the Atomic Industrial Forum revived Chauncey Starr’s vision and called for a long-term conversion to measurable risk-based regulation, where safety decisions needed just two things: quantitative safety goals—a number that said when a plant was safe enough—and a PRA to calculate it. The industry found the approach attractive because it would simplify regulation, limit regulatory backfits to those with substantial safety benefits, and demonstrate that nuclear power was the safest technology of all. “We believe that the application of such a consistent set of safety principles would reveal that the risks associated with other human activities or technologies deserve attention equal to or greater than that currently being focused on nuclear power risks,” the Forum wrote.2 Quantification would create a “rational framework for decisions” that would solve the “arbitrary” nature of the NRC’s deterministic regulations, an industry source stated. “You can argue about the data base, but it’s all laid out”
and would address “public concerns about nuclear energy and getting them out in the open.”3 The NRC’s Stephen Hanauer agreed, arguing that more quantification would help eliminate the perception that the NRC’s proceedings had a “star chamber” quality that relied on opaque expert judgment.4

Regulation based on risk insights met resistance within the NRC. Robert Budnitz, the head of the NRC office of research, said the staff thought PRA was a “Pandora’s box” that would require tremendous effort and offer uncertain benefits. Given the wide variation in probability estimates, an NRC official warned against using PRA as “a magic crutch. . . . To use it for decision making is next to useless because of variations in situations. . . . Getting an adequate data base in our lifetime is not possible, so we have to live with uncertainty.”5

Live with uncertainty the NRC did, but PRA advocates were determined to turn risk assessment into a tool for risk management. The NRC was part of a larger movement, particularly among federal agencies, to incorporate risk assessment into a broad spectrum of government regulatory activities. On the leading edge of this wave, NRC regulators worked to inform regulations with quantitative risk insights while allowing decisionmakers discretion to consider qualitative factors in determining a threshold for “adequate protection.” Ultimately, a new and hybrid regulatory regime emerged from the NRC’s work. Neither solely deterministic nor completely risk-based, it became risk-informed regulation. The agency spent several contentious decades herding quarreling staff, industry, and critics toward an uncertain regulatory model.6 By 2000, it had risk-informed some regulations, developed quantitative safety goals, published a policy statement on risk-informed regulation, and developed risk-informed regulations in plant operations. These gains, however, came with a recognition of PRA’s practical limits.

BEYOND THE DESIGN BASIS: SAFETY THROUGH RELIABILITY
After Three Mile Island, the NRC moved with greater purpose to regulate accidents beyond the design basis of existing plants, but it had to be mindful of criticisms that it had not followed its own backfit rule. The design basis met the requirements of AEC/NRC regulations and guidance documents that a plant design provide, in NRC parlance, “reasonable assurance of adequate protection,” or, in similar language, that it could be operated “without undue risk” to the public.7 If a safety feature was essential to cope with a DBA, the NRC considered only its safety benefit when requiring it, not cost.
New safety questions not addressed in the design basis cropped up, as did Anticipated Transients Without Scram (ATWS) in 1969. Any backfits to remedy ATWS were considered enhancements beyond the minimum requirement for “adequate protection,” and the NRC could weigh benefits against costs before imposing new requirements.

The NRC developed more quantitative regulations for several safety systems and events. The ability to cope with station blackouts was improved with new regulation, training, and requirements that plants meet minimum coping times. The NRC also established quantitative reliability requirements for emergency diesel generators. A study in 2003 showed the station blackout regulations exceeded safety expectations. A probabilistic approach also informed upgrades to auxiliary feedwater systems with seismic improvements, more pumps, and diverse power sources.8

Debate over how to reduce the probability of scram failures (ATWS) had dragged on for over a decade, with the nuclear industry contending a failure was an almost one-in-a-million event with few consequences. Events settled the issue in the NRC’s favor. In June 1980, the scram system partially failed at the Browns Ferry Unit 3 nuclear power plant in Alabama. When an operator scrammed the plant during a routine shutdown, it failed to insert 76 of 185 control rods. Repeated attempts finally inserted the rods. The event was likely caused by clogging in a hydraulic scram line. A year and a half later, a reactor scram signal partially failed to open circuit breakers and shut down the Salem 1 nuclear power plant during a plant startup. Neither event caused damage, but they lent credence to the NRC’s probability estimates—more pessimistic than WASH-1400 or industry estimates—and to the role of common-cause failures.9

In June 1984, the NRC adopted a final rule on the ATWS issue. Incorporating orders the NRC had already issued to licensees, the rule provided a long-term requirement that scram systems incorporate principles of redundancy, reliability, independence, and diversity. The NRC staff recommended a probabilistic goal that an ATWS event should be no more likely than once in one hundred thousand reactor-years of operation. The NRC also advised plant operators to seek ways to reduce the probability of scrams. Every time a plant scrammed, there was a small chance of an ATWS. Fewer scrams meant fewer scram failures, with the side benefit of profit: a scrammed plant could not sell electricity. Safety and shareholder return went hand in hand. In the 1980s and 1990s, scram rates dropped by a factor of ten through more effective maintenance and operations.10
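The staff’s probabilistic goal turns on a single multiplication. In the hedged illustration below, the scram-demand frequency and the failure-to-scram probability are round numbers chosen to meet the goal exactly, not the staff’s actual figures:

\[
f_{\mathrm{ATWS}} \;=\; f_{\mathrm{demand}} \times P_{\mathrm{fail}}
\;=\; 5\ \mathrm{yr}^{-1} \times 2\times 10^{-6}
\;=\; 10^{-5}\ \text{per reactor-year},
\]

which is the once-in-one-hundred-thousand-reactor-years target. The relation also shows why reducing scrams paid twice: cutting demands tenfold cut the expected ATWS frequency tenfold without touching the scram system itself.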
Probabilities were finding a place in the regulations.

THE INDIAN POINT AND ZION RISK ASSESSMENTS
After the Three Mile Island accident, WASH-1400 gained new life as a regulatory tool from legal challenges brought by the antinuclear movement. The environmental group Friends of the Earth petitioned the NRC to issue supplemental environmental impact statements at several nuclear power plants in Arizona and California to consider beyond-design-basis accidents known as Class 9 accidents. In 1971, the AEC had determined that it did not need to consider Class 9 accidents under NEPA’s “rule of reason” due to their exceedingly low probability of occurrence. Friends of the Earth argued the “repudiated” probability estimates of WASH-1400 and the Three Mile Island accident proved otherwise.11

The Council on Environmental Quality (CEQ) added weight to the Friends of the Earth petition by revising its regulations on environmental impact statements to require worst-case analysis. CEQ informed the NRC that its review of agency environmental impact statements was “very disturbing,” and it did not believe the NRC’s position on Class 9 accidents “is any longer sustainable, assuming it ever was.”12 As an independent regulatory agency, the NRC did not view CEQ regulations as having substantive impact on its regulatory functions. It admitted, however, that “a change is needed.”13

In June 1980, the commission approved an interim policy statement containing guidance on the treatment of nuclear power plant accidents in environmental impact statements, particularly sequences that could lead to core melting and radioactive releases.14 The statement took a positive view of WASH-1400’s value in regulation. It pointed out that its reconsideration of Class 9 accidents was appropriate, in part, because WASH-1400 had identified accident sequences other than design basis accidents that dominated plant risk. The commission statement acknowledged the uncertainties and limitations of risk assessment highlighted by the Lewis Report, but it expressed optimism that the state of the art was advanced enough to explore the use of the methodology in the regulatory process. The commission was also confident the new assessment would reach conclusions similar to those of the past as to the remote probability of Class 9 accidents. A later court opinion agreed; in 1984, the DC circuit court ruled that the NRC was within its discretion when it held that an earthquake fault near the Diablo Canyon nuclear power station did not warrant a “special circumstance” to reopen the Class 9 issue in a supplement to Diablo Canyon’s environmental impact statement.
WASH-1400’s troubles and Three Mile Island, the court concluded, did not mean Class 9 accidents carried more than “remote and highly speculative consequences.”15

In another line of attack, the Union of Concerned Scientists struck at the operating licenses of two of the nuclear industry’s most controversial plants. In September 1979, UCS filed a petition to suspend operation of Units 2 and 3 at Indian Point, New York, located about twenty-five miles north of the Bronx. With nearly 10 percent of the US population living within sixty miles of the site, UCS asserted that the Indian Point plants were unsafe given their “known safety deficiencies,” the “potential for enormous consequences to the densely-settled population,” and the need for a “massive evacuation.” UCS filed a similar petition for the Zion Nuclear Power Station, located halfway between Chicago and Milwaukee. Like Friends of the Earth, UCS argued in its Indian Point petition that with the Rasmussen Report now “repudiated,” the “Commission can no longer hide behind the fiction that a [severe] accident . . . can never occur.” Indian Point and Zion, they claimed, were unique risks among nuclear power plants.16

UCS had astutely chosen the timing and ground on which to do battle. The NRC had paused licensing during its Three Mile Island review, and the public was concerned with the safety of other operating plants. Critics believed that radiation releases, confusion, and human errors, such as had happened in a more remote area of Pennsylvania, might unleash chaos in populated areas such as Indian Point. The UCS case for closure of Indian Point and Zion hinged on a question of relative safety: Was nuclear power too risky to be located near cities? The prospect of evacuating a large part of the New York metropolitan area seemed daunting. Just prior to Three Mile Island, the NRC had improved emergency planning near nuclear plants, but requirements and drills amounted to mostly paper and communication exercises between site staff and local authorities. The NRC stepped up emergency planning requirements and drills after Three Mile Island.17 But even NRC Chairman Joseph Hendrie admitted that evacuation plans at Indian Point might require “special provisions.”18

In February 1980, the NRC staff rejected the UCS petition to immediately suspend operations at Indian Point and Zion. Drawing on qualitative judgments and the Rasmussen Report’s failure-rate data, Harold Denton, the director of nuclear reactor regulation, refused to suspend plant operation given the very low probability of a large loss-of-coolant accident, about one in one hundred thousand reactor years of operation.
He agreed that the densely populated sites “present a disproportionately high contribution to the total societal risk from reactor accidents” and required interim measures while the NRC investigated.19

The Indian Point/Zion reviews jolted the industry. As John Garrick of the consulting firm Pickard, Lowe, and Garrick (PLG) recalled, the UCS petition created “a sense of desperation and urgency to do something so technically sound and beyond the normal analysis that it would be impossible to shutdown the plants on technical grounds.”20 In 1976, PLG had written the first industry PRA, on the Oyster Creek nuclear power plant, capitalizing on Garrick’s 1967 dissertation work and subsequent advances in the field. PLG’s risk assessment included significant advancements in seismic modeling, including a seismic hazard curve that became a PRA standard. With a daring born of grave circumstances, the industry decided to defend Indian Point and Zion through a PRA developed by PLG, a “WASH-1400 Mini Study” with improved data and methodology.21

The preliminary PRA results were reassuring. Indian Point’s owners argued the plant designs were tailored to the large local population by incorporating extra safety features. In its PRA, PLG claimed it had developed better solutions to some of the problems that bedeviled the Rasmussen Report, including a better accounting of uncertainties, operator error, and fires, improved treatment of external events like floods and earthquakes, and better modeling of containment failure and the escape of radioactive plumes. Accident analysis also benefitted from research launched after Three Mile Island into core melt accidents and the capacity of containments to withstand meltdowns, steam explosions, and flying debris. The results buttressed industry contentions that accidents that penetrated containment had a “vanishingly small probability.” Some hazards, such as fires and earthquakes, proved more worrisome than previously shown, but the overall results were encouraging.22

With the possibility that the Indian Point/Zion risk assessments might produce defensible probability estimates, interest in establishing quantitative safety goals grew. The utilities requested the NRC spell out “a uniform, quantitative unambiguous safety goal for Zion and Indian Point plants,” a number that equated to “adequate protection” as required by the Atomic Energy Act of 1954. The NRC was already exploring safety goals, and the effort became a major program.23 The Indian Point PRA might save Indian Point and demonstrate nuclear power was safe enough, too.
In 1982, the utilities that operated Indian Point 2 and 3 released the final Indian Point Probabilistic Safety Study. The study claimed PRA was ready to serve as an integral part of safety regulation. It calculated the components of reactor risk “as precisely as possible” to compare the relative risks and benefits of nuclear power to competing technologies. The competition was not close, the study found. The probability of an accident resulting in acute deaths of more than one hundred was no more than one in 4.8 million reactor years for Unit 2 and one in 29 million years for Unit 3. The more likely estimate was one in one hundred million for Unit 2 and one in a billion years for Unit 3.24

The commission directed the agency’s Atomic Safety and Licensing Board to conduct informal hearings to collect information and evaluate issues raised in the UCS petition. The commission requested that the board consider PRA as part of the evaluation process. This constituted a new licensing direction for the NRC. Wary of PRA after the Lewis Report, the commission decided to take another look. PRA, as much as Indian Point, was on trial.25

Outside reviews of the PLG study by Brookhaven, Sandia National Laboratories, and antinuclear groups were less confident of PLG’s estimates. Much as with the Rasmussen Report, reviewers raised doubts about the error bands associated with the top-line numerical results. Sandia, for example, found PLG had comprehensively covered previously unaddressed issues, but its estimates were “unconservative” for earthquakes, tornadoes, and hurricanes, underestimated the importance of fires, and expressed “undue optimism” about the reliability of humans when it came to responding to accident situations.26 Expert witnesses for the Union of Concerned Scientists, the National Audubon Society, and Friends of the Earth agreed that PRA could improve reactor safety design, but they saw little improvement in the large uncertainties that had beset WASH-1400. PLG’s absolute probability estimates, they said, were inappropriate for regulatory decisions. Focusing instead on accident consequences, they predicted up to fifty thousand latent cancer deaths.27

While Sandia and NRC staff analysis criticized PLG’s confidence in its estimates, they agreed that the risk from a large radioactive release was “well below the design objective.” Compared to competing hazards that nearby residents lived with every day, Indian Point did not add significantly to background risk. The area already had over seventy-five hundred accidental deaths and twenty-eight thousand cancer deaths per year. The plant likely met the draft quantitative safety goals under development by the commission.28
The licensing board majority sought a middle ground. They agreed that risk assessment had value but concluded the large uncertainties might warrant a “risk aversion factor” that established a much lower acceptable probability for unlikely, high-consequence accidents than for ones with higher probabilities and lower consequences. The risk aversion factor was similar to that proposed in the British “Farmer Curve” of 1967, which factored in public dread of very large accidents. This was a particularly appealing standard for sites in densely populated areas such as Indian Point and Zion. The nuclear industry, however, objected that aversion factors were unfair and unethical, since they placed the risk burden on rural residents in sparsely populated areas, and they penalized the nuclear industry despite its superior record of safety. Studies showed that the nuclear industry already devoted as much as thirty-five times the resources to save a hypothetical life as did the mining industry and highway designers. Nevertheless, the board majority recommended the commission consider adopting such a standard.29
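In frequency-consequence terms, risk aversion means the tolerable accident frequency falls faster than consequences grow. A minimal sketch of the idea in Python, with an invented constant and aversion exponent rather than Farmer’s or the board’s actual numbers:

# Farmer-style frequency-consequence limit with risk aversion.
# Tolerable frequency f(C) = k / C**alpha; alpha > 1 penalizes
# high-consequence accidents more than proportionally. The values
# of k and alpha here are invented for illustration.

def tolerable_frequency(consequence, k=1e-2, alpha=1.5):
    """Maximum acceptable frequency (per year) for a given consequence."""
    return k / consequence ** alpha

for deaths in (1, 100, 10_000):
    print(f"{deaths:>6} deaths -> limit {tolerable_frequency(deaths):.1e} per year")

With alpha set to 1, the products f(C) times C would be constant (risk neutrality); an alpha of 1.5 makes a ten-thousand-death accident one hundred times less tolerable per unit of expected harm than a single-fatality accident, which is the dread the board sought to capture.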
In May 1985, PRA won a modest victory. In its review of the licensing board’s recommendations, the commission concluded Indian Point’s accident risk was a “small fraction of the competing non-nuclear background risk to which the population around Indian Point is exposed.” Drawing on PRA insights and engineering judgment, the commission concluded that Indian Point was not a “risk outlier” and might even pose less risk to the public than comparable plants. It was unnecessary, the majority argued, to add expensive features with little safety benefit, such as filtered containment vents. Beyond the design and emergency planning modifications already underway, the reactors did not need additional modifications.30 The decision was not unanimous. Commissioner James Asselstine wrote that he “could not disagree more. . . . The uncertainties are so large” that anyone could make the case that a catastrophic accident was either credible or incredible. With these uncertainties, the commission should have considered filtered containment vents that would relieve dangerously high containment pressure while removing radioactive particulates. He feared the commission majority was returning to “the complacent attitude toward safety” that existed before Three Mile Island. The other commissioners rejected Asselstine’s assessment, but disputes over PRA uncertainty and the value of accident mitigation systems remained contentious for many years.31 The commission’s Indian Point decision effectively endorsed the consideration of risk assessments in licensing proceedings, but it also set out its practical limitations. The commission agreed with intervenors that PRA estimates were “not empirically verifiable” nor “sufficiently reliable” to serve “as the sole basis” for a licensing decision.32 Yet, PRA had made significant advances since the 1960s, John Garrick wrote, toward “an engineering definition of probability” that better accounted for uncertainty and limited data by using Bayes’ theorem, which infers the likelihood of an event from new information and develops probability curves. With a better accounting for uncertainties, probabilities could assess the credibility of various hypotheses regarding extremely unlikely events.33 As Garrick recalled, his PLG colleague Stanley Kaplan said, “Statistics is the science of handling data. Probability is the science of handling the lack of data.”34 Risk-based regulation that leaned heavily on quantified risk was not appropriate; however, PRA could supplement qualitative engineering judgment and deterministic safety approaches with its quantitative insights.
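Kaplan’s aphorism describes the conjugate updating at the heart of PRA data analysis. A minimal sketch in Python, assuming a gamma prior and Poisson evidence; the prior parameters and operating data below are invented for the example, not PLG’s actual inputs:

# Bayesian update of a component failure rate (per reactor-year).
# With a Gamma(alpha, beta) prior and n observed failures in t
# reactor-years, the posterior is Gamma(alpha + n, beta + t).

def update_failure_rate(alpha, beta, n_failures, exposure_years):
    """Return the posterior parameters and posterior mean rate."""
    post_alpha = alpha + n_failures
    post_beta = beta + exposure_years
    return post_alpha, post_beta, post_alpha / post_beta

# Hypothetical prior: mean rate 1e-3 per reactor-year (0.5 / 500).
# Hypothetical evidence: zero failures in 2,000 reactor-years.
a, b, mean = update_failure_rate(0.5, 500.0, 0, 2000.0)
print(f"posterior mean failure rate: {mean:.1e} per reactor-year")  # 2.0e-04

Sparse evidence of trouble-free operation pulls the estimate down without pretending the data alone settle the question, which is what Garrick meant by an engineering definition of probability.

SAFETY GOALS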
By the 1980s, it had been more than thirty years since the AEC’s Reactor Safeguard Committee first pined for a quantified safety goal of one in a million. The Rasmussen Report rekindled hope for a quantitative goal until it ran into trouble. NRC commissioner Victor Gilinsky told Congress in 1976 that with the Rasmussen Report regulators might soon be able to spell out “an explicit numerical safety goal.”35 As the Rasmussen Report became controversial, the issue lay dormant until Three Mile Island. Less than two months after the March 1979 accident, the Advisory Committee on Reactor Safeguards (ACRS) informed the commissioners that it was “time to place the discussion of risk, nuclear and nonnuclear, on as quantitative a basis as feasible” by developing safety goals that could “provide important yardsticks” for judging safety.36 The initiative came from ACRS member and UCLA professor David Okrent. Among ACRS members, Okrent had been the most persistent advocate for conservative safety positions, but he agreed that efficiency and safety went hand in hand. In the mid-1970s, he had directed a multifaceted National Science Foundation project to improve risk assessment methodology and data on nuclear and nonnuclear hazards. In July 1979, he testified to Congress in favor of safety goals that might limit “ad hoc decisions” and “gross inequities” in safety spending. Nuclear plant operators spent more than one hundred times what coal plants did to save a life. “If you don’t [pursue safety] in a way that is cost effective,” Okrent said, “you are killing people.”37
A safety goal offered numerical clarity that qualitative terms like “adequate protection,” “defense in depth,” and “engineering judgment” did not. NRC general counsel Leonard Bickwit told the commission that the NRC’s deterministic regulations “are not specifically based on any single underlying concept of adequate protection.” “The regulations,” he wrote, “do not provide any guidance as to which accidents are credible and which are not, although accident probability is clearly the determining factor.” He suggested developing a clear policy statement on the commission’s safety philosophy.38 The time to put a number on “safe enough” seemed to have arrived. Three Mile Island made safety goals imperative. The NRC had paused licensing of new plants during its Three Mile Island review, and in October 1980, it announced it would launch proceedings to establish new rules on siting, emergency planning, and severe accidents; the latter was called the degraded core rulemaking. Industry feared the rulemakings might lead to expensive, exotic safety devices, such as filtered containment vents, or a “core retention” system like the core catcher proposed by Westinghouse in the 1960s. Like a giant soup ladle, it would catch a molten core blob as it melted through the reactor vessel. One trade publication feared “a never-ending spiral of new regulations that cost billions of dollars in delays and backfitting [with] . . . only marginal safety improvements.” Safety goals might put a limit on backfits.39 Norman Rasmussen confided to Saul Levine, “I can’t tell you how important I think it is to establish [safety goals].”40 Goals would give meaning to PRAs. “If you don’t have a [numerical] target in mind,” one official said, “how do you know if your [design] is successful? . . . You really have to have a goal in mind.”41 The seeming objectivity and simplicity of a regulatory system based on quantified safety goals held broad appeal. The president’s commission on Three Mile Island, an NRC staff task force, and the Rogovin Report recommended greater use of PRA and safety goals to increase public understanding and bring order to the NRC’s regulations. Rogovin acknowledged the still “large uncertainties” in risk assessment, but he expected it was “possible to envision establishing quantitative risk standards” to augment existing regulations. “How safe was safe enough?” was a policy question fit for Congress and the executive branch, he admitted, but no such direction was likely. He urged the NRC to strike out on its own. “The time to begin is now.”42 In 1979, the NRC launched an unprecedented effort to apply the latest social science and technical research to a safety goal policy statement.
The NRC set out with aspirations to draft goals that acknowledged the public psychology of risk, the value judgments and uncertainties in risk estimates, and the possibility of shifting public expectations. They were to be publicly acceptable, practical for regulatory application, and stated with simplicity. The agency gathered information from numerous public forums and contracted for an important report from psychologists Baruch Fischhoff, Sarah Lichtenstein, and Paul Slovic of the Decision Research group and others who had carefully studied public attitudes toward nuclear power. The input of the social scientists resulted in one of the most lucid NRC reports issued on almost any subject, Toward a Safety Goal. From this input, the NRC sought to eliminate from the goals its penchant for stating risk in the fog of negative powers of ten. The process was to be transparent, and the policy statement would include qualitative goals written in plain English and quantitative targets written as plain numbers expressed as a small percentage of the typical risk borne by the public in everyday life. More complex technical jargon would be relegated to subsidiary goals. Decision Research and the NRC hoped the safety goal process would educate participants and the public: “Somehow society should become better or wiser for its adoption.”43 Drafting the new goals took six years, as NRC aspirations had to confront the messy reality of policymaking. Nuclear power critics were suspicious. PRA methodology and the goals seemed like an effort by experts to hide political choices behind purportedly objective numbers. Toward a Safety Goal acknowledged that nuclear power’s opponents were worried “about power being concentrated in an intellectual elite . . . and ideological biases lurking in the ostensibly neutral assumptions underlying the methods.” This suspicion of risk experts operating under a “cloak of professional wisdom” was shared widely among environmentalists and reached into social science scholarship, where studies of a “risk society” theorized that risk experts exerted influence by redistributing societal risk from the powerful to the powerless.44 The commission considered proposals for goals from the ACRS, the public, and industry. NRC staff vowed goals written with “clarity,” and the Atomic Industrial Forum called for “simplicity of expression and practicality of application.” Drafting clear descriptive qualitative goals was relatively easy. The first goal stated that the risk to residents near a power plant should be “such that individuals bear no significant additional risk to life and health.” A second societal goal stated that risk “should be comparable to or less than the risks of generating electricity by viable competing technologies.”45
Simplicity did not characterize the formulation of quantitative safety goals. The debate over them elicited sharp, complex disagreement over arcane numbers. What did it mean to a layperson that nuclear power should have less than one annual fatality per one thousand megawatts of electricity generated? Or that a backfit be limited to $100 per man-rem averted? Even a simple value like one in a million was abstract for experts, let alone the public. Education did not seem to work. An intense educational effort about nuclear power launched by the Swedish government left its eighty thousand participants more confused than when they started.46 Mistrust also dominated the debate. Critics of nuclear power saw darker forces at work in the safety goal debate. Ellyn Weiss, an attorney representing UCS, said PRAs “give the illusion of precision, but can be manipulated to support whatever the predetermined objective may be.”47 Paul Gunter spent most of his adult years opposing nuclear power. He recalled, “We saw nuclear power as concentrations of not only power but influence. And, ultimately, connected to the military industrial complex.”48 Critics argued that it was impossible to compare conventional hazards to the incomparable risks of nuclear power. Weiss said any attempt to compare nuclear and non-nuclear technologies would “inevitably overlook” nuclear power’s “unique risks” and turn the exercise into a “political quagmire.”49 “One cannot equate the risks of nuclear weapons proliferation with health and safety risks associated with solar photovoltaic technologies,” critics argued, “nor can one make the risk of a catastrophic accident at one of today’s operating reactors ‘consistent’ with the risk of air pollution from burning fossil fuels.”50 Risks that seemed reasonable to nuclear proponents were viewed differently by others. By aggregating the few additional annual cancers caused by each nuclear power plant, critics turned seemingly small numbers into big ones. They pointed out that over the life of the US fleet of about one hundred plants, the proposed goals accepted the possibility of twelve thousand dead and twenty-one thousand cancer cases. “The quantitative goals [proposed by NRC staff] are incredible,” said one critic. “I can’t believe the commissioners will approve these.” Such goals implied “that a catastrophic nuclear accident in our lifetime is acceptable. We disagree.”51 The antinuclear perspective had a difficult time before the commission. In the 1980s, the political climate for nuclear power had improved. The Jimmy Carter administration had been skeptical of nuclear power. Ronald Reagan’s victory in the 1980 presidential election cemented a
more partisan alignment of pronuclear and antinuclear leanings among Republicans and Democrats. This led to changes among NRC commissioners that augured well for the industry. Reagan returned Joseph Hendrie to the NRC’s chairmanship following his demotion from the position after the Three Mile Island accident. When his term ended, he was replaced by Nunzio Palladino, a former member of the ACRS and chairman of the nuclear engineering department at Penn State University. Palladino championed establishing safety goals. They “should be quantitative insofar as that is possible.”52 The new chairman also responded to the contention by nuclear critics that the safety goals would make death from nuclear accidents acceptable. “The implied premise behind the safety goal is not that injuries or deaths from nuclear accidents are acceptable, but that some additional risk from the possibility of nuclear accidents is inescapable.”53 Nevertheless, in a series of votes between 1982 and 1986, the commission mustered a bare majority for new policies and rules on safety goals, severe accidents, and backfits. The early draft safety goals were a close substantive match to industry prescriptions but offered weak execution. Downgrading the quantitative goals to “objectives” or “guidelines,” the commission stated that the individual risk of a prompt fatality within one mile of a plant boundary should not exceed 0.1 percent of the prompt fatality risk from all other accidents. Within fifty miles of the plant, fatality risk from cancer could not exceed 0.1 percent of all cancer risks. These values were remarkably close to the one-in-a-million (10⁻⁶) goal that had long dominated nuclear risk thinking. The prompt fatality objective was slightly higher than 10⁻⁶, and the cancer risk was slightly less.54 The commission majority acknowledged the “sizeable uncertainties” and “gaps in the database,” and it conceded that quantitative guidelines should be “aiming points” subject to revision “as further improvements are made in probabilistic risk assessment.”55 Reducing the goals to aiming points did not quell dissent from commissioners Peter Bradford and Victor Gilinsky. Bradford approved the draft but echoed the UCS criticism that the quantitative guidelines accepted thirteen thousand deaths over the life of the US fleet. The majority rejected Bradford’s figure on the basis that it was still one-thousandth of the total deaths in the same period for all other prompt fatalities and cancer deaths. Gilinsky had come to distrust risk quantification as impractical and an abdication of commission authority. The goals were too abstract, “too remote from the nitty-gritty hardware decisions that have to be
made every day.” He was particularly skeptical of turning over commission discretion to numbers. “It is an illusion to think, as the Commission apparently does, that ‘probabilistic risk assessment’ will alter that picture in the foreseeable future.” The majority was “delegating the Commission’s safety decisionmaking to complex computer programs that no one fully understands and which may or may not turn out to contain errors.”56 The NRC staff was divided among supporters and skeptics, too. ACRS member Harold Lewis, the head of the WASH-1400 review team, became a strong advocate of PRA use, but he worried about relying too much on quantitative safety goals.57 As the industry recommended, the commission put in place a two-year trial period during which the staff would evaluate the goals and PRA, but they would not be used in regulatory decisions. The two years also provided time for research on whether radioactive emissions during an accident would really be as high as some estimates indicated. The “numerical guidelines” and “design objectives” were to be met “where feasible.” A subsidiary objective called for a probability of a core-melt accident of one in ten thousand reactor years, a figure very close to the probability estimate in the Rasmussen Report. The industry lamented that the policy statement had become more aspirational than concrete.58 At the end of the two-year trial in 1985, the NRC staff issued a favorable review of the safety goals. The policy statement would “strengthen decisionmaking by adding more objectivity and predictability to the regulatory process.” The goals were a good “yardstick against which a wide range of regulatory issues can be measured.” The staff cautioned that the goals would supplement rather than supplant “traditional safety review methods.” It did not recommend major revisions to the policy statement.59 In the same year, the commission approved (in a hotly disputed 3–2 vote) its revision of the backfit rule. It included firmer requirements that the staff review a backfit for its impact on plant complexity, costs, and risk; however, costs were only to be considered for safety enhancements, not backfits that brought a facility up to minimum standards of adequate protection. The rule did not require that PRA be used, but the Union of Concerned Scientists and commissioner James Asselstine believed the rule would unleash indiscriminate use of inadequate PRAs to block backfitting. Asselstine argued PRA was not yet ready for backfit determinations “because all contributors to risk cannot be quantified and because core meltdown phenomena are poorly understood no one calculation of risk yields a remotely meaningful value of risk.” He feared the backfit rule and safety goals would freeze safety standards for nuclear
power when upgrades were still necessary. The NRC majority disagreed. The rule did not create a PRA straitjacket, they argued, and allowed ample room for staff judgment and qualitative information.60 The final debate on the safety goals was similarly divisive. Commission membership had changed, and a vigorous debate ensued even over the previously uncontroversial qualitative goals. Disagreement emerged over the benefit-cost guideline of $1,000 per man-rem averted. The number had been formulated to control the cost of expensive backfits that did little to reduce radiation risks. Debate arose as to whether the cost savings should also include property damage to plant equipment. Adding in such savings would make it easier for the NRC to order a backfit. The industry and members of the ACRS argued that the NRC should stick to its core mission of public health and safety. For similar reasons, the commission debated the subsidiary objective of limiting the probability of a core-melt accident to one in ten thousand reactor years. The number lumped together accidents, such as Three Mile Island, that did much property damage while killing no one, with severe ones costing dozens of lives. Commissioners Frederick Bernthal and James Asselstine pushed for a risk aversion approach by tightening the core-damage probability objective from one in ten thousand to one in one hundred thousand reactor years, a standard more restrictive than those applied to other energy technologies.61 Outgoing chairman Nunzio Palladino negotiated a compromise that he and two other commissioners supported, and Asselstine and Bernthal did not oppose. With a few tweaks, the final 1986 policy statement looked much as it had in 1983. The benefit-cost guideline was eliminated and turned over to the staff for further development. The final statement was approved just a few months after the Chernobyl accident, and the subsidiary objective of limiting all core-damage accidents to one in ten thousand reactor years of operation was replaced with a non-mandatory goal for a substantial radiation release from containment of one in a million reactor years. The industry objected. To model an accident all the way to the point of a containment failure and radiation release—a Level 3 PRA—was complex, costly, and typically had large potential errors compared to a limited core-damage Level 1 PRA that was appropriate to meet the goal of one in ten thousand.62 Nevertheless, the number stood. One in a million had become part of the NRC’s safety goals. The precedent of having a safety goal policy was probably more important than the specific numbers it established. The commission had finally put a number on “safe enough,” beyond which constant striving to
improve reactor safety was not necessary. The goals also made PRA a permanent regulatory fixture. The policy statement admitted PRA could not operate on its own due to the nagging problem of “sizeable uncertainties . . . and gaps in the data base.”63 Quantitative goals had to be “consistent with the traditional defense-in-depth approach and the accident mitigation philosophy requiring reliable performance of containment systems” and could not “serve as the sole basis for licensing decisions.” Yet, as NRC director Gary Holahan said, “Both PRA and safety goals are intertwined in a way that neither one can be used without the other. PRA technology with no goal in mind never tells you how to be satisfied. Safety goals without PRA would leave you blind to when you had achieved them.”64 By the end of the safety goal policy debate, and in light of the Chernobyl accident, probabilistic thinking pervaded how NRC commissioners and staff thought of accident risk. In testimony before Congress, Palladino was optimistic. Noting its very different design and lack of a containment building, he expected that Chernobyl had little to teach the United States. He reviewed recent staff analysis of PRA work by the industry and the NRC. In the next twenty years, he told Congress, a severe core melt accident had about one chance in eight of occurring, a notable improvement from estimates just a year earlier. Commissioner Asselstine, a PRA critic, vigorously disagreed but did so by invoking probabilities. He predicted, “we can expect to see a core meltdown accident within the next twenty years” in the United States. Chernobyl was a reminder, he said, that “each design has its own core meltdown vulnerabilities,” and he called for a containment failure rate of no more than one in one hundred accidents. Pro or con, the language of probabilities dominated the safety debate.65 Translating the quantitative health objectives into regulations took several years and, in the process, they lost clarity.66 The values for prompt fatalities and cancer expressed in the final statement had to be of practical value to plant operators and NRC staff. Engineers lived in a world of hardware, not health physics. Radiation’s health hazards had to be translated into “surrogates” that engineers could use: measures of hardware and safety system failure probabilities. For example, the prompt fatality health objective was expressed as the Large Early Release Frequency (LERF).67 The staff also split the subsidiary objective of one in a million reactor years for a large release into two probabilities: one for core damage frequency (CDF) and one for containment failure probability.68
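The arithmetic of the split is simple; the particular allocation below is an illustrative one consistent with the one-in-a-million product, not necessarily the staff’s exact figures:

# Splitting a large-release goal of 1e-6 per reactor-year between
# two layers of defense. The allocation is illustrative only.

cdf_objective = 1e-4               # core damage frequency, per reactor-year
containment_failure_prob = 1e-2    # conditional probability, given core damage

release_frequency = cdf_objective * containment_failure_prob
print(f"large release frequency: {release_frequency:.0e} per reactor-year")  # 1e-06

# Each factor is a separate objective. A plant claiming a very low
# CDF cannot use that claim to excuse a weak containment; both
# layers must hold on their own.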
Splitting the objective this way supported the concept of defense in depth. A licensee, for example, could not argue for eliminating a containment building simply because its PRA estimate showed a low core-melt probability. In January 1992, NRC staff rolled these objectives into a proposed decision matrix for backfits that linked decisions to PRA results and the safety goals. A reactor with low probabilities of core damage and containment failure could require no backfit on a safety issue. As probabilities of core damage and containment failure shifted higher on the matrix, a cost-benefit analysis was required. Backfits that improved safety for less than $1,000 per person-rem were cost-justified. If the reactor did not meet the minimum level of “adequate protection,” no cost-benefit analysis was needed to justify the change.69 For all the controversy, convoluted debate, and arcane numbers, the NRC’s safety goal policy statement had worldwide influence. Almost every country affiliated with the Organisation for Economic Co-operation and Development’s Nuclear Energy Agency (NEA) adopted some version of quantified goals, objectives, or criteria.70 Numbers, even ones hard to understand, proved popular in establishing regulatory stability and public acceptance. Did all the effort mean that nuclear power plants met the safety goals? The answer was mixed. The goals were formulated as a rough average for the US fleet. Analysis showed that most plants met the goals for core damage and containment integrity. The staff also concluded that for internal events most plants were likely to meet the quantitative health objectives for prompt fatalities and latent cancers, but some were close to or above them. Add in the risk of external events, one NRC report concluded, and the picture was less sanguine. “While it is likely that the fleet of plants, on average meet the Safety Goals, large margins [of safety] do not exist.”71 Later studies viewed that assessment as pessimistic.

SEVERE ACCIDENTS AT OPERATING PLANTS
The Rasmussen Report and Three Mile Island exposed a host of unanswered questions regarding core-melt behavior, containment building performance, data collection, and PRA methodology. The AEC had done limited severe-accident research in the late 1960s, and the NRC supported new studies into the progression of a core meltdown and the potential for flammable gases, steam explosions, and chemical reactions between a melted core and concrete. Data collection on plant operations and unusual events also received greater priority when the
NRC created the Office for Analysis and Evaluation of Operational Data and an Accident Sequence Precursor Program. Precursors were component malfunctions and errors that, if combined with other mishaps, could produce a sequence leading to core damage. The NRC used fault tree analysis to find patterns in these precursors and determine how close these events came to core damage. This work allowed the NRC to make comparisons between events to determine which posed the greatest risk. For example, the NRC found that the 1975 Browns Ferry fire had come closer to core damage than any other precursor event at a nuclear power plant. Fire hazards were a major contributor to plant accident risk.72 The NRC’s research helped inform development of a 1985 policy statement on severe accidents for future designs and existing reactors. The policy statement declared that existing plants did not need further regulatory action unless significant new safety information emerged that might call into question whether a plant posed undue risk to public health and safety.73 Before closing out the question of severe accidents, the NRC applied its research and PRA advancements to a safety review at each plant. First, it carried out an ambitious “do-over” of the Rasmussen Report, NUREG-1150, a multi-plant study that addressed many of WASH-1400’s weaknesses and unanswered questions.74 The new study sought to establish several gold-standard reference PRAs for utility use. The NRC staff could then use the utility results to draw general conclusions about the overall safety of nuclear power plants relative to the safety goals. NUREG-1150 improved on previous PRAs in its use of data and methodology.75 It was also notable for how it drew on insights from social science research to incorporate expert judgment in risk estimates. Research among social scientists had revised scholarly understanding of bias and perception of risk among the public and in expert judgment. Bias among experts was a problem, but the research also suggested their judgments could be used in limited PRA applications when structured through a formal elicitation process. The NRC devoted considerable resources to developing elicitation methods for practical PRA application.76 NUREG-1150’s results indicated that accident sequences related to internal events, such as pump failures, pipe breaks, or operator errors, generally had low probabilities of producing core damage.
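The fault-tree logic behind such sequence and precursor estimates reduces to simple gate arithmetic. A minimal sketch in Python, assuming independent basic events; every probability below is invented for illustration:

# Minimal fault-tree quantification: AND gates multiply event
# probabilities; OR gates combine them. Assumes independence.

def and_gate(*probs):
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(*probs):
    # P(at least one) = 1 - P(none), for independent events
    p = 1.0
    for q in probs:
        p *= 1.0 - q
    return 1.0 - p

# Stylized sequence: main feedwater fails AND the backup fails
# (backup fails if its pump fails OR its trip valve is left shut).
main_feedwater_fails = 0.1                # per demand, invented
backup_fails = or_gate(0.01, 0.02)        # pump failure or valve error
sequence_prob = and_gate(main_feedwater_fails, backup_fails)
print(f"sequence probability: {sequence_prob:.1e} per demand")  # ~3.0e-03

Chaining such gates through a plant’s systems, and weighting sequences by initiating-event frequencies, is what let the precursor program rank events like the Browns Ferry fire against other near misses.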
It also found differences in the overall accident risk posed by different reactor designs. General Electric’s boiling water reactors (BWRs) had lower probabilities of core-damaging accidents than pressurized water reactors (PWRs). In addition, GE’s containment designs—the Mark I, II, and III series—allowed operators to vent gases during a LOCA, filtered of many isotopes by the suppression pool, an attractive feature especially after the Three Mile Island accident inadvertently released small amounts of radioactivity.77 Nevertheless, the Mark I containments had long concerned regulators. In a memorable 1972 “shot from Steve,” Stephen Hanauer proposed that the design be discouraged for future construction proposals due to its vulnerability to failure during a core melt accident.78 NUREG-1150 concluded that the BWR/Mark I system had a lower probability of a core meltdown, but, if one happened, its containment was more likely to fail than a PWR’s dry containment. In this situation, defense in depth overrode a literal reading of risk numbers. Regardless of a BWR’s low accident probabilities, defense in depth required that it have balanced capability across each layer of defense. The NRC looked for ways to improve the resiliency of the Mark Is. In 1989, the staff recommended to the industry that the gas vent ductwork in Mark Is be replaced with hard piping. By the mid-1990s, all but one of the Mark I plants had voluntarily made the hardened-vent modification. The Mark I vent issue demonstrated the value of risk-informed regulation that balanced PRA insights with defense-in-depth principles.79 The NRC also used NUREG-1150 to improve understanding of lingering accident issues raised by the Rasmussen Report regarding fires and external events, such as airplane impacts, flooding, earthquakes, and tornadoes.80 With these insights, the NRC instructed licensees to draw on NUREG-1150 to conduct Individual Plant Examinations for internal vulnerabilities and a similar review of external hazards. Utilities identified over five hundred safety upgrades to plant operation and design, such as improved procedures, personnel training, upgrades to auxiliary feedwater systems, components, and electrical power, and added diesel generators. The review of external events produced similar improvements. While most licensees did not attempt to quantify the safety value of their improvements, those that did indicated that their upgrades reduced the risk of a core-damaging accident.81 In taking its regulations beyond the design basis with NUREG-1150, the NRC supplemented the Three Ds of traditional regulation of reactor designs with the insights of PRA. In the decade after Three Mile Island, the NRC was consumed with improvements to its regulation of operating reactors and digesting the possibilities of PRA. This work had a second implicit goal: a “second
nuclear era.” If licensed reactors could operate safely, new ones might be built. By 1990, “nuclear renaissance” became a term used without embarrassment. Even in the bleakest years after the TMI-2 accident, industry and government agencies held quiet discussions about what a second chance might look like, one that avoided the regulatory errors of the first—interminable hearings, unpredictable design requirements, and a labyrinthine licensing process. In 1989, the NRC published a final rule, 10 CFR Part 52, to pre-certify power plant sites and reactor designs and to create a one-step combined construction permit and operating license.82 In this second era, risk assessment would have a central place. Part 52 required PRAs from the start. In 1992, Congress enacted supportive legislation to allow a licensing process with only one evidentiary hearing. Congress also appropriated $100 million over five years to support advanced reactor designs. As one industry journal observed, “In a single stroke, the nuclear community has now been given virtually everything it has said it needed from the federal government to make nuclear power an attractive, economic energy choice.”83 Vendors and the NRC had to bide their time. If market conditions were right, the nuclear industry and the NRC would be ready. In the meantime, upheaval in the power industry altered the staid world of electric utilities. In these tough times, PRA became a tool to make reactor operations more efficient while keeping reactors safe enough.
5
Beyond Design: Toward Risk-Informed Regulation
“The principal deficiencies in commercial reactor safety today are not hardware problems, they are management problems,” concluded the so-called Rogovin Report, the NRC-sponsored study on the Three Mile Island accident. “[The NRC] has failed to take timely account of the actual operation of existing plants . . . [and] is incapable, in its present configuration, of managing a comprehensive national safety program. . . . [It] has virtually ignored the critical areas of operator training, human factors engineering, utility management, and technical qualifications.”1 Rogovin set the tone for a new regulatory era. Design safety and construction commanded regulatory attention in the 1960s and 1970s; in the 1980s, the NRC was preoccupied with how people and management influenced safe operations. Like the earlier Kemeny Commission report, Rogovin’s blistering assessment recommended a consistent solution: more oversight, more inspections, and more enforcement.2 But the call for “more” was inconsistent with the two reports’ other main complaint: the nuclear industry was over-regulated. Kemeny and Rogovin left the agency with a paradox. How could the NRC insert itself deeply and punitively in the day-to-day operations of its licensees, meting out “substantial penalties,” without increasing the already “vast body of regulations” that was so “voluminous and complex” as to be a “disincentive to safety” and a threat to industry viability?3 More oversight was sure to spark harsher enforcement and conflict with industry, and
it did. In 1978, just before Three Mile Island, the NRC issued just fourteen monetary penalties for regulatory violations. By 1987, it issued 114. Well-publicized lapses in safety culture forced the NRC to deliver firm penalties. At the Peach Bottom plant in Pennsylvania, thirty-three of thirty-six plant operators were fined for “inattentiveness” (sleeping) while on duty. Only a whistleblower tip alerted the NRC. To avoid detection and catch offenders, an NRC plant inspector turned off the lights in his plant office and hid under his desk until late into the evening shift. Emerging from his hiding spot, he entered the control room and caught a dozing operator. A Time headline, “Wake Me If It’s a Meltdown,” offered humorous commentary on a serious safety lapse. The NRC ordered a shutdown of Units 2 and 3 that lasted over two years. Each year, industry employees faced criminal charges such as conspiracy, making false statements, and lying to NRC investigators.4 Peach Bottom was not alone. The Tennessee Valley Authority’s three units at Browns Ferry in Alabama remained offline for years. To bring consistency to management, the NRC used the blunt tool of enforcement power. The NRC confronted an assemblage of licensees that ranged from small municipal utility districts to Fortune 500 corporations with unsettling variability in management quality. While operations did improve, progress came at a heavy cost. By 1989, the NRC-industry relationship was so poor that executive director Victor Stello admitted to an industry audience that the United States had the world’s “most adversarial relationship between regulators and industry. . . . We do not trust you, you do not trust us.”5 Stello was not exaggerating. In turning to its enforcement powers, the NRC was unique among the world’s regulators. In a survey of nine nations, the NRC found all favored quiet negotiation over enforcement. While many nations had empowered regulators to issue fines or shut down facilities, they did not use those powers. Part of the difference could be explained by national ownership of nuclear power plants in some nations, but it was clear others saw enforcement as a last resort.6 The US regulator-licensee relationship had not always been riddled with such conflict over operations. Since the Atomic Energy Act of 1954, operational safety had been rooted in the twin beliefs that ownership of a plant bred a sense of ownership for safety, and that a regulator was ill-equipped to regulate corporate management. Until Three Mile Island, the NRC’s oversight role was as an auditor of rules compliance, not an assessor of management competence.7 After the accident, NRC oversight became more aggressive. Resident inspectors became a permanent fixture
at every site, and the agency instituted a new oversight and assessment process. There was not yet a satisfactory operational database or reliable quantitative performance indicators, and the domain of an NRC inspector was rules violations, large and small. As the public expected more, the NRC tried to assess management quality and whether a collection of minor violations was a portent of a major safety event. This more aggressive stance was a recipe for conflict that persisted for two decades. To escape, the NRC and industry turned to PRA, whose perceived objectivity might smooth relations. The first part of the solution involved maintenance.

MAINTAINING SAFETY
Maintenance does not usually command the attention of historians. Yet, disaster lay in ambush in its dull activities. The underlying causes of the nation’s worst nuclear-related accidents—the loss of the USS Thresher nuclear-powered submarine, the death of three workers at the SL-1 experimental reactor, the Browns Ferry fire, and the Three Mile Island accident—were rooted in improvised maintenance, inspections performed under tight deadlines, and too-tired night-shift errors. The Three Ds focused on design safety and did not sufficiently recognize the relationship of operations and maintenance mishaps to risk. WASH-1400 and Three Mile Island demonstrated that significant safety gains could be found by reducing the probability of maintenance problems. At nuclear power plants, personnel devote much of their routine to performing testing and maintenance on safety equipment. NRC regulations, called technical specifications, placed limiting conditions on operation when certain safety equipment became inoperable or was taken out of service for maintenance. For example, if problem equipment could not be fixed within a specified time, the plant might have to shut down. Other specifications governed the timing and duration of surveillance-testing requirements. Beyond these minimum requirements there were few regulatory tools to compel high-quality maintenance programs. Although technical specifications set safe operational boundaries, it was not clear that they always made plants safer. Their requirements sometimes included excessive testing and maintenance, which could cause operator errors, accidental scrams, and equipment wear-out. The numerical requirements in technical specifications, such as the time safety equipment was permitted to be out of service, were sometimes arbitrary judgments rather than true risk-reducing requirements. Post-Three Mile Island accident reviews noted, but did not make much of, maintenance’s role in the
accident, and the NRC mostly deferred to industry to improve maintenance programs. Founded in 1979, the Institute of Nuclear Power Operations (INPO) helped utilities strive for “excellence” in safe operations, including reducing unplanned events. Embracing the INPO creed, however, was voluntary and relied on a collective industry ethic of safety and peer pressure. INPO’s message of excellence was not heard by every licensee. Nuclear power required a different managerial attitude than a conventional plant did. “Our major management shortcoming,” said the CEO of Boston Edison, “was the failure to recognize fully that the operational and managerial demands placed on a nuclear power plant are very different from those of a conventional fossil-fired power plant.” With this fossil-fuel mentality, one industry official admitted, “repair and maintenance wasn’t a high priority. What was important was continuous operation, and repair and maintenance means being shut down.”8 Another industry official agreed; the lawyers and financial leadership at utilities looked at fossil and nuclear plants the same way, as a “cash cow. . . . They weren’t really interested in how the plant runs, so long as it produced. ‘Let’s run the plant every minute we can and we’ll fix it when it breaks.’” It was not clear to licensee management that an investment in maintenance meant profits.9 But it did become clear that good maintenance meant safety. On June 9, 1985, eight operators took the midnight watch at the Davis-Besse nuclear plant east of Toledo, Ohio. Nestled among soybean and corn fields in the flat countryside along Lake Erie, Davis-Besse was a near twin of the Babcock and Wilcox reactors at Three Mile Island and, as mentioned earlier, in 1977 it had experienced a loss-of-feedwater event and stuck relief valve nearly identical to TMI-2’s, which its operators correctly diagnosed. The supervisors who took the night shift had worked at Davis-Besse since it started up in 1977, and they were well versed in how to respond to a transient like TMI-2’s. They did not need a lesson in the seriousness of a loss-of-feedwater event. They were about to get one anyway. All PWRs have three cooling systems (see figure 8). The primary cooling system is a piping loop that circulates water through the reactor core. Once it removes the heat from the core, the water—now radioactive—passes through a steam generator where it transfers its heat, but not its radioactivity, to a secondary loop of water that is boiled at high temperatures and pressures. The secondary system’s steam passes through a turbine generator to make electricity. The steam is condensed back to water by a third loop of cooling water. This last loop is pumped outside and sprayed down inside cooling towers—the parabolic concrete structures, such as the four iconic ones at Three Mile Island. Meanwhile, the condensed
water in the secondary loop is pumped back to the steam generators via feedwater pumps and the secondary cycle repeats. Feedwater in the secondary loop is critical to keeping the primary loop and reactor fuel cooled. Because of its safety importance, an auxiliary feedwater system provides a backup in case feedwater is lost due to a malfunction or operator error. It was an uneventful change of the watch, and the routine of the “graveyard” shift began. The plant was at 90 percent power with no ongoing equipment testing or expected change in plant conditions. There were some concerns with the automatic control system for the feedwater pumps; they had tripped offline the week before and troubleshooting had not identified a cause. Plant management had opted to restart Davis-Besse and monitor the pumps. The first hour of the shift involved instrument checks and routine surveillance. By 1:35 a.m., tedium. One of the reactor operators went to the kitchen for a snack with an equipment operator. The other reactor operator monitored the plant while studying procedures for an upcoming qualification test. The assistant shift supervisor was on his way back to the control room after finishing his snack. The shift supervisor was doing paperwork.10 As the assistant supervisor entered the control room, he saw that one of the main feedwater pumps had tripped offline. The water level in the steam generators began to drop. Temperature and pressure in the primary cooling system rose, and, before the operators could respond, the reactor scrammed automatically due to high coolant pressure. This was a routine anticipated transient, but a scram is an unsettling event. The accustomed scream of the turbine was broken by the steam stop valves as they slammed closed with a thud so forceful the giant valves swayed; pumps stopped and the steam turbine audibly wound down; primary coolant and steam generator relief valves opened; and automatic control systems rapidly cycled valves, started pumps, and tripped electrical equipment. Operators scattered throughout the plant heard the change in plant conditions and headed to the control room. Over the next twenty minutes there were twelve separate equipment malfunctions, including several common-mode failures, and operator errors. The safety significance of this collection of mishaps was that the steam-driven main and auxiliary feedwater systems tripped offline and could not be restarted from the control room. This total loss-of-feedwater event was beyond the design basis of the plant, and the reactor coolant started to heat up. The reactor operators had their hands full as the primary system temperature rose four degrees per minute and pressure soon exceeded 2,400 psi. The pilot-operated relief valve cycled open and
closed three times to relieve primary coolant pressure before it stuck open—just like at Three Mile Island. The operators isolated it by closing a different valve. The environment in the control room was “hectic” and the assembled group of operators understood the importance of restoring feedwater. Two plant operators would have to improvise and run down several levels to the locked basement auxiliary feedwater rooms to reset the trip valves and start the pumps. Despite a “no running” policy, a pair of men bounded down the stairs. One operator was fleeter than the other, and the lagging operator threw him the key ring as he sprinted ahead. Once they removed the padlock, they had to descend a ladder, remove more chained locks, reset the pump’s trip valve, and reposition other valves. The two operators had never before reset the pump trip valves in such a hurry. Under high steam pressure, the valves were very hard to open. A third operator brought a valve wrench to help turn the handwheels. The assistant supervisor arrived and decided to improvise. He tried to start a small utility electric feedwater pump not used for emergencies. It was a multistep task requiring the supervisor to run to four separate locations in the plant to remove locks, position valves, and install the pump’s fuses. Four minutes elapsed, but it worked. The first water started flowing to the steam generators at 1:51 a.m., sixteen minutes after the event began. Soon, the steam-driven pumps spun to life. Cooling restored, there was no TMI-2, part two, at Davis-Besse. The Davis-Besse loss-of-feedwater event was the most significant since Three Mile Island. On the positive side, the event did not end in disaster for two important reasons. For all their mistakes that night, the operators understood what had happened and what needed to be done. One NRC staff member thought “the operators performed admirably” by compensating for each other and the malfunctioning equipment. It showed that operators can add to safety, even when they make mistakes. In addition, some post-Three Mile Island upgrades made a positive contribution to the outcome.11 For the second time, Davis-Besse had drawn uncomfortable comparisons to Three Mile Island. “What makes it so unsettling is the auxiliary feed pumps went out and the relief valve stayed open,” said an NRC spokesman. “It was like the early stages of TMI, although you never got to the final stages.”12 From the perspective of PRA, the event underscored WASH-1400’s observations about the value of diverse safety equipment. A standby electric pump could have overcome the common-cause failures of the steam-driven auxiliary feedwater pumps.
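The diversity argument lends itself to a beta-factor style calculation. A minimal sketch in Python; the failure probabilities and the beta fraction below are invented for illustration, not figures from the Davis-Besse analysis:

# Redundant identical pumps share common-cause failures; a diverse
# pump does not. All numbers are invented for illustration.

p_steam_pump = 0.01   # failure per demand of one steam-driven pump
beta = 0.1            # fraction of failures common to both steam pumps

# Two identical steam-driven pumps: the common-cause term dominates.
p_both_steam = p_steam_pump ** 2 + beta * p_steam_pump   # ~1.1e-3

# Add a diverse electric pump, immune to the steam-side common cause.
p_electric = 0.01
p_all_fail = p_both_steam * p_electric                   # ~1.1e-5

print(f"two identical pumps fail:     {p_both_steam:.1e}")
print(f"with diverse electric backup: {p_all_fail:.1e}")

Redundancy alone buys little once common cause is counted; diversity attacks the common cause itself, which is why WASH-1400 prized it.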
As part of its post-Three Mile Island action plan, the NRC had moved to upgrade auxiliary feedwater systems and had pressed Davis-Besse’s licensee, the Toledo Edison Company, to add an electric feedwater pump. Based on an optimistic PRA it conducted, Toledo Edison offered an inexpensive procedural alternative. The NRC responded with a risk assessment of its own, which found the more expensive option of installing a pump provided a substantial risk benefit. In late 1984, Toledo Edison agreed to install the new pump, but had not started the installation when the event occurred in June 1985. Pointing to the very different estimates in the two PRAs, commissioner James Asselstine argued that Davis-Besse was “an excellent example of the inadequacies of PRAs for truly predicting risk.” The NRC drew a different lesson: the need for greater PRA consistency and standards.13 The root cause of the event, the NRC concluded, was weak plant management and lax maintenance practices. The numerous malfunctions of high-quality equipment were caused by “the licensee’s lack of attention to detail in the care of plant equipment” and a history of performing maintenance “in a superficial manner.”14 Toledo Edison was a small utility company, and its ambitions for nuclear power perhaps exceeded its grasp. Along with co-owner Cleveland Electric Illuminating Company, Toledo Edison had planned on three nuclear plants at Davis-Besse, but cancelled units 2 and 3 in 1980, and post-Three Mile Island upgrades stressed its bottom line, as did its historically low capacity factor. As one shift supervisor at Davis-Besse later wrote, INPO’s emphasis on excellence had not filtered down to Davis-Besse. Toledo Edison ran it as if it were “just another power plant.”15 NRC officials wondered how many other Davis-Besses and Peach Bottoms were out there. In 1986, NRC official James Sniezek told an industry conference that events like Davis-Besse were caused by general “shoddy practices” from “significant programmatic and maintenance deficiencies.”16 “We would like to see the industry get better on its own,” said one staffer.17 The NRC concluded it could no longer continue to defer to industry on maintenance. It restarted the human and organizational factors research program it had cut in 1985, and resolved to provide more oversight on maintenance.18

THE MAINTENANCE RULE
Maintenance regulation and oversight were unfamiliar territory for the NRC, and it needed a new kind of regulation. The NRC’s traditional prescriptive regulations were loathed by industry as nitpicking that did
not benefit safety. The NRC cast about for a model and investigated programs in Japan, France, and West Germany. Japan had established a superior maintenance program. Its plants had far fewer maintenance-induced scrams, but international programs did not translate well to the US context. France, for example, had just one utility operator and a fleet of nearly identical power plants, which was nothing like the unique designs and multiple vendors in the United States. Nevertheless, the NRC identified key practices common to all effective maintenance programs, such as placing a high value on reliability, doing root-cause analysis, providing extensive technical training, and using systems of data collection and monitoring.19 As a regulatory model, the NRC also reviewed the approaches used by other federal agencies, such as the US Federal Aviation Administration (FAA). The FAA operated in an environment like the NRC’s, with multiple aircraft manufacturers and airlines. Given this diversity, it did not follow a prescriptive approach that spelled out one recipe for developing a maintenance program.20 The NRC kept it simple: require utilities to develop a maintenance program, assess its results, and develop plans to improve it. The NRC’s role would be detached oversight by monitoring the program’s safety results through performance indicators, the agency’s first “performance-based” regulation. More flexible guidance documents or industry standards would fill in the details for maintenance on structures, systems, and components essential to safety.21 The nuclear industry resisted NRC intrusion on management territory. It argued that industry’s voluntary initiatives were already enough. Unconvinced, NRC staff requested permission from the commission to develop a policy statement to “force the industry to sign up.”22 Commissioner James Asselstine told Congress that voluntary initiatives were not working. “We have not yet seen any significant improvement in US industry performance,” he said, and pointed out that other nations had taken a more aggressive approach to maintenance.23 Industry representatives warned the rule’s simplicity masked the complex decisions needed to make it work and would stifle management creativity. For example, the policy statement required all systems, structures, and components (SSCs) important to safety be maintained to perform their intended function. There were thousands of SSCs; which ones were most important to safety, and how much did they matter to overall risk? Since most plants in the United States were unique designs, developing a common standard for maintenance would be difficult.24
The US nuclear navy lent a hand. Former vice admirals Lando Zech and Kenneth Carr joined the commission in the 1980s and served back-to-back as chairmen. Zech’s and Carr’s nuclear navy experience dated back to the world’s first nuclear submarine, the USS Nautilus. Both had skippered submarines and held fleet-wide commands where the quality standards of the nuclear navy were ingrained in their professional ethic. The NRC’s deference to industry on maintenance baffled them. Carr was renowned as a fussbudget on housekeeping and maintenance who inspected the emergency battery room of every plant he visited. He noted, “I was amazed when I got here to find that we have all kinds of rules on everything but maintenance. We have zero rules on maintenance.” He vowed to “chew on” industry management to get one.25 One NRC consultant agreed and thought the industry’s resistance to a rule bordered on “paranoia.”26 Zech and Carr grew impatient with what they believed was a dilatory response. Along with commissioner Kenneth Rogers, they resolved to issue a final rule before Zech retired in June 1989.27 As a vote on a final rule approached, opposition intensified. Zech complained that the NRC “got essentially no cooperation at all.”28 Industry feared compliance with the rule could cost $4 billion, and it vowed to go to Congress if a rule was passed. The NRC’s Advisory Committee on Reactor Safeguards (ACRS) gave the industry a surprise boost by warning that the proposed rule “strains severely and may violate the boundary between regulating and managing. . . . [The rule’s scope is] so broad that almost every facet of plant operation would be under the continuing scrutiny of the NRC.”29 Frustrated, the commission sent the rule back to the staff to allow more time for industry improvement and data collection. Carr was blunt. “If they stonewall for the next year,” he warned, “they might get a bad rule” in the end. The ill will spilled out into NRC hallways after the commission meeting. In a heated exchange, NRC’s Executive Director of Operations, Victor Stello, pressed industry leader Joe Colvin. Stello worried that if the NRC “backed off” in pressing industry to improve maintenance, the industry would slack off too. Colvin objected, “We’re committed to improving maintenance.” Stello shot back, “That’s bulls—. I can give you a hundred examples” where the industry had to be forced to keep up maintenance.30 Zech retired later in June 1989, and ACRS member Forrest Remick, an opponent of the rule, joined the commission. Prospects were uncertain. Carr continued to chew on the industry. As chairman he pressed the staff to develop a more satisfying rule. Ultimately, PRA came to the
rescue. In August 1989, the NRC published a draft regulatory guide on effective maintenance programs but added a PRA twist. The guide proposed that licensees establish quantitative maintenance goals for the structures, systems, and components "commensurate with the[ir] safety significance." The NRC encouraged licensees to tap reliability data and PRAs already developed by utilities. To give licensees guidance, the NRC contracted for a report that would demonstrate "risk-focused" methods of implementing maintenance programs.31 The NRC dangled the possibility that the rule would not be necessary if industry showed progress.

The industry, through INPO, took the lead in developing maintenance standards and guidance. NRC staff review found that the industry did not always consider the risk significance of taking equipment out of service for maintenance. For example, the proposed rule required that a safety-related system be both available for use and reliable enough to count on. "Available and reliable" created a maintenance paradox. Preventive maintenance done while the plant was operating could make a safety component more reliable, but the component had to be unavailable while personnel performed the maintenance. Whether maintenance done during plant operation was worth the added short-term risk was a question PRAs could address.32

Concluding that industry had made progress, the staff and ACRS favored the policy statement rather than a rule.33 The trade press reported that the nuclear industry "may be approaching success in its efforts to head off [the maintenance rule]." One industry representative noted, "It's hard to see how an NRC rule could improve [maintenance]."34

The commission passed the rule anyway. In a stunning vote, three commissioners set aside industry pressure as well as staff and ACRS advice. They chose a rule that measured maintenance program results. Industry resistance convinced Carr that licensees might "backslide" once the NRC turned its attention to other issues. Commissioner James Curtiss took a sunnier view and argued that this was a "ripe opportunity to create a results-oriented rule" that could transform regulation. Curtiss's staff drafted the final language for a "reliability-based, results-oriented" rule. To limit intrusive NRC oversight, it applied only to systems that might cause safety-related equipment to malfunction.35

The nuclear industry reported it was "surprised" and "disappointed." Licensees, it noted, had improved their maintenance programs, and "existing regulations were fully adequate."36 Commissioner Forrest Remick voted against the rule as unnecessary and counterproductive. In communications with Curtiss, industry opinion was less diplomatic.
One industry executive called the rule "a regulatory surprise" that would be an "arduous task to implement" and "will drain significant industry experience, expertise, time and money." He wondered whether "nuclear power has a place in an environment where cost containment will be the only means of survival."37

Implementation of the rule at first produced confrontational and blunt meetings between NRC staff and industry, but the turnabout from pessimism to praise was swift. By 1993, industry representatives confessed their astonishment that implementation of the maintenance rule "achieved more than we really had anticipated."38 The NRC agreed to let industry develop its own guidance document. In selecting the systems and components that should be covered by the rule, industry guidance called for the use of expert panels that could combine PRA insights with their own judgments and defense-in-depth considerations. This approach took advantage of PRA's possibilities while respecting its limitations.39 The rule was the first risk-informed regulation, a term still waiting to be coined, and was distinct from a "risk-based" regulation that relied exclusively on quantitative assessments.

PRA's influence over daily nuclear plant operations expanded rapidly and enabled licensees to deploy new tools that revealed and managed risk from maintenance activities. PRA made implementation of the rule workable, and the rule spurred advancements in the use and sophistication of PRA. These new applications of PRA required later changes to the rule. The NRC, the national laboratories, and industry had worked since the 1980s to make PRA programs dynamic "living PRAs" that licensees could update to reflect current plant conditions and monitor real-time risk during operations and maintenance. By 1995, the NRC found licensees were aggressively deploying them in operations, as were licensees in the United Kingdom and Scandinavia. Some programs, called Risk Monitors, were sophisticated "full-plant," "real-time" PRAs that monitored risk on the fly as plant configurations and maintenance changed. As we will see, the South Texas Project (STP) served as a model for the value of sophisticated PRA tools in operations.40
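The logic of such a risk monitor can be suggested in a few lines. The sketch below reuses the invented three-component model from the earlier example; the numbers, the 72-hour outage, and the threshold idea in the final comment are illustrative assumptions, not any plant's actual model.

```python
# Toy "risk monitor": recompute core damage frequency (CDF) as equipment
# is taken out of service, then estimate the incremental core damage
# probability (ICDP) of a maintenance window. All values are invented.

HOURS_PER_YEAR = 8760.0

def cdf(out_of_service=()):
    """Toy CDF model (events per reactor-year). A component out of
    service is treated as unavailable (failure probability = 1)."""
    u = {"aux_feed_pump": 3e-3, "relief_valve": 1e-3, "diesel_gen": 8e-3}
    for name in out_of_service:
        u[name] = 1.0
    return 0.1 * (u["aux_feed_pump"] * u["relief_valve"] + u["diesel_gen"])

baseline = cdf()
# The "available and reliable" paradox: a 72-hour diesel overhaul makes
# the diesel more reliable afterward but unavailable while work proceeds.
during = cdf(out_of_service=("diesel_gen",))
icdp = (during - baseline) * (72 / HOURS_PER_YEAR)

print(f"CDF at baseline:     {baseline:.2e} per year")
print(f"CDF during overhaul: {during:.2e} per year")
print(f"ICDP of the window:  {icdp:.2e}")
# A real risk monitor would flag the window if the ICDP exceeded a preset
# threshold, prompting schedulers to shorten the outage, defer other
# work, or line up compensatory equipment first.
```

Run against shifting plant configurations in real time, this kind of calculation is what let licensees judge whether on-line maintenance was worth its short-term risk.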
Implementation still had some hurdles. Licensees had increased preventive maintenance done during reactor operation, but they did not always consider maintenance risk adequately. Moreover, the quality of utility PRAs varied, and they made limited use of general industry operating experience. In 1999, the commission added a new section to the maintenance rule that required a licensee to "assess and manage the increase in risk" from maintenance activities through a "risk-informed evaluation." The revision recognized the benefits of managing maintenance and operations through risk monitoring programs. The programs helped plant personnel think, work, and plan through an organizational risk culture that probed for small increases in risk. For example, one plant added the risk of nearby wildfires to its risk profile and worked to manage the hazard's threat to external power sources.41

The maintenance rule provided the bonus of smoothing the way for license renewals—twenty-year extensions of the original forty-year license. The NRC's early versions of a regulation on license renewals were panned by the industry as highly prescriptive and expensive. A key point of concern was plant component aging. The maintenance rule already covered the aging of "active" safety components—pumps, valves, and breakers—and utilities argued that these components did not need special review during license renewal. The NRC agreed. In 1995, it revised the rule on license renewal to focus on aging management of "passive" components, such as piping, containment buildings, and pressure vessels, which significantly reduced the complexity and cost of license renewals.42

The maintenance rule was the industry's road-to-Damascus moment. A new kind of regulatory framework seemed possible that was both safe and efficient. In 1996, the Nuclear Energy Institute (NEI), an industry trade association, offered up "a vision . . . for creating a new paradigm and regulatory culture in our industry" that was "risk-based and performance-based."43 Citing the success of the maintenance rule and the development of utility PRAs, NEI believed the NRC's deterministic, inefficient regulations could be complemented with PRA insights. The industry would be more efficient, and so would the NRC. The primary benefit for the nuclear industry was the ability to cut in half the time it took to complete a refueling outage, and well-maintained plants saw their capacity factors jump from an industry average of about 65 percent to nearly 90 percent by the year 2000.44

The two-for-one benefit of the PRA-informed maintenance rule made it a rare success; it pleased everyone. The NRC addressed its safety responsibilities and improved licensee management culture with minimal intrusion. A licensee's bottom line benefitted from improved plant capacity factors and shortened refueling outages. Most importantly, the nuclear industry found that risk-informing maintenance with PRA tools made plants safer.45 Licensees could see risk and how it changed with shifting plant configurations. Risk management reduced the possibility of a reactor shutdown gone wrong, such as the one that destroyed TMI-2.
Industry critic David Lochbaum, the director of the Union of Concerned Scientists' Nuclear Safety Project, called it "the best thing the NRC has done during my 40-year career in the nuclear power industry."46

SAFETY CULTURE, BY THE NUMBERS
The maintenance rule and PRA addressed the NRC's concerns with managing hardware maintenance. Could PRA work the same magic for managing people? The NRC wanted a predictive measure of the risk posed by substandard licensee organization and management. The measure went by different names—organizational culture, organizational factors, and safety culture. The differences in how these terms were defined are of greater import to specialists than to readers of this story. In recent years, safety culture has become an omnibus term subsuming the others.47 It entered the lexicon after the 1986 Chernobyl accident, coined by the International Nuclear Safety Advisory Group of the International Atomic Energy Agency, and the international nuclear community devoted considerable time to defining it and identifying its traits.

The NRC attempted to go a step further with a program to quantify safety culture among its licensees. The number would be plugged into a PRA that could predict the likelihood of a licensee's degrading safety performance. The NRC would then be able to take enforcement action when poor safety culture increased accident risk above a threshold value. Engineers were not experts on culture, and to measure it, they enlisted social scientists and their analytical tools.

Quantifying safety culture at a power plant was a formidable challenge. It was complicated by the NRC's adversarial relationship with its licensees and by professional differences between nuclear engineers and the social scientists they recruited. The differences emerged in part from an engineer's preference for hard numbers over the descriptive assessments of human behavior specialists, but there was also a question of professional respect, perhaps best captured in an incident involving Admiral Hyman Rickover. Prompted by a 1970 report on the US Navy's new human-factors research program, Rickover fired off a memo of protest. The "father of the nuclear navy," whose name adorns the engineering building at the US Naval Academy, called the report "the greatest quantity of nonsense I have ever seen. . . . It is replete with obtuse jargon and sham-scientific expressions." The program would require "a vast new social 'science' bureaucracy contributing absolutely nothing to the building of ships." Before an appreciative congressional
committee, Rickover elaborated on his dim view of the social "sciences." "I could just imagine . . . one of these specialists advising a project engineer that . . . his fire control panel should play soft background music to ease the tension during combat." The program, he concluded, was "about as useful as teaching your grandmother how to suck an egg."48

Almost fifty years later, social scientists still struggled for acceptance in the nuclear engineering world. Dr. Valerie Barnes, a psychologist, spent a career in the nuclear industry and at the NRC applying her insights on organizational factors and safety culture to assessments of licensees. Until her retirement, she was among a cadre of NRC social scientists who oversaw its human factors, organizational factors, and safety culture programs. She recalled that her engineering colleagues did not understand that her Ph.D. did not qualify her to be a therapist, and they dismissed her professional methods as "fluffy," unquantifiable, and, therefore, valueless in regulation. "We spoke completely different languages."49

The NRC's ambition for safety culture was to get engineers and social scientists to speak the same PRA language. It was a quixotic campaign. Perhaps no nuclear regulator was as supportive of research into the intersection of organizational factors and reactor safety or as cautious in applying those findings to its regulations. The NRC's ambivalent pursuit of quantified safety culture embodied a mix of competing motives. There was an institutional skepticism of the value of social sciences in the engineering-dominated regulation of nuclear safety. The agency operated from a long-held belief that it should regulate power plants, not people. Intrusive oversight of a licensee's business would destroy its ownership for safety. Three Mile Island challenged that conviction, and subsequent plant mishaps compelled the agency to cross the line between regulation and management. As the NRC-industry relationship grew more adversarial, the agency needed a non-intrusive, objective basis to judge a licensee's safety culture. The agency turned to social scientists not simply to improve PRA as a technical tool but to reduce conflict with licensees. Behavioral experts joined plant oversight review teams and received generous funding to develop indicators of risky organizational culture. Scores of scholars at national laboratories, institutes, and a dozen universities contributed to the project. The NRC had previously funded psychologists to produce reports on public risk perception, but its aspiration to quantify safety culture was more ambitious than any social science research it had ever sponsored.50

It did not work out as planned. The social science researchers who stepped into plant control rooms became part of the NRC's adversarial
relationship with industry. They were greeted with suspicion by licensees and professional skepticism by regulators, especially when their research yielded valuable qualitative insights but not PRA-suitable results. The NRC abandoned the work in the mid-1990s. Without an impartial, objective measure of organizational performance, the NRC's oversight of licensee management remained mired in controversy.

The NRC's road to cultural quantification evolved over two decades. Before Three Mile Island, human error in plant accidents received limited attention in control-room design. Industrial psychologists had been well established in the human factors profession since World War II, particularly in control-panel design for the defense, commercial aircraft, and aerospace industries and for consumer applications. Yet the safety importance of the so-called "man-machine" interface in nuclear plant control rooms—the specialty of human factors experts—was not obvious. Neither regulators nor design firms employed human-factors experts. Designers simply borrowed from the functional control rooms of fossil-fuel plants. They created sprawling versions at nuclear plants with reliable, oversized controls and indicators spread across multiple panels and arranged for normal operations. In "off-normal" events, operators might dash about from one panel to another to find the right switches and meters to stabilize the plant.51

By 1972, a series of mishaps made evident the risks from a poor man-machine interface. Operator errors aggravated and prolonged several routine plant transients. The Atomic Energy Commission issued a report calling for greater attention to "human engineering" with improved control room design, operating procedures, and personnel training. Several years later, the NRC issued similar reports calling for control-room design and procedures suited to accident conditions.52 WASH-1400 identified human error as a significant contributor to accident risk, perhaps greater even than the major hardware failures postulated in design-basis accidents. These lessons did not find much application until after the Three Mile Island accident. Even then, the NRC's response was an engineering solution that hardly tapped social science expertise. It reflected the dominant engineering philosophy that hardware design, procedures, and training would compensate for the unstudied hazards of human foibles and organizational weaknesses.53 As it was, there was limited social science research on the contributions of organizational culture to disasters.54

Three Mile Island validated larger budgets for human engineering. Within a couple of years, the NRC's human factors research program
FIGURE 17. The TMI-2 control room (1979) was typical of the complex, functional panels of the 1970s. After the accident, control rooms underwent substantial redesign to consider human factors in accident conditions. Source: US NRC.
swelled to over $20 million, covering human error studies known as Human Reliability Analysis, control room design, revisions to procedures, accident training, and organization and management. There was broad agreement that the accident might have been avoided with an effective lessons-learned program, more simulator training, and intelligent control room layouts and procedures.55

Calls for more substantial attention to organizational culture came from outside assessments of the Three Mile Island accident. The Kemeny Commission and Rogovin reports identified management dysfunction as an accident contributor. Kemeny called for "higher organizational and management standards for licensees" and closer attention to organizational decision-making and utility capability.56 Similarly, Rogovin pressed the NRC to regulate "organizational structure," human factors, and management. These were high-level observations, however, and in the aftermath of Three Mile Island only limited attention was paid to the influence of organization and management on safety.57

Eventually, the Kemeny and Rogovin reports prodded the NRC into forbidden territory, and agency staff tried to squirm away from direct management assessment. In late 1980, they produced broad,
uncontroversial guidelines for acceptable utility structure and safety functions they believed were adaptable to the idiosyncratic management structures and styles of the NRC's many licensees. The agency conceded evaluation would be "on a largely subjective basis" and promised to show great flexibility. Nevertheless, a draft and redraft of the guidelines elicited sharp criticism from utility executives that the guidelines were an "over-reaction to TMI," "unrealistic and over-zealous," and "too prescriptive and unnecessarily restrictive." In search of firmer ground, the staff launched research into "safety-related management attitudes" and "organizational variables [that] can be objectively assessed" to make "it possible to relate effective and ineffective management behaviors to safety criteria through an appropriate model."58

For the first time, the NRC awarded experts in organizational management and psychology substantial funds to investigate the link between utility characteristics and safety. In 1982, the NRC contracted with the Battelle Human Affairs Research Center in Seattle, Washington, for several reports on industry organization and management. Battelle was a think tank with deep experience in nuclear power, and it proposed developing a database and model of utility organizations predictive of declining safety performance. Battelle dispatched teams of social psychologists, sociologists, and political scientists to observe NRC staff assessments of utility management during applications for new operating licenses. The NRC assessments inferred an organization's competence through its adherence to formal requirements for training, the structure of plant management, regulations, and staffing levels. Lacking training in social-science interview techniques, however, the staff tended to stray into unstructured, subjective questions about management culture, attitudes, and morale, and into unmeasurable conclusions, such as having "a good feeling" about management. A licensee's reputation also seemed to bias staff assessments. Battelle argued for more objective assessments, supported by better evaluation criteria, training, and performance indicators, that could withstand legal challenge.59

For operating reactors, Battelle proposed a less intrusive model that did not require new data collection from licensees. It used publicly available qualitative and quantitative data on utility organizational structure and plant performance. Published between 1983 and 1985, the Battelle studies offered an upbeat message that behavioral scientists could solve some of the NRC's touchy oversight issues with detached assessments. Theirs was "an integrated socio-technical, human machine perspective" on the "interconnections and interdependencies among the human and technical aspects of complex organizations [that would
avoid] much of the inherent limitations of narrow, segmented, and partial views" of engineers. Battelle grouped organizational factors into four categories—utility environment, historical context, organizational governance, and organization design. Among them, organization design offered the most promise for making predictive statements about plant operational safety. Their conclusions were preliminary, but they found that "measured aspects of utility organization were . . . associated with measured safety indicators."60

To verify and validate these tentative findings, Battelle requested continued funding. The consequences of human error, it warned, were too great to ignore. "There is comparatively little opportunity for instructive learning until after dysfunctional effects have occurred." A model that was "predictive as well as explanatory of safety," Battelle argued, would optimize utility organization, learning, and resources; a more efficient plant was a safer plant.61

Later scholars considered Battelle's passive collection of data and focus on organizational structure to be shortcomings, reflective of the pre-Chernobyl scholarship on organizational factors and accidents. The work served as an intellectual stepping-stone to research into organizational processes and culture measured through advanced psychometric surveys and interview tools. Battelle scientists were aware of this new scholarship on management culture but slighted it in their model. Why they did so is not clear, but their motive for favoring organizational structure is. Battelle's model aligned with the political realities of the NRC's increasingly adversarial relationship with licensees.62 Battelle believed its methods would be "less obtrusive" and more politically palatable than invasive NRC plant oversight. "Neither the NRC nor the industry wishes the NRC to become involved in the day-to-day management of the nuclear plants," Battelle observed. "This would mean active involvement in supervision and decision-making that would be unhealthy for both parties and the public safety."63

Battelle's careful balancing of politics and science still ran afoul of the regulator-licensee divide. At a December 1982 commission briefing on human factors programs, Chairman Nunzio Palladino expressed his discomfort with an aggressive human factors program. "I get worried when we get our tentacles out so far where we seem to be 'big brothering' every aspect of the operation. . . . I get a little uneasy when we get into peripheral aspects because . . . if we get into everything we tend to lose the initiative of the organization itself." Palladino repeated the "tentacles" metaphor, and the staff promised to revamp the program.64
Issued a month later, the revised program eliminated direct research and regulatory activity on organizational factors in favor of a "more practical program capable of near-term accomplishment, as contrasted to the more academically oriented program previously described in the Plan." The NRC deferred to self-regulating initiatives under the leadership of INPO.65 The cutbacks reflected a changed environment during the Ronald Reagan administration, one of general skepticism toward regulation. There was pressure on the NRC to get back to a pre-Three Mile Island "normal." In January 1981, the NRC controller promised that by 1983 the agency budget would be "just as though there'd never been a TMI."66 Palladino's deference to industry initiatives also accorded with the core belief in limited oversight, that licensees had to take ownership for the safe operation of their plants. Thirty years later, former commissioner Kenneth Rogers echoed Palladino's sentiments. Licensees, he said, must "never lose that sense of responsibility that it is their responsibility, not NRC's responsibility, to take the lead. . . . 'This is your plant, this is your facility.' "67

The NRC was caught between a tradition of management deference and post-Three Mile Island pressure for more active regulation. In handing the lead to INPO, the staff warned its patience was not infinite. "As now envisioned, NRC will not develop management and organization criteria for operating reactors unless the INPO effort proves to be unsatisfactory for our needs."68

Dissatisfaction soon followed. The 1986 accident at the Soviet Union's Chernobyl nuclear power plant crystallized international recognition of the importance of safety culture in operations, and the International Atomic Energy Agency became a consistent champion of developing safety culture criteria and assessment tools. The NRC acknowledged the importance of safety culture, but it also sought to distance the US industry from the Soviets by stressing distinctive features of the US system, including requirements guarding against degraded operator performance and the superior safety features of light-water reactors.69

The management lapses at Davis-Besse and Peach Bottom made some wonder if the NRC was drawing a distinction without a difference. Despite post-Three Mile Island reforms and INPO's drive for licensee excellence, human error and organizational factors played a role in half of all plant events. INPO actively worked to overhaul Peach Bottom management but met stiff resistance. One INPO official explained that the staff at Peach Bottom "thought that 'By God, we've
done it this way for years, and it's worked for us. So we don't see any need to change.' It was a fossil fuel mentality. They had never really joined the nuclear era."70 At the NRC, Chairman Lando Zech announced that the agency needed to do all it could to create a plant environment that maximized operator performance. In 1989, the commission approved a policy statement on the conduct of plant operations that included a definition of safety culture as "the personal dedication and accountability of all individuals" to practices of plant safety and the promotion of an "environment of safety consciousness."71

The agency looked again to the social sciences to make its assessment of safety culture objective. It commissioned the National Academy of Sciences to recommend a comprehensive human factors program, including organizational research. Chaired by human factors expert Neville Moray, the NAS panel drew diverse membership from the nuclear industry, engineering, traditional human factors research, business management, and the social sciences. The Academy committee faulted the NRC's post-Three Mile Island human factors program as unimaginative. It offered "purely technical solutions to human problems," an approach typical of "a community with a strongly established engineering culture." A nuclear power plant was a complex "sociotechnical system" affected by organizational factors and a technology's social context. It could only be understood by "multidisciplinary teams," including university researchers, doing on-site research into its "culture of management." Moray told the NRC's Advisory Committee on Reactor Safeguards, "Management can make or break a plant." The NRC needed to identify what made for a positive organizational culture of reliability and safety and develop feedback mechanisms to reduce accident risk.72

By the time the NAS report was published in 1988, much had changed in organizational scholarship. The Academy committee included Todd LaPorte, a political scientist at the University of California–Berkeley, who was a pioneer in the study of "high reliability organizations." HRO scholarship positioned itself as a practical alternative to earlier theoretical work by sociologist Charles Perrow on "Normal Accidents." Perrow argued that technologies like nuclear power were too complex and carried consequences too great to operate safely. The ambition of HRO studies was to identify and measure the traits of successful organizations that operated in high-risk environments where trial-and-error learning was not an option, such as aircraft carrier flight operations and air-traffic control.73 A new NRC research program
also stood to benefit from the maturation of social-science survey tools, such as psychometric inventories, that quantified aspects of organizational and managerial culture. The combination of theory, methods, and tools heralded the arrival of the social-science PRA expert. As sociologist William Freudenburg argued, if nuclear engineers wanted to improve the accuracy of their PRA calculations, they needed social scientists to help them understand and quantify human and organizational behavior.74

The NRC tried to turn social-science research into regulatory reality. Its staff recommended the NAS report to the commission, anticipating that in-depth research on licensee culture could be combined with its own inspection programs to produce measures that were "predictive as well as descriptive" of degrading licensee performance. If successful, the research would become "a basis for integrating management factors into the probabilistic risk assessment (PRA) process, and as a basis for developing indicators of organization and management performance."75

Reducing an organization's cultural complexities to a number in a risk calculation looked like engineering naivete, but it also reflected regulatory priorities and rising confidence in risk assessment. By the late 1980s, the NRC had painstakingly pieced together elements of a more risk-informed framework with safety goals, a revision to WASH-1400, and several probabilistically minded regulations. The NRC commissioners and some elements of the staff placed a significant bet on PRA to reform the agency's cumbersome regulations and reduce conflict with its licensees. PRAs used very specific equipment failure-rate data and other quantitative inputs to estimate accident risk. To convert to "risk-based" regulation, PRA experts had to reduce the uncertainties associated with hard-to-quantify factors such as poor plant management.76 To be useful, social science research needed to conform to PRA quantification.

By 1989, PRA experts were confident it could be done. George Apostolakis, a PRA expert at the University of California–Los Angeles who later became an NRC commissioner, co-authored an editorial that captured the discipline's optimism that PRA could serve as the basis for a new regulatory framework and resolve industry conflict. PRA had become "a kind of lingua franca of risk-related decision making," the authors wrote. "We have a common language with which to discuss a particular problem area, like nuclear risk." Without it, the nuclear industry would suffer "chaos, confusion, controversy, fear, litigation, and paralysis." With "living PRAs," utilities would know accident risk at every moment of operation. They expected that soon the uncertainties that bedeviled the
quantification of cultural factors, such as "morale, esprit de corps, management attitude, . . . should see considerable progress."77 Apostolakis laughingly recalled, "we thought we could quantify everything."78

Brookhaven National Laboratory had organizational psychologists on staff, and the NRC contracted with it for a $5 million study. Under psychologist Dr. Sonja Haber, Brookhaven worked with two other national laboratories, several institutes, and twelve universities, fielding teams of social scientists. Apostolakis and his UCLA team were to integrate the results into PRA models to see whether quantification worked. Brookhaven developed a structural model of a nuclear power plant's organization drawn from research by professor Henry Mintzberg at McGill University in Montreal, Canada. A nuclear plant organization, Brookhaven concluded, was best described as a "machine bureaucracy" with highly formalized procedures and rules, specialized groups, extensive professionalism, and a special need for safety. Investigators identified promising organizational factors through interviews, surveys, and observation.79

Wary but interested, Pacific Gas and Electric Company allowed a team of researchers from the University of California–Berkeley and Brookhaven to test their Nuclear Organization and Management Analysis Concept (NOMAC) model at its fossil-fuel plant at Pittsburg, California. Satisfied with the team's scientific rigor, PG&E permitted it to move on to the Diablo Canyon nuclear power plant. The NRC reported with hope that NOMAC "can be implemented in a reasonably nonintrusive manner, can be received favorably by utility personnel, and that meaningful data can be extracted for use in exploring the influences of organization and management factors on reliability and risk."80

So far, so good, but broadening the study to other licensees posed a considerable challenge given the NRC's difficult relationship with industry. The NRC's modest use of social scientists in plant oversight had not reassured licensees. After Three Mile Island, the agency created the Systematic Assessment of Licensee Performance (SALP) to combine quantitative and qualitative assessments into a rating system of plant performance. Following the Davis-Besse and Peach Bottom episodes, the agency added a capstone senior management meeting to review the performance of each plant and a "watch list" of problem plants. Plants with worrisome performance received special attention from Diagnostic Evaluation Teams (DETs) that made multiweek assessments of operations. DETs consisted of about fifteen staff, including behavioral scientists, who conducted a broad assessment in areas such as organizational culture and management "beliefs,
attitudes, practices . . . as well as key sociological factors." Initial DET assessments, staff reported, were useful, and utilities appreciated DET insights.81

The era of good feelings did not last. By the early 1990s, the industry had turned decisively against the SALP and the DETs. A diagnostic-team visit was a plague at a utility's doorstep. DETs were often a prelude to a plant joining the watch list, a black eye that got upper management fired, sent utility stock prices tumbling, and required millions in maintenance and operational improvements. The industry and trade press derided DETs as "subjective," a claim substantiated to a degree by the NRC's DET guidance documents, which requested judgments on "safety culture" without any standards to measure it.82

NRC leadership began to look askance at DET evaluations, too. In the late 1980s and early 1990s, Navy veterans had taken over key leadership positions on the commission and agency staff. The NRC's executive director, James Taylor, had worked under Rickover and was especially skeptical. Brian Haagensen, now an NRC inspector, worked as a contractor on DETs. He recalled that Taylor thought the "DETs were too aggressive" and that the social scientists on the teams issued harsh grades without understanding management or nuclear technology. Taylor preferred Haagensen's firm, in part, because it was led by nuclear navy veterans like himself.83

The Brookhaven teams that visited nuclear plants, then, worked for a regulator dubious of their capabilities and studied licensees suspicious of their motives. Rather than serving as dispassionate scientific observers in a clinically controlled setting, they became entangled in their research environment. In anonymous evaluations collected in 1991, they vented their frustrations. "The nature of the relationship between the NRC and the utilities permeates our role as contractors to the NRC," one researcher reported.84 While some utilities showed keen interest in their work, the "anxiety of nuclear utilities" at their presence was palpable. There was little incentive for licensees to cooperate when research findings might become burdensome regulations. "Success is failure," one observed. "The better the research on the impact of organizational factors (success), the more likely the industry will put pressure on the NRC to cut the funding for future research (failure)." The whole project, one concluded, was hindered by "the utilities' lack of trust in the NRC to use the results of our research sensibly."

The scholars were even more exasperated with the NRC. Fearful that the social scientists would produce inflammatory findings just like the
DETs, the NRC did not allow the teams to study poorly performing plants or interview upper management in corporate offices. While elements of the NRC staff supported the Brookhaven project, the researchers detected from NRC management "a general distrust of the social sciences and behavioral science data." These experts in organizational culture also found they could not bridge the cultural divide with engineers on their own research teams. "The gap between engineers/PRA-types and behavioral scientists does not seem to be closing very fast," one observed. Another wrote, "Social scientists in an engineering world will always have a tough time."

The research teams bridled at the NRC's restrictions. With a bit of irony, these experts in the social and political sciences wanted to rid their research environment of the politics that defined it. They asked the NRC to calm utilities and unfetter their access to plant staff and corporate management. "The future success of this effort depends upon the cooperation of regulators, contractors, and the nuclear industry."

As the teams asked for more, the research itself produced mixed results. Haber reported in 1995 that there were good qualitative lessons in the work. Brookhaven analyzed twenty organizational factors grouped into the four broad categories of culture, communications, human resource management, and management attention to work practices related to safety. There were connections between safety and organizational traits such as effective communication, the ability of an organization to learn, management attention to operations and safety, and the external environment of corporate and regulatory factors. They found some stable correlations between the factors, such as organizational learning and safety performance, but they could not find correlations with many others.85

Applying those insights to PRA did not work. George Apostolakis concluded it was "extremely difficult, if not impossible" to incorporate organizational factors into PRAs.86 Part of the problem, as Haber recalled, was finding utilities willing to work with Brookhaven to develop more data, but survey and interview results were a difficult fit for PRA anyway. It was one thing to develop a 1 to 5 rating scale of a "good" or "fair" utility organization. It was another to quantify organizational influence on equipment failure rates or human error for input into a PRA.
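The gap is easy to state concretely. To use a culture score in a PRA, something like the mapping sketched below would have been required; the sketch is purely illustrative, and the multiplier function is precisely the step for which no validated calibration ever existed.

```python
# Toy sketch of what plugging a safety-culture score into a PRA would
# have required. The 1 (excellent) to 5 (poor) survey scale mirrors the
# kind of rating the text describes; the mapping to a human error
# probability (HEP) multiplier is an invented assumption -- exactly the
# link researchers could never ground in data.

def culture_multiplier(score, lo=0.5, hi=3.0):
    """Linearly map a 1-5 culture score onto a multiplier for baseline
    human error probabilities. The endpoints lo and hi are arbitrary."""
    return lo + (hi - lo) * (score - 1) / 4

# Invented baseline HEPs for two operator actions in an accident sequence.
base_heps = {"misdiagnose_transient": 1e-2, "omit_procedure_step": 3e-3}

for score in (1, 3, 5):
    m = culture_multiplier(score)
    adjusted = {act: hep * m for act, hep in base_heps.items()}
    print(f"culture score {score}: multiplier {m:.2f}, HEPs {adjusted}")
```

Every run of such a model would be only as credible as the multiplier function, and that was the correlation Brookhaven could not establish.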
Haber concluded that "continued efforts to correlate organizational dimensions with performance indicators may have limited value as a nexus to safety. . . . We consider 'Culture' as a 'higher order' factor which cannot be incorporated into [PRAs]."87 The Brookhaven methodology was best suited for inspections and diagnostic evaluations, such as the unpopular DETs. The NRC staff concluded the program was "resource intensive" and had "relatively low cost-effectiveness." It recommended discontinuing research until organizational factors could be integrated into PRAs.88

As it approached the twenty-first century, the NRC did not explicitly regulate safety culture. As will be discussed, the NRC abandoned the SALP in favor of a new oversight process that inferred management competence and culture from baseline inspections. The new process avoided the extensive oversight and subjective assessments of the SALP and assumed that significant management issues would show up in performance indicators, such as the number of unplanned plant shutdowns. INPO took the lead in inculcating safety culture among licensees.89

Other nations took an interest in safety culture. The IAEA produced numerous consensus documents on a safety culture definition, framework, and guidance on assessments. It shunned the NRC's quantitative bent in favor of qualitative guidance in safety-culture reviews. Safety culture, the IAEA observed, was a search for "tangible evidence of an essentially intangible concept." It encouraged review teams to explore "attitudes, morale, motivation, and commitment to safety" without quantifying them.90

The Brookhaven methodology went into exile. The Atomic Energy Control Board (AECB) of Canada was interested in the NRC-sponsored research. Haber left Brookhaven and, with Michael Barriere, adapted the NOMAC model to reflect advances in methodology and research. Unconstrained by the NRC's compulsion to apply organizational factors to PRAs, the AECB simply required that the methodology be practical, generate reliable and reproducible results, and develop an accurate picture of nuclear plant operations. While the Canadians applied the model to numerous facilities, it was still, as the NRC had found, resource-intensive. Spain investigated a similar model.91 A few countries, such as Switzerland, Finland, Germany, Canada, and Spain, formally established safety culture regulations or required operators to perform safety-culture self-assessments. Most other nations adopted policy statements in which safety culture appeared as a major theme or stated its key traits.92

The NRC's inability to translate social science research into a practical tool was a major setback for social scientists who had aspired to engineering rigor. Regulatory conflict had given them a unique opportunity to influence regulation. Armed with new theories on organizational behavior, researchers seemed poised to demonstrate their practical rel-
evance to the engineers who dominated safety regulation. The NRC's adversarial relationship, however, demanded assessment by unambiguous, objective data and methods, a task that did not play to the qualitative strengths of behavioral scientists. In 1998, Nick Pidgeon, a psychologist and safety culture scholar at the University of Wales, assessed with dismay the fracturing of safety culture research. "Some 10 years on from Chernobyl, the existing empirical attempts to study safety culture and its relationship to organizational outcomes have remained unsystematic, fragmented, and in particular underspecified in theoretical terms." Engineers wanted a "best" solution, but social scientists had only managed to raise "the thorny issue of whether culture can be 'measured' at all using quantitative psychometric methodologies such as questionnaires or surveys. . . . If the theoretical fragmentation of the field is not overcome, commentators will conclude that the work failed to realize its considerable promise." The term "safety culture," he lamented, might turn into "hollow rhetoric that pays lip service to safety."93 A few years later, George Apostolakis seconded Pidgeon with an assessment gloomier than the one he held in 1989. "Defining indicators of a good or bad safety culture in a predictive way remains elusive. [PRAs] certainly do not include the influence of culture on crew behavior and one can make a good argument that they will not do so for a very long time, if ever."94

The NRC's derailed ambition to quantify organizational factors remains one of PRA's "grand challenges." PRAs still do not explicitly model organizational contributions. Large uncertainties and data collection remain major hurdles, although there has been some progress in demonstrating an empirical link between safety culture and safety performance.95 Several years passed before the NRC revisited the Brookhaven methodology and safety culture.

PRA POLICY STATEMENT
Despite the Brookhaven disappointment, the success of the maintenance rule gave the NRC confidence to move forward with a more positive stance toward PRA. First it had to revisit its history. Fifteen years had passed since the commission partially withdrew its endorsement of WASH-1400 and told the staff to go slow on PRA. By the 1990s, the time seemed right for a benediction to go forth with PRA at deliberate speed.96

To begin, the NRC staff needed to get smarter about PRA. Its applications had grown opportunistically rather than with clear direction. Regulations
and policy statements on backfitting, ATWS, station blackouts, safety goals, and changes to technical specifications all bore its mark. Pockets of the agency still distrusted PRA. In the early 1990s, Scott Morris joined the agency as an inspector. He recalled that much of the staff viewed PRA as "black magic."97 As the era of licensing new reactors gave way to the era of oversight, familiarity with design computer codes and PRA had not advanced beyond a coterie of staff experts. In 1991, the ACRS wrote Chairman Ivan Selin that it supported a "deeper and more deliberate integration" of PRA into regulation, but there was "unevenness and inconsistency" in staff use of PRA methodology and a worrisome ignorance of the pitfalls of using PRA's bottom-line numbers.98

In response to the ACRS letter, the staff established a working group on integrating PRA within the regulatory system.99 In August 1995, the commission approved a policy statement declaring that the use of PRA should be increased "in all regulatory matters to the extent supported by the state-of-the-art in PRA . . . in a manner that complements the NRC's deterministic approach and . . . defense-in-depth philosophy. . . . The Commission's safety goals for nuclear power plants and subsidiary numerical objectives are to be used with appropriate consideration of uncertainties in making regulatory judgments on the need for proposing and backfitting new generic requirements." In sum, the commission encouraged a thoughtful, deliberate increase in the use of PRA as a supplement to defense in depth and expert judgment.100

To the industry's discontent, the NRC proceeded with deliberate speed. In 1994, Selin told an American Nuclear Society conference that "the public is best served by a regulatory framework that is well grounded in risk-based priorities." But he and other commissioners demanded qualitative improvements in data collection and methodology that would encourage public acceptance.101 Commissioner Kenneth Rogers told an industry audience, "Regulation is essentially a political . . . process. It can be successful . . . only if the process and results are accepted by the public and by their elected representatives. . . . Public trust requires that the regulatory process be open and understandable."102

A change in leadership did not alter the NRC's deliberate application of risk insights. In 1995, the NRC's new chairman, Shirley Jackson, said she was skeptical of the term "risk-based" regulation. "I prefer to talk of risk-informed, performance-based regulatory approaches," which blended traditional qualitative, deterministic principles with new quantitative risk insights.103 Echoing the NEI's justification for risk-informed regulation, an NRC staff paper explained that it would "better focus
licensee and regulatory attention on design and operational issues commensurate with their importance to public health and safety."104 Risk-informed regulation could be a new "paradigm" that stepped beyond the design basis accident, but Jackson warned that industry PRAs had to be of high quality. They still had "significant limitations. . . . If the industry desires regulatory changes and decisions based on risk insights, then the industry and the NRC must narrow the gap on such issues as PRA methodology, assumptions, consistency, level of detail, and reliability data."105 The NRC expected deliberate progress toward risk-informed regulation, but events changed its plans.

NEAR DEATH: THE BIRTH OF THE REACTOR OVERSIGHT PROCESS
It was "the NRC's day of reckoning," Republican Senator Pete Domenici recalled of his June 1998 meeting with NRC Chairman Shirley Jackson. As chairman of the Senate Appropriations Committee's Energy and Water subcommittee, Domenici had considerable sway over the agency's budget. An ardent supporter of nuclear power, he entered the meeting convinced that the NRC had crossed the line from regulating safety to regulating management, and he aimed to do something about it. The NRC was an unreliable and "adversarial" regulator, drove up costs without improving safety, and imposed "interminable" adjudicatory proceedings and arbitrary oversight on licensees. Until that meeting, the agency had enjoyed rubber-stamp congressional review. Domenici decided to administer "tough love" and enlisted the support of Harry Reid, the subcommittee's senior Democrat from Nevada and a fierce opponent of the proposed high-level radioactive waste repository at Yucca Mountain in his home state.

Domenici hit Jackson with a surprise ultimatum: develop risk-informed, performance-based regulation or face drastic cuts of up to $150 million—a third of the agency's budget—and the loss of seven hundred staff. "You can't be serious?" Domenici recalled Jackson asking. When it was clear he was, she pleaded for time to show the agency could change. Domenici gave her a few months. "Chairman Jackson got up, left, and didn't look back." The meeting, Domenici wrote, was a "turning point" for the NRC in its move to risk-informed regulation and, he anticipated, would spur a "rebirth of interest" in new reactor construction.106

Jackson's version of events differed. Domenici's cuts were not a surprise, she insisted; they had been announced in May. Under her guidance, the NRC was already moving toward risk-informed regulation. It
was a term she had invented! NRC staff had spent several years developing a comprehensive implementation plan.107 The two principals disagreed on the details, but the meeting was a turning point. Years later, NRC staff recalled it as a "near-death" experience.108 Jackson returned from the meeting to calm anxious staff. Budget cuts would not be severe, she assured them. But "it does not mean 'back to business as usual.' " The NRC had to "accelerate the realignment of our regulatory approach to be responsive to legitimate concerns of our stakeholders—even shifting the regulatory paradigm, as appropriate."109 By the end of the 1990s, the NRC had doubled down on its wager that PRA and risk-informed regulation were something more than a technical tool. Risk-informed regulation became the agency's lodestar.

The Domenici-Jackson showdown pitted two outsized personalities against each other. Domenici entered the Senate in 1972 and went on to serve six terms. He was a major force in Congress; the New York Times called him "one of the Senate's hardest-working, most intelligent and most intense members." He was renowned on Capitol Hill as a budget hawk, but one who took care of his home state's large atomic weapons and nuclear energy infrastructure, so much so that Los Alamos scientists called him "St. Pete" for keeping the national laboratory well-heeled. When discussing nuclear power, he could be as inspired as Dwight Eisenhower delivering his "Atoms for Peace" speech. In the twenty-first century, he predicted, "nuclear power will be a major contributor to global peace and a better quality of life for both the developed and developing world." His goal was to ensure that by the year 2045, the one hundredth anniversary of the first atomic test in New Mexico, nuclear energy would have a "strongly positive" impact on the world.110

Shirley Jackson was also special. At the age of four, she told her mother that one day she would be called "Shirley the Great." She grew up a prodigy in the nation's still-segregated capital, with an intense interest in all things science. Her 1973 Ph.D. in physics from MIT was the first the institute bestowed on an African-American woman. She moved up the ladder with stints at Bell and Fermi Labs, a research appointment in Europe, and a professorship at Rutgers University. Her rise as a pathbreaking African-American woman, she recalled, came at the personal cost of some isolation from others, but she was a force. When President Bill Clinton appointed her to the NRC in 1995, she brought to the job her brilliance and a personality that others found compelling or imperious.111 Her time as chairman was confrontational, turbulent, and consequential.
There was more to the Domenici-Jackson meeting than personalities. It was the culmination of political and market forces that had reconfigured Congress, transformed the electric power industry, and finally swept over the NRC. Deregulation of electricity production and distribution reordered the power industry. Previously, utility companies owned the power plants and transmission grid within their territory. State public utility commissions mediated consumer and utility interests by setting rates on a "cost-of-service" basis with a reasonable rate of return on approved capital investments and operating costs. By 1989, it became clear the 1990s would be a decade of upheaval. Utility executive Corbin McNeill predicted that "the threat of deregulation and the reality of competition will replace regulatory and quasi-regulatory pressures."112

Deregulation broke down utility control over production and distribution territory. The Public Utility Regulatory Policies Act of 1978 sought to encourage conservation and create a market for alternative energy sources through a new class of electricity-generating sources called "qualifying facilities," operated by nonutility power companies. Qualifying facilities were given preferential access to utility company grids. Although still a small share of electric power production, qualifying facilities tripled their output between 1979 and 1990, and they started to break down the monopoly utility companies enjoyed on power generation. The high costs of nuclear power plants drove up the rate base and encouraged the development of less expensive sources. States sought alternatives with more efficient power production and bidding markets for producers. The Energy Policy Act of 1992 helped upend the traditional arrangement by allowing electricity sales on an open market with customer choice of a supplier. The Federal Energy Regulatory Commission (FERC) followed with rules giving nonutility power producers access to the transmission system. For the first time, states and regions could create competitive electricity markets.113

Convulsive industry consolidation followed. Mergers created complex energy-producing corporations that were often a mix of utility subsidiaries operating in states that retained their traditional regulated markets and power-producing entities that retailed electricity to deregulated markets in California, Pennsylvania, Rhode Island, Massachusetts, Vermont, and Maine. By 1997, 86 percent of electric industry executives conceded that consumer choice for power was inevitable. Fifty percent thought most utility companies would not survive the decade.114

Deregulation meant the end of the line for smaller, older nuclear plants and even some large ones that were expensive to operate. Maine
Yankee, Michigan's Big Rock Point, Zion 1 and 2 in Illinois, and Connecticut's Millstone 1 closed in 1997 and 1998. Many more closures were expected in the next five years. Plants that had been expensive to build were saddled with high "stranded costs," which could not be recovered in a competitive market. Battles ensued in state legislatures over how utilities could recover those costs. Utility companies often sold nuclear plants at a loss, which at least relieved a utility of the uncertainty of bearing a plant's huge decommissioning costs.115

In 1998, the fire sale began. The first to go was the undamaged Unit 1 at Three Mile Island. GPU Incorporated sold it for $100 million to AmerGen Energy Company—a new joint venture of British Energy and PECO Energy. The price was less than the value of the fuel sitting in the reactor vessel. David Lochbaum of the Union of Concerned Scientists compared the sale to "buying a used car with the contents of the gas tank being worth more than the car itself." A GPU spokesman conceded, "That's what the market says [nuclear plants] are worth."116 The next year, AmerGen scooped up New Jersey's Oyster Creek plant almost for free—$10 million—as its owners unloaded it in an arrangement to recover some of its stranded costs as New Jersey's power market opened to competition. AmerGen was not the only shark in the water. It lost to Entergy in bidding for Boston Edison's Pilgrim nuclear plant, and it was later rolled into Exelon, which controlled almost a quarter of the nation's nuclear power.117

On its surface, the nuclear power industry's consolidation was not the NRC's concern. Its legislative mandate was safety, period. What did it matter to safety if reactors became "merchant" plants run by large energy corporations? There might even be safety benefits. One industry executive predicted that nuclear power plants "will end up owned by people who can run them efficiently and do well, not by all these mom-and-pop utilities."118 Nevertheless, a leaner industry posed safety questions. Were efficient plants safe plants? Would utilities have resources for adequate operation and decommissioning? Deregulation, Jackson told an industry audience, represented "potentially revolutionary change" that could have "profound impacts" on a licensee's access to "adequate funds to operate and decommission their nuclear plants safely."119 The NRC peered closely at the financial health of its licensees.

Licensees peered back at the NRC. Deregulation exacerbated dissatisfaction with "regulatory burden." The NRC spent ample resources on the safety implications of deregulation, industry officials believed, but devoted little attention to the safety implications of its inefficiency and
burdensome regulations. "We have to adapt . . . to competition," said utility executive Corbin McNeill, but there "is no similar incentive driving the NRC to change the way it does business."120 Jackson took a resolute line: the agency would be mindful of burden, but it had to ensure "economic pressures do not erode nuclear safety."121 As nuclear power plants approached the end of their forty-year operating licenses, industry worried the NRC would create a license renewal process with impossibly long reviews and costly upgrades. When the maintenance rule paved the way for surprisingly easy renewal, the industry was eager to see similar regulations elsewhere. An industry financial analyst warned a Nuclear Energy Institute conference, "It's hard to see how NRC oversight . . . can fit into a competitive environment."122 The solution to its competitive woes, the nuclear industry concluded, was "risk-based" regulation—a term that later gave way to Jackson's preference for "risk-informed" regulation. Prescriptive oversight regulations, NEI argued, pushed the NRC across the line into management as it inspected "all aspects of a licensee's work practices and operations." It proposed an alternative of formulating and monitoring quantitative safety outcomes based on risk information. Such a monitoring role for the NRC would create "a regulatory framework that is more focused, objective and efficient." The cultural shift for the NRC would be challenging, NEI acknowledged, but nuclear power "must improve its cost effectiveness to remain a viable option . . . in a competitive market environment."123 By the mid-1990s, the industry could point to model licensees that deployed PRA to ensure safe operations and save money. Perhaps the greatest PRA success was the South Texas Project (STP). In the early 1990s, the STP's two reactors faced an NRC action letter shutting them down over numerous management and hardware issues. The plants were expected to remain shuttered for five years. STP turned to PRA. It had been a leader in developing in-house, high-quality PRAs and applying them to operations. It used that capability to prioritize the most risk-significant work and gain approval from the NRC for its recovery plans. Restart took just thirteen months.124 The NRC stuck to its position that the transition to risk-informed regulation would be an "incremental process."125 As commissioner Kenneth Rogers told an audience, the NRC's use of PRA and risk insights would be "evolutionary, rather than revolutionary."126 Incrementalism was not what the industry had in mind. Deregulation, NEI argued, created a "need for urgency in the pursuit of risk-informed,
performance based [regulation]." NEI held up the maintenance rule as the model that "should be applied to other areas [of regulation] without the need for protracted debate."127 Frustrated with what it saw as the NRC's slow-walking of reform, NEI developed a more aggressive critique of the NRC's prescriptive practices. In 1994, it contracted with the consulting firm Towers Perrin to review the NRC's relationship with its licensees. After polling power plant staff, the firm submitted its assessment just two weeks before the momentous 1994 elections in which the Republican Party gained control of Congress.128 The report portrayed the NRC as an arrogant regulator whose "current regulatory approach represents a serious threat to America's nuclear energy generating capability." NRC and industry metrics, it argued, indicated plant operations were safe and getting safer. Despite that record, plant staff told Towers Perrin that the NRC punished high performance with an arbitrary, punitive oversight process with no safety benefit. In fact, NRC activities might reduce the safety margin by distracting licensee management with trivial issues. Plant staff confessed they dared not object to inspection violations "because of an intense and widespread fear of retribution by the NRC." Two-thirds of respondents said they avoided confronting inspectors even when the inspectors issued violations for non-violations. The report concluded that NRC oversight diverted plant management from real safety issues, eroded public trust, and was pricing nuclear power out of the market.129 Towers Perrin zeroed in on the NRC's plant inspections and Systematic Assessment of Licensee Performance (SALP). Industry respondents believed the NRC's SALP rating system and dreaded "watch list" of problem plants were arbitrary, secretive, and subject to the whims of NRC managers. Relegation of a plant to the watch list could have serious economic consequences for a licensee, as some plants on the list stayed shut down for years, lost hundreds of millions in revenue, and, as Towers Perrin argued, gained no safety benefit. The report claimed NRC inspectors wrote up frivolous violations even as they admitted, in private, that they did so to meet unofficial quotas. Inconsequential violations included one for leaving blank spaces on a routine form and another for poor housekeeping that missed dust bunnies behind a plant telephone. Trivial safety was crowding out real safety. The NRC promised reform, Towers Perrin claimed, but delivered disappointment.130 Some of SALP's weaknesses reflected the practical limits of oversight in the late 1970s. When Three Mile Island rocked the agency, staff had been at work crafting a more objective assessment process based on
quantitative data. The hope had been to find leading indicators that flagged a plant with degrading safety performance, but data was limited. Three Mile Island forced the agency to turn what had been a pilot program into a permanent qualitative assessment process, dominated by the detection of procedural violations large and small and hard-to-measure judgments of a licensee's willingness to institute prompt remedial action. The NRC enforced SALP findings with the blunt tools of poor performance ratings, fines, and shutdowns.131 SALP's tools were blunt but effective, the NRC believed. Scott Morris joined the NRC as an inspector after serving in the nuclear navy. The SALP, he believed, "gets a bad rap." Tough as it was, it was appropriate to the difficult regulator-licensee environment of the 1980s. For many years following implementation, it influenced industry performance in positive ways. But as industry performance improved, the SALP did not adapt. There was no rule, but the NRC seemed to reward inspectors for writing tickets. "Scads of tickets!" Morris recalled. Getting inspectors to focus primarily on the risk-significance of violations was difficult.132 NRC assessments of the SALP supported some of Towers Perrin's claims that it was subjective and produced inconsistent grades between NRC regional offices. Arbitrary grades by the regions were a "big time" problem, Morris agreed.133 Efforts to improve consistency with senior-level management meetings created another layer of review and another layer of complaints. The industry believed the NRC paid too much attention to well-run plants and not enough to the poor ones. Without numerical safety goals and quantitative measures of performance, SALP was opaque and divisive.134 While the nuclear industry portrayed the NRC as an overbearing regulator, others faulted NRC permissiveness. The agency had tried to be a "kinder, gentler NRC," said director Tom Murley, where enforcement would be "more diagnostic and less punitive." Yet, poorly managed plants were numerous, one NRC inspector recalled. Searching for violations at a plant was "like fishing in a stocked pond."135 Nevertheless, a GAO report thought NRC oversight was lax and upbraided its "culture of tolerating problems" and inconsistency.136 Some problem plants managed to avoid the NRC's watch list even as others stayed on it for years. The NRC searched for the Goldilocks zone that was not too hot and not too cold.137 Even as the NRC worked to reform the SALP, the "problem plants" produced intense publicity and regulatory confrontation. In March 1996, Time magazine ran a cover story about George Galatis, an engineer-
turned-whistleblower at Northeast Utilities, which operated five nuclear power plants, including three at the Millstone station in Waterford, Connecticut. In 1992, Galatis realized there was a licensing issue with a common practice at Millstone, Unit 1. During outages, staff routinely removed, or “offloaded,” all the reactor fuel from the reactor vessel to its spent fuel pool. The potential hazard of relocating all the physically hot fuel to the pool had not been analyzed as part of the plant’s operating license. It was a configuration outside the plant’s design basis. Time detailed Galatis’s successful three-year battle to compel Northeast Utilities to file a license amendment to modify the plant’s procedures and systems to accommodate a full offload.138 The episode tainted the reputation of both Northeast Utilities and the NRC. NRC inspectors knew about the full offload, but they did not grasp its significance. The article suggested darkly that there might have been “collusion between the utility and its regulator,” and that the utility had problems with its safety culture.139 Unable to quantify safety culture, the nuclear industry and NRC often debated how to apply the ill-defined concept for US licensees. Millstone seemed like a clear example of a licensee that lacked it. Forty percent of Millstone’s staff, Time reported, did not trust management enough to raise safety concerns. The NRC conducted a close examination of Millstone’s operations, which uncovered multiple issues regarding site management, whistleblower protections, and the NRC’s response to the controversy. It took more than two years for Millstone-2 and Millstone-3 to receive NRC permission to restart. Millstone-3 alone cost Northeast Utilities $500 million to return to service. Millstone-1 closed for good. “The changing utility structure and electric marketplace,” the utility announced, “lead us to the harsh reality that there is insufficient value in Millstone-1 for our customers.”140 Millstone was not an outlier. Problems of weak safety culture made headlines elsewhere. In January 1996, the NRC issued a confirmatory letter suspending operations at the Maine Yankee Nuclear Station. A follow-up NRC assessment of numerous violations portrayed unacceptable managerial incompetence. Many of these violations and underlying causes were longstanding and appeared to be caused by ineffective engineering analyses, review and processes which led to inadequate design and configuration control; a corrective action program which was fragmented; a quality assurance function which was not effective at both an individual and organizational level; and ineffective oversight as well as inadequate knowledge of vendor activities. The NRC’s assessments . . . found that Maine Yankee was a facility in which
pressure to be a low-cost performer led to practices which over relied on judgment, discouraged problem reporting, and accepted low standards of performance, as well as informality rather than rigorous adherence to program and procedural requirements. Lastly, Maine Yankee had become insular, failing to keep up with industry practice and failing to communicate adequately with the NRC.141
Maine Yankee never produced another watt of electricity. Closure allowed the utility to avoid a substantial fine, and abandonment was less costly than NRC-required upgrades. The NRC's more aggressive position left some debating whether Maine Yankee's demise was the result of an NRC-enforced "homicide or suicide" by poor utility management.142 Millstone and Maine Yankee alarmed the NRC. At a 1996 NRC-sponsored conference, James Taylor, the NRC's executive director of operations, expressed a sense of betrayal as he scolded industry executives. "We have placed a tremendous amount of trust in licensees to operate their facilities in accordance with their operating license. . . . I want to reemphasize how disturbing the situation at Millstone is. With nuclear power generation a mature industry, it is most distressing that an organization can treat its design basis with such disregard that issues fester for years, are ignored, or don't get fixed."143 Jackson warned the audience that market competition could degrade safety. She noted that Maine Yankee had fallen prey to "economic pressure to be a low-cost energy producer," and the NRC would remain on the lookout for plants where "economic stress may be impacting safety." She worried that in its rush to respond to Towers Perrin, the NRC too may have "moved away from an important balance" in its inspections of design and operational safety. Jackson's words matched the NRC's deeds. In just one year, the number of plants on its watch list spiked from six to fourteen.144 The NRC went further. In late 1996, it issued a "request for information" to all licensees to demonstrate they were within their design basis and to report any deviations found. As Millstone demonstrated, plant management had made numerous seemingly small changes to their plant designs, often at NRC direction, which were not analyzed in the plant's Final Safety Analysis Report. It was akin to a home mechanic souping up a sports car with numerous modifications, and then being surprised with a safety inspection that required proof that the changes did not degrade the car's original safety design. Initial estimates indicated that some older plants might have to spend $100 million to satisfy the NRC. While Jackson argued that "public health and safety is entirely
compatible with a deregulated environment," others were doubtful.145 A utility executive said, "A lot of utilities are looking really seriously at shutting down if they have a big regulatory problem."146 Where industry saw NRC excess, critics of nuclear power perceived a slipping commitment to safety. Groups such as Public Citizen and the Nuclear Information and Resource Service warned that economic pressures might foster a "utility attitude of 'minimal compliance' " with NRC regulations, and PRAs would simply justify reduced safety margins in the face of "mounting competitive pressure."147 David Lochbaum of the Union of Concerned Scientists also pointed out that the request for information had revealed numerous instances where plants were not built to their original design specifications, as PRAs assumed. In one instance a critical cooling system would not have worked if needed.148 The NRC's burgeoning watch list did not mollify critics, either. The NRC, they argued, responded to poor performers reluctantly and with kid gloves rather than decisive enforcement. A General Accounting Office report seconded their arguments.149 In reality, the NRC oversight process split the difference between pronuclear and antinuclear perspectives. Staff hours devoted to inspections and oversight had declined since the early 1990s, and the watch list's growth indicated the agency was focused on poor performers. Yet, industry fumed at what was called the "Jackson effect," a reference to the chairman's perceived fixation with conformance to original licensing documents that proved costly to the whole industry.150 The spike in watch-listed plants and issuance of low-level violations appeared arbitrary and pinned an embarrassing scarlet letter on licensees that could send company stock prices tumbling. Wall Street analysts even used SALP scores in models to predict the stock performance of nuclear power companies.151 "The existing regulatory process gives the public an inaccurate view of plant safety," said Corbin McNeill, chairman of PECO Energy. "The NRC applies the regulatory process to every plant as if it were performing at a low level."152 Even Millstone's problems, some argued, did not add up to a significant safety risk. Jackson fired back, "It's like saying, 'Well, nothing has happened yet.' "153 An NRC Inspector General report sided with industry. It pointed out that the NRC's practice of "enforcing strict compliance" had turned up few safety-significant problems and was costly. The NRC had estimated licensees would expend only four hundred hours to comply with its request for information, but the industry average was two thousand hours. Only two percent of the problems identified warranted further
evaluation. Also, the SALP and watch lists did not prove to be predictive of poor performance or a cure for it. Being placed on the watch list did not necessarily incentivize licensee improvement.154 Missing from the debate was congressional guidance. The “Republican Revolution” of 1994 established a new tone in congressional oversight; it was friendly to business, skeptical of bureaucrats, and instinctively hostile to regulation. Initially, the GOP’s wrath focused on agencies inside the beltway. Legislators introduced wide-ranging bills to halt bureaucratic action with regulatory moratoriums and proposals to abolish numerous agencies, including a “near-death” bill for the Department of Energy appropriately named the “Department of Energy Abolishment Act.” Things were comparatively quiet at the NRC’s headquarters in Rockville, Maryland, a safe sixteen subway stops from Capitol Hill. The agency had not had a specific authorization hearing in thirteen years.155 The quiet was shattered in 1998. Dennis Rathbun, who served in Jackson’s office and headed the Office of Congressional Affairs, recalled visits to Capitol Hill that spring where he heard hints that NRC staff might have to worry about how they were going to pay the mortgage. The news broke in May. The nuclear industry had armed Senator Pete Domenici with a study from Tim D. Martin Associates that estimated seven hundred NRC staff could be cut without impact to safety. The reduction in force was to be applied like the tenth plague of Passover in singling out its victims—five hundred engineers and inspectors heavily involved in oversight. The Senate appropriations report excoriated the NRC and echoed much of the criticism lodged in the earlier Towers Perrin report. An industry source told a journalist that he had never seen such language in an appropriations bill and called it a “commentary on the Jackson regime.”156 After his confrontation with Jackson, Domenici said his intention was to send a message. It worked. Jackson gave him satisfactory assurances that the NRC would change. The budget cuts were scaled back. Domenici won a commitment to hold NRC oversight hearings in late July to consider his criticisms and discuss moving to risk-informed, performance-based regulation.157 In mid-June, commissioner Nils Diaz went to Capitol Hill and signaled the NRC’s path forward. A former professor of nuclear engineering, Diaz was sympathetic to the nuclear industry and pushed for “fully risk-informed regulations.” Nuclear power was “a socio-political issue,” he noted, and he acknowledged the “strong inducement” Domenici gave the NRC “to conduct a searching
evaluation of itself.” It was time to “give priority . . . to risk-informed regulation. . . . Everyone agrees to it, why not do it?” If the industry petitioned the NRC to develop ambitious risk-informed regulations, he suggested, “I believe you will get it.”158 The subsequent oversight hearing, conducted by Oklahoma Republican James Inhofe, produced an incongruous moment of unity between the industry and antinuclear activists on the need for NRC change. NEI applauded Congress’s renewed interest in the NRC and assailed the agency’s “outdated, ineffective regulatory framework” that relied too much on subjective judgments and punitive enforcement actions. “Immediate, fundamental changes in policy and culture are necessary.” The NRC needed to adopt “risk-informed and performance-based concepts” that would be “objective, safety-focused, and responsive.” David Lochbaum was skeptical that PRA quality was good enough for regulatory use, but he agreed with industry that “the NRC needs to have objective criteria to understand what plant performance is. They don’t have that and that puts them into this box where a good performing plant overnight comes on the watch list. . . . That is not fair to anybody involved.”159 Congress wanted swift action and got it. In June, the commission directed an end to adversarial oversight. Enforcement, they told the staff, should not be the “driving force” of a new assessment program. At the July 30 hearing, commissioner Edward McGaffigan admitted the NRC’s existing system was not working. “We have this old, prescriptive, deterministic framework hanging around, driving us to do things that are trivial.”160 After the hearings, Jackson directed the staff that Congress expected “the NRC to accelerate ongoing improvements and to immediately make additional, significant changes and improvements” to the reactor oversight program and speed up processing license renewal applications. Jackson wanted quick victories on a few visible projects; shed work elsewhere as necessary, she instructed. After that, “we will need to consider more profound changes . . . over several years.”161 With the NRC in retreat, the nuclear industry tossed it a lifeline, then another, and another. Just three days after the Jackson-Domenici meeting, NEI sent the NRC a policy blueprint to “conduct a reasoned debate” on the policies necessary to allow nuclear power to thrive. Time was of the essence, the brochure warned. “Competition in the electricity business necessitates the NRC’s achieving these changes as soon as possible.” The “new paradigm in regulation” would be to employ “risk insights from probabilistic [risk] assessments, [which] can greatly improve the safety focus of regulatory requirements.”162 NEI had ideas for a new
oversight program that married engineering judgment, defense in depth, and insights from probabilistic risk assessments. Objective numerical thresholds of declining performance would trigger greater NRC oversight. The NRC suspended the SALP program in October, and the following spring it halted publication of the "hated" watch list.163 With the moment ripe, NEI also proposed what one industry publication called a "revolutionary" revision to regulations. It outlined a way to risk-inform the whole of the safety regulations governing existing operating reactors. The revisions to "Part 50," known for its location in the Code of Federal Regulations at 10 CFR Part 50, were simple and breathtaking. The proposal covered 51 separate sections of Part 50 and its appendices. It redefined the term "safety-related" and limited its application to accidents and safety systems "which operating experience or probabilistic risk assessment has shown to be significant to public health and safety." Applying PRA to the regulations could reduce requirements for reactor systems, fire protection, emergency planning, quality assurance programs, control room designs, and spent fuel pools. Millions in savings were predicted for each plant.164 The most revolutionary part of this revolutionary proposal would eliminate as a credible accident the grandfather of all design basis accidents, the large-break loss-of-coolant accident (LOCA). The LOCA was the founding myth of nuclear safety, and every operating light water reactor in the world kept vigil to cope with it. A plant's safety-related systems had special requirements for quality, redundancy, and reliability based on a LOCA—fuel design, emergency core cooling systems, containment spray, containment sump pumps, electrical equipment, containment buildings, and hydrogen scavenging systems. The LOCA also set standards for "environmental qualifications," the harsh accident environments safety equipment had to operate in. Changing the design basis accident to a smaller pipe size could relax these requirements and save money by cutting back on refueling requirements. Rather than stylized accidents selected by expert judgment, those with quantitative safety significance would take center stage. All that was needed was enough data, operator experience, and a thorough PRA.165
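The arithmetic behind that idea is easy to sketch. Below is a minimal illustration in Python, using invented frequencies rather than figures from any plant's PRA, of how ranking accident sequences by estimated core damage frequency (CDF) separates the risk-significant few from the rest; note how a stylized design basis event can land at the bottom of the list:

```python
# A minimal sketch, with invented frequencies, of the ranking logic behind
# "risk-informing" the regulations: estimate each accident sequence's core
# damage frequency (CDF), then flag the few that dominate total risk.

sequences = {
    "large-break LOCA": 5e-7,       # events per reactor-year, illustrative
    "small-break LOCA": 4e-6,
    "loss of offsite power": 9e-6,
    "transient with ATWS": 1e-6,
    "internal flooding": 2e-6,
}

total_cdf = sum(sequences.values())
print(f"total CDF: {total_cdf:.1e} per reactor-year")

# Sequences contributing more than 10 percent of total CDF count as
# "risk significant" here; requirements tied to the rest would be
# candidates for reduced regulatory burden.
for name, freq in sorted(sequences.items(), key=lambda kv: -kv[1]):
    share = freq / total_cdf
    label = "risk significant" if share > 0.10 else "low significance"
    print(f"{name:22s} {freq:.1e}  {share:5.1%}  {label}")
```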
Data, experience, and good PRAs were easier said than done. But for the moment, advocates of risk-informed regulation had momentum, helped along, as one industry publication noted, by the fortunate timing of the industry's safety record and "congressional overseers [who] are sympathetic if not outright supportive."166 To some NRC staff, the change in direction after the Jackson-Domenici meeting was a serious challenge to the NRC's independence. At an agency meeting, commissioners Jackson, Diaz, and McGaffigan worked to persuade employees skeptical of risk-informed regulation to make a "culture change." McGaffigan warned, "Congressional attention is not going to go away. . . . The old model was [a] ponderous utility dealing with ponderous state utility commission[s] dealing with ponderous Nuclear Regulatory Commission. That model is not going to be adequate in the twenty-first century." One staffer said the NRC had been "relatively resistant to political pressure" and resented "the fact that . . . we are being threatened by someone who has the power of the purse over us." Jackson responded, "We are creatures of Congress, and we have a responsibility to be responsive. . . . Congress has provided us with a platform to accelerate our movement in a direction we know we must go, a direction we ourselves already had decided we needed to go." "Take a drink of risk-informed regulation and let it go to work in your system," Diaz joked. "You never know. You might enjoy it."167 In public testimony, Jackson went further. When asked by Senator Bob Graham of Florida about nuclear power's decline, she declared, with the endorsement of the other commissioners seated with her, "I am a proponent of nuclear power. I regulate it, but I believe in it. I don't think it is desirable to have a decline. I think having a diversity of energy supply from an economic security point of view is an important goal." The way the NRC could help solve the industry's economic problems was through "shifting the regulatory paradigm to become risk-informed and performance based" and "no unnecessary burden beyond that."168 The commission talked of a paradigm shift, but substantive change was more modest. Among the successes was the Reactor Oversight Process (ROP), which replaced the SALP. As one industry publication noted, NEI "literally co-wrote" the ROP with NRC staff.169 The new process combined a probabilistic approach with defense in depth expressed in "cornerstones" of safety. Grading of plant performance was done through a more transparent and objective color-coded scheme that measured increments of increasing risk from performance indicators. Enforcement was calibrated to the safety significance of an infraction. Except for issues found to be in the lowest category of safety significance—a "green" finding—NRC specialists were to perform a risk assessment of their findings. Higher-level risk findings would require more NRC oversight resources, public involvement, and utility planning.
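The banding logic of such a scheme can be sketched in a few lines; the performance indicator and threshold values below are illustrative stand-ins, not the NRC's actual figures:

```python
# Minimal sketch of color-banded oversight with made-up thresholds.
# A performance indicator (here, unplanned scrams over a reporting period)
# maps to a color; anything past "green" draws increasing NRC attention.

def oversight_color(scram_count: float) -> str:
    # Threshold values are illustrative, not the NRC's actual figures.
    if scram_count <= 3.0:
        return "green"    # licensee response only
    elif scram_count <= 6.0:
        return "white"    # increased regulatory response
    elif scram_count <= 25.0:
        return "yellow"   # degraded performance
    return "red"          # unacceptable performance

for count in (1.0, 4.0, 10.0, 30.0):
    print(f"{count:5.1f} scrams -> {oversight_color(count)}")
```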
As the NRC relied on performance indicators, however, some qualitative assessments were left behind. The NRC would infer declining safety culture from performance indicators, but it would not measure it.170 "That was the end of the DETs," recalled Brian Haagensen. For the moment, the experiment with the social sciences and organizational research was over.171 While the ROP closely reflected industry desires, industry did not get all it wanted. It proposed that the NRC perform inspections only when performance indicators revealed worrisome trends. The NRC rejected this concept. "Our mere presence at a site changes licensee staff behavior," observed Scott Morris, one of the ROP's authors.172 The final product won the approval of some industry critics. David Lochbaum of the Union of Concerned Scientists served on a task force that evaluated the piloting of the ROP. The new process, he concluded, "can make a large, positive contribution to nuclear power plant safety" and was "substantially better than the [SALP] and Watch List processes."173 Along with the maintenance rule, the more objective, risk-informed ROP brought some peace to the NRC-licensee relationship.174 A warming of relations between industry and regulator was evident as Jackson's momentous term as chairman wound down. In February 1999, Senate oversight hearings before subcommittee chairman James Inhofe were mostly amicable.175 The NEI expressed its general satisfaction with the NRC's progress. Nuclear critics, on the other hand, complained that the nuclear industry was "regulating the regulator."176 Jim Riccio of Public Citizen later quipped that the NRC's "near death experience" had really been a "non-hostile takeover of NRC by NEI."177 Whether or not that was a fair assessment, it was noteworthy that the industry's safety performance substantially improved over the ROP's first two decades. As it was, few mourned the SALP's demise. Sparse attendance marked the hearings; Inhofe mostly sat alone while his colleagues attended to details of President Bill Clinton's impeachment trial. PRA had made inroads into NRC regulations in the 1990s. The maintenance rule, PRA policy statement, and emerging Reactor Oversight Process had given the methodology practical applications in ways not envisioned in WASH-1400. Its quantitative insights contributed to a fundamental rethinking of the Three Ds' qualitative safety, particularly the meaning of defense in depth. The 1957 WASH-740 accident study described defense in depth as three barriers to the escape of radiation—fuel-rod cladding, primary-coolant piping and vessel, and containment building. By the twenty-first century, the lessons from operations, mishaps, and PRA produced a more sophisticated understanding of defense in depth. In the risk-informed era it involved multiple layers of technology and humans—barriers, safety systems, operators, organizations, and
emergency responders—as compensation for incomplete knowledge of accident hazards. Quantitative risk insights helped make qualitative safety better.178 The improved regulatory climate came along just when prospects for the nuclear industry were looking up. The predicted wave of decommissioned plants never broke over the industry. Instead, licensees sought twenty-year license extensions, a process eased by the maintenance rule. In 1999, former NRC commissioner James Asselstine was a managing director for Lehman Brothers. While at the NRC, he had been an industry critic, but he told an NEI conference that there was palpable optimism on Wall Street regarding nuclear power. "Investors are coming to realize well-run nuclear plants can be very competitive."179 Performance measures at operating plants were on the rise and on par with or better than those at plants in other nations. Since 1982, significant safety event reports had dropped 92 percent; the median unit capability factor had risen 38 percent. The plants that survived "a healthy industry shakeout," one industry official said, became valuable baseload providers. The plants that closed, another conference speaker pointed out, had not been managed well and had capacity factors below the industry average.180 The surprise of deregulation, Scott Morris thought, was that it made plants safer, not less safe, as the NRC had feared. Market forces made licensees "terrified" of low marks from the NRC.181 They went to great lengths to stay in the good green column of ROP performance metrics. Improved performance allowed the industry to turn to the future. At a "Profit with Nuclear Power" conference—a title unthinkable just a few years earlier—a vice president with Entergy Nuclear confessed, "Three years ago I said 20–25 [nuclear] units would shut." Now he predicted "a nuclear renaissance."182 A renaissance was more than conference talk. With nuclear power's stock on the rise, the NRC, Department of Energy, and reactor vendors laid the groundwork for a second nuclear era. In addition to the streamlined licensing process for new reactors, DOE offered seed money to encourage vendors to obtain NRC-approved standardized, "passive safety" reactor designs. There was also money for utilities to bank preapproved sites, ready for the day they could place a reactor order.183 Alongside the success of risk-informed regulation loomed potential setbacks. As it permeated NRC regulations and decision-making, it would rise or fall, be praised or blamed, based on solid plant performance, mishaps, and accidents. A little of all those factors awaited in the new century, but the nuclear industry and the NRC could look back at a surprisingly positive run. Paul Joskow, an MIT economist, remarked,
“The 1990s were an especially good decade for nuclear energy. The U.S. nuclear industry has finally learned how to operate the existing fleet of nuclear plants economically and safely.”184 Before turning to the NRC’s PRA story in the twenty-first century, it is edifying to step outside US reactor plant regulation and to review how risk assessment fared in other applications. As the NRC and nuclear industry worked to establish PRA as a cornerstone of a new reactor safety era, the descendants of WASH-1400 were making their mark in new applications at home and abroad.
6
Risk Assessment Beyond the NRC
Risk assessment succeeded as a nuclear safety tool, disappointed at public persuasion, and enjoyed modest progress in NRC regulation. The NRC's creation had varied but important influence outside the US nuclear power industry. Mishaps and disasters revealed the limitations of qualitative safety and engineering judgment in other complex technologies. The PRA lessons of Three Mile Island were taught anew during the chemical disaster in Bhopal, India, the loss of the space shuttle Challenger, and the Chernobyl disaster in Ukraine. The NRC's reports on PRA quickly became templates for nonnuclear safety assessments.1 This chapter details three cases where the NRC influenced the use of risk assessment beyond the original application to US nuclear power plants: the US space program; assessing the safety of Soviet-designed reactors after the fall of the Communist bloc; and the NRC's cooperative effort with the Environmental Protection Agency (EPA) to deploy risk assessment in the decommissioning of NRC-licensed facilities. In two of these stories, PRA helped produce positive results. The National Aeronautics and Space Administration (NASA) came to lean heavily on the nuclear industry for a new safety model, as did former Communist-bloc nations searching for safety in a post-Chernobyl, post-Soviet world. PRA became a tool of diplomacy as a nascent European Union sought to resolve reactor safety disputes across the former Iron Curtain.
Risk assessment diplomacy did not always work, however. Policymakers at the NRC and EPA hoped risk quantification would allow them to speak a common technical language and facilitate their occasionally contentious cooperation in the regulation of low-level radiation hazards at decommissioned facilities. Risk assessment did clarify differences between the two agencies in their approach to low-level hazards. Ironically, this improved understanding exacerbated their differences and, for a time, hindered compromise. A final resolution required regulators to step beyond quantified risk to consider a mixture of social, political, and technical factors. These three cases, then, reveal the possibilities and limits of risk assessment when used by government agencies and in international negotiations on technical issues.

WASH-1400 COMES TO NASA
"Statistics don't count for anything. They have no place in engineering anywhere," said Will Willoughby.2 In the 1970s, Willoughby directed NASA's reliability and safety program; his comment, made in 1989, reflected the disdain some engineers held toward statistics and probabilities and their preference for engineering judgment. With the 1986 Challenger disaster and a subsequent scathing presidential commission report, NASA was under fire for not taking risk quantification seriously. Nobel Prize-winning physicist Richard Feynman served on the Challenger commission and ridiculed NASA's probability guesswork that a shuttle disaster—a Loss of Vehicle (LOV)—would occur in only one in one hundred thousand launches. It was, he noted, the equivalent of sending a shuttle aloft every day for three hundred years and expecting just one LOV. NASA's obliviousness to the real probability, he believed, encouraged it to play "a kind of Russian roulette" where each successful launch convinced its leadership to keep spinning the cylinder and pulling the trigger on the next one.3 The cause of the Challenger disaster was straightforward: an unusual Florida cold snap led to a common-cause failure of a primary and backup O-ring in one of Challenger's solid rocket boosters. The long, tubular boosters mounted on either side of the shuttle's large liquid-fuel tank consisted of numerous ringed segments stacked vertically to a height of one hundred fifty feet, the largest solid rocket ever built. The gaps between each segment were sealed by O-rings to prevent the escape of hot gases created by the igniting rocket fuel. The frigid morning
temperatures shrank and hardened the O-rings, creating tiny openings. Once the leak started, the rush of hot gases enlarged the openings even more, destroying the shuttle seventy-three seconds after launch. If the accident itself was simple, NASA's failures were complex and hotly disputed. Technical and scholarly investigations mostly focused on a host of design and management problems, particularly NASA's culture of high production, a can-do confidence that refused to consider the possibility of failure, and engineering groupthink that accepted a flawed design and launch decision as normal. As one scholarly assessment put it, NASA was beset by "a diffusion of responsibility, a bias toward political pressure, and an overall success-oriented perspective."4 Success bred a euphoric optimism that blinded management to a probability of shuttle failure that was at least a one in one hundred proposition. Feynman blamed NASA "management's fantastic faith" in success that was disconnected from the real risk.5 Other explanations explored NASA's excessive faith in redundant components, such as the extra booster O-ring that failed, too.6 Another faulted NASA's "normalization of deviance," in which the risks of launch were accepted until they could be proven to be very high.7 Harold Lewis, who headed the committee that critiqued the Rasmussen Report, believed NASA was a victim of the same "psychological trap" that befell the nuclear industry before Three Mile Island and the Soviets before Chernobyl "that because something has not happened, you are doing just fine."8 Others attributed NASA's blindness to risk to a cultural bias among engineers who, like Willoughby, disdained probabilities. "There's a general reluctance by engineers . . . [to use] probabilities," one risk expert said. "This reluctance is not unique to NASA."9 Safety margins and engineering judgment were "the way engineers have operated for, well, forever," another expert agreed. "The Romans were not computing probabilities when constructing aqueducts."10 Even after Challenger, one NASA official said, "For everyone in NASA that's a big PRA seller, I can find you 10 that are equally convinced PRA is oversold."11 Like the NRC before it, NASA adopted PRA incrementally and, for many employees, reluctantly, over two decades that included the additional inducement of the loss of the Columbia spacecraft in 2003. These explanations overlook NASA's longer history with risk assessment; it was not biased against quantified tools, but rather had turned away from them after unsatisfactory experience. In the 1960s, it had energetically pursued PRA for its Apollo program, and some of the nation's best PRA experts came from aerospace. Just as quickly, however, NASA abandoned this tool in favor of a system that relied heavily on qualitative engineering judgment. The choice of qualitative over quantitative tools brought NASA much success with Apollo, and Willoughby's comment reflected hard experience with still-clunky PRA tools and a justifiable pride in NASA engineering that, like the Three Ds of nuclear safety, worked well. This decision, however, came at a long-term cost. PRA might have helped the agency see the relative risk significance of the shuttle's solid rocket boosters to crew safety. Dependent on qualitative safety tools, NASA's poor grasp of probabilities was normal. Experts in multiple fields operated in confident ignorance of risk.12 Untethered from disciplined risk assessment methodology, NASA's maligned one-in-one-hundred-thousand launch-failure estimate (about two thousand times lower than its estimates for a loss of crew on Apollo missions) was a little pessimistic compared to guesswork in other fields. Uncommon hazards usually came with a common probability: about one in a million (10⁻⁶). In nuclear energy, numbers close to 10⁻⁶ were used by the 1949 Reactor Safeguard Committee, the 1957 WASH-740 report and its 1965 update, and later by the AEC regulatory staff. In other fields, the FDA, EPA, and even the Supreme Court picked something close to 10⁻⁶ when establishing risk goals regarding hazardous substances. Congress, too, chose 10⁻⁶ as an acceptable emission risk standard for individuals when it amended the Clean Air Act in 1984. The million-to-one standard was popular, so popular that there was even a unit of measure for it, the micromort. Nevertheless, it was arbitrary; it equated to the odds of getting cancer from smoking 1.5 cigarettes in a lifetime or drinking three glasses of wine.13 "One in a million" was shorthand for the belief that modern life could be made to pose negligible risk. NASA's poor instincts on shuttle risk fit with the times.
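Two back-of-the-envelope calculations, using an approximate population figure, show why 10⁻⁶ felt negligible and how Feynman's every-day-for-three-centuries image follows from NASA's one-in-one-hundred-thousand guess:

```python
# Back-of-the-envelope arithmetic; population figure is approximate.

individual_risk = 1e-6            # the popular "one in a million" standard
us_population = 250_000_000       # rough late-1980s US population
print(f"expected affected people: {individual_risk * us_population:.0f}")  # ~250

nasa_guess = 1e-5                 # 1 loss per 100,000 launches
daily_launches = 365 * 300        # one launch a day for three hundred years
print(f"expected shuttle losses: {nasa_guess * daily_launches:.1f}")       # ~1.1
```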
Not everyone at NASA abstained from quantitative risk assessment. Post-Challenger assessments revealed that in the back corners of the agency, away from shuttle launch decisions, qualitative and quantitative safety did battle even before the doomed launch. Risk experts familiar with nuclear industry PRAs squared off with NASA engineers who stood by engineering judgment. PRA experience brought to these debates a realistic perspective grounded in booster failure-rate data and an awareness of common cause failures. They exposed the space agency's successful qualitative safety approach as antiquated and cumbersome. Without PRA, NASA, like the NRC before it, struggled to develop reasonable estimates of launch risk and identify safety issues that should receive the most attention. In the years after Challenger, NASA turned to the nuclear industry for PRA expertise, which has grown in importance at NASA. Why, in the 1960s, did NASA stop quantifying and start guessing on probabilities? Its decision was rooted in the crude state of 1960s PRAs and the demanding deadlines of the Apollo program. Along with the US ballistic missile program, NASA pioneered quantitative risk assessment. By the mid-1960s, NASA had developed fault trees for Apollo's Saturn V rockets and the Command, Service and Lunar Modules. Then just as rapidly, it abandoned quantification as too expensive, labor intensive, and slow to keep up with the swift redesigns NASA's engineers made as they raced to the moon.14 NASA discarded PRA in favor of a more qualitative methodology called Failure Mode and Effects Analysis (FMEA). FMEA worked well during Apollo, and its limitations only became evident during the complex space shuttle program when qualitative approaches left NASA unable to discern the most beneficial safety design changes. There may have been more to the story. NASA abandoned quantitative risk assessment, some believed, because it delivered bad news. As NASA's chief engineer later testified after the Challenger disaster, the Apollo risk assessment "was so pessimistic that we did away with that study."15 NASA veterans recalled that the early PRAs indicated success rates as low as one in twenty. Revised estimates came in with an alarming one failure in three launches, and even the more optimistic goal of an 80 percent success rate indicated the chance of disaster over a dozen missions was distressingly high.16
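The arithmetic is simple to check, assuming the 80 percent figure applied independently to each mission:

```python
# Quick check of the claim above: with an 80 percent per-mission success
# rate, assumed independent across missions, disaster somewhere in a
# dozen missions is close to a sure thing.

p_success = 0.80
missions = 12
p_any_loss = 1 - p_success ** missions
print(f"P(at least one loss in {missions} missions) = {p_any_loss:.0%}")  # ~93%
```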
A more benign interpretation was that NASA wisely concluded that quantitative risk assessments were just wrong. Even accounting for the mission failure of Apollo 13, NASA enjoyed greater success than its PRAs predicted. The slow, labor-intensive construction of fault trees did not keep pace with Apollo's rapidly evolving design. NASA staff believed the agency had correctly opted for FMEA—a careful engineering investigation of each safety system to identify weaknesses and "single-point failures" that might spell doom for a flight.17 FMEA prioritized safety problems through a tracking system that sorted them into "criticality" categories according to their potential to cause a loss of mission and crew. Items on this Critical Issues List (CIL) had to be resolved or waived before launch. While some subsystems might employ probabilistic data and methods, the FMEA/CIL system relied on component reliability data and engineering judgment from multiple committees of engineers.18 For Apollo, the success of the FMEA/CIL system and a heavy emphasis on quality assurance convinced NASA that careful attention to detail and old-fashioned engineering judgment offered superior safety. Going to the moon had substantial political support and funding to solve design issues through NASA's qualitative approach. The cash-strapped shuttle program was another matter. By the time the first shuttle launched in 1981, almost a decade had passed since the last Apollo mission. The shuttle was extremely complex, and unresolved problems grew. Advances in computers and PRA could have been useful in assessing risk and prioritizing safety issues. Yet, NASA's pride in its pencil and paper launch decisions created a culture distrustful of PRA. Early shuttle success added to this confidence. Nuclear regulators admired NASA's enviable safety record.19 In the deep recesses of the US space program, PRA survived. Nuclear-powered satellites often used uranium or plutonium as a heat source to generate electricity. NASA launched many nuclear-powered satellites in the 1960s and early 1970s under the AEC's Systems Nuclear Auxiliary Power Program (SNAP). In 1961, the AEC recognized that an accident from a SNAP satellite had important safety, environmental, and diplomatic implications. Despite their small size, a failed SNAP launch and burn-up in orbit could spread radiation globally. These risks proved more than hypothetical, as one plutonium unit disintegrated on reentry in 1964 and almost tripled the global atmospheric deposit of a key plutonium isotope. President John F. Kennedy issued directives to NASA, the AEC, and the Department of Defense to assess each satellite launch for its risks and benefits. President Jimmy Carter issued an updated directive in 1977.20 The AEC's traditional conservative reliance on large design margins would not work in space. SNAP units had to be light and compact. This meant a probabilistic assessment of potential accidents was important for ascertaining that a mission posed no undue risk. By 1965, an interagency panel drew on experts with nuclear and aerospace expertise to assess risk with rudimentary event trees and estimates for postulated accident scenarios. SNAP's quantitative risk assessments encouraged similar work for land-based civilian and weapons reactors in the 1960s.21 The use of PRA for nuclear-powered satellites continued during the late 1970s. The reentry and burnup of a Soviet satellite over Canada led to a United Nations working group on satellite safety. The United States planned for its new space shuttle program to launch plutonium-powered scientific probes to Jupiter and the sun, and it contributed papers to the working group calling for space-bound nuclear sources to be subject to a probabilistic risk analysis, an environmental impact statement, and a numerical "risk index" evaluation. Through this approach, the United
States said the radiation risk from its units would be as low as reasonably achievable. These satellite initiatives gave the US space program something of a split personality on risk. Qualitative engineering judgment dominated shuttle design and launch decisions, and NASA established a poorly informed failure rate objective of one in one hundred thousand launches. However, whenever a shuttle carried a nuclear-powered cargo, the mission would be evaluated with a quantitative risk assessment.22 In anticipation of the shuttle carrying nuclear payloads by the mid-1980s, NASA contracted with the J. H. Wiggins Company to estimate the probability of a loss of the shuttle. Since early shuttle launches did not carry nuclear payloads, NASA's running debate over the Wiggins study played out in the background and had no role in any launch decision, including Challenger's.23 The Wiggins study began before the first shuttle was launched in 1981, and it lacked operating data on many shuttle components. Wiggins looked outside aerospace, collecting failure-rate data from the nuclear industry, the WASH-1400 study, and relevant military missile technology. The failure data on the shuttle's solid rocket boosters was alarmingly high, so high it dominated the total probability of a LOV. Historically, a catastrophic failure occurred in one in fifty-seven flights. With two boosters, shuttle failures could be twice that.
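A worked version of that claim, assuming the two boosters fail independently (the very assumption a common cause would later break):

```python
# Worked check of the historical numbers cited above, assuming the two
# boosters fail independently of each other.

p_single = 1 / 57                    # historical solid-rocket failure rate
p_mission = 1 - (1 - p_single) ** 2  # probability either booster fails
print(f"per-mission booster risk: {p_mission:.4f} (about 1 in {1 / p_mission:.0f})")
# ~1 in 29, roughly "twice" the single-booster figure, and far from the
# 1-in-1,000 engineering estimate discussed below.
```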
After consultations with NASA, however, Wiggins concluded that NASA's "learning curve" would be a steep one, much like the Apollo program's. It concluded the failure data was too pessimistic. Wiggins set aside the data in favor of an engineering estimate that lowered the probability by more than an order of magnitude, to one in one thousand flights. A NASA committee further claimed that "unique improvements" in the boosters justified adding a failure range between one in a thousand and one in ten thousand flights. From this estimate, Wiggins concluded that the shuttle represented a risk "very small compared to other public risks."24 The revisions to the Wiggins study were greeted with skepticism. An interagency nuclear safety review panel thought the risk estimates were too optimistic, and the Air Force contracted for a review of its findings from Teledyne Corporation. Its authors, Robert Weatherwax and E. William Colglazier, had worked on aerospace and nuclear power risk assessments. Weatherwax had offered an important critique of WASH-1400 and served as an expert witness for intervenor groups on the Indian Point nuclear plant risk assessment discussed previously. Although they had been critical of the Indian Point study by Pickard, Lowe, and Garrick, the duo suggested the Wiggins report should use this latest nuclear PRA methodology. They were concerned that Wiggins made "selective use of data" and "optimistic and unrealistic" assumptions. Wiggins's results "seriously understate the Shuttle risk . . . and should not be relied upon." They concluded that the historical data indicated a mean failure rate of one in seventy for each solid rocket booster.25 Debate continued. The Marshall Space Flight Center countered that the shuttle's solid rockets were a superior design. The use of a second O-ring to prevent hot gas leakage through gaps between rocket segments, they predicted, would dramatically lower failure rates. "It is our expectation, and that of Morton-Thiokol [the booster designer], that [with the solid rocket's] inherent design margins, unique configuration and control measures that have been taken, the zero-failure data base [in tests] will be maintained."26 That Marshall believed the second O-ring would eliminate the possibility of leakage revealed its ignorance of common cause accidents, such as the cold snap that rendered both O-rings inoperable on Challenger. Common cause accidents had been debated in detail in the WASH-1400 controversy.
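One standard remedy from the nuclear PRA world is a beta-factor model, in which some fraction of component failures is attributed to a shared cause; the numbers below are purely illustrative:

```python
# Sketch of why redundancy can mislead: a simple beta-factor model for
# common cause failure, with illustrative numbers only.

p = 1e-2      # probability a single O-ring seal fails (hypothetical)
beta = 0.1    # fraction of failures stemming from a shared cause

independent_both = (p * (1 - beta)) ** 2   # both fail for unrelated reasons
common_cause_both = beta * p               # one shared cause defeats both
p_both = independent_both + common_cause_both

print(f"naive independent estimate: {p**2:.1e}")
print(f"with common cause (beta={beta}): {p_both:.1e}")
# The shared-cause term dominates: a backup seal buys far less safety
# than squaring the single-seal failure probability suggests.
```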
The Air Force next turned to Sandia National Laboratory's light water reactor safety division, which had done PRA work for the NRC. Its review of the Wiggins and Teledyne reports took a middle position: it found Teledyne's criticisms of Wiggins valid but thought Teledyne's risk estimates could be too pessimistic.27 In May 1985, the interagency nuclear safety review panel sided with Weatherwax and Colglazier. As one NASA official noted after the Challenger accident, the panel expressed "grave concerns" about booster safety, concluding that NASA's probabilities "ignore historical data and are based instead on engineering judgment."28 It rejected the Johnson Space Center's claim that the boosters were two orders of magnitude safer than previous versions, since there were few design differences. In fact, the boosters were more complex than earlier designs and might fail at higher rates. The panel demanded revisions to account for human factors and common cause failures. Only months before the Challenger disaster, the panel estimated an LOV probability of between one in a hundred and one in a thousand. These experts, familiar with nuclear risk assessment, pegged shuttle failure probability almost one hundred times higher than NASA did. When NASA abandoned PRA during Apollo, it became blind to realistic risk assessment. The Challenger disaster led to reform efforts within NASA's culture of decision-making and a substantial redesign of the shuttle, particularly the solid rocket boosters. Like the NRC, the space agency was reluctant to augment its previously successful qualitative launch decisions with PRA insights. The presidential panel on the Challenger disaster, however, called for an expert panel under the National Academy of Sciences (NAS) to investigate NASA's risk assessment methodology. The NAS committee included experts in PRA from the nuclear industry, such as John Garrick of the firm Pickard, Lowe, and Garrick. In 1988, the committee concluded the NASA system of risk assessment and management was "illogical," "biased," and "subjective." The report asserted, "Without more objective, quantifiable measures of relative risk, it is not clear how NASA can expect to implement a truly effective risk management program."29 NASA's safety tracking system, the committee found, was too unwieldy to make launch decisions. One journalist reported that the committee "found a system in chaos, one that provides top officials with lots of data but very little perspective."30 The CIL was more like an unmanageable to-do list. After Challenger, the number of items on the list labeled "Criticality 1"—components and systems whose malfunction could destroy a shuttle—jumped from 2,500 to about 4,500. Without risk quantification, the committee pointed out, it was impossible for anyone to sort out the items most important to safety or determine if some of the items belonged on the list at all. In a typical nuclear power PRA, only twenty to fifty accident sequences out of the millions of possibilities pose a significant risk, and it was likely that many of the 4,500 Criticality 1 items were unimportant. NASA's post-Apollo aversion to PRA was impacting safety, the committee noted. "This disinclination still prevails today. As a result, NASA has not had the benefit of more modern and powerful analytical assessment tools [developed by] the communications and nuclear power industries." NASA needed to adopt PRA "at the earliest possible date."31 As with the NRC before it, adoption of PRA at NASA took decades. In 1987, NASA started raiding the nuclear industry for talent, hiring Benjamin Buchbinder, an NRC chief of risk analysis. Buchbinder acknowledged the difficulty in selling PRA to NASA leadership: "A probabilistic way of thinking is not something that most people are attuned to. We don't know what will happen precisely each time. We can only say what is likely to happen a certain percentage of the time. . . . That is no comfort to the poor decision maker."32 In response to the NAS committee recommendations, NASA also created an Office of Safety, Reliability, and Quality Assurance to oversee PRA development.33 NASA contracted with several companies with deep roots in nuclear power risk assessment to develop PRAs for the nuclear-powered Galileo space probe.
lawsuits against the Galileo launch, since they found that while a launch catastrophe had a higher probability than previously expected, the risk posed to the public was still quite low. By the early 1990s, NASA used fault trees and other PRA methodology at the component and system level.34 Science Applications International Corporation (SAIC), a veteran of nuclear industry risk assessment, performed a complete PRA on the space shuttle in 1995. It concluded that the median probability of an LOV was about one in 145, supporting previous pessimistic estimates. It found that the main engines represented the greatest risk to the shuttle (37 percent of the total) and that the redesigned solid rocket boosters posed much less risk. It also concluded that the CIL “only grossly correlated with the actual distribution of risk.” Other systems not on the list were significant risk contributors. PRA’s value to the shuttle, the authors argued, was as “a living model” that could be revised to incorporate new data.35 In the following decade, importation of nuclear PRA knowledge continued. NASA purchased the SAPHIRE PRA code, first developed for the NRC, from Idaho National Laboratory. William Vesely, a technical expert on the Rasmussen Report, was the lead author in developing a fault-tree handbook for aerospace applications. In the late 1990s, cooperation on risk assessment between NASA and the NRC increased, with conferences and an interagency agreement for cooperation on risk assessment.36 The loss of the Columbia highlighted additional weaknesses of NASA’s risk approach. Although engineers were aware that foam insulation could break loose from the external fuel tank and strike the shuttle’s heat tiles, poor communication and a failure to perform a comprehensive risk analysis obscured the total picture of potential damage.37 A NASA risk expert admitted, “There’s all sorts of complexity that we may not be able to capture.”38 With the end of the space shuttle program, NASA deployed PRA to assess the International Space Station program and elements of future missions to Mars. In 2010 and 2014, NASA published a two-volume system safety handbook with striking similarities to the NRC safety framework, including the use of PRA as part of making a risk-informed case for safety, establishing numerical safety goals, and a requirement to be as safe as reasonably practicable. This was hardly a coincidence; the handbook’s authors were veteran PRA experts from the nuclear industry, DOE national laboratories, and the NRC’s safety research program.39 Any future mission to the moon or Mars will rely on nuclear-inspired PRA safety.
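The arithmetic behind these booster and LOV estimates is worth making concrete. The sketch below is a hypothetical illustration, not code or data from NASA, Sandia, or SAIC: it shows how the standard beta-factor treatment of common cause failure erases most of the benefit of a redundant O-ring, and how ranking accident sequences by their share of total risk singles out the few dominant contributors. All numbers are invented, with the main-engine share chosen only to mirror the 37 percent figure reported above.

```python
# Illustrative PRA arithmetic; every number here is hypothetical.

def redundant_pair(p_single: float, beta: float) -> float:
    """Failure probability of a two-train redundant system.

    With full independence, both trains must fail: p**2. A common-cause
    fraction beta (failures that disable both trains at once, like a cold
    snap stiffening both O-rings) contributes beta * p directly.
    """
    independent = ((1 - beta) * p_single) ** 2
    common_cause = beta * p_single
    return independent + common_cause

p = 1 / 70  # historical per-booster failure rate cited by Teledyne
print(f"naive redundancy:      {p**2:.2e}")                     # ~2e-4
print(f"with 10% common cause: {redundant_pair(p, 0.10):.2e}")  # ~1.6e-3

# Ranking accident sequences by contribution to total loss-of-vehicle risk.
sequences = {  # name: assumed frequency per mission (invented values)
    "main engines": 3.7e-3,
    "orbiter systems": 2.0e-3,
    "solid boosters": 1.5e-3,
    "all other": 2.8e-3,
}
total = sum(sequences.values())
for name, freq in sorted(sequences.items(), key=lambda kv: -kv[1]):
    print(f"{name:15s} {freq:.1e}  ({freq / total:.0%} of total)")
```

Run as written, the redundant pair’s failure probability is dominated by the common-cause term, which is why Marshall’s expectation that a second O-ring guaranteed a zero-failure record was illusory.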
RISK ASSESSMENT ABROAD
PRA crossed the technological border between nuclear power and space flight; it also found applications across national and Cold-War borders as a technological-diplomatic tool. After the Chernobyl accident in 1986, the fall of the Iron Curtain, and the dissolution of the Soviet Union, much of the public in Western Europe expected Soviet nuclear power technology to join the communist system on the ash heap of history. Instead, there followed an unprecedented collective effort by the United States, Western Europe’s nuclear nations, and Japan to “Westernize” the safety of Soviet-designed reactors in the former Soviet states and Central and Eastern Europe (CEE). Only those deemed unsafe and unsalvageable were closed. After a decade of sorting the blessed from the unblessed reactors, CEE nations shuttered six older PWRs and two Chernobyl-design reactors.40 After some upgrades, sixteen newer Soviet models continued to operate. The paltry list of closures was something of a surprise. In the early 1990s, safety inspections of CEE reactors by Western experts were so alarming that they evoked comparisons to Chernobyl. At the same time, the post-Soviet desperation of impoverished CEE nations seemed to put the region at the mercy of the new European Union (EU) and the United States. Bereft of Soviet technical expertise, the CEE needed the West’s resources to ensure safety and build new regulatory systems of their own. In such an unequal relationship, closure decisions were expected to be swiftly dictated by the West. Instead, negotiators took a decade to wring limited plant closures from CEE nations as they sought accession to the EU.41 On the whole, the controversy over the closure of Soviet-era plants has been interpreted through the lens of EU accession politics and the negotiating leverage Western Europe enjoyed over CEE nations. There were, to be sure, political factors at play in closure decisions, such as the unique dynamics of the EU accession process, the strength of EU antinuclear sentiment and activism, and consideration of the interests of each accession nation. Yet the safety case for these plants was critically important.42 The death sentences meted out to a few CEE reactors correlated closely with their lack of safety features as compared to Western plants. Reactors that met informal international safety standards kept operating; ones that did not were shut down.43 The fate of these Eastern-bloc reactors turned less on politics than on technical evaluations, including PRA. This success in turning potential political conflict into satisfactory technical outcomes stemmed from a framework of cooperation created
by Western experts in collaboration with their Eastern counterparts. To the satisfaction of policymakers, they were able to answer which Soviet-designed reactors were about as safe as those of Western nations or could be upgraded at a reasonable cost.44 They found answers in traditional Western concepts of safety, such as defense in depth, and tools such as PRA and other computer codes.45 They helped establish a common technical language between nuclear experts working across the former Iron Curtain. In evaluating exotic Soviet reactors, PRA learned to speak Russian, German, French, and Bulgarian as it gained some acceptance by the international nuclear community. CEE nations accepted PRA and other tools because they were empowered by them. The tools offered a fair chance to demonstrate their reactors were safe enough.46 To bring Eastern bloc nations into this system, the nuclear nations of the West gave away PRA and other reactor modeling codes and provided financial, technical, and training assistance. These carrots, along with the sticks of safety-based EU accession criteria, induced broad-based change. The relationship between Western and CEE experts was not an equal one, but it was mutually beneficial. CEE nations had to participate in a program that could shut down their plants, but in return they gained the means and competence to meet safety standards and protect their national interests. They influenced the consensus that CEE reactors were not as safe as Western plants but not as dangerous as initial assessments supposed. PRA provided the breathing room to carry out an orderly evaluation process. CEE nations had to accept some painful closures of their oldest, least safe reactors, but they gained expertise and regulatory independence.47 The collaborative nature of this system had reciprocal influences across the East-West divide. CEE nations were largely compelled to join the Western reactor safety system, but it was a system that they learned from, helped shape, and from which they drew benefits. In remarkably short order, a group of safety experts found technical and regulatory solutions to a political and diplomatic problem, speeding the technological integration of Europe. PRA’s influence extended beyond questions of individual plant safety and European integration. The end of the Cold War created a fluid situation in Europe, with many competing interests vying to influence safety decisions on Soviet reactors. An international community of experts worked to tame these contending forces by creating a stronger international framework for reactor safety based on international conventions, capable regulatory agencies, and safety assessment tools, such as PRA.48
Common technical rules, computer evaluation codes, and PRA, they hoped, would help curb the socio-political forces at play and build an international consensus on safety.49 The groundwork for the integration of East and West in nuclear power was laid during the Cold War through the long history of scientific cooperation between the United States and Western Europe. As John Krige has shown, the United States used its considerable scientific advantages early in the Cold War to establish a collaborative program of technological development that proved beneficial to European allies who accepted US assistance and leadership. In this post-Cold-War story, the scales shifted, with the United States and Europe operating more as equals to integrate CEE nations, assisted by technical tools pioneered in the United States and shared widely as a common language of safety.50 Such cooperation was not an obvious outcome before the Cold War ended. Nuclear nations carefully guarded the autonomy of their strategic and civilian nuclear programs, and nuclear safety standards were national standards. While there were international conventions on nuclear accident liability, state sovereignty over nuclear programs precluded any binding agreement on safety standards. As early as 1974, the International Atomic Energy Agency (IAEA) sought to achieve a global consensus on reactor safety. Prospects had improved, in part, because most nations outside the Soviet bloc abandoned alternatives to US-designed light water reactors. While new reactors were built using shared principles of design, national regulatory safety approaches differed enough to cause the IAEA concern. Over a twenty-year period, the IAEA, the European Community, and the United States worked toward an international set of safety and reliability criteria. Nations agreed to the optional guidance documents because they did not challenge national nuclear autonomy. The IAEA initiatives were modest but, until Chernobyl, considered good enough.51 This regulatory patchwork made it difficult for Western nuclear regulators to articulate a common vision of what “safe enough” looked like.52 The shock of the Chernobyl accident provided a boost to an international nuclear safety regime. “A radiation cloud doesn’t know international boundaries,” IAEA Director General Hans Blix noted. “[Chernobyl] will help foster stronger bonds for international nuclear safety.”53 Nations quickly signed on to conventions establishing rules for mutual emergency assistance pacts and early accident notification, but there was little headway toward meatier agreements on nuclear reactor safety or improved liability compensation for nuclear accidents.54 As Blix
noted, “It is important to retain the principle that responsibility for nuclear safety must remain with national governments. They alone can legislate. They alone exercise the power to enforce. They cannot be relieved of this duty by any international arrangements.” The Netherlands’ permanent representative to the IAEA concluded, “The nuclear community, operators, regulators and constructors seemed to have closed ranks behind the principle of exclusive national control and national safety standards.”55 If formal international agreements were limited, technical East-West dialogue blossomed nonetheless. In early 1988, direct talks between the United States and the Soviets led to a formal memorandum of cooperation between the NRC and Soviet regulators. In high-level meetings with their Soviet counterparts after Chernobyl, NRC staff shared their experiences with the Three Mile Island accident and noted that older Soviet plants lacked standard US safety features. Soviet regulators, they said, needed to adopt the three pillars of the US safety approach: independent capable regulation, safety-minded operations, and safe plant designs. Like the old Atomic Energy Commission, Soviet regulators still had a contradictory mission of both promoting nuclear power and ensuring reactor safety. The NRC proposed exchanges of inspectors, regulators, and operators with the Soviets, and urged them to take advantage of US PRA tools and codes.56 The international community was not uniformly enamored of PRA. Nations relied on the United States’ Three Ds, but some viewed PRA with skepticism. Pierre Tanguy, a senior safety official at France’s state utility, Électricité de France, admitted that the “French philosophy [on safety] started with the NRC regulations,” but doubted the widespread use of PRA was worthwhile.57 The specter of Sweden, the Netherlands, Switzerland, and Italy retrenching their nuclear programs after Chernobyl alarmed the more pronuclear French. In 1988, Tanguy told a gathering of the American Nuclear Society that French politicians would “never again be enthusiastic” about nuclear power and PRAs would not win them back. “The probabilistic argument never worked well with the public; it does not work at all now [after Chernobyl].” He preferred easy-to-grasp deterministic solutions, such as invulnerable containment buildings. “We must always keep in mind that our approach on future nuclear plants should not make it too difficult to explain why we are still confident in the safety of plants presently in operation.”58 If the French were skeptical, the rest of Europe was more open to PRA, especially West Germany, Britain, and Sweden. Chernobyl prodded the IAEA to step up its
FIGURE 18 . After the 1986 Chernobyl accident, thawing relations between the United States and the Soviet Union resulted in a cooperative agreement between the NRC and its Soviet regulatory counterpart. In early 1988, Chairman Alexander Protsenko and NRC Chairman Lando Zech signed the Bilateral Agreement on Reactor Safety. To Zech’s left is NRC and State Department staff member Carol Kessler. The cooperative agreement allowed the United States to begin sharing computer accident modeling programs, including PRA, with the Soviets. Source: US NRC.
work on PRA and to develop quantified safety goals very similar to the NRC’s. By the late 1980s, the French too developed quality PRAs, though their role in regulation was limited.59 Chernobyl and the collapse of the Soviet bloc disrupted entrenched views. In 1992, IAEA experts concluded that a poor reactor design substantially contributed to the accident, rather than the criminally negligent operators first blamed. “There was a visceral part of Chernobyl” that far exceeded the international response to Three Mile Island, recalled Karen Henderson, an NRC expert in international programs. “The Russians hadn’t told people about it; the lying that went on. . . . There was . . . a feeling that once again the Soviets were not being honest with people and that made them crazier.” The NRC’s more transparent response to Three Mile Island and safety reforms contrasted positively with the secrecy of the Soviets. Their failures “resonated all through the years after that.”60 At an international meeting in 1989, Secretary of Energy James Watkins called on the Russians to “get their
cultural act together” and develop a “firmly embedded safety culture of openness, critical self-assessment, and resolute corrective follow-up.”61 This repudiation of Soviet regulators and Chernobyl’s graphite-moderated RBMK reactor—“the only good RBMK is a dead RBMK”—spilled over to assessments of Russia’s light-water PWR design, commonly built in CEE. Soviet-bloc reactors had been designed and regulated from Moscow, and the Soviet pullout from East Europe left reactor operations there in chaos. Former satellite nations had limited indigenous regulatory and operational expertise, no money, many plants Western experts considered dangerous, and a workforce so demoralized that one East German official estimated that 70 percent might be drunk on the job.62 West German experts found terrifying conditions at East Germany’s Greifswald facility, which had several of the oldest Soviet PWRs, the VVER-440. There were two models of the VVER-440. The VVER-440/230 was akin to a base-model, no-frills automobile that sometimes lacked even rudimentary defense-in-depth features, such as emergency core cooling systems and containment buildings. Their instrumentation and control systems readily caught fire. The VVER-440/213 was like the 230, but it came with an extra package of safety features and a “confinement” building that did a more effective job of limiting the escape of radiation during a loss-of-coolant accident. As a result, safety concern in the West centered on the older 230s rather than the 213s.63 At Greifswald, West German inspectors reported its 230s had “serious deficiencies” in safety culture and design safety issues that were “very bad.”64 The reactor pressure vessels—the big steel pots that held reactor fuel—were dangerously close to brittle fracture, a condition in which the vessels might split open and make fuel cooling impossible. A Western expert said, “Those machines are very far off our own regulations and requirements. Not marginally off, but incredibly far off.”65 One West German news magazine called Greifswald an “atomic bomb that could blow up any second.”66 In 1990, the West German government in Bonn announced that the plants were beyond salvaging and closed them forever.67 Bulgaria’s reactors seemed to be in the same shape. Its four 230s at the Kozloduy facility supplied about 40 percent of the country’s electricity. As economic crisis set in, unpaid Russian operators left. An IAEA review team found appalling housekeeping, significant fire hazards, ignorant and powerless inspectors, and poorly trained operators. The IAEA review team concluded that the sorry conditions at Kozloduy were due to “more emphasis on production than on safety; lack of
safety culture; poor work practice” and numerous other alarming deficiencies. Restarting the reactors, they warned, would be “imprudent.” For the diplomatic IAEA, it was a stern warning.68 Kozloduy seemed to prove that former Communist nations had defective reactors, defective operators, and defective regulators. The West’s experts agreed on the need for wholesale reform in CEE. “There were probably 200 engineers or less around the globe that were really trying to struggle and understand in some level of technical detail what those differences and similarities were [between Soviet and Western reactors],” recalled Jack Ramsey, an NRC expert on Soviet reactors. “I’m a child of TMI and a child of Chernobyl. . . . Trying to make sure that things like that don’t happen has been a very strong driving force for me.” “These accidents instilled in me a very strong lesson that any technology has the potential to be both good and bad.” Among his international counterparts, he noted, Chernobyl was a particularly important moment: “The design deficiencies, . . . a complete mistrust [of the Soviets] . . . as the[y] initially lied about Chernobyl, and a vivid memory of how . . . a radioactive plume had traversed Europe were all factors that led to a common belief among ‘western’ experts that urgent safety upgrades were needed at the highest-risk plants.”69 The perception of Soviet incompetence was further fueled by a memoir published by Valery Legasov, a high-ranking Soviet nuclear official. Legasov detailed the failings of the Soviet nuclear safety system and claimed it was incapable of solving its problems. Of PRA, he despaired that “not a single organization in the Soviet Union is posing and examining these questions with any degree of competence.” When Legasov committed suicide in 1988, many speculated that despair over Chernobyl had driven him to it.70 Even before the dissolution of the Communist bloc in 1989, CEE nations turned to Western nations for safety advice. By 1987, the requests were so numerous that the Department of Energy contracted for a study to vet the kinds of technical help that could be offered to still-Communist nations. Recognizing that the safe operation of CEE reactors was a global issue and essential to the continued viability of the nuclear industry, the DOE and NRC supported numerous requests for the technology transfer of computer codes and PRA, as well as training and advice. Other nuclear nations, such as West Germany, received similar requests.71 As Western experts inspected CEE operations, a consensus hardened that former Communist nations could not be trusted with reactors of dubious safety. The World Association of Nuclear Operators recom-
mended that VVER-440/230 reactors should be rapidly closed.72 NRC Chairman Ivan Selin agreed: “I think [the CEE is] basically a dangerous area.”73 A West German expert confessed, “It’s a miracle there has been no accident so far.”74 “I don’t feel good about any of them working,” said another. “Even with a responsible crew, those plants have high safety risks.”75 Germany’s minister of the environment and nuclear safety, Klaus Toepfer, called for an emergency program on Kozloduy aimed at pulling units 1–4 “from the grid for good.”76 At the NRC and other federal agencies, concern with Soviet reactor safety was so great that officials pressed for presidential engagement with Russia. At a 1993 summit between President Bill Clinton and Russian President Boris Yeltsin, Clinton made a substantial commitment of US resources to improve reactor safety. On a subsequent commission chaired by Vice President Albert Gore and Prime Minister Viktor Chernomyrdin, Gore emphasized US discomfort with the continued operation of Russia’s least safe reactors and the critical need for an independent regulatory agency like the NRC. He also requested the Russians enact measures to reduce accident risk at RBMK and VVER-440/230 plants.77 To experts in former Communist nations, the pressure to close their reactors seemed unfair and ill-considered. Yanko Yanev, Bulgaria’s leading nuclear regulator in the early 1990s, believed CEE nations knew how to attain safety, but Western experts “didn’t understand VVER technology and they were applying their standards to a technology which had completely different margins [of safety].”78 Nevertheless, the “clunkers” at Kozloduy, the industry press speculated, might never be restarted.79 Initially, upgrading the oldest plants seemed impracticable. Safety backfits of all CEE reactors were estimated to cost over $20 billion, a figure so large, some called it a nuclear Marshall Plan. Neither the United States nor even the Group of 7 (G-7) industrialized nations would fund such an ambitious program. Rather, the G-7 declared RBMKs and 230s should be shut down, without offering much funding for alternative power sources.80 In early 1993, the NRC met with a group of wise men—many former high-level NRC officials—who had been active in assessing Soviet-designed reactors. The group argued that Western nations had no coherent assistance strategy, and CEE nations were ignoring the West’s demands to close reactors. “There is little evidence that western recommendations to close down the less safe plants are being followed.” In fact, it seemed likely Russia, Lithuania, Ukraine, and CEE nations would operate the RBMKs and VVER 440/230s “for
the indefinite future.” Regulators in the former Communist bloc concluded that no financial aid was forthcoming, and what was spent went into the pockets of Western contractors. Some nations, such as the Czech Republic and Hungary, were becoming quality regulators, the wise men thought, but others had a long way to go to “de-Sovietize” their nuclear programs. Making plants safer did not have to be expensive, they reported. Former Communist nations needed to be taught to be independent, given the technical tools to do the job, and encouraged to “take ownership of their own safety and regulatory programs.”81 Unable to buy or force the closure of Soviet-designed plants, the United States and Europe would have to invest in Westernizing them. Relations got worse before they got better. Some CEE nations began to resist Western intrusion. The Bulgarians viewed the West’s meddling as an imperialistic move to promote the economic interests of its nuclear industry, and they resented Western technological chauvinism. “We won’t be the Trojan horse that will let Western banks impose unacceptable conditions on East European countries,” said Yanev. Shuttering the reactors was “a sovereign decision.”82 Isidro Lopez Arcos, a nuclear expert with the European Commission, noted that Western European nations worried that CEE nations would take the assistance without shutdown commitments. “They were obviously resistant [and we were concerned] that they were using our help to improve the status of their reactors to avoid shutting [their troubled reactors] down.”83 In 1994, Selin noted, “frankly we in the West still lack confidence in the ability of Russia, Ukraine, Bulgaria, Lithuania, and Armenia to manage their nuclear power systems with the same attention to safety that we take for granted.” Somehow, CEE reactors and operations had to be made safer.84 But how safe was “safe enough” for CEE reactors? There was no consensus yardstick among Western nations to measure the safety of designs different from US light water reactors. CEE nations recognized the division. CEE regulators could shop for the best answer among US, French, British, and German experts and corporations. “It was the wild West,” recalled Karen Henderson.85 PRAs offered a tempting measure of safety, but former Communist nations quickly grasped that a PRA with optimistic starting assumptions produced optimistic results. It was “a very dangerous slope,” Jack Ramsey noted. He worried that some regulators simply wanted a PRA to “cook” some positive accident probability numbers in a “public relations” campaign to justify continued plant operation.86
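Ramsey’s worry is easy to see in miniature. In rough terms, a PRA’s headline core-damage frequency (CDF) is a sum over initiating events of each event’s frequency times the conditional probability that the plant fails to cope with it, so a single optimistic input can transform the bottom line. The sketch below uses entirely hypothetical numbers, not values from any actual Kozloduy study, to show how quietly dividing one initiator’s assumed frequency by one hundred, as the 1997 analysis described below effectively did for loss-of-coolant accidents, flatters the result without any change to the plant.

```python
# Hypothetical illustration of how optimistic inputs "cook" a PRA's bottom line.
# CDF ~ sum over initiators of (frequency per reactor-year) x (probability
# the plant fails to mitigate). All values below are invented.

initiators = {
    # name: (initiating-event frequency per year, conditional failure probability)
    "large LOCA":       (3e-4, 0.3),
    "station blackout": (1e-3, 1e-2),
    "transient":        (1.0,  1e-5),
}

def cdf(events):
    return sum(freq * pfail for freq, pfail in events.values())

base = cdf(initiators)

# An "optimistic" study assumes the LOCA is 100x less likely and reports
# a far more reassuring number, with no hardware change at all.
optimistic = dict(initiators)
freq, pfail = optimistic["large LOCA"]
optimistic["large LOCA"] = (freq / 100, pfail)

print(f"baseline CDF:   {base:.1e} per reactor-year")
print(f"optimistic CDF: {cdf(optimistic):.1e} per reactor-year")
```

In this toy model the headline CDF falls roughly five-fold on an assumption buried deep in the inputs, which is exactly the kind of move that peer review of a PRA is meant to catch.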
The positives and negatives of PRA studies were evident at the Kozloduy station. German deterministic analysis had concluded that units 1 and 2 might need billions in refitting and upgrades.87 The Bulgarians, however, had other options. Even before the Soviet Union collapsed in late 1991, the San Francisco-based firm EQE International swooped into Sofia to propose that it fill the post-Soviet void with consulting services, including a PRA of Kozloduy’s reactors based on US and UK standards. It was the first PRA on a Soviet-designed plant. EQE’s surprising conclusion was that a modest $150 million would raise plant safety to minimum Western standards, not good enough for long-term use but safe enough to run a plant through a cold Bulgarian winter or two. EQE said an expensive containment building was not necessary, even though it was an essential feature of deterministically designed Western PWRs. Safety could be raised in the less robust confinement buildings, the report continued, by fitting them with a filtration system for radioactive particulates. EQE also highlighted that even the 230s had some superior safety features, such as enormous volumes of water that could cool the reactor fuel for a longer time than a typical Western PWR during a station blackout. This was not enough to compensate for the 230s’ defense-in-depth deficiencies, but the extra water helped save two plants during station blackouts caused by fire.88 EQE’s PRA offered a creative, cost-effective way to improve safety, but dismissing the need for containment was a cardinal sin that highlighted the weaknesses of a PRA-only approach. There was limited data to develop risk estimates for Soviet reactors, and EQE’s PRA had large uncertainties. EQE might claim containment was unnecessary, but experience indicated otherwise. Three Mile Island’s containment building had been critical to limiting discharges to the environment during the accident. The lack of a containment building at Chernobyl contributed to the magnitude of the disaster. EQE’s PRA also had no peer review. Without it, a PRA could bury optimistic assumptions in the computer program. A 1997 PRA done for Kozloduy 3–4 did just that by implausibly assuming a loss-of-coolant accident was nearly one hundred times less likely than Western PRAs did.89 CEE nations wanted a risk-based approach to safety. The West wanted something more like the NRC’s risk-informed regulation that combined the Three Ds with risk insights. Deterministic analysis and defense in depth provided a more certain margin of safety. CEE gaming of PRAs, as international experts saw it, led to doubts that some former Communist nations were ready to use PRA tools. Still
missing was an adequate safety culture. One German regulator said, “there has not been enough progress to move from deterministic to probabilistic analysis.”90 The IAEA launched several initiatives aimed at standardizing the safety evaluation of Soviet-designed plants, and it developed PRA methodology and peer-review standards. Jack Ramsey recalled that the idea of performing peer review on a report “was a foreign concept in the Eastern bloc countries. There was no peer review of anything.”91 Nevertheless, change came. Carol Kessler, an NRC and State Department staff member, noted the “remarkable success” of NRC PRA education efforts in Russia. It “opened Soviet eyes to what it really meant to do a [PRA]. . . . None of them knew how their plants were built,” but “what we saw over the years was the scientists on the Russian side slowly beginning to see that they weren’t being given proper tools [in the Soviet system] to evaluate the reactors.”92 Luis Lederman, who led the IAEA PRA initiatives, noted, “The PRAs started to [change] from the ones of EQE . . . to the more comprehensive ones . . . in the second half of the 1990s.”93 To this end, much of the support provided by the United States and Western Europe focused on training CEE regulatory staff for plant inspections, developing codes and standards, using computer evaluation tools, and developing regulations.94 This training helped inculcate in former Communist experts a Western safety philosophy, but it also allowed them to deal with the international community as equals. As their PRAs achieved greater quality and showed positive results, they disrupted reactor closure negotiations. These “living PRAs” called into question the need to negotiate rapid plant closures. PRAs mediated difficult, politically charged technical choices through an objective measure of reactor safety.95 The Bulgarian reactors at Kozloduy were an example of how engineering methods and analytical tools empowered CEE nations and even smoothed fraying relations. The positive assessment from EQE emboldened the Bulgarians to move forward with a restart plan for Kozloduy reactors 1–2, much to the consternation of European officials who believed the reactors were still too dangerous.96 Materials testing indicated that Kozloduy 2’s reactor vessel was not nearly as embrittled as once thought. It restarted in late 1992 with the acquiescence of Western experts. As one industry publication surmised, it was “something of a recognition by the West that the Model 230 Soviet-design PWRs, rejected not so long ago as near monsters, may be salvageable for operation to, say, 1995.”97 There were, however, limits to the Western endorsement of the 230s, which lacked basic defense in depth. “I had always been under the
impression, and actually still am today, that there are some very serious safety deficiencies associated with [the 230s],” noted Jack Ramsey.98 NRC staffer Karen Henderson recalled that “everyone [in the West] was searching for a handle” to compel reactor closures.99 The emerging European Union provided the prod. Responding to a request by the European Community, the IAEA held a conference in 1991 on nuclear safety. German nuclear safety chief Klaus Toepfer surprised non-European delegations with his proposal for an “international regime for nuclear safety.” Arguing that “a serious accident in any country represents a set-back for us all,” Toepfer suggested that the convention use existing IAEA-based standards. Citing the CEE situation, particularly at Kozloduy, Toepfer asked that the convention include “internationally binding minimum requirements for safety provisions [that] can be established and implemented.” He announced that the German government was “absolutely decided” that Kozloduy 1 and 2 “must close” soon. He saw international standards as a way of ending the Bulgarian stalemate in the West’s favor.100 A few nations balked. The United States and Japan opposed any binding international standards. NRC Commissioner Forrest Remick doubted the standards would influence the CEE. Other US representatives worried that standards would “stifle creativity” and an international regime would waste resources reviewing well-established programs, rather than weaker CEE programs.101 Nevertheless, the conference members agreed to work toward a convention. The final convention managed to protect US and EU interests through a common Western vision of reactor safety within a collaborative framework. The key sticking point was protecting national sovereignty. Any international mechanism with enforcement power was a nonstarter. The Convention on Nuclear Safety offered a novel solution. It obligated each signatory nation to abide by its tenets, but it was an “incentive” document. There was no verification or enforcement mechanism beyond the peer pressure felt by a nation when its program report was critiqued by experts of other nations at regular meetings. Convention standards were loosely defined and based mostly on IAEA standards and technical documents.102 Nuclear nations could choose to define how they met the convention’s mandates. The peer pressure provision appeared to offer little additional leverage over CEE nations, but that was not the case. First, a consensus had formed among Western nuclear nations providing aid that CEE nations needed to do more to reform their systems. Ivan Selin declared it was “crucial” that former Communist nations become signatories to the
convention and “demonstrate to the rest of the world their commitment to international values regarding nuclear safety.”103 Second, EU accession provided teeth to enforce the convention’s standards. In July 1997, the European Commission issued a road map for EU enlargement. It required that accession nations meet Western safety standards and adopt a Western safety culture. Singling out Lithuania and Bulgaria, the road map noted that “the problems of nuclear safety in some candidate countries cause serious concern to the EU . . . and should be urgently and effectively addressed. It is imperative that solutions, including closure where required, be found to these issues in accordance with Community nuclear [standards] and a ‘nuclear safety culture’ as established in the Western world as soon as possible and even before accession.”104 The EU’s firm position pleasantly surprised US officials who had advocated closing all model 230s. They worried that the EU would be too conciliatory, but Carol Kessler noted, “They went further than we would have believed.”105 While some nations, such as Lithuania, were viewed as lacking in Western safety culture, others were ready for the change. Karen Henderson at the NRC recalled, “the knee-jerk reaction was shut them [CEE reactors] all down.” But this overreaction dissipated with demonstrations of regulatory competence in some CEE nations. “Some of those regulators were very, very tough. They had been waiting in the wings all this time to really be free to do their job—the Hungarians, the Czechs—model regulators.”106 Regardless of regulatory competence, reactor closures hinged on defense-in-depth safety. VVER-440/230 reactors were shut down throughout CEE. Czechoslovakia’s vaunted reputation for technical competence did not save its VVER-440/230s at the Bohunice facility. Slovakia closed them as part of its accession agreements with the EU. The plants in the Czech Republic and Slovakia that remained operating were VVER-440/213s.107 PRAs were sometimes a critical measure of whether the surviving plants met Western standards. The Czech Republic fended off pressure from its neighbors to cancel a new reactor at its Temelin facility. There was an antinuclear movement within the Czech Republic, and an aggressive one in neighboring Austria and Germany, driven by Green Parties and Greenpeace. Opposition to Temelin became so heated that activists blockaded all entry points into the Czech Republic. The Austrian government hinted it might veto Czech EU admission over Temelin.108 Temelin’s opponents could not overcome its PRA-supported safety case. Temelin’s VVER-1000 had upgraded safety features, such as emergency core cooling, redundant systems, and a containment building.
Its Soviet-era instrumentation and control system was inferior to those of Western plants, but a PRA indicated a new system would achieve, as one Czech utility official put it, the “Westernization” of the facility. The Czechs contracted with Westinghouse Corporation, and the final “Eastinghouse” design met all Western safety standards.109 Gunter Verheugen, the EU’s enlargement commissioner, predicted that “when all planned changes are completed, Temelin will probably be the safest nuclear power plant in Europe.”110 Czech officials deflected Austrian pressure to close the plant, responding that “we would shut down Temelin only if it were objectively proved that it does not comply with fundamental safety criteria.”111 The startup of Temelin and the admission of the Czech Republic to the EU went off with little delay.112 As the Bulgarians discovered, the persuasive power of a PRA had limits. Officials in Sofia lobbied the EU to accept its Kozloduy 3 and 4 reactors as sufficiently Westernized to operate. EU officials had allowed Bulgaria to proceed with accession if it agreed to close Kozloduy 1–2 by 2002 and units 3–4 at the end of 2006.113 The Bulgarians submitted PRA results claiming the plants met recommended international probability guidelines for fuel-damaging accidents.114 PRA or not, the Kozloduy reactors needed a containment building. Defense in depth still mattered.115 The EU insisted that all 230 models close. Units 3 and 4 were shut down in 2006. Yanko Yanev argued that the decision was political: “This has always been on the border of politics and technics. Technically, it is OK, politically it not OK. And I always say, in nuclear you have two conditions, you have to be technically sound and politically correct. Only technically sound doesn’t work.”116 Western experts doubted Kozloduy’s technical case and thought the upgrades were of dubious quality. Eager to join the EU, Bulgaria stuck to the December 31, 2006, closure deadline. Shutting down all model 230s proved a politically and technically clean solution for the West. The solution was not as clean for Bulgaria. Yanev pointed out that it lost an inexpensive, green source of energy and increased its reliance on more expensive, polluting coal power.117 Chernobyl and the end of the Cold War created an unprecedented crisis for Soviet-designed reactors. CEE nations initially lacked the capacity, expertise, and regulatory institutions to operate their reactors safely. It was inevitable that Western assistance would dramatically change their reactor operations. While assisting countries had an upper hand in the relationship, CEE nations were empowered by the process; their reactors were safer, their regulators were smarter, and their
national nuclear autonomy was stronger. Yanev recalled that the great benefit of the period was that “nuclear people started to talk to each other” across the former Cold-War border.118 This improved dialogue required greater efforts at consensus. Western experts could not impose their nuclear safety system on CEE nations unchallenged, in part because their students learned their lessons. CEE regulators gained the expertise, clout, and independence to police nuclear safety in their countries and challenge existing assumptions about Soviet reactor safety. Dana Drabova, the Czech Republic’s chief regulator, enthused about the “significant benefit” of the EU’s reactor-safety harmonization efforts. “We will have common grounds for understanding what is expected of us.”119 Notable, too, was how solving this engineering and regulatory problem changed the West. The engineering program to Westernize or close exotic Soviet reactors led the international community to overlay a lightly applied international framework of conventions, guidelines, and values on a sovereign system of reactor safety. “The basic safety principles are common now,” Isidro Lopez Arcos observed. “There is common ground.”120 EU integration, however, changed Europe’s safety approach. The new EU members appropriated Western technical knowledge and became fans of PRA studies. PRAs, they saw, overcame the bias against Soviet-designed plants by providing an engineering measure of their worth. CEE’s PRA-friendly regulators complicated the harmonization of reactor safety among EU nations. France sought a unique European safety approach on PRAs, one that used but looked skeptically at this US-sponsored product. As France called for a “very prudent” use of PRAs and cautioned against “giving excessive confidence to numerical results,” CEE nations, the industry press reported, had jumped “enthusiastically onto the NRC’s ‘risk-informed’ bandwagon.” This enthusiasm, however, has been limited in the long run by the expense of performing and maintaining a plant PRA among cash-strapped nations.121 The effect of European integration and an emerging international framework has not been to achieve safety through a unique European approach, as France wanted; it has been done largely the American way. Regulatory bodies around the world have moved to separate technology from politics with independent regulatory structures like the NRC. Article 8 of the Convention on Nuclear Safety states that regulatory independence is essential to reactor safety, a tenet that led France to reconfigure its nuclear regulatory regime to align more with the NRC’s independent structure. This aping of the US regulatory approach
includes the adoption by many nations of the US safety philosophy: defense in depth, quantitative safety goals, risk-informed regulation, and widespread requirements for PRA use by licensees and regulators.122 The engineers and diplomats who created the international nuclear safety system had to juxtapose international conventions and technical safety systems against existing political systems in ways that accommodated multiple interests and concerns while not losing sight of Western safety goals and standards. PRA became a lingua franca of these negotiations. As they evaluated exotic Soviet technology in a post-Cold War world, PRA experts helped smooth international cooperation on nuclear safety by creating a common language. This reliance on PRA has since increased in international activities, especially after the 2011 Fukushima accident.123 PRA, then, was part of a package of Western techno-diplomatic tools that helped resolve a difficult post-Cold War conflict with benefits to former Communist nations that would have been hard to imagine when the Soviet Union collapsed in 1991. On March 11, 2012, the first anniversary of the Fukushima nuclear accident, German and Austrian Green activists gathered at the Temelin nuclear power facility in the Czech Republic to protest the site’s planned addition of two new nuclear reactors. The protestors no doubt appreciated the irony that a leading competitor for the new contracts was a Russian design.124 The balancing of safety, national interest, and national sovereignty was negotiated in less publicly contentious ways among engineers and diplomats through technical tools, international conventions, technical committees, and international agencies. That continues to be the goal. Jack Ramsey noted that in his peregrinations around the globe promoting NRC safety methods and values, he was often asked to toast his hosts. He told them that if they did their job well, they would be forgotten by society, as history typically remembers nuclear safety failures, not successes. He then raised his glass for the toast: “In twenty years, I hope no one will remember us.”125
QUANTIFYING RISK IN THE FEDERAL GOVERNMENT
While risk assessment successfully mediated foreign relations, it did not always smooth relations at home. Quantified risk assessment could sharpen disagreement, even as it increased transparency and understanding. This proved to be the case in a difficult episode for the NRC and the US Environmental Protection Agency (EPA). Its early work in quantifying
risk made the NRC a leader in the federal government in promoting risk assessment in regulation. Other agencies, including the EPA, turned to quantitative risk assessment to analyze diverse risks posed by drugs and toxic chemical substances in the workplace and environment. Government agencies found quantitative risk assessment to be an appealing technical solution to bureaucratic turf wars and political disagreements, one that inserted objectivity into risk-based policy decisions and, they hoped, restored public faith in government. The burst of interest in risk assessment led to a complex web of interagency efforts to harmonize risk approaches. The 1983 government publication Risk Assessment in the Federal Government: Managing the Process signaled the broad acceptance of risk assessment in federal agencies.126 Yet, for the EPA and NRC, risk quantification increased disagreement during their collaboration on the development of residual radiation criteria at the decommissioned sites of former NRC licensees. There were practical limits to what a trust in numbers could offer on complex issues. Risk runs a gamut from improbable, catastrophic events down to routine, low-level emissions of hazardous substances. Almost uniquely among federal agencies, the AEC and NRC regulated at both ends of the spectrum. Early radiation standards developed for low-level exposures to workers and the public often served as templates for other agencies that regulated cancer-causing chemical substances, such as the Occupational Safety and Health Administration (OSHA), Food and Drug Administration (FDA), and the EPA.127 Even as the number of federal agencies that managed risk multiplied, they shared a common scientific and regulatory heritage in radiation’s Linear No-Threshold model (LNT). LNT assumes that a linear relationship exists between dose level and the risk of cancer, from a very large exposure right down to an exposure to a single gamma ray. Some laws have banned even tiny hazardous exposures, but for the most part, the linear model led to regulations that balanced benefits and risk by permitting use of a substance at very low doses. In the 1970s, researchers used numerous studies of tumor rates in laboratory animals administered high doses of chemicals to quantify risk at low doses by linear extrapolation. By the 1980s, these scattered efforts to understand low-level risk coalesced into interdisciplinary professional organizations such as the Society for Risk Analysis.128 Courts, too, favored such quantified cost-benefit regulation. For example, in 1980, the Supreme Court ordered OSHA to develop a risk-based standard for airborne benzene in the workplace, and even suggested a probability range of acceptable risk. In his decision, Justice
John Paul Stevens ventured that the public would find “plainly unacceptable” a lifetime risk of cancer greater than one in a thousand for any toxin while “plainly acceptable” odds approached one in a billion.129 The difference between Stevens’s unacceptable and acceptable standards—one in a thousand to one in a billion—was huge, a factor of a million. But these rough boundaries became the gray area within which most substance regulations operated. Risk, radionuclides, and regulation brought the EPA and NRC into a close relationship that sometimes turned fractious. The two agencies had much in common. They shared similar histories as a new breed of technically capable regulatory agencies. Each took an early interest in quantitative risk assessment, producing its first such assessment in 1975. For both agencies, risk assessment offered a solution to the dueling expectations that they regulate safely and efficiently.130 The potential for conflict emerged from an overlap in regulatory responsibilities. The EPA set standards for radionuclide exposure by the general public. The NRC enforced those regulations for its licensees and the public. Essentially, once radiation drifted beyond a licensee’s fence line, it was more the EPA’s concern. The division of responsibility was clear on paper but not in practice. Differences concerning jurisdiction emerged, but conflicts over this coregulation were minimized by the EPA’s deference to the NRC’s oversight of its licensees if its regulation and enforcement practices were “protective,” that is, equivalent to the EPA’s. The NRC worked with the EPA on regulations for air and water emissions and low-level and high-level waste facilities. The two agencies’ growing reliance on quantitative risk assessment in the 1990s helped drive them into bitter disagreement. While they worked together on high-profile issues regarding regulatory oversight of radionuclides in air, water, and at low-level and high-level waste facilities, it was an issue of comparatively low public interest—the decommissioning of NRC-licensed facilities—that pushed them into what one industry publication called a “war” so disagreeable that they abandoned negotiation and beseeched their congressional allies to legislate a solution.131 The key question was: when releasing for unrestricted public use the property of a decommissioned NRC-licensed facility, what level of residual radiation was acceptable? Over several years, each agency deployed its risk assessment expertise and management tools to find an answer. As the EPA and NRC came to slightly different conclusions, the process exposed the historical differences in their methods, safety standards, and
legal authority. The two agencies had produced standards and practices with similar levels of safety, yet the small gaps in their criteria took on an importance in public perception that transcended narrow safety questions. Both agencies set out to answer the common question “how safe is safe enough?” but found compromise almost impossible on a practical question: “when is a risk estimate close enough to safe enough?” It was only when the two agencies accepted a less quantitative framework of risk management that they found a workable agreement. The dispute between the agencies was ironic since their approaches to risk assessment sprang from the same source: national and international research on the health effects of radiation. In addition to the fault-tree PRA approach of WASH-1400, the AEC’s weapons development program also contributed quantified risk assessment methods for estimating the probability of contracting cancer and other diseases from what were considered low-level, non-fatal exposures to radionuclide emissions; the ensuing debate eventually influenced the regulation of non-radioactive hazardous substances, too. Before World War II, radiation protection standards established in the United States set a “tolerance dose.” The standard was based on a maximum exposure that produced no observable symptom of injury, typically skin inflammation. The standard did not consider the potential for latent effects, such as cancer or genetic damage. While experts understood that such standards were preliminary, a tolerance dose could be interpreted to mean there was a “safe” dose below which there was little or no risk. The Manhattan Project, the bombings of Hiroshima and Nagasaki, and postwar above-ground weapons testing forced a reconsideration of radiation regulation. Research by H. J. Muller and other geneticists indicated that even small doses of radiation caused imperceptible gene mutations that scientists worried could be inherited by offspring or cause cancer. It was not feasible to design studies to detect an imperceptible rise in general cancer rates from low doses. To estimate such low-level risk, the health effects of high doses on atomic bomb survivors were extrapolated to lower exposure levels. Initially, exposure guidelines were set by an informal committee of scientists through the National Council on Radiation Protection and Measurements (NCRP). Later, in 1964, the NCRP received a formal charter from Congress for its work on radiation safety research and standards recommendations. The NCRP cut the tolerance dose in half and renamed it the “maximum permissible dose” to convey the idea that “the probability of the occurrence of such injuries [from the dose] must be so low that the risk would be readily accept-
able to the average individual.” This step across the regulatory divide from no-risk, tolerance-dose regulation to acceptable-risk regulation had far-reaching consequences. In accepting that a substance had health consequences down to small amounts, regulation became a balancing act between benefits and risk. Since even small doses “must be regarded as harmful,” NCRP Chairman Lauriston Taylor said, “the objective should be to keep man’s exposure as low as possible and yet . . . not discontinue the use of radiation altogether.” The evaluation of hazards switched from ascertaining the bright line between safety and danger to a less certain exercise of balancing risk vs. reward, costs vs. benefits.132 Regulators turned to a linear model of radiation damage as a conservative safety standard, one that assumed a simple straight-line correlation between dose and effect. The true relationship between regulation and reality was uncertain. Scientists disagreed as to the validity of the LNT, since there were other plausible models. For example, some scientists posited that radiation damage at low doses was sublinear, meaning that it dropped off dramatically. Experts who favored a sublinear model reasoned that humans had evolved over thousands of years bathed in natural radiation and had developed genetic repair mechanisms against low-level radiation. Only at high doses did radiation overwhelm human defenses. It was nearly impossible to verify the validity of the linear and sublinear models due to the virtually undetectable number of cancers that would occur in either case.133 For regulators, there seemed little alternative to using the LNT model in regulation. It was more conservative than the sublinear model, easy to apply, and allowed for risk comparisons among hazards. One member of the International Commission on Radiological Protection (ICRP)—an international body of radiation scientists similar in function to the US NCRP—admitted the “arbitrary judgments” that went into the standards the ICRP set, but he argued that the conservative risk estimates behind those judgments would be acceptable to the individuals exposed. He favorably compared the risks posed by maximum permissible doses with common risks accepted by society. More data would make possible quantitative comparisons that “relate acceptable radiation risks to other risks of society.”134 The ease of implementing the LNT model in regulation was attractive to a broad array of life-science researchers and government officials who dealt with chemical toxins and radioactive materials. By the 1970s, radiation’s LNT model was applied to non-nuclear toxin regulation for food additives, pesticides, and other chemicals by the FDA, EPA, and OSHA.
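In modern notation, the competing hypotheses are easy to state. Below is a schematic rendering, where the risk coefficient alpha and threshold dose D_0 are fitted constants for illustration, not regulatory values:

```latex
% Schematic dose-response models (requires amsmath for the cases environment).
\[
  R_{\text{LNT}}(D) = \alpha D
  \qquad\text{versus}\qquad
  R_{\text{threshold}}(D) =
    \begin{cases}
      0, & D \le D_0,\\
      \alpha\,(D - D_0), & D > D_0.
    \end{cases}
\]
% Under LNT, halving the dose is assumed to halve the excess cancer risk
% all the way down to zero dose; under a threshold (sublinear) model,
% the excess risk vanishes below D_0.
```

The regulatory convenience is visible in the first expression: a single slope, measured where high-dose data existed, fixes the assumed risk at every dose, which is what made linear extrapolation so portable across agencies and hazards.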
FIGURE 19. Models for the health risks from exposure to low levels of ionizing radiation (risk of excess cancers vs. dose above background; curves shown: hypersensitivity, LNT, threshold, and hormesis, with epidemiological data above roughly 100 mSv, the approximate lowest dose where excess cancer has been observed). There are several competing hypotheses about biological damage at low doses of radiation. While all models converge on damage at higher doses where there is ample data, extrapolation to lower doses is uncertain. The linear no-threshold line dominates radiation and chemical regulations, but some studies suggest damage at low doses could drop to zero (threshold), provide a health benefit (hormesis), or even compound damage (hypersensitivity). Source: International Journal of Radiology and Medical Imaging, https://www.graphyonline.com/archives/archivedownload.php?pid=IJRMI-123.
Their common use of the LNT in regulations introduced a common assumption about how a hazard affected health. Radiation and chemical regulations had to govern exposures so low that the health effects were estimated by extrapolation from victims of high single doses. In chemical regulation, low-dose cancer risk was typically extrapolated from high-dosage tests on animals.135 Regulation by LNT shifted public perceptions and regulation away from prompt observable symptoms to conservative assumptions of long-term cancers.

For the AEC, regulating by the LNT model produced substantial controversy and negative publicity, which, ironically, spurred more effort to quantify radiation’s risks and benefits. Several dissenting scientists argued that the allowed levels of radioactive discharges by AEC licensees could lead to tens of thousands and even hundreds of thousands of cancers and genetic mutations.136 While other scientists criticized these
estimates, the uncertainty of the competing estimates and the ease with which licensees already met emission standards led the AEC to revise downward the permissible dose. It adopted an approach unique among federal agencies at the time called “As Low as is Reasonably Achievable” (ALARA). ALARA meant licensees should keep radiation releases as low as reasonably achievable after “taking into account the state of technology, and the economics of improvements in relation to benefits to the public health and safety, and other societal and socioeconomic considerations, and in relation to the utilization of atomic energy in the public interest.” ALARA forced licensees to consider, for example, the benefit of operating radioactive medical equipment against its cost to health calculated by probabilities and linear damage estimates.137 In practice, ALARA resulted in routine exposures well below permissible dose standards, but its lack of a clear numerical standard made it a complicated, hard sell with other federal agencies and the public.

With the NRC and EPA speaking the same probabilistic language and working from the same linear model of risk, harmonization of their regulations should have come quickly. But each agency operated from different governing legislation, confronted different hazards, and relied on different risk assessment methods and management approaches. In some ways, the NRC had the easier regulatory task. In the field of atomic energy, unitary control dominated. Early on, radiation protection operated mostly from one law, the Atomic Energy Act of 1954, drafted by one congressional committee, the Joint Committee on Atomic Energy, which gave regulatory oversight to one agency, the AEC/NRC. The result was one measure of radiation’s health consequences—the REM (Roentgen Equivalent Man), measured internationally in sieverts (one sievert equals 100 REM). The REM condensed all kinds of radioactive emissions—gamma rays along with alpha, beta, and neutron particles—from all exposure pathways (air, water, and external exposures) into a single measure of biological damage. Most of the low doses workers and the public received were expressed in millirems (each one-thousandth of a REM).138

There was no similar unity to the regulation of chemical substances. At least seven different agencies were responsible for screening and regulating cancer-causing substances under more than two dozen statutes. The EPA’s enabling legislation came from multiple congressional committees that did not sort out overlapping responsibilities and conflicting language. The EPA mandate was to regulate specific pathways through clean air and water acts or Superfund legislation for site contamination. A further complication was that, unlike ionizing radiation, there was no
consensus on the mechanism that triggered cancer among many different chemical compounds and no universal measure of damage like the REM. Instead, EPA regulators set concentration limits for each hazardous substance along each pathway. In setting radiation standards, the EPA used a hybrid approach, developing a standard for total population dose and an individual one that limited specific pathways.139

While the EPA had taken an early interest in quantitative risk assessment, pressure grew in the 1980s for all agencies to make extensive use of risk assessment and numerical criteria. In 1980, the Supreme Court validated the use of risk assessment methods for hazardous substances to determine if they represented a “significant” risk. The decision implied an upper bound at which an agency should act to reduce a substance’s risk: a one-in-one-thousand lifetime cancer risk. It also suggested a lower limit where the risk could be neglected, one in a million. Additional court rulings and the passage of the Clean Air Act of 1990 further supported the use of risk assessment.140

The EPA’s full embrace of quantitative risk assessment came during the Reagan administration, in the wake of accusations that it had a pro-industry bias. Land contamination episodes, such as in the Love Canal neighborhood in Niagara Falls, New York, led to the passage of “Superfund” legislation to clean up waste sites. In the early 1980s, Congress investigated suspected Superfund mismanagement, which forced the resignation of EPA administrator Anne Gorsuch Burford in 1983. Reagan replaced her with the respected former head of the EPA, William Ruckelshaus. Restoring faith in the agency was Ruckelshaus’s top priority, and he did it by leaning on professionals held in high regard: scientists. Ruckelshaus was a devotee of Harvard economist Howard Raiffa’s advocacy of applying risk techniques to environmental problems. Raiffa’s decision theory and risk quantification methods had been a key model for Rasmussen and Levine’s construction of WASH-1400. Ruckelshaus determined that the EPA would pursue risk management through quantified risk and science-based decisions. Acknowledging that many communities with toxic waste nearby were “gripped by something approaching panic,” Ruckelshaus called on the public to reject “emotionalism” in favor of “the idea that disciplined minds can grapple with ignorance and sometimes win: the idea of science.” He set out a clear distinction between risk assessment and risk management. “Scientists assess a risk to find out what the problems are. The process of deciding what to do about problems is risk management. . . . Risk management . . . [weighs a chemical’s] benefits, the costs
of the various methods available for its control, and the statutory framework for decision.” By contrast, risk assessment “must be based only on scientific evidence and scientific consensus.”141 The initiative continued under Ruckelshaus’s successors, including William Reilly. The EPA set its priorities according to what science said were the biggest risks, not, as one EPA official put it, “the last phone call from Capitol Hill or the last public opinion poll.”142

Policy by science was a tall order. In the 1980s, the EPA, like the NRC, had to contend with public opinion that was increasingly unwilling to trust experts. Public perception of the seriousness of many hazards was at odds with expert opinion. For example, nuclear power plants and radiation generally figured high as a hazard in public polls but did not appear at all among the top concerns of EPA experts. Hazardous waste sites were also a conspicuous public concern that did not crack the experts’ top ten list. Like the NRC, the EPA hoped to reorder its priorities to focus on the greatest environmental hazards, but it was no more successful in convincing the public to treat risk probabilistically than was the AEC/NRC. Trust in authority had declined across nearly every profession, including science. Basing risk management decisions on scientific reputation and the still fragile tools of risk assessment was a gamble. Ruckelshaus admitted that the “enormous scientific uncertainties” in formal risk assessment create a great “dissonance between science and the creation of public policy,” but he saw no other option.143

Ruckelshaus relied on the LNT model. Even if there were doubts about extrapolating the observed damage from large doses down to small ones, the LNT was acceptably conservative to the public. Ruckelshaus wrote, “We must assume that life now takes place in a minefield of risks from hundreds, perhaps thousands, of substances. We can no longer tell the public that they have an adequate margin of safety.” He pledged millions to improve risk assessment capabilities to provide a technical basis for a “statutory formula” for risk-benefit analysis.144

Over the next decade, the EPA worked vigorously to articulate its approach to risk assessment and management. In laying out acceptable risks for Superfund sites and air pollution standards, the EPA established an unofficial criterion about an individual’s lifetime likelihood of cancer from a toxic substance. It would not exceed the ever-reliable one-in-a-million standard. The EPA’s data on chemical toxicity was often worse than what the NRC struggled with. One EPA scientist who had worked on radiation and chemical hazards noted, “When you turn to chemicals, the information is even more incomplete [than radiation
hazards].”145 Allowing for error and uncertainty, the EPA eventually established an acceptable range for cancer risk of one in ten thousand to one in a million. One in a million served as an aspirational goal of de minimis (trivial) risk, a “point of departure” that should cover about 90 percent of the population near a Superfund site. The EPA’s range was also shaped by court rulings that compelled the agency to determine minimum levels of “acceptable risk” based on a judgment of “what risks are acceptable in the world in which we live.” Only then could it weigh costs and benefits of exposure levels. This determination was analogous to rulings that the NRC’s regulations meet a minimum level of “adequate protection” before costs of safety improvements could be considered, and the NRC also used one-in-a-million as a point of departure for its approach to many of the risks it regulated.146 The EPA’s range aligned with levels at other federal agencies. As one study found, agencies almost always acted on a hazardous substance when its estimated lifetime risk exceeded about one in two hundred fifty. Risks lower than that usually involved a risk-benefit calculation. The risks at NRC licensed facilities compared favorably to sites with non-radioactive hazards.147 As the EPA promulgated numerous regulations for airborne, waterborne, and ground contamination, its criteria had the potential to run afoul of the NRC’s more unitary regulation of its licensees. The NRC worried that the EPA’s more fractured regulatory approach could “destabilize the federal regulatory framework” for radiation hazards. Calling for a high-level interagency task force to work out differences, the NRC proposed that the EPA defer to NRC regulation where EPA involvement provided “no additional benefit.” The task force focused on “the development of a common understanding of the quantification of risk” and of what constituted “an adequate level of protection from those risks.”148 Congress sought to reduce regulatory conflict with revisions to the Clean Air Act that allowed EPA to defer to NRC on airborne emissions of radionuclides. The revisions did not address other areas, such as low-level waste and groundwater.149 The EPA and NRC worked to achieve regulatory harmony. In a 1992 memorandum of understanding, they agreed to “actively explore ways to harmonize risk goals and . . . cooperate in developing a mutually agreeable approach to risk assessment methodologies for radionuclides.” The EPA pledged it would not impose its own regulations on NRC licensees if existing regulations were “protective” by providing “an ample margin of safety” equivalent to EPA regulations. That same
year, the EPA proposed to rescind its regulation of emission standards for radionuclides in deference to the NRC.150 The two agencies had to traverse complex terrain to reach harmony. The NRC’s emission standards accorded with international and national radiation standard-setting bodies. These criteria were above the EPA’s standard of 10 millirem (0.1 millisievert in international units) per year for public exposure. While the EPA’s criteria seemed more stringent than the NRC’s, the latter agency argued that its ALARA guidance to licensees produced results comparable to EPA standards. An EPA survey of NRC licensees found that ALARA was successful in reducing emissions below the EPA criteria. While the NRC incorporated ALARA into some of its regulations, its use was a vexing public relations problem. It was a goal to reduce exposures, not a hard and fast numerical requirement.151 ALARA’s complexity was only part of the problem in reconciling radiation regulation. By the mid-1990s, radiation safety regulation had become more convoluted than in previous decades and was governed by a quilt work of contradictory rules comprehensible only to experts. A General Accounting Office (GAO) survey found there were more than two dozen draft or finalized federal radiation safety standards or guidelines with numerical criteria. The multiplicity of regulations, the GAO found, was a symptom of deeper problems. Agencies did not agree on what constituted acceptable risk, risk-assessment methodology, or protective strategies. Did these regulatory differences matter to public safety? There was little confidence among experts that they could detect the small number of cancers that might result from marginal differences in exposures. In a population of one hundred thousand, the GAO noted, twenty thousand would contract cancer from all causes. Assuming the LNT model was correct, only one thousand cancers were due to natural background radiation. Even if the entire US population received the extra 100 millirem (1 millisievert) per year allowed by the ICRP and NRC public dose limit, 350 cancers might be added to the twenty thousand, essentially statistical noise. The EPA’s more restrictive approach reduced it to about eighty-eight cancers, but that was below detectable levels, too. The GAO questioned the wisdom of exhaustive debate over tiny differences: “Federal radiation limits (however precise they may appear to be numerically) reflect a series of theories and assumptions about radiation effects. They are inherently imprecise.”152 In response to the GAO report, Senator John Glenn of Ohio criticized the federal government’s “piecemeal approach” to radiation protection and called on the NRC
and EPA to harmonize their radiation standards. “I don’t think the public’s interest is being served” by the debate, he said.153

Glenn’s exhortations did not work. By 1995, the harmonization task force’s progress had ground to a halt. The two agencies could only agree on a “white paper” laying out their methodological and numerical disagreements. The harmonization initiative began, the paper reported, because differences in risk assessment and management “appeared to be the root cause” in their regulatory disagreements. The paper admitted that despite “fundamental differences in approaches” to assessing and managing risk, the EPA and NRC achieved similar levels of protection. The NRC criteria seemed more permissive and, to the public, more dangerous, but that was less true on close examination. For example, when decommissioning a site, the NRC used a longer seventy-year exposure period, while the EPA used just thirty years at its Superfund sites. NRC modeling of risk was very conservative. If the NRC just used the EPA’s “less stringent standards,” the interagency task force wrote, “[it would likely show] a level of protection equivalent to that achieved by EPA.” Moreover, the NRC’s use of ALARA meant licensees almost never came near the annual limit.154

The EPA and NRC’s unbridgeable differences boiled down to a 10 millirem (0.10 millisievert) gap in their criteria for releasing a decommissioned site for unrestricted use—what was called “green field” status. The EPA called for a 15 millirem (0.15 millisievert) annual exposure standard while the NRC favored 25 millirem (0.25 millisievert). The EPA also favored a separate standard for the groundwater pathway of 4 millirem (0.04 millisievert) per year. In 1994, the NRC initially proposed criteria that aligned with the EPA’s. After receiving public comment, NRC staff shifted to the 25-millirem criterion recommended by the Health Physics Society (HPS). The HPS pointed out that 15 millirem was substantially lower than recommendations from it and from national and international radiation councils. It argued the criterion was so low that it fell within natural variations of background radiation within a region. It would be “of little or no benefit” to health and safety, and compliance would be extremely difficult to demonstrate. While the gap between 15 and 25 was small, the costs associated with compliance could be huge. Some sites might spend exorbitant sums for remediation below 25 millirem. The HPS wrote, “We are concerned that the NRC staff is . . . developing new standards limits for radiation protection without appropriate scientific justification or basis.”155
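The statistical invisibility the GAO described can be checked with back-of-the-envelope LNT arithmetic. The sketch below assumes a lifetime fatal-cancer risk coefficient of roughly 5 × 10⁻⁴ per rem (5 percent per sievert) and a seventy-year exposure period; both are assumptions consistent with risk estimates of the era, not figures quoted from the GAO report:

```python
# Reproducing the GAO's order-of-magnitude comparison under the LNT model.
# Assumed inputs (not from the GAO report): ~5e-4 lifetime fatal cancers
# per rem (5% per sievert) and a 70-year exposure period.

RISK_PER_REM = 5e-4      # assumed lifetime cancer risk per rem under LNT
EXPOSURE_YEARS = 70      # assumed years of exposure
POPULATION = 100_000

def excess_cancers(annual_dose_mrem: float) -> float:
    """Excess lifetime cancers per 100,000 people under a linear model."""
    lifetime_dose_rem = (annual_dose_mrem / 1000.0) * EXPOSURE_YEARS
    return lifetime_dose_rem * RISK_PER_REM * POPULATION

print(excess_cancers(300))  # ~300 mrem/yr natural background -> 1,050 (GAO: ~1,000)
print(excess_cancers(100))  # 100 mrem/yr public dose limit   -> 350 (GAO: 350)
print(excess_cancers(25))   # ~25 mrem/yr reproduces the GAO's "more restrictive" ~88
```

Set against the twenty thousand cancers expected from all causes in the same population of one hundred thousand, each of these increments disappears into the statistical noise, which was precisely the GAO's point, and the backdrop for the agencies' fight over far smaller differences.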
The HPS objections went to the heart of the “small-dose problem” of reaching chosen cleanup thresholds. As Justice Stephen Breyer observed, cleaning up the first 90 percent of a contaminated site might be easy. Once the low-hanging fruit was picked, tens of millions of dollars might be needed to save the next statistical life. This “last 10 percent” might exhaust resources while making a “virtually meaningless” reduction in risk. For example, an EPA ban on asbestos pipe was predicted to save seven to eight lives at the cost of $200–300 million over thirteen years. The standard saved fewer lives than were lost to accidentally swallowed toothpicks. To the public, it still mattered. Asbestos, like radiation, was a dreaded hazard and the agency faced intense pressure to regulate it.156

NRC staff reported significant concerns that EPA standards lacked a scientific and cost-benefit justification. With Republicans gaining control of Congress in 1995, the NRC and EPA were under fire for excessive, inefficient regulation. In late 1996, NRC Chairman Shirley Jackson reported at a conference that the NRC was considering a criterion above 15 millirem and that a separate pathway standard favored by EPA for groundwater was not justified. The announcement shocked the EPA. Administrator Carol Browner, one industry publication reported, “fired a shot across the NRC’s bow” in a letter to Jackson, calling the NRC’s “significant changes” to its decommissioning rule “disturbing.” Drinking water was “a valued national resource,” and if the NRC followed through on its plans, Browner warned, the EPA would likely not accept them as protective under Superfund legislation. Such an action by EPA meant licensees would face double jeopardy in meeting separate NRC and EPA regulations.157

The NRC did not blink. In July 1997, it issued its rule on decommissioning, including 25-millirem “all-pathways” criteria.158 NRC staff argued that there was no good reason to use a separate criterion for groundwater, and its criterion was just 25 percent of the public dose limit already established in its regulations and international standards. One EPA director warned at an NRC Commission meeting that the 25-millirem criterion did not provide enough margin of safety and was “nearly doubling the allowed level of cancer risk to the public.” It was a higher allowable risk than for other toxic substances the EPA regulated. The NRC was treating radiation as a “privileged pollutant.” Privately, EPA sources accused the NRC of doing the industry’s bidding. “What’s come into play here is that [the NRC] wants unrestricted release of [licensee] sites,” an official noted, when the NRC’s ALARA guidance had no “teeth.”159
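The “nearly doubling” charge and the NRC’s shrug were two readings of the same LNT arithmetic. Under the coefficient assumed in the sketch above (about $5 \times 10^{-4}$ per rem) and a seventy-year exposure,

$$
\begin{aligned}
R_{15} &\approx 0.015\ \text{rem/yr} \times 70\ \text{yr} \times 5\times10^{-4}\ \text{rem}^{-1} \approx 5\times10^{-4},\\
R_{25} &\approx 0.025\ \text{rem/yr} \times 70\ \text{yr} \times 5\times10^{-4}\ \text{rem}^{-1} \approx 9\times10^{-4}.
\end{aligned}
$$

In relative terms the lifetime risk nearly doubles, as the EPA director charged; in absolute terms the gap is a few parts in ten thousand, a difference no epidemiological study could detect. These illustrative figures follow from the assumed coefficient and are not taken from either agency's filings.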
Browner turned to Congress. EPA ally Senator Joseph Lieberman of Connecticut complained to the NRC that the rule posed an unacceptable risk to the decommissioning of his state’s Haddam Neck nuclear power plant. The Senate’s GOP majority, however, sided with the NRC. Frank Murkowski of Alaska said he feared the EPA’s use of the “Superfund’s vendetta-like policy of ‘polluter pays’ ” would result in fewer decommissioned sites due to expense. Senators James Inhofe, Craig Thomas, and Mike Enzi warned that the application of EPA Superfund standards to NRC sites would be “fundamentally unsound and disruptive.” They threatened to give the NRC sole regulatory authority over its licensees.160 The EPA issued a memo to its regional directors that the NRC’s criteria “will not provide a protective basis” for remediation goals at decommissioned sites, but its threat to invoke Superfund oversight remained in limbo.161 Stalemate.

The bureaucratic turf war seemed to have no significance for public safety. The insignificance of the 10-millirem gap could be expressed in many ways. It was the dose to a passenger on a one-way flight from Tokyo to New York, one quarter of a mammogram, one-tenth the extra dose received by moving from sea-level Boston to mile-high Denver, or the average natural dose a person receives from living just twelve days on earth.162 Such risk comparisons had failed to persuade the public before. WASH-1400’s executive summary comparing nuclear power to coal plants and meteor strikes had been a debacle, and recent experience indicated nothing had changed. An initiative by the NRC to establish a level “Below Regulatory Concern” (BRC) for low-level waste sites was abandoned, even when it adopted just 1 millirem as its interim policy threshold. The proposed level was less than one chance in a million of fatality risk per year and was intended to avoid unnecessary regulation of tiny hazards. NRC outreach failed to quell a public outcry. Antinuclear groups suspected the NRC’s motives and claimed the policy would cause over 12,412 cancer deaths. Citizen groups sued, states sued, and Congress intervened by revoking the policy in the Energy Policy Act of 1992.163 One NRC staffer concluded that the public “didn’t buy the ‘below regulatory concern’ concept . . . [and] was outraged because they didn’t understand the concept, and it wasn’t explained to them.”164 Viewed across multiple efforts to communicate radiation risk, the more likely lesson was that even good communication campaigns fight an uphill battle against public mistrust. Divisions within the staff over small differences and
outside criticism opened the agency to accusations that it served industry interests, not public safety. The NRC’s conflict with the EPA risked a similar public relations backlash. Yet bridging the 10-millirem gap was almost impossible. Carefully quantified, the EPA and NRC standards were the sum of their intricate regulatory frameworks, assumptions, and technical approaches crafted over many years. The underlying basis for the numbers could not be reconfigured for the sake of numerical consistency like a set of Legos. For the NRC, altering its criteria to suit the EPA meant abandoning its consistency with international radiation standards, which were already considered quite conservative. For the EPA, accepting the NRC’s numerical criteria meant backing away even more than it had already from the arbitrary but symbolically meaningful one-in-a-million point of departure. The EPA had already shown flexibility by permitting risks as high as one in ten thousand in limited circumstances. The EPA had oversight for many toxic substances. Even if the NRC achieved equivalent safety in practice with more conservative calculations and ALARA principles, consistent safety criteria across multiple substances were important to the public and the courts.

The NRC’s clash with the EPA fed into a larger debate within the nuclear community about its approach to radiation safety. For many years, the AEC and NRC had minimized radiation controversies by reducing exposures through stricter standards and improved ALARA practices. This strategy worked because the nuclear industry usually operated well below existing criteria. Tighter standards were easy to meet. The conflict with EPA, however, indicated that compliance costs could become prohibitive. Had the NRC gone too far? NRC Commissioner Gail de Planque, a leading health physicist who held a Ph.D. in environmental health science, thought so. “I think we are pushing the dose limit thing too far. To get some interagency agreement [with EPA], we may be moving away from what I consider some very valid regulatory philosophies. I think . . . a lot of people forget about the ‘R’ for reasonable in ALARA. . . . We’re in danger of pushing [criteria] much lower than they need to be for public health and safety . . . and would not stand up to ‘a valid risk-cost analysis.’ ”165

Doubts about the validity of LNT emerged in some quarters. Some research in the 1990s found little correlation between low doses and cancer. At a meeting of health physics professionals, many experts criticized the NRC for conceding too much and questioned the ethics of
regulating by an unproven linear model. “The people who preach the false science of LNT should take responsibility for doing harm to humanity,” one critic said, by scaring the public away from life-saving therapies and diagnostic technologies.166 NRC Commissioner Edward McGaffigan said that the NRC’s “radiation standards may or may not be totally rational.”167 “We pretend that [radiation has] a linear effect. We pretend that you can go down to 1 millirem and calculate that the chance of a certain type of cancer goes up . . . and therefore we have to do something.” NRC Commissioner Greta Dicus countered that there was no conclusive evidence to revise the LNT model. A Canadian regulator agreed with Dicus: “Without evidence in hand [to refute the model], the regulator has no choice but to rely on some form of LNT.”168

The NRC remained committed to the LNT as a regulatory principle, but its disagreement with EPA forced the staff to confront its practical limits at very low doses. When the NRC published its 25-millirem rule on decommissioning, it pointed to the “uncertainty associated with estimating risks at such dose levels.” Proof of health effects, the NRC pointed out, was lacking even as high as 20 rem (800 times the 25-millirem standard).169 Given this uncertainty and its quite conservative assumptions about public exposures, the NRC stated its criteria were based on “judgment . . . rather than an analysis based on probability distributions for such exposures.”170 The commission concluded the scientific basis for the EPA’s criteria was “based on outmoded modeling that does not reflect current understanding of the uptake and doses resulting from ingestion of radionuclides through drinking water.”171

For the EPA, the NRC criteria seemed to compromise the compromises it had already made. In the 1980s it developed a broader, more flexible framework. It accepted risk estimates that fell short of its one-in-ten-thousand minimum threshold—one in 3,333. This approach had withstood court review. However, the NRC rule was closer to an approximate increased lifetime risk of one in two thousand. With a commitment to ALARA, the real risk would be lower but unquantified and uncertain.172

Ultimately, both agencies backed away from the downward spiral into risk assessment’s maze of hypothetical numbers and found common ground in risk management. It was a compromise eased by a change in leadership. In March 2001, NRC Chairman Richard Meserve met with EPA Administrator Christine Todd Whitman, the first meeting of agency heads in five years. Soon, the EPA signaled its willingness to accept the NRC’s 25-millirem limit at New York’s West Valley Demon-
stration Project, but with an array of risk management tools, such as the development of administrative controls and mitigating measures.173 In 2002, Whitman and Meserve signed a new memorandum of understanding confirming this new approach. Risk from residual radiation at a decommissioned site would be managed by returning a greater measure of judgment to the process. The EPA affirmed that its involvement in site evaluation would occur “very infrequently,” since most sites decontaminated under NRC oversight would fall below EPA Maximum Contaminant Levels (MCL). The memorandum replaced a hard and fast numerical standard with numerical “trigger levels” for consultation between the two agencies when the MCLs were likely to be exceeded.174

In the controversy over site decommissioning, quantitative risk assessment did not provide the cross-agency consistency and “statutory formula” EPA administrator William Ruckelshaus had sought. He looked to risk assessment to save a risk management process that seemed hopelessly flawed by politics. In this case, the risk managers and political decisionmakers had to save quantitative risk assessment by broadening regulatory discretion to include qualitative factors. As the EPA and NRC ran up against the limits of technical certainty and the constraints of their complex legal histories, policy judgments became essential parts of the “formula.” Institutional and administrative controls at decommissioned sites became critical tools for sites that fell short of “green field” status.

The experience of risk assessment outside the context of US nuclear power plant regulation reinforced the lesson that PRA was a powerful tool with important limits. In the right place, it could provide risk insights and produce consensus, but it also required risk-informed decisions rather than risk-based ones. Policymakers still needed a range of management options and the flexibility to consider qualitative information and social and political factors.
CHAPTER 7
Risk-Informed Regulation and the Fukushima Accident
As the NRC entered the twenty-first century, it charted a clear, aggressive course to risk-inform its regulations to ensure adequate protection of the public while improving efficiency. After the Domenici-Jackson meeting in 1998, risk-informed regulation dominated NRC rhetoric. The NRC’s strategic five-year plan for 1997–2002 mentioned “risk-informed” only a half dozen times, but in the revised volume for 2000–2005, it appeared more than ninety times. Similarly, the NRC had barely mentioned reducing “regulatory burden” in the 1997–2002 strategic plan, but it appeared over one hundred times in the next volume, often as a benefit of risk-informed regulation.1 To be “risk-informed” was more than a technical approach to safety regulation; it was the NRC’s language of progress, a program that would modernize it as a regulator.

It was not certain whether the industry would follow along. NEI was a champion of risk-informed initiatives, but the industry was still a hodgepodge of large and small entities whose bottom-line calculations often varied considerably. Licensees were skeptical that risk-informed regulation would provide the advertised benefits. One industry publication described the risk-informed initiative as a “massive overhaul of NRC regulations” requiring “spending millions of dollars—how many millions nobody knows—to create a shadow set of risk-informed regulations with little assurance that anybody will use them when they’re finished.”2 John McGaha of Entergy Operations told the commission that its adoption of risk-informed regulation would hinge on a “business
decision,” and he confessed that his plant staff were so wary of it that none of Entergy’s plants volunteered to pilot risk-informed experiments.3 Industry wanted to see an obvious payoff from a long-term investment in risk-informed regulation. The NRC’s new chairman, Richard Meserve, admitted, “It’s sort of a gamble, I guess, at this junction.”4

Gamble the NRC did. As one industry newsletter noted, NRC staff floated “things that would have been unheard of from federal regulators before,” such as the possibility of risk-informing the ECCS acceptance criteria to allow more fuel degradation during an event.5 It committed millions to the implementation of the risk-informed Reactor Oversight Process and risk-informing regulations for fire protection, technical specifications, in-service inspection, and loss-of-coolant accidents. Some regulatory revisions were completed with modest effort, but risk-informing just one regulatory section on the obscure “special treatment” of safety equipment consumed fourteen thousand staff hours within a couple of years.6

The NRC also worked to communicate risk and risk-informed regulation to the public. As one NRC commissioner noted, “We can have the most advanced risk insights, the best science, the leading experts in the field, but if we do not have an effective communication plan, we will fail.”7 The field of risk communication was thirty years old, and the NRC sought to distill its lessons into training that would support effective public communication about the role of risk in regulation.8

Despite the effort, skeptics of risk-informed regulation abounded. “If I was a betting man, I don’t think that it’s going to happen,” said David Lochbaum of the Union of Concerned Scientists.9 He predicted the cost of compliance with new risk-informed regulation would exceed savings. Many in industry were in unusual agreement with one of their primary critics. The industry’s lobbying organizations in Washington, DC, and a few larger utilities had a strong interest in applying PRA to regulations, but a manager at a smaller plant dismissed the expensive safety tool. “For older plants, single unit utilities, the cost-benefit just isn’t there,” he said. Industry participation “is all going to be very market-driven,” one NEI official admitted.10

There were plenty of skeptics of risk-informed regulation within the NRC staff, too. One executive of a PRA firm thought much of the staff was not interested in risk-informed regulation and was “very comfortable with the way things are.”11 PRA expert John Garrick noted that the commissioners were usually more interested in reforming regulations than the staff. Referring to outgoing chairman Shirley Jackson, he said,
“The person who coined the phrase [risk-informed]” is leaving. “I don’t know what to expect from the others.”12

Even while some nuclear critics thought risk-informed regulation might not happen at all, others worried it might. They feared risk-informing regulations meant reducing safety margins. Public Citizen’s Jim Riccio warned that steps away from traditional deterministic safety would endanger the public. Risk-informed regulation, he said, was merely a euphemism for deregulation, an economic motivation wrapped in the language of safety.13 NEI’s Ralph Beedle objected to Riccio’s characterization. Economics and safety could be pursued simultaneously. “There should be no doubt in anybody’s mind that the industry is interested in maintaining these plants in a safe configuration. There is no one that’s more interested in that than the CEOs of these facilities. Because an unsafe plant is one that represents major financial risk, and he’s not about to take that. So, the last thing we want is deregulation of the safety regulations.”14

The NRC, too, had to struggle with the perception that risk-informed regulation sacrificed safety to benefit the industry’s bottom line. Gary Holahan, a regulatory office director at the NRC, expressed his frustration that much of the debate hinged on appearances. “If we had a risk-informed approach that everybody was comfortable with . . . [and it] didn’t look so much like relaxing safety, but as a good decision,” there might be more risk-informed regulation.15 Another NRC staffer admitted to a reporter that if the public did not see risk-informed regulation as an initiative to improve safety, critics would have the upper hand.16

The maintenance rule and ROP may have created excessive confidence in what could be accomplished with risk-informed regulation. Nuclear plants had been designed according to complex deterministic regulations and guidance documents. To risk-inform a regulation, NRC staff and industry had to apply risk insights piece by piece, regulation by regulation, while keeping alert to how changes in one small part could affect the whole. The NRC considered ambitious initiatives to risk-inform Part 50 regulations for operating reactors. One option was simple but far-reaching: change the application of the term “safety-related.” Safety-related equipment and systems had “special treatment requirements” for quality assurance, maintenance, and their ability to survive in the harsh environments of design basis accidents that exceeded requirements for standard commercial versions. If a licensee had a high-quality PRA, it could identify equipment with low safety significance and replace it as
it aged with a less expensive commercial grade. A safety-related valve priced at $36,000 might cost just $9,500 at commercial grade. Industry estimates indicated that 75 percent of safety-related equipment could convert to commercial grade.17 In 2004, after six years and tens of thousands of staff hours, the NRC issued its new risk-informed special treatment regulation, known as 50.69, with high hopes.18 The following year NEI published a guidance document to the industry.19 There was confidence that the rule would have wide application to operating plants and even new reactor construction. NRG Energy, the operator of the South Texas Project plants, indicated it was considering the addition of two new reactors where special treatment regulations could control costs and achieve safety. In 2006, an STP official said its plant staff were excited at the prospect of being able “to do things right the first time. . . . This will be a risk-focused station” that will “better focus construction” on true safety.20 In 2007, NRC staff gave the commission a positive update on the special treatment rule and reported it was “awaiting industry implementation.” NEI’s Biff Bradley predicted, “I think the day will come when every plant has adopted 50.69.”21 The wait lasted ten years.22 The NRC’s PRA quality requirements and high implementation costs made the rule a hard sell among licensees.

In parallel with the special treatment rule, the staff and industry followed up on NEI’s proposal to risk-inform and essentially eliminate as a design basis accident the large-break, double-guillotine LOCA. As an accident that had never happened, it was, as one industry official put it, the “mother of all assumptions.” Preventing and mitigating a LOCA had been a significant concern since the Hanford production reactors. Removing the large-break LOCA from the regulations was akin to deleting original sin from the book of Genesis.23 For an industry under competitive pressure, the savings from the LOCA rulemaking were tantalizing. For less than $1 million in application costs, industry estimated, a licensee might bank $8 million per year in savings. Adding in the potential for power uprates (increases in allowed power plant output), the collective industry benefits could be nearly $12 billion.24

The safety case for eliminating the large-break LOCA had been around for decades. Experts were confident that large pipes did not suddenly break in half, as postulated. The much more likely scenario involved a pipe developing a detectable “leak before break” long before any sudden rupture occurred. Only smaller pipes in the range of six to eight inches might break suddenly. Pipe breaks above a “transition break size”
could be neglected due to their improbability. Proof had to await operating experience. By 2002, when NEI filed a rulemaking petition to treat a large-pipe break as “an extremely unlikely event” of “negligible risk,” PRAs could draw on data from two thousand five hundred reactor years of operation.25 Experts concluded that large-break LOCA probabilities were much lower than WASH-1400’s estimates.26

The long path to revising the LOCA regulations illustrated how difficult it was to achieve the promise of risk-informed regulation in practice, particularly when the costs of acquiring regulatory approval and implementation provided uncertain long-term benefit. In debates about the proper balance between qualitative and quantitative safety and about the quality of licensee PRAs and data collection, the expected benefits could vanish. The transition break size became a major sticking point. NRC staff divided all breaks into categories above and below the transition break size, typically the second largest pipe diameter in the plant. At about twenty inches, such a break was still too large to provide much savings to the owners of General Electric Boiling Water Reactors (BWRs), and they withdrew from the rulemaking. PWR owners remained interested parties. NEI’s Anthony Pietrangelo argued that unless the transition break size was small enough to relax safety requirements on Emergency Core Cooling Systems and other safety equipment, the rule change would be an “implementation nightmare” that could cost the industry over $160 million. The NRC’s Brian Sheron replied that the staff would reduce the break size if it preserved safety, but if it just meant “they’re going to crank out more megawatts and make more money, we’re not that receptive.”27

As the industry pushed for a smaller transition break size, the ACRS pushed the other way. It objected that the NRC was in danger of abandoning the principle of defense in depth. In November 2006, ACRS Chairman Graham Wallis said the proposed rule “should not be issued in its current form.” He called for revisions to “strengthen the assurance of defense in depth for breaks beyond the transition break size.”28 Disagreement was strong enough that the NRC did not issue a revision for almost three years. In 2009, the staff’s proposed final rule satisfied the ACRS and industry, though with less benefit than anticipated. The ACRS concluded that the staff had crafted an acceptable risk-informed regulation that preserved defense in depth for larger breaks even while relaxing certain low-probability safety requirements. Despite PRA quality requirements that were “pretty high,” NEI’s Biff Bradley observed, the rule remained “the poster child for risk-informed regulatory reform” and was
“a fundamental reform objective for the industry.” Bradley voiced the industry’s “hopes [that] we can achieve the win-win of safety benefits as well as being able to enable power uprates and other benefits of the rule.”29 In January 2011, the rule’s twelve-year journey was nearing its end and it went to the commission for approval. The NRC’s regulations were on the threshold of major change.30

DAVIS-BESSE’S HOLE IN THE HEAD
In the years after its 1985 loss-of-feedwater event, Davis-Besse gained a reputation as a well-run plant. Events in 2002 changed that.31 As in all PWRs, the reactor vessel at Davis-Besse is a huge steel pot that holds the reactor fuel and cooling water at temperatures greater than five hundred degrees and pressures above two thousand pounds per square inch. The vessel walls are a nearly impervious six and a half inches of carbon steel and, to limit corrosion, are lined with stainless steel cladding about three-eighths of an inch thick. The reactor fuel is loaded from the top, and an equally substantial domed vessel head is clamped on the vessel like a pressure cooker with gigantic bolts to create a perfect—almost perfect—seal. The plant’s chain reaction is managed with long control rods that poke down through sixty-nine penetrations in the vessel head, a few of which serve other purposes. A nickel-alloy steel tube, called a nozzle, is inserted into each hole, forming a forest atop the head. The nozzles are welded to the head and capped with a flange. The drive mechanism for each control rod is bolted to the flange to complete the pressure seal.

Leaks sometimes occur at the flanges. The primary coolant that leaks out contains chemicals, notably boron, an element that helps control the chain reaction. Like salt from boiled seawater, the boron in the leaky water precipitates out as the water flashes to steam. The boron usually deposits on the vessel head and elsewhere in the reactor building. If the leak is small, the boron is more of a nuisance than a hazard. While wet boron could become corrosive boric acid, boron deposits on the very hot reactor vessel generally are harmlessly dry and periodically cleaned off during outages.

Over time, leakage became a more complicated issue for some plants. In the early 1990s, it was evident that the nozzles and the welds that attached them to the vessel head were susceptible to developing small stress corrosion cracks. Axial cracks, running vertically up the nozzle, were far less dangerous than circumferential cracks that might go all the way around a nozzle tube. If a circumferential crack circled all the way
around and went all the way through the nozzle wall—a “through wall” crack—the tube and control rod could sever from the reactor head and eject violently. A medium-break LOCA would ensue.

In 2001, nozzle cracking went from a minor to a major technical problem. Previously, most cracks had been axial, but circumferential cracks were found at two plants at the Oconee Nuclear Station. They were revealed by inspection methods not normally required by engineering codes. Further investigations found cracking at other plants, particularly if the nozzle was made of Alloy 600 steel. The NRC was confronted with a “special circumstance” where engineering codes and NRC regulations might not provide assurance that adequate protection existed at the nation’s sixty-nine PWRs.32

The nozzle cracking issue emerged while the application of risk-informed regulation was still new. Staff guidance documents on risk-informed regulation were only a year and a half old. It had not been applied to a generic operational issue with such uncertain safety significance. There was only limited guidance about how to assess the quality of PRA submissions from a licensee and how to weigh quantitative risk with qualitative factors. The staff adapted a procedure used elsewhere in the licensing process. As one NRC official later said, the nozzle-cracking issue became a learning experience in applying risk-informed regulation.33

To understand the extent of this new problem, the NRC sent a bulletin to all PWR licensees requiring them to analyze their plants for susceptibility to cracking. For plants with high susceptibility, the NRC wanted further information about how safety would be maintained until the next scheduled shutdown. If a licensee could not make that deadline, it was required to justify how it would maintain adequate protection until it shut down to inspect. As requested by the industry, the NRC allowed licensees to perform a risk-informed analysis as part of their response.34 Deadline: December 31, 2001.

Based on this process, the NRC identified twelve plants as highly susceptible to cracking, including Davis-Besse, which was owned by FirstEnergy Nuclear Operating Company (FENOC). Of the twelve plants, only Davis-Besse and D. C. Cook requested an extension of time before shutting down to inspect. FENOC hoped to align the inspection with its scheduled March 31, 2002 refueling outage. It argued accident risk was low. The staff was unpersuaded and began drafting a shutdown order even as it continued to communicate with the licensee. To address staff concerns, FENOC offered a series of measures to reduce risk. It
presented video recordings of previous vessel inspections, which indicated that there was no significant nozzle leakage, though viewing the head was made difficult by sizable deposits of boron FENOC said were from past flange leaks. There were also a few nozzles at the very top of the head that were inaccessible to viewing, but from what could be seen on the rest of the vessel head, nothing was amiss. FENOC also cut its extension request in half, to February 16, and it offered compensating measures on plant operations, maintenance, and monitoring that it claimed reduced core damage risk by 20 percent. The NRC’s own calculations indicated that the increased risk of letting the plant operate was within regulatory guidelines.35

FENOC’s proposal created a dilemma for the NRC. How could it turn uncertain judgments about accident probabilities into a regulatory decision? When confronting this “special circumstance” of nozzle cracking, staff had to “assume the burden” of demonstrating that new information indicated that the “adequate protection” standard was not met. This was a difficult threshold to determine based on uncertain data and PRA assessments.36 There was no bright line where staff could declare FENOC’s data and measures met the adequate protection standard. Its deadline of December 31, 2001 was somewhat arbitrary, “not a defined threshold of risk acceptability.” The NRC’s Allen Hiser pointed out the staff’s conundrum: “We cannot demonstrate numerically that this is the ‘correct’ date, and we certainly cannot differentiate 12/31/01 from 01/01/02, or 01/19/02 or 03/31/02.” Trying to claim there was a firm technical basis for the date, he wrote, was “a no win situation for us . . . due to the judgmental nature of the date” that was set by “regulatory discretion.” It was a date set by a mixture of quantitative and qualitative factors, including risk estimates, uncertainty, and practical questions such as plant scheduling around the short supply of skilled repair contractors who jumped from one plant repair to the next. The NRC, with some staff dissent, accepted FENOC’s proposal.37

Davis-Besse accordingly shut down on its revised schedule of February 16, 2002, and its repair contractor, Framatome, began nozzle inspection and repair. The first cracks discovered were reassuring—less dangerous axial cracks with some leakage and boric acid deposits. Workers moved on to repair Nozzle 3, which was located almost at the summit of the domed vessel head. During the machining process, the nozzle tipped to the downhill side and only stopped when it hit the nozzle flange next to it. Since the nozzle tubes were inserted through the 6.5-inch thick
FIGURES 20 AND 21. The Davis-Besse degraded vessel-head event was one of the most serious in the history of commercial US nuclear power. The illustrations identify the location of nozzle no. 3, near the summit of the vessel head. The photo shows the erosion area after the boric acid deposits were cleaned away. The silver area is the inner stainless-steel cladding, just three-eighths of an inch thick. The arrow points to a crack in the cladding. Source: US NRC.
vessel head, tipping should have been impossible. It was as incongruous as waking up one morning in Manhattan to see the Chrysler Building leaning up against the Pan Am/Metlife building. After cleaning away the boric acid deposits, Framatome discovered a “massive amount” of vessel-head erosion, a cavity the size of a pineapple in a 180-degree arc around the nozzle. A crack in the nozzle had opened a pathway where boric acid had eaten away the vessel head. In some spots the erosion was so complete that only the stainless steel inner liner remained as the last barrier to a LOCA. It was considered one of the most potentially dangerous events in US commercial nuclear power history.38 The Davis-Besse vessel-head event had grave implications for FENOC. It lost millions in operational revenue, replaced plant management, and received the largest fine in regulatory history, $33.5 million in criminal and civil penalties. Some plant staff were found guilty in federal court of intentionally misleading the NRC about the state of the reactor.39 The leak may have started as early as the mid-1990s and been overlooked for years. FENOC’s internal review concluded the staff had followed “minimum compliance” in controlling boric acid degradation
and did not heed symptoms of a leak. FENOC concluded the plant’s reputation as a top performer in the industry had allowed it to become “complacent” and “isolated.”40

The episode exposed the NRC to attacks on its use of risk-informed guidance and PRAs. David Lochbaum of the Union of Concerned Scientists and Paul Gunter of the Nuclear Information and Resource Service penned an analysis entitled “NRC has a brain, but no spine.” They claimed the NRC had worried more for the financial health of FENOC than living up to its risk-informed principles. Detailing the staff “anguish” and dissent over the decision, Lochbaum and Gunter charged that “the NRC knowingly and deliberately violated its own safety policies and procedures to allow Davis-Besse to continue operating.” Briefing slides, they argued, showed staff may have recognized that continued operation would increase the chance of core damage above acceptable levels. The NRC had the evidence against operation but had capitulated. As Lochbaum and other UCS staff argued, industry reluctance to invest in improved PRA quality was at the core of the NRC’s poor decision.41

Discovery of Davis-Besse’s vessel-head erosion cast doubt on NEI’s recently filed petition to the NRC to risk-inform the large-break LOCA and ECCS rule. Representatives Edward Markey of Massachusetts and Marcy Kaptur of Toledo, Ohio, pointed to the potentially disastrous consequences of the Davis-Besse event and asked the NRC if it should reconsider risk-informed regulation entirely.42

Reviews by three federal entities were also critical of the NRC. An investigation by the NRC’s Inspector General, Hubert Bell, concluded that the NRC decision was “driven in large part by a desire to lessen the financial impact” of a shutdown on FENOC. The staff had “established an unreasonably high burden of requiring absolute proof of a safety problem” before requiring a shutdown. NRC practices, the inspector told a Senate hearing, were increasingly cost-conscious and “becoming unsafe.”43 The NRC’s lessons-learned task force also noted FENOC’s risk assessment had “a considerable level of uncertainty” and the final staff decision was poorly documented, including the degree to which it relied on quantitative risk estimates.44 A report by the General Accounting Office took the NRC to task for its limited guidance to staff on PRA use, poor documentation of its decisions, and for failing to recognize that accident risk was higher than its own standards.45

The NRC admitted it did not recognize some symptoms of a leak, and it committed to dealing better with risk uncertainty and to documenting its decisions. But it was blunt about other criticisms. Gary Holahan,
director of the Division of Systems Safety and Analysis, warned the commission that accepting Bell’s conclusions threatened the agency’s risk-informed initiatives. The decisionmaking process at “Davis-Besse was not only correct, but . . . it constitutes a good and appropriate model for future actions.” The staff was constrained by the lack of evidence that FENOC was in clear violation of the technical requirements of its license. The increase in the probability of core damage of 5 × 10⁻⁶ per reactor year (one in two hundred thousand) was acceptable under NRC guidelines. Holahan believed the NRC decision “was the only decision that could have been made in light of the applicable processes and the information and analysis available at the time.”46 Holahan’s point about the reliability of licensee information was echoed in the NRC’s rejoinder to a GAO report on the event. Noting the ongoing criminal investigations by the Department of Justice into the veracity of information supplied by Davis-Besse staff, the NRC told GAO that its oversight of licensees relied on them to “provide us with complete and accurate information.” In a point-by-point rejection of the GAO’s critique of the NRC’s risk-informed methodology and calculations, the NRC concluded, “We regard the combined use of deterministic and probabilistic factors to be a strength of our decisionmaking process.”47

SAFETY CULTURE, AGAIN
Holahan’s point about the reliability of licensee information was echoed in the NRC’s rejoinder to a GAO report on the event. Noting the ongoing criminal investigations by the Department of Justice into the veracity of information supplied by Davis-Besse staff, the NRC told GAO that its oversight of licensees relied on them to “provide us with complete and accurate information.” In a point-by-point rejection of the GAO’s critique of the NRC’s risk-informed methodology and calculations, the NRC concluded, “We regard the combined use of deterministic and probabilistic factors to be a strength of our decisionmaking process.”47

SAFETY CULTURE, AGAIN

Davis-Besse did compel the NRC to revisit its deference to the nuclear industry on safety culture. For twenty-five years, the nuclear industry had resisted NRC initiatives to assess or quantify safety culture. After the Brookhaven study, INPO and licensees took the lead on safety culture. The industry made dramatic improvements to its operations, including an order-of-magnitude reduction in significant events at power plants. Why should the NRC measure safety culture?

The vessel-head erosion event provided an answer: even if safety culture could not be quantified, the NRC needed to include it in the ROP. Citing IAEA qualitative safety culture guidance, the GAO report pressed the NRC to develop some safety-culture assessment methodology for the ROP. The NRC pointed to the subjectivity of any methodology and the time-honored line between regulation and management. Yet the questions continued. FENOC had taken over operations from Toledo Edison Company and enjoyed a reputation as a top performer, a reputation that may have caused the NRC to let its guard down on the nozzle-cracking issue. FENOC met minimum performance requirements of the ROP
without revealing a plant culture that prioritized efficiency over safety.48

The public and Congress asked tough questions. Why had the licensee’s safety culture degraded? Where was plant management? Where was INPO? Where were NRC inspectors? Sixteen years after Chernobyl, why didn’t the NRC have a safety culture regulation? The NRC felt the heat from Capitol Hill. Ohio Senator George Voinovich pressed NRC Chairman Nils Diaz: “Why do you disagree with everyone that you should put in place a regulation to monitor safety culture? . . . Why do the Europeans do it? . . . You are going to be [doing safety culture assessments at] Davis-Besse for the next five years. . . . Why [don’t we] make that applicable to all the facilities?” Echoing Nunzio Palladino’s reservations twenty years earlier, Diaz replied, “Because it will get into an area that the Commission believes that we should not be, which is managing the facility.”49

As a condition of restarting the plant, the NRC required that FENOC contract for an independent safety culture assessment. Sonja Haber, who had led the Brookhaven study, headed a team of consultants that included psychologist Valerie Barnes and veterans of the nuclear navy. Using an updated version of the Brookhaven methodology, the team assessed FENOC against widely agreed-upon IAEA safety culture traits: 1) Recognition of the value of safety; 2) Integration of safety into all plant activities; 3) Accountability for safety; 4) Learning-driven safety practices; and 5) Clear leadership for safety. Despite improvements, the team concluded, FENOC had made only partial progress toward those characteristics. FENOC spent several years conducting additional safety culture assessments.50

The NRC did not develop a safety-culture regulation, but it inserted a little more room for qualitative judgment on safety culture into its Reactor Oversight Process. In 2006, NRC psychologists such as Julius Persensky collaborated with other technical staff to create opportunities for inspectors to diagnose and act on safety culture weaknesses, a process to determine the need for an evaluation of a licensee’s safety culture, and guidance on evaluating a licensee’s self-assessment of safety culture or conducting an independent one.51 To Valerie Barnes, a recent addition to the NRC staff, integrating safety culture into the ROP seemed misguided because untrained NRC inspectors—engineers rather than social scientists—would judge safety culture. Nevertheless, she and other social-science staff helped establish appropriate protocols in the ROP and the language of the NRC’s 2011 safety culture policy statement. The final products, Barnes believed,
captured standard elements of such assessments. The NRC’s training and guidance documents, as well as INPO’s work in the area, provided a shared vocabulary for the NRC and industry to discuss safety culture issues. She remains concerned that the commitment to safety culture may not endure.52 The insights of social science experts found a place in the ROP, but only after a delicate balancing of quantitative and qualitative factors.

THE RENAISSANCE
By 2010 the Davis-Besse episode seemed like a bump in the road to an industry revival. As the industry’s outlook improved, the lessons from Davis-Besse, the maintenance rule, the ROP, safety goals, and other risk-informed initiatives were connected to a larger purpose: the “renaissance.” The NRC and industry had made plants safer without wrecking their economic viability. In the twenty-first century, old and new reactors might flourish together. As one Entergy official put it, the flood of license renewal applications for operating reactors “builds a bridge to the nuclear renaissance.” Economic prospects were so good for operating plants that eighty-year operation—a second twenty-year extension—seemed a likely prospect for many plants.53

With operating plants money-makers, crossing the bridge to the renaissance was not such a long trip. Nuclear power’s public standing was higher than it had been in decades, topping 60 percent approval in some polls. Its potential to solve the threat of global warming brought around high-profile environmentalists, such as Stewart Brand, James Lovelock, and Greenpeace founder Patrick Moore. Other environmentalists were unmoved. Jim Riccio of Greenpeace called the renaissance “more propaganda exercise than a serious discussion of the viability of the industry.”54 There were still substantial doubts that new nuclear plants were a sound investment. Another accident, the chairman of the World Association of Nuclear Operators warned, would undo all the good work of recent years. It would be almost impossible to convince the public, power providers, and politicians to take another chance with nuclear power, thought one engineering news publication. The “return of the nukes will take a miracle.”55

Yet nuclear power seemed to be walking on water. Wall Street and the power industry were taking another look at the peaceful atom. “Performance has just been stellar,” said a director at Goldman Sachs. Investors “are not afraid of nuclear power.”56 While deregulation had hit nuclear plants hard in the 1990s, they had trimmed costs and profited.
Meanwhile, the dark side of a deregulated electric marketplace became evident. California, a leader in deregulation, suffered from power shortages and rolling blackouts. The uncertainties and shortages had been so severe that public opinion favoring new nuclear plant construction spiked in 2001 to 66 percent. There was even growing acceptance of local construction projects. Amid rising fuel costs, power producers searched for stability and engaged in bidding wars for operating nuclear plants. The time seemed ripe for a second nuclear era. As one industry observer noted, “the nuclear industry has another chance to rescue itself.”57

In the White House and on Capitol Hill, the renaissance enjoyed bipartisan support. Nuclear power was a major part of President George W. Bush’s energy plan. In 2001, Vice President Richard Cheney touted it as a solution to global warming. “If you want to do something about carbon dioxide emissions,” he said, “then you ought to build nuclear power plants.”58 Congressman Billy Tauzin told an industry audience, “Who would have thunk that we’d be discussing the possibility of nuclear construction in this country?”59

Talk turned to action. In 2005, with funding from Congress, the Department of Energy finalized a cooperative agreement with NuStart, an industry consortium of energy companies and the vendors Westinghouse and General Electric, to provide $1.1 billion to help guide the industry through the new NRC combined construction and operating license process. “I used to wonder whether any of us would ever see the day,” said Senator Pete Domenici, “but the horizon is getting shorter and we can see some real action coming.”60 Two years later, the NRC accepted an application from the Tennessee Valley Authority for two new plants at the Bellefonte site in Alabama; NRC Chairman Dale Klein announced, “The nuclear renaissance is here.”61

The NRC had a new spring in its step. There was strong interest in its regulatory programs for operating and new reactors. In 2008 an Electric Power Research Institute survey found the “vast majority” of licensees expected extensions to eighty years. With a wave of new plant orders expected, the NRC hired more staff and expanded to a third building on its campus in Rockville, Maryland. Support for a new generation of power plants was evident in public opinion and at the NRC’s annual conference. One session on new reactors was so packed that four NRC employees were stationed at entrances to hold back the overflow crowd.62

With the NRC’s burst of activity, it was the industry that had fallen behind. Although the NRC had taken years to give birth to its risk-informed regulation on large-break LOCAs, Biff Bradley of NEI
acknowledged “most envisioned regulatory improvements have been achieved.” Bradley called on the industry to upgrade its PRA tools. “The investment will pay off,” he predicted. With PRA expertise in short supply and a noted “trepidation” among licensees to commit the resources or volunteer their plants to pilot regulations, it was the industry that seemed to be racing to catch up with its forward-thinking regulator. When NRC staff asked owners of PWRs why none of them had implemented the risk-informed special treatment regulation, known as Part 50.69, a perplexed participant asked, “What’s 50.69?”63

These were the NRC’s halcyon days. Democrats, historically skeptical of nuclear power, took back control of the White House and Congress, but it did not matter. In his 2010 State of the Union address President Barack Obama called for “a new generation of safe, clean nuclear power plants in this country.”64 Obama’s support was a critical piece of the puzzle. For all the hope of a new day for nuclear, its economics were still such an open question that it needed federal seed money. In a 2009 update to its upbeat report issued in 2003, an interdisciplinary team of MIT faculty noted that advanced reactor designs met safety goals, but cost was uncertain. Expectations for cost reductions and on-time construction of nuclear power “were plausible, but not yet proven.” The only way forward, the study concluded, was for the federal government to provide more loan guarantees to a demonstration “first mover” project. Just a couple of weeks after Obama’s speech, the Department of Energy announced $8.33 billion in loan guarantees for two Westinghouse AP1000 reactors to be built at the Alvin W. Vogtle Generating Plant in Burke, Georgia. In its press release, DOE argued nuclear power was an environmental necessity: “To meet our growing energy needs and prevent the worst consequences of climate change, we need to increase our supply of nuclear power.”65

The NRC received seventeen applications for twenty-six new reactors, had a heavy backlog of operating license renewal applications, and worked to bridge its shortage of degreed nuclear engineers. In early March 2011, the NRC’s annual conference again hosted record crowds. Congress still had not passed a bill to limit carbon-dioxide emissions—a critical boost to nuclear power’s competitiveness—but commissioners spoke confidently of the NRC’s accomplishments and the way forward. Chairman Gregory Jaczko boasted of a recent positive evaluation of the US regulatory system by the International Atomic Energy Agency, and he pointed with pride to the staff’s persistence in grinding out a complex risk-informed regulation on fire protection that endorsed an engineering fire code called NFPA 805. The process of qualifying two pilot plants
had taken seven years, he admitted, but the use of PRA in NFPA 805 “works.” Commissioner William Magwood ventured that the nuclear industry was “entering a new era” and predicted that new reactor technologies would dramatically alter energy production by mid-century.66 Commissioner William Ostendorff agreed: “I envision that nuclear has a clear, important role in our future.”67

FUKUSHIMA
The Tohoku earthquake of March 11, 2011, registered 9.0 on the Richter scale, the fourth strongest in recorded history. A quake of such magnitude was considered an incredible event in the Japan Trench off the coast of the island nation. In six minutes, energy equal to the annual electrical consumption of the Los Angeles region erupted along a three-hundred-mile rift about forty miles off the coast of the northeast section of Japan’s main island, Honshu. Japan grew wider, as Honshu’s eastern coast was pulled eight feet closer to North America and dropped two feet in elevation. The quake’s tremors made Antarctica’s glaciers flow faster. Seawater in Norwegian fjords sloshed about. The earth shrank; tipped on its axis; rotated faster; gravity lessened; days shortened. The quake’s focus was eighteen miles beneath the sea floor, but it affected the orbit of satellites in space.68

It was the largest quake in Japan’s history, but its people were ready. The advanced early warning system picked up the fast-moving, relatively harmless P-waves that arrived ahead of the slower, more deadly S-waves. Tokyo had eighty seconds to prepare. Factory workers, office employees, and college students knew the drill. Families had their emergency kits ready. Many children, trained with earthquake simulators, calmly strapped on the soft helmets that normally served as classroom seat cushions and executed “duck-and-cover” drills or rode down evacuation chutes from upper floors. Trains slowed down; isolation valves on natural gas lines closed. Japan’s already strict building codes, upgraded after the 1995 Kobe quake, worked as intended. High-rise buildings around the island swayed, but for a temblor that easily exceeded safety margins, damage was limited. They did not slip over a “cliff edge” and collapse.69

Japan also prepared for tsunamis. About 40 percent of Japan’s coast had tsunami seawalls of varying sizes. The northern Sanriku coast had experienced Japan’s worst tsunamis, and some small fishing communities had built impressive seawalls. Taro’s “Great Wall” was thirty-three feet high, and Fudai’s was fifty-one feet. There were drills in some
towns, too. A week earlier, Taro, a village of five thousand, had completed its annual tsunami drill on the anniversary of a wave that destroyed the village in 1933.

With a tsunami, there is a cliff edge. As the Pacific Plate slid under Japan’s Okhotsk Plate—a process called subduction—the latter thrust upward twenty-three feet, displacing the ocean water above it. Magnified by a combination of plate displacement, vibrations, and underwater landslides, the resulting tsunami surpassed anything Japan had seen. In deep water, the tsunami was only a few yards high as it rolled toward the island at about five hundred miles per hour. As it came ashore, it slowed and rose to a crest of between thirty and seventy feet. Front to back, a tsunami’s wavelength can be dozens of miles long, and its surge does not quite crash on shore like a surfing wave. The whole ocean seemed to rise for over twenty minutes.
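The speeds and heights described above follow from the standard shallow-water approximation, in which a tsunami’s speed depends only on ocean depth; the 4,000-meter depth used below is an assumed, representative open-Pacific value, not a measurement from the event:

\[
c = \sqrt{g\,d} \approx \sqrt{9.8\ \mathrm{m/s^2} \times 4000\ \mathrm{m}} \approx 198\ \mathrm{m/s} \approx 440\ \mathrm{mph}.
\]

As the water shallows toward shore, the wave slows, and conservation of energy piles the slowing water into a far higher crest.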
As the ocean overtopped seawalls, it looked like the sides of a giant bathtub overflowing from some enormous faucet that someone forgot to turn off. A wall just a few feet too short was almost like no wall at all. Fudai’s fifty-one-foot wall held, and one resident died. Taro’s thirty-three-foot Great Wall did not; the town was destroyed. Its March 3 drill did help limit casualties. Some towns, such as Minamisanriku, had no wall and had made limited preparations and few drills. There, thousands died amid complete destruction.70

For the nuclear power plants closest to the epicenter, wave heights and seawalls determined which plants slipped over the cliff edge. On the coast north of Tokyo, there were fourteen plants clustered at five stations. Each one had a General Electric boiling water reactor (BWR). Each reactor in operation scrammed, started its diesel generators, and began cooling down. Even with an earthquake that exceeded their design basis, the plants were safe. Until the wave. Only the Onagawa station was designed for such a tsunami. Located to the north of Fukushima Daiichi on the Sanriku coast, Onagawa’s three reactors sat behind a Fudai-like seawall just over forty-eight feet high, enough to survive the forty-two-foot wave. To the south, the Fukushima Daini station was fortunate. Its defenses were much lower, but so was the tsunami. It did not completely lose power, and its backup cooling systems worked sufficiently to restore cooling within fourteen hours.71

Fukushima Daiichi, however, suffered Minamisanriku’s fate. It was designed to handle a tsunami only half as high as the wave that arrived about fifty minutes after the quake. The flooding that followed disabled all diesel generators at the four oldest units, as well as electrical switchgear and cooling pumps, in what is known as a common-cause event, one that takes out multiple safety systems and nullifies the careful redundancy designed into them. Heroic efforts by Daiichi station workers to string emergency power cables and bring in mobile diesel generators, pumps, and cooling water also failed. Battery power was eventually lost, too. As NRC risk assessments had observed, a loss of power was one of the most dangerous accidents for the BWR Mark I containment. It was smaller than a comparable PWR containment and vulnerable to damage from melting fuel and the buildup of explosive hydrogen gas. As pressure rose, the hydrogen leaked from the containment vessel into the secondary containment building. Operators had the capability to vent off the gas, but complications in local evacuations, chain of command, communications, and the operation of the vent valves created delays.
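A minimal sketch shows why the common-cause failure described above defeats redundancy. In the beta-factor model used in PRA, a fraction beta of component failures stem from a single shared cause, here a flood reaching every diesel generator; all numbers are illustrative assumptions, not values from any Fukushima analysis:

    # Beta-factor sketch of common-cause failure (illustrative numbers only)
    p = 1e-2     # assumed failure-on-demand probability of one diesel generator
    beta = 0.05  # assumed fraction of failures arising from a shared cause
    n = 4        # number of redundant diesel generators

    independent_only = ((1 - beta) * p) ** n  # all four fail for unrelated reasons
    common_cause = beta * p                   # one shared cause disables them all

    print(f"independent failures: {independent_only:.1e}")  # ~8.1e-09
    print(f"common cause:         {common_cause:.1e}")      # 5.0e-04

Against a shared hazard the redundancy buys almost nothing; on these assumed numbers the common-cause term dominates by roughly five orders of magnitude, which is why diversity and physical separation of equipment matter as much as duplication.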
Nearly seven thousand miles away, there was growing alarm among US nuclear experts. As NRC staff arrived for work on Friday morning, March 11, the news was already “ominous.” Word of the station blackout and loss of reactor cooling quickly circulated. Drawing on over thirty years of post-Three Mile Island severe accident research, computer modeling, and training, some staff grasped the gravity of the situation from the scraps of information gleaned from the media. By early afternoon, confidence that emergency cooling would be restored turned to alarm. In nearby Delaware, the NRC’s PRA expert, Nathan Siu, attended a conference of international PRA experts. He recalled that those in attendance expected the plant would recover quickly and were “stunned” as conditions grew worse. Charles Casto, who would soon lead a post-accident NRC team in Japan, followed the news in disbelief. “I kept thinking, Soon they will restore emergency power; they’ll stop the accident from happening.” Grim reports of resident evacuations, a declaration of an emergency, and containment venting told one staffer, “This may get really ugly in the next few days.” Within a couple of hours, it already had. Staffer Daniel Mills wrote: “BBC is reporting radiation levels at reactor are 1000 times [above] normal. I feel like crying.” Ten hours later, Unit 1’s secondary containment building exploded from leaking hydrogen. Unit 3 followed two days later, Unit 4 the day after. Unit 2 did not explode; it burned. Core meltdowns occurred at Units 1–3. It was the first live broadcast of a major nuclear accident in history.72

The shock of the explosions and evacuations set off an international mobilization to help stabilize the plants and limit the escape of radiation. Experts also worried there might not be enough cooling water in Unit 4’s spent-fuel pool, filled with thousands of used fuel rods. With experts from industry, federal agencies, and other nations, over four hundred NRC staff supported the effort.73

For the nuclear community and PRA experts, introspection was in order. The quake and tsunami raised age-old questions about the wisdom of design basis accidents and the dividing line between credible and incredible events, especially when the incredible seemed to happen regularly. There was a disquieting familiarity to the surprise expressed by Japanese engineers, echoes of the Three Mile Island and Challenger disasters. “We can only work on precedent, and there was no precedent,” said one former director of the Fukushima site. The existing tsunami defenses, he admitted, provided a false sense of security. “When I headed the plant, the thought of a tsunami never crossed my mind.”74 A Fukushima shift manager said the tsunami was “bigger than what we expected, trained, prepared for, or believed was possible—it was unimaginable. We must always be prepared for the possibility that something much bigger can happen.”75

Critics and supporters of PRA debated anew the merits of risk-informed regulation as they mined assessments from Japanese, international, and US organizations. The risk triplet asked: What could go wrong? How likely is it? What are the consequences? Experts and policymakers had to answer a similar set of questions: Could an accident like Fukushima happen here? What might be the consequences? Could existing reactors be made safe enough?
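The risk triplet mentioned above, associated with the classic Kaplan-Garrick formulation, can be sketched in a few lines of code: a risk is not one number but a set of scenarios, each answering the three questions. The scenarios and numbers below are invented placeholders, not drawn from any actual PRA:

    from dataclasses import dataclass

    @dataclass
    class Scenario:
        what_goes_wrong: str       # What could go wrong?
        frequency_per_year: float  # How likely is it?
        consequence: float         # What are the consequences?

    risk = [
        Scenario("small pipe break, emergency cooling works", 1e-3, 0.0),
        Scenario("station blackout, power restored in time", 1e-5, 1.0),
        Scenario("station blackout, core damage", 1e-6, 1e6),
    ]

    # One crude "figure of merit": frequency-weighted consequence.
    print(sum(s.frequency_per_year * s.consequence for s in risk))

Collapsing the whole set into the single frequency-weighted number on the last line echoes the old “figure of merit” ambition; much of the post-Fukushima argument recounted here is over how much of the set that one number hides.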
In deterministic fashion, some nations opted to answer the last question first with a firm “no.” Germany and Belgium announced they would phase out their nuclear fleets, and other European nations reduced their plans for future reliance on nuclear. For some European experts, the accident discredited US-inspired PRA safety approaches even though Fukushima was built in the era of deterministic design. Olivier Gupta and Philippe Jamet of the French Nuclear Safety Authority argued that Fukushima demonstrated the limits of PRA. As a coordinator of the EU response to Fukushima, Gupta said, “What looked acceptable before Fukushima no longer looks that way after Fukushima.” Analysis of extreme events like Fukushima, he said, would be done “independent of the probability of such situations.” Jamet pointed out that a Fukushima-type event had been “completely neglected” in PRAs worldwide. Gupta asked a reporter, “What probability would you have given to having three core melts?”76 Javier Reig of the OECD Nuclear Energy Agency said, “Risk alone is not enough,” and predicted “the US will probably have to switch” back toward deterministic safety.77

Russia, too, criticized the made-in-America PRA approach to safety. Sergey Kirienko, the head of Russia’s state corporation Rosatom, told an international conference, “A probabilistic approach is no longer valid. We should go to a deterministic approach [that will] guarantee” safety in all circumstances. New Russian-designed reactors for sale to other countries, he said, met these requirements. NRC Commissioner George Apostolakis, a PRA expert from MIT and former member of the ACRS, said it was “utterly false” to claim, as Kirienko did, that Fukushima had invalidated PRA. Abandoning PRA, he said, “would roll back the cause of safety for several decades.”78

There were more specific criticisms of PRA. As experts admitted, the potential significance of a region-wide disaster was missing from even the best PRAs. The treatment of two related disasters, such as an earthquake and tsunami, needed greater consideration. Human factors were an important contributing factor in the accident’s progression. Despite their heroic efforts, operators at Fukushima had made mistakes, too. Their training had not prepared them for the extreme accident conditions they faced. During pre-accident station blackout drills, Fukushima instructors “froze” the simulation before they reached a loss of all DC battery power. When they really lost all DC power, one Fukushima operator joked, “Isn’t this where the instructors say, freeze?”79 In darkness, operators had to perform critical, unpracticed emergency actions while coping with physical exhaustion and worry over the fate of loved ones. PRAs did not sufficiently model the contributions to accidents from such error-prone conditions.80

Academic critics of PRA in the social sciences also joined the fray. When PRA and risk management emerged in the 1980s as disciplines, some political scientists and sociologists labeled the development negatively as the birth of a risk society controlled by experts. Charles Perrow, a Yale University sociologist, observed that the accident was symptomatic of a uniquely dangerous industry. Fukushima had occurred because of nuclear power’s “probabilistic” approach to safety. For dangerous technologies such as nuclear power, Perrow called for a “possibilistic” treatment of risk, in which any credible or incredible accident should be considered. PRAs were instruments of power, he argued, which gave experts control over those who lived with the risks. Fukushima, he concluded, demonstrated “the risk that we run when we allow high concentrations of energy, economic power, and political power to form.” Similar critiques were offered by scholars in international studies, political science, and history.81
Defenders of risk assessment countered that the problem at Fukushima was not that PRA was used too much, but that it had been used too little. Japan, it was claimed, did not have the regulatory structure or a strong commitment to PRA. “Japanese safety rules generally are deterministic because probabilistic methods are too difficult,” admitted a Japanese expert on BWRs. “The US has a lot more risk assessment methods.”82 In the United States, the creation of PRA, risk-informed regulation, and the backfit rule allowed the NRC to regulate “beyond-design-basis” accidents when an upgrade provided a “substantial” safety benefit. Japanese regulators did improve reactor safety, but Japan developed a substantial backfit regulation only after Fukushima. For example, while the Tokyo Electric Power Company modestly improved its tsunami defenses, Fukushima’s original design basis for tsunamis had not changed. It was still under examination by TEPCO in response to recently discovered evidence of large tsunamis striking the coastline near the plant in the previous three thousand years.83 As one US seismic expert explained of the slow response, “The Japanese fell behind. Once they made the proclamation that this was the maximum earthquake, they had a hard time re-evaluating that as new data came in.”84 Whether this was a uniquely Japanese problem was debatable. Analysis of emergent evidence followed a deliberate process in other countries, too.

Post-accident assessments argued in favor of increased use of risk insights for external events. An IAEA report issued in 2015 concluded that the Japanese regulatory system and the Tokyo Electric Power Company would have benefitted from external-hazard PRAs that measured up to IAEA standards and thoroughly considered common-cause failures and the vulnerability of diesel generators and electrical equipment to fires and floods.85 The IAEA and other lessons-learned assessments in the United States agreed that nuclear regulation needed to rely on more risk insights.86 PRAs grew in scope and sophistication, and were required more often by many regulatory systems, even in PRA-skeptical nations. Risk-informed oversight gained currency in Japan as that country adopted a version of the NRC’s Reactor Oversight Process.87
Fukushima raised difficult questions at a tender moment in the US nuclear renaissance. As US licensees lined up for twenty-year license extensions and power-output uprates, nuclear plant opponents brought Fukushima into the proceedings. For example, an expert supporting intervenors on the Pilgrim plant’s license renewal called on its owner, Entergy, to revise its PRA in view of Fukushima. Adding the accident into the worldwide operating data history, he pointed out, indicated that the probability of a core-damaging accident was an order of magnitude higher than claimed in Entergy’s PRA. This new estimate was based on a history of five core-damaging accidents—Three Mile Island, Chernobyl, and the three reactors at Fukushima—and over fourteen thousand years of worldwide plant operation. If correct, the world could expect one core-damage accident every six to seven years.88
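The intervenor’s arithmetic checks out; the count of roughly 440 power reactors operating worldwide in 2011 is an assumption supplied here for the last step, not a figure from the filing:

\[
\lambda \approx \frac{5\ \text{accidents}}{14{,}000\ \text{reactor-years}} \approx 3.6 \times 10^{-4}\ \text{per reactor-year},
\qquad
T \approx \frac{1}{\lambda \times 440\ \text{reactors}} \approx 6.3\ \text{years},
\]

consistent with the one-accident-every-six-to-seven-years claim.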
Fukushima, industry critics argued, also demonstrated that PRAs were still too incomplete and of limited quality to use extensively in regulatory decisionmaking. Congressman Edward Markey of Massachusetts and his staff argued that Fukushima revealed that “flawed assumptions and under-estimation of safety risks are currently an inherent part of the NRC regulatory program,” particularly risk-informed regulations of emergency diesel generators and hydrogen control equipment. His office also produced a report that claimed the NRC had failed to keep current with recent findings in seismology indicating that earthquake risk in metropolitan areas such as Boston and New York presented “a much higher probability of core damage . . . than previously believed.”89 While recognizing that PRAs “can be a valuable tool when used appropriately,” the Union of Concerned Scientists cautioned, “the NRC should not make decisions about reactor safety using probabilistic risk assessments (PRAs) until it has corrected its flawed application.” Fukushima proved “the peril the public can face when regulators make safety decisions based on overconfidence in [quantitative] assessments of low risk.” Industry needed to raise the quality of its PRAs to account for “all internal and external events that could lead to an accident.”90

While there were key differences in nuclear regulation between Japan and the United States, particularly the NRC’s more independent authority as a regulatory commission, there were parallels too. Some Japanese regulations were close cousins of the AEC and NRC versions. Fukushima’s design was a lot like that of more than two dozen United States–based BWR plants. While US reactors on the west coast had robust defenses against seismic and tsunami hazards, two events in the summer of 2011 served as reminders that earthquakes and floods happen in surprising places. First, a rare magnitude 5.8 earthquake struck Virginia near the North Anna nuclear power station. The plant shut down safely with no significant damage. The quake, however, exceeded the plant’s licensed design basis earthquake. As part of its accident sequence precursor program, the NRC concluded the event had a greater than one in ten thousand probability of causing core damage, and it initiated a formal analysis and review by the licensee.91 Additionally, there was no chance of a tsunami in Nebraska, but the Missouri River crested within six feet of the Fort Calhoun plant’s design-basis flood. Precursor flooding events, such as a 1999 event at France’s Blayais Nuclear Power Plant, also gained attention.92

THE NRC RESPONSE TO FUKUSHIMA
Within weeks of the accident, the NRC established a Near-Term Task Force to review the implications of the accident and report back to the commission. Although designed to provide a rapid response to the accident, the task force concluded that no emergency actions were necessary. The NRC’s current regulations, plant design, and licensee ability to mitigate severe accidents meant continued operation of nuclear power plants did not “pose an imminent risk to the public.”93 To bolster the layers of defense in depth most challenged by the Fukushima accident, the task force made ten specific recommendations in three broad areas: 1) improve the ability of licensees to protect against fuel-damaging external events, such as earthquakes and floods; 2) improve the ability of operators to mitigate the consequences of station blackouts and to control hydrogen; and 3) strengthen emergency preparedness to address prolonged station blackouts, events involving multiple units, and radiation monitoring.94

The task force also recommended a restructuring of NRC regulations. Fukushima, the task force noted, was emblematic of the informal beyond-design-basis category of events the AEC and NRC had developed since the anticipated transients without scram (ATWS) issue first emerged in 1969. Regulators had created the beyond-design category “piece-by-piece over the decades. . . . The result is a patchwork of regulatory requirements and other safety initiatives” including “voluntary” industry initiatives, especially for coping strategies during severe accidents. Dissatisfied with relying on “industry initiatives for a fundamental level of defense-in-depth,” the task force sought to formalize and extend the beyond-design-basis category. It also called for a “more balanced application” of defense in depth and use of “state-of-the-art” licensee PRAs to assess severe accident risk. This extended beyond-design-basis category, the task force argued, would create a coherent program “for dealing with the unexpected.”95

The task force report landed before the staff and commissioners at a time of growing internal strife. NRC Chairman Gregory Jaczko was at odds with the other four commissioners on issues of policy and
management. Commission disputes attracted extensive press and congressional attention. Jaczko came to the NRC after serving as a staffer to Congressman Edward Markey and Nevada Senator Harry Reid, the latter a fierce opponent of the proposed Yucca Mountain high-level waste repository in his home state. After Congress defunded the Yucca licensing proceedings, Jaczko was criticized for his handling of the closure of NRC activity on the project. Intra-commission policy disagreements also grew over the design certification of the new Westinghouse AP1000 reactor and over the approval of new reactor construction and operating licenses. Jaczko’s leadership came under fire, especially during the NRC’s response to the Fukushima accident. The other commissioners wrote to the White House expressing their dissatisfaction with the chairman, who resigned in mid-2012.96

Congress also disagreed over how the NRC should respond to the task force report on Fukushima. California Senator Barbara Boxer called for the “prompt” implementation of the task force recommendations. “Any stalling will not be viewed favorably by the American people. . . . Their support for nuclear power is waning.” Senator Bernie Sanders of Vermont agreed: “Delay is not an acceptable option.”97 Supporters of nuclear power called for a more deliberate consideration of the issues. They offered some support for specific task force recommendations to upgrade plant hardware and enhance on-site emergency equipment but were skeptical of the task force’s more sweeping call for regulatory reform. James Inhofe of Oklahoma warned that the NRC needed “to take time to learn the right lessons” from Fukushima rather than act rashly, as he thought it had done after the Three Mile Island accident. Senator John Barrasso of Wyoming criticized the task force report for “a lot of recommendations for more Washington red tape.” The NRC should not be distracted redesigning “a regulatory framework that has served us very well.”98

Despite their disagreements, the commission unanimously directed the staff to prioritize the task force recommendations into three tiers. Tier 1 activities had substantial safety benefit and should begin without unnecessary delay. Tier 2 and 3 activities needed further technical assessment or resources.99 In March 2012, the staff issued a series of orders and requests for information from licensees. For example, spent fuel pools were to be outfitted with instrumentation to monitor pool water level during a beyond-design-basis event. Licensees were also requested to reevaluate vulnerabilities to earthquakes and floods and perform system “walkdowns” to identify flaws and degraded plant conditions.100
The accident highlighted the importance of mitigating strategies and emergency equipment in coping with severe accidents. At the less severely damaged Fukushima Daini plant, south of Fukushima Daiichi, operators had been able to improvise long enough to prevent core damage. Even at Fukushima Daiichi, where operators made some mistakes, they forestalled core damage longer than anticipated in accident models. Counterintuitively, it was possible to be too conservative for the good of safety. By assuming a meltdown would occur faster than in reality—that it was “game over” early on—the pessimistic assumptions in design basis accidents might discourage training operators for severe accident situations, as was the case with the loss-of-DC-power training mentioned previously.101

These insights into the value of coping measures validated NRC actions taken after the September 11, 2001, attacks on the Pentagon and the World Trade Center. The 9/11 attacks demonstrated that the risk of terrorist attacks and sabotage might come from unimagined and unquantifiable sources, but power plant personnel should still possess skills and resources to cope. The agency required multiple improvements to nuclear power plant security, portable emergency equipment, and strategies to mitigate the potential loss of power and the release of radiation.102 After Fukushima, the NRC staff built on the post-9/11 regulations and issued an order that licensees implement further coping strategies for a station blackout to protect the reactor core, containment integrity, and spent fuel pools.

The NRC’s own research supported post-Fukushima lessons about excessive conservatism. The agency’s SOARCA study found that the large-break loss-of-coolant accident was too unlikely to influence calculations of accident probabilities. Even in situations where operator action was unable to affect the progress of an accident, containment failure and the escape of radiation evolved more slowly than in previous studies. Emergency responders had more time to carry out an orderly evacuation, and there was low risk of short-term deaths. Long-term cancer fatality risk was “millions of times lower” than overall cancer rates. Nuclear plants easily met the safety goals adopted by the NRC in 1986. Economic and social consequences, such as the permanent loss of property and dislocation of communities, could be more severe than previously estimated.103

The nuclear industry organized a “diverse and flexible mitigation capability” strategy called FLEX to ensure that there were multiple sources of power and cooling for the reactors and spent fuel pools. Exelon, an owner of seventeen nuclear power plants, estimated that it
would spend $350 million to implement the plan. There would also be regional backup centers that would cost $30–40 million to establish, plus $4 million in annual operating costs.104 The NRC strengthened previous efforts to upgrade containment venting with an order that required a hardened venting system for BWRs with Mark I and Mark II containments. In June 2012, the NRC modified the order to ensure that the vents would remain functional even after extensive core damage and high pressures in the containment building.105

The accident also compelled upgrades to some licensee PRAs. Most licensees had not made significant changes in assessing external events since the early 1990s, and only 40 percent had conducted a seismic PRA. Even fewer had done one for flooding hazards. The GAO found that more than half of the experts it interviewed believed that licensees should expand their use of PRAs in assessing natural hazards. The NRC staff required a seismic risk analysis at fifty-nine operating plants in the eastern and central United States. Twenty-seven licensees reported that additional risk analysis was necessary. Numerous other nations also upgraded their seismic PRAs.106 The NRC and industry also agreed to a quantitative approach to credit post-Fukushima mitigating strategies in risk-informed decisions.

Fukushima fed into increasingly negative assessments of nuclear power’s viability in competitive electric power markets. At an industry meeting in May 2011, executives were still confident in the market for new reactors, even with a Great Recession dip in electricity consumption. The CEO of Mitsubishi Heavy Industries said Japan and the United States “have the responsibility to proceed” with new nuclear construction projects “because we have a lot of experience and technology.” An official with the French firm Areva, however, said his company had paused establishing a reactor component manufacturing facility in Virginia “because the economy slowed in the U.S. dramatically two years ago.” At the meeting there was also mention of a new wrinkle in power markets: falling natural gas prices. An executive from Dow Chemical noted the influx of shale gas produced cheaply by the new “fracking” technology, but he was skeptical of the “dash to gas. . . . The need for more nuclear power cannot be discounted.”107

By late 2011, the economic outlook for new nuclear plants had deteriorated again. The International Energy Agency warned that financing for new projects might dry up in the face of mounting regulatory burden and longer lead times for construction.108 Although their fuel costs were higher than nuclear power’s, natural gas plants cost just a fifth of
nuclear’s “overnight” construction costs.109 In the United States, the drive to build new nuclear power plants halted. The NRC had applications for twenty-eight new reactors, but in early 2012 power companies had backed out of two projects in Texas and one in Maryland. NRG Energy CEO David Crane attributed his company’s withdrawal to the “multiple uncertainties” induced by Fukushima.110 Exelon concluded that “new merchant nuclear power plants in competitive markets [are] uneconomical now and for the foreseeable future.”111 Prospects for new plants survived in South Carolina and Georgia, which had regulated power markets that guaranteed a return on investment, although the South Carolina project was later cancelled, too.112

The threat to nuclear power from the fracking revolution in fossil fuels eventually spread to operating reactors, especially small single units. The generating costs for a lone nuclear power plant were about 35 percent higher than those of a multi-unit site. As commercial gas prices fell by almost half, profits became losses. In 2011, the owner of the Kewaunee nuclear power plant in Wisconsin obtained a twenty-year license renewal. A year later, it could not find a buyer. “It’s very difficult to take a 50-year view as a merchant [plant],” said one industry official.113

Fukushima upgrades also placed demands on an NRC staff still busy with applications for twenty-year license renewals and power uprates. Long-term projects on risk-informed regulation had less priority. With no nuclear renaissance in the offing, Congress took a more skeptical view of the NRC budget, and the agency launched an initiative to improve efficiency and shed non-essential workload.114 The Fukushima Task Force’s recommendation for a new regulatory category of “extended design-basis requirements” received more scrutiny. The proposal had been picked up by a different task force on risk management, established just before the Fukushima accident and chaired by NRC Commissioner George Apostolakis, a PRA expert. That task force renamed the proposal a “design-enhancement category” and rolled it into an ambitious effort to finally resolve the “patchwork” nature of NRC regulation. The Apostolakis task force called for a Risk Management Regulatory Framework “establishing a common language of risk management” and balancing defense in depth with insights from risk assessment.115 There was resistance to imposing an ambitious regulatory change with uncertain safety benefits and potentially burdensome new requirements. Citing limited resources, NRC staff recommended against implementation, and the commission voted against the proposal in 2016.116
Work on risk-informed regulations was trimmed back, even the highly touted rulemaking on the large-break loss-of-coolant accident. In 2012, the NRC staff requested commission approval to withdraw the rule so that it could be reviewed in the context of the Fukushima task force report and as part of a broader initiative to reduce staff workload. The withdrawn rule could be revised and resubmitted, but prospects for the NRC’s most ambitious risk-informed regulation seemed dim. NEI’s Biff Bradley said the industry was “extremely disappointed.” “It seems crazy to . . . come that close to the finish line and then just boot it off into oblivion.” Critics of nuclear power praised the decision. Edwin Lyman of the Union of Concerned Scientists said the rule was poorly formulated and based on “ill-defined” criteria. “It’s not the right time to be thinking about reducing safety margins like that.”117

In 2016, the commission accepted a staff recommendation to discontinue the rulemaking. Lyman reiterated his organization’s assessment of PRA, consistent since it had first criticized WASH-1400 in 1974. PRA was still not ready. Efforts to improve it had “floundered. . . . We don’t think that probabilistic risk assessment is at a state where it can be used to justify risk-informed regulations, especially something as fundamental as a large-break LOCA.” The “absolute values” produced by PRAs were still too unreliable.118 After forty years of development, debate, and disagreement, proponents and critics of PRA remained divided along familiar lines.

THE SEARCH CONTINUES
In 2013, a discouraged Anthony Pietrangelo, NEI’s vice president, wrote NRC Chairman Allison Macfarlane. Risk-informed regulation had “stagnated” after its promising start in 1998. He blamed the industry’s waning interest on the negative political climate for nuclear power, the “Fukushima Fallacy” that the accident undermined the case for PRA, and a cultural bias among NRC staff for “deterministic thinking” and a reliance on qualitative factors even when quantitative information was available. He pointed to what he believed was a failed effort to produce a practical risk-informed fire protection regulation, NFPA 805. “Political pressure,” he concluded, had produced a fire regulation that was “laced with conservatisms” and a process that was too “protracted, costly, and unstable” to implement. “NFPA-805’s Chilling Effect” had diminished industry confidence in risk-informed regulation. To regain momentum, the NEI vice president called for a collection of new initiatives.119
The knotty problem Pietrangelo pointed to, balancing qualitative and quantitative factors, has vexed risk-informed regulation for many years. The nuclear industry pressed the position that qualitative factors should supplement quantitative factors in regulatory analysis, rather than the other way around.120 That position has elicited spirited debate among commissioners that will likely continue.121 Whatever balance may be struck, risk-informed regulation remains the NRC’s primary solution to reconciling its need for safety, efficiency, and transparency. In the years following Pietrangelo’s letter, there was an uptick in PRA-related licensing actions, particularly regarding regulations on fire protection and the special treatment of safety equipment. The latter regulation had been issued in 2004 and languished for over a decade. The successful piloting of the special treatment regulation at the Vogtle nuclear plant in Georgia produced a surge of new applications. Risk-informed regulation is still nuclear power’s primary, if elusive, option.122

The seventy-year campaign to apply PRA to nuclear safety has left a substantial legacy. Beginning in the insulated setting of AEC plutonium production reactors, engineers turned to risk quantification because qualitative safety tools were sometimes inadequate to determine when reactors were safe enough. Determinism, defense in depth, and design-basis accidents established a remarkable safety foundation that remains in place today, but they were born of necessity in an era uncertain about quantitative probabilities. Risk quantification methodologies were one of the intellectual technologies AEC engineers deployed to understand and solve the technical challenges the Three Ds could not. As nuclear power passed from secrecy to public application, the external motivators of political power, public opinion, and industry expectations buttressed the technical case for developing a PRA like the Rasmussen Report. While WASH-1400 initially proved a political disappointment, it has allowed regulators to accommodate rising public expectations for safety, made decisionmaking more transparent, and influenced nuclear power’s safety philosophy. As experience has shown, the most important causes of reactor accidents have been more varied than those considered by design basis accidents. WASH-1400’s insights enabled the regulation of beyond-design-basis events and reshaped defense in depth from a design concept into a multilayered strategy of technology, humans, and organizations. Despite its flaws, WASH-1400’s methodology and key insights endured. As the nuclear industry found practical applications for PRA in plant operations, a model for risk-informed regulation emerged that provided benefit to regulator and licensee alike.123
Although its probability estimates were highly controversial, WASH-1400 seems to have gotten the big questions right. It observed that accidents were more likely but less consequential than previous approximations had suggested. Historical experience and later studies are reasonably consistent with WASH-1400 estimates, though the later studies included consideration of more internal and external events. WASH-1400’s low consequence estimates were also substantiated. Subsequent research estimates and the consequences of Three Mile Island, Chernobyl, and Fukushima indicated that the tens of thousands of fatalities predicted in the 1957 WASH-740 study were too pessimistic. The authors of WASH-1400, however, might have given greater consideration to the disruptive impacts of land contamination and forced relocations on local populations, as happened at Fukushima.124

As risk assessors pushed back on the uncertainties that dogged early PRAs, absolute results have crept into some regulatory decisions. Work continues to reduce uncertainties, establish appropriate conservatisms, and balance deterministic and probabilistic approaches to safety.125 Some practitioners, such as John Garrick, believe risk assessment methodology, comprehensiveness, and transparency make it possible to quantitatively assess an accident hypothesis’s credibility, a term criticized and avoided since the 1960s as an arbitrary, opaque engineering judgment.126

The path to PRA and risk-informed regulation forged by WASH-1400 influenced safety regulation and understanding of risk in ways that would have pleased and disappointed some of its authors and supporters. PRA altered the scientific and social science understanding of risk, risk perception, and the role of bias in expert judgments. It spurred risk management and assessment methods in fields as diverse as the petroleum industry, aerospace, medical devices, and bioterrorism. It found unanticipated applications in plant operations and maintenance. PRA helped solve a techno-diplomatic dispute over Soviet-designed reactors, but it was no panacea. The NRC’s conflict with the EPA over site decommissioning indicated that quantitative assessments had to co-exist in policy decisions with political, social, and other qualitative factors.

There remain “grand challenges” to further PRA success and risk-informed regulation. It is still difficult to capture the contribution to accident risk from errors of commission (an operator who takes a wrong action), safety culture, and security events, such as sabotage. Finding a clear linkage in the ROP between safety culture and safety performance has been challenging. Comprehensive PRAs are expensive, and the benefits can take years to realize. This has left the NRC and the
nuclear industry at odds over appropriate plant PRA quality standards for regulatory use and the balance between defense in depth and risk insights in regulations.127 Even as the quality of PRAs improved, their numbers had limited communication value. Public views of nuclear power have responded less to risk communication than to events. Support for nuclear power rose with industry success in the 1990s and plunged after accidents like Three Mile Island, Chernobyl, and Fukushima. To trust the numbers, the public had to trust the experts through an established record of safe operation.

Risk-informed regulation remains a central strategy for change at the NRC. At an NRC conference held in 2016, Chairman Stephen Burns told the audience, “Part and parcel of everything we do is an assessment of risk.” Burns arrived at the NRC in 1978, just before the Three Mile Island accident. He rose in the ranks from a young staff attorney to become general counsel before joining the commission and came to see risk assessment as not just a technical tool but as a necessary public process. “There is a considerable level of risk aversion, of fear, even paranoia” of nuclear power that stemmed from a lack of trust in the NRC and nuclear industry. The “regulatory craft,” he believed, had to build public confidence by “constantly reassessing how safe is safe enough.”128 Quantifying “safe enough,” as envisioned by Chauncey Starr, aimed to end the nuclear power debate; it became instead a continuous search for safety.

The search continues. Despite its setbacks and the expense of a good PRA, the NRC’s commitment to risk assessment and risk-informed regulation reflects PRA’s ability to make accident risk knowable, improve defense in depth, and reconcile the need for safety and efficiency.129 The NRC’s Nathan Siu maintains, “Although a PRA aims to identify ‘what can go wrong,’ it actually is a statement of optimism—despite the complexities of real life and the uncertainties in the future, we understand things well enough to identify what’s important to safety and can state this understanding in a way useful to decision making.”130 But this pragmatic proponent’s view continues to be tested by persistent questioning. In 2015 physicist Frank von Hippel, the influential critic of WASH-1400 and skeptic of PRA, assessed the changed regulatory landscape since the 1970s. “We won the battle and lost the war” on PRA, he conceded, but “I’m still fighting the war.”131 Von Hippel’s sentiment reflects an enduring reality. PRA’s progress will be tempered by those who doubt whether experts can and should quantify risk.
Abbreviations
ACRS
Advisory Committee on Reactor Safeguards
AEC
United States Atomic Energy Commission
AIF
Atomic Industrial Forum
ALARA
As Low as is Reasonably Achievable
APS
American Physical Society
ATWS
Anticipated Transient Without Scram
BORAX
Boiling Water Reactor Experiment
BRC
Below Regulatory Concern
BWR
Boiling Water Reactor
CDF
Core Damage Frequency
CEE
Central and Eastern Europe
CEQ
President’s Council on Environmental Quality
CIL
Critical Issues List
DBA
Design-Basis Accident
DET
Diagnostic Evaluation Team
DOE PRRC
DOE Public Reading Room Catalog, http://reading-room. labworks.org/Catalog/Search.aspx
ECCS
Emergency Core Cooling System
EPA
Environmental Protection Agency
ERDA
Energy Research and Development Administration
221
222
| Abbreviations
EU  European Union
FAA  Federal Aviation Administration
FDA  Food and Drug Administration
FENOC  FirstEnergy Nuclear Operating Company
FERC  Federal Energy Regulatory Commission
FMEA  Failure Mode and Effects Analysis
GAO  General Accounting Office
GE  General Electric Company
HPS  Health Physics Society
HRO  High Reliability Organizations
IAEA  International Atomic Energy Agency
ICRP  International Commission on Radiological Protection
INPO  Institute of Nuclear Power Operations
JCAE  Joint Committee on Atomic Energy
JCAE Records  Joint Committee on Atomic Energy, Record Group 128
LERF  Large Early Release Frequency
LNT  Linear No-Threshold
LOCA  Loss-of-Coolant Accident
LOFT  Loss-of-Fluid Test
LOV  Loss of Vehicle
MCA  Maximum Credible Accident
MCL  Maximum Contaminant Level
MIT  Massachusetts Institute of Technology
MTR  Materials Test Reactor
NASA  National Aeronautics and Space Administration
NCRP  National Council on Radiation Protection and Measurements
NEA  Nuclear Energy Agency
NEPA  National Environmental Policy Act
NOMAC  Nuclear Organization and Management Analysis Concept
NRC  Nuclear Regulatory Commission
NRC ADAMS  US Nuclear Regulatory Commission, ADAMS Public Documents Main Library website, http://www.nrc.gov/reading-rm/adams.html
NRC Legacy  US Nuclear Regulatory Commission, Public Documents Room, Legacy Library fiche collection
OSHA  Occupational Safety and Health Administration
PLG  Pickard, Lowe, and Garrick
PRA  Probabilistic Risk Assessment
PSA  Probabilistic Safety Assessment
PWR  Pressurized Water Reactor
RBMK  Reaktor Bolshoy Moshchnosty Kanalny (high-power channel reactor)
RDT  Division of Reactor Development and Technology
RELAP  Reactor Loss-of-Coolant Accident Program
REM  Roentgen Equivalent Man
ROP  Reactor Oversight Process
SAIC  Science Applications International Corporation
SALP  Systematic Assessment of Licensee Performance
SATAN  System Accident and Transient Analysis of Nuclear Plants
SNAP  Systems Nuclear Auxiliary Power Program
SSCs  Systems, Structures, and Components
TVA  Tennessee Valley Authority
UCS  Union of Concerned Scientists
VVER  Vodo-Vodyanoi Energetichesky Reactor (water-water energy reactor)
Notes
ACKNOWLEDGMENTS
1. Wellock, Critical Masses; Preserving the Nation.
2. See Wellock, "Social Scientists in an Adversarial Environment"; "Engineering Uncertainty and Bureaucratic Crisis at the Atomic Energy Commission"; "The Children of Chernobyl"; "A Figure of Merit."

PREFACE
1. AEC, The Nuclear Industry—1970, November 1970, accession number 9903300155, US Nuclear Regulatory Commission, Public Documents Room, NRC Legacy.
2. Pope, "'We Can Wait. We Should Wait,'" 349–73; Walker, Containing the Atom, 406.
3. Walker, Containing the Atom, 400, 403–5, 408; Glenn Seaborg to William F. Ryan, March 14, 1969, 8110310157, NRC Legacy; Peter A. Morris to Russell Hatling, April 24, 1970, accession number ML112910482, US Nuclear Regulatory Commission, NRC ADAMS; "Action Summary of Commissioners' Meeting with the Advisory Committee on Reactor Safeguards, Thursday, October 9, 1969, 11:10 A.M. Room 1046, D.C. Office," October 13, 1969, 9210120206, NRC Legacy.
4. Wellock, "Engineering Uncertainty and Bureaucratic Crisis at the Atomic Energy Commission," 846–84.
5. Stephen H. Hanauer, "Notes on MIT Study Proposal," March 22, 1972, in Ford, A History of Federal Nuclear Safety Assessments, 49.
6. A "figure of merit" is a common statistical term. It was used by nuclear experts to express their goal of developing an overall estimate of a reactor accident risk—the product of accident probability and consequences. See Holmes & Narver, Proposal for the U.S. Atomic Energy Commission; Garrick, Reliability Analysis of Nuclear Power Plant Protective Systems, iv.
7. Stephen H. Hanauer to Howard Raiffa, March 30, 1972, box 8, Manson Benedict Papers; Stephen H. Hanauer and Peter A. Morris, "Technical Safety Issues for Large Nuclear Power Plants," July 28, 1971, 8707170357, NRC Legacy.
8. Hays, "The Evolution of Probabilistic Risk Assessment in the Nuclear Industry," 131; Keller and Modarres, "A Historical Overview of Probabilistic Risk Assessment Development," 271–85; Loewen, "To Understand Fukushima We Must Remember Our Past." The International Atomic Energy Agency suggested the Japanese nuclear industry and regulators needed to use PRA more comprehensively for large earthquakes and tsunamis; see IAEA, IAEA International Peer Review Mission.
9. The relationship of PRA to risk society scholarship has received considerable attention. In addition to Perrow's classic, Normal Accidents, see Downer, "Disowning Fukushima," 291–94; Perkins, "Development of Risk Assessment for Nuclear Power," 274–77; Clarke, Worst Cases, location 724–32, Kindle; Miller, "The Presumptions of Expertise," 164, 166; Mazur, "Bias in Risk-Benefit Analysis," 28–29; Shrader-Frechette, Risk and Rationality, 5; Dumas, Lethal Arrogance, 292; Hellstrom, "Science-Policy Interaction, Constitutive Policy Making and Risk," 6–10; Boland, "The Cold War Legacy of Regulatory Risk Analysis"; and Rip, "The Mutual Dependence of Risk Research and Political Context." The scholarship on disaster, risk, and power is extensive. For two reviews, see Vaughan, "The Dark Side of Organizations"; Gephart, Van Maanen, and Oberlechner, "Organizations and Risk in Late Modernity."
10. Garrick and Christie, "Probabilistic Risk Assessment Practices for Nuclear Power Plants"; Hays, "The Evolution of Probabilistic Risk Assessment in the Nuclear Industry," 120–23; Garrick, "PRA-Based Risk Management," 49; US NRC, Perspectives on Reactor Safety, 1.5–1–1.5–2; Keller and Modarres, "A Historical Overview of Probabilistic Risk Assessment Development and its Uses in the Nuclear Power Industry."
11. Kletz, "The Origin and History of Loss Prevention," 110–11; Fullwood, Probabilistic Safety Assessment in the Chemical and Nuclear Industries, 3–4; Wreathall and Nemeth, "Assessing Risk: The Role of Probabilistic Risk Assessment (PRA) in Patient Safety Improvement"; Mosleh, "PRA: A Perspective on Strengths, Current Limitations, and Possible Improvements," 2; Baecher, Annotated Readings in the History of Risk Analysis in Dam Safety; National Research Council, Department of Homeland Security Bioterrorism Risk Assessment, 20–23; Nash, "From Safety to Risk," 2. Outside of the nuclear power industry, PRA has had its greatest influence on the National Aeronautics and Space Administration (NASA). See Vesely, Fault Tree Handbook with Aerospace Applications; NASA, NASA System Safety Handbook.
12. On disaster and risk studies in the social sciences and history since Fukushima, see Knowles, "Learning from Disaster?," 775; Pritchard, "An Envirotechnical Disaster," 222–23; Hamblin, "Fukushima and the Motifs of Nuclear History," 286; Downer, "Disowning Fukushima," 291–94.
13. Porter, Trust in Numbers, xi, 115, 189.
CHAPTER 1
1. Kaplan and Garrick, "On the Quantitative Definition of Risk," 11–27.
2. Solomon et al., Estimate of the Hazards to a Nuclear Reactor from the Random Impact of Meteorites.
3. Into the 1960s, the term Maximum Hypothetical Accident was still used on occasion, especially where applicants sought to demonstrate that no conceivable accident would result in a lethal exposure to the public. It was also used by some applicants interchangeably with a Maximum Credible Accident. See University of Missouri, Preliminary Hazards Report: University of Missouri Research Reactor, March 1961, R-103, ML16215A374, NRC ADAMS; Description and Safety Analysis for a Conceptual Unit at Indian Point, Volumes I and II, Docket 50–247, October 1, 1965, 8110210085 and 8110210077, NRC Legacy.
4. In 1960, the AEC defined the MCA as one "which would result in the most hazardous release of fission products, the potential hazard from this accident would not be exceeded by that of any other accident whose occurrence during the lifetime of the facility would appear to be credible." See "Reactor Site Criteria: Report to General Manager, AEC-R 2/19," December 10, 1960, ML021960199, NRC ADAMS.
5. The explosions considered possible at the Hanford plants were not of the force of an atomic bomb, but more like a major chemical explosion. See John A. Wheeler, "Hazard Following Explosion at Site W with note by A. H. Compton, CH-474," February 17, 1943, accession no. NV0180023, Department of Energy (DOE) Nuclear Testing Archive, Las Vegas, NV; J. H. Hall to H. M. Parker, "Review of Reports Concerning Radiation Hazards in Event of Catastrophe, HW-7–632," September 6, 1944, accession no. DDRS D197208259, DOE PRRC. While Hanford reactors included safety features to protect personnel, the site's isolation mostly protected the public; see US Department of Energy, Historic American Engineering Record B Reactor, 48–51. On the safety design and construction approach used by Du Pont at Hanford, see Carlisle and Zenzen, Supplying the Nuclear Arsenal, 26–45; Carlisle, "Probabilistic Risk Assessment in Nuclear Reactors," 924–25.
6. Mazuzan and Walker, Controlling the Atom, 61.
7. Summary Report of Reactor Safeguard Committee, WASH-3 (Rev.) (Oak Ridge, TN: US Atomic Energy Commission, March 31, 1950), 9–10, 14, ML15113A624, NRC ADAMS.
8. Summary Report of Reactor Safeguard Committee, WASH-3 (Rev.), 18; Mills, A Study of Reactor Hazards. See also McCullough, Mills, and Teller, "The Safety of Nuclear Reactors."
9. Johnston, "Making the Invisible Engineer Visible," 562.
10. H. M. Parker, "Report of Reactor Accidents, HW-10079," June 1, 1948, DA02604524, DOE PRRC; H. M. Parker to File, "Summary of Hanford Works Radiation Hazards for the Reactor Safeguard Committee, HW-10592," July 27, 1948, D1972263496, DOE PRRC.
11. E. Teller, M. Benedict, J. Kennedy, J. Wheeler, and A. Wolman, "Review of Certain Hanford Operations, Reactor Safeguard Committee Hanford Meeting, June 14, 15, and 16, 1948," D8741088, DOE PRRC.
12. E. Teller, M. Benedict, J. Kennedy, J. Wheeler, and A. Wolman, "Review of Certain Hanford Operations, Reactor Safeguard Committee Hanford Meeting, June 14, 15, and 16, 1948," D8741088, DOE PRRC; Harry A. Kramer, "A Study of the Effects of a Disaster at Grand Coulee Dam upon the Hanford Works, HW-15882," February 3, 1950, D197308534, DOE PRRC.
13. G. M. Roy, E. L. Armstrong, G. L. Locke, and C. Sege, "Summary Report of Reactor Hazards for Twin 100-K Area Reactors HW-25892," October 10, 1952, D8442387, DOE PRRC; Johnston, "Making the Invisible Engineer Visible," 562.
14. E. Teller, M. Benedict, J. Kennedy, J. Wheeler, and A. Wolman, "Review of Certain Hanford Operations, Reactor Safeguard Committee Hanford Meeting, June 14, 15, and 16, 1948," D8741088, DOE PRRC.
15. M. Smith to File, "Application of Reactor Safeguard Committee Formula, HW-24191," April 22, 1952, D198097358, DOE PRRC; G. L. Locke and C. Sege, "Summary Report of Reactor Hazards for Twin 100-K Area Reactors HW-25892," October 10, 1952, D8442387, DOE PRRC; E. Teller, M. Benedict, J. Kennedy, J. Wheeler, and A. Wolman, "Review of Certain Hanford Operations, Reactor Safeguard Committee Hanford Meeting, June 14, 15, and 16, 1948," D8741088, DOE PRRC; H. M. Parker to File, "Summary of Hanford Works Radiation Hazards for the Reactor Safeguard Committee, HW-10592," July 27, 1948, D1972263496, DOE PRRC; W. S. Nechodom, "Comment Issue: Hanford Reactor Safety System Philosophy and A Review of Ball-3X System Requirements, HW-74598," August 9, 1962, D197187104, DOE PRRC.
16. H. M. Parker to File, "Summary of Hanford Works Radiation Hazards for the Reactor Safeguard Committee, HW-10592," July 27, 1948, D1972263496, DOE PRRC; M. S. Brinn et al., "Reactor Safety Determination, Savannah River Plant," April 15, 1953, DPW-53–593-Del.Ver., SROSTI93053168, DOE OpenNet. On the political and economic forces that influenced Hanford-reactor design, see Carlisle and Zenzen, Supplying the Nuclear Arsenal, 46–66, 92–130.
17. H. M. Parker, "Potential Damage from A K-Reactor Accident, HW-29387," September 18, 1953, D198159332, DOE PRRC (emphasis in the original).
18. J. W. Healy, "Computations of the Environmental Effects of a Reactor Disaster, HW-30280," December 14, 1953, D198160849, DOE PRRC; Parker and Healy, "Environmental Effects of a Major Reactor Disaster, P/482."
19. Reactor Safeguard Committee, WASH-3, 49.
20. D. F. Bunch to Distribution, "ATWS Distribution List," March 14, 1977, 8104170003, NRC Legacy.
21. E. Teller, M. Benedict, J. Kennedy, J. Wheeler, and A. Wolman, "Review of Certain Hanford Operations, Reactor Safeguard Committee Hanford Meeting, June 14, 15, and 16, 1948," D8741088, DOE PRRC; O. H. Greager to D. G. Sturges, "Recommendations by Advisory Committee on Reactor Safeguards, HW-36621," May 6, 1955, D3878586, DOE PRRC. As early as 1948, the Safeguard Committee used probabilistic estimates of seismic hazards to accept the design of a research reactor at Brookhaven National Laboratory. See Brookhaven National Laboratory, Supplement to Report on the Brookhaven Nuclear Reactor, 12.
22. E. J. Bloch to David F. Shaw, "Fully Enriched Hanford Reactions, HAN-54397," April 2, 1954, D198079178 (attachment D198079264), DOE PRRC; E. R. Price to E. J. Bloch, "ACRS Report on Modifications to Hanford Reactors, HAN-68122," January 23, 1958, D198082548, DOE PRRC; G. M. Roy, E. L. Armstrong, G. L. Locke, and C. Sege, "Summary Report of Reactor Hazards for Twin 100-K Area Reactors, HW-25892," October 10, 1952, D8442387, DOE PRRC; A. B. Greninger to J. E. Travis, "Review of Statements by Advisory Committee on Reactor Safeguards Pursuant to December 20, 1957 Meeting, HW-54853," February 5, 1958, D198081874, DOE PRRC.
23. Hanford had five thousand accident-free days of operation, but statistically that meant engineers could only estimate with confidence 2,500 days (less than seven years) between disasters. See C. A. Bennett to A. B. Greninger, "Evaluation of Probability of Disasters, HW-28767," July 20, 1953, D8451637, DOE PRRC.
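A bound of this form can be recovered with a standard zero-failure calculation; the following is a minimal sketch assuming a simple Poisson failure model, not a reproduction of Bennett's method in HW-28767, which may have proceeded differently. With no disasters observed over $T = 5{,}000$ operating days, the upper confidence bound $\lambda_U$ on the disaster rate at confidence level $1-\alpha$ satisfies
$$ e^{-\lambda_U T} = \alpha \quad\Longrightarrow\quad \lambda_U = \frac{-\ln \alpha}{T}, $$
so the mean time between disasters can be bounded below only by $T/(-\ln \alpha)$. Taking $\alpha = e^{-2} \approx 0.14$ (roughly 86 percent confidence) gives $5{,}000/2 = 2{,}500$ days.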
24. C. A. Bennett to A. B. Greninger, "Interim Report—Evaluation of Probability of Disaster, HW-31073," March 9, 1954, D8471494, DOE PRRC.
25. Johnston, "Making the Invisible Engineer Visible," 572; Saleh and Marais, "Highlights from the Early (and pre-) History of Reliability Engineering," 251–53; Coppola, "Reliability Engineering of Electronic Equipment," 29–30; Zachmann, "Risk in Historical Perspective," 16–18.
26. D. J. Foley, G. J. Rogers, and J. W. Talbott, "Evaluation of Proposed Disaster Control Systems, HW-38579," August 10, 1955, D199131886, DOE PRRC; G. C. Fullmer to O. H. Greager, "Nuclear Safety Objectives—Recommended Course of Action, HW-40519," December 27, 1955, D8482427, DOE PRRC; J. W. Healy, "Reactor Hazards Re-Evaluation, HW-41529," February 24, 1956, D198080396, DOE PRRC; R. L. Dickeman, "Meeting with Advisory Committee on Reactor Safeguards-March 1, 1956, HW-42185," March 22, 1956, D198175961, DOE PRRC; R. E. Trumble and J. W. Healy, "Part 1 Evaluation of Reactor Hazards and Part 2 Radiological Considerations, HW-53638," November 18, 1957, D8412319, DOE PRRC; R. E. Trumble et al., "K Water Plant Improvements, HW-59696," March 19, 1959, D8507166, DOE PRRC; Staff of Process Design Sub-Section, "Hanford New Production Reactor Confinement System, HW-60742," August 10, 1959, RL-1–369374, DOE OpenNet; C. L. Goss, "Reliability of the NPR Emergency Cooling System, HW-70225," September 28, 1961, D8384729, DOE PRRC; W. S. Nechodom, "Comment Issue: Hanford Reactor Safety System Philosophy and A Review of Ball-3X System Requirements, HW-74598," August 9, 1962, D197187104, DOE PRRC; Alvin D. Wiggins, "Application of Boolean Matrices to System Reliability Studies, HW-3A-2036," June 17, 1960, DA02852503, DOE PRRC; Alvin D. Wiggins, "Studies in Reliability I. The Algebra of Four-State Safety Devices, HW-SA2304," October 13, 1961, DA613704, DOE PRRC; I. M. Jacobs, "Safety Systems for Nuclear Power Reactors," 670–73; Peter A. Norris to A. Philip Bray, July 28, 1969, 8707090371, NRC Legacy.
27. A. B. Carson, K. W. Hess, and L. H. McIlwen, "Note on Decisions Involving Reactor Hazards, HW49161," January 7, 1957, D8351562, DOE PRRC; N. T. Hildreth and R. Nilson, "Reliability Analysis of Beckman System, HW-73167," D8566128, DOE PRRC; C. L. Goss and A. J. Lindsay, "Reliability Analysis of the New Production Reactor [NPR] Flow Monitor Prototypes, HW-71626," November 7, 1961, D8553427, DOE PRRC. GE also applied probabilistic approaches at its Knolls Atomic Power Laboratory in upstate New York, where it built the world's first containment building. GE estimated that the probability of a reactor accident releasing significant fission products to the environment was between one in ten thousand and one in one million reactor years. Lacking data, the estimate relied on expert judgment of the probability of a mishap at each stage in an accident's progression. See Fitzgerald, Criteria and Methods for Evaluating Environmental Reactor Hazards, 12–14.
28. R. L. Junkins, "Answers to Questions of the Advisory Committee on Reactor Safeguards Relative to the Need for Retention of Government Land, HW-82657," June 8, 1964, D3202903, DOE PRRC; Nechodom, "Comment Issue: Hanford Reactor Safety System Philosophy," D197187104, DOE PRRC.
29. Licensing and Regulation of Nuclear Reactors: Hearings Before the Joint Committee on Atomic Energy, 90th Cong., 1st sess. (April 4 and September 12, 1967) (R. T. Richards to Milton Shaw, April 7, 1967, 641–43); Carlisle, "Probabilistic Risk Assessment in Nuclear Reactors," 928. GE Hanford veterans also sought to promote risk comparisons between nuclear power and other hazards of modern life. See J. W. Healy, "Frequency of Catastrophic Accidents in Nuclear Plants," August 1966, attached to J. S. Healy to Claire Palmiter, July 24, 1967, Folder: Major Nuclear Accidents Group, Box 10, Records of the Federal Radiation Council, RG 412, NARA, College Park, MD; Carlisle, "Probabilistic Risk Assessment in Nuclear Reactors," 923 and 929. See also Zachmann, "Risk in Historical Perspective," 20–21.

CHAPTER 2
1. Walker, Containing the Atom, 51–53.
2. Walker, Containing the Atom, 51–53.
3. 42 U.S.C. §2232.
4. "Reactor Site Criteria: Report to General Manager, AEC-R 2/19," December 10, 1960, ML021960199, NRC ADAMS; Okrent, On the History of the Evolution of Light Water Reactor Safety in the United States, 2–42–2–44; and Mazuzan and Walker, Controlling the Atom, 243–45. For a discussion of "adequate protection," see Leonard Bickwit, Jr. to the Commission, "Adequate Protection of the Health and Safety of the Public," October 18, 1979, 8007210266, NRC Legacy.
5. Harold L. Price, "Regulatory Control of Radiation Hazards," Meeting of the Fifth Annual Atomic Energy in Industry Conference, Philadelphia, PA, March 15, 1957, 81-hlp-b15-f04, "Speeches," Harold L. Price Papers, Herbert Hoover Presidential Library, West Branch, IA.
6. Okrent, On the History of the Evolution of Light Water Reactor Safety in the United States, 7–18.
7. Cohn, Too Cheap to Meter, 63–67; Stacy, Proving the Principle, 128–37.
8. "Siting of Power Reactors in Large Metropolitan Areas (Discussion Paper), AEC 943/19," May 12, 1965, DOE OpenNet, https://www.osti.gov/opennet/servlets/purl/1047519.pdf.
9. This was not strictly true, since a full containment seal required the closure of some valves. The concept of defense in depth in this era was spelled out by Clifford K. Beck: "Current Trends and Perspectives in Reactor Location and Safety Requirements," March 22, 1966, accession no. HS63–2012–0008, DOE OpenNet.
10. Beck, "Engineering Out the Distance Factor," 245–60; Okrent, Nuclear Reactor Safety, 70–71; Herbert Kouts to Glenn Seaborg, November 18, 1964, 8610140375, NRC Legacy; and Walker, Containing the Atom, 141.
11. Walker, Containing the Atom, 33; Morone and Woodhouse, The Demise of Nuclear Energy?, 79.
12. Hirsh, Technology and Transformation; Walker, Containing the Atom, 57–112.
13. Walker, Containing the Atom, 57–112; US AEC, "Department of Water and Power of the City of Los Angeles, Docket No. 50–214—Initial Decision, AEC-R 89/30," July 15, 1966, 9210130304, NRC Legacy.
14. R. L. Doan to John E. Logan, December 15, 1964, ML011140377, NRC ADAMS.
15. "A Precedent-Setting Decision," Nuclear Industry 2 (December 1964): 4–9.
16. W. Kenneth Davis, "Problems of Applying Fixed Formulae to Safety Criteria and Site Selection," American Nuclear Society Fall Meeting, October 10, 1963, SciTech Connect, http://www.osti.gov/scitech/biblio/4878748.
17. Hirsh, Technology and Transformation, 106–9.
18. Willis, Statistical Safety Evaluation of Power Reactors.
19. Siddall, "Reliable Reactor Protection," 124–29; Yamada, "Safety Evaluation of Nuclear Power Plants," 515; Kellermann and Seipel, "Analysis of the Improvement in Safety Obtained by a Containment and by Other Safety Devices for Water-Cooled Reactors," 403–20; Farmer, "Siting Criteria—A New Approach," 303–24; Lindackers and Stoebel, "Probability Analysis Applied to LWR's," 79–85; Siddall, Reactor Safety Standards and Their Attainment, 2, 12–19; Laurence, "Reactor Safety in Canada," 73; "Draft Technical Progress Review," Nuclear Safety, ORNL-TM-195, 3, no. 4 (Oak Ridge, TN: Oak Ridge National Laboratory, May 4, 1962); Hughes and Horsley, "Application of the Safety Limitation Against Depressurization to the Calder and Chapelcross Reactors."
20. Beck, "The Thesis is Good; Practice is Difficult," 67.
21. [Clifford Beck to H. L. Price], "Updating the Brookhaven Report," October 1964 (see pages 57–69), ML20087J449, NRC ADAMS; US Atomic Energy Commission, Theoretical Possibilities and Consequences of Major Accidents in Large Nuclear Power Plants, 3–6 and 13. At almost the same time, a similar consequence study was developed by industry for the Fermi 1 nuclear power plant. See Gomberg, Report on the Possible Effects on the Surrounding Population.
22. "A New View of Safety: Reassuring Rasmussen Report"; US Congress, Joint Committee on Atomic Energy, AEC Authorizing Legislation, Fiscal Year 1965, 88th Cong., 2nd sess., February 18, 1964, 404–6; US Congress, Joint Committee on Atomic Energy, Subcommittee on Legislation, AEC Omnibus Bills for 1963 and 1964, 88th Cong., 2nd sess., May 19, 1964, 122–23.
23. Walker, Containing the Atom, 46–47; and Dyke, Mr. Atomic Energy, 232–33, 265–66.
24. For a complete account of the 1965 update controversy and efforts by critics to have the report released, see Walker, Containing the Atom, 117–30; Balogh, Chain Reaction, 253–57; "Updating the Brookhaven Report," October 1964, ML20087J449, NRC ADAMS.
25. Stanley A. Szawlewicz to U. M. Staebler, "Discussion with BNL Staff on the Revision of WASH-740," November 13, 1964, ML20087H368, NRC ADAMS.
26. For several years, Holmes & Narver had been involved in efforts to quantify risk of earthquakes at Hanford, and it sought to apply quantitative techniques to operating experience and reliability studies of the nuclear power industry. In 1963, it won an AEC contract to pursue data collection and reliability studies at some civilian power plants. In later contracts, it modeled accident chains and developed some of the most sophisticated fault-tree methodology of the 1960s. See Holmes & Narver, Inc., Proposal for the U.S. Atomic Energy Commission. For other probabilistic work in the early 1960s, see Fields and McWhirter, "Probability Analysis of a Severe Reactor Accident," 98.
27. Mulvihill et al., Analysis of United States Power Reactor Accident Probability.
28. John Garrick, email to the author, April 11, 2020; Clifford Beck, Draft of letter for Glenn Seaborg to Chet Holifield, February 26, 1965; US AEC, "Siting of Power Reactors in Large Metropolitan Areas (Discussion Paper), AEC 943/19," May 12, 1965, 34–37; Clifford K. Beck and Milton Shaw to the Commission, June 17, 1965, attachment #2. All three in OpenNet, www.osti.gov/opennet/servlets/purl/1047519.pdf.
29. Belson, "Matching and Prediction on the Principle of Biological Classification"; Magee, "Decision Trees for Decision Making."
30. Eckberg, WS-133B Fault Tree Analysis Program Plan.
31. Boeing Corporation, Fault Tree for Safety; Ericson, "Fault Tree Analysis."
32. Detterman, Weitzberg, and Willis, Aerospace Safety Program; Willis and Carlson, "Fault Tree Analysis for SNAP Reactor Disposal Systems," 159–61; Headington, Stewart, and Zane, Fault Tree Analysis of the PBF Transient Rod Drive System; Schroder, Fault Trees for Reliability Analysis; Wall, "Probabilistic Assessment of Risk for Reactor Design and Siting," 169; Brunot, "Reliability of a Complex Safety Injection System from Digital Simulation," 169–70; Gekler, "Development and Application of a Reliability Data Collection Program in Nuclear Power Plants," 170; Garrick, Probabilistic Analysis of Nuclear Reactor Containment Structures; Garrick, "Unified Systems Safety Analysis for Nuclear Power Plants"; P. A. Crosetti, "Comparative Reliability Analysis C Reactor Secondary Coolant System, DUN-4073," April 15, 1968, D0167454, DOE PRRC.
33. Northern States Power Company, NSP Monticello Nuclear Generating Plant, Monticello Minnesota, Unit 1, Amendment 4 Answers to AEC Questions, AEC Docket 50–263, January 10, 1967, 3000000437, NRC Legacy; McWethy, Pluta, and Sherer, Core Design Development; Burnett, Reactor Protection System Diversity in Westinghouse Pressurized Water Reactors.
34. Rubel, "Reliability-Engineering Methods in Reactor-Safety Technology," 496–99.
35. Farmer, "Siting Criteria," 303–24.
36. Wall, "Probabilistic Assessment of Risk for Reactor Design and Siting," 169; Wall, Nuclear Safety Objectives, 1–9; Wall, Stuart, and Nguyen, Probabilistic Assessment of Risk for Reactor Design and Siting, 1–3. Similarly, see Doron and Albers, "Mean Annual Severity," 349–56.
37. Starr had advocated quantifying nuclear risk for several years; see "Radiation in Perspective," 325–33.
38. Starr, "Social Benefit versus Technological Risk," 1232–38, quotation at 1237; Starr and Greenfield, Public Health Risks of Thermal Power Plants, 54–59. J. W. Healy, a Hanford scientist, also anticipated elements of Starr's approach. See J. W. Healy, "Frequency of Catastrophic Accidents in Nuclear Plants," August 1966, attached to J. S. Healy to Claire Palmiter, July 24, 1967, Folder: Major Nuclear Accidents Group, Box 10, Records of the Federal Radiation Council, RG 412, NARA, College Park, MD.
39. US NRC, Toward a Safety Goal, 18; Committee on the Science of Science Communication, Communicating Science Effectively, 29–31.
40. Tversky and Kahneman, "Judgment under Uncertainty," 1124–31; Fischhoff et al., "How Safe is Safe Enough?," 127–52; Fischhoff, "Managing Risk Perceptions," 92; Slovic, The Perception of Risk, 220–31 and 390–412. For a very readable review of the impact of public risk perceptions on regulations, see Breyer, Breaking the Vicious Circle, 33–51.
41. Garrick, Reliability Analysis of Nuclear Power Plant Protective Systems, iv; Vesely, "Reliability and Fault Tree Applications at the NRTS," 473 and 476–78; John Garrick, email to the author, April 11, 2020.
42. Internal Study Group, Report to the Atomic Energy Commission on the Reactor Licensing Program, June 1969, in US Congress, Joint Committee on Atomic Energy, AEC Licensing Procedure and Related Legislation, Part II, 92nd Cong., 1st sess., June 22–23 and July 13–14, 1971, 1065; Otway, The Application of Risk Allocation to Reactor Siting and Design.
43. S. H. Hanauer to Trevor Griffiths, May 8, 1969, 8707170372, NRC Legacy.
44. Pugh, Probability Approach to Safety Analysis; Lambert, Fault Trees for Decision Making in Systems Analysis, 5–6. The report team also drew on the expertise of decision theory experts in the United States, such as Howard Raiffa of Harvard University; see Stephen H. Hanauer to Howard Raiffa, March 30, 1972, box 8, Manson Benedict Papers, Collection 428, MIT.
45. DiNunno, Calculation of Distance Factors, 9; Davis and Cottrell, "Containment and Engineered Safety of Nuclear Plants," 363–71; Cottrell and Savolainen, U.S. Reactor Containment Technology; and Herbert Kouts to Glenn T. Seaborg, "Report on Engineered Safeguards," November 18, 1964, 8610140375, NRC Legacy.
46. Thompson and Beckerley, Reactor Materials and Engineering, 765–78.
47. ACRS, "Meeting of the Humboldt Bay Subcommittee," Washington, DC, July 7, 1965, fiche address ACRS-0224, NRC Public Documents Room (PDR) fiche collection.
48. ACRS, "Meeting of the Humboldt Bay Subcommittee," July 7, 1965, fiche address ACRS-0224, NRC PDR fiche collection; "Sixty-Fourth Meeting of the Committee of Reactor Safeguards, Wash. DC, July 8, 9, and 10, 1965," fiche ACRS-0227, NRC PDR collection.
49. GE's view of the advantages and disadvantages of the GE design was later supported by a 1990 NRC study that found that GE BWR plants were far less likely to suffer core damage during an unusual event than a PWR, but the GE Mark I containment was more likely to fail if a severe accident did occur. See US NRC, Severe Accident Risks, chapters 8 and 9.
50. ACRS, "Meeting of the Humboldt Bay Subcommittee," July 7, 1965, fiche address ACRS-0224, NRC PDR fiche collection.
51. ACRS, "Meeting of ACRS Subcommittee on Reactor Design and Operating Criteria, Washington, DC, July 15, 1965," ACRS-0227a, NRC PDR fiche collection; ACRS, "Sixty Seventh Meeting: Advisory Committee on Reactor Safeguards, October 7–9, 1965, Washington, DC," ACRS-0247, NRC PDR fiche collection; ACRS, "69th Meeting Advisory Committee on Reactor Safeguards, Washington, DC, January 6, 7, & 8, 1966," ACRS-0260, NRC PDR fiche collection; Murray Joslin to Edson Case, July 29, 1966, ML17115A824, NRC ADAMS; J. C. Rodgers to ACRS Members, "Report of DRD&T Meeting with Water Reactor Vendors to Discuss the Water Reactor Safety Research Program, Washington, DC, March 24, 1970," April 2, 1970, ML20087K717, NRC ADAMS; "Shaw Demands Better Quality Assurance."
52. Okrent, On the History of the Evolution of Light Water Reactor Safety in the United States, 2–3.
53. Rogers and Kennedy, Fission Product Release During a Simulated Meltdown of a PWR Type Core; Wilson, Hauge, and Matheney, Feasibility and Conceptual Design for the STEP Loss of Coolant Facility; [Clifford Beck to H. L. Price], "Updating the Brookhaven Report," October 1964 (see sections "Rough Draft of Meltdown Sequence" and "Use of a Reactor Accident Model as a Guide to Fission Product Release"), ML20087J449, NRC ADAMS; ACRS, "Meeting of Dresden 2 Subcommittee, September 1, 1965, Washington, DC," ACRS-0239, NRC PDR fiche collection; R. H. Wilcox to Files, "DRL-Consolidated Edison Co. Meeting Concerning Indian Point 2 Held at Bethesda, Maryland on November 2, 1965," December 28, 1965, 9210120136, NRC Legacy. For a more detailed discussion of the China Syndrome and ensuing debates, see Walker, Containing the Atom, 139–202.
54. ACRS, "Meeting of Dresden 2 Subcommittee, Washington, DC, September 1, 1965," ACRS-0239, NRC PDR fiche collection; S. H. Hanauer to ACRS Members, July 28, 1966, in Okrent, On the History of the Evolution of Light Water Reactor Safety, 2–212; R. O. Ivins, J. C. Hesson, and R. E. Wilson to R. C. Vogel, July 27, 1966, "Task Force," Okrent Papers; R. H. Wilcox to H. Etherington, July 11, 1966, 9210120200, NRC Legacy.
55. Edson G. Case, "Meeting on 1/17 with General Electric Company Representatives Concerning Emergency Core Cooling Criteria and Related Containment Design Criteria," January 20, 1967, 8707090428, NRC Legacy.
56. Okrent, Nuclear Reactor Safety, 236–41; S. H. Hanauer to ACRS Members, July 28, 1966, in Okrent, Evolution of Light Water Reactor Safety, 2–212; David Okrent, "Meeting Notes, [July 7, 1966], Dresden 3," Okrent Papers; "Meeting of Dresden 3 Subcommittee, July 7, 1966, Washington, DC," 9210120001, NRC Legacy; Hanauer and Walker, Design Principles of Reactor Protection Instrument Systems, 51–57.
57. Okrent, On the History of the Evolution of Light Water Reactor Safety in the United States, 7–17; "ACRS Warns."
58. "COMSAT-Type Entity proposed for Enrichment by Hosmer"; ACRS, "95th ACRS Meeting, March 7–9, 1968, Washington D.C.," ACRS-0443, NRC PDR fiche collection; ACRS, "97th ACRS Meeting, May 9–11, Washington D.C.," ACRS-0464, NRC PDR fiche collection; ACRS, "98th Advisory Committee on Reactor Safeguards Meeting, June 5–8, 1968, Washington, D.C.," ACRS-0470, NRC PDR fiche collection.
59. R. F. Fraley to ACRS Members, August 16, 1966, 9210120152, NRC Legacy.
60. R. F. Fraley to ACRS Members, August 16, 1966, 9210120152, NRC Legacy; "Seventy-Seventh Meeting, Advisory Committee on Reactor Safeguards, Washington, DC, September 8, 9, and 10, 1966," ACRS-0311, NRC PDR fiche collection.
61. Bill Ergen, "Task Force on the 'Chinese Syndrome,'" October 13, 1966, ML20087K716, NRC ADAMS.
62. "Minutes of Third Meeting of the Industry Advisory Committee on Power Reactor Emergency Core Cooling Systems, November 16–18, 1966, Oak Brook IL," DOE/AEC RG 326, RDT Files, box 33G4; "Minutes of Fifth Meeting of the Industry Advisory Committee on Power Reactor Emergency Core Cooling Systems, January 16–20, 1967, Palo Alto, CA," DOE/AEC RG 326, RDT Files, box 33G4.
63. M. W. Libarkin to ACRS Members, December 6, 1966, 9210120049, NRC Legacy; M. W. Libarkin to ACRS Members, May 23, 1967, 9210120092, NRC Legacy.
64. J. F. Young to Glenn T. Seaborg, October 7, 1966, 9210120043, NRC Legacy; Seaborg, The Journal of Glenn Seaborg, October 31, 1966, 13:461; R. F. Fraley to D. Okrent, "Meeting of Regulatory Staff with Representatives of the General Electric Company, November 30, 1966," December 1, 1966, 9210120231, NRC Legacy; Edson G. Case, "Meeting on 1/17 with General Electric Company Representatives Concerning Emergency Core Cooling Criteria and Related Containment Design Criteria," January 20, 1967, 8707090428, NRC Legacy; and R. H. Wilcox to Files, "Regulatory Staff-General Electric Company Meeting Held at Bethesda, Maryland on February 24, 1967," February 27, 1967, 9210120235, NRC Legacy. In this era, "consequence-limiting" design features typically referred to LOCA barriers such as the primary system piping and containment. See Peter A. Morris to Russell Hatling, April 24, 1970, ML112910482, NRC ADAMS.
65. Licensing and Regulation of Nuclear Reactors: Hearings Before the Joint Committee on Atomic Energy, 90th Cong., 1st sess. (April 4 and September 12, 1967) (R. B. Richards to Milton Shaw, April 7, 1967, 641–43).
66. See Hake, "The Relation of Reactor Design to Siting and Containment in Canada," 77–92; Levy, "A Systems Approach to Containment Design in Nuclear Power Plants," 227–42; Farmer, "Siting Criteria," 303–29.
67. Levy, "A Systems Approach to Containment Design in Nuclear Power Plants," 227–42; Farmer, "Siting Criteria," 303–29.
68. Clifford K. Beck to Chairman Seaborg, et al., "Letter from Mr. James F. Young, Dated October 7, 1966 Regarding Safeguards Adjustments," October 28, 1966, 9210120065, NRC Legacy; R. F. Fraley to D. Okrent, "Meeting of Regulatory Staff with Representatives of the General Electric Company, November 30, 1966," December 1, 1966, 9210120231, NRC Legacy.
69. Ergen, Emergency Core Cooling, 4–8, 20–21, 25, 32–33, 44, 60; Carroll W. Zabel to Glenn T. Seaborg, February 26, 1968, in Okrent, Evolution of Light Water Reactor Safety, 2–377.
70. R. O. Ivins, J. C. Hesson, and R. E. Wilson to R. C. Vogel, July 27, 1966, "Task Force," Okrent Papers.
71. Margolis and Redfield, FLASH; Slifer, Loss-of-Coolant Accident; David Okrent to Glenn T. Seaborg, "Report on Reactor Safety Research Program," October 12, 1966, 9210120127, NRC Legacy; J. C. McKinley to D. Okrent, March 6, 1968, ACRS-0439, NRC PDR fiche collection. For the very different positions on Design Basis Accidents (the renamed Maximum Credible Accident) between the AEC, industry, and activists, see "Design Basis Accidents for Power Reactors."
72. Marvin M. Mann to A. K. Luedecke, "Proposal for a 'Maximum Credible Accident' Reactor Experiment," January 24, 1963, 9210160201, NRC Legacy; [Clifford Beck to H. L. Price], "Updating the Brookhaven Report," October 1964 (see section "Use of a Reactor Accident Model as a Guide to Fission Product Release"), ML20087J449, NRC ADAMS; Licensing and Regulation of Nuclear Reactors: Hearings Before the Joint Committee on Atomic Energy, 90th Cong., 1st sess. (April 4 and September 12, 1967) (R. B. Richards to Milton Shaw, April 7, 1967, 641–43); "AEC-AIF Meeting on Water-Reactor Safety Research Program, November 17–18, 1969, Washington, DC," December 10, 1969, 9210160132, NRC Legacy; J. C. Rodgers to ACRS Members, "Report of DRD&T Meeting with Water Reactor Vendors to Discuss the Water Reactor Safety Research Program, Washington, DC, March 24, 1970," ML20087K717, NRC ADAMS; "Reactor Safety R&D Held Too Divorced from Licensing Process."
73. R. F. Fraley to David Okrent, "Minutes of Subcommittee Meeting on Reactor Safety Research on December 14, 1966," February 2, 1967, 9210160112, NRC Legacy; F. Schroeder to D. E. Williams, July 14, 1967, box 8, RDT files, Job 1145, DOE/AEC RG 326; Okrent, Nuclear Reactor Safety, 164, 174; US General Accounting Office (GAO), This Country's Most Expensive Light Water Reactor, 8–12. The AEC's decision to end molten core research is detailed in Okrent, On the History of the Evolution of Light Water Reactor Safety in the United States, 2–380–2–394.
74. Weinberg, The First Nuclear Era, 193.
75. For an example of how the AEC increasingly emphasized upgrades to ECCS systems and power supplies that improved their reliability, see Peter Morris to Niagara Mohawk Power Corporation, October 20, 1966, 8904200426, NRC Legacy; David Okrent to Glenn T. Seaborg, August 16, 1966, in Okrent, On the History of the Evolution of Light Water Reactor Safety in the United States, 2–251–2–253.

CHAPTER 3
1. Roger Mattson, email to the author, April 11, 2020. Even after research revealed uncertainty with ECCS performance, AEC staff believed that a meltdown was not a credible accident. In late 1971, it estimated that the odds of a major core damage accident were 10⁻⁸ per reactor year (one in one hundred million). As a later NRC report noted, this "was a highly optimistic estimate [by several orders of magnitude], but it typifies the degree to which meltdown accidents were considered 'not credible.'" See "Additional Guidance on Scope of Applicant's Environmental Reports with Respect to Accidents, SECY-R-338," November 15, 1971, 7906130184, NRC Legacy; and US NRC, Perspectives on Reactor Safety, 1.5–2.
2. D. O. Hobson and P. L. Rittenhouse to Morris Rosen, March 1, 1971, ML20087K719, NRC ADAMS; Robert J. Colmar to M. Rosen, "FLECHT Flow Blockage Tests," November 6, 1969, ML20087K718, NRC ADAMS; Rittenhouse, "Fuel-Rod Failure," 487–95; R. O. Ivins to D. Okrent, May 31, 1966, "Dresden 3," Okrent Papers; Joseph A. Lieberman to Those Listed Below, February 20, 1968, 9210120169, NRC Legacy; J. C. McKinley to D. Okrent, March 6, 1968, ACRS-0439, NRC PDR fiche collection; Saul Levine to Files, March 4, 1968, 9210120261, NRC Legacy; "Notes on ACRS-AEC-Industry Meeting on Safety Research for Emergency Core Cooling of Large Water-Cooled Power Reactors, February 27–28, 1968," March 6, 1968, 9210120001, NRC Legacy.
3. Ybarrondo, Professional Journal, June 15, 1971 and December 8, 1971.
4. R. C. DeYoung to P. A. Morris, June 2, 1970, ML093630208, NRC ADAMS.
5. "Minutes Indian Point Unit No. 2 Subcommittee Meeting, May 28, 1970, Chicago Illinois," June 5, 1970, ML20087K721, NRC ADAMS; P. A. Morris to Harold L. Price, July 20, 1970, 8001070548, NRC Legacy.
6. Glenn Seaborg to John O. Pastore, April 27, 1971, 9210120170, NRC ADAMS.
7. A key goal of the reoriented LOFT tests was that they would provide greater confidence in accident codes. See GAO, This Country's Most Expensive Light Water Reactor Safety Test Facility, 10; Roger Mattson, email to the author, April 11, 2020.
8. Roger Mattson, email to the author, April 11, 2020; Ybarrondo, Professional Journal, December 7, 1971.
9. Ybarrondo, Professional Journal, September 30, 1969. The regulatory staff became so dependent on Idaho's expertise that the researchers there framed their role as "AEC system analysts" (Ybarrondo, Professional Journal, July 14, 1972).
10. Ybarrondo, Professional Journal, May 8, 1970.
11. Gillette, "Nuclear Safety (III)," 974.
12. David Dinsmore Comey to Glenn Seaborg, June 3, 1971, ML17037B760, NRC ADAMS; J. C. Haire to Multiple Addressees, February 12, 1971, ML20087K723, NRC ADAMS.
13. "AEC Task Force Considering Lower Power," 1; O'Toole, "A-Plants Face Delays"; "ACRS Questioning PWR Emergency Core Cooling."
14. Edson G. Case to Harold Price, Clifford K. Beck, Marvin M. Mann, Stephen H. Hanauer, C. L. Henderson, Howard K. Shapar, Peter A. Morris, "Regulatory Staff Report on Emergency Core Cooling," March 5, 1971, ML20087K725, NRC ADAMS; J. C. McKinley to ACRS Members, June 3, 1971, "ECCS," Okrent Papers; Bray and Ianni, "Why Did We Need a Design Basis Accident?," 203–4.
15. Mattson, email to the author, April 11, 2020. Hanauer's files were so interesting that the Union of Concerned Scientists published them as a book. See Pollard, The Nugget File, 1–3.
16. W. B. McCool to Director, Office of the Federal Register, "Interim Policy Statement: Interim Acceptance Criteria for Emergency Core Cooling Systems for Light-Water Power Reactors," June 25, 1971, 9210120206, NRC Legacy; and "Two Public Rulemakings Initiated."
17. For a review of the certainties and uncertainties contained in the Idaho and vendor codes, see Ybarrondo, Solbrig, and Isbin, "The 'Calculated' Loss-of-Coolant Accident"; Gillette, "Nuclear Reactor Safety: A Skeleton at the Feast?"
18. Robert J. Colmar and Morris Rosen to ECCS Task Force Members, "Comments and Recommendations to the REG ECCS Task Force," June 1, 1971, ML20087K727, NRC ADAMS.
19. S. H. Hanauer to ECCS Task Force, December 2, 1971, ML20087K730, NRC ADAMS; US AEC, "Daily Digest of Rulemaking Hearing: Interim Acceptance Criteria for ECCS," May 8–10, 1972, ML20087K729, NRC ADAMS.
20. US AEC, "AEC Schedules Public Rulemaking Hearings on Major Regulatory Issues," November 29, 1971, ML20087K731, NRC ADAMS; Navigant Consulting, Assessment of the Nuclear Power Industry—Final Report, 28.
21. Myron M. Cherry to L. Manning Muntzing, December 9, 1971, box 5126, 326765 Schlesinger Office Files, DOE/AEC; "Cherry Threatens Suit to Block AEC Rulemaking Hearings"; "No Holidays for Trouble"; Ripley, "Safety Gear Untried at A-Power Plants."
22. Myron Cherry, email to the author, August 21, 2012.
23. Daniel Ford, email to the author, April 14, 2020. See also Balogh, Chain Reaction, 271–80; Temples, "Politics of Nuclear Power," 246; Duffy, Nuclear Politics in America, 64; Campbell, Collapse of an Industry, 61; Ford, Cult of the Atom, 120.
24. Myron Cherry, email to the author, August 21, 2012.
25. Emshwiller, "Nuclear Nemesis."
26. US AEC, "Daily Digest of Rulemaking Hearings," March 21, 1972, AEC Schlesinger Office Files, box 5132, DOE/AEC, RG 326; "Suppressed Oak Ridge Memo"; "Charges of Reprisals Against Staff"; "AEC Internal Documents on ECCS Reveal Staff Qualms"; "Cherry Attacks Hanauer"; Gillette, "Nuclear Reactor Safety: At the AEC the Way of the Dissenter is Hard"; Lyons, "Aide Who Criticized A.E.C. Over Safety Shifted by Agency."
27. Ybarrondo, Professional Journal, February 4 and March 21, 1972; "AEC's Shaw Comes Off Second Best"; "ECCS Enters New Phase"; "UCS Cross-Examination of Milton Shaw"; Walker, Three Mile Island, 141.
28. Roger Mattson, email to the author, April 11, 2020.
29. US AEC, "Acceptance Criteria for Emergency Core Cooling Systems"; "AEC Reveals ACRS-Suggested ECCS R&D Now Underway"; "German, Italian, and Japanese Positions on ECCS, SECY-R 521," August 22, 1972, ML20087K732, NRC ADAMS; Ybarrondo, Professional Journal, February 14, 1972; Leach, Ybarrondo, Hicken, and Tasaka, "A Comparison of LOCA Safety Analysis"; "Westinghouse Fuel Redesign"; "B&W Comes Up With Preliminary ECCS Clad Temperature Peak"; "ECCS Rulemaking Was Worthwhile"; "Consolidated Edison Has Spent $6-million on ECCS Equipment for Indian Point 1"; "Consolidated Edison is Now Offering to Run Indian Point-1 at 40% of Power."
30. Lapp, "Nuclear Power Safety."
31. Walker, Containing the Atom, 363–86; "Interview with Commissioner William O. Doub, July 31, 1972," box 5135, Schlesinger office files, DOE/AEC, RG 326.
32. "Interview with Commissioner William O. Doub, July 31, 1972," box 5135, Schlesinger office files, DOE/AEC, RG 326.
33. W. B. McCool Memorandum for the Record, September 16, 1971, box 3321, McCool files, DOE/AEC, RG 326.
34. [W. B. McCool?], "An Analysis of the AEC Organization," September 20, 1971, box 3321, McCool Files, DOE/AEC, RG 326.
35. "Interview with Commissioner William O. Doub, July 31, 1972," box 5135, Schlesinger office files, DOE/AEC, RG 326.
36. Shapiro, "James R. Ramey."
37. "Doub Makes Clear to Utilities"; William O. Doub, "The Right to be Heard—Laying It on the Line," October 18, 1971, 1971 Annual Conference of the Atomic Industrial Forum, Bal Harbor, FL, box 57, Doub Papers; "Schlesinger: The End of One Era"; Walker, Containing the Atom, 376–84; "Ex-Commissioner Doub: AEC Can't Regulate Every Aspect of Nuclear Power"; "Lowenstein Urges Split-off"; Hearing on the President's Proposal to Establish a Department of Natural Resources, 92nd Cong., 2nd sess. (January 28, 1972) (statement of Clarence E. Larson, commissioner of the US Atomic Energy Commission).
38. John O. Pastore to James R. Schlesinger, January 19, 1972, box 35, Union of Concerned Scientists, Manuscript Collection 434, MIT archives; James Schlesinger to John O. Pastore, January 28, 1972, box 7, Doub personal papers.
39. John O. Pastore to James R. Schlesinger, June 6, 1972, General Files—Atomic Energy, John Pastore Papers.
40. Weinberg, First Nuclear Era, 198–200; Wm. B. Cottrell to A. J. Pressesky, June 30, 1970, ML20087M237, NRC ADAMS; Wm. B. Cottrell to A. J. Pressesky, March 4, 1971, ML20087M255, NRC ADAMS. On the difficult relations between Shaw and officials at Idaho's National Reactor Testing Station and Argonne National Laboratory, see Stacy, Proving the Principle, 174–83; Holl, Argonne National Laboratory, 270–77, 297–99.
41. "Muntzing Says He'll Need Bigger Staff to Cope." Before Ray became chairman, efforts to strip Shaw of his power had failed. See "Schlesinger Wants to Keep Shaw in Charge."
42. "Dixy Ray: Another Non-Nuclear Face."
43. Guzzo, Is It True What They Say About Dixy?, 114.
44. "AEC Strips Shaw of LWR Safety"; James R. Schlesinger to Melvin Price, April 26, 1972, box 2, Doub personal papers; Holl, Argonne National Laboratory, 297–99; "Shaw Still Angry"; "AEC Operational Reorganization Creates."
45. Milton Shaw to R. E. Hollingsworth, June 8, 1973, box 2, Doub personal papers; "Minutes of Executive Session No. 21," June 14, 1973, ML20087M281, NRC ADAMS.
46. Dixy Lee Ray, William O. Doub, Clarence E. Larson, and William E. Kreigsman to The President, n.d., no box, Doub personal papers.
47. Finney, "New AEC Chairman Moves." See also "AEC Strips Shaw of LWR Safety"; "AEC Loses a Controversial Figure"; "Kouts Moves to Kill Research-by-Timetable Methods." For similar views expressed by Commissioner William Anders, see US Senate Subcommittee on Reorganization, Research, and International Organizations, Committee on Government Operations, To Establish a Department of Energy and Natural Resources, Energy Research and Development Administration and a Nuclear Safety and Licensing Commission, 93rd Cong., 2nd sess. (March 13, 1974) (statement by William Anders, 360–62).
48. "An AEC-Industry Alliance"; Doub, "Public Acceptance: Baseline for Achieving the Nuclear Power Opportunity," 52nd International Conference of the Association of Industrial Advertisers, Bal Harbor, FL, June 17, 1974, box 57, Doub personal papers.
49. Ford and Kendall, An Assessment of the Emergency Core Cooling Systems Rulemaking Hearings, cover.
50. US House of Representatives, Subcommittee on Reorganization, Research, and International Organizations of the Committee on Government Operations, Department of Energy and Natural Resources and Energy Research and Development Administration (Part 1), 93rd Cong., 1st sess. (July 24, 25, 26, 31, and August 1, 1973) (statement of L. Manning Muntzing, 182–85); US NRC, Compendium of ECCS Research, 1–1; Lyczkowski, "The History of Multiphase Computational Fluid Dynamics," 5029, 5035; Martin, "Science-Based Nuclear Design and Safety in the Emerging Age of Data Based Analytics," 2–3; Mattson, email to the author, April 11, 2020.
51. For an early analysis of the potential for an ATWS event, see Pacific Gas and Electric Company, "Preliminary Hazards Summary Report, Bodega Bay Atomic Park, Unit Number 1, Docket 50–205," December 28, 1962, 8709240123, NRC Legacy.
52. E. P. Epler to R. F. Fraley, January 21, 1969, FOIA-76–0392, NRC PDR fiche collection; Epler, "Common Mode Failure Considerations," 38–45.
53. "ACRS Subcommittee Meeting on Reliability of Reactor Safety Systems," August 26, 1970, "Anticipated Transients Without Scram," Okrent Papers; US NRC, Anticipated Transients Without Scram for Light Water Reactors, vol. 1, 17–22.
54. Okrent, On the History of the Evolution of Light Water Reactor Safety in the United States, 2–31.
55. "Report to the ACRS: Anticipated Transients Without Scram (ATWS)," September 1970, "Anticipated Transients Without Scram," Okrent Papers.
56. US NRC, No Undue Risk, 2.
57. Norman H. Roberts to Reliability Analysis Subcommittee, ACRS, September 28, 1970, "Anticipated Transient Without Scram," Okrent Papers.
58. Ford, Cult of the Atom, 197.
59. Norman H. Roberts to Reliability Analysis Subcommittee, ACRS, September 28, 1970, "Anticipated Transient Without Scram," Okrent Papers, emphasis in the original.
60. US AEC, "June 5, 1970 Conference Call with CE on Anticipated Transients with Failure to Scram," June 5, 1970, "Anticipated Transients Without Scram," Okrent Papers; D. F. Bunch to Distribution, "ATWS Distribution List," March 14, 1977, 8104170003, NRC Legacy; AEC, Technical Basis for Interim Regional Tornado Criteria, WASH-1300, 14. See also Okrent, On the History of the Evolution of Light Water Reactor Safety in the United States, 7–1.
61. E. P. Epler to R. F. Fraley, January 21, 1969, FOIA-76–0392, NRC PDR fiche collection; Epler, "Common Mode Failure Considerations"; "ACRS Subcommittee Meeting on Reliability of Reactor Safety Systems," August 26, 1970, "Anticipated Transients Without Scram," Okrent Papers.
62. Advisory Committee on Reactor Safeguards, "ACRS Subcommittee Meeting on Reliability of Reactor Safety Systems," August 26, 1970; "Report to the ACRS: Anticipated Transients Without Scram (ATWS)," September 1970, "Anticipated Transients Without Scram," Okrent Papers; Norman H. Roberts to Reliability Analysis Subcommittee, ACRS, September 28, 1970, "Anticipated Transient Without Scram," Okrent Papers; R. F. Fraley to Harold L. Price, December 17, 1970, "Anticipated Transients Without Scram," Okrent Papers. On the rising criticism of the AEC and reactor safety, see Walker, Containing the Atom, and Wellock, "Engineering Uncertainty."
63. Seaborg, The Journal of Glenn Seaborg, November 2, 1970, 23:328.
64. Seaborg, The Journal of Glenn Seaborg, November 4, 1970, 23:343. The launch of the Reactor Safety Study, it has been claimed, stemmed from a 1971 request by the Joint Committee on Atomic Energy as part of renewing the Price-Anderson Act. However, the legislation was not up for renewal for another seven years. The AEC commitment to Gravel in 1970 suggests the study began earlier, to forestall the release of the 1965 WASH-740 update to him. The AEC response to Gravel did not mention Price-Anderson as a justification for the study, and the Commission only discussed the relevance of the new study to renewal as part of its internal debate over Gravel's request. See US Senate, Committee on Public Works, Underground Uses of Nuclear Energy Part 2: Hearings Before the Subcommittee on Air and Water Pollution of the Committee on Public Works, 91st Cong., 2nd sess. (August 5, 1970) (C. E. Larson to Mike Gravel, December 4, 1970, 1662 and 1664).
65. US AEC, "Study of Nuclear Risks and Benefits, SECY-1621," May 28, 1971, ML20087M293, NRC ADAMS. For additional context about the expansion, see L. Manning Muntzing to Commissioners Ramey and Doub, January 17, 1972, ML20087M368, NRC ADAMS.
66. US AEC, "Study of Nuclear Risks and Benefits, SECY-R-199," March 29, 1971, ML20087M332, NRC ADAMS.
67. Gimpel, "Risk Assessment and Cost Benefit Act of 1995," 65; W. B. McCool, "Additional Guidance on Scope of Applicant's Environmental Reports with Respect to Accidents, SECY-R-338," November 15, 1971, 7906130189, NRC Legacy; US Senate, Committee on Interior and Insular Affairs, Environmental Constraints and the Generation of Nuclear Electric Power: The Aftermath of the Court Decision on Calvert Cliffs, 92nd Cong., 1st sess., November 1971, Part 2 (Russell Train to Chet Holifield, November 4, 1971, 392–93); US AEC, "Proposed Rule Making"; Clifford K. Beck to L. Manning Muntzing et al., September 21, 1972, ML20087M415, NRC ADAMS; Sheldon Meyers to Lester Rogers, February 15, 1973, Reactor Safety, Box 629, JCAE Records; James B. Graham to Edward J. Bauser, February 27, 1973, Reactor Safety, box 629, JCAE Records; US AEC, Annual Report to Congress, 1971, 21–29.
68. Ecology Action v. US Atomic Energy Commission, 492 F.2d 998 (2d Cir., 1974); Carolina Environmental Study Group v. United States, 510 F.2d 796 (DC Cir., January 21, 1975).
69. Tom Murley, "T. E. Murley AEC Notes from 1972–73," June 16, 1972, ML20087N390, NRC ADAMS.
70. US AEC, "Final Environmental Statement Related to the Operation of Nine Mile Point Nuclear Station Unit 1, Niagara Mohawk Power Corporation," Docket no. 50–220, January 1974, ML082830088, NRC ADAMS.
71. Saul Levine to Edward J. Bauser, August 13, 1971; John O. Pastore to James R. Schlesinger, October 7, 1971; William C. Parler to Edward J. Bauser, December 8, 1971, Reactor Safety Comprehensive Study, box 629, JCAE Records.
72. Kent F. Hansen, "Norman C. Rasmussen, 1927–2003," National Academies Press, Memorial Tributes, vol. 21 (2017), https://www.nap.edu/read/24773/chapter/57.
73. Norman C. Rasmussen and Manson Benedict to Stephen Hanauer, March 17, 1972, in Ford, A History of Federal Nuclear Safety Assessments, 42–48.
74. S. H. Hanauer, "Notes on MIT Study Proposal," March 22, 1972, in Ford, A History of Federal Nuclear Safety Assessments, 49.
75. Stephen H. Hanauer to Howard Raiffa, March 30, 1972, box 8, Manson Benedict Collection.
76. Rasmussen, "The Safety Study and its Feedback."
77. Tom Murley, "T. E. Murley AEC Notes from 1972–73," July 10, 1972, October 12, 1972, and January 9, 1973, ML20087N390, NRC ADAMS; see also [Peter A. Morris], "Study of Nuclear Safety: Accident Probabilities and Consequences: Scope of Study," July 5, 1972, ML20087M499, NRC ADAMS.
78. John A. Bewick and Thomas E. Murley, "Note for Chairman Schlesinger, Topics for Meeting with Professor Rasmussen," July 13, 1972, ML20087M548, NRC ADAMS; US Congress, Joint Committee on Atomic Energy, Nuclear Reactor Safety Part 1: Phase I and Phase IIa, 93rd Cong., 1st sess. (January 23, September 25–27 and October 1, 1973) (39 and 46).
79. Tom Murley, "T. E. Murley AEC Notes from 1972–73," July 17, 1972, ML20087N390, NRC ADAMS.
80. Tom Murley to Simi [Levine], January 2, 1985, Norman Rasmussen to Simi [Levine], January 25, 1985, Bill Vesely to Simi [Levine], January 21, 1985, folder: Levine, Saul, 1981–85, box 7, Rasmussen Papers, MIT.
81. Saul Levine, "Introduction," [1973], ML20087M588, NRC ADAMS.
82. Paul C. Bender to Norman Rasmussen, Martin R. Hoffmann, and John J. Flaherty, September 22, 1972, ML20087M629, NRC ADAMS.
83. On Vesely's previous work on error bands, see Vesely, "Reliability and Fault Tree Applications at the NRTS"; Vesely, The Evaluation of Failure and Failure Related Data, 8–9.
84. Tom Murley, "T. E. Murley AEC Notes from 1972–73," August 1 and 15, 1972, ML20087N390, NRC ADAMS; "Memorandum for the Record," January 9, 1973, ML20087M694, NRC ADAMS; William O. Doub to L. Manning Muntzing, January 24, 1973, Anticipated Transients Without Scram, May 1972—, Box 3, Doub personal papers; "Anticipated Transients Without Scram in Water-Cooled Nuclear Plants, Draft, SECY-R-74–35," September 12, 1973, Box 3, Doub personal papers; Norman C. Rasmussen to Chairman Schlesinger, October 3, 1972, Rasmussen Study, 326765, box 5127, Schlesinger Office Files, DOE/AEC; Paul C. Bender to Norman Rasmussen, Martin R. Hoffmann, and John J. Flaherty, September 22, 1972, ML20087M629, NRC ADAMS; Paul C. Bender, "Safety: Topics List, Draft," October 31, 1972, Safety and Health, Box 5130, Schlesinger Office Files, DOE/AEC; "EPA Rejects AEC's Emergency Cooling Draft Environmental Statement," 2–3.
85. Tom Murley, "T. E. Murley AEC Notes from 1972–73," January 9, 1973 and May 15, 1973, ML20087N390, NRC ADAMS.
86. Tom Murley, "T. E. Murley AEC Notes from 1972–73," May 16, 1973, ML20087N390, NRC ADAMS.
87. Tom Murley, "T. E. Murley AEC Notes from 1972–73," May 31, 1973, ML20087N390, NRC ADAMS.
88. Joint Committee on Atomic Energy, Nuclear Reactor Safety Part 1, 124–48.
89. "JCAE Safety Hearings: Rasmussen's Debut," 11–12.
90. "The AEC Testifies."
91. "JCAE Safety Hearings: Rasmussen Study," 22; "May 31, 1973 Executive Session [Draft]," ML20087M732, NRC ADAMS. It is likely that the playing card comparison came from Saul Levine. See John A. Harris to Dr. Kouts, January 22, 1974, ML20087M780, NRC ADAMS.
92. Ford, "The Hole in the Reactor."
93. Charles Warren to Dixy Lee Ray, March 11, 1974, Attachment, Testimony of William Bryan, Hearings of the Subcommittee on State Energy Policy, Committee on Planning, Land Use, and Energy, California State Assembly, February 1, 1974, ML20087M834, NRC ADAMS.
94. Smith, "Probability Analyst Calls Rasmussen Accident Study Futile."
95. Robert D. Thorne to Charles Warren, March 27, 1974, ML20087M868, NRC ADAMS.
96. Pinkus et al., Engineering Ethics, 242–74; Pate-Cornell and Dillon, “Probabilistic Risk Analysis for the NASA Space Shuttle”; Zachmann, “Risk in Historical Perspective,” 25; Jasanoff, Learning from Disaster.
97. US AEC, Reactor Safety Study Draft, 122–25, and 235–47. This probability included a full range of meltdown scenarios, from very limited damage to a containment failure and release of radioactivity to the environment.
98. Wall, Bernero, Millunzi, and Rosen, “The Reactor Safety Study,” 15; President’s Commission on the Accident at Three Mile Island, “Deposition of Robert Jay Budnitz,” August 27, 1979, 7910310241, NRC Legacy.
99. Henry W. Kendall to John P. Holdren, July 23, 1974, box 35, Union of Concerned Scientists, Manuscript Collection 434, MIT archives.
100. US AEC, Reactor Safety Study Draft, 100, 137, 185–86, 227–28, and 235–47.
101. Bray and Ianni, “Why Did We Need a Design Basis Accident?” 203.
102. “Order Forecasts for ‘74 See Another Peak Year.”
103. US AEC, Reactor Safety Study Draft, 37–39; Cohn, Too Cheap to Meter, 136; Bray and Ianni, “Why Did We Need a Design Basis Accident?” 203.
104. “With the Rasmussen Report in Hand, Why Escalate Design Conservatism?”
105. Lellouche, “ATWS—Impact of a Nonproblem.”
106. J. E. Gilleland to Edson G. Case, September 30, 1974, ML111100879, NRC ADAMS; Advisory Committee on Reactor Safeguards, “Reactor Safety Study (WASH-1400) Working Group Meeting, Washington DC,” October 9, 1974, “RSS,” Okrent papers; W. Donham Crawford to Saul Levine, October 30, 1974, ML20087M911, NRC ADAMS.
107. US AEC, Technical Report on Anticipated Transients Without Scram; [Richard] Vollmer to [Harold] Denton, [March 14, 1977], 8104170140, NRC Legacy.
108. Lipset and Schneider, The Confidence Gap, 13–40, 66.
109. Walker, Three Mile Island, 38.
110. Walker, Three Mile Island, 38–43, quotation from 42.
111. “Briefing on WASH-1400 (Reactor Safety Study), SECY-75–71,” March 4, 1975, ML20087M969, NRC ADAMS; William Kerr to William Anders, April 8, 1975, 7903140227, NRC Legacy; US NRC, Staff Discussion of Fifteen Technical Issues; R. T. Kennedy to Lee V. Gossick, “Risk Assessment Program,” January 7, 1977, ML20087N019, NRC ADAMS; Lee V. Gossick to Commissioner Kennedy, “Risk Assessment,” March 2, 1977, ML20087N053, NRC ADAMS.
112. Primack and von Hippel, Advice and Dissent: Scientists in the Political Arena, ix, 208–32.
113. Frank von Hippel, “Status of the American Physical Society Proposal for a Summer Study on Nuclear Reactor Safety,” November 18, 1973, von Hippel Papers.
114. “Comments by W.K.H. Panofsky on the Role of the American Physical Society in Conducting Studies of Public Relevance,” January 22, 1974, von Hippel Papers. See also Wellock, Critical Masses, 98–102.
115. “Report to the APS by the Study Group on Light-Water Reactor Safety,” S5, S47–52.
116. Smith, “Nuclear Power in the U.S.”
117. Frank von Hippel to Morris Udall, April 27, 1976, and Frank von Hippel, “Notes for Discussion with ACRS Working Group on the NRC’s Reactor Safety Study,” January 4, 1977, von Hippel Papers.
118. Rosa and Dunlap, “Poll Trends: Nuclear Power.”
119. “House Committee Probes Nuclear Power in Blue Ribbon Debate”; Weatherwax, “Virtues and Limitations of Risk Analysis,” 29–32; “Transcript of Proceedings: Press Conference on the Reactor Safety Study Project,” October 30, 1975, ML20087N085, NRC ADAMS; Ken Pedersen to the Commissioners, October 11, 1978, “Summary of Prior Commission Statements on Report of the Reactor Safety Study,” ML20087N136, NRC ADAMS.
120. US House of Representatives, Subcommittee on Energy and the Environment of the Committee on Interior and Insular Affairs, Oversight Hearings on Nuclear Energy: The Price-Anderson Nuclear Indemnity Act, Part 5, 94th Cong., 1st sess. (December 1, 1975) (148–53); US House of Representatives, Subcommittee on Energy and the Environment of the Committee on Interior and Insular Affairs, Observations on the Reactor Safety Study: A Report (Washington, DC: Government Printing Office, January 1977); Frank von Hippel to Morris Udall, April 27, 1976, von Hippel Papers; Philip M. Boffey, “Reactor Safety: Congress Hears Critics of Rasmussen Report”; Frank von Hippel to Morris Udall, January 28, 1977, von Hippel Papers; Morris K. Udall to Marcus Rowden, May 11, 1977, ML20087N180, NRC ADAMS; Saul Levine to the Commissioners, June 24, 1977, “Risk Assessment Review Group, SECY-77–350,” ML20087N224, NRC ADAMS.
121. Yellin, “The Nuclear Regulatory Commission’s Reactor Safety Study”; Joel Yellin to Edward A. Mason, July 1, 1976, ML20087N264, NRC ADAMS; Joel Yellin to Joseph M. Hendrie, August 23, 1977, ML20087N299, NRC ADAMS; Joel Yellin to Joseph M. Hendrie, February 13, 1978, ML20087N332, NRC ADAMS; Union of Concerned Scientists, The Risks of Nuclear Power Reactors; Okrent, On the History of the Evolution of Light Water Reactor Safety in the United States, 4–77—4–86.
122. Ad-Hoc Review Group, Risk Assessment Review Group Report, 2–3.
123. Ad-Hoc Review Group, Risk Assessment Review Group Report, 2.
124. US NRC, “Meeting with Risk Assessment Review Group,” September 7, 1978, 7907270220, NRC Legacy.
125. Ad-Hoc Review Group, Risk Assessment Review Group Report, 2.
126. Ad-Hoc Review Group, Risk Assessment Review Group Report, 7–9; Reactor Safety Study Final Report, III-90.
127. Ad-Hoc Review Group, Risk Assessment Review Group Report, 7–9; Nash, “From Safety to Risk,” 15; Cooke, Experts in Uncertainty, 19–41.
128. Ad-Hoc Review Group, Risk Assessment Review Group Report, viii-xi, 9–10, 32–33, 46–47.
129. Lewis, “The Safety of Fission Reactors,” 61.
130. Ad-Hoc Review Group, Risk Assessment Review Group Report, 40; NRC, Reactor Safety Study, Appendix III, p. 7. For similar criticisms of WASH-1400’s overconfidence in the results and precision of its estimates, see US EPA, Reactor Safety Study (WASH-1400): A Review of the Draft Report; Lichtenstein, Fischhoff, and Phillips, Calibration of Probabilities, 40.
131. Ad-Hoc Review Group, Risk Assessment Review Group Report, viii-xi, 9–10, 32–33, 46–47; “The Risk Factor in Nuclear Power,” The MacNeil/Lehrer Report, January 23, 1979, copy of transcript in possession of author.
132. Fischhoff et al., Acceptable Risk, 37; Slovic, Fischhoff, and Lichtenstein, “Rating the Risks,” in The Perception of Risk, ed. Slovic, 104–20; Committee on the Science of Science Communication, Communicating Science Effectively, 27.
133. Ken Pedersen to the Commissioners, October 11, 1978, “Summary of Prior Commission Statements on Report of the Reactor Safety Study,” October 11, 1978, ML20087N136, NRC ADAMS.
134. Joseph Hendrie to Morris Udall, January 18, 1979, ML11129A163, NRC ADAMS. See Frederick, “Predicting Three Mile Island.”
135. Burnham, “Nuclear Agency Revokes Support for Safety Study,” 1.
136. Wicker, “No Nuclear Credibility.”
137. “How Safe are Nuclear Reactors?”
138. “The Risk Factor in Nuclear Power,” The MacNeil/Lehrer Report, January 23, 1979, copy of transcript in possession of the author.
139. Emshwiller, “Nuclear Nemesis.”
140. Carlisle, “Probabilistic Risk Assessment in Nuclear Reactors”; Lipset and Schneider, The Confidence Gap, 13–40; Campbell, Collapse of an Industry, 67; US NRC, “Use of Probabilistic Risk Assessment Methods in Nuclear Regulatory Activities: Final Policy Statement,” 42622. A risk-based approach has proved very difficult to sell to a skeptical public; see Davies, “The Effectiveness of the Sizewell B Public Inquiry.”
141. Fischhoff, “Risk Perception and Communication.”
142. Dunlap, “Trends in Public Opinion Toward Environmental Issues,” 287–92. Rodney Carlisle argues the Rasmussen Report’s public communications problems were due to its excessive complexity. However, risk assessments often do not convince a skeptical public. See Carlisle, “Probabilistic Risk Assessment in Nuclear Reactors”; Rip, “The Mutual Dependence of Risk Research and Political Context,” 11; Campbell, Collapse of an Industry, 65–67; US Congress, Risk/Benefit Analysis in the Legislative Process: Joint Hearings Before the Subcommittee on Science, Research and Technology of the Committee on Science and Technology, US House of Representatives and the Subcommittee on Science, Technology, and Space of the Committee on Commerce, Science and Transportation, 96th Cong., 1st sess. (July 24 and July 25, 1979) (178–82); Slovic, “Perceived Risk, Trust and Democracy”; US Congress, Office of Technology Assessment, Nuclear Power in an Age of Uncertainty, 220, 229–30; Perrow, Normal Accidents, 3–12, 14, and 305–15. On Fukushima, see Downer, “Disowning Fukushima.”
143. US NRC, “Use of Probabilistic Risk Assessment Methods in Nuclear Activities: Final Policy Statement,” 42622.
144. Subcommittee on Energy and the Environment, Reactor Safety Study Review, 18.
145. “The Lewis Review of Rasmussen’s Report.”
146. Frank Miraglia, email to author, October 29, 2015.
147. President’s Commission on the Accident at Three Mile Island, “Deposition of Robert Jay Budnitz,” August 27, 1979, 7910310241, NRC Legacy; Lewis, “The Safety of Fission Reactors,” 62.
148. For a detailed description of the TMI-2 accident, see Walker, Three Mile Island, 72–78.
149. “The Lewis Review of Rasmussen’s Report.”
150. US NRC, Probabilistic Risk Assessment (PRA) Reference Document, 18.
151. “JCAE Safety Hearings: Rasmussen’s Debut.”
152. Solnick, “Compton Lecturer Criticizes NRC Set-Up,” 1, 3; Kemeny, The President’s Commission on the Accident at TMI, 62.
153. US NRC, Special Inquiry Group, Three Mile Island, 147–52, at 150.
154. US NRC, Human Factors Evaluation of Control Room Design and Operator Performance at Three Mile Island-2, v.
155. US NRC, Special Inquiry Group, Three Mile Island, 147–52, at 150.
156. “ACRS Urges New Safety Research Direction Using WASH-1400 Cornerstone.”
157. “Methods Refined Probabilistic Risk Studies Earn Increased Acceptance with Regulators.” The Rasmussen Report did not analyze the exact scenario of TMI, in part because it studied a Westinghouse pressurized water reactor rather than TMI’s Babcock and Wilcox design. See US NRC, Special Inquiry Group, Three Mile Island, 90 and 151. See also Kemeny, The President’s Commission on the Accident at TMI, 62.
158. US NRC, Special Inquiry Group, Three Mile Island, 90 and 151. See also Kemeny, The President’s Commission on the Accident at TMI, 62; “Methods Refined Probabilistic Risk Studies Earn Increased Acceptance with Regulators.”
159. Granger, Henrion, and Morris, Expert Judgments for Policy Analysis; US NRC, Eliciting and Analyzing Expert Judgment; Boring, “A Survey of Expert Elicitation Practices for Probabilistic Risk Assessment,” 1447–51; Cooke, Experts in Uncertainty, 27–40.
160. The count of cancellations includes a DOE list up to 1982 and the NRC’s list thereafter. US DOE, Nuclear Plant Cancellations, 6, 11–14; US NRC, Information Digest, 2019–2020, Appendix D and K; Maloney, McCormick, and Sauer, “On Stranded Cost Recovery in the Deregulation of the U.S. Electric Power Market,” 78–82; Cook, “Nuclear Follies.”
161. Cook, “Nuclear Follies,” 1; US DOE, Nuclear Plant Cancellations, 6, 11–14; US NRC, Information Digest, 2019–2020, Appendix D and K.
162. Campbell, Collapse of an Industry, 4–5; Hirsh, Technology and Transformation, 240–41.
163. Burnham, “U.S. Orders Construction Halt on Ohio Atom Plant”; “Nearly Completed Nuclear Plant Will be Converted to Burn Coal,” 1; Cook, “Nuclear Follies”; US NRC, Improving Quality and the Assurance of Quality in the Design and Construction of Nuclear Power Plants, 2–9; Office of Technology Assessment, Nuclear Power in an Age of Uncertainty, 113–42; Hirsh, Technology and Transformation, 152.
164. Campbell, Collapse of an Industry, 4; Cohn, Too Cheap to Meter, 106.
165. US DOE, Nuclear Plant Cancellations, 6, 11–14; US NRC, Selected Review of Foreign Licensing Practices for Nuclear Power Plants, 120; Haas, Thomas, and Ajanovic, “The Historical Development of the Costs of Nuclear Power”; Hultman and Koomey, “Three Mile Island: The Driver of US Nuclear Power’s Decline?”; Sovacool, Gilbert, and Nugent, “An International Comparative Assessment of Construction Cost Overruns for Electricity Infrastructure,” 155–56; Gilbert et al., “Cost Overruns and Financial Risk in the Construction of Nuclear Power Reactors,” 645–47; Cohn, Too Cheap to Meter, 101.
166. Cohen, The Nuclear Energy Option, chapter 9.
167. Weinberg et al., The Second Nuclear Era, 58–59; GAO, Nuclear Regulation: Process for Backfitting Changes in Nuclear Plants Has Improved, 2, 16–17, 22, 36–37, 49, 63–64; US NRC, A Prioritization of Generic Safety Issues, NUREG-0933, 3–7. The use of PRA in backfit analysis expanded and became more sophisticated, as is evident in the regular revisions to Regulatory Analysis Guidelines of the U.S. Nuclear Regulatory Commission, NUREG/BR-0058.
168. US NRC, Severe Accident Risks: An Assessment for Five U.S. Nuclear Power Plants; US NRC, A Proposed Risk Management Regulatory Framework.
169. Rasmussen, “Measuring Nuclear Risks,” 16.
170. Levine and Stetson, “How PRA is Being Used in the USA.”

CHAPTER 4
1. US GAO, Nuclear Regulation: Process for Backfitting, 2.
2. “AIF Advises NRC on Use of Probabilistic Risk Assessment in Licensing,” 9.
3. “Probabilistic Assessment Seen Increasingly Accepted, But Caution Urged,” 10.
4. “At Best, Commissioner Support is Lukewarm for Way of Ranking Safety Issues,” 9–10.
5. Hill, “A Farewell Interview with NRC Chairman Rowden,” 27; President’s Commission on the Accident at Three Mile Island, “Deposition of Robert Jay Budnitz,” August 27, 1979, 7910310241, NRC Legacy; “Probabilistic Assessment Seen Increasingly Accepted, But Caution Urged,” 10–11.
6. Graham, “The Risk Not Reduced,” 382; US NRC, Recommendations for Enhancing Reactor Safety in the 21st Century, 18, 20.
7. This is common NRC language; see US NRC, “About NRC,” https://www.nrc.gov/about-nrc.html.
8. US NRC, “10 CFR Part 50, Station Blackout,” 23203; US NRC, Reliability of Emergency AC Power Systems at Nuclear Power Plants, xiii–xiv; US NRC, Station Blackout Accident Analyses, 60–63; US NRC, Evaluation of Station Blackout Accidents at Nuclear Power Plants, 1–1 to 1–4; US NRC, Standard Review Plan, Section 10.4.9; US NRC, Loss of Main and Auxiliary Feedwater Event at the Davis-Besse Plant on June 9, 1985.
9. Advisory Committee on Reactor Safeguards, “Subcommittee Meeting on Anticipated Transients Without Scram (ATWS),” March 26, 1980, 120–144, 8004210313, NRC Legacy; William J. Dircks to the Commissioners, “Amendments to 10 CFR 50 Related to Anticipated Transients Without Scram (ATWS) Events, SECY-83–293,” July 19, 1984, 8308080642, NRC Legacy.
10. In September 2003, the NRC assessed the effectiveness of the ATWS rule. It found that the design changes and procedural improvements mandated by the rule had increased safety for less expense than estimated. All four reactor vendors met that target. See US NRC, Regulatory Effectiveness of the Anticipated Transient Without Scram Rule, 20. For a critical assessment of the NRC’s handling of ATWS, see Lochbaum, “Anticipated Transient Without Scram.”
11. Harold R. Denton to W. Andrew Baldwin, “Director’s Decision Under 10 CFR 2.205, DD-80–22,” June 19, 1980, 8006260158, NRC Legacy.
12. Gus Speth to John Ahearne, March 20, 1980, ML20079E535, NRC ADAMS.
13. Harold R. Denton to the Commissioners, “Accident Considerations Under NEPA, SECY-80–131,” March 11, 1980, 8004040200, NRC Legacy.
14. Harold R. Denton to W. Andrew Baldwin, “Director’s Decision Under 10 CFR 2.205, DD-80–22,” June 19, 1980, 8006260158, NRC Legacy; US NRC, “10 CFR Parts 50 and 51 Nuclear Power Plant Accident Considerations Under the National Environmental Policy Act of 1969,” 40103.
15. Deukmejian v. NRC, 751 F.2d 1287 (DC Cir., December 31, 1984). A second challenge to the NRC’s position on Class 9 accidents diverged from the DC circuit and has left this issue unsettled. See Limerick Ecology Action, Inc. v. United States Nuclear Regulatory Commission, 869 F.2d 719 (3rd Cir. 1989); Weintraub, “NEPA and Uncertainty,” 1583–86.
16. “Union of Concerned Scientists’ Petition for Decommissioning of Indian Point Unit 1 and Suspension of Operation of Units 2 & 3,” September 17, 1979, 7910180121, NRC Legacy.
17. Prior to the Three Mile Island accident, the NRC’s limited but increasing emphasis on emergency planning and drills can be seen in the issuance and revisions to guidance documents in 1975 and 1977. See US NRC, Regulatory Guide 1.101: Emergency Planning for Nuclear Power Plants, November 1975 and its revision in March 1977, ML13350A291 and ML12305A226, NRC ADAMS. After Three Mile Island, the NRC, with the Federal Emergency Management Agency, developed more ambitious guidance. See US NRC and FEMA, Criteria for Preparation and Evaluation of Radiological Emergency Response Plans.
18. “Union of Concerned Scientists’ Petition for Decommissioning of Indian Point Unit 1 and Suspension of Operation of Units 2 & 3,” September 17, 1979, 7910180121, NRC Legacy.
19. “Director’s Decision Under 10 CFR 2.206, DD-80–5,” February 11, 1980, Nuclear Regulatory Commission Issuances: Opinions and Decisions, Vol. 11 (1980): 369.
20. John Garrick, emails to the author, March 18, 2016 and April 11, 2020.
21. Commonwealth Edison Company, Consolidated Edison Company, and Power Authority of the State of New York, “Indian Point and Zion Near Site Study: Report to the Nuclear Regulatory Commission,” February 20, 1980, ML093630848, NRC ADAMS.
22. Commonwealth Edison Company, Consolidated Edison Company, and Power Authority of the State of New York, “Indian Point and Zion Near Site Study: Report to the Nuclear Regulatory Commission,” February 20, 1980, ML093630848, NRC ADAMS; US NRC, Preliminary Assessment of Core Melt Accidents at the Zion and Indian Point Nuclear Power Plants and Strategies for Mitigating Their Effect; Garrick, Quantifying and Controlling Catastrophic Risk, 20–21, Appendix A; John Garrick, email to the author, April 14 and April 16, 2020.
23. John Garrick, email to the author, March 18, 2016; Commonwealth Edison Company, Consolidated Edison Company, and Power Authority of the State of New York, “Indian Point and Zion Near Site Study: Report to the Nuclear Regulatory Commission,” February 20, 1980, ML093630848, NRC ADAMS.
24. Power Authority of the State of New York and Consolidated Edison Company of New York, Inc., Indian Point Probabilistic Safety Study: Overview and Highlights (1982), ML102520197, NRC ADAMS. See also Commonwealth Edison, Zion Probabilistic Safety Study, September 8, 1981, 8109280420, NRC Legacy.
25. US NRC, U.S. Nuclear Regulatory Commission Policy and Planning Guidance 1984, NUREG-0885, Issue 3, 17–18; Commission, “Order, In the Matter of Consolidated Edison Company and Power Authority of the State of New York, Docket Nos. 50–247 and 50–286,” May 30, 1980, 8006060523, NRC Legacy.
26. US NRC, Review and Evaluation of the Indian Point Probabilistic Safety Study, NUREG/CR-2934, 2.5–4, 2.5–13, 2.7.4–1, and 5–1.
27. US NRC, Atomic Safety and Licensing Board, “In the Matter of Consolidated Edison Company of New York (Indian Point, Unit No. 2) and Power Authority of the State of New York (Indian Point, Unit No. 3), Docket No. 50–247-SP, 50–286-SP (ASLBP No. 81–466–03-SP), LBP-83–68,” October 24, 1983, 18 NRC 811 (1983).
28. US NRC, Atomic Safety and Licensing Board, “In the Matter of Consolidated Edison Company of New York (Indian Point, Unit No. 2) and Power Authority of the State of New York (Indian Point, Unit No. 3), Docket No. 50–247-SP, 50–286-SP (ASLBP No. 81–466–03-SP), LBP-83–68,” October 24, 1983, 18 NRC 811 (1983).
29. US NRC, Atomic Safety and Licensing Board, “In the Matter of Consolidated Edison Company of New York (Indian Point, Unit No. 2) and Power Authority of the State of New York (Indian Point, Unit No. 3), Docket No. 50–247-SP, 50–286-SP (ASLBP No. 81–466–03-SP), LBP-83–68,” October 24, 1983, 18 NRC 811 (1983); Owen, Matheson, and Howard, “The Value of Life and Nuclear Design,” 514.
30. US NRC, “In the Matter of Consolidated Edison Company.”
31. US NRC, “In the Matter of Consolidated Edison Company,” 1092–1100.
32. US NRC, “In the Matter of Consolidated Edison Company,” 1057–58.
33. Garrick, Quantifying Global Catastrophic Risks, 5–7, 54–63.
34. John Garrick, email to author, April 15, 2020.
35. Joint Committee on Atomic Energy, Investigation of Charges Relating to Nuclear Reactor Safety: Hearings Before the Joint Committee on Atomic Energy, 94th Cong., 2nd sess., March 2, 1976, 43–50.
36. Max W. Carbon to Joseph Hendrie, “Report on Quantitative Safety Goals,” May 16, 1979, 7906200050, NRC Legacy.
37. Okrent and Whipple, An Approach to Societal Risk Acceptance Criteria and Risk Management; US Congress and Senate, Risk/Benefit Analysis in the Legislative Process, 187–88; Fischhoff, Slovic, and Lichtenstein, “Weighing the Risks: Which Risks are Acceptable?” 17–20, 32–38.
38. Leonard Bickwit to Hendrie, Gilinsky, Kennedy, Bradford, and Ahearne, “Adequate Protection of the Health and Safety of the Public,” October 18, 1979, 8007210266, NRC Legacy.
39. O’Neill, “Nuclear Industry Mobilizing for Degraded Core Rulemaking,” 3–6.
40. Norman C. Rasmussen to Saul Levine, March 17, 1981, box 7, RG 542, Rasmussen Papers.
41. O’Neill, “Nuclear Industry Mobilizing for Degraded Core Rulemaking,” 4. See also “Safety Goal Planned”; Buhl, “The IDCOR Program,” 205–17.
42. US NRC, Three Mile Island: A Report to the Commissioners, 151–52. See also US NRC, TMI-2 Lessons Learned Task Force Final Report, 1–2, 4–1 to 4–3; Kemeny, Report of the President’s Commission on the Accident at Three Mile Island, 61, 63.
43. US NRC, Toward a Safety Goal, 11. So good were the NRC reports on safety goals that one was reprinted by Cambridge University Press (Fischhoff et al., Acceptable Risk).
44. US NRC, Toward a Safety Goal, 18, 20; Edward J. Hanrahan to the Commissioners, “Toward a Safety Goal: Discussion of Preliminary Policy Considerations, SECY-80–551A,” December 30, 1980, 8101200623, NRC Legacy. On risk society theory, see Beck, Risk Society, 29, 177–78; Beck, World at Risk, 111; Perrow, Normal Accidents, 3–12, 14, 305–15; Perrow, “Not Risk but Power,” 298–300; Perrow, “Risky Systems: The Habit of Courting Disaster,” 348; Rijpma, “From Deadlock to Dead End,” 40; Boudia and Jas, “Risk and Risk Society in Historical Perspective,” 317–31.
45. Mattson, “Concepts, Problems, and Issues in Developing Safety Goals and Objectives for Commercial Nuclear Power,” 706; D. Clark Gibbs to Secretary of the Commission, “Comments on Federal Register Notice Entitled ‘Development of a Safety Goal: Preliminary Policy Considerations’ and NUREG-0764, ‘Towards a Safety Goal: Discussion of Preliminary Policy Considerations,’ (46 FR 18827),” May 15, 1981, 8105260620, NRC Legacy; Technology for Energy Corporation, Technical Report 1.1 Safety Goal Evaluation.
46. Mattson, et al., “Concepts, Problems, and Issues in Developing Safety Goals and Objectives for Commercial Nuclear Power,” 706; D. Clark Gibbs to Secretary of the Commission, “Comments on Federal Register Notice Entitled ‘Development of a Safety Goal: Preliminary Policy Considerations’ and NUREG-0764, ‘Towards a Safety Goal: Discussion of Preliminary Policy Considerations,’ (46 FR 18827),” May 15, 1981, 8105260620, NRC Legacy; Technology for Energy Corporation, Technical Report 1.1 Safety Goal Evaluation; Fischhoff, “Managing Risk Perceptions,” 82.
47. “NRC Readies Safety Goal Proposal for Fireworks in Harper’s Ferry.” See also “NRC Focusing on Two Qualitative, Three Quantitative Goals”; “Indignant Intervenors Propose Substitute Safety Goals.”
48. Paul Gunter, Titans of Nuclear Podcast, Episode 91, October 2, 2018, https://www.stitcher.com/podcast/bret-kugelmass/why-not-nuclear/e/56523374 (accessed April 3, 2019).
49. “NRC Readies Safety Goal Proposal for Fireworks in Harper’s Ferry,” 3–4.
50. “Indignant Intervenors Propose Substitute Safety Goals,” Inside NRC, 2–3.
51. “NRC Focusing on Two Qualitative, Three Quantitative Goals,” 1–2.
52. “SERs, Operating Plants, Controls, and ATWS Top New Chairman’s Priorities.”
53. “Palladino Says Proposed Goals Reflect NRC Trend Toward PRA Use.”
54. US NRC, “Proposed Policy Statement on Safety Goals for Nuclear Power Plants”; Spangler, “De Minimis Risk Concepts in the US Nuclear Regulatory Commission, Part 3,” 98.
55. “Palladino Says Proposed Goals Reflect NRC Trend Toward PRA Use.”
56. US NRC, “Proposed Policy Statement on Safety Goals for Nuclear Power Plants.”
57. EDO to Commissioners, “Severe Accident Rulemaking and Related Matters, SECY-82–001,” January 4, 1982, 8201190416, NRC Legacy; P. Shewmon to Nunzio J. Palladino, September 14, 1982, 8208290022, NRC Legacy; “ACRS not Satisfied with Implementation Plan”; “NRC Staff and ACRS Pose ‘Catch-22’ for Safety Goal Proposal.”
58. “Safety Goals Finally Approved.”
59. US NRC, “Minutes of the 301st ACRS Meeting, May 9–11, 1985, Washington, DC,” August 20, 1985, 8508210314, NRC Legacy.
60. US NRC, Backfitting Guidelines, NUREG-1409, Appendix A.
61. “Staff Backs Justifying Safety Goal Backfits with Economic Benefits.”
62. US NRC, “Safety Goals for the Operations of Nuclear Power Plants; Policy Statement,” 30028–33.
63. US NRC, “Safety Goals for the Operations of Nuclear Power Plants; Policy Statement,” 30028–33.
64. Gary Holahan, email to the author, September 18, 2018.
65. US House of Representatives, Committee on Energy and Commerce, Nuclear Reactor Safety: Hearings Before the Subcommittee on Energy Conservation and Power, 99th Cong., 2nd sess., May 22, 1986, 21–27, 37–40, 230; Lindeman, “NRC Revises Estimate of Probability of Severe Accident in Next 20 Years.”
66. US NRC, U.S. Nuclear Regulatory Commission Policy and Planning Guidance 1987, NUREG-0885, Issue 6, 6, 23.
67. US NRC, Regulatory Guide 1.174, 5.
68. James M. Taylor to the Commissioners, “Interim Guidance on Staff Implementation of the Commission’s Safety Goal Policy, SECY-91–270,” August 27, 1991, 9109030213, NRC Legacy.
69. James M. Taylor to the Commissioners, “Interim Guidance on Staff Implementation of the Commission’s Safety Goal Policy, SECY-91–270,” August 27, 1991, 9109030213, NRC Legacy.
70. Nuclear Energy Agency, Use and Development of Probabilistic Safety Assessment.
71. US NRC, Perspectives on Reactor Safety, NUREG/CR-6042, 2.5–23 to 25.
72. C. J. Heltemes to the Commissioners, “Status Report-Office for Analysis and Evaluation of Operational Data (AEOD), SECY-79–371A,” November 21, 1979, 8007020248, NRC Legacy; Luis Reyes to the Commissioners, “Status of the Accident Sequence Precursor (ASP) Program and the Development of Standardized Plant Analysis Risk (SPAR) Models, SECY-05–0192,” October 24, 2005, ML052700542, NRC ADAMS.
73. US NRC, “Policy Statement on Severe Reactor Accidents,” 32138–50.
74. US NRC, Perspectives on Reactor Safety, NUREG/CR-6042, 2.5–2; US NRC, Severe Accident Risks: An Assessment for Five U.S. Nuclear Power Plants, NUREG-1150.
75. In the early 1980s, the NRC made substantial improvements to PRA methodology. See US NRC, Fault Tree Handbook, NUREG-0492; Hays, “The Evolution of Probabilistic Risk Assessment in the Nuclear Industry,” 124–27; US NRC, Probabilistic Risk Assessment (PRA) Reference Document; Okrent, “The Safety Goals of the U.S. Nuclear Regulatory Commission.”
76. NRC work on and use of expert elicitation was extensive. See US NRC, Eliciting and Analyzing Expert Judgment; US NRC, Office of Nuclear Regulatory Research (Jing Xing and Stephanie Morrow), White Paper: Practical Insights and Lessons Learned on Implementing Expert Elicitation, October 13, 2016, ML16287A734, NRC ADAMS.
77. US NRC, “Consideration of Additional Requirements for Containment Venting Systems for Boiling Water Reactors with Mark I and Mark II Containments (REDACTED VERSION), SECY-12–0157,” December 7, 2012, ML12345A030, NRC ADAMS.
78. S. H. Hanauer to J. F. O’Leary, F. E. Kruesi, and L. Rogers, September 20, 1972, ML111530443, NRC ADAMS.
79. On the regulatory history of the Mark I containment, see US NRC, “Consideration of Additional Requirements for Containment Venting Systems for Boiling Water Reactors with Mark I and Mark II Containments (REDACTED VERSION), SECY-12–0157,” December 7, 2012, ML12345A030, NRC ADAMS.
80. US NRC, Severe Accident Risks: An Assessment for Five U.S. Nuclear Power Plants; US NRC, A Proposed Risk Management Regulatory Framework.
81. US NRC, “Individual Plant Examination for Severe Accident Vulnerabilities-10 CFR 50.54(f) (Generic Letter 88–20),” November 23, 1988, Nuclear Regulatory Commission Website, www.nrc.gov/reading-rm/doc-collections/gen-comm/gen-letters/1988/gl88020.html (accessed June 8, 2015).
82. Trip Rothschild to Jim Tourtellotte, “Draft Charnoff Committee Comments on Legislative Package,” July 26, 1982, 8208030511, NRC Legacy; US NRC, “Early Site Permits; Standard Design Certifications; and Combined Licenses for Nuclear Power Reactors,” 15372–400; “Briefing by Westinghouse on Advanced PWR Program,” November 1, 1989, ML15153A294, NRC ADAMS; Bertram Wolfe to Kenneth M. Carr, November 26, 1990, 9101080062, NRC Legacy.
83. Blake, “Congress Passes Very Pronuclear Energy Bill,” 33.
CHAPTER 5
1. US NRC Special Inquiry Group, Three Mile Island, 89.
2. US NRC Special Inquiry Group, Three Mile Island, 89–121; Kemeny, Report of the President’s Commission on the Accident at Three Mile Island, 20–22.
3. US NRC Special Inquiry Group, Three Mile Island, 99–100, 161; Kemeny, Report of the President’s Commission, 64, 66–67; US NRC, The Status of Recommendations of the President’s Commission on the Accident at Three Mile Island, 38–39.
4. “Wake Me If It’s a Meltdown,” Time, April 13, 1987; US NRC, U.S. Nuclear Regulatory Commission Annual Report 1978, 109–10; Ryan, “Seminar Sees Nuclear Industry Evolving Unique Safety Culture,” 14; Victor Stello to J. C. Everett, III, March 31, 1987, ML021580019, NRC ADAMS; US NRC, Annual Report 1987, 10, 201–10. See also US NRC, “The Regulatory Craft: An Interview with Commissioner Stephen Burns,” May 2019, https://www.youtube.com/watch?v=uAbz9B78tNA&t=338s; Rees, Hostages of Each Other, 25, 111–17.
5. US NRC, Proceedings of the U.S. Nuclear Regulatory Commission: NRC Regulatory Information Conference, 1–2.
6. Victor Stello to Lando Zech, “SECY-85–256, Enforcement Policy on Vendors,” October 6, 1986, box 18, Lando Zech Papers, Hoover Institution, Stanford University, CA.
7. US NRC, Knowledge Management Seminar, “Celebrating 25 Years of NRC’s Principles of Good Regulation,” January 19, 2016, ML16048A579, NRC ADAMS.
8. As Daniel Miller demonstrates in his study of the NRC’s maintenance rule, the Three Mile Island accident reviews made little mention of the importance of maintenance to safety, and a half dozen years passed before the NRC made better maintenance programs a priority. See Miller, “Maintaining the Atom,” 94–112; Rees, Hostages of Each Other, 14–15, 23; Ryan, “Nuclear Plant Success Said to Depend on Developing Managers.”
9. Rees, Hostages of Each Other, 9.
10. This description of the Davis-Besse event is based on US NRC, Loss of Main and Auxiliary Feedwater Event at the Davis-Besse Plant on June 9, 1985, NUREG-1154, section 3.
11. US NRC, “Briefing on Davis-Besse (Public Meeting),” July 24, 1985, 8508050176, NRC Legacy.
12. Bukro, “Nuclear Mishap Stuns Regulators.”
13. Thomas M. Roberts to Edward J. Markey, July 17, 1985, 8510070080, NRC Legacy; US NRC, Transient Response of Babcock & Wilcox-Designed Reactors, NUREG-0667, 2–5.
14. US NRC, Loss of Main and Auxiliary Feedwater Event at the Davis-Besse Plant on June 9, 1985, 6–1 and 8–1. See also US NRC, “Briefing on Davis-Besse (Public Meeting),” July 24, 1985, 8508050176, NRC Legacy; Miller, “Maintaining the Atom,” 123.
15. Mike Derivan, “Nuke Knews,” 2014, http://www.nukeknews.com/Look.html.
16. Ryan, “NRC Faults Plant Maintenance and Says Utilities Must Find Answers.”
17. Airozo, “Poor Maintenance Cited as Cause of Many Forced Outages.”
18. US NRC, Trends and Patterns in Maintenance Performance in the U.S. Nuclear Power Industry, B-15 to B-22; Moray and Huey, Human Factors Research and Nuclear Safety, 24–27.
19. US NRC, Maintenance Approaches and Practices in Selected Foreign Nuclear Power Programs.
20. The difference between prescriptive and performance-based requirements is the difference between means and ends. The NRC’s prescriptive requirements, as traditionally used, tell a licensee what they shall do—the acceptable means to reach a safety goal, usually by specifying design features. A performance-based requirement relies upon a licensee demonstrating that a certain design produces satisfactory, measurable performance results. It provides more flexibility as to the means licensees use to achieve safety goals. See US NRC, Maintenance Approaches and Practices in Selected Foreign Nuclear Power Programs, 8–12.
21. Jordan, “Proposed Maintenance Policy would Formalize NRC-NUMARC Agreement.”
22. Jordan, “Proposed Maintenance Policy would Formalize NRC-NUMARC Agreement.”
23. US NRC, “Briefing on Initiative to Improve Maintenance Performance (Public Meeting),” November 20, 1986, 8612010093, NRC Legacy.
24. US NRC, Proceedings of the Public Workshop for NRC Rulemaking on Maintenance of Nuclear Power Plants.
25. Airozo and Jordan, “Carr Says He’ll ‘Chew On Management’ to Improve Plant Maintenance.”
26. “NRC is Moving Forward with Plans to Issue a Maintenance Rule.”
27. Jordan, “Zech Unswayed by Claim that Maintenance Rule Could Kill Nuclear Option”; US NRC, “Final Commission Policy Statement on Maintenance of Nuclear Power Plants,” 9430–31; US NRC, “Ensuring the Effectiveness of Maintenance Programs for Nuclear Power Plants,” 47822–29.
28. Airozo, “Zech Complains about Lack of Industry Assistance on Maintenance Rule.”
29. Forrest J. Remick to Lando W. Zech, Jr., “Proposed Final Rulemaking Related to Maintenance of Nuclear Power Plants,” April 11, 1989, 8904240487, NRC Legacy.
30. Airozo, “Commission Says Industry Overkill of Maintenance Rule May Prove Unwise,” 5.
31. Bill M. Morris, “Draft Regulatory Guide DG-1001, ‘Maintenance Programs for Nuclear Power Plants,’ ” August 1, 1989, ML003739384, NRC ADAMS; US NRC, A Process for Risk-Focused Maintenance, NUREG/CR-5695.
32. James H. Sniezek to Kenneth A. Strahm, December 27, 1980, 9104110128, NRC Legacy; William T. Russell to Thomas E. Tipton, January 31, 1991, Attachment 1 to Enclosure 1, SECY-91–110, 9105060172, NRC Legacy.
33. James Taylor to the Commissioners, “Staff Evaluation and Recommendation on Maintenance Rulemaking, SECY-91–110,” April 26, 1991, 9105060172, NRC Legacy; Helen Nicolaras Pastis to Thomas E. Murley and others, “Summary of 372nd ACRS Full Committee Meeting (April 1991),” May 6, 1991, 9105160181, NRC Legacy.
34. Stellfox, “Commission to Get Staff Recommendation Against Maintenance Rule.”
35. Commissioner Curtiss to Commissioners Carr, Roberts, Rogers, and Remick, “Rulemaking Options—Maintenance,” December 22, 1989, 9408290146, NRC Legacy; US NRC, “Briefing on the Maintenance Rule,” May 6, 1991, 9105130247, NRC Legacy; Commissioner Curtiss to Samuel J. Chilk, “Notation Vote Response Sheet,” June 17, 1991, 9107080322, NRC Legacy; “Monitoring the Effectiveness of Maintenance at Nuclear Power Plants” (1991), 31318.
36. Byron Lee to Ivan Selin, August 16, 1991, 9109060199, NRC Legacy.
37. H. W. Keiser to C. L. Miller, October 23, 1991, 9111050212, NRC Legacy.
38. US NRC, “Briefing on Implementing Guidance for the Maintenance Rule and Industry Verification and Validation Effort,” January 29, 1993, ML15119A065, NRC ADAMS.
39. Miller, “Maintaining the Atom,” 206–7.
40. US NRC, “Advisory Committee on Reactor Safeguards 362nd ACRS Meeting,” June 8, 1990, 342–348, 9006120418, NRC Legacy; Harry Rood, “Summary of November 27, 1990 Public Meeting to Discuss the Risk-Based Technical Specifications Program Developed by EPRI, PG&E, and Westinghouse,” December 10, 1990, 9012130032, NRC Legacy; Christopher I. Grimes to Brian K. Grimes, “Summary of Owners Groups Meeting on the Improved Standard Technical Specifications: June 29 and 30, 1993,” July 28, 1993, 9309030314, NRC Legacy; US NRC, Lessons Learned from Early Implementation of the Maintenance Rule at Nine Nuclear Power Plants; Apostolakis and Mosleh, Risk-Informed Decision Making: A Survey of United States Experience, 87–93. By 2004, many countries had adopted the maintenance rule and risk monitors. There was notable caution with the new tools among some nations, such as Japan and France, which did not typically favor maintenance during power operations. See NEA, Risk Monitors, 14, 97, 145–54.
41. US NRC, Lessons Learned from Maintenance Rule Baseline Inspections, 2–29 to 2–37; James M. Taylor to Zach Pate, October 6, 1994, 9410170328, NRC Legacy; US NRC, “Monitoring the Effectiveness of Maintenance at Nuclear Power Plants” (1999); Nuclear Energy Institute, Industry Guideline for Monitoring the Effectiveness of Maintenance at Nuclear Power Plants, NUMARC 93–01, Revision 4A, Section 11, ML11116A198, NRC ADAMS; Riley and Weglian, “Applied Risk Management in Electric Power Plant Decision Making,” 5.
42. US NRC, No Undue Risk, 8–9.
43. Nuclear Energy Institute, Enhancing Nuclear Plant Safety and Reliability, i.
44. Navigant Consulting, Assessment of the Nuclear Power Industry, 35, 37.
45. “U.S. Nuclear Outages Were Less Than 3% of Capacity This Summer,” September 28, 2015, U.S. Energy Information Administration, https://www.eia.gov/todayinenergy/detail.php?id=23112; Electric Power Research Institute, Safety and Operational Benefits of Risk-Informed Initiatives; Dolley, “EPRI: Risk-Informed Approaches Have Reduced Accident Risk.”
46. Lochbaum, “The NRC’s Maintenance Rule.”
47. Sean Peters, et al., “Organizational Factors in PRA: Twisting Knobs and Beyond,” Proceedings of the 2019 International Topical Meeting on Probabilistic Safety Assessment and Analysis, Charleston, SC, April 28-May 3, 2019, ML19057A474, NRC ADAMS.
48. Rickover used a colloquialism for the useless act of teaching someone to do what they already know. See Naval Nuclear Propulsion Program: Hearings Before the Joint Committee on Atomic Energy, 91st Cong., 2nd sess., March 19 and March 20, 1970, 31, 107–8.
49. Valerie Barnes, email to the author, November 9, 2019.
50. Fischhoff, et al., Acceptable Risk.
51. Finlayson, et al., Human Engineering of Nuclear Power Plant Control Rooms.
52. “Evaluation of Incidents of Primary Coolant Release from Operating Boiling Water Reactors,” October 30, 1972, ML17151A684, NRC ADAMS.
53. Finlayson, et al., Human Engineering of Nuclear Power Plant Control Rooms, 6–1 to 7–16.
54. Turner, “The Organizational and Interorganizational Development of Disasters,” 378–97.
55. Walker, Three Mile Island, 209–25.
56. Kemeny, Report of the President’s Commission on the Accident at Three Mile Island, 63–64.
57. US NRC, Three Mile Island, 2:920, 2:1227.
58. Harold Denton to the Commissioners, “Guidelines for Utility Management Structure and Technical Resources, SECY-80–440,” September 19, 1980, 8011060353, NRC Legacy; R. W. Jurgensen to Harold Denton, December 8, 1980, 8012120504, NRC Legacy; Stephen H. Howell to Harold R. Denton, November 20, 1980, 8012020363, NRC Legacy; E. E. Utley to Harold Denton, November 20, 1980, 8011250362, NRC Legacy; Philip Crane to Harold R. Denton, December 8, 1980, 8012150149, NRC Legacy; US NRC, Critical Human Factors Issues in Nuclear Power Regulation, 3:193, 3:194–95.
59. Nadel, Analysis of Processes Used in Evaluating Utility Management and Organization for an NRC Operating License.
60. US NRC, Organizational Analysis and Safety for Utilities with Nuclear Power Plants, 1:8–9, 2:7.
61. US NRC, Organizational Analysis and Safety for Utilities with Nuclear Power Plants, 1:13, 1:38, 2:1–5.
62. US NRC, Safety Culture: A Survey of the State-of-the-Art, 27–32; US NRC, A Guide to Literature Relevant to the Organization and Administration of Nuclear Power Plants.
63. US NRC, Organizational Analysis and Safety for Utilities with Nuclear Power Plants, 2:1–6.
64. US NRC, “Nuclear Regulatory Commission Meeting, Briefing on Human Factors Program Plan,” December 14, 1982, 8212230393, NRC Legacy.
65. US NRC, “Revised Program Element: Management and Organization, For the Human Factors Program Plan, SECY-82–462A,” January 7, 1983, 8301130199, NRC Legacy.
66. “NRC Budget Provides a Big Hike for Waste Management, Nothing for Breeder.”
67. US NRC, “Celebrating 25 Years of NRC’s Principles of Good Regulation,” January 19, 2016, ML16048A579, NRC ADAMS.
68. US NRC, “Revised Program Element: Management and Organization, For the Human Factors Program Plan, SECY-82–462A,” January 7, 1983, 8301130199, NRC Legacy.
69. US NRC, Implications of the Accident at Chernobyl for Safety Regulation of Commercial Nuclear Power Plants in the United States, 1:1–23.
70. Rees, Hostages of Each Other, 113.
71. US NRC, “Policy Statement on the Conduct of Nuclear Power Plant Operations,” 3424. See also Victor Stello to J. C. Everett, III, March 31, 1987, ML021580019, NRC ADAMS; US NRC, U.S. Nuclear Regulatory Commission Annual Report 1987, 10, 201–10; US NRC, “The Regulatory Craft: An Interview with Commissioner Stephen Burns,” 2019, https://www.youtube.com/watch?v=uAbz9B78tNA&t=338s; Sandia National Laboratories, An Overview of the Evolution of Human Reliability Analysis, 10; Rees, Hostages of Each Other; US NRC, “Nuclear Regulatory Commission Meeting, Briefing on NAS Human Factor Recommendations,” May 19, 1988, ML15132A550, NRC ADAMS; Victor Stello to the Commissioners, “NRC’s Human Factors Programs and Initiatives, SECY-89–183,” June 16, 1989, ML15153A122, NRC ADAMS.
72. Moray and Huey, Human Factors Research and Nuclear Safety, 4, 13–15; Advisory Committee on Reactor Safeguards, “335th General Meeting,” March 10, 1988, 8803170286, NRC Legacy.
73. Perrow, Normal Accidents; LaPorte, Roberts, Rochlin, “Aircraft Carrier Operations at Sea”; LaPorte and Consolini, “‘Working in Practice but Not in Theory’”; Weick, “Organizational Culture as a Source of High Reliability”; Bierly and Spender, “Culture and High Reliability Organizations: The Case of the Nuclear Submarine.” There is a vast literature on the debate between scholarly adherents of normal accident theory and high reliability organizational studies. A place to start is Sagan, “The Problem of Redundancy Problem.”
74. Cooke and Rousseau, “Behavioral Norms and Expectations”; Freudenburg, “Perceived Risk, Real Risk: Social Science and the Art of Probabilistic Risk Assessment.”
75. US NRC, “Nuclear Regulatory Commission, Briefing on NAS Human Factor Recommendations,” May 19, 1988, ML15132A550, NRC ADAMS; Victor Stello to the Commissioners, “NRC’s Human Factors Programs and Initiatives, SECY-89–183,” June 16, 1989, ML15153A122, NRC ADAMS.
76. US NRC, Nuclear-Power-Plant Severe-Accident Research Plan, 3–2.
77. Apostolakis, Bickel, and Kaplan, “Probabilistic Risk Assessment in the Nuclear Power Utility Industry,” 91–94.
78. George Apostolakis, email to the author, March 2, 2020.
79. US NRC, Influence of Organizational Factors on Performance Reliability.
80. Haber, O’Brien, and Ryan, “Model Development for the Determination of the Influence of Management on Plant Risk”; Haber, et al., “The Nuclear Organization and Management Analysis Concept Methodology”; James Taylor to the Commissioners, “Organizational Factors Research Progress Report, SECY-90–349,” October 9, 1990, 9010190086, NRC Legacy.
81. “Diagnostic Evaluation Report for McGuire Nuclear Station,” March 8, 1988, 8804130299, NRC Legacy; “Nuclear Regulatory Commission, Briefing on Effectiveness of the Diagnostic Evaluations,” November 23, 1988, ML15139A402, NRC ADAMS.
82. Nelson, “NRC Scores Commonwealth Edison for Operating Problems at Zion”; Zuercher, “NYPA Slow to Correct Fitzpatrick Deficiencies, NRC Diagnostic Report Says”; Stellfox, “South Texas Project 12th Plant to Get NRC’s Diagnostic Evaluation”; Stellfox, “Nuclear Plant Management Ills Prompt NYPA Chief Brons’ Departure”; Stuart Rubin to Richard P. La Rhette, June 1, 1995, 9407190351, NRC Legacy.
83. Brian Haagensen, email to the author, November 11, 2019.
84. Thomas G. Ryan, “Organizational Factors Research Lessons Learned and Findings (1991),” September 1991, ML090170627, NRC ADAMS. All quotations in this and the next paragraph are from this source.
85. Kramer and Haber, “Organizational Performance Research at the U.S. Nuclear Regulatory Commission.”
86. George Apostolakis, email to the author, March 2, 2020.
87. Kramer and Haber, “Organizational Performance Research at the U.S. Nuclear Regulatory Commission.”
88. James M. Taylor to the Commissioners, “Review of Organizational Factors Research, SECY-93–020,” February 1, 1993, 9302040223, NRC Legacy.
89. L. Joseph Callan to the Commissioners, “Proposed Options for Assessing the Performance and Competency of Licensee Management, SECY-98–059,” March 26, 1998, ML992910122, NRC ADAMS; John C. Hoyle to L. Joseph Callan, “Staff Requirements: SECY-98–059, Proposed Options for Assessing the Performance and Competency of Licensee Management,” June 29, 1998, ML003753127, NRC ADAMS.
90. IAEA, ASCOT Guidelines, IAEA-TECDOC-743, 12, 14.
91. Haber and Barriere, Development of a Regulatory Organizational and Management Review Method; Nuclear Energy Agency, State-of-the-Art Report on Systematic Approaches to Safety Management, 65–78, Appendix 3.b; Durbin, Review of International Oversight of Safety Culture in Nuclear Facilities.
92. Nuclear Energy Agency, State-of-the-Art Report on Systematic Approaches to Safety Management.
93. Pidgeon, “Safety Culture: Key Theoretical Issues,” 203–4, 213.
94. Apostolakis, “How Useful is Quantitative Risk Assessment?,” 517.
95. Peters, et al., “Organizational Factors in PRA: Twisting Knobs and Beyond”; Ghosh and Apostolakis, “Organizational Contributions to Nuclear Power Plant Safety”; Morrow, Koves, and Barnes, “Exploring the Relationship Between Safety Culture and Safety Performance in U.S. Nuclear Power Operations.”
96. US NRC, “Briefing on PRA Policy Statement and Action Plan,” August 30, 1994, 9409070307, NRC Legacy.
97. Scott Morris, email to the author, April 16, 2020.
98. David A. Ward to Ivan Selin, “The Consistent Use of Probabilistic Risk Assessment,” July 19, 1991, 9107240220, NRC Legacy; James M. Taylor to the Commissioners, “Staff Expertise and Capabilities to Utilize Analytical Codes, SECY-91–247,” August 7, 1991, 9109090019, NRC Legacy.
99. US NRC, A Review of NRC Staff Uses of Probabilistic Risk Assessment, 13–14; James M. Taylor to the Commissioners, “Proposed Policy Statement on the Use of Probabilistic Risk Assessment Methods in Nuclear Regulatory Activities, SECY-94–218,” August 18, 1994, 9409090233, NRC Legacy; James M. Taylor to the Commissioners, “Proposed Agency-Wide Implementation Plan for Probabilistic Risk Assessment (PRA), SECY-94–219,” August 19, 1994, ML12116A052, NRC ADAMS; “Briefing on PRA Policy Statement and Action Plan,” August 30, 1994, 9409070307, NRC Legacy.
100. US NRC, “Use of Probabilistic Risk Assessment Methods in Nuclear Activities,” 42622; US NRC, Regulatory Guide 1.174; Sorensen, Apostolakis, and Powers, “On the Role of Defense in Depth in Risk-Informed Regulation.”
101. “Remarks by Ivan Selin, Chairman, U.S. Nuclear Regulatory Commission before the ANS Executive Conference on Policy Implications of Risk-Based Regulation, Washington, D.C.,” March 15, 1994, ML003708710, NRC ADAMS.
102. Kenneth C. Rogers, “Risk Based Regulation and the Need for Reliability Data Collection,” at the International Workshop on Reliability Data Collection, Toronto, Canada, May 15, 1995, ML003708979, NRC ADAMS.
103. “An Evening with Dr. Shirley Ann Jackson, The Washington Area Alumni of the Massachusetts Institute of Technology, American University, Washington, D.C.,” November 15, 1995, 9511270092, NRC Legacy.
104. L. Joseph Callan to the Commissioners, “White Paper on Performance-Based, Risk-Informed Regulation, SECY-98–144,” June 22, 1998, ML992880068, NRC ADAMS.
105. US NRC, “An Approach for Using Risk Assessment in Risk-Informed Decision on Plant-Specific Changes to the Licensing Basis,” attachment to Gareth W. Parry to G. E. Apostolakis, May 26, 1998, 9805280007, NRC Legacy; Shirley Ann Jackson, “The Role of Research in NRC Regulatory Programs, 23rd Water Reactor Safety Information Meeting, Bethesda, Maryland,” October 23, 1995, 9510270001, NRC Legacy. See also Murley, “Toward a New Safety Contract.”
106. Domenici, A Brighter Tomorrow, 72–77.
107. Weil, “NRC Gears Up for Budget Battle with Senate Appropriators”; Weil, “Domenici Book Touts his Role in Regulatory Turnaround; Jackson Begs to Differ.”
108. US NRC, Regulatory Review Group Summary and Overview, 17–18, 9309070076, NRC Legacy; James M. Taylor to the Commissioners, “Proposed Agency-Wide Implementation Plan for Probabilistic Risk Assessment (PRA), SECY-94–219,” August 19, 1994, ML15155B088, NRC ADAMS; William D. Magwood, “Regulating a Renaissance: Adapting to Change in a Globalized, Environmentally-Conscious, Security-Focused and Economically-Uncertain Century, S-11–015,” March 9, 2011, ML110940385, NRC ADAMS.
109. Shirley Jackson, “Chairman’s Address to Staff,” June 5, 1998, ML003708665, NRC ADAMS. Jackson acknowledged the influence of congressional oversight in pushing the NRC to aggressively pursue risk-informed regulation. See Ryan, “NRC Commission Endorses Revival of Nuclear Before Senate Panel.”
110. Schneider, “Pete Domenici”; see also Domenici, A Brighter Tomorrow, 3.
111. Sara Diaz, “Shirley Ann Jackson (1946-),” Black Past, https://www.blackpast.org/african-american-history/jackson-shirley-ann-1946; Walker, “One Step at a Time Toward Greatness”; “NRC Union Demands Potty Parity for Rank-and-File Employees.”
112. Ryan, “Nuclear Plant Success Said to Depend on Developing Managers.”
113. US Energy Information Administration, The Changing Structure of the Electric Power Industry, 13, 27–28, 132; Linden, “The Revolution Continues”; Kim Riley, “40 Years in the Making: FERC Takes Action to Update PURPA,” Daily Energy Insider, September 19, 2019, https://dailyenergyinsider.com/featured/21801-40-years-in-the-making-ferc-takes-action-to-update-purpa; Paul L. Joskow, “U.S. Energy Policy during the 1990s,” talk delivered at American Economic Policy During the 1990s conference at the John F. Kennedy School of Government, Harvard University, June 27 to June 30, 2001, 46–47, https://economics.mit.edu/files/1144.
114. “Battle Over Choice to Heat Up This Year.”
115. US Energy Information Administration, The Changing Structure of the Electric Power Industry, 77–84; Tuhus, “Who Pays for Mistakes in Making Electricity?”
116. “Sale and Early Closure of Units, A Glimpse at Industry’s Future.”
117. US Energy Information Administration, International Energy Outlook: 1999, 75–84; Weil, “AmerGen, GPU Nuclear Sign Deal for Sale of Oyster Creek”; Weil, “Final OK to Let PECO, Unicom be Largest U.S. Nuclear Operator”; “Exelon Generation Formally Integrates AmerGen Assets into Exelon Nuclear,” January 8, 2009, http://www.exeloncorp.com/newsroom/Pages/pr_20090108_Generation.aspx.
118. Wald, “Monopoly: Nuclear Power Version.”
119. Jackson, “Current Regulatory Challenges,” July 22, 1996, ML003711269, NRC ADAMS. See also US NRC, “Final Policy Statement on the Restructuring and Economic Deregulation of the Electric Utility Industry,” 44071.
120. Weil, “Regulation, Lack of Cohesive Policy Blamed for Nuclear Industry Woes.”
121. Jackson, “Talking Points: Industry Restructuring and the NRC, to the Washington International Energy Group, Washington, DC,” December 11, 1996, ML003708395, NRC ADAMS. See also US NRC, The Price-Anderson Act—Crossing the Bridge to the Next Century, 30, 33–34.
122. Hart, “NRC May Expand Purview to Financial Aspects of Nuclear Competition.” See also Burton and Olver, “Shutdown: Can Nuclear Plants Survive Deregulation?”; Kerber, “Nuclear Plants Face Huge Costs to Fix Problems.”
123. Nuclear Energy Institute, Enhancing Nuclear Plant Safety and Reliability, iii-iv, 13–14.
124. US NRC, “Advisory Committee on Reactor Safeguards 362nd ACRS Meeting,” June 8, 1990, 342–348, 9006120418, NRC Legacy; Harry Rood, “Summary of November 27, 1990 Public Meeting to Discuss the Risk-Based Technical Specifications Program Developed by EPRI, PG&E, and Westinghouse,” December 10, 1990, 9012130032, NRC Legacy; Christopher I. Grimes to Brian K. Grimes, “Summary of Owners Groups Meeting on the Improved Standard Technical Specifications: June 29 and 30, 1993,” July 28, 1993, 9309030314, NRC Legacy; US NRC, Lessons Learned from Early Implementation of the Maintenance Rule at Nine Nuclear Power Plants; Apostolakis and Mosleh, Risk-Informed Decision Making: A Survey of United States Experience, 87–93.
125. US NRC, Strategic Plan: Fiscal Year 1997-Fiscal Year 2002, 6.
126. Kenneth C. Rogers, “Safety Regulation Evolution: A New Paradigm? Regulatory Information Conference, Washington, DC,” May 4, 1994, ML003710117, NRC ADAMS; US NRC, “DSI-12: Risk-Informed, Performance Based Regulation,” September 18, 1996, ML17293A569, NRC ADAMS; John C. Hoyle, “SECY Note,” April 22, 1997, 9704240161, NRC Legacy.
127. Thomas D. Ryan to John C. Hoyle, November 27, 1996, ML17293A595, NRC ADAMS.
128. Towers Perrin, Nuclear Regulatory Review Study.
129. Towers Perrin, Nuclear Regulatory Review Study, cover memo, 3, 10.
130. Towers Perrin, Nuclear Regulatory Review Study, 56, Exhibit III-3; US Senate, Subcommittee of the Committee on Appropriations, Hearings on Energy and Water Development Appropriations for Fiscal Year 1996, 104th Cong., 1st sess., September 30, 1996, 1187–1188. For similar industry complaints earlier, see “Industry Group Calls Regulation Unfair and Detrimental to Safety.”
131. Victor Stello to the Commissioners, “Systematic Assessment of Licensee Performance, SECY-80–083,” February 12, 1980, 8002280101, NRC Legacy.
132. Scott Morris, email to the author, April 16, 2020. On the improved safety performance of nuclear power plants, see statistics on “significant events” in annual editions of US NRC, Information Digest.
133. Scott Morris, email to the author, April 16, 2020; US NRC Office of the Inspector General, OIG Review of NRC’s Systematic Assessment of Licensee Performance Program (SALP), OIG 87A-21, August 1989, 9001220106, NRC Legacy.
134. Ryan, “NRC Finds Slow Going in Measuring Organizational Safety Culture.”
135. Franklin, “NRC Suspends Enforcement Drive.”
136. US GAO, Nuclear Regulation.
137. James M. Taylor to the Commissioners, “Systematic Assessment of Licensee Performance, SECY-96–005,” January 5, 1996; US NRC website, https://www.nrc.gov/reading-rm/doc-collections/commission/secys/1996; Nuclear Energy Institute, “Making Nuclear Plant Safety Assessments Crystal Clear”; L. Joseph Callan to the Commissioners, “Status of the Integrated Review of the NRC Assessment Process for Operating Commercial Nuclear Reactors, SECY-98–045,” March 9, 1998, ML992910124, NRC ADAMS.
138. Pooley, “Nuclear Warriors”; US NRC, Office of the Inspector General, NRC Failure to Adequately Regulate—Millstone 1, Case 95–771, December 21, 1995, ML15265A408, NRC ADAMS.
139. Pooley, “Nuclear Warriors.”
140. “Sale and Early Closure of Units, A Glimpse at Industry’s Future.” See also Stellfox, “Jackson Orders Review of 50.59 Changes after Millstone Refueling Case”; Stellfox, “Whistleblower Forced Close Look at Millstone; Report Out This Week”; Rabinovitz, “N.R.C. Gives Final Approval to Restart of Millstone Reactor”; Choiniere, “NRC Concurs.”
141. Samuel J. Collins to T. C. McMeekin, “Response of Duke Engineering & Service to NRC Demand for Information (OI Report No. 1–95–050),” October 8, 1998, 9810160019, NRC Legacy.
142. Stellfox, “Maine Yankee Says NRC Inspection Will Cost Utility $10-Million”; Stellfox, “Maine Yankee Shutdown: Homicide or Suicide?”
143. James M. Taylor, “Speech before the Regulatory Information Conference,” Washington, DC, April 9, 1996, ML003709013, NRC ADAMS.
144. Shirley Ann Jackson, “‘So Where are We Now and Where Do We Go From Here?’: Nuclear Power Industry and Nuclear Regulatory Challenges,” Regulatory Information Conference, Washington, DC, April 14, 1998, ML003708720, NRC ADAMS; Shirley Ann Jackson, “Nuclear Energy and Economic Competition: The NRC Perspective,” Nuclear Energy Institute Fuel Cycle ‘97 Conference, Atlanta, GA, April 7, 1997, ML003708127, NRC ADAMS; US NRC, The Price-Anderson Act, 46–59.
145. Shirley Ann Jackson, “Talking Points: Industry Restructuring and the NRC, to the Washington International Energy Group, Washington, DC,” December 11, 1996, ML003708395, NRC ADAMS.
146. Kerber, “Nuclear Plants Face Huge Costs to Fix Problems.” See also Burton and Olver, “Shutdown”; Hart, “NRC May Expand Purview to Financial Aspects of Nuclear Competition.”
147. Paul Gunter to John C. Hoyle, February 7, 1997, 9702110249, NRC Legacy; James Riccio, “Strategic Assessment and Re-Baselining Project: Comments of Public Citizen’s Critical Mass Energy Project,” December 3, 1996, ML17293A608, NRC ADAMS. See also Judith H. Johnsrud to John C. Hoyle, December 1, 1996, ML17293A555, NRC ADAMS.
148. Lochbaum, Nuclear Plant Risk Studies, 8–9.
149. Stellfox, “Jackson Says Questions of Who Shut Millstone Down Misses the Point”; US GAO, Nuclear Regulation: Preventing Problem Plants Requires More Action, 6–12.
150. Stellfox, “Maine Yankee Shutdown,” 8.
151. Stellfox, “Wall Street, Lacking SALP Scores.”
152. Nuclear Energy Institute, “Making Nuclear Plant Safety Assessments Crystal Clear,” 1–2.
153. Stellfox, “Jackson Says Questions of Who Shut Millstone Down Misses the Point”; Weil, “Regulation, Lack of Cohesive Policy Blamed for Nuclear Industry Woes.”
154. US NRC, Office of the Inspector General, “NRC Needs Comprehensive Plan to Resolve Regulatory Issues, OIG/97A-01,” August 21, 1997, 9709020126, NRC Legacy.
155. Weisskopf and Maraniss, “Forging an Alliance for Deregulation,” 61; Sperry, “Saving Energy”; Weil, “NRC Gears Up for Budget Battle with Senate Appropriators,” 1, 13.
156. Weil, “NRC Survives—For Now—Senate Subcommittee Effort to Sharply Cut Staff”; Weil, “Some Senate Appropriators Propose Abolishing NRC’s ASLB.” 157. US Senate, Committee on Appropriations, Energy and Water Development Appropriation Bill, 1999, S. 2138, 105th Cong., 2nd sess., June 5, 1998. 158. Weil, “House Appropriators Give NRC $5 Million Less Money Than FY-98 Level”; Nils J. Diaz, “To Risk or Not to Risk, Remarks Before the 1998 NRC Regulatory Information Conference, Washington, DC,” April 14, 1998, ML003708711, NRC ADAMS; Nils J. Diaz, “Nuclear Regulation: Its Role in the Future of Nuclear Power, Remarks of Commissioner Nils J. Diaz, Decision-Makers Forum on a New Paradigm for Nuclear Energy, Senate Nuclear Issues Caucus,” June 19, 1998, ML003711569, NRC ADAMS. 159. US Senate, Committee on Environment and Public Works, Hearing Before the Subcommittee on Clean Air, Wetlands, Private Property and Nuclear Safety, 105th Congress, 2nd sess., July 30, 1998, 46, 147–55. 160. John C. Hoyle to L. Joseph Callan, “Staff Requirements—SECY-98–045—Status of the Integrated Review of the NRC Assessment Process for Operating Commercial Nuclear Reactors,” June 30, 1998, ML003752969, NRC ADAMS; Wald, “At a Hearing, Nuclear Regulators are Criticized on 2 Fronts”; US Senate, Committee on Environment and Public Works, Hearing Before the Subcommittee on Clean Air, Wetlands, Private Property and Nuclear Safety, 105th Congress, 2nd sess., July 30, 1998. 161. Shirley Ann Jackson to L. Joseph Callan, August 7, 1998, 9808130079, NRC Legacy. 162. Nuclear Energy Institute, Nuclear Energy: 2000 and Beyond. 163. Richard Barrett to Thomas T. Martin, “Minutes of the July 28, 1998 Meeting with the Nuclear Energy Institute to Discuss Performance Indicators and Performance Assessment,” July 30, 1998, 9808070228, NRC Legacy; Jack W. Roe, “NRC Administrative Letter 98–07: Interim Suspension of the Systematic Assessment of Licensee Performance (SALP) Program,” October 2, 1998, 9810020160, NRC Legacy; “NRC Drops Hated ‘Watch List.’ ” 164. Stewart L. Magruder to Thomas H. Essig, “Summary of Meeting Held on August 28, 1998, With NEI to Discuss Risk-Informed, Performance-Based Pilot Project,” September 17, 1998, 9809220304, NRC Legacy; Stellfox, “NEI Proposes Wholesale Revision for NRC Regulation Based on PRA.” 165. Stewart L. Magruder to Thomas H. Essig, “Summary of Meeting Held on August 28, 1998, With NEI to Discuss Risk-Informed, Performance-Based Pilot Project,” September 17, 1998, 9809220304, NRC Legacy; Stellfox, “NEI Proposes Wholesale Revision for NRC Regulation Based on PRA.” 166. Stellfox, “NEI Proposes Wholesale Revision of NRC Regulation Based on PRA.” 167. US NRC, “All Employees Meetings on ‘The Green’ Plaza Area Between Building at White Flint,” September 3, 1998, ML15125A400 and ML15125A401, NRC ADAMS. 168. US Senate, Committee on Environment and Public Works, Subcommittee on Clean Air, Wetlands, Private Property, and Nuclear Safety, Nuclear Regulatory
Commission: Review of Programs and Reforms, 106th Cong., 1st sess., February 4, 1999, 18–20; Ryan, “NRC Commission Endorses Revival of Nuclear Before Senate Panel.” 169. “NEI Positively Comments on New Reactor Oversight Process.” 170. Cornelius F. Holden to Frank P. Gillespie, “Meeting with the Nuclear Energy Institute to Discuss the Preliminary Team Results of the New Framework, Baseline Inspection and Assessment Processes,” December 16, 1998, Attachment 4, 9901200178, NRC Legacy; US NRC, Reactor Oversight Process. 171. Brian Haagensen, email to the author, November 11 and 17, 2019. 172. Scott Morris, email to the author, April 16, 2020. 173. Stellfox, “UCS’ Lochbaum Approves of New Reactor Oversight Process.” 174. “NEI Positively Comments on New Reactor Oversight Process”; US NRC, “Briefing on Improvements in the Reactor Oversight Process, Public Meeting,” March 7, 2000, ML003691086, NRC ADAMS. 175. US Senate, Committee on Environment and Public Works, Hearing Before the Subcommittee on Clean Air, Wetlands, Private Property and Nuclear Safety, 105th Cong., 2nd sess., July 30, 1998, 2. 176. Weil, “Sen. Inhofe Asks Industry to Critique NRC’s Regulatory Reform Plans.” 177. US NRC, “Meeting on NRC Response to Stakeholders’ Concerns, Rockville, MD,” December 16, 1999, ML15131A227, NRC ADAMS. 178. For an excellent report on the remarkable consistency and evolution of the defense-in-depth concept, see US NRC, Historical Review and Observations of Defense-in-Depth. 179. Barber, “Clean Air, Consolidation Make Nuclear Less of Investor Pariah.” 180. Ryan, “Prerequisites Emerge for Joining the U.S. ‘Nuclear Renaissance.’ ” 181. Scott Morris, email to the author, April 16, 2020. 182. Joskow, “U.S. Energy Policy in the 1990s,” 47. 183. Stellfox, “Risk-Informed Part 50 Could Boost Advanced Reactor Prospects.” 184. Joskow, “U.S. Energy Policy in the 1990s,” 62.
CHAPTER 6
1. Center for Chemical Process Safety, Guidelines for Chemical Process Quantitative Risk Analysis, 1. 2. Bell and Esch, “The Space Shuttle: A Case of Subjective Engineering,” 42. 3. R. P. Feynman, “Appendix F: Personal Observations on the Reliability of the Shuttle,” in NASA, Report of the Presidential Commission on the Space Shuttle Challenger Accident, https://science.ksc.nasa.gov/shuttle/missions/51-l/docs/rogers-commission/Appendix-F.txt. 4. Pinkus, et al., Engineering Ethics, 270. 5. R. P. Feynman, “Appendix F: Personal Observations on the Reliability of the Shuttle,” in NASA, Report of the Presidential Commission on the Space
Shuttle Challenger Accident, https://science.ksc.nasa.gov/shuttle/missions/51-l/docs/rogers-commission/Appendix-F.txt. 6. Sagan, “The Problem of Redundancy Problem,” 943. 7. Vaughan, The Challenger Launch Decision, 62. 8. US NRC, “Advisory Committee on Reactor Safeguards, Subcommittee on Regulatory Policies and Practices License Renewal,” March 26, 1990, 154, 9003290227, NRC Legacy. Similar weaknesses in NASA management and safety culture were identified after the 2003 Columbia disaster. See Leveson, “Technical and Managerial Factors in the NASA Challenger and Columbia Losses.” 9. Fragola, “Risk Management in U.S. Manned Spacecraft,” 86. 10. Seife, “Columbia Disaster Underscores the Risky Nature of Risk Analysis.” See also Federal Coordinating Council for Science, Engineering and Technology Ad Hoc Working Group on Risk Assessment, Risk Assessment: A Survey of Characteristics, Applications, and Methods Used by Federal Agencies for Engineered Systems, November 1992, 17, ML040090236, NRC ADAMS. 11. Bell and Esch, “The Space Shuttle,” 46. 12. Henrion and Fischhoff, “Assessing Uncertainty in Physical Constants.” 13. Breyer, Breaking the Vicious Circle, 41; Howard, “On Fates Comparable to Death.” 14. Fragola, “Risk Management in U.S. Manned Spacecraft.” 15. US House of Representatives, Committee on Science and Technology, Review of RTG Utilization in Space Missions: Hearings before the Subcommittee on Energy Research and Production and the Subcommittee on Space Science and Applications, 99th Cong., 2nd sess., March 4, 1986, 41. 16. Bell and Esch, “The Space Shuttle,” 44; Rees, “Marshall Flight Center Approach in Achieving High Reliability of the Saturn Class Vehicles”; Fragola, “Risk Management in U.S. Manned Spacecraft,” 84; Stamatelatos, “Recent NASA Activities and Plans in Risk Analysis.” 17. Bell and Esch, “The Space Shuttle,” 44; Rees, “Marshall Flight Center Approach in Achieving High Reliability of the Saturn Class Vehicles”; Fragola, “Risk Management in U.S. Manned Spacecraft,” 83–84; Stamatelatos, “Recent NASA Activities and Plans in Risk Analysis.” 18. ERI Consulting & Co., Review of Risk Management Practices in Various Organizations and Industries, Appendix B, 12–13. 19. See US House of Representatives, Committee on Science and Technology, Review of RTG Utilization in Space Missions: Hearings before the Subcommittee on Energy Research and Production and the Subcommittee on Space Science and Applications, 99th Cong., 2nd sess., March 4, 1986 (testimony of Milton Silveira, 28–42); Pinkus, Engineering Ethics, 154–58. As later comparisons showed, FMEA/CIL was a limited evaluation compared to PRA. See Roger Boyer and Mike Stewart, “Probabilistic Risk Assessment (PRA): How to Quantify and Understand Risk,” Deep Space/Deep Ocean: Aramco Technology and Operational Excellence Forum, April 7, 2015, https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20150003057.pdf. 20. US AEC, “Satellite Application of SNAP-3, AEC 1000/16,” April 6, 1961, ML003755839, NRC ADAMS; McGeorge Bundy to Edward C. Welsh, “NSAM
50: Official Announcements of Launching into Space of Systems involving Nuclear Power in Any Form,” May 12, 1961, JFK Library Webpage, https://www.jfklibrary.org/Asset-Viewer/Archives/JFKNSF-329-025.aspx; John F. Kennedy to Multiple agencies, “NSAM 235, Large-Scale Scientific or Technological Experiments with Possible Adverse Environmental Effects,” April 17, 1963, https://fas.org/irp/offdocs/nsam-jfk/nsam-235.htm; Hardy, Krey, and Volchok, Global Inventory and Distribution of Pu-238 from SNAP-9A, 17; Zbigniew Brzezinski to Secretary of State and multiple agencies, “Presidential Directive/NSC-25, Scientific or Technological Experiments with Possible Large-Scale Adverse Environmental Effects and Launch of Nuclear Systems into Space,” December 14, 1977, Jimmy Carter Presidential Library, https://www.jimmycarterlibrary.gov/assets/documents/directives/pd25.pdf. 21. Alexander, et al., Final Snapshot Safeguards Report NAA-SR-10022; Detterman, Weitzberg, and Willis, Aerospace Safety Program—Safe Disposal of SNAP Reactors, NAA-SR-11353; Willis, Statistical Safety Evaluation of Power Reactors AI-65-TDR-212. 22. Buden, “The Acceptability of Reactors in Space”; Bennett, “Overview of the U.S. Flight Safety Process for Space Nuclear Power.” 23. James B. Baeker, Space Shuttle Range Safety Hazards Analysis, Technical Report no. 81–1329 (Redondo Beach, CA: J. H. Wiggins, Co., July 1981), National Archives and Records Administration (NARA), Record Group 220, https://catalog.archives.gov/id/644134, File RG220.CHALPUB.RPT. 24. Baeker, Space Shuttle Range Safety Hazards Analysis. 25. R. Weatherwax and E. Colglazier, Review of the Updated Safety Analysis Report (USAR) for the Galileo and International Solar-Polar Missions by the Interagency Nuclear Safety Review Panel Launch Abort Subpanel (Teledyne, July 5, 1984), NARA, RG 220, File RG220.CHALPUB.RPT. See also US House, Committee on Science and Technology, Hearings before the Subcommittee on Energy Research and Production and the Subcommittee on Space Science and Applications, 79; D. Carlson and S. Hatch, Report on MSFC Shuttle Element Risk Assessment for RTG Missions, Sand 84–1579 (Sandia National Laboratory, December 1984), NARA, RG 220, https://catalog.archives.gov/id/644134, File RG220.CHALPUB.RPT. 26. Marshall Space Flight Center Risk Assessment Team, MSFC Shuttle Element Risk Assessment for RTG Missions, November 1984. 27. D. Carlson and S. Hatch, Report on MSFC Shuttle Element Risk Assessment for RTG Missions, Sand 84–1579 (Sandia National Laboratory, December 1984), NARA, RG 220, https://catalog.archives.gov/id/644134, File RG220.CHALPUB.RPT; House, Committee on Science and Technology, Hearings before the Subcommittee on Energy Research and Production and the Subcommittee on Space Science and Applications, 245–261, 274–294. Although Marshall maintained that the second O-ring provided redundancy, NASA concluded in 1982 that it did not provide a backup to the primary O-ring. See discussion in Chapter 6 of NASA, Report of the Presidential Commission on the Space Shuttle Challenger Accident. See also Sagan, “The Problem of Redundancy Problem,” 943. 28. Space Shuttle Launch Abort Subpanel, Interagency Nuclear Safety Review Panel, Report on Evaluation of Wiggins, SERA Shuttle Range Safety
Hazards, May 1985, NARA, RG 220, https://catalog.archives.gov/id/644134, File RG220.CHALPUB.RPT. 29. Committee on Shuttle Criticality Review and Hazard Analysis Audit, Post-Challenger Evaluation of Space Shuttle Risk Assessment and Management, 3, 4, 46. 30. Marshall, “Academy Panel Faults NASA’s Safety Analysis.” 31. Committee on Shuttle Criticality Review, Post-Challenger Evaluation of Space Shuttle, 3–6, 34–36, 46, 55. 32. Bell and Esch, “The Space Shuttle: A Case of Subjective Engineering,” 46. 33. Fragola, “Risk Management in U.S. Manned Spacecraft,” 83–92. 34. Federal Coordinating Council for Science, Engineering and Technology Ad Hoc Working Group on Risk Assessment, Risk Assessment: A Survey of Characteristics, Applications, and Methods Used by Federal Agencies for Engineered Systems, November 1992, 17, ML040090236, NRC ADAMS; Fragola, “Risk Management in U.S. Manned Spacecraft,” 86; Seife, “Columbia Disaster Underscores the Risky Nature of Risk Analysis.” 35. Science Applications International Corporation, Probabilistic Risk Assessment of the Space Shuttle, 1–2 to 1–11, 7–2 to 7–3. 36. Vesely, et al., Fault Tree Handbook with Aerospace Applications; Stamatelatos and Rutledge, “Probabilistic Risk Assessment at NASA and Plans for the Future”; Bryan O’Connor to Brian Sheron, “Memorandum of Understanding (MOU) between National Aeronautics and Space Administration (NASA) Office of Safety and Mission Assurance, and Nuclear Regulatory Commission (NRC) Research,” December 9, 2008, ML090130193, NRC ADAMS. 37. Columbia Accident Investigation Board, Columbia Accident Investigation Report, 188. 38. Seife, “Columbia Disaster Underscores the Risky Nature of Risk Analysis.” 39. Perera and Holsomback, “Use of Probabilistic Risk Assessments for the Space Station Program,” 517; NASA, Probabilistic Risk Assessment Procedures Guide for NASA Managers and Practitioners; NASA, NASA Accident Precursor Analysis Handbook; NASA, NASA Risk Management Handbook; NASA, NASA System Safety Handbook: Volume 1; NASA, NASA System Safety Handbook: Volume 2. 40. The best history of the Soviet safety system is Schmid, Producing Power. 41. “Germans and Austrians Protest Against Temelin.” 42. Primatarova, “The Closure of Units 1–4,” 120–21; Ivanov, “Legitimate Conditionality?” 150, 162; Oudenaren, “The Limits of Conditionality,” 470; Axelrod, “Nuclear Power and EU Enlargement,” 153, 162–63; Maniokas and Staniouis, “Negotiations on Decommissioning Ignalina”; Dranseikaite, “The Closure of the Ignalina,” 200, 214–17. This emphasis on non-technical over technical factors is similar to studies of European integration and general technology politics by political scientists and historians of technology and policy. See Gillingham, European Integration; O’Brennan, Eastern Enlargement of the European Union; Wiener and Diez, European Integration Theory; Shore, Building Europe. 43. Networks of experts had similar influence in other areas of European technology policy. See Schot and Schipper, “Experts and European Transport Integration,” 277.
44. Eric van der Vleuten warns of the risk of internalist narratives in writing transnational histories of technology. However, histories should not overlook important engineering factors that influenced political decisions. See van der Vleuten, “Towards a Transnational History of Technology,” 993. 45. This tended to be true of the Soviet reactors that were most like Western designs, particularly their PWR class, the VVERs. PRAs were less useful for the Chernobyl-type RBMK that had no Western counterpart. 46. In Europe, PRAs are known as Probabilistic Safety Assessments (PSAs). As the rest of the book uses the US term, “PRA” will be substituted for “PSA” in the text and all quotations, but not titles in footnotes. 47. Grabbe, “Europeanization Goes East”; Carmin and Deveer, “Enlarging EU Environments,” 16; Caddy, “Harmonisation and Asymmetry.” For a critique of this interpretation, see Jehlicka and Tickle, “Environmental Implications of Eastern Enlargement”; Krige, “Peaceful Atom as Political Weapon”; Howlett, EURATOM and Nuclear Safeguards; Trischler and Weinberger, “Engineering Europe”; McCray, “ ‘Globalization with Hardware’ ”; Arnold, “Europe, Technology, and Colonialism”; van der Vleuten and Kaijser, Networking Europe; Krige, Russo, and Sebesta, “A Brief History of the European Space Agency,” 441. 48. This is not to suggest that engineers could alter their political environment simply by their knowledge claims, but internationally accepted engineering standards did influence decisionmakers. See a similar point made in Barth, “Catalysts of Change,” 185. 49. Peter Lyth and Helmuth Trischler pose the question: “Is it the nature of globalizing technology to be so inherently Western that the technology cannot be acquired without the culture?” For Western nuclear engineers involved in the CEE effort, the answer was, “yes.” See Lyth and Trischler, Wiring Prometheus, 14. 50. Krige, American Hegemony and the Postwar Reconstruction of Europe; Krige, Sharing Knowledge, Shaping Europe. 51. Burns, “The Impact of the Major Nuclear Power Plant Accidents on the International Legal Framework for Nuclear Power”; International Atomic Energy Agency, International Atomic Energy Agency: Personal Reflections, 263; “Council Resolution of 22 July 1975”; European Commission, Nuclear Safety and the Environment, 4–8; Iansiti and Konstantinov, “Nuclear Safety Standards.” 52. “An Overview of the Nuclear Safety Standards”; “Safety Principles Underlying the NUSS Documents”; Adede, IAEA Notification and Assistance Conventions, xxi–xxii. The IAEA produced guidelines for bilateral treaties on reporting accidents and emergency assistance. See IAEA, Guidelines on Reportable Events; IAEA, Guidelines for Mutual Emergency Assistance. 53. Ryan, “Regulators Told to Increase.” 54. IAEA, The International Nuclear Events and Radiological Scale User’s Manual. 55. Ryan, “Regulators Told to Increase.” See also IAEA, Personal Reflections, 174; Burns, “The Impact of the Major Nuclear Plant Accidents on the International Legal Framework for Nuclear Power,” 11; International Nuclear Safety Advisory Group, The Safety of Nuclear Power. The creation of INSAG indicated that the reliance of US federal agencies on
expert advisory panels to solve policy issues, noted by Sheila Jasanoff, was common for international organizations, too. See Jasanoff, The Fifth Branch, 1–19. 56. R. W. Barber and F. X. Gavigan, “Report of a Trip to the USSR by an NRC-DOE Nuclear Safety Team,” March 2–13, 1987, 8805060211, NRC Legacy. 57. Tanguy, “The French Approach to Nuclear Power Safety,” 592; Stellfox, “IPE Round-Up”; Niehaus, “Use of Probabilistic Safety Assessment,” 155–56. 58. MacLachlan, “French Rethinking Safety.” 59. MacLachlan, “Approaches to Safety in EC Countries”; Commission of the European Communities, Summary Report of Safety Objectives; European Commission, Nuclear Safety and the Environment: 30 Years of NRWG Activities, 19. France’s utility, EDF, was skeptical of PRAs well before Chernobyl. See Glucroft, “French See PRA Useful.” 60. Karen Henderson, email to the author, April 13, 2020. 61. INSAG, The Chernobyl Accident; Hibbs and MacLachlan, “East Germans Looking West”; MacLachlan, “Watkins Says Soviets Have Not Learned Nuclear Safety Lessons.” 62. Joksimovich and Orvis, “Safety Culture in Nuclear Installations, Risk Culture: An Outgrowth of Safety Culture,” 293; “East German Plan Foresees Western Help”; “GDR Wants to End Decade of Errors.” 63. “GDR Wants to End Decade of Errors”; Hibbs, “Greifswald Restart Not Likely.” 64. “GDR Orders Greifswald Shut.” 65. Simons, “Evolution in Europe.” 66. Daily Record (Ellensburg, WA), February 6, 1990, 1. 67. Hibbs, “German VVERs to be Closed”; Hibbs, “As Greifswald Tension Mounts,” 7. 68. Hibbs and Seneviratne, “Western Governments Differ Over Nuclear Aid to Bulgaria”; MacLachlan, “Regulators Says Kozloduy Needs Continued Western Aid for Safety,” 1. 69. Jack Ramsey, emails to the author, March 7, 2013, and April 13, 2020. 70. MacLachlan, “Legasov’s Suicide: A Mute Reproach to Soviet Industry.” 71. Upton, Assessment of the Impacts of Transferring Certain Nuclear Reactor Technologies to the Soviet Union and Eastern Europe; Hirsch, et al., IAEA Safety Targets and Probabilistic Risk Assessment, 42–43. 72. Ryan, “WANO Says Old VVERs.” See also “International Safety Review of WWER-440/230 Nuclear Power Plants”; “Preventing Chernobyl II.” 73. Franklin, “Selin’s East European ‘Snapshot’ ”; US NRC, “Press Conference, Rockville, MD,” October 7, 1991, 9110250204, NRC Legacy. 74. Simons, “West Urges Bulgarians to Shut Reactors.” 75. Simons, “Evolution in Europe.” 76. Hibbs, “East Europe’s Nuclear Share.” See also Hibbs and Seneviratne, “Western Governments Differ”; “E. Europe Nuclear Plants Worry.” Suggestions that all Soviet-designed reactors would be shuttered lasted for several years; see “EU Wants Soviet-Style Nuclear Plants Closed.” 77. US NRC, Annual Report, 1993, 153–54. 78. Yanko Yanev, email to the author, April 12, 2020.
79. “Chairman Selin Travels to Soviet Union.” 80. International Atomic Energy Agency, The Safety of Nuclear Power Plants in Central and Eastern Europe, 2–3, 9–11; Gesellschaft fur Reaktorsicherheit, First Interim Report on Evaluation of Safety at the Greifswald, 3; MacLachlan, “G-7 Aid Consensus Sees No Future”; Hibbs, “G-7 Leaders to Okay $700 Million.” 81. Harold R. Denton to The Commissioners, “SECY-93–036: Report of Consultants Meeting on Nuclear Safety Assistance to East Europe and the Former Soviet Union, February 1, 1993,” February 10, 1993, 9302120272, NRC Legacy. 82. MacLachlan, “Official Says Bulgaria Won’t Shut Four Kozloduy Units.” 83. Isidro Lopez Arcos, email to the author, April 20, 2020. 84. Ivan Selin, “Nuclear Safety in the New Independent States and in Central and Eastern Europe, The British Nuclear Forum,” September 13, 1994, ML003710697, NRC ADAMS. 85. Karen Henderson, email to the author, April 13, 2020. 86. Jack Ramsey, email to the author, April 13, 2020; Stellfox, “PRA Use at RBMKs and VVERs Debated”; Hibbs, “Probabilistic Assessment for VVER.” 87. MacLachlan, “EC Aid to Bulgaria Proceeding.” 88. IAEA, The Safety of WWER-440, 26; Ryan, “EQE Says Bulgaria’s Kozloduy Units Could be Made Safe.” 89. Ryan, “EQE Says Bulgaria’s Kozloduy Units Could be Made Safe,” 3; and ENCONET Consulting, Current Status of Probabilistic Safety Assessments, 19, 22, 24–25. 90. Hibbs, “Probabilistic Assessment for VVER,” 6. 91. Jack Ramsey, email to the author, April 13, 2020. 92. Carol Kessler, email to the author, April 12, 2020. 93. Luis Lederman, email to the author, April 11, 2020. 94. IAEA, Ranking of Safety Issues for WWER-440 Model 230 Nuclear Power Plants; IAEA, The Safety of WWER-440; International Nuclear Safety Advisory Group, A Common Basis for Judging the Safety of Nuclear Power Plants; IAEA, Procedures for Conducting Probabilistic Safety Assessments; IAEA, Review of Probabilistic Safety Assessments by Regulatory Bodies; IAEA, Generic Initiating Events for PSA for WWER Reactors; IAEA, Use of PSA Level 2 Analysis. For a list of IAEA-sponsored PRA activity, see IAEA, Safety of WWER and RBMK Nuclear Power Plants, 36–37. The NRC sponsored a Russian PRA; see US NRC, Severe Accident Risks for VVER Reactors: The Kalinin PRA Program. 95. Hecht and Edwards, Technopolitics of the Cold War, 24. 96. MacLachlan, “EDF Threatens to Leave Kozloduy.” 97. MacLachlan, “First Upgraded Kozloduy Unit”; Hart, “Bulgaria Seeks Joint Ventures,” 15. 98. Jack Ramsey, email to the author, April 13, 2020. 99. Karen Henderson, email to the author, April 13, 2020. 100. Toepfer, “President’s Opening Address,” 9–22. In the same volume, see also Brinkhorst, “Perspectives from the Commission of the European Communities,” 23–30; Rosen, “An International Safety Agenda,” 41–46.
101. MacLachlan, “World Nuclear Safety Regime is Debated Hotly in Vienna,” 1. 102. IAEA, Convention on Nuclear Safety, July 5, 1994, INFCIRC/449, http://www.iaea.org/Publications/Documents/Infcircs/Others/inf449.shtml. 103. Ivan Selin, “Nuclear Safety in the New Independent States and in Central and Eastern Europe,” September 13, 1994, ML003710697, NRC ADAMS. 104. European Commission, Agenda 2000, 6, 15, 39–40. 105. Western European Nuclear Regulators’ Association, General Conclusions of WENRA; Kessler, email to the author, April 12, 2020. 106. Karen Henderson, email to the author, April 13, 2020. For a critical assessment of Lithuania’s safety culture and design safety issues, see Brown, et al., “Ignalina In-Depth Safety Assessment,” 24–34. 107. Hibbs, “GRS Study Will Improve VVERs,” 10. Two model 213s were closed at Greifswald, but this was due largely to unique conditions of integrating East German plants with West German safety requirements. Based on deterministic analysis, it was clear that Greifswald’s confinement buildings could not meet stringent West German criteria that they withstand an airplane impact, a standard required by few other nations. Experts concluded an upgrade might cost over $1.1 billion. The 213 models elsewhere survived because they could be economically upgraded. 108. “Czechs Agree to Let Outside Experts Inspect Atomic Power Plant”; MacLachlan, “Austria Threatens to Veto Czech EU Entry.” 109. “Eastinghouse” was coined for two Finnish reactors, hybrids of Soviet and Western design. 110. “Verheugen Said to Back Temelin Plant Safety.” See also MacLachlan, “Austrian Institute Says Temelin by Far Not Europe’s Riskiest Unit.” 111. Axelrod, “Nuclear Power and EU Enlargement,” 159, 161. 112. MacLachlan, “Austria Left Empty-Handed,” 7; MacLachlan, “Czechs, Austrians Agree,” 6; Western European Nuclear Regulators’ Association, General Conclusions of WENRA; MacLachlan, “SUJB, CEZ Agree.” 113. Western European Nuclear Regulators’ Association, Nuclear Safety in EU Candidate Countries; “Memorandum Between the European Commission and the Bulgarian Government”; “Bulgaria Will Honor Its Commitments,” Bulgarian EuroBulletin. 114. Sartmadjiev, Balabanov, and Genov, “A Concept, a Technical Solution, and Analysis for Modernization”; Tranteeva, “Kozloduy NPP,” 213–19; IAEA, Ranking of Safety Issues for WWER-440 Model 230 Nuclear Power Plants: Extrabudgetary Program on Safety Aspects; IAEA, Report of the Expert Mission, 37–39, 88–92; IAEA, Press Release: IAEA Experts Review Safety; IAEA, Annual Report 2002, 52; MacLachlan, “Experts Find Kozloduy VVERs”; “Preliminary Conclusions of the Members of the Expert Mission,” Bulgarian EuroBulletin. 115. MacLachlan, “Despite Endorsing EU Standards”; MacLachlan, “Kozloduy-3/4 Face Forced Closure”; IAEA, Strength Analyses of the Bubbler Condenser, 7–23; Nuclear Energy Agency, Bubbler Condenser Related Research Work, 7–9. 116. Yanko Yanev, email to the author, April 12, 2020.
117. MacLachlan, “Despite Endorsing EU Standards,” 6; MacLachlan, “Kozloduy-3/4 Face Forced Closure,” 1; Yanko Yanev, email to the author, April 12, 2020. 118. Yanko Yanev, email to the author, April 12, 2020. Patricia Clavin makes this point as well; see Clavin, “Defining Transnationalism,” 431. 119. MacLachlan, “EU Utilities Fear.” 120. Isidro Lopez Arcos, email to the author, April 20, 2020. 121. MacLachlan, “WENRA Working on Reference Requirements for PSAs”; MacLachlan, “European Regulators, Industry Embark on ‘Harmonization’ of Regs”; Knapik, “IAEA Meeting Raises Issues,” 13; “European Wariness of PSA Increases”; MacLachlan, “Europeans Wish NRC Luck”; MacLachlan, “Operators Say PSAs Prove”; MacLachlan, “Eastern Europeans Vie for Safety Kudos.” 122. The French believed their new independent regulatory system would be equivalent to the United States’ system. It would increase confidence in reactor safety and, by increasing transparency, would improve democracy in nuclear regulation. See Berthelemy and Leveque, “Harmonising Nuclear Safety Regulation in the EU,” 133–34; Hecht and Edwards, Technopolitics of the Cold War, 12; Hecht, Radiance of France, 338; MacLachlan, “French Minister Sets Sights on Regulatory Agency Modeled on NRC”; MacLachlan, “French Government Approves Bill Creating Independent Regulator”; IAEA, Convention on Nuclear Safety. 123. NEA, Comparison of Probabilistic Seismic Hazard Analysis of Nuclear Power Plants in Areas with Different Levels of Seismic Activity; NEA, Status of Practice for Level 3 Probabilistic Safety Assessments; NEA, Summary Record of the Seventeenth (17th) Meeting of the Working Group on Risk Assessment; NEA, Use and Development of Probabilistic Safety Assessment: An Overview of the Situation at the End of 2010. An update to the latter, Use and Development of Probabilistic Safety Assessment: An Overview of the Situation at the End of 2017, is forthcoming. 124. “Germans and Austrians Protest Against Temelin”; “Nuclear Power in Russia,” World Nuclear Association, http://www.world-nuclear.org/info/inf45.html (updated September 2012). On the concept of technodiplomacy, see Krige, “Technodiplomacy.” 125. Jack Ramsey, email to the author, April 13, 2020. 126. Nash, “From Safety to Risk,” 2. 127. Thompson, Deisler, and Schwing, “Interdisciplinary Vision.” 128. National Research Council, Committee on the Institutional Means for Assessment of Risks to Public Health, Commission on Life Sciences, Risk Assessment in the Federal Government. The most notable exception to the cost-benefit approach was the “Delaney Clause” of the Federal Food, Drug and Cosmetic Act, which had a zero-tolerance standard for any cancer-causing additive or animal drug found in meat. See Graham, “Historical Perspectives of Risk Assessment in the Federal Government,” 35. 129. Industrial Union Department v. American Petroleum Institute, 448 U.S. 607 (1980). See also US EPA, “Health Risk and Economic Impact Assessments of Suspected Carcinogens”; Maugh, “Chemical Carcinogens: How Dangerous are Low Doses?”
130. Although not as ambitious as the 1975 Rasmussen Report, the EPA produced its first quantitative risk assessment in the same year using the LNT model to estimate risk from vinyl chloride. See Kuzmack and McGaughy, Quantitative Risk Assessment for Community Exposure to Vinyl Chloride, B-2. 131. Weil, “NRC-EPA War Over Decommissioning Cleanup Standards Moves to Congress.” 132. Walker, Permissible Dose, 11 and 26; Jones, “A Review of the History of U.S. Radiation Protection Regulations,” 107–112; Nash, “From Safety to Risk,” 23–24; Sowby, “Radiation and Other Risks.” 133. Walker, Permissible Dose, 49. 134. Sowby, “Radiation and Other Risks.” 135. Graham, “Historical Perspective on Risk Assessment in the Federal Government,” 33–35. 136. Walker, Permissible Dose, 36–44, 47–52; Gofman and Tamplin, “Low Dose Radiation, Chromosomes, and Cancer.” 137. Walker, Permissible Dose, 57–62. 138. Jones, “A Review of the History of U.S. Radiation Protection Regulations,” 110. 139. Jones, “A Review of the History of U.S. Radiation Protection Regulations,” 112; Smith, Ghosh, and Kanatas, “Death V. Taxes”; Graham, “Historical Perspective on Risk Assessment in the Federal Government,” 36–39. 140. Smith, Ghosh, and Kanatas, “Death V. Taxes”; Graham, “Historical Perspective on Risk Assessment in the Federal Government,” 36–39. 141. Ruckelshaus, “Science, Risk, and Public Policy,” 1026–28. 142. Roberts, “Counting on Science at EPA,” 616. See also Nash, “From Safety to Risk,” 15–18. 143. Ruckelshaus, “Science, Risk, and Public Policy,” 1026–27; Roberts, “Counting on Science at EPA,” 616; Breyer, Breaking the Vicious Circle, 60–61. 144. Ruckelshaus, “Science, Risk, and Public Policy,” 1026. See also Kuzmack and McGaughy, Quantitative Risk Assessment for Community Exposure to Vinyl Chloride, B-2. 145. Roberts, “Counting on Science at EPA,” 617. 146. US EPA, Risk Assessment and Management: Framework for Decision Making; Superfund Public Health Evaluation Manual, 67; US EPA, “40 CFR Part 300, National Oil and Hazardous Substances Pollution Contingency Plan”; US EPA, “40 CFR Part 61, National Emission Standards for Hazardous Air Pollutants; Radionuclides”; US EPA, “40 CFR Part 300, National Oil and Hazardous Substances Pollution Contingency Plan.” See also Spangler’s three-part series on NRC and risk, “De Minimis Risk Concepts in the US Nuclear Regulatory Commission.” 147. Travis, et al., “Cancer Risk Management.” 148. “NRC Proposed Interagency Task Force to Resolve Disagreements with EPA.” 149. “EPA Allowed to Defer to NRC on Radionuclide Emissions Standards.” 150. US NRC Press Release, “NRC, EPA Sign Memorandum of Understanding,” March 16, 1992, ML003702487, NRC ADAMS.
151. US EPA, “40 CFR Part 61, National Emissions Standards for Hazardous Air Pollutants”; Henry A. Waxman, Philip R. Sharp, and Mike Synar to Carol Browner, May 25, 1993, and Ivan Selin to Carol M. Browner, June 11, 1993, 9306230244, NRC Legacy. 152. US General Accounting Office, Nuclear Health and Safety: Consensus on Acceptable Radiation Risk to the Public is Lacking, 4, 29. 153. “News Release: Glenn Calls for Better Government Radiation Safety Standards,” October 27, 1994, 9412150172, NRC Legacy. 154. James M. Taylor to the Commissioners, “U.S. Nuclear Regulatory Commission and U.S. Environmental Protection Agency Risk Harmonization Issues and Recommendations, SECY-95–249,” October 3, 1995, 9510110260, NRC Legacy. 155. Kenneth L. Mossman, Keith Schiager, and Marvin Goldman to Gail de Planque, April 4, 1994, 9405110148, NRC Legacy; National Council of Radiation Protection and Measurements, “Recent Applications of the NCRP Public Dose Limit Recommendation for Ionizing Radiation, NCRP Statement No. 10,” December 2004, http://ncrponline.org/wp-content/themes/ncrp/PDFs/Statement_10.pdf. 156. Breyer, Breaking the Vicious Circle, 13–14, 16. 157. Carol M. Browner to Shirley Ann Jackson, February 7, 1997, 9702270390, NRC Legacy; Airozo, “Browner Warns Jackson to Not Loosen Residual Rad Standards.” 158. Cheryl A. Trottier, “Discussion of the Unrestricted Dose Criterion in NRC’s Draft Final Rule (SECY-97–046A) and Changes from the 1994 Proposed Rule Approach,” n.d., 9705290372, NRC Legacy; and Ramona Trovato, “Statement on the Nuclear Regulatory Commission’s Rule on Radiological Criteria for License Termination,” April 21, 1997, 9705290372, NRC Legacy. 159. Airozo, “Battle Over Decommissioning Standards Escalates as NRC Final Rule Nears.” 160. Airozo, “Battle Over Decommissioning Standards Escalates as NRC Final Rule Nears.” 161. Stephen D. Luftig and Larry Weinstock to Multiple Addressees, “Establishment of Cleanup Levels for CERCLA Sites with Radioactive Contamination, OSWER No. 9200 4–18,” August 22, 1997, U.S. Environmental Protection Agency Website, https://nepis.epa.gov/Exe/ZyPDF.cgi/P100MKM2.PDF?Dockey=P100MKM2.PDF. 162. James M. Taylor to the Commissioners, “U.S. Nuclear Regulatory Commission and U.S. Environmental Protection Agency Risk Harmonization Issues and Recommendations, SECY-95–249,” October 3, 1995, 9510110260, NRC Legacy. 163. Walker, Permissible Dose, 115–20; Spangler, “De Minimis Risk Concepts in the US Nuclear Regulatory Commission, Part 1,” 235; Spangler, “De Minimis Risk Concepts in the US Nuclear Regulatory Commission, Part 2,” 51–52. 164. US NRC, Effective Risk Communication, 18. 165. Hart, “Commissioner de Planque Charges NRC Moving Toward ‘Over-Regulation.’ ”
166. Ryan, “Dose Regulation Basis Excoriated, But Replacement Not In Sight.” 167. Airozo, “EPA Administrator Warns Jackson Not Loosen Residual Rad Standards,” 6. 168. Ryan, “Dose Regulation Basis Excoriated, But Replacement Not In Sight.” 169. US NRC, “10 CFR Part 20, et al. Radiological Criteria for License Termination,” 39062. 170. US NRC, “10 CFR Part 20, et al. Radiological Criteria for License Termination,” 39064. 171. Airozo, “Commissioners Affirm Votes on Decommissioning Standards.” See also Shirley Ann Jackson to Carol M. Browner, December 12, 1997, ML003696785, NRC ADAMS. 172. Stephen D. Luftig and Larry Weinstock to Multiple Addressees, “Establishment of Cleanup Levels for CERCLA Sites with Radioactive Contamination, OSWER No. 9200 4–18,” August 22, 1997, EPA website, https://nepis.epa.gov/Exe/ZyPDF.cgi/P100MKM2.PDF?Dockey=P100MKM2.PDF. 173. Weil, “Meserve-Whitman Meeting May Mark Fresh Start for Agencies”; Bickers, “EPA Accepts NRC Unrestricted Release Dose Limits at West Valley”; and US Department of Energy and New York State Energy Research and Development Authority, Final Environmental Impact Statement for Decommissioning and/or Long-Term Stewardship at the West Valley Demonstration Project and Western New York Nuclear Service Center. 174. “Memorandum of Understanding between the Environmental Protection Agency and the Nuclear Regulatory Commission,” October 9, 2002, https://www.nrc.gov/reading-rm/doc-collections/news/2002/mou2fin.pdf. For an example of how the consultative process has worked between the NRC and EPA, see Derek Widmayer, “Implementation of the NRC/EPA MOU on Decommissioning Sites,” June 6, 2005, ML051380168, NRC ADAMS; Charles L. Miller to James Woolford, December 21, 2006, ML063340647, NRC ADAMS; James E. Woolford to Charles L. Miller, March 21, 2007, ML070871137, NRC ADAMS.
CHAPTER 7
1. The agency’s “PRA Implementation Plan” was renamed the “Risk-Informed Regulation Implementation Plan.” See William D. Travers to the Commissioners, “Risk-Informed Regulation Implementation Plan, SECY-00–0062,” March 15, 2000, NRC SECY webpage. See also US NRC, Strategic Plan, NUREG-1614, vols. 1 and 2. 2. Stellfox, “Risk-Informed Cost-Benefit Equation.” 3. Stellfox, “Risk-Informed Cost-Benefit Equation.” 4. US NRC, “Meeting on NRC Response to Stakeholders’ Concerns, Rockville, MD,” December 16, 1999, ML15131A227, NRC ADAMS. 5. Stellfox, “Risk-Informed Cost-Benefit Equation.” 6. Knapik, “NRC Moving to Eliminate Some Rules on Hydrogen Recombiners, Monitors”; US NRC, “Briefing on Risk-Informing Special Treatment Requirements,” July 20, 2001, ML012040383, NRC ADAMS.
7. US NRC, Effective Risk Communication, 2. 8. US NRC, The Technical Basis for the NRC’s Guidelines for External Risk Communication, 1–4. 9. Stellfox, “Part 50 Pulled into Big Changes Proposed in NRC Regulations.” 10. Stellfox, “Reforming Part 50: Staff Option, NEI’s Choices and a Lot of Questions.” 11. Stellfox, “PRA Practitioners See Little Business in Risk-Informed Rules.” 12. Stellfox, “Reforming Part 50: Staff Option, NEI’s Choices and a Lot of Questions.” 13. Lochbaum, Nuclear Plant Risk Studies, 24. 14. Stellfox, “Risk-Informing Part 50’s Technical Basis at Beginning of Long Journey,” 1. 15. Stellfox, “Part 50 Pulled into Big Changes Proposed in NRC Regulations.” 16. Knapik, “Industry Seeing Huge Benefits, Presses for Redefining Large-Break LOCA.” 17. Weil, “NRC Staff’s Risk-Informing Part 50 Rulemaking Uses New Risk Categories”; Knapik, “Final 50.69 Rule Released, but Some Implementation Issues Remain”; Nuclear Energy Institute, Efficiency Bulletin 17–09, Industrywide Coordinated Licensing of 10 CFR 50.69. 18. US NRC, “Risk-Informed Categorization and Treatment of Structures, Systems and Components for Nuclear Power Reactors.” 19. Nuclear Energy Institute, 10 CFR 50.69 SSC Categorization Guideline. 20. Weil, “NRG May Build ABWRs at South Texas; Commercial-Grade Parts to Cut Cost.” 21. Luis A. Reyes to the Commissioners, “Update on the Improvements to the Risk-Informed Regulation Implementation Plan, SECY-07–0074,” April 26, 2007, ML070890384, NRC ADAMS. 22. Dolley, “Industry Has Been Slow to Adopt Risk-Informed Regulatory Initiatives.” 23. Knapik, “Commission Agrees to Consider Redefinition of Large-Break LOCA.” 24. Stellfox, “NRC Considers Allowing Core Damage Accidents in ECCS Rewrite”; Knapik, “Industry, Seeing Huge Benefits, Presses for Redefining Large-Break LOCA”; Luis Reyes to the Commissioners, “Proposed Rulemaking for ‘Risk-Informed Changes to Loss-of-Coolant Accident Technical Requirements,’ SECY-05–052,” March 29, 2005, Attachment 2, ML050480172, NRC ADAMS. 25. US NRC, “Nuclear Energy Institute; Receipt of Petition for Rulemaking.” 26. US NRC, Estimating Loss-of-Coolant Accident (LOCA) Frequencies, xxi. 27. Dolley, “NRC Issues 50.46 Proposed Rule Revising Emergency Cooling Regulations.” 28. Graham B. Wallis to Dale E. Klein, “Draft Final Rule to Risk-Inform 10 CFR 50.46, ‘Acceptance Criteria for Emergency Core Cooling Systems for Light Water Nuclear Power Reactors,’ ” November 16, 2006, ML071590139, NRC ADAMS.
29. Dolley, “Power Uprates Using Proposed Rule Could Save Billions of Dollars.” 30. “Risk-Informed Changes to Loss-of-Coolant Accident Technical Requirements” (2009); R. W. Borchardt to the Commissioners, “Final Rule: Risk-Informed Changes to Loss-of-Coolant Accident Technical Requirements (10 CFR 50.46a), SECY-10–0161,” December 10, 2010, ML102210460, NRC ADAMS; Said Abdel-Khalik to Gregory B. Jaczko, “Draft Final Rule for Risk-Informed Changes to Loss-of-Coolant Accident Technical Requirements (10 CFR 50.46a),” October 20, 2010, ML102850279, NRC ADAMS; Dolley, “Commission to Consider 50.46(a) Core Cooling Rule.” 31. The description of the Davis-Besse vessel-head erosion problem is based on information available on the NRC Davis-Besse event page and in the report by the NRC Lessons Learned Task Force. See US NRC, “Degradation of the Davis-Besse Nuclear Power Station Reactor Pressure Vessel Head Lessons-Learned Report,” September 30, 2002, https://www.nrc.gov/reactors/operating/ops-experience/vessel-head-degradation/lessons-learned/lltf-report.html. 32. US NRC, “Bulletin: 2001–01: Circumferential Cracking of Reactor Pressure Vessel Head Penetration Nozzles,” August 3, 2001, ML012080284, NRC ADAMS; US NRC, “Degradation of the Davis-Besse Nuclear Power Station Reactor Pressure Vessel Head Lessons-Learned Report,” September 30, 2002, https://www.nrc.gov/reactors/operating/ops-experience/vessel-head-degradation/lessons-learned/lltf-report.html. 33. US GAO, Nuclear Regulation: NRC Needs to More Aggressively and Comprehensively Resolve Issues Related to the Davis-Besse Nuclear Power Plant’s Shutdown, 125. 34. US NRC, “Bulletin: 2001–01: Circumferential Cracking of Reactor Pressure Vessel Head Penetration Nozzles,” August 3, 2001, ML012080284, NRC ADAMS; FENOC, “To Discuss Information Related to Supplemental Information Regarding Inspection Plans and Commitments for Davis-Besse in Response to Bulletin 2001–01,” November 28, 2001, ML022410079, NRC ADAMS. 35. US NRC, “Status of NRC Staff Review of FENOC’s Bulletin 2001–01 Response for Davis-Besse,” November 30, 2001, ML023580102, NRC ADAMS; US NRC, “Order Modifying License [Draft],” November 30, 2001, ML022420249, NRC ADAMS. 36. US NRC, “NRC Regulatory Issue Summary 2001–02, Guidance on Risk-Informed Decisionmaking in License Amendment Reviews,” January 18, 2001, ML003778249, NRC ADAMS. 37. Lawrence Burkhart, email to Brian Sheron and others, November 19, 2001, ML022460052, NRC ADAMS; Allen Hiser, email to Bill Bateman and others, November 19, 2001, ML0022460040, NRC ADAMS; Lawrence Burkhart, email to Brian Sheron and others, “Letter to D-B,” December 4, 2001, ML022400649, NRC ADAMS. 38. “Condition Report: Control Rod Drive Nozzle Crack Indication, 02–00891,” February 27–June 1, 2002, ML042940192, NRC ADAMS; US NRC, “Degradation of the Davis-Besse Nuclear Power Station Reactor Pressure Vessel Head Lessons-Learned Report,” September 30, 2002, https://www.nrc.gov/reactors/operating/ops-experience/vessel-head-degradation/lessons-learned/lltf-report.html.
39. Henry, “Ex-Engineer Found Guilty of Concealing Davis-Besse Dangers.” 40. Lew W. Meyers to J. E. Dyer, “Confirmatory Action Letter Response—Management and Human Performance Root Cause Analysis Report on Failure to Identify Reactor Pressure Vessel Head Degradation,” August 21, 2002, ML022750405, NRC ADAMS. 41. Paul Gunter and David Lochbaum, “Anatomy of a Flawed Decision: NRC Has a Brain, but No Spine,” November 15, 2002, ML031110514, NRC ADAMS; Gronlund, Lochbaum, and Lyman, Nuclear Power in a Warming World, 24–25. 42. Edward J. Markey and Marcy Kaptur to Richard A. Meserve, May 1, 2002, ML021220496, NRC ADAMS. 43. Hubert T. Bell to Chairman Meserve, “NRC’s Regulation of Davis-Besse Regarding Damage to the Reactor Vessel Head (Case No. 02–03S),” December 30, 2002, ML030070600, NRC ADAMS; Weil, “NRC IG.” 44. US NRC, “Degradation of the Davis-Besse Nuclear Power Station Reactor Pressure Vessel Head Lessons-Learned Report,” September 30, 2002, https://www.nrc.gov/reactors/operating/ops-experience/vessel-head-degradation/lessons-learned/lltf-report.html. 45. US GAO, Nuclear Regulation: NRC Needs to More Aggressively and Comprehensively Resolve Issues Related to the Davis-Besse Nuclear Power Plant’s Shutdown, 33–45, 94–96. 46. Gary Holahan to Chairman Meserve and others, “Comments in Defense of the Risk-Informed Decision Making Process and on the OIG Event Inquiry, ‘NRC’s Regulation of Davis-Besse Regarding Damage to the Reactor Vessel Head,’ ” January 14, 2003, ML030210302, NRC ADAMS. 47. US GAO, Nuclear Regulation: NRC Needs to More Aggressively and Comprehensively Resolve Issues Related to the Davis-Besse Nuclear Power Plant’s Shutdown, 33–45, 94–96. 48. US GAO, Nuclear Regulation: NRC Needs to More Aggressively and Comprehensively Resolve Issues Related to the Davis-Besse Nuclear Power Plant’s Shutdown, 59, 95, 117–21. 49. US Senate, Committee on Environment and Public Works, Hearing Before the Subcommittee on Clean Air, Climate Change, and Nuclear Safety, 108th Cong., 2nd sess. (May 20, 2004), 25. 50. Mark B. Bezilla to James L. Caldwell, “Resubmittal of Redacted Organizational Safety Culture and Safety Conscious Work Environment Independent Assessment Report and Actions Plans for the Davis-Besse Nuclear Power Station,” March 4, 2005, ML050660425, NRC ADAMS. 51. US NRC, Safety Culture in the ROP, April 5, 2018, ML18095A029, NRC ADAMS. 52. Valerie Barnes, email to the author, November 9, 2019; US NRC, “Final Safety Culture Policy Statement”; US NRC, Safety Culture Policy Statement, NUREG/BR-0500; US NRC, Safety Culture Common Language. 53. “Experts Say Planning, Care Needed for Long Nuclear Unit Lives.”
54. Gertner, “Atomic Balm?” 55. “Return of the Nukes will Take a Miracle.” See also “Support for Nuclear Power Grows in the U.S.”; MacLachlan, “Nuclear Renaissance is Hostage to Public Opinion, WANO Told”; Hultman, Koomey, and Kammen, “What History Can Teach Us about the Future Costs of U.S. Nuclear Power.” 56. “Nuclear Industry Considers Obstacles to the Future.” 57. Lukaszewski, “Nuclear Industry has One More Chance.” See also Ryan, “U.S. Nuclear O&M Costs Pushed Back Downward in 2004”; “Support for Nuclear Power Soars”; “Support of Nuclear Power Grows in the U.S.” 58. Kahn, “Cheney Promotes Increasing Supply as Energy Policy.” 59. Seelye, “Nuclear Power Gains in Status after Lobbying.” 60. Hiruo, “DOE, NRC on Track for Big Push Toward New U.S. Power Reactors.” 61. Hiruo, “ ‘Nuclear Renaissance Is Here,’ Klein Says After TVA Submits COL Application.” 62. Dolley, “Potential for Power Reactor Operation Beyond 60 Years to be Assessed”; Weil, “NRC’s Departing Commissioners Lay Out Vision for Nuclear Resurgence.” For polls on nuclear power over the last thirty years, see Gallup Polls, “For First Time, Majority in U.S. Oppose Nuclear Energy,” March 18, 2016, https://news.gallup.com/poll/190064/first-time-majority-oppose-nuclear-energy.aspx; Bisconti Research, “Public Sees Nuclear Energy as Important, Survey Finds,” Nuclear Energy Institute Website, October 2016, https://www.nei.org/CorporateSite/media/filefolder/resources/reports-and-briefs/national-public-opinion-survey-nuclear-energy-201610.pdf. 63. Dolley, “Industry has been Slow to Adopt Risk-Informed Regulatory Initiatives,” 8–9. 64. Quoted in Wald, “Edging Back to Nuclear Power”; see also Barack Obama, “The 2010 State of the Union Address,” January 27, 2010, https://obamawhitehouse.archives.gov/photos-and-video/video/2010-state-union-address#transcript. 65. Deutch, et al., Update of the MIT 2003 Future of Nuclear Power, 8–10; US DOE, “Obama Administration Announces Loan Guarantees to Construct New Nuclear Power Reactors in Georgia,” Office of Nuclear Energy, February 16, 2010, https://www.energy.gov/ne/articles/obama-administration-announces-loan-guarantees-construct-new-nuclear. 66. Dolley and Hiruo, “Commissioners Review NRC Accomplishments, Challenges.” See also Wald, “Edging Back to Nuclear Power”; “NRC Sets Procedures for New Reactor Hearings”; “License Renewal Reviews Too Slow, Say Senators.” 67. William C. Ostendorff, “Initial Impressions of a New Commissioner,” 2011 NRC Regulatory Information Conference, March 8, 2011, ML110680647, NRC ADAMS. 68. CBS News, Marcia McNutt, “Energy From Quake: If Harnessed, Could Power L.A. for a Year,” March 12, 2011, https://www.youtube.com/watch?v=_C7KKwIMapw; Chang, “Quake Alters Earth’s Balance and Widens Japan”; European Space Agency, “GOCE: The First Seismometer in Orbit,” http://www.esa.int/Our_Activities/Observing_the_Earth/GOCE/GOCE_the_first_seismometer_in_orbit.
69. Casto, Station Blackout, Location 256; Foster, “Alert Sounded a Minute Before the Tremor Struck”; Talbot, “80 Seconds of Warning for Tokyo”; Birmingham, “Japan’s Earthquake Warning System Explained.” 70. “The Japanese Mayor Who Was Laughed At”; Casto, Station Blackout, Location 309. 71. Onishi, “In Japan, Seawall Offered a False Sense of Security”; Read, “How Tenacity, a Wall Saved a Japanese Nuclear Plant.” 72. Casto, Station Blackout, Location 369; US NRC, Staff email exchanges from FOIA/PA-2011–0118, FOIA-PA-2011–0119, FOIA-PA-2011–0120, ML11257A101 and ML11175A278, NRC ADAMS. 73. Casto, Station Blackout, Location 363. 74. Onishi and Glanz, “Japanese Rules for Nuclear Plants Relied on Old Science.” 75. US NRC, Reflections on Fukushima, 16. 76. “EU Reactor Stress Tests to Ignore Probability of Events.” 77. MacLachlan, “Japan to Strengthen Severe Accident Regulations.” See also MacLachlan, “Post-Fukushima Upgrades Could Cost Eur25 Billion: Draft EC Report”; Sains, “Six Energy Ministers in Germany Oppose Nuclear Power in EU.” 78. “Russian President Proposes International Nuclear Safety Rules.” 79. Casto, Station Blackout, Location 610. 80. Siu, et al., “PSA Technology Challenges Revealed by the Great East Japan Earthquake.” 81. Perrow, “Fukushima and the Inevitability of Accidents,” 44–52; Ramana, “Beyond Our Imagination: Fukushima and the Problem of Assessing Risk”; Knowles, “Learning from Disaster?”; Downer, “Disowning Fukushima.” 82. Onishi and Glanz, “Japanese Rules for Nuclear Plants Relied on Old Science.” 83. Yamaguchi, “Japan’s NRA Uses Backfit Power to Order Review of Plant Volcano Risk”; US NRC, “Report: A Comparison of U.S. and Japanese Regulatory Requirements in Effect at the Time of the Fukushima Accident,” November 2013, ML13326A991, NRC ADAMS. In 1991, Japan’s Nuclear Safety Commission produced a PRA that estimated the probability of a severe accident as one in one hundred thousand reactor-years, far lower than US PRA estimates. “The figure is small enough to make us believe there will be no severe accident,” the report said. See Usui, “NSC White Paper Says PSA Shows Japan’s Reactors are Safe Enough.” 84. Onishi and Glanz, “Japanese Rules for Nuclear Plants Relied on Old Science.” 85. IAEA, The Fukushima Daiichi Accident: Report by the Director General, 67–70. 86. Acton and Hibbs, Why Fukushima was Preventable, 23–24; National Research Council, Lessons Learned from the Fukushima Nuclear Accident, 177–195.
87. Nuclear Energy Agency, Use and Development of Probabilistic Safety Assessment: An Overview of the Situation at the End of 2010; Ushio, “NRA Plans to Complete Revision of Plant Inspection Process in 2020”; Russell Gibbs, “Licensee Workshop for Understanding NRC’s Reactor Oversight Process, Tokyo, Japan,” June 8, 2018, ML18177A448, NRC ADAMS. 88. Gordon R. Thompson, New and Significant Information from the Fukushima Daiichi Accident in the Context of Future Operation of the Pilgrim Nuclear Power Plant (Cambridge, MA: Institute for Resource and Security Studies, June 1, 2011), ML12094A183, NRC ADAMS. For a comparison and critique of the use of global statistics in estimating accident probabilities, see US NRC, Probabilistic Risk Assessment and Regulatory Decisionmaking, 31–34. 89. Staff of Edward J. Markey, “Fukushima Fallout: Regulatory Loopholes at U.S. Nuclear Plants,” May 12, 2011, ML111390565, NRC ADAMS. 90. Union of Concerned Scientists, “U.S. Nuclear Power after Fukushima: Common Sense Recommendations for Safety and Security,” July 2011, www.ucsusa.org/nuclear_power; Union of Concerned Scientists, Preventing an American Fukushima, 6. 91. US NRC, “North Anna Power Station, Unit Nos. 1 and 2 Final Accident Sequence Precursor Analysis Results,” February 27, 2013, ML13045A165, NRC ADAMS; US NRC, “NRC Regulatory Issue Summary 2006–24 Revised Review and Transmittal Process for Accident Sequence Precursor Analyses,” December 6, 2006, https://www.nrc.gov/reading-rm/doc-collections/gen-comm/reg-issues/2006/ri200624.pdf; Dolley, “NRC Continues Post-Earthquake Inspections at North Anna.” 92. Hiruo, “Fort Calhoun to Restart This Year Under OPPD Plan”; Eric de Fraguier, “Lessons Learned from the 1999 Blayais Flood: Overview of EDF Flood Risk Management Plan,” U.S. NRC Regulatory Information Conference 2010, Rockville, MD, March 11, 2010, https://www.nrc.gov/public-involve/conference-symposia/ric/past/2010/slides/th35defraguierepv.pdf. 93. US NRC, Recommendations for Enhancing Reactor Safety in the 21st Century, vii-viii. 94. US NRC, Recommendations for Enhancing Reactor Safety in the 21st Century, ix. 95. US NRC, Recommendations for Enhancing Reactor Safety in the 21st Century, vii-viii, 20–22. 96. “Jaczko Advances Closure of Yucca Mt. Licensing Work”; Freebairn and Ostroff, “Majority of NRC Commissioners Approve AP1000 Design Certification”; Hiruo, Freebairn, and Dolley, “IG Report Documents Angry Outbursts by NRC Chairman”; Broder and Wald, “Chairman of N.R.C. to Resign Under Fire.” See also Jaczko, Confessions of a Rogue Nuclear Regulator. 97. Dolley, “Senators Disagree on Approach to NRC Review of Fukushima Accident.” 98. Dolley, “Senators Disagree on Approach to NRC Review of Fukushima Accident,” 1. 99. Freebairn, et al., “Amid Infighting, NRC Regulatory Action Advanced,” 1. 100. For a summary of NRC post-Fukushima actions, see US NRC, “Japan Lessons Learned,” https://www.nrc.gov/reactors/operating/ops-experience/japan-dashboard.html. 101. US NRC, Reflections on Fukushima; US NRC, State-of-the-Art Reactor Consequence Analyses (SOARCA) Report, A-3 to A-5; Mosleh, “PRA: A Perspective on Strengths, Current Limitations, and Possible Improvements.” 102. US NRC, Protecting Our Nation. 103. US NRC, State-of-the-Art Reactor Consequence Analysis, xi-xx, A-1 to A-11; Wald, “N.R.C. Lowers Estimate of How Many Would Die in a Meltdown”; R. W. Borchardt to the Commissioners, “Consideration Of Economic Consequences Within The U.S. Nuclear Regulatory Commission’s Regulatory Framework, SECY-12–0110,” August 14, 2012, https://www.nrc.gov/reading-rm/doc-collections/commission/secys/2012/2012-0110scy.pdf. 104. NEI, Diverse and Flexible Coping Strategies (FLEX) Implementation Guide (Washington, DC: NEI, August 2012), ML12221A205, NRC ADAMS; Dolley, “US Nuclear Industry Enhanced Safety since Fukushima: NEI.” 105. Freebairn, “NRC Preparing to Issue Post-Fukushima Orders by March 11.” 106. US GAO, Nuclear Regulatory Commission: Natural Hazard Assessments Could Be More Risk-Informed, 15–17, 24–28; Freebairn, “NRC Preparing to Issue Post-Fukushima Orders by March 11”; Freebairn, “NRC Ranks, Adds Plants Needing Further Seismic Risk Evaluations”; Nuclear Energy Agency, Use and Development of Probabilistic Safety Assessment: An Overview of the Situation at the End of 2010; Nuclear Energy Agency, Comparison of Probabilistic Seismic Hazard Analysis of Nuclear Power Plants in Areas with Different Levels of Seismic Activity. 107. “Despite Fukushima, Vendors Say U.S. Outlook Unlikely to Change.” 108. Isted, “Economics of Nuclear Power May Deteriorate Post-Fukushima.” 109. Navigant Consulting, Assessment of the Nuclear Power Industry, 39. 110. MacLachlan, et al., “One Year after Fukushima Accident, Industry Remains Unsettled.” 111. Carr, “Exelon Withdrawal of Texas ESP Reflects Merchant Challenges: Analysts.” 112. Freebairn and Ostroff, “NRC Fukushima Focus Should Not Distract Agency or Industry.” 113. Freebairn and Ostroff, “Merchant Economics Hurt New Projects, Operating Units: Analysts.” See Rod McCullum, “Delivering the Nuclear Promise, NEI NCSL Legislative Summit,” August 7, 2016, http://www.ncsl.org/Portals/1/Documents/energy/ESTF_Mccullum_present_8_16.pdf; Freebairn, et al., “Post-Fukushima Modifications Could Cost U.S. Nuclear Operators $3.6 Billion”; Kramer, “US Nuclear Industry Fights for Survival,” 26; Navigant Consulting, Assessment of the Nuclear Power Industry, 39. On the history of natural gas prices, see US Energy Information Administration, “U.S. Price of Natural Gas Sold to Commercial Consumers,” https://www.eia.gov/dnav/ng/hist/n3020us3a.htm.
100. For a summary of NRC post-Fukushima actions, see US NRC, “Japan Lessons Learned,” https://www.nrc.gov/reactors/operating/ops-experience/japandashboard.html. 101. US NRC, Reflections on Fukushima; US NRC, State-of-the-Art Reactor Consequence Analyses (SOARCA) Report, A-3 to A-5; Mosleh, “PRA: A Perspective on Strengths, Current Limitations, and Possible Improvements.” 102. US NRC, Protecting Our Nation. 103. US NRC, State-of-the-Art Reactor Consequence Analysis, xi-xx, A-1 to A-11; Wald, “N.R.C. Lowers Estimate of How Many Would Die in a Meltdown”; R. W. Borchardt to the Commissioners, “Consideration Of Economic Consequences Within The U.S. Nuclear Regulatory Commission’s Regulatory Framework, SECY-12–0110,” August 14, 2012, https://www.nrc.gov/reading-rm /doc-collections/commission/secys/2012/2012–0110scy.pdf. 104. NEI, Diverse and Flexible Coping Strategies (FLEX) Implementation Guide (Washington, DC: NEI, August 2012), ML12221A205, NRC ADAMS; Dolley, “US Nuclear Industry Enhanced Safety since Fukushima: NEI.” 105. Freebairn, “NRC Preparing to Issue Post-Fukushima Orders by March 11.” 106. US GAO, Nuclear Regulatory Commission: Natural Hazard Assessments Could Be More Risk-Informed, 15–17, 24–28; Freebairn, “NRC Preparing to Issue Post-Fukushima Orders by March 11”; Freebairn, “NRC Ranks, Adds Plants Needing Further Seismic Risk Evaluations”; Nuclear Energy Agency, Use and Development of Probabilistic Safety Assessment: An Overview of the Situation at the End of 2010; Nuclear Energy Agency, Comparison of Probabilistic Seismic Hazard Analysis of Nuclear Power Plants in Areas with Different Levels of Seismic Activity. 107. “Despite Fukushima, Vendors Say U.S. Outlook Unlikely to Change.” 108. Isted, “Economics of Nuclear Power May Deteriorate PostFukushima.” 109. Navigant Consulting, Assessment of the Nuclear Power Industry, 39. 110. MacLachlan, et al., “One Year after Fukushima Accident, Industry Remains Unsettled.” 111. Carr, “Exelon Withdrawal of Texas ESP Reflects Merchant Challenges: Analysts.” 112. Freebairn and Ostroff, “NRC Fukushima Focus Should Not Distract Agency or Industry.” 113. Freebairn and Ostroff, “Merchant Economics Hurt New Projects, Operating Unites: Analysts.” See Rod McCullum, “Delivering the Nuclear Promise, NEI NCSL Legislative Summit,” August 7, 2016, http://www.ncsl.org /Portals/1/Documents/energy/ESTF_Mccullum_present_8_16.pdf; Freebairn, et al., “Post-Fukushima Modifications Could Cost U.S. Nuclear Operators $3.6 Billion”; Kramer, “US Nuclear Industry Fights for Survival,” 26; Navigant Consulting, Assessment of the Nuclear Power Industry, 39. On the history of natural gas prices, see US Energy Information Administration, “U.S. Price of Natural Gas Sold to Commercial Consumers,” https://www.eia.gov/dnav/ng /hist/n3020us3a.htm.
114. US NRC, Achieving Exemplary Nuclear Regulation in the 21st Century: Report on Project Aim 2020, Enclosure 1 to SECY-15–0015, January 30, 2015, ML15023A579, NRC ADAMS.
115. US NRC, A Proposed Risk Management Regulatory Framework, 2–8.
116. Victor M. McCree, “Recommendations on Issues Related to Implementation of a Risk Management Regulatory Framework, SECY-15–0168,” December 18, 2015, ML15302A135, NRC ADAMS.
117. Dolley, “Staff Withdraws Proposed Rule on Risk-Informing ECCS Regulations.” See US NRC, “Public Meeting to Discuss the Need for a Rule for Risk Informed Decoupling of Assumed Loss-of-Offsite Power from Loss-of-Coolant Accident Analysis,” June 28, 2016, 11, ML16203A005, NRC ADAMS; US NRC, “Risk-Informed Changes to Loss-of-Coolant Accident Technical Requirements” (2016).
118. Dolley, “Industry Revises Approach to Risk-Informed Regulation after Demise of 50.46a.”
119. Anthony R. Pietrangelo to Allison M. Macfarlane, “Industry Support and Use of PRA and Risk-Informed Regulation,” ML13354B997, NRC ADAMS. See also Freebairn and Ostroff, “NRC Fukushima Focus Should Not Distract Agency or Industry.”
120. Anthony R. Pietrangelo to Mark A. Satorius, “Use of Qualitative Factors in Regulatory Decision Making,” May 11, 2015, ML15217A335, NRC ADAMS.
121. For an excellent commission debate on the proper balance of qualitative and quantitative factors in regulatory decisions, see the voting record on “Consideration of Additional Requirements for Containment Venting Systems for Boiling Water Reactors with Mark I and Mark II Containments, SECY-12–0157,” March 19, 2013, ML13078A012, NRC ADAMS.
122. Freebairn, “U.S. Operators Will Load Test Rods of Accident Tolerant Fuels Starting Next Year”; Hiruo, “Industry Looking for Ways to Hasten Use of Accident Tolerant Fuel”; Dolley, “Plants Apply to Use 50.69 Process to Risk-Inform Component Categorization”; Nuclear Energy Institute, Efficiency Bulletin: 17–09.
123. Nuclear Energy Agency, Use and Development of Probabilistic Safety Assessment: An Overview of the Situation at the End of 2010, 8. See also the forthcoming 2017 edition.
124. US NRC, Probabilistic Risk Assessment, 36; Mosleh, “PRA: A Perspective on Strengths, Current Limitations, and Possible Improvements,” 1–10; US NRC, State-of-the-Art Reactor Consequence Analyses (SOARCA) Report, 69–85.
125. US NRC, Guidance on the Treatment of Uncertainties Associated with PRAs in Risk-Informed Decisionmaking, Final Report; US NRC, Regulatory Guide 1.174, Rev. 3, An Approach for Using Probabilistic Risk Assessment in Risk-Informed Decisions on Plant-Specific Changes to the Licensing Basis, January 2018, ML17317A256, NRC ADAMS; Nathan Siu, Kevin Coyne, and Felix Gonzalez, “Knowledge Management and Engineering at a Risk-Informed Regulatory Agency: Challenges and Suggestions,” March 30, 2017, ML17089A538, NRC ADAMS; Anthony R. Pietrangelo to William M. Dean, “Transmittal of NEI 16–06, ‘Crediting Mitigating Strategies in Risk-Informed Decision Making,’” August 26, 2016, ML16286A297, NRC ADAMS; US NRC, “Risk Informed Steering Committee Public Meeting,” April 13, 2017, ML17104A318, NRC ADAMS; Victor M. McCree to the Commissioners, “Plans for Increasing Staff Capabilities to Use Risk Information in Decision-Making Activities, SECY-17–0112,” November 13, 2017, ML17270A192, NRC ADAMS.
126. Garrick, Quantifying Global Catastrophic Risks, 7, 54–57.
127. Nathan Siu, “The Frontier: Grand Challenges and Advanced Methods Lecture 9–3,” January 15, 2019, ML19011A444, NRC ADAMS; Daniel J. Merzke to Ho K. Nieh, “Effectiveness Review of Cross-Cutting Issues,” April 23, 2014, ML14099A171, NRC ADAMS.
128. US NRC, “The Regulatory Craft: An Interview with Commissioner Stephen Burns,” May 2019, https://www.youtube.com/watch?v=uAbz9B78tNA&t=338s.
129. Kadak and Matsuo, “The Nuclear Industry Transition to Risk-Informed Regulation.”
130. Nathan Siu, email to the author, April 13, 2020.
131. Frank N. von Hippel, email to the author, July 2, 2015.
Bibliography
ARCHIVAL SOURCES
Doub, William O. Personal Papers. Hoover Institution, Stanford, CA.
Harold L. Price Papers. Herbert Hoover Presidential Library, West Branch, IA.
John O. Pastore Papers. Phillips Memorial Library, Library Archives, Providence College, Providence, RI.
Manson Benedict Papers. Collection 428, Massachusetts Institute of Technology, Libraries Institute Archives and Special Collections, Cambridge, MA.
Norman C. Rasmussen Papers. Collection 542, Massachusetts Institute of Technology, Libraries Institute Archives and Special Collections, Cambridge, MA.
Okrent, David. Professional Papers. University of California, Los Angeles, Department of Engineering, Los Angeles, CA.
Union of Concerned Scientists Papers. Massachusetts Institute of Technology Archives, Collection 434, Libraries Institute Archives and Special Collections, Cambridge, MA.
US Department of Energy. OpenNet System (DOE OpenNet), https://www.osti.gov/opennet.
———. Nuclear Testing Archive—Public Reading Facility, Las Vegas, NV.
———. Public Reading Room Catalog (DOE PRRC), http://reading-room.labworks.org/Catalog/Search.aspx.
———. Records of the Atomic Energy Commission. Record Group 326, Germantown, MD.
US National Archives and Records Administration. Record Group 220, Records of Temporary Committees, Commissions, and Boards, 1893–2008, President Reagan, Presidential Commission on the Space Shuttle Challenger Accident, https://catalog.archives.gov/id/644134.
US Nuclear Regulatory Commission. ADAMS public documents website (NRC ADAMS), http://www.nrc.gov/reading-rm/adams.html.
———. Public Documents Room, Legacy Microfiche Collection (NRC Legacy), Rockville, Maryland.
US Senate, Joint Committee on Atomic Energy. Records of the Joint Committee on Atomic Energy, 1946–1977, Record Group 128, National Archives and Records Administration, Washington, DC.
Von Hippel, Frank. Personal Papers. Princeton University, Princeton, NJ.
Ybarrondo, Larry. Professional Journal, Jackson Hole, WY, May 21, 1969. Private collection.
PUBLISHED SOURCES
“A New View of Safety: Reassuring Rasmussen Report.” Nuclear Industry 21 (August 1974): 13–15.
“A Precedent-Setting Decision.” Nuclear Industry 2 (December 1964): 4–9.
“ACRS Not Satisfied with Implementation Plan.” Nuclear News 25, no. 14 (November 1982): 61–62.
“ACRS Questioning PWR Emergency Core Cooling: Plant Delays Possible.” Nucleonics Week, May 6, 1971, 1.
“ACRS Urges New Safety Research Direction Using WASH-1400 Cornerstone.” Inside N.R.C., August 6, 1979, 5–6.
“ACRS Warns: Reactor Vessel Rupture Should be Provided for in Design.” Nucleonics Week, December 9, 1965, 1–2.
Acton, James, and Mark Hibbs. Why Fukushima Was Preventable. Washington, DC: Carnegie Endowment for International Peace, 2012.
Adede, A. O. The IAEA Notification and Assistance Conventions in Case of a Nuclear Accident. London: Graham & Trotman, 1987.
Ad-Hoc Review Group. Risk Assessment Review Group Report to the Nuclear Regulatory Commission, NUREG/CR-0400. Washington, DC: U.S. Nuclear Regulatory Commission (NRC), September 1978.
“AEC Internal Documents on ECCS Reveal Staff Qualms.” Nucleonics Week, February 17, 1972, 8–13.
“AEC Loses a Controversial Figure: Milton Shaw ‘Retires’ After Confrontation with Commission.” Nuclear Industry 20 (June 1973): 9–11.
“AEC Operational Reorganization Creates Independent Safety Division.” Nuclear Industry 20 (May 1973): 3–6.
“AEC Reveals ACRS-Suggested ECCS R&D Now Underway.” Nuclear Industry 19 (July 1972): 29–30.
“AEC’s Shaw Comes Off Second Best in Cross-Examination of ECCS.” Nucleonics Week, April 13, 1972, 4.
“AEC Strips Shaw of LWR Safety Sanctions Despite JCAE Leader’s Anger.” Nucleonics Week, May 17, 1973, 1.
“AEC Task Force Considering Lower Power—Temporarily—for LWRs.” Nucleonics Week, May 27, 1971, 1.
“An AEC-Industry Alliance to ‘Sell’ Nuclear Power Has Been Counterproductive.” Nucleonics Week, June 20, 1974, 3–4.
“The AEC Testifies.” Nuclear News 16, no. 14 (November 1973): 58–59.
“AIF Advises NRC on Use of Probabilistic Risk Assessment in Licensing.” Inside NRC, July 3, 1980, 9.
Airozo, Dave. “Battle Over Decommissioning Standards Escalates as NRC Final Rule Nears.” Inside NRC, April 28, 1997, 1, 13–15.
———. “Commission Says Industry Overkill of Maintenance Rule May Prove Unwise.” Inside NRC, June 5, 1989, 5.
———. “Commissioners Affirm Votes on Decommissioning Standards.” Inside NRC, May 26, 1997, 1, 12–13.
———. “EPA Administrator Warns Jackson to Not Loosen Residual Rad Standards.” Inside NRC, February 17, 1997, 5.
———. “Poor Maintenance Cited as Cause of Many Forced Outages.” Inside NRC, August 18, 1986, 8.
———. “Zech Complains about Lack of Industry Assistance on Maintenance Rule.” Inside NRC, May 22, 1989, 2.
Airozo, Dave, and Brian Jordan. “Carr Says He’ll ‘Chew On Management’ to Improve Plant Maintenance.” Inside NRC, September 28, 1987, 7.
Alexander, R. E., et al. Final Snapshot Safeguards Report, NAA-SR-10022 (REV). San Diego, CA: Atomics International, March 20, 1965.
Apostolakis, George. “How Useful is Quantitative Risk Assessment?” Risk Analysis 24, no. 3 (2004): 515–20.
Apostolakis, G. E., J. H. Bickel, and S. Kaplan. “Probabilistic Risk Assessment in the Nuclear Power Utility Industry.” Reliability Engineering and System Safety 24 (1989): 91–94.
Apostolakis, George, and Ali Mosleh. Risk-Informed Decision Making: A Survey of United States Experience. Los Angeles and Tokyo: B. John Garrick Institute for Risk Sciences and the Nuclear Risk Research Center, 2017.
Arnold, David. “Europe, Technology, and Colonialism in the 20th Century.” History and Technology 21, no. 1 (March 2005): 85–106.
“At Best, Commissioner Support is Lukewarm for Way of Ranking Safety Issues.” Inside NRC, April 19, 1982, 9–10.
Axelrod, Regina. “Nuclear Power and EU Enlargement: The Case of Temelin.” Environmental Politics 13, no. 1 (2004): 153–72.
“B&W Comes Up With Preliminary ECCS Clad Temperature Peak of 1,700 F.” Nucleonics Week, June 20, 1974, 3.
Baecher, Gregory B. Annotated Readings in the History of Risk Analysis in Dam Safety, RMC-TR-2018–13. Washington, DC: US Army Corps of Engineers, 2018.
Balogh, Brian. Chain Reaction: Expert Debate and Public Participation in American Commercial Nuclear Power, 1945–1975. Cambridge, UK: Cambridge University Press, 1991.
Barber, Wayne. “Clean Air, Consolidation Make Nuclear Less of Investor Pariah.” Nucleonics Week, April 22, 1999, 3.
Barth, Kai-Henrik. “Catalysts of Change: Scientists as Transnational Arms Control Advocates in the 1980s.” Osiris 21, no. 1 (2006): 182–206.
“Battle Over Choice to Heat Up This Year.” Power Engineering 101, no. 3 (March 1, 1997).
Beck, Clifford K. “Engineering Out the Distance Factor.” Atomic Energy Law Journal 5 (Winter 1963): 245–60.
———. “The Thesis is Good; Practice is Difficult.” Nucleonics 17, no. 2 (February 1959): 67.
Beck, Ulrich. Risk Society: Towards a New Modernity. London: Sage Publications, 1992.
———. World at Risk. Cambridge, UK: Polity, 2009.
Bell, Trudy E., and Karl Esch. “The Space Shuttle: A Case of Subjective Engineering.” IEEE Spectrum, June 1989, 42–46.
Belson, William A. “Matching and Prediction on the Principle of Biological Classification.” Journal of the Royal Statistical Society, Series C (Applied Statistics) 8, no. 2 (June 1959): 65–75.
Bennett, Gary L. “Overview of the U.S. Flight Safety Process for Space Nuclear Power.” Nuclear Safety 22, no. 4 (July-August 1981): 423–34.
Berthelemy, Michel, and Francois Leveque. “Harmonizing Nuclear Safety Regulation in the EU: Which Priority?” Intereconomics 46, no. 3 (2011): 132–37.
Bickers, Richard. “EPA Accepts NRC Unrestricted Release Dose Limits at West Valley.” Inside NRC, August 27, 2001, 13.
Bierly, Paul E., III, and J. C. Spender. “Culture and High Reliability Organizations: The Case of the Nuclear Submarine.” Journal of Management 21, no. 4 (1995): 639–56.
Birmingham, Lucy. “Japan’s Earthquake Warning System Explained.” Time, March 18, 2011. http://content.time.com/time/printout/0,8816,2059780,00.html.
Blake, E. Michael. “Congress Passes Very Pronuclear Energy Bill.” Nuclear News 35, no. 14 (November 1992): 33.
Boeing Corporation, Research and Engineering Division. Fault Tree for Safety, D6–53604. Seattle, WA: Boeing Aerospace Co., November 1968.
Boffey, Philip M. “Reactor Safety: Congress Hears Critics of Rasmussen Report.” Science 192, no. 4246 (June 25, 1976): 1312–13.
Boland, Joseph B. “The Cold War Legacy of Regulatory Risk Analysis: The Atomic Energy Commission and Radiation Safety.” PhD diss., University of Oregon, 2002.
Boring, Ronald Laurids. “A Survey of Expert Elicitation Practices for Probabilistic Risk Assessment.” In Proceedings of the Human Factors and Ergonomics Society 59th Annual Meeting, 2015, Los Angeles, CA, 1447–51. Santa Monica, CA: HFES, 2015.
Boudia, Soraya, and Nathalie Jas. “Risk and Risk Society in Historical Perspective.” History and Technology 23, no. 4 (December 2007): 317–31.
Bray, A. P., and P. W. Ianni. “Why Did We Need a Design Basis Accident?” Transactions of the American Nuclear Society 10 (June 10, 1973): 203–4.
Breyer, Stephen. Breaking the Vicious Circle: Toward Effective Risk Regulation. Cambridge, MA: Harvard University Press, 1993.
Brinkhorst, L. J. “Perspectives from the Commission of the European Communities.” In The Safety of Nuclear Power: Strategy for the Future, Proceedings of a Conference, Vienna, 2–6 September 1991, 23–30. Vienna: International Atomic Energy Agency (IAEA), 1992.
Broder, John M., and Matthew L. Wald. “Chairman of N.R.C. to Resign Under Fire.” New York Times, May 21, 2012, A13.
Brookhaven National Laboratory. Supplement to Report on the Brookhaven Nuclear Reactor, BNL-18. Oak Ridge, TN: Technical Information Service, August 30, 1948.
Brown, R. A., R. J. Budnitz, P. Butcher, and D. G. Reichenbach. “Ignalina In-Depth Safety Assessment.” Nuclear Safety 38, no. 1 (January-March 1997): 24–34.
Brunot, W. K. “Reliability of a Complex Safety Injection System from Digital Simulation.” Transactions of the American Nuclear Society, Fifteenth Annual Meeting 12 (June 15–19, 1969): 169–70.
Buden, David. “The Acceptability of Reactors in Space.” 16th Intersociety Energy Conversion Engineering Conference, Atlanta, GA, August 9–14, 1981, LA-UR-81–1371. Los Alamos, NM: Los Alamos Scientific Laboratory (LANL), 1981.
Buhl, Anthony R., et al. “The IDCOR Program—Severe Accident Issues, Individual Plant Examinations and Source Term Developments.” In Risk Assessment and Management, edited by Lester B. Lave, 205–17. New York: Plenum Press, 1987.
Bukro, Casey. “Nuclear Mishap Stuns Regulators.” Chicago Tribune, June 23, 1985.
“Bulgaria Will Honor Its Commitments to EU to Shut Down Kozloduy NPP Units 3 and 4.” Bulgarian EuroBulletin (2003). http://eubulletin.mfa.government.bg/Issue02/Highlights.asp (accessed July 19, 2012).
Burnett, T. W. T. Reactor Protection System Diversity in Westinghouse Pressurized Water Reactors, WCAP-7306. Pittsburgh, PA: Westinghouse Electric Corporation, April 1969.
Burnham, David. “Nuclear Agency Revokes Support for Safety Study.” New York Times, January 20, 1979, 1, 19.
———. “U.S. Orders Construction Halt on Ohio Atom Plant.” New York Times, November 13, 1982, 1.
Burns, Stephen G. “The Impact of the Major Nuclear Power Plants Accidents on the International Legal Framework for Nuclear Power.” Nuclear Law Bulletin 101 (2018): 7–30.
Burton, Marsha, and Lynne Olver. “Shutdown: Can Nuclear Plants Survive Deregulation? The Jury is Still Out.” Wall Street Journal, September 14, 1998, R21.
Caddy, Joanne. “Harmonisation and Asymmetry: Environmental Policy Coordination between the European Union and Central Europe.” Journal of European Public Policy 4, no. 3 (1997): 318–36.
Campbell, John L. Collapse of an Industry: Nuclear Power and the Contradictions of U.S. Policy. Ithaca, NY: Cornell University Press, 1988.
Carlisle, Rodney P. “Probabilistic Risk Assessment in Nuclear Reactors: Engineering Success, Public Relations Failure.” Technology and Culture 38, no. 4 (October 1997): 920–44.
Carlisle, Rodney P., with Joan M. Zenzen. Supplying the Nuclear Arsenal: American Production Reactors, 1942–1992. Baltimore, MD: Johns Hopkins University Press, 1996.
Carmin, Joann, and Stacy D. VanDeveer. “Enlarging EU Environments: Central and Eastern Europe from Transition to Accession.” Environmental Politics 13, no. 1 (Spring 2004): 3–24.
Carr, Housley. “Exelon Withdrawal of Texas ESP Reflects Merchant Challenges: Analysts.” Nucleonics Week, August 30, 2012, 1.
Casto, Charles S. Station Blackout: Inside the Fukushima Nuclear Disaster and Recovery. New York: Radius Book Group, 2018.
Center for Chemical Process Safety. Guidelines for Chemical Process Quantitative Risk Analysis. Second Edition. New York: Wiley-Interscience, 2000.
“Chairman Selin Travels to Soviet Union.” Inside NRC, September 23, 1991, 15.
Chang, Kenneth. “Quake Alters Earth’s Balance and Widens Japan.” The New York Times, March 13, 2011. https://www.nytimes.com/2011/03/14/world/asia/14seismic.html.
“Charges of Reprisals Against Staff ECCS Critics Denied by AEC, Idaho.” Nucleonics Week, March 2, 1972, 1–2.
“Cherry Attacks Hanauer Credibility and Honesty.” Nucleonics Week, February 17, 1972, 3.
“Cherry Threatens Suit to Block AEC Rulemaking Hearings.” Nucleonics Week, December 16, 1971, 4.
Choiniere, Paul. “NRC Concurs: NU Firings Discriminatory.” The Day (New London, CT), September 9, 1998.
Clarke, Lee. Worst Cases: Terror and Catastrophe in the Popular Imagination. Chicago: University of Chicago Press, 2006.
Clavin, Patricia. “Defining Transnationalism.” Contemporary European History 14, no. 4 (November 2005): 421–39.
Cohen, Bernard L. The Nuclear Energy Option: An Alternative for the 90s. New York: Plenum, 1990.
Cohn, Steven Mark. Too Cheap to Meter: An Economic and Philosophical Analysis of the Nuclear Dream. Albany: State University of New York Press, 1997.
Columbia Accident Investigation Board. Columbia Accident Investigation Report. Washington, DC: NASA, August 2003.
Commission of the European Communities. Summary Report on Safety Objectives in Nuclear Power Plants, EUR 12273 EN. Brussels: The Commission, 1989.
Committee on Shuttle Criticality Review and Hazard Analysis Audit. Post-Challenger Evaluation of Space Shuttle Risk Assessment and Management. Washington, DC: National Academy Press, 1988.
Committee on the Science of Science Communication. Communicating Science Effectively: A Research Agenda. Washington, DC: National Academies Press, 2017.
“COMSAT-Type Entity Proposed for Enrichment by Hosmer.” Nuclear Industry 15 (June 1968): 53–56.
“Consolidated Edison Has Spent $6 Million on ECCS Equipment for Indian Point 1.” Nucleonics Week, February 13, 1975, 5.
“Consolidated Edison is Now Offering to Run Indian Point-1 at 40% of Power.” Nucleonics Week, June 12, 1975, 7.
Cook, James. “Nuclear Follies.” Forbes 11 (February 1985), 1, 82.
Cooke, Robert A., and Denise M. Rousseau. “Behavioral Norms and Expectations: A Quantitative Approach to the Assessment of Organizational Culture.” Group & Organizational Studies 13, no. 3 (September 1988): 245–73.
Cooke, Roger M. Experts in Uncertainty: Opinion and Subjective Probability in Science. New York: Oxford University Press, 1991.
Coppola, Anthony. “Reliability Engineering of Electronic Equipment: A Historical Perspective.” IEEE Transactions on Reliability R-33, no. 1 (April 1984): 29–35.
Cottrell, William B., and A. W. Savolainen, eds. U.S. Reactor Containment Technology: A Compilation of Current Practices in Analysis, Design, Construction, Test and Operation, ORNL-NSIC-5. Oak Ridge, TN: Oak Ridge National Laboratory, August 1965.
“Council Resolution of 22 July 1975 on the Technological Problems of Nuclear Safety.” Official Journal of the European Communities 128C (14 August 1975): 1.
“Czechs Agree to Let Outside Experts Inspect Atomic Power Plant.” New York Times, December 14, 2000, 7.
Davies, Richard. “The Effectiveness of the Sizewell B Public Inquiry in Facilitating Communication about the Risks of Nuclear Power.” Science, Technology and Human Values 12, no. 3/4 (Summer-Autumn 1987): 102–10.
Davis, W. K., and W. B. Cottrell. “Containment and Engineered Safety of Nuclear Plants.” In Nuclear Safety, 363–71. Vol. 13 of Proceedings of the Third International Conference on the Peaceful Uses of Atomic Energy, Geneva, Switzerland, 31 August–9 September 1964. New York: United Nations, 1965.
“Design Basis Accidents for Power Reactors.” In Transactions of the American Nuclear Society, 1973 Annual Meeting, June 10–14, 1973, 203–7. New York: Academic Press, 1974.
“Despite Fukushima, Vendors Say U.S. Outlook Unlikely to Change.” Nucleonics Week, May 12, 2011, 1.
Detterman, R. L., A. Weitzberg, and C. A. Willis. Aerospace Safety Program—Safe Disposal of SNAP Reactors, NAA-SR-11353. Canoga Park, CA: Atomics International, July 25, 1965.
Deutch, John M., et al. Update of the MIT 2003 Future of Nuclear Power: An Interdisciplinary MIT Study. Cambridge, MA: MIT Energy Initiative, 2009. https://web.mit.edu/nuclearpower/pdf/nuclearpower-update2009.pdf.
DiNunno, J. J., et al. Calculation of Distance Factors for Power and Test Reactor Sites: Technical Information Document, TID-14844. Washington, DC: Atomic Energy Commission, March 23, 1962.
“Dixy Ray: Another Non-Nuclear Face Among the AEC Commissioners.” Nucleonics Week, July 20, 1972, 4.
Dolley, Steven. “Commission to Consider 50.46(a) Core Cooling Rule.” Inside NRC, January 14, 2011, 1–3.
———. “EPRI: Risk-Informed Approaches Have Reduced Accident Risk.” Nucleonics Week, April 3, 2008, 8.
———. “Industry Has Been Slow to Adopt Risk-Informed Regulatory Initiatives.” Inside NRC, March 17, 2008, 8–9.
———. “Industry Revises Approach to Risk-Informed Regulation after Demise of 50.46a.” Inside NRC, October 17, 2016, 1.
———. “NRC Continues Post-Earthquake Inspections at North Anna.” Nucleonics Week, October 6, 2011, 1.
———. “NRC Issues 50.46 Proposed Rule Revising Emergency Cooling Regulations.” Inside NRC, November 14, 2005, 1, 15.
———. “Plants Apply to Use 50.69 Process to Risk-Inform Component Categorization.” Nucleonics Week, October 19, 2017, 1.
———. “Potential for Power Reactor Operation Beyond 60 Years to be Assessed.” Inside NRC, March 17, 2008, 5.
———. “Power Uprates Using Proposed Rule Could Save Billions of Dollars, NRC Says.” Inside NRC, August 17, 2009, 1, 9–10.
———. “Senators Disagree on Approach to NRC Review of Fukushima Accident.” Nucleonics Week, August 4, 2011, 1.
———. “Staff Withdraws Proposed Rule on Risk-Informing ECCS Regulations.” Inside NRC, May 7, 2012, 3–4.
———. “US Nuclear Industry Enhanced Safety since Fukushima: NEI.” Nucleonics Week, March 7, 2013, 3.
Dolley, Steven, and Elaine Hiruo. “Commissioners Review NRC Accomplishments, Challenges.” Inside NRC, March 14, 2011, 1, 7–8.
Domenici, Pete V. A Brighter Tomorrow: Fulfilling the Promise of Nuclear Energy. Lanham, MD: Rowman & Littlefield, 2004.
Doron, Z. J., and H. Albers. “Mean Annual Severity: An Extension of the Quantitative-Probabilistic Approach to Reactor Safety Analysis.” Nuclear Engineering and Design 9 (March 1969): 349–56.
“Doub Makes Clear to Utilities New AEC Attitude Toward Public.” Nucleonics Week, October 19, 1971, 1–2.
Downer, John. “Disowning Fukushima: Managing the Credibility of Nuclear Reliability Assessment in the Wake of Disaster.” Regulation and Governance 8 (2014): 287–309.
Dranseikaite, Edita. “The Closure of the Ignalina Nuclear Power Plant.” In Value Complexity in Crisis Management: The Lithuanian Transition, edited by Stephanie Buus, Lindy M. Newlove, and Eric K. Stern, 197–231. Stockholm: Elanders Gotab, 2005.
Duffy, Robert J. Nuclear Politics in America: A History and Theory of Government Regulation. Lawrence: University Press of Kansas, 1997.
Durbin, Nancy. Review of International Oversight of Safety Culture in Nuclear Facilities. Idaho National Laboratory, Battelle Energy Alliance, March 2006.
Dumas, Lloyd J. Lethal Arrogance: Human Fallibility and Dangerous Technologies. New York: St. Martin’s Press, 1999.
Dunlap, Riley E. “Trends in Public Opinion Toward Environmental Issues: 1965–1990.” Society and Natural Resources 4, no. 3 (1991): 285–312.
Dyke, Richard Wayne. Mr. Atomic Energy: Congressman Chet Holifield and Atomic Energy Affairs, 1945–1974. New York: Greenwood Press, 1989.
“East German Plan Foresees Western Help in Nuclear Construction.” Nucleonics Week, January 18, 1990, 1.
“E. Europe Nuclear Plants Worry Sweden’s Leader.” Washington Post, February 21, 1992, A14.
“ECCS Enters New Phase.” Nuclear Industry 19 (April 1972): 5–10.
“ECCS Rulemaking Was Worthwhile After All, Industry Sources Say.” Nucleonics Week, January 10, 1974, 2–3.
Eckberg, C. R. WS-133B Fault Tree Analysis Program Plan. Seattle, WA: The Boeing Corporation, March 16, 1963.
ENCONET Consulting. Current Status of Probabilistic Safety Assessments for Soviet Designed Reactors: Final Report, EUR 17567 EN. 1999. https://op.europa.eu/en/publication-detail/-/publication/806306d7-8917-4b21-93ad-bae274d0f9fa.
Electric Power Research Institute. Safety and Operational Benefits of Risk-Informed Initiatives. Palo Alto, CA: EPRI, February 2008.
Emshwiller, John R. “Nuclear Nemesis.” The Wall Street Journal, March 10, 1978, 1.
“EPA Allowed to Defer to NRC on Radionuclide Emissions Standards.” Inside NRC, November 5, 1990, 13.
“EPA Rejects AEC’s Emergency Cooling Draft Environmental Statement.” Nucleonics Week, February 22, 1973, 2–3.
Epler, E. P. “Common Mode Failure Considerations in the Design of Systems for Protection and Control.” Nuclear Safety 10, no. 1 (January-February 1969): 38–45.
Ergen, W. K., et al. Emergency Core Cooling: Report of Advisory Task Force on Power Reactor Emergency Cooling. Washington, DC: US Atomic Energy Commission, 1967.
ERI Consulting & Co. Review of Risk Management Practices in Various Organizations and Industries, ERI/ESA-00101. Rotkreuz, Switzerland: European Space Agency, January 2000.
Ericson, Clifton A., II. “Fault Tree Analysis—A History.” In Proceedings of the 17th International System Safety Conference—1999, Relken Engineering. https://ftaassociates.files.wordpress.com/2018/12/C.-Ericson-Fault-Tree-Analysis-A-History-Proceedings-of-the-17th-International-System-Safety-Conference-1999.pdf.
“EU Reactor Stress Tests to Ignore Probability of Events.” Inside NRC, April 11, 2011, 9–10.
“EU Wants Soviet-Style Nuclear Plants Closed.” The Prague Post, June 29, 1994.
European Commission. Agenda 2000: The Challenge of Enlargement, COM(97) 2000 final. Vol. 2. Brussels: Commission of the European Communities, July 15, 1997. https://www.cvce.eu/en/obj/european_commission_communication_ii_agenda_2000_the_challenge_of_enlargement_1997-en-353b1d52-69fb-43f4-9862-f949dcc3a4ef.html.
———. Nuclear Safety and the Environment: 25 Years of Community Activities Towards Harmonisation of Nuclear Safety Criteria and Requirements—Achievements and Prospects, EUR 20055 EN. October 2001. https://inis.iaea.org/collection/NCLCollectionStore/_Public/36/090/36090326.pdf.
———. Nuclear Safety and the Environment: 30 Years of NRWG Activities Towards Harmonisation of Nuclear Safety Criteria and Requirements, EUR 20818 EN. November 2002. https://inis.iaea.org/collection/NCLCollectionStore/_Public/36/090/36090338.pdf.
“European Wariness of PSA Increases as Design Basis Issues Approached.” Inside NRC, August 16, 1999, 5.
“Ex-Commissioner Doub: AEC Can’t Regulate Every Aspect of Nuclear Power.” Nucleonics Week, October 3, 1974, 5–6.
“Experts Say Planning, Care Needed for Long Nuclear Unit Lives.” Nucleonics Week, October 25, 2007, 1.
Farmer, F. R. “Siting Criteria—A New Approach.” In Containment and Siting of Nuclear Power Plants: Proceedings of a Symposium on the Containment and Siting of Nuclear Power Plants Held by the International Atomic Energy Agency in Vienna, 3–7 April 1967, 303–24. Vienna: International Atomic Energy Agency, 1967.
Fields, R. E., and J. W. McWhirter. “Probability Analysis of a Severe Reactor Accident.” Transactions of the American Nuclear Society 6 (June 12, 1963): 98.
Finlayson, F. C., et al. Human Engineering of Nuclear Power Plant Control Rooms and Its Effects on Operator Performance, ATR-77. El Segundo, CA: Aerospace Corporation, February 1977.
Finney, John W. “New AEC Chairman Moves Against Dominance of Congressional Joint Panel.” New York Times, May 22, 1973, 17.
Fischhoff, Baruch. “Managing Risk Perceptions.” Issues in Science and Technology 2, no. 1 (Fall 1985): 83–96.
———. “Risk Perception and Communication Unplugged: Twenty Years of Process.” Risk Analysis 15, no. 2 (1995): 137–45.
Fischhoff, Baruch, et al. Acceptable Risk. Cambridge, UK: Cambridge University Press, 1981.
———. “How Safe is Safe Enough? A Psychometric Study of Attitudes Toward Technological Risks and Benefits.” Policy Sciences 9 (1978): 127–52.
Fischhoff, Baruch, Paul Slovic, and Sarah Lichtenstein. “Weighing the Risks: Which Risks are Acceptable?” Environment 2, no. 4 (1979): 17–20, 32–38.
Fitzgerald, Joseph J. Criteria and Methods for Evaluating Environmental Reactor Hazards, KAPL-1527. Schenectady, NY: Knolls Atomic Power Laboratory, March 3, 1956.
Ford, Daniel. The Cult of the Atom: The Secret Papers of the Atomic Energy Commission. New York: Simon and Schuster, 1982.
———. A History of Federal Nuclear Safety Assessments: From WASH-740 Through the Reactor Safety Study. Cambridge, MA: Union of Concerned Scientists, 1977.
———. “The Hole in the Reactor.” New York Times, April 13, 2002, A17.
Ford, Daniel F., and Henry W. Kendall. An Assessment of the Emergency Core Cooling Systems Rulemaking Hearings. Cambridge, MA: Union of Concerned Scientists, 1974.
Foster, Peter. “Alert Sounded a Minute Before the Tremor Struck.” The Daily Telegraph, March 11, 2011. https://www.telegraph.co.uk/news/worldnews/asia/japan/8377510/Alert-sounded-a-minute-before-the-tremor-struck.html.
Fragola, J. R. “Risk Management in U.S. Manned Spacecraft: From Apollo to Alpha and Beyond.” In Proceedings of the Product Assurance Symposium and Software Product Assurance Workshop, 19–21 March 1996, ESA SP-377, 83–92. Noordwijk, Netherlands: ESA Publications, 1996.
Franklin, Ben A. “NRC Suspends Enforcement Drive on Licensees’ Use of Dedicated Parts.” Nucleonics Week, May 3, 1990, 3.
———. “Selin’s East European ‘Snapshot’ is a Very Pessimistic Picture.” Nucleonics Week, October 10, 1991, 12.
Frederick, Eva. “Predicting Three Mile Island.” MIT Technology Review, April 24, 2019. https://www.technologyreview.com/s/613371/predicting-three-mile-island.
Freebairn, William. “NRC Preparing to Issue Post-Fukushima Orders by March 11.” Nucleonics Week, January 19, 2012, 1.
———. “NRC Ranks, Adds Plants Needing Further Seismic Risk Evaluations.” Nucleonics Week, May 15, 2014, 1.
———. “U.S. Operators Will Load Test Rods of Accident Tolerant Fuels Starting Next Year.” Nucleonics Week, June 22, 2017, 1.
Freebairn, William, et al. “Amid Infighting, NRC Regulatory Action Advanced.” Nucleonics Week, December 22, 2011, 1.
———. “Post-Fukushima Modifications Could Cost U.S. Nuclear Operators $3.6 Billion.” Nucleonics Week, June 6, 2013, 1.
Freebairn, William, and Jim Ostroff. “Majority of NRC Commissioners Approve AP1000 Design Certification.” Nucleonics Week, December 15, 2011, 1.
———. “Merchant Economics Hurt New Projects, Operating Units: Analysts.” Nucleonics Week, September 13, 2012, 1.
———. “NRC Fukushima Focus Should Not Distract Agency or Industry: NEI Head.” Nucleonics Week, December 22, 2011, 3–5.
Freudenburg, William R. “Perceived Risk, Real Risk: Social Science and the Art of Probabilistic Risk Assessment.” Science 242, no. 4875 (October 7, 1988): 44–49.
Fullwood, Ralph F. Probabilistic Safety Assessment in the Chemical and Nuclear Industries. Boston: Butterworth-Heinemann, 2000.
Garrick, B. John. Probabilistic Analysis of Nuclear Reactor Containment Structures, HN-203. Los Angeles: Holmes & Narver, 1969.
———. “PRA-Based Risk Management: History and Perspectives.” Nuclear News 57, no. 8 (July 2014): 48–53.
———. Quantifying and Controlling Catastrophic Risk. Oxford, UK: Academic Press, 2008.
———, ed. Quantifying Global Catastrophic Risks: One Day Workshop. Los Angeles: B. John Garrick Institute for the Risk Sciences, UCLA, 2018.
———. Reliability Analysis of Nuclear Power Plant Protective Systems, HN-190. Los Angeles: Holmes & Narver, 1967.
———. “Unified Systems Safety Analysis for Nuclear Power Plants.” PhD diss., University of California at Los Angeles, 1968.
Garrick, B. John, and Robert Christie. “Probabilistic Risk Assessment Practices for Nuclear Power Plants.” Safety Science 40 (2002): 177–201.
“GDR Wants to End Decade of Errors, Incompetence, Secrecy.” Nucleonics Week, January 18, 1990, 10.
Gekler, W. C. “Development and Application of a Reliability Data Collection Program in Nuclear Power Plants.” Transactions of the American Nuclear Society, Fifteenth Annual Meeting 12 (June 15–19, 1969): 170.
Gephart, Robert P., Jr., John Van Maanen, and Thomas Oberlechner. “Organizations and Risk in Late Modernity.” Organization Studies 30, no. 2 & 3 (2009): 141–55.
“Germans and Austrians Protest Against Temelin on Fukushima Anniversary.” CzechPosition.com, March 12, 2012. https://ceskapozice.lidovky.cz/tema/germans-and-austrians-protest-against-temelin-on-fukushima-anniversary.A120312_115734_pozice_60034.
Gertner, Jon. “Atomic Balm?” The New York Times Magazine, July 16, 2006, 6, 36.
Gilbert, Alexander, et al. “Cost Overruns and Financial Risk in the Construction of Nuclear Power Reactors: A Critical Appraisal.” Energy Policy 102 (2017): 644–49.
Gillette, Robert. “Nuclear Reactor Safety: A Skeleton at the Feast?” Science 172, no. 3986 (May 5, 1972): 918–19.
———. “Nuclear Reactor Safety: At the AEC the Way of the Dissenter is Hard.” Science 176, no. 4034 (May 5, 1972): 492–98.
———. “Nuclear Safety (III): Critics Charge Conflicts of Interest.” Science 177, no. 4053 (September 15, 1972): 970–75.
Gillingham, John. European Integration, 1950–2003: Superstate or New Market Economy? Cambridge, UK: Cambridge University Press, 2003.
Gimpel, Jeff. “Risk Assessment and Cost Benefit Act of 1995: Regulatory Reform and the Legislation of Science.” Journal of Legislation 23, no. 1 (1997): 61–91.
Glucroft, Douglas. “French See PRA Useful in Operating but Not in Licensing Nuclear Plants.” Inside NRC, July 12, 1982, 10.
Ghosh, S. Tina, and George E. Apostolakis. “Organizational Contributions to Nuclear Power Plant Safety.” Nuclear Engineering and Technology 37, no. 3 (June 2005): 207–20.
Gofman, John W., and Arthur R. Tamplin. “Low Dose Radiation, Chromosomes, and Cancer.” 1969 IEEE Nuclear Science Symposium, San Francisco, October 29, 1969. https://www.ratical.org/radiation/CNR/GT-Reports/GT-101-69.pdf.
Gomberg, Henry J. Report on the Possible Effects on the Surrounding Population of an Assumed Release of Fission Products into the Atmosphere from a 300 Megawatt Nuclear Reactor Located at Lagoona Beach, Michigan. Detroit, MI: Atomic Power Development Associates, Inc., May 1957.
Graham, John D. “Historical Perspective on Risk Assessment in the Federal Government.” Toxicology 102 (1995): 29–52.
———. “The Risk Not Reduced.” NYU Environmental Law Journal 3 (1994): 382–404.
Grabbe, Heather. “Europeanization Goes East: Power and Uncertainty in the EU Accession Process.” In The Politics of Europeanization, edited by Kevin Featherstone and Claudio M. Radaelli, 303–30. Oxford, UK: Oxford University Press, 2003.
Granger, Morgan M., Max Henrion, and Samuel C. Morris. Expert Judgments for Policy Analysis, BNL-51358. Upton, NY: Brookhaven National Laboratory, 1979.
Gronlund, Lisbeth, David Lochbaum, and Edwin Lyman. Nuclear Power in a Warming World: Assessing the Risks, Addressing the Challenges. Cambridge, MA: UCS, 2007.
Guzzo, Louis R. Is It True What They Say About Dixy? A Biography of Dixy Lee Ray. Mercer Island, WA: Writing Works, 1980.
Haas, Reinhard, Stephen Thomas, and Amela Ajanovic. “The Historical Development of the Costs of Nuclear Power.” In The Technological and Economic Future of Nuclear Power, edited by Reinhard Haas, Lutz Mez, and Amela Ajanovic, 97–115. Wiesbaden: Springer, 2019.
Haber, Sonja B., and Michael T. Barriere. Development of a Regulatory Organizational and Management Review Method, AECB Project No. 2.341.2. Atomic Energy Control Board, January 20, 1998.
Haber, Sonja B., John N. O’Brien, and Thomas Ryan. “Model Development for the Determination of the Influence of Management on Plant Risk.” International Conference on Human Factors and Power Plants, Monterey, CA, July 1988.
Haber, Sonja B., et al. “The Nuclear Organization and Management Analysis Concept Methodology: Four Years Later.” IEEE Fifth Conference on Human Factors and Power Plants, Monterey, CA, June 7–11, 1992.
Hake, G. “The Relation of Reactor Design to Siting and Containment in Canada.” In Containment and Siting of Nuclear Power Plants: Proceedings of a Symposium on the Containment and Siting of Nuclear Power Plants Held by the International Atomic Energy Agency in Vienna, 3–7 April 1967, 77–92. Vienna: International Atomic Energy Agency, 1967.
Hamblin, Jacob Darwin. “Fukushima and the Motifs of Nuclear History.” Environmental History 17 (April 2012): 285–99.
Hanauer, S. H., and C. S. Walker. Design Principles of Reactor Protection Instrument Systems, ORNL-NSIC-51. Oak Ridge, TN: Oak Ridge National Laboratory, September 1968.
Hardy, E. P., P. W. Krey, and H. L. Volchok. Global Inventory and Distribution of Pu-238 from SNAP-9A, HASL-250. New York: AEC, March 1, 1972.
Hart, Kathleen. “Bulgaria Seeks Joint Ventures, Rejects Nuclear Safety Studies.” Nucleonics Week, November 17, 1994, 15.
———. “Commissioner de Planque Charges NRC Moving Toward ‘Over-Regulation.’” Inside NRC, March 6, 1995, 1.
———. “NRC May Expand Purview to Financial Aspects of Nuclear Competition.” Inside NRC, June 10, 1996, 6.
Hays, M. R. “The Evolution of Probabilistic Risk Assessment in the Nuclear Industry.” Transactions of the Institute of Chemical Engineers 77, Part B (May 1999): 117–42.
Headington, W. L., M. E. Stewart, and J. O. Zane. Fault Tree Analysis of the PBF Transient Rod Drive System. Idaho Falls, ID: Phillips Petroleum, November 1968.
Hecht, Gabrielle. The Radiance of France: Nuclear Power and National Identity after World War II. Cambridge, MA: MIT Press, 1998.
Hecht, Gabrielle, and Paul N. Edwards. The Technopolitics of the Cold War: Toward a Transregional Perspective. Washington, DC: American Historical Association, 2007.
Hellstrom, Tomas. “Science-Policy Interaction, Constitutive Policy Making and Risk: The Development of the U.S. Nuclear Safety Study WASH-1400.” Science Studies 11, no. 2 (1998): 3–19.
Henrion, Max, and Baruch Fischhoff. “Assessing Uncertainty in Physical Constants.” American Journal of Physics 54, no. 9 (September 1986): 791–98.
Henry, Tom. “Ex-Engineer Found Guilty of Concealing Davis-Besse Dangers.” Toledo Blade, August 27, 2008. http://www.toledoblade.com/local/2008/08/27/Ex-engineer-found-guilty-of-concealing-Davis-Besse-dangers.print.
Hibbs, Mark. “As Greifswald Tension Mounts, Workers Want Dissenter Sacked.” Nucleonics Week, May 31, 1990, 7.
———. “East Europe’s Nuclear Share Impedes Plant Upgrades, BMU Says.” Nucleonics Week, August 1, 1991, 8.
———. “G-7 Leaders to Okay $700 Million in Emergency Nuclear Assistance.” Nucleonics Week, July 9, 1992, 1.
———. “GDR Orders Greifswald Shut; GRS Hits Operational Safety.” Nucleonics Week, June 7, 1990, 2.
———. “German VVERs to be Closed and Siemens PWRs Built Instead.” Nucleonics Week, February 28, 1991, 1.
———. “Greifswald Restart Not Likely, German Licensing Official Says.” Nucleonics Week, November 15, 1990, 11.
———. “GRS Study Will Improve VVERs, But Not in Germany.” Nucleonics Week, September 9, 1991, 1.
———. “Probabilistic Assessment for VVER Designs Could Be Abused, GRS Says.” Inside NRC, October 18, 1993, 6.
———. “Toepfer Says G-7 Aid Linked to Commitment to Shut Unsafe Units.” Nucleonics Week, July 16, 1992, 2.
. “Industry Looking for Ways to Hasten Use of Accident Tolerant Fuel.” Nucleonics Week, September 14, 2017, 1. . “ ‘Nuclear Renaissance is Here,’ Klein Says After TVA Submits COL Application.” Nucleonics Week, November 1, 2007, 1. , William Freebairn, and Steven Dolley. “IG Report Documents Angry Outbursts by NRC Chairman.” Nucleonics Week, July 5, 2012, 1. Holl, Jack M. Argonne National Laboratory, 1946–96. Urbana, IL: University of Illinois Press, 1997. Holmes & Narver, Inc. Proposal for the U.S. Atomic Energy Commission: A Practical Method for Analyzing Reactor Safety System Reliability. Los Angeles: Holmes and Narver, September 23, 1963. “House Committee Probes Nuclear Power in Blue Ribbon Debate.” Nuclear Industry 22 (April 1975): 3–6. “How Safe are Nuclear Reactors?” New York Times, January 24, 1979, A22. Howard, Ronald A. “On Fates Comparable to Death.” Management Science 30, no. 4 (April 1984): 407–22. Howlett, Darryl A. EURATOM and Nuclear Safeguards. New York: St. Martin’s Press, 1990. Hughes, H. A., and R. M. Horsley. “Application of the Safety Limitation Against Depressurization to the Calder and Chapelcross Reactors.” Journal of the British Nuclear Energy Society 3, no. 3 (July 1964): 198–203. Hultman, Nathan, and Jonathan Koomey. “Three Mile Island: The Driver of US Nuclear Power’s Decline?” Bulletin of the Atomic Scientists 69, no. 3 (2013): 63–70. Hultman, Nathan E., Jonathan G. Koomey, and Daniel M. Kammen. “What History Can Teach Us About the Future Costs of U.S. Nuclear Power.” Environmental Science and Technology, April 1, 2007, 2089–93. Iansiti, E., and L. Konstantinov. “Nuclear Safety Standards (NUSS) Programme—Progress Report.” IAEA Bulletin 20, no. 5, (1978): 46–55. “Indignant Intervenors Propose Substitute Safety Goals.” Inside NRC, July 27, 1981, 2–3. “Industry Group Calls Regulation Unfair and Detrimental to Safety.” Nucleonics Week, January 31, 1991, 3. International Atomic Energy Agency (IAEA). Annual Report 2002. Vienna: IAEA, 2003. . ASCOT Guidelines, IAEA-TECDOC-743. Vienna: IAEA, 1994. . ASCOT Guidelines, Revised 1996 Edition. IAEA-TECDOC-860. Vienna: IAEA, 1996. . Convention on Nuclear Safety, INFCIRC/449. July 5, 1994. http:// www.iaea.org/Publications/Documents/Infcircs/Others/inf449.shtml. . Final Report of the Programme on the Safety of WWER and RBMK Nuclear Power Plants. Vienna: IAEA, 1999. . The Fukushima Daiichi Accident: Report by the Director General, GOV/2015/26. Vienna: IAEA, May 14, 2015. . Generic Initiating Events for PSA for WWER Reactors. Vienna: IAEA, June 1994.
. Guidelines for Mutual Emergency Assistance Arrangements in Connection with a Nuclear Accident or Radiological Emergency, INFCIRC/310. Vienna: IAEA, January 1984. . Guidelines on Reportable Events, Integrated Planning and Information Exchange in a Transboundary Release of Radioactive Materials, INFCIRC /321. Vienna: IAEA, January 1985. . IAEA International Peer Review Mission On Mid-And-Long-Term Roadmap Towards the Decommissioning Of TEPCO’s Fukushima Daiichi Nuclear Power Station Units 1–4. Vienna: IAEA, May 13, 2015. https:// www.iaea.org/sites/default/files/missionreport130515.pdf. . International Atomic Energy Agency: Personal Reflections. Vienna: IAEA, 1997. . The International Nuclear Events and Radiological Scale User’s Manual. Vienna, IAEA, 2012. . Press Release: IAEA Experts Review Safety of Kozloduy Units 3 and 4, July 9, 2002, http://www.iaea.org/NewsCenter/PressReleases/2002/prn0210 .shtml. . Procedures for Conducting Probabilistic Safety Assessments of Nuclear Plants (Level 1), Safety Series, 50-P-4. Vienna: IAEA, 1992. . Ranking of Safety Issues for WWER-440 Model 230 Nuclear Power Plants: Report of the IAEA Extrabudgetary Programme on the Safety of WWER-440 Model 230 Nuclear Power Plants, IAEA-TECDOC-640. Vienna: IAEA, February 1992. . Review of Probabilistic Safety Assessments by Regulatory Bodies, Safety Report Series no. 25. Vienna: IAEA, 2002. . Report of the Expert Mission to Review the Results of Safety Upgrading Activities of the Kozloduy Nuclear Power Plant Units 3&4. Bulgaria, IAEA-TCR-001142. Vienna: IAEA, June 24–28, 2002. . The Safety of Nuclear Power Plants in Central and Eastern Europe: An Overview and Major Findings of the IAEA Project on the Safety of WWER 440 Model 230 Nuclear Power Plants. Vienna: IAEA, February 14, 1992. . The Safety of WWER-440 Model 230 Nuclear Power Plants: An Overview and Major Findings of the IAEA Extrabudgetary Programme of the Safety of WWER-440 Model 230 Nuclear Power Plants. Vienna: IAEA, 1992. . Strength Analyses of the Bubbler Condenser Structure of WWER-440 Model 213 Nuclear Power Plants. Vienna: IAEA, June 1995. . Use of PSA Level 2 Analysis for Improving Containment Performance. Vienna: IAEA, March 1988. International Nuclear Safety Advisory Group. The Chernobyl Accident: Updating INSAG-1: INSAG Series 7. Vienna: IAEA, 1992. . The Safety of Nuclear Power, Safety Series No. 75-INSAG-5. Vienna: IAEA, 1992. . A Common Basis for Judging the Safety of Nuclear Power Plants Built to Earlier Standards, INSAG-8. Vienna: IAEA, 1995. “International Safety Review of WWER-440/230 Nuclear Power Plants.” IAEA Bulletin 34, no. 2 (1992): 24–31.
Isted, Claire-Louise. “Economics of Nuclear Power May Deteriorate PostFukushima: IEA.” Nucleonics Week, November 10, 2011, 1. Ivanov, Kalin. “Legitimate Conditionality? The European Union and Nuclear Power Safety in Central and Eastern Europe.” International Politics 45 (2008): 146–67. Jacobs, I. M. “Safety Systems for Nuclear Power Reactors.” Transactions of the American Institute of Electrical Engineers Pt. I 76 (1957): 670–73. “Jaczko Advances Closure of Yucca Mt. Licensing Work.” Nucleonics Week, October 7, 2010, 1. Jaczko, Gregory B. Confessions of a Rogue Nuclear Regulator. New York: Simon & Schuster, 2019. “The Japanese Mayor Who was Laughed At for Building a Huge Sea Wall—Until His Village Was Left Almost Untouched by Tsunami.” Daily Mail, May 13, 2011. http://www.dailymail.co.uk/news/article-1386978/The-Japanese-mayorlaughed-building-huge-sea-wall—village-left-untouched-tsunami.html. Jasanoff, Sheila. The Fifth Branch: Science Advisors as Policymakers. Cambridge, MA: Harvard University Press, 1990. . Learning from Disaster: Risk Management after Bhopal. Philadelphia, PA: University of Pennsylvania Press, 1994. “JCAE Safety Hearings: Rasmussen’s Debut.” Nuclear Industry 20 (October 1973): 11–14. “JCAE Safety Hearings: Rasmussen Study, ECCS Standardization Are First Day Topics.” Nuclear Industry 21 (January 1974): 22–23. Jehlicka, Petr, and Andrew Tickle. “Environmental Implications of Eastern Enlargement: The End of Progressive EU Environmental Policy?” Environmental Politics 13, no. 1 (Spring 2004): 77–95. Johnston, Sean F. “Making the Invisible Engineer Visible: DuPont and the Recognition of Nuclear Expertise.” Technology and Culture 52, no. 3 (July 2011): 548–73. Joksimovich, V., and D. D. Orvis. “Safety Culture in Nuclear Installations, Risk Culture: An Outgrowth of Safety Culture.” In Proceedings of the International Topical Meeting of Safety Culture in Nuclear Installations, 24 to 28 April 1995, Vienna Austria, edited by A. Carnino and G. Weimann, 291– 300. Vienna: American Nuclear Society-Austria Local Section, 1995. Jones, Cynthia Gillian. “A Review of the History of U.S. Radiation Protection Regulations, Recommendations, and Standards.” Health Physics: The Radiation Safety Journal 88, no. 2 (February 2005): 105–24. Jordan, Brian. “Proposed Maintenance Policy Would Formalize NRCNUMARC Agreement.” Inside NRC, February 16, 1987, 9. . “Zech Unswayed by Claim that Maintenance Rule Could Kill Nuclear Option.” Inside NRC, August 15, 1988, 7. Kadak, Andrew C., and Toshihiro Matsuo. “The Nuclear Industry Transition to Risk-Informed Regulation and Operation in the United States.” Reliability Engineering and System Safety 92 (2007): 609–18. Kahn, Joseph. “Cheney Promotes Increasing Supply as Energy Policy.” New York Times, May 1, 2001, A1.
Kaplan, S., and B. J. Garrick. “On the Quantitative Definition of Risk.” Risk Analysis 1, no. 1 (March 1981): 11–27. Keller, William, and Mohammad Moddarres. “A Historical Overview of Probabilistic Risk Assessment Development and its Uses in the Nuclear Power Industry: A Tribute to the Late Professor Norman Carl Rasmussen.” Reliability Engineering and System Safety 89 (2005): 271–85. Kellermann, O., and H. G. Seipel. “Analysis of the Improvement in Safety Obtained by a Containment and by Other Safety Devices for Water-Cooled Reactors.” In Containment and Siting of Nuclear Power Plants: Proceedings of a Symposium, 3–7 April 1967, 403–18. Vienna: IAEA, 1967. Kemeny, John G. Report of the President’s Commission on the Accident at TMI, Vol. 1: The Need for Change: The Legacy of TMI. Washington, DC: Government Printing Office, 1979. Kerber, Ross. “Nuclear Plants Face Huge Costs to Fix Problems.” The Wall Street Journal, June 18, 1997, B1 and B3. Kletz, T. A. “The Origin and History of Loss Prevention.” Transactions of the Institute of Chemical Engineers 77, Part B (May 1999): 109–16. Knapik, Michael. “Commission Agrees to Consider Redefinition of LargeBreak LOCA.” Inside NRC, April 7, 2003, 1. . “Final 50.69 Rule Released, But Some Implementation Issues Remain.” Inside NRC, November 29, 2004, 1. . “IAEA Meeting Raises Issues About Increasing Regulatory Use of PSAs.” Inside NRC, September 24, 2001, 13. . “Industry Seeing Huge Benefits, Presses for Redefining Large-Break LOCA.” Inside NRC, January 15, 2001, 1. . “NRC Moving to Eliminate Some Rules on Hydrogen Recombiners, Monitors.” Nucleonics Week, May 23, 2002, 5. Knowles, Scott Gabriel. “Learning from Disaster? The History of Technology and the Future of Disaster Research.” Technology and Culture 55, no. 4 (October 2014): 773–84. “Kouts Moves to Kill Research-by-Timetable Methods.” Nucleonics Week, October 10, 1974, 3. Kramer, David. “US Nuclear Industry Fights for Survival.” Physics Today 71 (2018): 26–30. Kramer, Joel J., and Sonja B. Haber. “Organizational Performance Research at the U.S. Nuclear Regulatory Commission—Past, Present, Future.” In Proceedings of the International Topical Meeting of Safety Culture in Nuclear Installations, 24 to 28 April 1995, Vienna Austria, edited by A. Carnino and G. Weimann, 385–93. Vienna, Austria: American Nuclear Society Austria Local Section, 1995. Krige, John. American Hegemony and the Postwar Reconstruction of Science in Europe. Cambridge, MA: MIT Press, 2006. . “The Peaceful Atom as Political Weapon: EURATOM and American Foreign Policy in the Late 1950s.” Historical Studies in the Natural Sciences 38, no. 1 (Winter 2008): 5–44. . Sharing Knowledge, Shaping Europe: U.S. Technological Collaboration and Nonproliferation. Cambridge, MA: MIT Press, 2016.
. “Technodiplomacy: A Concept and Its Application to U.S.-France Nuclear Weapons Cooperation in the Nixon-Kissinger Era.” Federal History Journal 12 (2020): 99–116. Krige, John, Arturo Russo, and Lorenza Sebaesta. “A Brief History of the European Space Agency.” In A History of European Scientific and Technological Cooperation, edited by. John Krige and Lucy Guzzetti, 195–220. Luxembourg: Office for Official Publications of the European Communities, 1997. Kuzmack, Arnold M., and Robert E. McGaughy. Quantitative Risk Assessment for Community Exposure to Vinyl Chloride. Washington, DC: EPA, December 5, 1975. Lambert, Howard E. Fault Trees for Decision Making in Systems Analysis. Livermore, CA: Lawrence Livermore Laboratory, October 9, 1975. LaPorte, Todd R., and Paula M. Consolini. “‘Working in Practice but Not in Theory’: Theoretical Challenges of ‘High-Reliability Organizations.” Journal of Public Administration Research and Theory 1, no. 1 (January 1991): 19–48. LaPorte, Todd R., and Craig W. Thomas. “Regulatory Compliance and the Ethos of Quality Enhancement: Surprises in Nuclear Power Plant Operations.” Journal of Public Administration Research and Theory 5, no. 1 (January 1995): 109–37. LaPorte, Todd R., Karlene Roberts, and Gene I. Rochlin. “Aircraft Carrier Operations at Sea: The Challenges of High Reliability Performance. Final Report.” Berkeley: University of California, Berkeley, July 15, 1988. Lapp, Ralph E. “Nuclear Power Safety—1973: Weighing Benefits and Risks.” New Republic, April 28, 1973, 17–19. Laurence, George C. “Reactor Safety in Canada.” Nucleonics 18, no. 19 (October 1960): 73. Leach, L. P., L. J. Ybarrondo, E. F. Hicken, and K. Tasaka. “A Comparison of LOCA Safety Analysis in the USA, FRG, and Japan.” In Proceedings of the International Meeting on Thermal Nuclear Reactor Safety, Chicago, IL, August 29-September 2, 1982, NUREG/CP-0027, 1475–84. Washington, DC: NRC, February 1983. Lellouche, Gerald. “ATWS—Impact of a Nonproblem.” EPRI Journal, March 1977, 37–41. Levine, Saul, and Fred Stetson. “How PRA is Being Used in the USA.” Nuclear Engineering International 27 (June 1982): 35–38. Leveson, Nancy. “Technical and Managerial Factors in the NASA Challenger and Columbia Losses: Looking Forward to the Future.” In Controversies in Science and Technology, Volume 2, edited by Daniel Lee Kleinman, Karen A. Cloud-Hansen, Christina Matta, and Jo Handelsman, 237–70. New Rochelle, NY: Mary Ann Liebert Press, 2008. Levy, Solomon. “A Systems Approach to Containment Design in Nuclear Power Plants.” In Containment and Siting of Nuclear Power Plants: Proceedings of a Symposium, Vienna 3–7 April 1967, 227–42. Vienna: IAEA, 1967. Lewis, Harold W. “The Safety of Fission Reactors.” Scientific American 242, no. 3 (March 1980): 53–65. “The Lewis Review of Rasmussen’s Report.” Nuclear Industry 26 (June 1979): 16–17.
“License Renewal Reviews Too Slow, Say Senators.” Inside NRC, February 14, 2011, 1–2. Lichtenstein, Sarah, Baruch Fischhoff, and Lawrence D. Phillips. Calibration of Probabilities: The State of the Art to 1980. Eugene, OR: Decision Research, June 1981. Lindackers, K-H., and W. Stoebel, “Probability Analysis Applied to LWRs, Part I: Loss of Coolant Accidents.” In Symposium on Safety and Siting, 28 March 1969, edited by A. D. Bray, 79–85. London: Institution of Civil Engineers, 1969. Lindeman, Eric. “NRC Revises Estimate of Probability of Severe Accident in Next 20 Years.” Inside NRC, May 26, 1986, 9. Linden, Henry R. “The Revolution Continues.” The Electricity Journal 8, no. 10 (December 1995): 54–56. Lipset, Seymour Martin, and William Schneider. The Confidence Gap: Business, Labor and Government in the Public Mind. Revised edition. New York: Free Press, 1987. Lochbaum, David. “Anticipated Transient Without Scram.” The Union of Concerned Scientists, All Things Nuclear Blog, August 16, 2018, https:// allthingsnuclear.org/dlochbaum/anticipated-transient-without-scram. . “The NRC’s Maintenance Rule.” The Union of Concerned Scientists, All Things Nuclear Blog, September 20, 2016, https://allthingsnuclear.org /dlochbaum/nrcs-nuclear-maintenance-rule. . Nuclear Plant Risk Studies: Failing the Grade. Cambridge, MA: UCS Publications, 2000. Loewen, Eric P. “To Understand Fukushima, We Must Remember Our Past: The History of Probabilistic Risk Assessment.” Address to Sociedad Nuclear Mexicana Conference, August 8, 2011, http://www.ans.org/about/officers /docs/8-aug-11_mexico_address_fs.pdf. “Lowenstein Urges Split-off of AEC’s Regulatory Arm.” Nucleonics Week, October 19, 1971, 1. Lukaszewski, James E. “Nuclear Industry has One More Chance.” PR News 57, no. 28 (July 23, 2001). Lyczkowski, Robert W. “The History of Multiphase Computational Fluid Dynamics.” Industrial Engineering and Chemistry Research 49, no. 11 (2010): 5029–36. Lyons, Richard D. “Aide Who Criticized A.E.C. Over Safety Shifted by Agency.” New York Times, February 6, 1972, 42. Lyth, Peter, and Helmuth Trischler. Wiring Prometheus: Globalisation, History and Technology. Aarhus, Denmark: Aarhus University Press, 2004. MacLachlan, Ann. “Approaches to Safety in EC Countries Not Too Different, Official Says.” Inside NRC, October 10, 1988, 8. . “Austria Left Empty-Handed at Copenhagen on Temelin Safety.” Nucleonics Week, December 19, 2002, 7. . “Austria Threatens to Veto Czech EU Entry Over Temelin Startup.” Nucleonics Week, August 31, 2000, 1. . “Austrian Institute Says Temelin by Far Not Europe’s Riskiest Unit.” Nucleonics Week, November 29, 2001, 8.
———. “Czechs, Austrians Agree on Process for Temelin EIA.” Nucleonics Week, February 15, 2001, 6.
———. “Despite Endorsing EU Standards, EC Won’t Amend Kozloduy Closure.” Nucleonics Week, November 14, 2002, 6.
———. “Eastern Europeans Vie for Safety Kudos from International Experts.” Nucleonics Week, June 24, 1999, 11.
———. “EC Aid to Bulgaria Proceeding but Coordination is a Problem.” Nucleonics Week, September 19, 1991, 1.
———. “EDF Threatens to Leave Kozloduy if Sofia Doesn’t Close Reactors.” Nucleonics Week, December 9, 1993, 10.
———. “EU Utilities Fear New WENRA Rules Could Impose Costly Backfitting.” Nucleonics Week, February 16, 2006, 10.
———. “European Regulators, Industry Embark on ‘Harmonization’ of Regs.” Inside NRC, December 26, 2005, 7.
———. “Europeans Wish NRC Luck in Brave, New, Risk-Informed World.” Inside NRC, August 2, 1999, 1.
———. “Experts Find Kozloduy VVERs Are No Longer First Generation Units.” Nucleonics Week, July 11, 2002, 12.
———. “First Upgraded Kozloduy Unit Restarts with Western Blessing.” Nucleonics Week, December 31, 1992, 2.
———. “French Government Approves Bill Creating Independent Regulator.” Nucleonics Week, February 23, 2006, 12.
———. “French Minister Sets Sights on Regulatory Agency Modeled on NRC.” Inside NRC, July 21, 1997, 12.
———. “French Rethinking Safety to Keep Public Support for Nuclear Option.” Nucleonics Week, May 26, 1988, 7.
———. “G-7 Aid Consensus Sees No Future for Old VVER-440 or RBMK Units.” Nucleonics Week, May 28, 1992, 1.
———. “G-24 Nations Start Coordinating Aid to Eastern European Plants.” Nucleonics Week, September 24, 1992, 16.
———. “Japan to Strengthen Severe Accident Regulations.” Inside NRC, March 26, 2012, 8–10.
———. “Kozloduy-3/4 Face Forced Closure December 31 After Best Year Ever.” Nucleonics Week, December 21, 2006, 1.
———. “Legasov’s Suicide: A Mute Reproach to Soviet Industry.” Nucleonics Week, June 9, 1988, 4.
———. “Nuclear Renaissance is Hostage to Public Opinion, WANO Told.” Nucleonics Week, October 13, 2005, 4.
———. “Official Says Bulgaria Won’t Shut Four Kozloduy Units for EBRD Aid.” Nucleonics Week, June 10, 1993, 1.
———. “Operators Say PSAs Prove That Soviet-Design Plants Now Safer.” Nucleonics Week, June 24, 1999, 1.
———. “Post-Fukushima Upgrades Could Cost Eur25 Billion: Draft EC Report.” Nucleonics Week, October 4, 2012, 3–4.
———. “Regulators Say Kozloduy Needs Continued Western Aid for Safety.” Nucleonics Week, May 13, 1993, 1.
———. “SUJB, CEZ Agree Temelin Issues Won’t Block Czech EU Entry.” Nucleonics Week, May 9, 2002, 1.
———. “Watkins Says Soviets Have Not Learned Nuclear Safety Lessons.” Nucleonics Week, September 28, 1989, 1.
———. “WENRA Working on Reference Requirements for PSAs.” Inside NRC, December 15, 2003, 10.
———. “World Nuclear Safety Regime is Debated Hotly in Vienna.” Nucleonics Week, September 5, 1991, 1.
MacLachlan, Ann, Steven Dolley, William Freebairn, and Tom Harrison. “One Year After Fukushima Accident, Industry Remains Unsettled.” Nucleonics Week, March 8, 2012, 1.
Magee, John F. “Decision Trees for Decision Making.” Harvard Business Review 42, no. 4 (July/Aug. 1964): 126–38.
Maloney, Michael T., Robert E. McCormick, and Raymond E. Sauer. “On Stranded Cost Recovery in the Deregulation of the U.S. Electric Power Market.” Natural Resources Journal 37, no. 1 (Winter 1997): 59–123.
Maniokas, Klaudijus, and Ramunas Staniouis. “Negotiations on Decommissioning Ignalina Nuclear Power Plant.” In Lithuania’s Road to the European Union: Unification of Europe and Lithuania’s EU Accession Negotiation, edited by Klaudijus Maniokas, Ramunas Vilpisauskas, and Darius Zeruolis, 297–349. Vilnius: Eugrimas, 2005.
Margolis, S. G., and J. A. Redfield. FLASH: A Program for Digital Simulation of the Loss of Coolant Accident, WAPD-TM-534. Pittsburgh, PA: Westinghouse Electric Corp., May 1966.
Marshall, Eliot. “Academy Panel Faults NASA’s Safety Analysis.” Science 239, no. 4845 (March 11, 1988): 1233.
Martin, R. P. “Science-Based Nuclear Design and Safety in the Emerging Age of Data Based Analytics.” Nuclear Engineering and Design 354 (2019): 1–7.
Mattson, R., et al. “Concepts, Problems, and Issues in Developing Safety Goals and Objectives for Commercial Nuclear Power.” Nuclear Safety 21, no. 6 (November–December 1980): 706.
Maugh, Thomas H. “Chemical Carcinogens: How Dangerous Are Low Doses?” Science 202, no. 4363 (October 6, 1978): 37–41.
Mazur, Allan. “Bias in Risk-Benefit Analysis.” Technology in Society 7 (1985): 25–30.
Mazuzan, George T., and J. Samuel Walker. Controlling the Atom: The Beginning of Nuclear Regulation 1946–1962. Berkeley: University of California Press, 1984.
McCray, W. Patrick. “‘Globalization with Hardware’: ITER’s Fusion of Technology, Policy, and Politics.” History and Technology 26, no. 4 (December 2010): 283–312.
McCullough, C. Rogers, Mark M. Mills, and Edward Teller. “The Safety of Nuclear Reactors.” In Proceedings of the International Conference on the Peaceful Uses of Atomic Energy, August 8–20, 1955, Volume 13, Legal, Administrative, Health, and Safety Aspects of Large Scale Use of Nuclear Energy, 79–87. New York: United Nations, 1956.
McWethy, L. M., P. R. Pluta, and D. B. Sherer. Core Design Development Needs in Relation to Fuel Failure Propagation, Sodium Boiling and Clad/Fuel-Sodium Thermal Interaction, GEAP-13639-1. Washington, DC: AEC, October 1970.
“Memorandum Between the European Commission and the Bulgarian Government.” Bulgarian EuroBulletin, November 29, 1999.
“Methods Refined Probabilistic Studies Earn Increased Acceptance with Regulators.” Nuclear Industry 27 (April 1980): 14–15.
Miller, Carolyn R. “The Presumptions of Expertise: The Role of Ethos in Risk Analysis.” Configurations 11, no. 2 (Spring 2003): 163–202.
Miller, Daniel Paul. “Maintaining the Atom: U.S. Nuclear Power Plant Life and the 80-Year Maintenance Regulation Regime.” PhD diss., Virginia Polytechnic Institute and State University, 2019.
Mills, M. M. A Study of Reactor Hazards, NAA-SR-31 (Del.). Oak Ridge, TN: AEC, 1949.
Moray, Neville P., and Beverley M. Huey. Human Factors Research and Nuclear Safety. Washington, DC: National Academy Press, 1988.
Morone, Joseph G., and Edward J. Woodhouse. The Demise of Nuclear Energy? Lessons for Democratic Control of Technology. New Haven, CT: Yale University Press, 1989.
Morrow, Stephanie L., G. Kenneth Koves, and Valerie E. Barnes. “Exploring the Relationship Between Safety Culture and Safety Performance in U.S. Nuclear Power Operations.” Safety Science 69 (2014): 37–47.
Mosleh, Ali. “PRA: A Perspective on Strengths, Current Limitations, and Possible Improvements.” Nuclear Engineering and Technology 46, no. 1 (February 2014): 1–10.
Mulvihill, R. J., et al. Analysis of United States Power Reactor Accident Probability, PRC R-695. Los Angeles, CA: Planning Research Corporation, March 21, 1965.
“Muntzing Says He’ll Need Bigger Staff to Cope with Licensing Tangle.” Nucleonics Week, November 11, 1971, 5–6.
Murley, Thomas E. “Toward a New Safety Contract.” Nuclear News 38, no. 9 (July 1995): 22–23.
Nadel, Mark V. Analysis of Processes Used in Evaluating Utility Management and Organization for an NRC Operating License, BHARC-400/82/018. Seattle, WA: Battelle Human Affairs Research Centers, June 1982.
Nash, Linda. “From Safety to Risk: The Cold War Contexts of American Environmental Policy.” Journal of Policy History 29, no. 1 (2017): 1–33.
National Aeronautics and Space Administration. NASA Accident Precursor Analysis Handbook, NASA/SP-2011-3423, Version 1.0. Washington, DC: NASA, December 2011.
———. NASA Risk Management Handbook, NASA/SP-2011-3422, Version 1.0. Washington, DC: NASA, November 2011.
———. NASA System Safety Handbook: Volume 1, System Safety Framework and Concepts for Implementation, NASA/SP-2010-580. Washington, DC: NASA, November 2011.
———. NASA System Safety Handbook: Volume 2, System Safety Concepts, Guidelines, and Implementation Examples, NASA/SP-2014-612. Washington, DC: NASA, November 2014.
———. Probabilistic Risk Assessment Procedures Guide for NASA Managers and Practitioners, 2nd ed., NASA/SP-2011-3421. Washington, DC: NASA, December 2011.
———. Report of the Presidential Commission on the Space Shuttle Challenger Accident. Washington, DC: NASA, June 6, 1986.
National Research Council. Department of Homeland Security Bioterrorism Risk Assessment: A Call for Change. Washington, DC: National Academies Press, 2008.
———. Lessons Learned from the Fukushima Nuclear Accident for Improving Safety of U.S. Nuclear Plants. Washington, DC: National Academies Press, 2014.
National Research Council, Committee on the Institutional Means for Assessment of Risks to Public Health, Commission on Life Sciences. Risk Assessment in the Federal Government: Managing the Process. Washington, DC: National Academy Press, 1983.
Navigant Consulting. Assessment of the Nuclear Power Industry—Final Report. Burlington, MA: Navigant Consulting, June 2013.
“Nearly Completed Nuclear Plant Will be Converted to Burn Coal.” New York Times, January 22, 1984, 1.
“NEI Positively Comments on New Reactor Oversight Process.” Inside NRC, May 24, 1999, 16.
Nelson, Jennifer. “NRC Scores Commonwealth Edison for Operating Problems at Zion.” Nucleonics Week, September 13, 1990, 8.
“A New View of Safety: Reassuring Rasmussen Report.” Nuclear Industry 21 (August 1974): 13–15.
Niehaus, F. “Use of Probabilistic Safety Assessment (PSA) for Nuclear Installations.” Safety Science 40 (2002): 153–76.
“No Holidays for Trouble.” Nuclear Industry 18 (December 1971): 6–8.
“NRC Budget Provides a Big Hike for Waste Management, Nothing for Breeder.” Nucleonics Week, January 22, 1981, 8.
“NRC Drops Hated ‘Watch List.’” Nuclear Engineering International, May 28, 1999, http://www.neimagazine.com/news/newsnrc-drops-hated-watch-list.
“NRC Focusing on Two Qualitative, Three Quantitative Goals.” Inside NRC, July 27, 1981, 1–2.
“NRC Is Moving Forward with Plans to Issue a Maintenance Rule.” Inside NRC, August 29, 1988, 10.
“NRC Proposed Interagency Task Force to Resolve Disagreements with EPA.” Inside NRC, July 16, 1990, 5.
“NRC Readies Safety Goal Proposal for Fireworks in Harper’s Ferry.” Inside NRC, July 13, 1981, 3–4.
“NRC Sets Procedures for New Reactor Hearings.” Inside NRC, February 28, 2011, 1–3.
“NRC Staff and ACRS Pose ‘Catch-22’ for Safety Goal Proposal.” Inside NRC, June 14, 1982, 5–6.
“NRC Union Demands Potty Parity for Rank-and-File Employees.” Inside NRC, April 26, 1999, 18–19.
Nuclear Energy Agency. Committee on the Safety of Nuclear Installations. Bubbler Condenser Related Research Work: Present Situation. Paris: Organization for Economic Co-operation and Development (OECD), February 6, 2001.
———. Committee on the Safety of Nuclear Installations, Special Expert Group on Human and Organisational Factors (SEGHOF). State-of-the-Art Report on Systematic Approaches to Safety Management, NEA/CSNI/R(2006)1. March 16, 2006, https://www.oecd-nea.org/nsd/docs/2006/csni-r2006-1.pdf.
———. Comparison of Probabilistic Seismic Hazard Analysis of Nuclear Power Plants in Areas with Different Levels of Seismic Activity, NEA/CSNI/R(2019)1. Paris: OECD, June 2019.
———. Risk Monitors: The State of the Art in Their Development and Use at Nuclear Power Plants, NEA/CSNI/R(2004)20. Paris: OECD, 2004.
———. Status of Practice for Level 3 Probabilistic Safety Assessments, NEA/CSNI/R(2018)1. Paris: OECD, December 19, 2018.
———. Summary Record of the Seventeenth (17th) Meeting of the Working Group on Risk Assessment (WGRISK), NEA/SEN/SIN/WGRISK(2016)1. Paris: OECD, May 19, 2016.
———. Use and Development of Probabilistic Safety Assessment: An Overview of the Situation at the End of 2010, NEA/CSNI/R(2012)11. Paris: OECD, December 2012.
———. Use and Development of Probabilistic Safety Assessment: An Overview of the Situation at the End of 2017, forthcoming.
Nuclear Energy Institute. 10 CFR 50.69 SSC Categorization Guideline, NEI 00-04. Washington, DC: NEI, July 2005.
———. Efficiency Bulletin: 17-09, Industrywide Coordinated Licensing of 10 CFR 50.69. March 23, 2017, https://www.nei.org/CorporateSite/media/filefolder/resources/delivering-nuclear-promise/2017/eb-17-09-50-69lar-process.pdf.
———. Enhancing Nuclear Plant Safety and Reliability through Risk-Based and Performance-Based Regulation, NEI 96-04. Washington, DC: NEI, May 1996.
———. “Making Nuclear Plant Safety Assessments Crystal Clear.” Insight 98, May 1998, 1–2.
———. Nuclear Energy: 2000 and Beyond—A Strategic Direction for Nuclear Energy in the 21st Century. Washington, DC: NEI, May 1998.
“Nuclear Industry Considers Obstacles to Future.” Megawatt Daily 6, no. 100 (May 24, 2001).
O’Brennan, John. The Eastern Enlargement of the European Union. New York: Routledge, 2006.
Okrent, David. Nuclear Reactor Safety: On the History of the Regulatory Process. Madison: University of Wisconsin Press, 1981.
———. On the History of the Evolution of Light Water Reactor Safety in the United States. Los Angeles: UCLA, 1978, NRC ADAMS ML090630275.
———. “The Safety Goals of the U.S. Nuclear Regulatory Commission.” Science 236, no. 4799 (April 17, 1987): 296–300.
———, and Chris Whipple. An Approach to Societal Risk Acceptance Criteria and Risk Management, UCLA-ENG-7746. Los Angeles: University of California, Los Angeles, June 1977.
O’Neill, John. “Nuclear Industry Mobilizing for Degraded Core Rulemaking: Asks Safety Goal First.” Nuclear Industry 27 (October 1980): 3–6.
Onishi, Norimitsu. “In Japan, Seawall Offered a False Sense of Security.” New York Times, March 31, 2011. https://www.nytimes.com/2011/04/02/world/asia/02wall.html.
Onishi, Norimitsu, and James Glanz. “Japanese Rules for Nuclear Plants Relied on Old Science.” New York Times, March 26, 2011, A1.
“Order Forecasts for ‘74 See Another Peak Year.” Nuclear Industry 21 (January 1974): 3–6.
O’Toole, Thomas. “A-Plants Face Delays After Failure in Tests.” Washington Post, May 26, 1971, A1, A5.
Otway, Harry J. The Application of Risk Allocation to Reactor Siting and Design, LA-4316. Los Alamos, NM: University of California, June 1970.
Oudenaren, John van. “The Limits of Conditionality: Nuclear Reactor Safety in Central and Eastern Europe, 1991–2001.” International Politics 38 (December 2001): 467–97.
“An Overview of the Nuclear Safety Standards (NUSS) Programme.” IAEA Bulletin 21, no. 2/3 (1979): 13–17.
Owen, D. L., J. E. Matheson, and R. A. Howard. “The Value of Life and Nuclear Design.” In Readings on the Principles and Applications of Decision Analysis, edited by Ronald A. Howard and James E. Matheson, 2:507–19. Palo Alto, CA: Strategic Decisions Group, 1984.
“Palladino Says Proposed Goals Reflect NRC Trend Toward PRA Use.” Inside NRC, April 19, 1982, 7.
Parker, H. M., and J. W. Healy. “Environmental Effects of a Major Reactor Disaster, P/482.” In Legal, Administrative, Health and Safety Aspects of Large-Scale Use of Nuclear Energy, 106–9. Vol. 13 of Proceedings of the International Conference on the Peaceful Uses of Atomic Energy, Geneva, Aug. 8 to 20, 1955. New York: United Nations, 1956.
Pate-Cornell, Elisabeth, and Robin Dillon. “Probabilistic Risk Analysis for the NASA Space Shuttle: A Brief History and Current Work.” Reliability Engineering and System Safety 74 (2001): 345–52.
Perera, J., and J. Holsomback. “Use of Probabilistic Risk Assessments for the Space Station Program.” 2004 IEEE Aerospace Conference Proceedings, 6–13 March 2004, Big Sky, MT, vol. 1. https://ieeexplore.ieee.org/document/1367633.
Perkins, John H. “Development of Risk Assessment for Nuclear Power: Insights from History.” Journal of Environmental Studies and Sciences 4 (December 2014): 273–87.
Perrow, Charles. “Fukushima and the Inevitability of Accidents.” Bulletin of the Atomic Scientists 67, no. 6 (2011): 44–52.
———. Normal Accidents: Living with High-Risk Technologies. Princeton, NJ: Princeton University Press, 1999.
———. “Not Risk but Power.” Contemporary Sociology 11, no. 3 (May 1982): 298–300.
———. “Risky Systems: The Habit of Courting Disaster.” The Nation, October 11, 1986, 347–56.
Pidgeon, Nick. “Safety Culture: Key Theoretical Issues.” Work & Stress 12, no. 3 (1998): 202–16.
Pinkus, Rosa Lynn B., et al. Engineering Ethics: Balancing Cost, Schedule, and Risk—Lessons Learned from the Space Shuttle. Cambridge, UK: Cambridge University Press, 1997.
Pollard, Robert D., ed. The Nugget File. Cambridge, MA: Union of Concerned Scientists, 1979.
Pooley, Eric. “Nuclear Warriors.” Time, March 4, 1996, 56.
Pope, Daniel. “‘We Can Wait. We Should Wait.’ Eugene’s Nuclear Power Controversy, 1968–1970.” Pacific Historical Review 59, no. 3 (August 1990): 349–73.
Porter, Theodore M. Trust in Numbers: The Pursuit of Objectivity in Science and Public Life. Princeton, NJ: Princeton University Press, 1995.
“Preliminary Conclusions of the Members of the Expert Mission to Assess the Safety of NPP Kozloduy.” Bulgarian EuroBulletin, November 17–18, 2003.
“Preventing Chernobyl II.” Business Week, June 8, 1992, 44–51.
Primack, J. R., and Frank von Hippel. Advice and Dissent: Scientists in the Political Arena. New York: Basic Books, 1974.
Primatarova, Antoinette. “The Closure of Units 1–4 of the Kozloduy Nuclear Power Plant.” In Managing Political Crisis in Bulgaria: Pragmatism and Procrastination, edited by Kjell Engelbrekt and Markus Furberg, 97–138. Stockholm: Elanders Gotab, 2005.
Pritchard, Sara B. “An Envirotechnical Disaster: Nature, Technology, and Politics at Fukushima.” Environmental History 17, no. 2 (April 2012): 219–43.
“Probabilistic Assessment Seen Increasingly Accepted, But Caution Urged.” Inside NRC, July 28, 1980, 10.
Pugh, Michael C. Probability Approach to Safety Analysis, TRG Report 1949(R). Warrington, United Kingdom: United Kingdom Atomic Energy Authority, 1969.
Rabinovitz, Jonathan. “N.R.C. Gives Final Approval to Restart of Millstone Reactor.” New York Times, June 30, 1998, B4.
Ramana, M. V. “Beyond Our Imagination: Fukushima and the Problem of Assessing Risk.” Bulletin of the Atomic Scientists, April 20, 2011, https://thebulletin.org/2011/04/beyond-our-imagination-fukushima-and-the-problem-of-assessing-risk-2.
Rasmussen, Norman. “Measuring Nuclear Risks: An Interview with Norman Rasmussen.” The Center Magazine, May/June 1981, 15–21.
———. “The Safety Study and its Feedback.” Bulletin of the Atomic Scientists 31 (September 1975): 25–28.
“Reactor Safety R&D Held Too Divorced from Licensing Process.” Nucleonics Week, April 13, 1967, 1.
Read, Richard. “How Tenacity, a Wall Saved a Japanese Nuclear Plant from Meltdown after Tsunami.” The Oregonian, August 25, 2012. https://www.oregonlive.com/opinion/index.ssf/2012/08/how_tenacity_a_wall_saved_a_ja.html.
Rees, Eberhard. “Marshall Space Flight Center Approach in Achieving High Reliability of the Saturn Class Vehicles.” 4th Annual Reliability and Maintainability Conference, Los Angeles, CA, July 28–30, 1965, https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19660020061.pdf.
Rees, Joseph V. Hostages of Each Other: The Transformation of Nuclear Safety Since Three Mile Island. Chicago: University of Chicago Press, 1994.
“Report to the APS by the Study Group on Light-Water Reactor Safety.” Reviews of Modern Physics 47, sup. 1 (Summer 1975).
“Return of the Nukes Will Take a Miracle.” Engineering News-Record 246, no. 24 (June 18, 2001): 108.
Rijpma, Jos A. “From Deadlock to Dead End: The Normal Accidents—High Reliability Debate Revisited.” Journal of Contingencies and Crisis Management 11, no. 1 (March 2003): 37–45.
Riley, Jeff, and John Weglian. “Applied Risk Management in Electric Power Plant Decision Making.” In 2019 Annual Reliability and Maintainability Symposium (RAMS), January 28–31, 2019, Orlando, FL, https://ieeexplore.ieee.org/document/8768918.
Rip, Arie. “The Mutual Dependence of Risk Research and Political Context.” Science & Technology Studies 4, no. 3/4 (Autumn–Winter 1986): 3–15.
Ripley, Anthony. “Safety Gear Untried at A-Power Plants.” New York Times, December 11, 1971, 1, 62.
Rittenhouse, P. L. “Fuel-Rod Failure and Its Effects in Light-Water Reactor Accidents.” Nuclear Safety 12 (September–October 1971): 487–95.
Roberts, Leslie. “Counting on Science at EPA.” Science 249, no. 4969 (August 10, 1990): 616–18.
Rogers, S. J., and G. E. Kennedy. Fission Product Release During a Simulated Meltdown of a PWR Type Core, Technical Report 6, NP-7071. Callery, PA: MSA Research Corporation, October 20, 1958.
Rosa, Eugene, and Riley E. Dunlap. “Poll Trends: Nuclear Power: Three Decades of Public Opinion.” The Public Opinion Quarterly 58, no. 3 (Summer 1994): 295–324.
Rosen, M. “An International Safety Agenda.” In The Safety of Nuclear Power: Strategy for the Future, Proceedings of a Conference, Vienna, 2–6 September 1991, 41–46. Vienna: IAEA, 1992.
Rubel, Paul. “Reliability-Engineering Methods in Reactor-Safety Technology.” Nuclear Safety 12, no. 5 (September–October 1971): 496–99.
Ruckelshaus, William D. “Science, Risk, and Public Policy.” Science 221, no. 4615 (September 9, 1983): 1026–28.
“Russian President Proposes International Nuclear Safety Rules.” Nucleonics Week, May 12, 2011, 1.
Ryan, Margaret L. “Dose Regulation Basis Excoriated, But Replacement Not In Sight.” Inside NRC, May 11, 1998, 3.
———. “EQE Says Bulgaria’s Kozloduy Units Could be Made Safe for $150 Million.” Nucleonics Week, April 16, 1992, 3.
———. “NRC Commission Endorses Revival of Nuclear Before Senate Panel.” Inside NRC, March 1, 1999, 16–17.
———. “NRC Faults Plant Maintenance and Says Utilities Must Find Answers.” Inside NRC, March 31, 1986, 8.
———. “NRC Finds Slow Going in Measuring Organizational Safety Culture.” Inside NRC, May 1, 1995, 1, 12–13.
———. “Nuclear Plant Success Said to Depend on Developing Managers.” Nucleonics Week, March 23, 1989, 13.
———. “NUS Audit Says Temelin VVER-1000s Can Meet Western Safety Standards.” Nucleonics Week, March 19, 1992, 1.
———. “Prerequisites Emerge for Joining the U.S. ‘Nuclear Renaissance.’” Nucleonics Week, November 18, 1999, 1–3.
———. “Regulators Told to Increase International Efforts in Wake of Chernobyl.” Inside NRC, June 9, 1986, 3.
———. “Seminar Sees Nuclear Industry Evolving Unique Safety Culture.” Nucleonics Week, May 4, 1995, 14.
———. “Temelin Plant Getting Green Light as CEZ Backs NUS Safety Upgrades.” Nucleonics Week, March 26, 1992, 3.
———. “U.S. Nuclear O&M Costs Pushed Back Downward in 2004.” Nucleonics Week, July 7, 2005, 1.
———. “WANO Says Old VVERs Not Worth Overhauling, Should Shut by 1995.” Nucleonics Week, December 12, 1991, 1.
“Safety Goals Finally Approved.” Nuclear Industry 30 (February 1983): 3–4.
“Safety Goal Planned.” Nuclear Industry 28 (March 1981): 27–28.
“Safety Principles Underlying the NUSS Documents.” IAEA Bulletin 22, no. 1 (1980): 51–53.
Sagan, Scott D. “The Problem of Redundancy Problem: Why More Nuclear Security Forces May Produce Less Nuclear Security.” Risk Analysis 24, no. 4 (2004): 935–46.
Sains, Ariane. “Six Energy Ministers in Germany Oppose Nuclear Power in EU.” Nucleonics Week, September 19, 2013, 5.
“Sale and Early Closure of Units, A Glimpse at Industry’s Future.” Nucleonics Week, July 23, 1998, 1, 8–10.
Saleh, J. H., and K. Marais. “Highlights from the Early (and Pre-) History of Reliability Engineering.” Reliability Engineering and System Safety 91 (2006): 249–56.
Sandia National Laboratories. An Overview of the Evolution of Human Reliability Analysis in the Context of Probabilistic Risk Assessment, SAND2008-5085. Albuquerque, NM: Sandia National Laboratory, January 2009.
Sartmadjiev, A., E. V. Balabanov, and S. Genov. “A Concept, A Technical Solution, and Analysis for Modernization of the Localization System of Units 3 & 4 Kozloduy NPP, Bulgaria.” Sixth International Information Exchange Forum on “Safety Analysis for Nuclear Power Plants of VVER and RBMK Types,” Kiev, Ukraine, U.S. Department of Energy, April 8–12. http://www.insc.gov.ua/forum6/doc/poster/genov.pdf.
“Schlesinger: The End of One Era, the Start of Another.” Nucleonics Week, October 21, 1971, 4–6.
“Schlesinger Wants to Keep Shaw in Charge of Reactor Safety Program.” Nucleonics Week, June 8, 1972, 1–2.
Schmid, Sonja. Producing Power: The Pre-Chernobyl History of the Soviet Nuclear Industry. Cambridge, MA: MIT Press, 2015.
Schneider, Keith. “Pete Domenici, Long a Powerful Senate Voice on Fiscal Policy, Dies at 85.” New York Times, September 13, 2017, B14.
Schot, Johan, and Frank Schipper. “Experts and European Transport Integration, 1945–1958.” Journal of European Public Policy 18, no. 2 (2011): 274–93.
Schroder, R. J. Fault Trees for Reliability Analysis, BNWL-SA-2522. Richland, WA: Battelle Memorial Institute Pacific Northwest Laboratories, October 1969.
Science Applications International Corporation. Probabilistic Risk Assessment of the Space Shuttle: A Study of Losing the Vehicle During Normal Operation, Vol. 1: Final Report. New York: SAIC, February 1995.
Scranton, Philip. “The Challenge of Technological Uncertainty.” Technology and Culture 50 (April 2009): 513–18.
Seaborg, Glenn. The Journal of Glenn Seaborg, Chairman of the U.S. Atomic Energy Commission, 1961–1971. Berkeley: University of California, Lawrence Berkeley Laboratory, 1988.
Seelye, Katharine Q. “Nuclear Power Gains in Status After Lobbying.” New York Times, May 23, 2001, A1.
Seife, Charles. “Columbia Disaster Underscores the Risky Nature of Risk Analysis.” Science 299, no. 5609 (February 14, 2003): 1001.
“SERs, Operating Plants, Controls, and ATWS Top New Chairman’s Priorities.” Inside NRC, June 29, 1981, 3–5.
Shapiro, T. Rees. “James R. Ramey, Atomic Energy Commission Expert on Nuclear Technology, Dies at 95.” Washington Post, September 2, 2010, B7.
“Shaw Demands Better Quality Assurance: Tells JCAE of Fuel Failures.” Nucleonics Week, March 23, 1967, 2–3.
“Shaw Still Angry and Still Undecided on Whether to Quit AEC.” Nucleonics Week, May 24, 1973, 1–2.
Shore, Chris. Building Europe: The Cultural Politics of European Integration. London: Routledge, 2000.
Shrader-Frechette, Kristin. Risk and Rationality: Philosophical Foundations for Populist Reforms. Berkeley: University of California Press, 1991.
Siddall, E. “Reliable Reactor Protection.” Nucleonics 15, no. 6 (June 1957): 124–29.
———. Reactor Safety Standards and Their Attainment, CRNE-726. Chalk River, Ontario: Atomic Energy of Canada Ltd., September 13, 1957.
Simons, Marlise. “Evolution in Europe: Germans to Shut 5 Atom Plants Built by the Soviets.” New York Times, October 21, 1990, 1.
———. “West Urges Bulgarians to Shut Reactors.” New York Times, July 10, 1991, A3.
Siu, Nathan, et al. “PSA Technology Challenges Revealed by the Great East Japan Earthquake.” PSAM Topical Conference in Light of the Fukushima Dai-ichi Accident, Tokyo, Japan, April 15–17, 2013. NRC ADAMS ML13038A203.
Slifer, Bruce C. Loss-of-Coolant Accident and Emergency Core Cooling Models for General Electric Boiling Water Reactors, NEDO-10329. San Jose, CA: General Electric Co., April 1971.
Slovic, Paul. The Perception of Risk. London: Earthscan Publications, 2000.
———. “Perceived Risk, Trust and Democracy.” Risk Analysis 13, no. 6 (1993): 675–82.
Smith, Maxwell C., Anita Ghosh, and Catherine E. Kanatas. “Death v. Taxes: Agency Approaches to Setting Safety Goals Using Risk Management in an Evolving Legal Environment.” New York University Environmental Law Journal 26, no. 1 (2017): 41–106.
Smith, Roger. “Nuclear Power in the U.S.: Chaos Reigns Supreme as 1975 Opens.” Nucleonics Week, January 16, 1975, 1–2.
———. “Probability Analyst Calls Rasmussen Accident Study Futile.” Nucleonics Week, April 4, 1974, 1–2.
Solnick, Steve. “Compton Lecturer Criticizes NRC Set-Up.” The Tech (MIT, Cambridge, MA), April 15, 1980, 1, 3.
Solomon, K. A., et al. Estimate of the Hazards to a Nuclear Reactor from the Random Impact of Meteorites, UCLA-ENG-7426. Los Angeles: University of California, Los Angeles, March 1974.
Sorensen, J. N., G. E. Apostolakis, and D. A. Powers. “On the Role of Defense in Depth in Risk-Informed Regulation.” In Proceedings of PSA ‘99, International Topical Meeting on Probabilistic Safety Assessment, August 22–25, 1999, Washington, DC, 408–13. https://www.researchgate.net/publication/285245803_On_the_role_of_defense_in_depth_in_risk-informed_regulation.
Sovacool, Benjamin K., Alex Gilbert, and Daniel Nugent. “An International Comparative Assessment of Construction Cost Overruns for Electricity Infrastructure.” Energy Research and Social Science 3 (2014): 152–60.
Sowby, F. D. “Radiation and Other Risks.” Health Physics 11 (1965): 879–87.
Spangler, Miller B. “De Minimis Risk Concepts in the US Nuclear Regulatory Commission, Part 1: As Low as Reasonably Achievable.” Project Appraisal 2, no. 4 (1987): 231–42.
———. “De Minimis Risk Concepts in the US Nuclear Regulatory Commission, Part 2: Implicit Uses in Waste Management and Regulating Uranium Mines.” Project Appraisal 3, no. 1 (1988): 43–54.
———. “De Minimis Risk Concepts in the US Nuclear Regulatory Commission, Part 3: The Establishment of Cutoff Levels of Regulatory Concerns in the Treatment of Severe Accident Issues.” Project Appraisal 3, no. 2 (1988): 95–104.
Sperry, Roger L. “Saving Energy.” Government Executive, January 1, 1996, https://www.govexec.com/magazine/1996/01/saving-energy/15.
Stacy, Susan M. Proving the Principle: A History of the Idaho National Engineering and Environmental Laboratory. Idaho Falls, ID: Idaho Operations Office of the Department of Energy, 2000.
“Staff Backs Justifying Safety Goal Backfits with Economic Benefits.” Inside NRC, October 28, 1985, 11–12.
Stamatelatos, Michael G. “Recent NASA Activities and Plans in Risk Analysis.” Workshop on Uses of Risk-Information by Federal Agencies, University of Maryland at College Park, March 6–7, 2001. College Park: University of Maryland, Center for Technology Risk Studies and NRC, 2001.
Stamatelatos, Michael G., and Peter J. Rutledge. “Probabilistic Risk Assessment at NASA and Plans for the Future.” In Proceedings of Joint ESA-NASA Space-Flight Safety Conference, ESTEC, Noordwijk, NL, 11–14 June 2002, 21–29. European Space Agency, August 2002.
Starr, Chauncey. “Radiation in Perspective.” Nuclear Safety 5 (Summer 1964): 325–33.
———. “Social Benefit versus Technological Risk.” Science 165, no. 3899 (September 19, 1969): 1232–38.
Starr, C., and M. A. Greenfield. Public Health Risks of Thermal Power Plants, UCLA-ENG-7242. Los Angeles: School of Engineering and Applied Science, UCLA, May 1972.
Stellfox, David. “Commission to Get Staff Recommendation Against Maintenance Rule.” Inside NRC, April 22, 1991, 1.
———. “IPE Round-Up: Industry and NRC Officials Debate Future Use of PRA.” Inside NRC, November 2, 1992, 3.
———. “Jackson Orders Review of 50.59 Changes after Millstone Refueling Case.” Inside NRC, December 11, 1995, 1.
———. “Jackson Says Questions of Who Shut Millstone Down Misses the Point.” Inside NRC, December 22, 1997, 12–13.
———. “Maine Yankee Says NRC Inspection Will Cost Utility $10-Million.” Nucleonics Week, June 20, 1996, 14.
———. “Maine Yankee Shutdown: Homicide or Suicide?” Nucleonics Week, June 5, 1997, 8.
———. “NEI Proposes Wholesale Revision for NRC Regulation Based on PRA.” Inside NRC, September 14, 1998, 1.
———. “NRC Considers Allowing Core Damage Accidents in ECCS Rewrite.” Inside NRC, May 22, 2000, 1–2.
———. “Nuclear Plant Management Ills Prompt NYPA Chief Brons’ Departure.” Nucleonics Week, June 10, 1993, 6.
———. “Part 50 Pulled into Big Changes Proposed in NRC Regulations.” Nucleonics Week, November 5, 1998, 6–7.
———. “PRA Practitioners See Little Business in Risk-Informed Rules.” Nucleonics Week, January 28, 1999, 1–2.
———. “PRA Use at RBMKs and VVERs Debated at IAEA Symposium.” Nucleonics Week, November 27, 1997, 11.
———. “Reforming Part 50: Staff Option, NEI’s Choices and a Lot of Questions.” Inside NRC, November 9, 1998, 1.
———. “Risk-Informed Cost-Benefit Equation Missing both ‘Cost’ and ‘Benefit.’” Inside NRC, December 20, 1999, 1.
———. “Risk-Informed Part 50 Could Boost Advanced Reactor Prospects.” Inside NRC, July 5, 1999, 1, 13.
———. “Risk-Informing Part 50’s Technical Basis at Beginning of Long Journey.” Inside NRC, July 3, 2000, 1, 9–10.
———. “South Texas Project 12th Plant to Get NRC’s Diagnostic Evaluation.” Inside NRC, April 5, 1993, 1, 11–13.
———. “UCS’ Lochbaum Approves of New Reactor Oversight Process.” Inside NRC, March 13, 2000, 4.
———. “Wall Street, Lacking SALP Scores, Eyes New Oversight Program.” Nucleonics Week, February 10, 2000, 1.
———. “Whistleblower Forced Close Look at Millstone; Report Out This Week.” Inside NRC, August 5, 1996, 14.
“Support for Nuclear Power Grows in the U.S.” Nuclear News 44, no. 3 (March 2001): 19.
“Support for Nuclear Power Soars in U.S., Canada.” Generation Week, April 11, 2001.
“Suppressed Oak Ridge Memo Casts Doubt on ECCS Criteria.” Nucleonics Week, March 16, 1972, 2–3.
Talbot, David. “80 Seconds of Warning for Tokyo: Earthquake-Detection Technology Investment Pays Off for Japan.” MIT Technology Review, March 11, 2011. https://www.technologyreview.com/s/423274/80-seconds-of-warning-for-tokyo.
Tanguy, P. “The French Approach to Nuclear Power Safety.” Nuclear Safety 24, no. 5 (September–October 1983): 589–606.
Technology for Energy Corporation. IDCOR Program Plan. Knoxville, TN: Atomic Industrial Forum, November 1981.
———. Technical Report 1.1, Safety Goal Evaluation: Implications for IDCOR. Knoxville, TN: Atomic Industrial Forum, August 1984.
Temples, James R. “The Politics of Nuclear Power: A Subgovernment in Transition.” Political Science Quarterly 95, no. 2 (Summer 1980): 239–60.
Thompson, Kimberly M., Paul F. Deisler, Jr., and Richard C. Schwing. “Interdisciplinary Vision: The First 25 Years of the Society for Risk Analysis (SRA), 1980–2005.” Risk Analysis 25, no. 6 (2005): 1333–86.
Thompson, T. J., and J. G. Beckerley, eds. Reactor Materials and Engineering. Vol. 2 of The Technology of Nuclear Reactor Safety. Cambridge, MA: MIT Press, 1973.
Toepfer, K. “President’s Opening Address.” In The Safety of Nuclear Power: Strategy for the Future, Proceedings of a Conference, Vienna, 2–6 September 1991, 9–22. Vienna: IAEA, 1992.
Towers Perrin. Nuclear Regulatory Review Study: Final Report. Chicago: Towers Perrin, October 24, 1994.
Tranteeva, Radelina. “Kozloduy NPP: Status, Modernization Programs, and Evaluation of Rest Life Time.” In IAEA, Extrabudgetary Programme on Safety Aspects of Long Term Operation of Water Moderated Reactors, Minutes of the Programme’s Working Group 1 First Meeting, January 13–15, 2004, 213–19. Vienna: IAEA, March 29, 2004.
Travis, Curtis C., et al. “Cancer Risk Management: A Review of 132 Federal Regulatory Decisions.” Environmental Science and Technology 21, no. 5 (1987): 415–20.
Trischler, Helmuth, and Hans Weinberger. “Engineering Europe: Big Technologies and Military Systems in the Making of 20th-Century Europe.” History and Technology 21, no. 1 (March 2005): 49–93.
Tuhus, Melinda. “Who Pays for Mistakes in Making Electricity?” New York Times, March 28, 1998, CN 14.
Turner, Barry A. “The Organizational and Interorganizational Development of Disasters.” Administrative Science Quarterly 21 (September 1976): 378–97.
Tversky, Amos, and Daniel Kahneman. “Judgment under Uncertainty: Heuristics and Biases.” Science 185, no. 4157 (September 27, 1974): 1124–31.
“Two Public Rulemakings Initiated: Dissension Mars ECCS Opener.” Nuclear Industry 19 (January 1972): 16–20.
“UCS Cross-Examination of Milton Shaw.” Nuclear News 15, no. 5 (May 1972): 22.
Union of Concerned Scientists. Preventing an American Fukushima: Limited Progress Five Years after Japan’s Nuclear Power Plant Disaster. Cambridge, MA: UCS, March 2016.
———. The Risks of Nuclear Power Reactors: A Review of the NRC Reactor Safety Study WASH-1400 (NUREG-75/014). Cambridge, MA: Union of Concerned Scientists, 1977.
US Atomic Energy Commission. “Acceptance Criteria for Emergency Core Cooling Systems for Light-Water-Cooled Nuclear Power Reactors.” Federal Register 39, no. 3 (January 4, 1974): 1001–6.
———. Annual Report to Congress of the Atomic Energy Commission for 1971. Washington, DC: Government Printing Office, 1972.
———. “Proposed Rulemaking 10 CFR Licensing of Production and Utilization Facilities: Consideration of Accidents in Implementation of the National Environmental Policy Act of 1969.” Federal Register 36, no. 231 (December 1, 1971): 22851–54.
———. Reactor Safety Study: An Assessment of Accident Risks in U.S. Commercial Nuclear Power Plants, WASH-1400, Draft. Washington, DC: AEC, August 1974.
———. Technical Basis for Interim Regional Tornado Criteria, WASH-1300. Washington, DC: AEC, May 1974.
———. Technical Report on Anticipated Transients Without Scram for Water-Cooled Power Reactors, WASH-1270. Washington, DC: AEC, September 1973.
———. Theoretical Possibilities and Consequences of Major Accidents in Large Nuclear Power Plants, WASH-740. Washington, DC: AEC, March 1957.
US Congress, Office of Technology Assessment. Nuclear Power in an Age of Uncertainty, OTA-E-216. Washington, DC: Government Printing Office, February 1984.
US Department of Energy. Historic American Engineering Record B Reactor (105-B Building), HAER No. WA-164. Richland, WA: US Department of Energy (DOE), May 2001.
———. Nuclear Plant Cancellations: Causes, Costs and Consequences, DOE/EIA-0392. Washington, DC: DOE, April 1983.
———, and New York State Energy Research and Development Authority. Final Environmental Impact Statement for Decommissioning and/or Long-Term Stewardship at the West Valley Demonstration Project and Western New York Nuclear Service Center, DOE/EIS-0226. West Valley, NY: DOE, January 2010.
US Energy Information Administration. The Changing Structure of the Electric Power Industry: An Update, DOE/EIA-0562(96). Washington, DC: Energy Information Administration, December 1996.
———. International Energy Outlook: 1999. Washington, DC: Energy Information Administration, March 1999.
US Environmental Protection Agency. “40 CFR Part 61, National Emission Standards for Hazardous Air Pollutants; Radionuclides.” Federal Register 54, no. 240 (December 15, 1989): 51654–715.
———. “40 CFR Part 300, National Oil and Hazardous Substances Pollution Contingency Plan.” Federal Register 53, no. 245 (December 21, 1988): 51394–520.
———. “40 CFR Part 300, National Oil and Hazardous Substances Pollution Contingency Plan.” Federal Register 55, no. 46 (March 8, 1990): 8666–865.
———. “Health Risk and Economic Impact Assessments of Suspected Carcinogens.” Federal Register 41, no. 102 (May 25, 1976): 21402–5.
———. Reactor Safety Study (WASH-1400): A Review of the Draft Report, EPA-520/3-75-012. Washington, DC: EPA, August 1975.
———. Risk Assessment and Management: Framework for Decision Making, EPA 600/9-85-2. Washington, DC: EPA, December 1984.
———. Superfund Public Health Evaluation Manual, EPA-540/1-86-060. Washington, DC: EPA, October 1986.
US General Accounting Office. Nuclear Health and Safety: Consensus on Acceptable Radiation Risk to the Public is Lacking, GAO/RCED-94-190. Washington, DC: General Accounting Office, September 1994.
———. Nuclear Regulation: NRC Needs to More Aggressively and Comprehensively Resolve Issues Related to the Davis-Besse Nuclear Power Plant’s Shutdown, GAO-04-415. Washington, DC: GAO, May 2004.
———. Nuclear Regulation: Preventing Problem Plants Requires More Effective NRC Action, GAO/RCED-97-145. Washington, DC: GAO, March 1997.
———. Nuclear Regulation: Process for Backfitting Changes in Nuclear Plants Has Improved, GAO/RCED-86-27. Washington, DC: GAO, December 1985.
———. Nuclear Regulatory Commission: Natural Hazard Assessments Could Be More Risk-Informed, GAO-12-465. Washington, DC: GAO, April 2012.
———. This Country’s Most Expensive Light Water Reactor Safety Test Facility. Report to the Committee on Government Operations, United States Senate, RED-76-68. Washington, DC: GAO, May 26, 1976.
US House of Representatives, Subcommittee on Energy and the Environment of the Committee on Interior and Insular Affairs. Observations on the Reactor Safety Study: A Report. Washington, DC: Government Printing Office, January 1977.
US Nuclear Regulatory Commission. “10 CFR Part 20, et al., Radiological Criteria for License Termination.” Federal Register 62, no. 139 (July 21, 1997): 39059–92.
———. “10 CFR Part 50, Station Blackout.” Federal Register 53, no. 119 (June 21, 1988): 23203–19.
———. “10 CFR Parts 50 and 51, Nuclear Power Plant Accident Considerations Under the National Environmental Policy Act of 1969.” Federal Register 45, no. 116 (June 13, 1980): 40101–4.
———. Anticipated Transients without Scram for Light Water Reactors: Staff Report, NUREG-0460. 3 vols. Washington, DC: NRC, 1978.
———. Backfitting Guidelines, NUREG-1409. Washington, DC: NRC, June 1990.
———. Compendium of ECCS Research for Realistic LOCA Analysis, NUREG-1230, Rev. 4. Washington, DC: NRC, 1988.
———. Criteria for Preparation and Evaluation of Radiological Emergency Response Plans and Preparedness in Support of Nuclear Power Plants, NUREG-0654. Washington, DC: NRC, November 1980.
———. Critical Human Factors Issues in Nuclear Power Regulation and a Recommended Comprehensive Human Factors Long-Range Plan, NUREG/CR-2822. Washington, DC: NRC, August 1982.
———. “Early Site Permits; Standard Design Certifications; and Combined Licenses for Nuclear Power Reactors.” Federal Register 54, no. 73 (April 18, 1989): 15372–400.
———. Effective Risk Communication: The Nuclear Regulatory Commission’s Guidelines for External Risk Communication, NUREG/BR-0308. Washington, DC: NRC, January 2004.
———. Eliciting and Analyzing Expert Judgment: A Practical Guide, NUREG/CR-5424. Washington, DC: NRC, January 1990.
———. “Ensuring the Effectiveness of Maintenance Programs for Nuclear Power Plants.” Federal Register 53, no. 228 (November 28, 1988): 47822–29.
———. Estimating Loss-of-Coolant Accident (LOCA) Frequencies Through the Elicitation Process (Draft Report for Comment), NUREG-1829. Washington, DC: NRC, March 2005.
———. Evaluation of Station Blackout Accidents at Nuclear Power Plants, Technical Findings Related to Unresolved Safety Issue A-44, NUREG-1032. Washington, DC: NRC, June 1988.
———. Fault Tree Handbook, NUREG-0492. Washington, DC: NRC, January 1981.
———. “Final Commission Policy Statement on Maintenance of Nuclear Power Plants.” Federal Register 53, no. 56 (March 23, 1988): 9430–31.
———. “Final Policy Statement on the Restructuring and Economic Deregulation of the Electric Utility Industry.” Federal Register 62, no. 160 (August 19, 1997): 44071–78.
———. “Final Safety Culture Policy Statement.” Federal Register 76, no. 114 (June 14, 2011): 34773–78.
———. Guidance on the Treatment of Uncertainties Associated with PRAs in Risk-Informed Decisionmaking, Final Report, NUREG-1855, Rev. 1. Washington, DC: NRC, March 2017.
———. A Guide to Literature Relevant to the Organization and Administration of Nuclear Power Plants, NUREG/CR-3645. Washington, DC: NRC, December 1983.
———. Historical Review and Observations of Defense-in-Depth, NUREG/KM-0009. Washington, DC: NRC, April 2016.
———. Human Factors Evaluation of Control Room Design and Operator Performance at Three Mile Island-2, NUREG/CR-1270, Vol. 1. Washington, DC: NRC, December 1979.
———. Implications of the Accident at Chernobyl for Safety Regulation of Commercial Nuclear Power Plants in the United States, NUREG-1251, Vol. 1. Washington, DC: NRC, April 1989.
———. Improving Quality and the Assurance of Quality in the Design and Construction of Nuclear Power Plants: A Report to Congress, NUREG-1055. Washington, DC: NRC, May 1984.
———. “In the Matter of Consolidated Edison Company of New York (Indian Point, Unit No. 2) and the Power Authority of the State of New York (Indian Point, Unit No. 3), Docket Nos. 50-247-SP and 50-286-SP.” In Nuclear Regulatory Commission Issuances, Opinions and Decisions of the Nuclear Regulatory Commission with Selected Orders, May 1, 1985–June 30, 1985, 21:1043–1100. Washington, DC: NRC, 1985.
———. Influence of Organizational Factors on Performance Reliability, NUREG/CR-5538. Washington, DC: NRC, December 1991.
———. Information Digest, NUREG-1350. 31 vols. Washington, DC: NRC, 1989–2019.
———. Lessons Learned from Maintenance Rule Baseline Inspections, NUREG-1648. Washington, DC: NRC, October 1999.
———. Lessons Learned from Early Implementation of the Maintenance Rule at Nine Nuclear Power Plants, NUREG-1526. Washington, DC: NRC, June 1995.
———. Loss of Main and Auxiliary Feedwater Event at the Davis-Besse Plant on June 9, 1985, NUREG-1154. Washington, DC: NRC, July 1985.
———. Maintenance Approaches and Practices in Selected Foreign Nuclear Power Programs and Other U.S. Industries: Review and Lessons Learned, NUREG-1333. Washington, DC: NRC, April 1990.
———. “Monitoring the Effectiveness of Maintenance at Nuclear Power Plants.” Federal Register 56, no. 132 (July 10, 1991): 31306–23.
———. “Monitoring the Effectiveness of Maintenance at Nuclear Power Plants.” Federal Register 64, no. 137 (July 19, 1999): 38551–57.
———. No Undue Risk: Regulating the Safety of Operating Nuclear Power Plants, NUREG/BR-0518. Washington, DC: NRC, June 2014.
———. “Nuclear Energy Institute; Receipt of Petition for Rulemaking.” Federal Register 67, no. 67 (April 8, 2002): 16654–56.
———. Nuclear-Power-Plant Severe-Accident Research Plan, NUREG-0900. Washington, DC: NRC, January 1983.
———. Organizational Analysis and Safety for Utilities with Nuclear Power Plants, NUREG/CR-3215. Vols. 1 and 2. Washington, DC: NRC, July 1983.
———. Perspectives on Reactor Safety, NUREG/CR-6042, Rev. 2. Washington, DC: NRC, March 2002.
———. “Policy Statement on Severe Reactor Accidents Regarding Future Designs and Existing Plants.” Federal Register 50, no. 153 (August 8, 1985): 32138–50.
———. “Policy Statement on the Conduct of Nuclear Power Plant Operations.” Federal Register 54, no. 14 (January 24, 1989): 3424–26.
———. Preliminary Assessment of Core Melt Accidents at the Zion and Indian Point Nuclear Power Plants and Strategies for Mitigating Their Effects: Analysis of Containment Building Failure Modes, Preliminary Report, NUREG-0850. Washington, DC: NRC, November 1981.
———. The Price-Anderson Act—Crossing the Bridge to the Next Century: A Report to Congress, NUREG/CR-6617. Washington, DC: NRC, October 1998.
———. A Prioritization of Generic Safety Issues, NUREG-0933. Washington, DC: NRC, December 1983.
———. Probabilistic Risk Assessment and Regulatory Decisionmaking: Some Frequently Asked Questions, NUREG-2201. Washington, DC: NRC, September 2016.
———. Probabilistic Risk Assessment (PRA) Reference Document: Final Report, NUREG-1050. Washington, DC: NRC, August 1984.
———. Proceedings of the Public Workshop for NRC Rulemaking on Maintenance of Nuclear Power Plants Held at Mayflower Hotel, DC, July 11–13, 1988, NUREG/CP-0099. Washington, DC: NRC, November 1988.
———. Proceedings of the U.S. Nuclear Regulatory Commission: NRC Regulatory Information Conference, NUREG/CP-0102. Vol. 1. Washington, DC: NRC, 1989.
———. A Process for Risk-Focused Maintenance, NUREG/CR-5695. Washington, DC: NRC, March 1991.
———. “Proposed Policy Statement on Safety Goals for Nuclear Power Plants.” Federal Register 47, no. 32 (February 17, 1982): 7023–28.
———. A Proposed Risk Management Regulatory Framework, NUREG-2150. Washington, DC: NRC, April 2012.
———. Protecting Our Nation: A Report of the U.S. Nuclear Regulatory Commission, NUREG/BR-0314, Rev. 4. Washington, DC: NRC, August 2015.
———. Reactor Oversight Process, NUREG-1649, Rev. 3. Washington, DC: NRC, July 2000.
———. Reactor Safety Study: An Assessment of Accident Risks in U.S. Commercial Nuclear Power Plants, WASH-1400 (NUREG-75/014). Washington, DC: NRC, October 1975.
———. Recommendations for Enhancing Reactor Safety in the 21st Century: The Near-Term Task Force Review of Insights from the Fukushima Dai-ichi Accident. Washington, DC: NRC, July 12, 2011.
———. Reflections on Fukushima: NRC Senior Leadership Visit to Japan, 2014, NUREG/KM-0008. Washington, DC: NRC, December 2014.
———. Regulatory Effectiveness of the Anticipated Transient Without Scram Rule, NUREG-1780. Washington, DC: NRC, September 2003.
———. Regulatory Guide 1.174—An Approach for Using Probabilistic Risk Assessment in Risk-Informed Decisions on Plant-Specific Changes to the Licensing Basis. Washington, DC: NRC, July 1998.
———. Regulatory Review Group Summary and Overview. Washington, DC: NRC, August 31, 1993.
———. Reliability of Emergency AC Power Systems at Nuclear Power Plants, NUREG/CR-2989. Washington, DC: NRC, June 1983.
———. Review and Evaluation of the Indian Point Probabilistic Safety Study, NUREG/CR-2934. Washington, DC: NRC, December 1984.
———. A Review of NRC Staff Uses of Probabilistic Risk Assessment, NUREG-1489. Washington, DC: NRC, March 1994.
———. “Risk-Informed Categorization and Treatment of Structures, Systems and Components for Nuclear Power Reactors.” Federal Register 69, no. 224 (November 22, 2004): 68008–48.
———. “Risk-Informed Changes to Loss-of-Coolant Accident Technical Requirements.” Federal Register 74, no. 152 (August 10, 2009): 40006–52.
———. “Risk-Informed Changes to Loss-of-Coolant Accident Technical Requirements.” Federal Register 81, no. 194 (October 6, 2016): 69446–48.
———. Safety Culture: A Survey of the State-of-the-Art, NUREG-1756. Washington, DC: NRC, January 2002.
———. Safety Culture Common Language, NUREG-2165. Washington, DC: NRC, March 2014.
———. Safety Culture Policy Statement, NUREG/BR-0500, Rev. 1. Washington, DC: NRC, December 2012.
———. “Safety Goals for the Operations of Nuclear Power Plants; Policy Statement.” Federal Register 51, no. 162 (August 21, 1986): 30028–33.
———. Selected Review of Foreign Licensing Practices for Nuclear Power Plants, NUREG/CR-2664. Washington, DC: NRC, April 1982.
———. Severe Accident Risks: An Assessment for Five U.S. Nuclear Power Plants, NUREG-1150. Washington, DC: NRC, December 1990.
———. Severe Accident Risks for VVER Reactors: The Kalinin PRA Program, NUREG/CR-6572. Washington, DC: NRC, May 1999.
———. Special Inquiry Group. Three Mile Island: A Report to the Commissioners and to the Public, NUREG/CR-1250, Vol. 1. Washington, DC: NRC, January 1980.
———. Staff Discussion of Fifteen Technical Issues Listed in Attachment to November 3, 1976 Memorandum From Director NRR to NRR Staff, NUREG-0138. Washington, DC: NRC, November 1976.
———. Standard Review Plan, NUREG-0800. Washington, DC: NRC, July 1981.
———. State-of-the-Art Reactor Consequence Analyses (SOARCA) Report, NUREG-1935. Washington, DC: NRC, November 2012.
———. Station Blackout Accident Analyses (Part of NRC Task Action Plan A-44), NUREG/CR-3226. Washington, DC: NRC, May 1983.
———. Strategic Plan: Fiscal Year 1997–Fiscal Year 2002, NUREG-1614. Vols. 1 and 2. Washington, DC: NRC, September 1997.
———. The Status of Recommendations of the President’s Commission on the Accident at Three Mile Island: A Ten-Year Review, NUREG-1355. Washington, DC: NRC, March 1989.
———. The Technical Basis for the NRC’s Guidelines for External Risk Communication, NUREG/CR-6840. Washington, DC: NRC, January 2004.
———. TMI-2 Lessons Learned Task Force Final Report, NUREG-0585. Washington, DC: NRC, October 31, 1979.
———. Toward a Safety Goal: Discussion of Preliminary Policy Considerations, NUREG-0764. Washington, DC: NRC, March 1981.
———. Transient Response of Babcock & Wilcox-Designed Reactors, NUREG-0667. Washington, DC: NRC, 1980.
———. Trends and Patterns in Maintenance Performance in the U.S. Nuclear Power Industry: 1980–1985, NUREG/CR-4611. Washington, DC: NRC, 1986.
———. US Nuclear Regulatory Commission Annual Report 1978, NUREG-0516. Washington, DC: NRC, 1979.
———. US Nuclear Regulatory Commission 1987 Annual Report, NUREG-1145, Vol. 4. Washington, DC: NRC, 1988.
———. United States Nuclear Regulatory Commission Annual Report 1993, NUREG-1145, Vol. 10. Washington, DC: NRC, 1994.
———. U.S. Nuclear Regulatory Commission Policy and Planning Guidance 1984, NUREG-0885, Issue 3. Washington, DC: NRC, January 1984.
———. U.S. Nuclear Regulatory Commission Policy and Planning Guidance 1987, NUREG-0885, Issue 6. Washington, DC: NRC, September 1987.
———. “Use of Probabilistic Risk Assessment Methods in Nuclear Regulatory Activities: Final Policy Statement.” Federal Register 60, no. 158 (August 16, 1995): 42622–29.
Upton, J. W. Assessment of the Impacts of Transferring Certain Nuclear Reactor Technologies to the Soviet Union and Eastern Europe, PNL-X-765. Richland, WA: Pacific Northwest Laboratory, June 1987.
Ushio, Shota. “NRA Plans to Complete Revision of Plant Inspection Process in 2020.” Inside NRC, April 1, 2019, 4–5.
Usui, Naoaki. “NSC White Paper Says PSA Shows Japan’s Reactors are Safe Enough.” Nucleonics Week, January 10, 1991, 3.
Vaughan, Diane. The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA. Chicago: University of Chicago Press, 1996.
———. “The Dark Side of Organizations: Mistake, Misconduct, and Disaster.” Annual Review of Sociology 25 (1999): 271–305.
“Verheugen Said to Back Temelin Plant Safety.” The Prague Post, November 29, 2000.
Vesely, W. E. Fault Tree Handbook with Aerospace Applications. Washington, DC: NASA Office of Safety and Mission Assurance, August 2002.
———. “Reliability and Fault Tree Applications at the NRTS.” IEEE Transactions on Nuclear Science NS-18 (February 1971): 472–80.
———. The Evaluation of Failure and Failure Related Data, ANCR-1024. Idaho Falls, ID: Aerojet Nuclear Company, August 1971.
van der Vleuten, Erik. “Towards a Transnational History of Technology: Meanings, Promises, Pitfalls.” Technology and Culture 49, no. 4 (October 2008): 974–94.
———, and Arne Kaijser. Networking Europe: Transnational Infrastructures and the Shaping of Europe, 1850–2000. Sagamore Beach, MA: Watson Publishing International, 2006.
“Wake Me If It’s a Meltdown.” Time, April 13, 1987.
Wald, Matthew L. “At a Hearing, Nuclear Regulators are Criticized on 2 Fronts.” New York Times, July 18, 1998, A9.
———. “Edging Back to Nuclear Power.” New York Times, April 22, 2010, F1.
———. “Monopoly: Nuclear Power Version.” New York Times, August 16, 1998, sec. 4, 4.
———. “N.R.C. Lowers Estimate of How Many Would Die in a Meltdown.” New York Times, July 29, 2011, A14.
Walker, Andrea K. “One Step at a Time Toward Greatness.” Baltimore Sun, February 10, 2001. https://www.baltimoresun.com/news/bs-xpm-2001-02-10-0102100073-story.html.
Walker, J. Samuel. Containing the Atom: Nuclear Regulation in a Changing Environment, 1963–1971. Berkeley: University of California Press, 1992.
———. Permissible Dose: A History of Radiation Protection in the Twentieth Century. Berkeley: University of California Press, 2000.
———. Three Mile Island: A Nuclear Crisis in Historical Perspective. Berkeley: University of California Press, 2004.
Wall, Ian B. Nuclear Safety Objectives for General Electric Boiling Water Reactors, NEDE-13004. San Jose, CA: General Electric, August 1969.
———. “Probabilistic Assessment of Risk for Reactor Design and Siting.” Transactions of the American Nuclear Society, Fifteenth Annual Meeting 12 (June 15–19, 1969): 169.
———, R. M. Bernero, A. C. Millunzi, and S. Rosen. “The Reactor Safety Study: Its Influence upon Reactor Safety.” International Atomic Energy Agency, International Conference on Nuclear Power Experience, IAEA-CN-42-83, Vienna, 13–17 September 1982. Vienna: IAEA, 1983.
Wall, I. B., I. F. Stuart, and V. D. Nguyen. Probabilistic Assessment of Risk for Reactor Design and Siting, NEDE-130009. San Jose, CA: General Electric, September 1969.
Weatherwax, Robert K. “Virtues and Limitations of Risk Analysis.” Bulletin of the Atomic Scientists 31 (September 1975): 29–32.
Weick, Karl E. “Organizational Culture as a Source of High Reliability.” California Management Review 29, no. 2 (Winter 1987): 112–27.
Weil, Jenny. “AmerGen, GPU Nuclear Sign Deal for Sale of Oyster Creek.” Nucleonics Week, September 16, 1999, 1.
———. “Domenici Book Touts His Role in Regulatory Turnaround; Jackson Begs to Differ.” Inside NRC, October 4, 2004, 1.
———. “Final OK to Let PECO, Unicom be Largest U.S. Nuclear Operator.” Nucleonics Week, October 19, 2000, 2.
———. “House Appropriators Give NRC $5 Million Less Money Than FY-98 Level.” Inside NRC, June 22, 1998, 9.
———. “Meserve-Whitman Meeting May Mark Fresh Start for Agencies.” Inside NRC, March 12, 2001, 1.
———. “NRC-EPA War Over Decommissioning Cleanup Standards Moves to Congress.” Inside NRC, March 30, 1998, 5.
———. “NRC Gears Up for Budget Battle with Senate Appropriators.” Inside NRC, May 11, 1998, 1, 13.
———. “NRC IG: Financial Considerations Could Encroach on Safety Issues.” Nucleonics Week, February 20, 2003, 1.
———. “NRC Staff’s Risk-Informing Part 50 Rulemaking Uses New Risk Categories.” Inside NRC, October 25, 1999, 1.
———. “NRC Survives—For Now—Senate Subcommittee Effort to Sharply Cut Staff.” Inside NRC, June 8, 1998, 1.
———. “NRC’s Departing Commissioners Lay Out Vision for Nuclear Resurgence.” Inside NRC, March 19, 2007, 1, 14.
———. “NRG May Build ABWRs at South Texas; Commercial-Grade Parts to Cut Cost.” Inside NRC, June 26, 2006, 1, 14.
———. “Regulation, Lack of Cohesive Policy Blamed for Nuclear Industry Woes.” Inside NRC, May 25, 1998, 3.
———. “Sen. Inhofe Asks Industry to Critique NRC’s Regulatory Reform Plans.” Inside NRC, February 15, 1999, 8.
———. “Some Senate Appropriators Propose Abolishing NRC’s ASLB.” Inside NRC, June 8, 1998, 14.
Weinberg, Alvin M. The First Nuclear Era: The Life and Times of a Technological Fixer. New York: AIP Press, 1994.
———, et al. The Second Nuclear Era, ORAU/IEA-84-6(M). Oak Ridge, TN: Institute for Energy Analysis, Oak Ridge Associated Universities, March 1984.
Weintraub, Irene. “NEPA and Uncertainty in Low-Risk, High-Impact Scenarios: Nuclear Energy as a Case Study.” Cardozo Law Review 37, no. 4 (2016): 1565–98.
Weisskopf, Michael, and David Maraniss. “Forging an Alliance for Deregulation.” Washington Post, March 12, 1995, 61.
Wellock, Thomas R. “The Children of Chernobyl: Engineers and the Campaign for Safety in Soviet-Designed Reactors in Central and Eastern Europe.” History and Technology: An International Journal 29, no. 1 (2013): 3–32.
———. Critical Masses: Opposition to Nuclear Power in California, 1958–1978. Madison: University of Wisconsin Press, 1998.
———. “Engineering Uncertainty and Bureaucratic Crisis at the Atomic Energy Commission, 1964–1973.” Technology and Culture 53, no. 4 (October 2012): 846–84.
———. “A Figure of Merit: Quantifying the Probability of a Nuclear Reactor Accident.” Technology and Culture 58, no. 3 (July 2017): 678–721.
———. Preserving the Nation: The Conservation and Environmental Movements, 1870–2000. Wheeling, IL: Harlan Davidson, 2007.
———. “Social Scientists in an Adversarial Environment: The Nuclear Regulatory Commission and Organizational Factors Research.” Nuclear Technology, forthcoming.
Western European Nuclear Regulators’ Association (WENRA). General Conclusions of WENRA: On Nuclear Safety in the Candidate Countries to the European Union. October 2000. http://www.wenra.org/media/filer_public/2012/11/05/wenrasummary2000.pdf.
———. Nuclear Safety in EU Candidate Countries, NEI-SE-318. Sweden: WENRA, October 2000.
“Westinghouse Fuel Redesign to Reduce Cladding Temperature.” Nuclear Industry 19 (October 1972): 48.
Wicker, Tom. “No Nuclear Credibility.” New York Times, February 27, 1979, A17.
Wiener, Antje, and Thomas Diez. European Integration Theory. Oxford, UK: Oxford University Press, 2009.
Willis, C. A. Statistical Safety Evaluation of Power Reactors, AI-65-TDR-212. San Diego, CA: Atomics International, October 20, 1965.
———, and W. J. Carlson. “Fault Tree Analysis for SNAP Reactor Disposal Systems.” Transactions of the American Nuclear Society 9 (1966): 159–61.
Wilson, T. R., O. M. Hauge, and G. B. Matheney. Feasibility and Conceptual Design for the STEP Loss of Coolant Facility, IDO-16833. Idaho Falls, ID: Phillips Petroleum Co., December 22, 1962.
“With the Rasmussen Report in Hand, Why Escalate Design Conservatism?” Nucleonics Week, November 7, 1974, 6–7.
Wreathall, J., and C. Nemeth. “Assessing Risk: The Role of Probabilistic Risk Assessment (PRA) in Patient Safety Improvement.” Quality and Safety in Health Care 13, no. 3 (June 2004): 206–12.
Yamada, T. “Safety Evaluation of Nuclear Power Plants.” In Reactor Safety and Hazards Evaluation Techniques, 493–515. Vol. 2 of Proceedings of the Symposium on Reactor Safety and Hazards Evaluation Techniques, Vienna, 14–18 May 1962. Vienna: International Atomic Energy Agency, 1962.
Yamaguchi, Yuzo. “Japan’s NRA Uses Backfit Power to Order Review of Plant Volcano Risk.” Inside NRC, June 10, 2019, 6–7.
Ybarrondo, L. J., C. W. Solbrig, and H. S. Isbin. “The ‘Calculated’ Loss-of-Coolant Accident: A Review.” American Institute of Chemical Engineers Monograph Series 68, no. 7 (1972).
Yellin, Joel. “The Nuclear Regulatory Commission’s Reactor Safety Study.” The Bell Journal of Economics 7, no. 1 (Spring 1976): 317–39.
Zachmann, Karin. “Risk in Historical Perspective: Concepts, Contexts, and Conjunctions.” In Risk: A Multidisciplinary Introduction, edited by C. Klüppelberg, D. Straub, and D. Welpe, 3–35. New York: Springer, 2014.
Zuercher, Richard. “NYPA Slow to Correct Fitzpatrick Deficiencies, NRC Diagnostic Report Says.” Inside NRC, December 30, 1991, 3–4.
Index
accidents and accident risks, nuclear
  acceptable risk of, 23–24, 25, 52–53, 93
  AEC public reassurances, xiv
  antinuclear activist warnings of, 44, 86, 93
  ATWS, see Anticipated Transient Without Scram
  aversion factor, 89, 96
  beyond DBAs, see Beyond Design Basis Accidents
  blackouts, see station blackouts
  Chernobyl, see Chernobyl accident
  China syndrome, see China syndrome
  Class 9, see Class 9 accidents
  codes for modeling, 41–42
  consequences of, xv, xvii, 1–3, 4–5, 7, 17–20, 23, 26, 27, 30, 31, 35–36, 37, 38, 49, 50, 51, 54, 55, 56, 59, 60, 62, 63, 67, 75, 84, 86, 88, 89, 121, 198, 211, 213, 218, 225
  coping measures for severe, 213–214
  costs of, 77
  credibility of, see credible accidents; Design Basis Accidents
  data on, 23, 62
  Design Basis Accidents, see Design Basis Accidents
  decision/fault trees for, 20, 21–23, see also fault trees
  environmental impact of, 85
  external events, 87, 98, 100, 204–207, 209–211, 214, 218
  “Farmer Curve,” 24, 89
  fire, 98–99
  flooding, 210–211
  Fukushima, see Fukushima accident
  Hanford probability and consequence estimates of, 4, 6, 7–8, 10, 229
  human error in, see human factors
  Japanese PRA on, 281
  less than DBAs, 2, 18, 76, 77, 106
  lessons from, 162
  loss of coolant, see Loss of Coolant Accidents
  Maximum Credible Accident, 2, 13, 27, 227
  maximum hypothetical accidents, 2, 12, 20, 227
  mitigation of, 2–3, 31, 37, 50, 75, 89, 97, 187, 191, 211, 213–214, 235
  modeling of (codes), xvii, 10, 21, 23, 27, 37–39, 41, 42, 49–50, 53, 58, 62, 71, 72, 96, 123, 127, 128, 155, 157–160, 162, 166, 178, 206, 208, 213, 232, 237, 238, see also probabilistic risk assessment; names of individual codes
  vs. natural disaster, 68–69
  NRC drills for, 86, 249
  organizational factors, see organizational factors; safety culture
  precursor analysis, 8, 98–99
  prevention of, 2, 3, 27, 29, 31, 34, 36–38, 40, 77, 191, 213, see also defense in depth
  probability data and estimates of, xv, xvii, 12, 18, 20–21, 26–27, 34–36, 37, 39, 50–51, 52–53, 59, 87, 88, 91, 210, 281, see also probabilistic risk assessment
  public opinion as influenced by, 3, 219
  qualitative judgements on, 12, 13, 17, 31, 99
  reactivity accident (runaway), 4–5, 8, 10, 11, 23, see also Chernobyl accident
  risk-informed assessment and regulation of, 189–199
  risk quantification and estimates of, xvi, 5, 18, 27, 87, 230, see also individual studies of accidents
  risk triplet of, 1, 26, 207
  safety goals for, see safety goals
  in satellite SNAP reactors, 151
  seismic hazards, 16, 33, 35, 85, 87, 204–206, 210–211
  Soviet-designed reactors risk of, 161–169
  from terrorist attacks and sabotage, 5, 8, 52, 73, 213
  TMI, see Three Mile Island accident
  uncertainty/error in estimates, xv, xix, 1, 26, 27, 49, 59, 60, 62, 63, 66–67, 71, 77, 81, 88, 89, 90, 96, 195, 198
  WASH-1400 on, xiv–xv, 57, 59–60, 62–64.
  See also credible accidents; design basis accidents (DBA); incredible accidents; names of individual accidents; names of accident studies; loss-of-coolant accidents (LOCA); radiation releases
active safety systems
  AEC staff view of, 29
  in Canadian nuclear power plants, 18
  as defense in depth feature, 4
  ECCS, 27
  GE’s use of, 8, 29, 32, 34, 35
  and the maintenance rule, 113
  in Oyster Creek licensing decision, 16
  reliability of, 4, 5, 16, 27, 29, 32, 35
Advisory Committee on Reactor Safeguards (ACRS)
  ATWS issue and, 50, 52
  China syndrome debate, 29–34
  in civilian plant development, 11
  conservatism on safety, 14
  on conservative safety margins, 12, 31
  creation of, 8
  on ECCS systems, 27, 36
  and Ergen report, 34, 36
  Hanford reactors assessment by, 8
  vs. industry on safety, 32–33
  on loss-of-coolant accidents, 14
  on maintenance rule, 110–111
  and National Academies of Sciences study, 121
  on LOCA rule, 192
  on safety research, 31, 37
  Shaw’s opposition to, 37
  support for PRA, 90, 92, 94, 128
  support for safety goals, 90, 92, 96
  on WASH-1400, 66
aerospace industry
  Apollo program, 148, 150, 152
  fault trees in, 22–23, 62
  human factors profession, 116
  nuclear satellites, 151–152
  PRA as used by, xviii, 147–155, 218
  Rasmussen Report influence on, xvi
Alvin W. Vogtle Generating Plant, 203, 217
AmerGen Energy Company, 132
American Physical Society (APS), 66, 67
Anticipated Transient Without Scram (ATWS)
  and backfit rule, 83–84
  as a beyond-design-basis accident, 211
  conflicting estimates of, 64–65
  as a credible accident, 50–53
  NRC rule on, 83–84, 249
  and PRA, 58, 127–128
  PG&E report on, 240
  WASH-1400 on, 70–71
antinuclear activism
  among scientists, 66
  and Below Regulatory Concern, 184
  call for WASH-740 update, 53, 59
  concerns on risk-informed rules, 190
  on costs vs. safety, 138
  in Czech Republic and Austria, 168
  early organizations, xiii–xiv
  and ECCS controversy and hearings, 42, 44, 48–49
  and the environment, 54–55, 85
  Friends of the Earth, 85
  Fukushima as fuel for, 209–210
  Indian Point and Zion PRAs, 86, 88
  and Nader, Ralph, 49
  on the new NRC, 49
  Public Citizen, 143, 190
  on safety goals, 93
Index strength of mid-1970s, 65, 67 and WASH-740 report, 19UCS. see Union of Concerned Scientists (UCS) Apostolakis, George and the Brookhaven study, 123, 125 on Fukushima and PRA, 208 heads NRC task force on risk management, 215 optimism on PRA, 122–23 on organizational factors, in PRA, 125, 127 on quantifying safety culture, 127 work with living PRAs, 123 Arcos, Isidro Lopez, 164, 170 Argonne National Laboratory, 13, 40, 47 asbestos, 183 “As Low as is Reasonably Achievable” (ALARA), 177, 181–183, 185, 186 Asselstine, James conservative safety views of, 89, 96 as critic of PRA, 97, 108 on industry maintenance practices, 109 at Lehman Brothers, 144 on PRA for backfitting, 95 on safety goal policy statement, 96 Atomic Energy Act of 1954 “adequate protection” safety standard, 12, 87 dual mandate of AEC, 11 launches commercial nuclear industry, xvii and operational safety, 103 radiation protection in, 177 Atomic Energy Commission (AEC) on accident risk, xiv agency split, 46–47, 49, 67 and antinuclear activism, xiii, xiv, 48–49 beyond design basis accidents, 51, 59, 211 and Calvert Cliffs decision, 45–46 China syndrome debate, 33–34, 51 and Class 9 accidents, 54–55, 85 compared to Soviet regulators, 159 commission structure established, 11–12 commits to WASH-1400, 54, 55 see also WASH-1400 conflict with industry over safety, 17 data needs of, 42 DBA in safety, 2, 74, 83 DBA vs probability debate at, 34–36 deterministic thinking of, 24 Division of Reactor Development, 12 Division of Reactor Development and Technology, 31, 55 dual mandate of, 11–12, 31, 49, 72 early probability estimates, 20–21
  ECCS controversy and hearings, xiv, 37, 40–45, 49, 50
  ECCS system tests, 40–41
  vs. the EPA on risk assessment, 54–55
  Holifield appointments to, 19
  and human factors, 116
  impact of industry growth on regulation, 16–17
  independence of regulatory staff, 12
  influence of Cold War on agency’s Hanford operations, 5
  influence of production reactors on regulation of commercial reactors, 11
  interest in work of Chauncey Starr, 25
  LOFT Program of, 36–39
  low-level radiation, regulation of, 174, 176–177, 185
  motives to quantify risk, xvii, 50–55
  new commission leadership for, 46–48
  vs. NRC mission, 65
  Oyster Creek permit ruling, 16–17
  prevention vs. accident research debate, 37–38
  promotional/regulatory conflict, 11–12, 31, 46–48, 72, 73
  qualitative vs. quantitative risk assessments, xvii, 10, 12, 18, 25, 71
  regulatory staff
    vs. ACRS, 33
    ATWS issue, 52–53
    in a bandwagon market, xiii, 16
    and the China Syndrome, 31–32, 34
    data needs of, 42
    and defense in depth, 4, 27, 29, 31, 36
    in ECCS debate, 27, 37, 40–43, 49, 50
    and Mark I containment, 29
    probability estimates by, 50, 53, 149
    regulation of low-level radiation, 172
    Shaw vs., 31, 45
    skepticism of risk quantification, 18–21, 27
    transfer to NRC, 49
    WASH-1400 as informing, 59
    WASH-1400 response, 65
  safety standards of, 12–13, 14
  safety research of, 13–14, 36–39, 40–42, 49, 98
  office of safety research, establishment of, 47, 49
  siting criteria, 12
  SNAP reactors for satellites, 151
  static barrier distrust, 29
  updates WASH-740, 19.
  See also safety regulation; safety research
Atomic Energy Control Board of Canada, 126
Atomic Industrial Forum (AIF), 37, 82, 92
Atoms for Peace program, xvii, 11
Babcock and Wilcox Company, 105, 247
backfitting of reactors
  by AEC, 17
  criticism of rule, 80, 83
  for ATWS protection, 51–52, 84
  cost-justified, 96, 98
  in Japan, 209
  revised rule, 94, 95, 128
  safety goals to limit, 82, 91, 93, 96, 98
  Soviet-designed reactors, 163
  use of PRA for, 95, 248
bandwagon market, xiii, 14–16, 65, 81
Barnes, Dr. Valerie, 115, 200–201
Barrasso, Senator John, 212
Barriere, Michael, 126
barriers, containment, see containment buildings
Battelle Human Affairs Research Center, 118–119
Bayes Theorem, 22, 90
Beck, Clifford, 18, 20, 36, 51, 231
Beckjord, Eric, 34
Beedle, Ralph, 190
Bell, Hubert, 198
Bellefonte nuclear power station, 202
Bernero, Robert, 77
Bernthal, Frederick, 96
beyond design basis accidents, 50–51, 59, 82, 83, 85, 209, 211, 212, 217
Bhopal chemical disaster, xv, 146
Bickwit, Leonard, 91
Big Rock Point nuclear power plant, 132
blackouts, station
  as beyond DBA, 51
  as credible accident, 63
  at Fukushima, 205–206, 208
  probabilistic assessment of by vendors, 23
  regulations on, 84, 128, 211, 213
  Soviet designs for, 165
  WASH-1400 assessment of, 63
Blayais nuclear power plant, 211
Blix, Hans, 158–159
Bodega Bay nuclear power plant (proposed), 16
Bohunice nuclear power facility, 168
Boiling Water Reactor Experiment (BORAX), 13
boiling-water reactors (BWR)
  and ATWS, 50
  blackout danger, 206
  core-damage probabilities, 99–100, 234
  design model, 30
  failed scrams in, 50
  at Fukushima, 205, 210
  GE development of, 13
  and LOCA rulemaking, 192
  Mark I containment system for, 16, 27–29, 32, 100, 206, 214, 234
  Mark II containment system for, 100, 214
  Mark III containment system for, 100
  superior safety aspects of, 29
Boxer, Senator Barbara, 212
Bradford, Peter, 94
Bradley, Biff
  call for upgraded PRA, 203
  on risk-informed regulation, 191, 192, 193, 202, 216
Brand, Stewart, 201
Bray, Philip, 37, 64
Breyer, Stephen, 183
British Energy, 132
Brockett, George, 42
Brookhaven National Laboratory
  NOMAC modeling, 123, 126, 127
  NRC human factors study, 123–126, 199
  and WASH-740 update, 20
  PLG PRA review, 88
  research reactor at, 228
  updated safety culture methodology, 200
Browner, Carol, 183–184
Browns Ferry Nuclear Power Station
  fire at, 51, 67, 99, 104
  Mark I system photo, 28
  management issues at, 103
  scram failure, 84
Bryan, William, 62
Buchbinder, Benjamin, 154
Budnitz, Robert, 63, 74, 83
Burford, Anne Gorsuch, 178
Burns, Stephen, 219
Bush administration, 202
cancer
  acceptable risk of in toxins, 173
  from all causes, 181
  from chemicals, 177
  Delaney Clause, 273
  EPA regulation of, 177–180
  EPA vs. NRC estimates, 183
  LNT risk models, 149, 172, 176, 185–186
AEC requirements for, 36 at Big Rock Point photo, 15 breach of, 31–32 and China syndrome debate, 31–34, 38 in defense in depth, 4, 5, 11, 14, 31, 36, 97–98, 143, 231 dry vs. wet, 29, 100 vs. ECCS system, 34–36 Fukushima reactors failure of, 206 in French safety philosophy, 159 for LOCAs, 11, 14 in maintenance rule, 113 Mark I, 16, 27–29, 35, 100, 206 Mark II, 100, 214 Mark III, 100 model in PWRs and BWRs, 30 PLG study and, 87 postulated failure of, 12, 20, 50 rate of failure, 213 in safety goals, 96, 97–98 NRC study of (SOARCA), 213 Soviet-designed reactors lack of, 161, 165, 168, 169 at Three Mile Island, 75, 87 venting of, 89, 91, 100, 214 WASH-1400, 62, 98 West German criteria on, 272 control room design, 82, 116, 117, 141 Convention on Nuclear Safety, 167–168, 170 Cook, James, 78, 79 coolant accidents. see loss-of-coolant accidents (LOCA) core catchers, 32, 34, 36, 91 core cooling. see Emergency Core Cooling Systems (ECCS) core damage frequency (CDF), 97 Council on Environmental Quality (CEQ), 55, 85 Crane, David, 215 credible accidents “adequate protection” standard against, 12 ATWS, 50–53 core meltdowns, 32, 62, 237 criteria for, 43 DBAs, 2 from ECCS system, 27, 39 and Fukushima accident, 207 vs. “incredible,” 1, 17–18, 37 lack of regulatory guidance on, 91 LOCAs, 13–14, 31, 141 qualitative judgements on, 12, 13 PRA uncertainties and, 89
  runaway reactivity, 4
  scram failure, 50–53.
  See also accidents, nuclear; Design Basis Accidents; Maximum Credible Accidents
Curtiss, James, 111
Czech Republic, 164, 168, 169, 171
Davis, Kenneth, 17
Davis-Besse nuclear power plant
  event similar to TMI accident (1977), 77
  loss-of-feedwater event (1985), 105–108, 120, 123
  vessel-head erosion event (2002), 193–199
  influence on safety culture, 199–201
Decision Research group, 92
decision trees, 21, 27, 56, 233
decommissioning plants
  acceptable radiation at, 146, 173
  cost concerns, 132
  EPA/NRC controversy over, 147, 173, 181–187, 218
  predicted wave in 1990s, 144
  residual radiation criteria at, 172, 183–185
  Superfund sites, 178
defense in depth
  ACRS view of, 14
  barriers for coolant accidents, 11
  containment buildings for, 14, 231
  Division of Reactor Development and Technology report on, 54
  ECCS as last line of, 40
  and Ergen Report, 38
  evolution of concept, 143–144, 217
  Fukushima challenges to, 211
  GE call to change concept, 29, 32, 34–36
  key elements of, 1, 3–5, 14
  lack of regulatory criteria for, 91
  legacy, 217
  LOCA rulemaking and, 192
  and maintenance rule, 112
  in NEI safety philosophy, 141
  PRA balanced with, 97, 100, 215, 219
  PRA policy statement and, 128
  in risk-informed regulation, 100, 143–144
  in the ROP, 142
  Soviet reactors lack of, 166–167
  at TMI accident, 75, 77
  standard in evaluation of Soviet reactors, 157, 161, 165–169, 171
“Delaney Clause,” 273
Denton, Harold, 86
Department of Defense, 9, 151
Department of Energy
  aid for permitting process, 202
  aid for second nuclear era, 144, 202, 203
  backfitting report, 80
  establishment of, 49
  “near death” bill for, 139
  on nuclear power for global warming, 203
  on regulatory burdens for industry and plant cancellations, 79
  requests for assistance from CEE nations, 162
Department of Justice, 199
de Planque, Gail, 185
deregulation, power markets
  antinuclear movement and safety concerns over, 190
  conflict between NRC and industry over, 132–133
  impact on power markets, 202
  industry influence, 131–132
  positive influence on operations, 144, 201
  safety concerns in era of, 132, 137–138, 144
design basis accidents (DBA)
  adequate protection and, 83
  credible vs. incredible accidents, 17–18
  criticisms of, 17, 34, 35, 64
  excessive pessimism in, 213
  Fukushima accident influence on, 207
  legacy of, 217
  LOCAs, 11, 13–14, 63
  LOCA rulemaking and, 141
  PRA as supplementing or replacing, 59, 64, 70, 76, 85, 116
  based on engineering judgment, 2
  probability approach vs., 34–35, 38
  safety as determined by, 2, 38
  and special treatment requirements, 190
  TMI accident influence on, 77
design safety, new reactors
  ATWS challenge to, 51
  in a bandwagon market, 16
  conservative margins in, 12
  DBA vs. probabilistic approach, 34–36
  defense in depth, 3
  deficiencies in Soviet-designed reactors, 161
  deterministic design, 1–2
  differing systems for, 32
  ECCS modifications, 45
  “Farmer Curve” for, 24
Index fault trees for, 23 and increased output, 17 PRAs for, 27 qualitative vs. quantitative, 9–10, 11 quality assurance vs., 31 reliability prediction methods, 9–10 testing, 13. See also Three “D”s of safety deterministic design and analysis of CEE reactors, 165, 166, 272 conservatism in, 2, 14 criticism of, 82, 113, 140, 216 favored by antinuclear movement, 190 French regulatory perspective on, 159 Fukushima’s impact on, 207–209 at Hanford production reactors, 9, 10 lack of regulatory criteria for, 91 for LOCAs, 14 PRA policy statement and, 128 in risk-informed regulation, 83, 90, 128, 190, 199, 218 risk triplet of, 1. See also, Three “D”s of safety Diablo Canyon nuclear plant, 85–86, 123 Diagnostic Evaluation Teams (DET), 123–125, 126, 143 Diaz, Nils, 139, 142, 200 Dickman, John, 64 Dicus, Greta, 186 Division of Reactor Development and Technology (RDT), 31, 54, 55 Domenici, Pete, 129, 130, 139, 202 Domenici-Jackson meeting, 129–131, 140, 141, 188 Doub. William O., 46, 48 Drabova, Dana, 170 DuPont Corporation, 2, 4, 5 earthquakes and common-mode failures, 35, 52 Diablo Canyon, 85–86 first probability estimates, 228 Fukushima accident, 204–206 influence on U.S. regulation of, 210–212 Japan’s readiness for, 204, 209 NUREG-1150 and, 100 PLG PRA and, 87–88 PRA and, 208, 226 risk assessment of at Hanford, 5, 7, 8, 232. See also, accidents and accident risks, nuclear, seismic hazards
Eisenhower, President Dwight, xvii, 11, 130
Electric Power Research Institute, 202
Emergency Core Cooling Systems (ECCS)
  acceptance criteria for, 42–43, 189
  AEC distrust of, 27, 29
  China syndrome debate, 32–39
  in defense in depth, 40
  vs. containment buildings, 36
  core accident odds, 237
  rulemaking hearings, xiv, 44–50, 62
  in Soviet-design reactors, 161, 168
  LOCA and, 141
  LOCA risk-informed rulemaking and, 192, 198
  LOFT tests and code development of, 37, 38, 40–41, 42
  RELAP computer code and, 41
  SATAN computer code and, 41
Energy crisis, 19, 64, 66
Energy Reorganization Act of 1974, 49
Entergy Nuclear, 132, 144, 188–189, 201
environmental hazards
  chemicals, 177, 179
  cost-benefit regulation of, 172
  “Delaney Clause,” 273
  public priorities on, 179
  low-dose regulation, 177–187
Environmental impact statements
  Calvert Cliffs decision, 45
  and Class 9 accidents, 54–55, 86
  at Indian Point and Zion power plants, 85
  for nuclear powered satellites and Space Shuttle, 151–152
Environmental Protection Agency (EPA)
  concentration and pathway limits, 177–178, 180, 182–183
  criticism of AEC on Class 9 accidents, 54–55
  deference to NRC regulation, 173, 180
  embrace of PRA, xviii, 178
  first risk assessment by, 274
  LNT model and, 175
  memorandum of understanding with NRC (1992), 180–181
  memorandum of understanding with NRC (2002), 187
  vs. NRC on decommissioning regulations, 146–147, 173, 180–187, 218
  plant decommissioning, 145
  radiation safety rules, 177–182
  radiation scrutiny, 54–55
  risk quantification and management policies, 149, 171–174, 177–180
Enzi, Senator Mike, 184
Epler, E. P., 50, 51
EQE International, 165–166
Ergen, William, 34, 36, 38
Ergen Report, 36–38
European Commission, 168
European Union, xix, 146, 156, 167–170
event trees, 58, 61, 62, 70, 151
Exelon Corporation, 132, 209–210, 213, 215
experts
  advisory panels on, 269–270
  biased risk perception among, 72, 73, 218
  in postulating design basis accidents, 141
  elicitation process, 77, 99
  nuclear experts hired by NASA, 154–155
  maintenance rule and expert panels, 112
  public trust in, 92, 93, 179, 219
  weakening of AEC by divisions among, 47–49
Failure Mode and Effects Analysis (FMEA), 150, 266
Farmer, F. R., 23, 24, 58
“Farmer Curve,” 23, 24, 89
fault trees
  development by nuclear industry, 23, 26–27, 50, 232
  fault/event tree combined, 61, 62
  invention by Bell Laboratory, 22–23
  modeling for scram failures (ATWS), 53
  NASA development of, 150, 155
  NRC development of, 99
  promotion of by GE, 35
  use in WASH-1400, 56, 58
  weaknesses of, 62
Federal Aviation Administration (FAA), 109
Federal Energy Regulatory Commission (FERC), 131
Federal Food, Drug and Cosmetic Act, 273
Feynman, Richard, 147, 148
figure of merit, xv, xvi, 27, 225
financial issues, nuclear
  accident costs, 77
  construction/operation costs, 79, 214
  cost/safety relationship, 137–138
  decommissioning site cleanup, 183
  deregulation costs, 132
  regulation cost/benefit, 83–84, 90
  risk-informed rule costs, 188–189
FirstEnergy Nuclear Operating Company (FENOC), 194–200
Fischhoff, Baruch, 92
FLEX strategy, 213–214
Food and Drug Administration (FDA), 149, 172, 175
Ford, Daniel
  in ECCS controversy, 44–45
  photo of, 63
  as UCS leader, 62
  on WASH-1400, 73
Fort Calhoun nuclear power plant, 211
fossil fuel plants, 54, 79, 105, 116, 123
fracking, 215
Framatome, 195, 197
Freedom of Information Act, 59
Freeman, S. David, 17
Freudenberg, William, 122
Friends of the Earth, 85, 86, 88
Fudai, Japan, 205
Fukushima Daiichi accident, xvi, xviii, xix
  curies released, 5, 7
  description of, 204–211
  human factors in, 208
  industry impacts, 214–215
  impact on use of PRA, 171, 209
  influence on support for nuclear power, 74, 208
  international response to, 207–208
  NRC regulatory response to, 211–216
  PRA questions/answers from, xix, 74
Fukushima Daini station, 205, 213
Galatis, George, 135–136
Garrick, John
  commercial PRAs of, 27
  on early PRA quality, 21
  fault tree dissertation, 26
  on the NAS committee, 154
  on NRC staff view of risk-informed regulation, 189
  on Oyster Creek and Indian Point PRAs, 87
  on PRA advances since 1960, 90
  on quantifying accident credibility, 218
General Accounting Office (GAO)
  call for safety culture methodology, 199
  Davis-Besse event report, 198
  on federal radiation standards, 181
  on lax NRC oversight, 135
  on NRC backfits, 80
  on NRC oversight process, 138
  on NRC risk-informed regulation, 199
  on PRAs of natural hazards, 214
General Electric Company (GE)
  in ATWS debate, 50–52
Index in the bandwagon market, 15, 78 and BWR safety, 29, 99 China syndrome debate and, 32 ECCS promotion, 29, 34 at Hanford, 4–10 22 Mark I system and, 27, 234, see also Mark I containment system NuStart consortium and DOE, 202 Oyster Creek order, 15 probabilistic approach of, 24, 34–36 quantitative safety promotion, 10 reliability prediction methods, 9 use of AEC research in developing BWR design, 13 use of fault trees, 23 Gilinsky, Victor, 90, 94 Glenn, Senator John, 181 global warming, 201, 202 Gore, Vice President Al, 163 Gossick, Lee, 66 Graham, Senator Bob, 142 Grand Coulee Dam, 5 Gravel, Senator Mike, 53–54, 55, 241 Green Party, 168, 171 Greenpeace, 168 Greifswald nuclear power plant, 161, 272 groundwater contamination, 180, 183 Group of 7 industrialized nations (G-7), 163 Gunter, Paul, 93, 198 Gupta, Olivier, 207 Haagensen, Brian, 124, 143 Haber, Sonja, 123, 125, 126, 200 Haddam Neck nuclear power plant, 184 Hanauer, Stephen, 238 in ATWS debate, 52–53 in ECCS debate, 32 as ECCS task force leader, 42–43 on Mark I containment, 100 on PRA and transparency, 83 on probability estimates, 27, 53 in WASH-1400 planning, xiv-xv, 56, 59 Hanford Engineering Works, reactors accident probability at, 229 aerial view of, 9 ATWS-like event at, 51 B reactor photo, 9 earthquake risk study, 232 fault trees for, 23 “green run,” 5, 7 influence on civilian reactor design, 11 K reactors, 7–8 K reactors photo, 9
  map of, 6
  N reactor photo, 9
  quantitative risk work at, 8–10, 229
  reliance on the three “D”s, 1
  safety concerns at, 4–5, 8, 13, 227
  safety design of, 2
  siting and safety, 2, 5
Healy, J. W., 7, 233
Henderson, Karen, 160, 164, 167, 168
Hendrie, Joseph, 74, 86, 94
high reliability organizations (HRO) scholarship, 121–122
Hiser, Alan, 195
Holahan, Gary, 97, 190, 199
Holifield, Congressman Chet, 19–20, 31, 47, 48, 67, 70
Hollingsworth, R. E., 57
Holmes & Narver, 20, 21, 23, 232
Hosmer, Congressman Craig, 33, 48, 60, 67
human factors/error
  in ATWS debate, 52
  at Fukushima, 208
  importance of, 50, 63, 120
  NRC faulted for ignoring, 76
  NRC research into, 77
  studies on, 116–117, 119
  at Three Mile Island, 75–76
  in WASH-1400, 63.
  See also organizational factors
Humboldt Bay Unit 3 nuclear power plant, 52
Idaho National Reactor Testing Station, see Idaho National Laboratory
Idaho National Laboratory, 13, 14, 37, 40–42, 47–48, 49, 155, 237, 239
incredible accidents
  containment failure considered as, 20
  Class 9, 54–55, 85
  vs. credible, 17–18, 50–51, 89, 208
  Fukushima and rethinking of, 204, 207
  large pipe breaks, 191–193
  industry view of, 33
  meltdowns as, 237
  qualitative judgements on, 1
Indian Point nuclear power plant, 33, 45, 86–90, 152
Indian Point Probabilistic Safety Study, 88, 152
Inhofe, Senator James, 140, 143, 184, 212
Institute of Nuclear Power Operations (INPO), 2, 105, 108, 111, 120, 126, 199, 201
International Atomic Energy Agency (IAEA)
  CEE nation reactor reviews, 161–162
  Chernobyl conclusions, 159–160
  evaluation of NRC regulations, 203
  international safety criteria for, 158
  Kozloduy facility review, 161, 162
  post-Fukushima report, 209, 226
  PRA initiatives, 209
  on safety culture, 120, 126, 199–200
  standardization of Soviet-reactor PRA reviews, 166
  standards convention, 167
International Commission on Radiation Protection (ICRP), 175
International Energy Agency, 214
international industry
  after Chernobyl, 158–159
  engineer-influenced politics, 269
  French regulatory system, 159, 170, 273
  Fukushima concerns/aid, 206–207
  globally accepted standards, 269
  maintenance models, 109
  mistrust of PRA, 159
  and national sovereignty, 157, 159, 167
  NRC safety goals and, 98
  plant construction costs, 79–80
  in post-Soviet Eastern bloc, 156–171, 268
  and probabilistic approach, 18, 35
  probabilistic models, 58
  regulator-licensee relations, 103
  safety common ground, 170
  safety culture in, 114, 126
  siting issues, 18, 23
  use of PRA, 145
International Nuclear Safety Advisory Group (INSAG), 114
Iodine-131, 5, 23, 24
Jackson, Shirley
  background of, 130
  on deregulation and safety, 132–133, 137–138
  Domenici meeting, 129–131, 139, 141
  EPA conflict with, 183
  industry relations, 138
  NRC staff meeting on risk-informed regulation, 142
  on nuclear power values, 142
  proponent of nuclear power, 142
  on risk-informed regulation, 128, 133, 140, 189
  U.S. Senate vs., 139
Jaczko, Gregory, 203, 211, 212
Jersey Central Power & Light Company, 15, 16
Joint Committee on Atomic Energy
  vs. the ACRS, 33
  and Atomic Energy Act of 1954, 177
  and AEC reorganization, 48
  close relationship with AEC, 46, 48
  competition with other congressional committees, 55
  control over nuclear issues, 55
  dissolution of, 47, 49, 67
  vs. environmental groups, 55
  Holifield, Chet and, 19, 47
  new leadership for, 47–48
  sympathetic to nuclear industry, 33
Joskow, Paul, 144–145
Kahneman, Daniel, 26
Kaplan, Stanley, 90
Kaptur, Congresswoman Marcy
Kemeny Commission, 102, 117
Kemeny, John, 76
Kendall, Henry
  founder of UCS, 53, 62
  photo of, 63
  on WASH-1400, 63, 73
Kennedy, President John F., 46, 151
Kessler, Carol, 160, 166, 168
Kewaunee nuclear power plant, 215
Kirienko, Sergei, 208
Klein, Dale, 202
Kozloduy nuclear power plant
  calls to close facility, 163, 167
  closure, 169
  PRA analysis of, 165–166, 169
  restart plan for, 166
  power supply to Bulgaria, 161
  safety conditions at, 161–163, 167
LaPorte, Todd, 121
Lapp, Ralph, 45
Large Early Release Frequency (LERF), 97
Lederman, Luis, 166
Legasov, Valery, 162
Levine, Saul
  call for AEC safety study, 55
  and computer modeling at Idaho, 42
  on need for safety goals, 91
  on NRC use of PRA, 74–75, 81
  photo of, 58
  safety goals, 91
  work on WASH-1400, 57–59, 65
  on worth of PRA, 59, 74
Levy, Solomon, 29, 35, 36
small break LOCAs, 75, 76 in Soviet-design reactors, 161, 165 testing and research for, 13–14, 40–41, 49 Three Mile Island as, 75. See also, containment; credible accidents; design basis accidents; Emergency Core Cooling Systems Lovelock, James, 201 Lyman, Edwin, 216 Macfarlane, Allison, 216 Maine Yankee Nuclear Station, 131, 136–137 Magwood, William, 204 maintenance and Davis-Besse feedwater incident, 105–108 and Davis-Besse vessel-head erosion, 195 industry practices in 1980s, 104–105 limited regulation of, 104–105, 110 and post-TMI NRC review, 254 during power operations, 256 vs. productivity, 105 in risk-informed regulation, 143, 190, 201, 218 rule, 108–114 international influence, 256 and license renewals, 113, 133, 144 success of, 112–114, 127, 134 scram rates, 84 TMI accident and, 74, 254 WASH-1400, 63, 76 Malibu nuclear power plant proposed, 16 man-machine interface, 116 Markey, Congressman Edward, 198, 210, 212 Mark I containment system, 16, 27–28, 32, 100, 206, 214, 234 Mark II containment system, 100, 214 Mark III containment system, 100 Marshall Space Flight Center (NASA), 153 Materials Test Reactor (MTR), 13 Mattson, Roger on ECCS tests, 40 on Hanauer, 43 in licensing reviews, 42 on meltdown probability, 237 modeling code research, 42 on NRC commitment to research, 50 on Shaw’s relationship with regulatory staff, 45 Maximum Credible Accident, 2, 227. See also, design basis accidents; credible accidents
Maximum Hypothetical Accident, 2, 227 McGaffigan, Edward, 140, 142, 186 McGaha, John, 188 McNeill, Corbin, 131, 133, 138 meltdown, core ACRS advocates research on, 37 and BWR vs. PWR function, 29, 30 and backfit rule, 95 at Chernobyl, 97 China Syndrome of, 31–39 core catcher for, 32, 34, 36, 91 ECCS testing and modeling, 40–42 at Fukushima, 206–207, 213 Hanford consequence study of, 7 interim NRC policy statement on environmental impact statements and, 85 limited research on, 38 LOFT test of, 37 Mark I susceptibility to, 28, 100, 206 NUREG-1150 on, 99–100 precursor analysis, 98–99 post-TMI research on, 87 probability of, 62, 209–210 regulatory staff view as not credible, 237 safety goals, 95–98 at Three Mile Island, 75 WASH-1400 on, 59, 60, 62, 244 Meserve, Richard, 186–187, 189 Michigan Consumers Power Company, 78 Midland nuclear power plant proposed, 78 Miller, Daniel, 254 Mills, Daniel, 206 Millstone nuclear power plant, 132, 136–137, 138 Minamisanriku, Japan, 205 Mintzberg, Henry, 123 Miraglia, Frank, 74 Moore, Patrick, 201 Moray, Neville, 121 Morris, Scott, 128, 135, 143, 144 Morton-Thiokol Corporation, 153 Muller, H.J., 174 Muntzing, Manning, 46, 59 Murkowski, Senator Frank Murley, Tom on Class 9 exclusions, 55 on NRC oversight, 135 on Shaw, 47 work on WASH-1400, 56–57, 59, 60 Nader, Ralph, 49 NASA Apollo 13 mission, 150
Apollo program, 62, 150 and Challenger disaster, xviii, 147–148 and Columbia disaster, xviii, 155 culture at, 148, 153–154 development of PRA in 1960s, 150 estimate of Shuttle launch failure probability, 148–149, 152 Failure Mode and Effects Analysis/ Critical Issues List, 150, 154, 155 Galileo space probe, 154–155 International Space Station program, 155 leadership at, 154–155 management, 148 NAS panel on NASA risk management, 154 NRC and nuclear industry influence on, xviii, 146, 154, 155 nuclear safety model use, 146 Office of Safety, Reliability, and Quality Assurance, 154 PRA use at, xviii, 145, 147–155 PRA use for nuclear powered satellites (SNAP), 149, 151 qualitative safety approach, 149, 151, 152, 153 risk perception, 149 skepticism of PRA, 62, 147, 148–149 Wiggins study debate, 149, 152–153 National Academy of Sciences reports, 121, 122, 154 National Audubon Society, 88 National Council on Radiation Protection and Measurements (NCRP), 174 National Energy Policy Act of 1992, 131 National Environmental Policy Act (NEPA), 45, 54, 55, 85 National Reactor Testing Station, see, Idaho National Laboratory National Science Foundation, 90 “near death” experience of NRC, 130 Near Term Task Force (Fukushima accident), 211 New York Times, 44, 46, 48, 72, 130 NFPA 805, 203, 204, 216 Niebuhr, Reinhold, 46 Nixon, President Richard, 46, 47 North Ana nuclear power plant, 210 Northeast Utilities Company, 135–136 nozzle stress-corrosion cracks, control-rods, 193–199 NRG Energy Company, 191 Nuclear Energy Institute (NEI) criticisms of NRC regulations, 134, 140 on deregulation, 133–134
post-Fukushima, 212PRA’s in, xvi, xviii, 27, 97, 129, 198, 210, 219 Price-Anderson Act and, 54–55, 67 promotion vs. regulation in, 11–12, 33, 46 public relations of. see public relations Reactor Oversight Process and, 142–143 and reactor safety, 17, 37, 50, 105, 190 voluntary initiatives on, 211 regulations to encourage growth of, 12 vs. regulators, 102–104, 110–112, 115 risk-informed regulation and, 113, 128, 133, 141, 188–189, 190–191, 192–193, 216–217 and safety goals, 82, 87, 92, 94–96 safety culture in, 114, 115, 120, 126, 136, 199, 201 SALP and, 123–124, 134–135 second nuclear era, 100–101 Three Mile Island effects, 75, 76 Towers Perrin report and, 134 21st century, 144, 201–204 WASH-1400 response to, 63–65 WASH-1400 fallout, 73 See also names of individual companies; Atomic Industrial Forum; Institute of Nuclear Power Operations; Nuclear Energy Institute Nuclear Information and Resource Service, 198 Nuclear Organization and Management Analysis Concept (NOMAC), 123, 126 Nuclear Regulatory Commission (NRC) Accident Sequence Precursor Program, 99 adversarial relationship with industry, 79, 102, 103, 110, 114, 115, 119, 127, 129, 140 antinuclear activists and, 49, 65 ATWS and, 71 Brookhaven researchers vs., 124 criticism of agency structure, 142 criticizes nuclear industry, 79 Davis-Besse vessel-head erosion event, 194–199 Domenici-Jackson meeting, 129–131, 141 vs. the EPA, xviii, 173, 180–187 establishment of, 49 Friends of the Earth petition on Class 9 accidents, 85 Fukushima response, 211–216 growth of, 49
Nuclear Regulatory Commission (NRC) (continued) and human factors, 76–77 independence of, 65 Indian Point/Zion petition, 86 industry criticism of, 66, 132–135 and nuclear renaissance, 101, Office for Analysis and Evaluation of Operational Data, 99 Office of Nuclear Regulatory Research, 49 oversight overhaul of, 139–142 political pressure on, xvii PRA support for, xvi, 76, 77, 79, 80–81, 83,128, 219 PRA wariness of, 73–74, 82 regulation reduction, 120 regulators vs. licensees, 102, 110, 115, 119 regulatory stability search for, 50 regulatory staff risk-informed regulation, see risk informed regulation role in industry woes, 78–80 safety goals, quantitative, 91–92 safety research, 49, 50, 77, 99 SALP and watch lists, 134–141 severe accident research, 98–100 60 Minutes episode, 65 Soviet-era reactors assistance, 156–171 TMI accident and, 74–75, 76 WASH-1400 Commission distancing from executive summary, xv, 70, 72, 74 WASH-1400 and, 66–67, 70. See also risk-informed regulations; safety regulations; individual reports nuclear renaissance cost of new reactors, 203, 214–215 Fukushima impact on, 209, 215 license applications, 202, 203, 215 NRC regulatory reform for (Part 52), 101 political support for, 202, 203 predictions of pro and con, 144, 201 nuclear powered satellites (SNAP), 151–152 nuclear powered submarines, 104, 110 Nucleonics Week, 33, 47, 67 NUREG-1150 study, 99–100 NuStart Energy Development, 202 Nyer, Warren, 34 Oak Ridge National Laboratory, 40, 41, 47, 52 Obama, President Barack, 203
Occupational Safety and Health Administration (OSHA), 172, 175 Office of Safety, Reliability, and Quality Assurance (NASA), 154 Okrent, David on the ACRS, 33 call for numerical safety goal, 90 on LOCA, 31, 32 NSF risk assessment project, 90 on risk judgements, 13 O’Leary, John, 62 Onagawa nuclear power station, 205 “one in a million” risk standard ATWS as, 51–53, 55, 71, 84 Below Regulatory Concern, 184 at EPA, 178–180, 185 GE design standard at Hanford, 10 at Knolls Atomic Power Laboratory, 230 PLG study, 88 popularity of in federal agencies, 149 as public tolerance level, 64, 93 for radioactive releases, 94, 96 Rasmussen Report estimate of, 59–60, 64 on the Reactor Safeguard Committee, 7 regulatory staff belief in, 63, 237 in safety goals, 93, 94, 96, 97, 180 for Superfund sites, 180 Supreme Court, 173 organizational factors Battelle studies on, 118–119 Brookhaven research on, 123–125 difficulty in quantifying, 127 high reliability organizations research, 121–122 importance of in plant operations, 120 international use of, 126 maintenance and. see maintenance management, 78, 121, 136, 178 NAS report and, 121 PRA use and, xviii, 112, 122–123, 125–126 safety culture, 114–127, 136–137 safety-related equipment, 190–191 in Three Mile Island accident, 75–77, 117. See also human error; human factors; safety culture Oyster Creek nuclear power plant, 15, 16, 17, 87, 132 Pacific Gas & Electric Company, 16, 123 Palladino, Nunzio as chairman of the NRC, 94 on meltdown probability, 97
in safety research program, 37 “small dose problem,” 183–185 in Soviet-designed reactor closure decisions, 156–158, 168, 169 support for nuclear renaissance, 202, 203 in WASH-1400, xv, xviii, 53–54, 55, 66, 217 “Westernization” of technology, 269 See also, nuclear industry, U.S.; antinuclear activism Pollard, Robert, 65 pressurized-water reactors (PWR) containment of, 100, 165, 206 cooling systems of, 105 core-damage probabilities compared to BWRs, 27, 29, 99–100, 206, 234 Davis-Besse plant, 105, 193–199 design model, 30 ECCS effectiveness and, 41 LOCA rulemaking and, LOCA tests for, 41 nozzle stress-corrosion cracking hazard, 193–199 station blackout and, 165 Soviet, 156, 161, 165, 166, 168, 269 special treatment rule and, 203 Three Mile Island, 74, 247 Westinghouse development of, 13, 29 See also, Westinghouse Corporation Price, Harold, 11–12, 34 Price-Anderson Act, 54, 55, 241 probabilistic risk assessment (PRA) acceptable risk definitions, 52–53 accident-chain analysis, 8 AEC skepticism of, 27 as aid to industry, 79 on ATWS, 64–65 for backfit analysis, 80, 96 Brookhaven study and, 123, 125–126 challenges to, 218–219 CEE nations view of, 170 China syndrome influence on, 38–39 of Class 9 accidents, 55 communications tool, 24–26, 54, 57–58, 64, 73, 189, 219 confidence as disservice to, 72 cost of, 150, 170, 189, 218–219 critics of, xviii, 62, 74, 80, 92, 93, 108, 138, 140, 207, 208, 210, 216 cross-disciplinary nature of, xviii data needed for, 77, 83, 90, 192 defense in depth improved understanding of, 143 Davis Besse assessments, 108, 198
probabilistic risk assessment (PRA) (continued) Davis Besse vessel head erosion event and, 195, 198 as a diplomatic tool, xviii, 146–147, 156, 157, 171, 218 early AEC attempts, 20–21, 50 EPA interest in, 54 EPA vs. NRC use of, 171–187 of failed scrams, 51–53 figure of merit for, xvi “Fukushima Fallacy,” 216 Fukushima impacts, 207–210, 214, 216 Garrick’s work with, 26–27 GE’s promotion of, 10, 34–35 General Accounting Office report on, 198 at Hanford, 8, 10 Individual Plant Examinations, 100 as intellectual technology, xvii international use, 18, 159, 170–171, 281 legacy of, 217–218 Lewis Committee on, 72 license renewals, 113 limitations of, 8, 62, 83, 89, 90, 95 as lingua franca, 115, 122–123, 171 “living PRAs,” 112, 122, 166 for maintenance rule, 110–112, 113 modern day issues, xix NASA’s use of, 62, 147–155 NFPA 805, 204 NRC distancing from, 73, 74 NRC policy statement on, 73, 127–129 and NRC staff, 66, 74, 83, 128 NRC support for, 49, 76, 122, 133, 219 nuclear power development and influence on, xv-xvi, 23, 26 nuclear industry development of, 27, 87–89, 112, 113, 133, 191 nuclear industry skepticism of value of, 189, 203 in NUREG-1150, 99–100 operating experience/database for, 192 in other industries, xvi, xviii–xix, 145, 146 for plant siting (Indian Point/Zion), 87–90 PLG study, 86–90 as a policy/political tool, xix, power of, 208 practical applications, 143 qualitative assumptions for, 71 qualitative vs. quantitative, 7 quality of industry PRAs, 129, 190–191, 192, 194, 198, 203, 211, 219
in Reactor Oversight Process, 142 in real-time operation and risk monitors, 112, 113, 122, 256 reliance on, xvi–xvii for risk-based regulation, 82, 90 for risk-informed regulations, xvi, 83, 127–129, 130, 140–141, 143, 189, 215, 217 for risk management, 83, for safety culture and organizational factors, 114, 122–127 for safety goals, 91–98 sample, 61 in second nuclear era, 101 of seismic hazards, 214, 228 severe accident regulation, 98–100 social science and, 115–116, 122, 123 for Soviet-era reactors, 156–160, 162, 164–166, 168–169 Starr, Chauncey advocates for, 25–26 station blackouts, 206 STP use of, 133 in 10 CFR Part 52, 101 vs. the three Ds, 17–18, 38–39, 75, 100 Three Mile Island accident influence on, 74–77, 81, 91 transparency of, 10, 80, 83 uncertainty and, 83, 89, 90, 218 in WASH-1400, xiv,-xvi, 50, 50–60, 72 WASH-1400 setbacks for, xv, 72–74 wisdom of, 81 See also, risk-informed regulation; names of individual reports Protsenko, Alexander, 160 Public Citizen, 138, 143, 190 public opinion, nuclear power, xiv, 3, 65, 67, 73, 74, 79, 179, 202, 217 public relations accident probabilities and, 20 accidents as driving, 219 ATWS, 128 China Syndrome and, 33–34 on credible vs. incredible accidents, 23 distrust of AEC-industry alliance, 48 early nuclear support/concerns, xiv ECC hearings and, 44, 45 1970s leadership distrust, 65 perception of risk, 73–74, 89, 179 post-WASH-1400, 73 risk tolerance and, 25–26 safety goals as serving, 92 21st century, 201–202 WASH-1400 as aiding, 59–60
“Farmer Curve,” 23 federal limits on, 181 of low-level releases, 174, 175, 176–177 LNT model use of, 172, 175–176 mitigation requirements after 9/11 attacks, 213 from nuclear satellites, 151–152 “one in a million” risk standard, 180 and plant siting, 88 SOARCA study, 213 residual at decommissioned sites, 172 REM units, 177, 182 safety goals for, 96–98 See also, Linear no-Threshold Model (LNT) Radiation, release during Chernobyl accident, 75, 158 containment buildings and, 31–32 in design basis accidents, 2, 5, 20, 51 Fukushima accident and, 5–7, 206 GE probability estimates, 230 “Green run” at Hanford, 5 level 3 PRA, 96 low-level emissions, xiv, 172, 176–177 estimates of Hanford accident, 7, 51 from satellite launches (SNAP), 151 in Soviet-designed reactors, 161 during TMI accident, 5, 75 in urban locations, 86 Raiffa, Howard, 21, 56, 178, 233 Ramey, James AEC career ending, 47, 48, 60 influence of, 46 pronuclear views of, 19, 46 and WASH-740 update, 19 on WASH-1400, 57, 59, 60 Ramsey, Jack, 162, 164, 166–167, 171 RAND Corporation, 22, 56 Rasmussen, Norman on human error, 76 on NRC embrace of PRA, 81 photo of, 58 response to WASH-1400 criticism, 62, 67, 72 on safety goals, 91 work on WASH-1400, xiv, xv, 56–60, 178 “Rasmussen Report.” see WASH-1400 Rathbun, Dennis, 139 Ray, Dixy Lee, 19, 46, 47–48, 60, 240 RBMK reactor technology, 161, 163, 269 reactivity coefficients Canadian reactors, 18 negative vs. positive, 3, 4 in commercial reactors, 11
Reactor Loss-of-Coolant Accident Program (RELAP), 41, 49 Reactor Oversight Process (ROP) effectiveness of, 143–144 in Japan, 209 NRC investment in, 189 nuclear industry and, 142 nuclear renaissance and, 201 in risk-informed regulation, 190 safety culture in, 199, 200–201, 218 as SALP replacement, 142 UCS support for, 143 reactor plants, non-safety issues canceled orders for, 77–78 capacity increases, 113 decommissioned. see decommissioned plants license renewals, 113, 132 location of. see location, reactor man-machine interface, 116 mismanaged construction of, 78–79 operating costs, 79 permit review issues, 16–17 scaling up size, 17 as sociotechnical systems, 121 Soviet, 156–171, 268, 272. See also boiling-water reactors (BWR); pressurized-water reactors (PWR); names of individual plants Reactor Safeguard Committee on design safety, 3 Hanford accident risk concerns, 4–5 isolation siting formula, 6 merged into ACRS, 8 one in a million risk goal, 7, 90, 149 probabilistic seismic estimates, 228 Reactor Safety Study. see WASH-1400 Reagan, President Ronald, 93–94, 178 redundancy ACRS and ECCS, 33 and ATWS, 50–51, 84 Challenger disaster and, 148 for common-cause failures, 107, 267 in defense in depth, 2, 17 Fukushima accident and, 206 GE ECCS system, 29, 32 in Soviet-designed reactors, 168 special requirements, 141 system diversity of, 32 regulations. see safety regulations Reid, Senator Harry, 129, 212 Reig, Javier, 207 Reilly, William, 179 reliability engineering, 9
REM (Roentgen Equivalent Man) units, 177, 178 Remick, Forrest, 110, 111, 167 “Republican Revolution” of 1994, 139 Riccio, Jim, 143, 190, 201 Richards, Robert, 34–35, 37 Rickover, Admiral Hyman, 19, 114–115, 257 risk assessment AEC research supporting, 13 decision trees for, 21–23 early attempts at, 20–21 EPA’s use of, 173, 179 figure of merit for, 27 NASA’s history of, 148–149, 154 political need for, 53–55 qualitative, 12–13 regulatory need for, 50–53 risk management vs., 178–179 Starr model of, 24–26 three “D”s of, 1, 3, 18 U.S. interagency use of, 171–173 by WASH-740, 18–19. See also probabilistic risk assessment (PRA) risk-based regulation, 82, 83, 90, 112, 122, 133, 187, 246 risk communication, 73, 189, 219 risk-informed regulations as aid to industry woes, 133 balance of PRA and defense in depth in, 100 CEE nations interest in, 170–171 Davis-Besse vessel head erosion event and, 194, 198–199 decision making via, 195 of defense in depth, 143–144 Domenici-Jackson meeting on, 129 and Fukushima accident, 209, 214, 215 industry/Senate call for, 133, 139–141 legacy, 218–219 LOCA rulemaking, 191–193, 202, 216 maintenance practices, 113 maintenance rule as, 112 NFPA 805, 203, 216, 217 Nozzle stress corrosion cracking issue, 194–195 NRC adoption of, xviii-xix, 83, 122, 130, 142, 188–190, 217 NRC resistance to, 83, 129, 133, 142, 189 nuclear renaissance and, 201 opposition to, 190, 210 origin of term, 128
  See also organizational factors; human factors
safety goals
  ACRS recommends, 90
  AEC staff informal goals, 53
  “Farmer Curve,” 23
  Hanford reactors, 10
  industry request for, 87
  international adoption of, 159–160, 171
  at NASA, 155
  in new reactor designs, 203
  NRC policy statement on, 83, 88, 90–98
  in PRA policy statement, 128
  in risk-informed regulation, 122, 128, 201
  SALP’s lack of, 135
  Starr, Chauncey proposal for, 82
  U.S. reactors meeting, 213
Safety oversight of licensee operations
  antinuclear criticism of, 138, 140, 198
  criticism of by government agencies and committees, 102, 117, 135, 198–199
  Diagnostic Evaluation Teams (DET), 123–126, 143
  enforcement, 102, 103, 135, 138, 140, 142
  industry criticism of, 134
  inspection, 102, 103, 104, 114, 122, 126, 134–135, 137, 138, 143, 189, 194, 195
  maintenance practices, 104–105, 108
  maintenance rule, 108–114
  of licensee management, 120
  policy statement on conduct of plant operations, 121
  Reactor Oversight Process (ROP), 142–143, 144, 190, 199, 200–201, 218
  request for information on deviations from design basis, 137–138
  Resident inspector program, 103–104
  safety culture, 114–116, 142, 199–201
  safety culture policy statement, 200–201
  Systematic Assessment of Licensee Performance (SALP), 123–124, 126, 134, 135, 138–139, 141, 142
  after TMI accident, 102–103
  before TMI, 103
  Towers-Perrin report, 134–135, 137, 139.
  See also names of individual nuclear power plants; safety regulation
safety regulation
  adequate protection standard, 83
  and AEC’s dual mandate, 11–12
  on ATWS, 84
    backfit rule, 17, 51–52, 80, 82, 83–84, 91, 93, 94, 95–96, 98, 128, 163, 209, 248
    in a bandwagon market, 15–16
    beyond design basis regulations, 83
    for civilian reactors, xiii
    combined construction and operating licensing (10 CFR Part 52), 101
    complexity of U.S. system, 80, 82, 83
    computing science for, 49–50
    DBA as standard, 1, 83–84
    deterministic, 82, 91
    for ECCS, 43, 45
    emergency planning and drills, 82, 86, 89, 91, 249
    and environmental impact statements. See Environmental Impact Statements
    EPA vs. NRC, 173, 180–181
    for equipment, 190–191
    Indian Point/Zion decision, 86–90
    individual plant examinations, 100
    international issues, 157–159, 170
    in Japan, 209, 210
    LOCA revisions, 191–193
    for maintenance, 109
    for Mark I containment venting, 100
    for nozzle stress-corrosion cracks, 194–199
    performance-based regulation, 109, 113, 128, 129, 134, 139, 140, 142, 255
    post-Fukushima restructuring, 211–216
    PRA-informed, 83–85, 127–129, 141
    prescriptive vs. performance-based, 255
    pronuclear influence on, 47–48
    qualitative vs. quantitative, xvi–xvii, 91
    quantitative, 84
    regulatory burden, 33, 79–80, 124, 132–133, 142, 188, 214, 215
    risk assessment for, 50–53
    risk-informed. See risk-informed regulations
    and safety culture, 103, 114–127, 199–200
    safety goal policy statement, 90–98
    for seismic hazards, 84, 210
    severe accidents and, 98–101
    WASH-1400 aids to, 59
    WASH-1400 effects, xv–xviii, 64, 72. See also Nuclear Regulatory Commission (NRC); quantitative safety; names of individual nuclear power plants; safety oversight of licensee operations
safety research
    accident modeling (codes), xvii, 10, 21, 23, 27, 37–39, 41, 42, 49–50, 53, 58, 62, 71, 72, 96, 123, 127, 128, 155, 157–160, 162, 166, 178, 206, 208, 213, 232, 237, 238
    ACRS recommendations for, 31, 37
    AEC sponsored, 13–14, 36–39, 40–42, 49
    AEC-industry conflict over, 37, 41, 42
    ECCS, 37, 38, 40–41, 42
    human error (factors), 77, 116–117, 119, 121
    LOCAs, 13–14, 40–41, 49
    Office of Nuclear Regulatory Research established, 49
    office of safety research, establishment of, 47, 49
    prevention vs. molten core, 37–38
    safety culture (organizational factors), 114–127, 201
        Battelle study, 118–119
        Brookhaven study, 123–126
        NAS study, 121–122
    severe accidents, 98, 213
    TMI accident and, 87. See also risk assessment; names of individual national laboratories; and research reports
Sanders, Senator Bernie, 212
Sandia National Laboratories, 59, 88, 153
SAPHIRE program, 155
Schlesinger, James, 46, 47, 57, 59
Science, 44
Science Applications International Corporation, 155
scientists, activist, 66
scrams and scram systems
    ATWS, 50–53, 65, 71
    Browns Ferry failure, 84
    at Davis-Besse, 106
    at Hanford, 4
    from maintenance, 109
    rate of, 84
    technical specifications for testing of, 104
Seaborg, Chairman Glenn, xiv, 33–34, 41, 46, 53
Selby, John, 73
Selin, Ivan, 128, 163, 164, 167
September 11 attacks, 213
Shaw, Milton
    on AEC safety research program, 31
    assistant to Rickover, 19
    and computer modeling codes, 42
    Division of Reactor Development and Technology, director, 31
    in the ECCS hearings, 45
    leaves AEC, 47–48
    LOFT tests, 37, 42
    vs. national labs, 43
    pronuclear politics of, 19, 47
    vs. regulatory staff, 31
    report on defense in depth philosophy (WASH-1250), 54
    safety philosophy, 31
Sheron, Brian, 192
siting, reactor
    AEC criteria for, 12
    in DBA plans, 2
    and evacuation drills, 86
    Hanford isolation, 5, 6, 227
    in Japan and Europe, 18
    near populated areas, 16
    probabilistic guidelines for, 23
    and redundancy beyond ECCS, 33
    safety goals for, 92, 94
    UCS petitions, 86–90
Siu, Nathan, 206, 219
60 Minutes, 65
Slovic, Paul, 92
“small dose problem,” 183–185
Sniezek, James, 108
Society for Risk Analysis, 172
South Texas Project (STP), 112, 133, 191
Soviet-era reactor upgrades, 156–171, 268
Sporn, Philip, 16
Sports Illustrated, xiii
Starr, Chauncey, 24–26, 54, 57, 74, 82, 219, 233
static barriers. See containment buildings
Stello, Victor, 103, 110
Stevens, Justice John Paul, 172–173
Structures, Systems, and Components (SSCs), 109, 111
Symington, Senator Stuart, 48
System Accident and Transient Analysis of Nuclear Plants (SATAN) program, 41
Systematic Assessment of Licensee Performance (SALP)
    dubious effectiveness of, 123–126, 135
    improvements following, 139
    industry resentment of, 123, 134–135
    influence on utility stock prices, 138
    program suspension, 141
    replacement for, 142
Systems Nuclear Auxiliary Power Program (SNAP), 151
Tanguy, Pierre, 159
Taro, Japan, 204–205
Taylor, James, 124, 137
Taylor, Lauriston, 175
Teledyne Corporation, 152
Teller, Edward, 3
Temelin nuclear power facility, 168, 169, 171
Tennessee Valley Authority, 15, 17, 103, 202
Thadani, Ashok, 65
Thomas, Senator Craig, 184
Thompson, Theos, xiv
three “D”s of safety
    AEC loyalty to, 14, 18, 66
    bandwagon market challenges, 15
    and beyond design basis regulations, 82
    China Syndrome vs., 32
    defined, 1
    designs based on, 38
    international influence, 159
    limitations, 217
    and maintenance and operations, 104
    PRA supplements to, 100
    purpose of, 51
    as qualitative measure, 10, 17, 143
    in risk-informed regulation, 165
    Three Mile Island lessons on, 75–76
Three Mile Island accident
    accident description, 74–75
    core tests after, 38
    emergency planning after, 86
    human factors, 116–117
    impact on PRA, xv, 74, 76–77, 85
    impact on safety philosophy, 75
    impetus to expanded oversight, 102–103
    radioactive release, 5
    regulator-licensee relations post-, 103
    regulatory burden, 79
    regulatory response, 79–80, 82, 83
    risk assessment and, 35, 74–77, 247
    as safety goal impetus, 82, 90–91
    safety research influence on, 38, 87, 98–100
    system vulnerabilities and, 35
Tim D. Martin Associates, 139
Time, 103, 135–136
Toepfer, Klaus, 163, 167
Tokyo Electric Power Company (TEPCO), 209
Toledo Edison Company, 108, 199
Towers Perrin report, 134, 135, 137, 139
Tversky, Amos, 26
Udall, Morris, 70, 72
Union of Concerned Scientists (UCS)
    on backfit rule, 95
    critique of WASH-1400, 62
    in ECCS debate, 42, 44, 48–49
    experts hired by, 62
    Indian Point/Zion petition, 86–90
    Kendall, Henry, founder, 53
    Pollard, Robert, joins, 65–66
    on PRAs, 210
utility companies
    and bandwagon market, xiii
    capability of, 117
    choice of nuclear power by, 78–79, 214
    competitive electricity markets, 131–132, 144
    deregulation of, 131–132, 144
    diversity of, 103, 188
    “fossil-fuel mentality,” 105, 121
    maintenance rules for, 109
    management and organization, 118–119
    and 1970s energy crisis, xiii, 64
    turnkey construction contracts, 15. See also names of individual companies
van der Vleuten, Eric, 269
Verheugen, Gunter, 169
Vesely, William, 57, 59, 155
Voinovich, Senator George, 200
von Hippel, Frank, 66, 67, 70, 219
VVER reactor technology, 161, 163, 168, 269
    Model 440/213, 161, 168
    Model 440/230, 161, 163, 165, 166, 167, 168, 169
    Model 1000, 168
Wall, Ian, 23, 24
Wallis, Graham, 192
WASH-3 report, 3
WASH-740
    and Class 9 accidents, 55
    consequence estimates, 18
    legacy of, 19, 57
    probability estimate, 18
    risk assessment in, 18
WASH-740 update
    AEC agrees to new version, 54
    consequence estimate, 20
    JCAE request for study, 19
    political consequences, 21, 25
    requests for release of draft, 53–54, 59
WASH-1250, 55
WASH-1400
    and ATWS issue, 65, 70, 71, 84
    Commission withdraws endorsement of executive summary, 72
    conclusions of, 62–64
    criticism of, 62–74, 152, 245
    development of, 54–60
    effect on public opinion, 73–74
    error bands, 59, 60, 67, 71
    as first PRA, xvi
    graphs of risk from, 68–69
    impetus for, 241
    and Indian Point/Zion petition, 85, 87
    legacy of, 217–218
    Lewis Committee reviews, 66–67, 70–72
    NUREG-1150 revises, 99–100
    rebirth of, 76–77
    as a regulatory tool, xvii–xviii
    reviews of, 66–67, 70, 73
    Wall’s work on, 24
Washington Public Power Supply System, 78
watch list, 134–138, 140
Watkins, James, 160–161
Weatherwax, Robert, 152, 153
Weinberg, Alvin, 38, 47
Weiss, Ellyn, 93
West Valley Demonstration Project, 186–187
“Westernization” of Soviet reactor technology, 169, 170, 269
Westinghouse Corporation
    AP1000 reactor, 203, 212
    in the bandwagon market, 15
    China Syndrome “core catcher” solution, 32, 34, 36
    dry containment building, 29
    fault-tree development, 23
    PWR design vs. BWR, 29
    safety research, 13
    SATAN computer modeling program, 41
    Temelin contract, 169
    turnkey era contract losses, 78
    21st century construction by (NuStart), 202, 203
    use of fault trees, 23
Whitman, Christine Todd, 186–187
Wicker, Tom, 72–73
Wiggins, J.H. Company, 152
Wiggins study, 152–153
William Zimmer nuclear power plant, 78, 79
Willoughby, Will, 147
World Association of Nuclear Operators, 162–163, 201
Wright, Skelly, 45
Yanev, Yanko, 163, 164, 169–170
Ybarrondo, Larry, 41
Yeltsin, Boris, 163
Yom Kippur War, 64
Young, R.C., 41
Yucca Mountain, 129, 212
Zech, Lando, 110, 121, 160
Zion Nuclear Power Station, 86–90, 132
Founded in 1893, University of California Press publishes bold, progressive books and journals on topics in the arts, humanities, social sciences, and natural sciences—with a focus on social justice issues—that inspire thought and action among readers worldwide. The UC Press Foundation raises funds to uphold the press’s vital role as an independent, nonprofit publisher, and receives philanthropic support from a wide range of individuals and institutions—and from committed readers like you. To learn more, visit ucpress.edu/supportus.