INTRODUCTION TO ENERGY ESSENTIALS
Insight into Nuclear, Renewable, and Non-Renewable Energies

Bahman Zohuri
Patrick McDaniel
Academic Press is an imprint of Elsevier
125 London Wall, London EC2Y 5AS, United Kingdom
525 B Street, Suite 1650, San Diego, CA 92101, United States
50 Hampshire Street, 5th Floor, Cambridge, MA 02139, United States
The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, United Kingdom

Copyright © 2021 Elsevier Inc. All rights reserved.

No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permission, further information about the Publisher’s permissions policies and our arrangements with organizations such as the Copyright Clearance Center and the Copyright Licensing Agency, can be found at our website: www.elsevier.com/permissions.

This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be noted herein).

Notices
Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods, professional practices, or medical treatment may become necessary. Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information, methods, compounds, or experiments described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility.

To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein.

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library

Library of Congress Cataloging-in-Publication Data
A catalog record for this book is available from the Library of Congress

ISBN: 978-0-323-90152-9

For information on all Academic Press publications visit our website at https://www.elsevier.com/books-and-journals

Publisher: Joe Hayton
Senior Acquisitions Editor: Katie Hammon
Editorial Project Manager: Madeline Jones
Production Project Manager: Nirmala Arumugam
Cover Designer: Matthew Limbert
Typeset by Aptara, New Delhi, India
This book is dedicated to my son Sasha Zohuri, my grandson Darioush, and my granddaughter Donya Nikpour.
Bahman Zohuri
I dedicate this book to all of the professors, colleagues, and students, too numerous to mention by name, who have led me to the ideas expressed within these pages.
Patrick McDaniel
CONTENTS
About the Authors xvii
Preface xix
Acknowledgment xxvii
1. Population growth driving energy demand 1
   1.1 Introduction 1
   1.2 Energy demand projection 2
   1.3 A role for everyone 6
   1.4 Behind the scenes: how we forecast to 2040 7
      1.4.1 Global energy demand varies by sector 8
      1.4.2 Energy demand shifts toward non-OECD 9
      1.4.3 Global energy mix shifts to lower-carbon fuels 9
   1.5 Transportation energy projections 10
      1.5.1 Transportation energy demand growth driven by commerce 11
      1.5.2 Global transportation energy demand relative to GDP 11
      1.5.3 Commercial transportation grows in all aspects 12
      1.5.4 Access to personal mobility increases 13
      1.5.5 Efficiency mitigates light-duty demand growth 13
      1.5.6 Electric vehicles grow rapidly 15
      1.5.7 Liquid demand trajectory uncertain but resilient 16
   1.6 Residential and commercial energy projections 17
      1.6.1 Residential and commercial demand shifts to non-OECD 17
      1.6.2 Residential energy use reflects efficiency gains 18
      1.6.3 Electricity demand surges 18
      1.6.4 Household electricity up in non-OECD 19
   1.7 Industrial energy projections 20
      1.7.1 Industry undergirds global economic expansion 20
   1.8 Oil, gas, and electricity fuel industrial growth 21
   1.9 Heavy industry migrates to emerging markets 22
   1.10 Heavy industry energy evolves toward cleaner fuels 25
   1.11 Consumer demand propels chemicals growth 26
   1.12 Rising prosperity lifts chemicals energy demand 26
   1.13 Chemical production relies on oil and natural gas 27
   1.14 Electricity and power generation projections 28
      1.14.1 Electricity source shift 28
      1.14.2 Natural gas and renewables dominate growth 29
   1.15 Renewable penetration increases across all regions 30
   1.16 Electricity generation highlights regional diversity 33
   1.17 Natural gas is a key fuel for reliable electricity generation 34
   1.18 Different policy or technology choices can impact outcome 35
   1.19 Meeting climate change goals through energy efficiency 36
      1.19.1 What are the opportunities 37
      1.19.2 Key recommendations 37
   1.20 Energy supply projections 37
   1.21 Liquid supply projections 38
   1.22 Emissions 38
   1.23 Fuel cell car power plants 41
   References 42
2. Nuclear power plant history from past to present and future 43
   2.1 Introduction 43
   2.2 Fission reaction energy generation 47
   2.3 The first fission chain reaction 47
   2.4 The first self-sustaining fission chain reaction 50
   2.5 Nuclear criticality concept 52
   2.6 Nuclear energy expands and stagnates for peace usages 53
   2.7 Government and nuclear energy 54
   2.8 Fundamental of fission nuclear reactors 57
   2.9 Reactor fundamentals 60
   2.10 Thermal reactors 61
   2.11 Nuclear power plants and their classifications 62
   2.12 Going forward with nuclear energy 62
   2.13 Small modular reactors 64
   2.14 Small modular reactors: safety, security, and cost concerns 65
      2.14.1 Safety concepts of the MSR 67
      2.14.2 Economies of scale and catch 68
      2.14.3 Are small modular reactors safer? 69
      2.14.4 Shrinking evacuation zones 69
      2.14.5 Safety conclusions of nuclear power plants 70
   2.15 Why we need nuclear power plants 74
   2.16 Methodology of combined cycle 77
      2.16.1 Why we still need nuclear power 77
      2.16.2 Is nuclear energy renewable source of energy 78
      2.16.3 Argument for nuclear as renewable energy 79
      2.16.4 Argument against nuclear energy as renewable energy 80
      2.16.5 Today safety of nuclear power plant 80
      2.16.6 Summary 82
   References 83
3. Nuclear energy research and development roadmap and pathways 85
   3.1 Introduction 85
   3.2 Nuclear reactors for power production 89
   3.3 Future of nuclear power plant systems 91
   3.4 Next generation of nuclear power reactions for power production 91
   3.5 Technology roadmap for Generation IV nuclear energy systems 94
   3.6 Power conversion study and technology options assessment 95
      3.6.1 Heat exchanger components 99
      3.6.2 Turbomachinery 99
      3.6.3 Advanced computational materials science proposed for GEN IV systems 100
      3.6.4 Material classes proposed for GEN IV systems 103
      3.6.5 Generation IV materials challenges 104
   3.7 Generation IV materials fundamental issues 105
   3.8 End of cheap oil and future of nuclear power 106
   3.9 The future of energy 107
   3.10 Nuclear power in the world today and time for change 108
   3.11 Improved performance from existing reactors 115
   3.12 Other nuclear reactors 116
   3.13 Summary 117
   References 117
4. Small modular reactors and a modern power conversion approach 119
   4.1 Introduction 119
   4.2 Industry opportunities for advanced nuclear technology development 123
   4.3 Benefits of small modular reactors 124
   4.4 Modularity 124
   4.5 Lower capital investment 125
   4.6 Siting flexibility 125
   4.7 Greater efficiency 125
   4.8 Safeguards and security/nonproliferation 125
   4.9 Industry, manufacturing, and job growth 126
   4.10 Economic development 126
   4.11 Cost of electricity from nuclear power 127
   4.12 Cost of nuclear technology is too high 127
   4.13 Cooling water requirement for nuclear power reactors 128
   4.14 Next generation of nuclear power reactions for power production 129
   4.15 Technology roadmap for Generation IV nuclear energy systems 129
   4.16 Open air-Brayton gas power cycle 136
   4.17 Modeling the nuclear Air-Brayton cycles 139
   4.18 Currently proposed power conversion systems for small modular reactors 144
   4.19 Advanced Air-Brayton power conversion systems 145
   4.20 Design equations and design parameters 148
      4.20.1 Reactors 149
      4.20.2 Air compressors and turbines 150
      4.20.3 Heat exchangers 152
      4.20.4 Pumps and generators 154
      4.20.5 Connections and uncertainty 155
      4.20.6 Validation 155
   4.21 Predicted performance of small modular NACC systems 155
   4.22 Performance variation of small modular NACC systems 158
   4.23 Predicted performance for small modular NARC systems 163
   4.24 Performance variation of small modular NARC systems 164
   4.25 Predicted performance for a small modular intercooled NARC systems 167
   4.26 Performance variation of small modular intercooled NARC systems 170
   4.27 Discussion 171
   4.28 Intermittent renewable energy systems and other challenges 171
   4.29 Dealing with the intermittency of renewable energy systems 173
   4.30 Energy storage as heat or electrical charge 174
   4.31 Energy storage as heat—two approaches 174
   4.32 Hydrogen combustion to augment NACC output 175
   4.33 Hydrogen combustion to augment NARC output 178
   4.34 Hydrogen combustion to augment intercooled NARC output 179
   4.35 Conclusions 180
   References 181
5. Thermonuclear fusion reaction driving electrical power generation 183
   5.1 Introduction 183
   5.2 Magnetic confinement fusion (MCF) 189
      5.2.1 Magnetic mirrors 192
      5.2.2 Toroidal machines 193
   5.3 Inertial confinement fusion (ICF) 206
      5.3.1 How inertial confinement fusion (ICF) works 207
      5.3.2 How fast ignition (IF) works 209
      5.3.3 Issues with successful achievement 211
      5.3.4 National ignition laser facility 214
   References 222
6. Other electrical power generation energy sources 223
   6.1 Introduction 223
   6.2 What is natural gas? 228
      6.2.1 How did natural gas form? 232
      6.2.2 How do we get natural gas? 233
   6.3 Coal 234
      6.3.1 Types of coal 235
      6.3.2 Coal explained: coal prices and outlook 236
      6.3.3 Coal transportation costs can be significant 236
      6.3.4 Most coal is purchased for power plants 237
      6.3.5 The price of coal can depend on the type of transaction 237
      6.3.6 A more expensive coal used to make iron and steel 238
      6.3.7 The outlook for coal prices in the United States 238
   6.4 Petroleum 239
      6.4.1 What is crude oil and what are petroleum products? 239
      6.4.2 Products made from crude oil 239
      6.4.3 Nuclear energy provides one-fifth of US electricity 239
      6.4.4 Nuclear fuel—uranium 241
   6.5 Renewable energy sources 242
      6.5.1 What is renewable energy? 242
      6.5.2 What role does renewable energy play in the United States? 243
   6.6 Biomass 244
      6.6.1 Biomass—renewable energy from plants and animals 244
      6.6.2 Converting biomass to energy 245
      6.6.3 How much biomass is used for fuel? 245
   6.7 Hydropower 246
      6.7.1 Hydropower relies on the water cycle 246
      6.7.2 Moving water drives hydroelectric power 247
      6.7.3 History of hydropower 248
      6.7.4 Fish ladders help salmon reach their spawning grounds 249
   6.8 Geothermal power plants 250
      6.8.1 Geothermal energy comes from deep inside the earth 250
   6.9 Many factors influence electricity prices 251
   6.10 Electricity prices are usually highest in the summer 252
   6.11 Electricity prices vary by type of customer 252
   6.12 Electricity prices vary by locality 253
   References 253
7. Electricity production and renewable source of energy, economics 255
   7.1 Introduction 255
   7.2 Electricity production in the United States 257
   7.3 Energy supply, demand, and market 258
   7.4 What is a capacity market? 264
   7.5 Renewable and nonrenewable energy sources 267
   7.6 Role of renewable energy 271
   7.7 Frequently asked questions 272
   7.8 Snapshot of energy 273
   References 275
8. Energy storage technologies and their role in renewable integration 277
   8.1 Introduction 277
   8.2 The electric grid 280
   8.3 Power generation 286
   8.4 Transmission and distribution 287
   8.5 Load management 287
   8.6 Types of storage technology 289
      8.6.1 Kinetic energy storage or flywheels concept 293
      8.6.2 Superconducting magnetic energy storage 295
      8.6.3 Batteries 300
      8.6.4 Other and future batteries in development 306
   8.7 A battery-inspired strategy for carbon fixation 316
   8.8 Saliva-powered battery 318
   8.9 Summary 319
   References 319
9. Energy insight: an energy essential guide 321
   9.1 Introduction 321
   9.2 Knowledge of energy management and efficiency 323
      9.2.1 Energy management systems 324
   9.3 Understanding by measuring 324
   9.4 Reducing energy costs and minimizing risks 325
   9.5 International energy standard 326
      9.5.1 Complying with policies and regulatory frameworks 326
      9.5.2 Improving organizational effectiveness 327
      9.5.3 Improving corporate social responsibility 328
   9.6 How to manage energy 328
      9.6.1 The business case 328
      9.6.2 Implementing energy management 328
   9.7 Energy management standards 329
      9.7.1 PLAN portion of PDCA 331
      9.7.2 DO portion of PDCA 335
      9.7.3 CHECK portion of PDCA 338
      9.7.4 ACT portion of PDCA 339
   9.8 In-depth data analysis 340
      9.8.1 Monitoring and targeting energy review 340
   9.9 How much do you spend on energy? 344
      9.9.1 What did we learn? 345
      9.9.2 How do I lower my energy bills? 347
   9.10 Energy sources comparison 347
      9.10.1 What is energy? 350
      9.10.2 Energy technology 352
      9.10.3 Energy challenges 356
      9.10.4 Sustainability 360
   9.11 How to compare power generation choices? 361
      9.11.1 Capacity versus energy 363
      9.11.2 Initial cost comparison 364
      9.11.3 Variables versus fixed expenses 365
      9.11.4 Cost per kWh comparison 367
   References 369
10. Heat pipe driving heat transfer 371
   10.1 Introduction 371
   10.2 Heat pipes history 373
   10.3 Heat pipes description and types 373
   10.4 Principles of operation 387
      10.4.1 Container 387
      10.4.2 Working fluid 387
      10.4.3 Wick or capillary structure 388
      10.4.4 How heat pipe works? 390
      10.4.5 Wick structures 394
   10.5 Operating ranges 394
   10.6 Constraints 396
   10.7 Lesson(s) learned 400
   10.8 Applications 401
   10.9 Applications 403
   10.10 Summary 407
   References 411
11. Thermodynamic systems 413
   11.1 Introduction 413
   11.2 Continuity 414
   11.3 System thermodynamics 419
   11.4 Heat transfer and fluid flow 422
   11.5 Extended application 424
   Reference 426
12. Thermal-hydraulic analysis of nuclear reactors 427
   12.1 Introduction 427
   12.2 Basic understanding of thermal hydraulics aspects 429
   12.3 Units 431
      12.3.1 Fundamental units 431
      12.3.2 Thermal energy units 432
      12.3.3 Units conversion 432
   12.4 System properties 433
      12.4.1 Density 434
      12.4.2 Pressure 434
      12.4.3 Temperature 436
   12.5 Properties of the atmosphere 437
   12.6 The structure of momentum, heat, and mass transport 438
   12.7 Common dimensionless parameters 438
   12.8 Computer codes 439
      12.8.1 Probabilistic risk assessment codes 440
      12.8.2 Fuel behavior codes 440
      12.8.3 Reactor kinetics codes 441
      12.8.4 Thermal-hydraulics codes 441
      12.8.5 Severe accident codes 442
      12.8.6 Design-basis accident (DBA) codes 443
      12.8.7 Emergency preparedness and response (EPR) codes 443
      12.8.8 Dose and risk calculation software 443
      12.8.9 Radionuclide transport codes 444
   References 445
13. Energy storage driving renewable energy 447
   13.1 Introduction 447
   13.2 Hybrid energy system introductory 448
      13.2.1 Hybrid system as source of renewable energy 455
   13.3 Energy storage systems 456
   13.4 Compressed air energy storage description 459
      13.4.1 Compressed air energy storage 460
      13.4.2 Advanced adiabatic compressed air energy storage 462
   13.5 Variable electricity with base-load reactor operation 465
   13.6 Why we need nuclear power 472
      13.6.1 The merits of total transformation 474
      13.6.2 The downsides of monoculture 475
      13.6.3 The other zero-carbon energy: nuclear 476
      13.6.4 A diverse portfolio 478
   13.7 Security of energy supply 480
   13.8 Environmental quality 481
   13.9 Nuclear power plant as renewable source of energy 483
   13.10 The future of nuclear power 489
   13.11 Small modular reactor driven renewable and sustainable energy 492
   13.12 Small modular reactor driven hydrogen for renewable energy source 493
   13.13 Why we still need nuclear power 494
   13.14 Is nuclear energy source of renewable energy 494
      13.14.1 Argument for nuclear as renewable source of energy 496
      13.14.2 Argument against nuclear as renewable source of energy 496
   13.15 Safety 497
   13.16 Renewable energy policies 499
   13.17 Electricity markets 501
   References 506
14. Cyber resilience and future of electric power system 509
   14.1 Introduction 509
   14.2 Cybersecurity 511
   14.3 CPS driving energy sector 513
   14.4 Securing utilities against cyberattacks 519
   14.5 Modern threats driving modern cyberattacks 526
   14.6 ICS security guideline 527
      14.6.1 Overview of ICS 530
      14.6.2 Overview of SCADA, DCS, and PLCs 531
      14.6.3 ICS operation 532
      14.6.4 Key ICS components 534
      14.6.5 Control components 535
      14.6.6 Network components 536
   14.7 SCADA systems 537
   14.8 Distributed control systems 540
   14.9 Programmable logic controllers 542
   14.10 AI driving modern protections against modern threats 542
   14.11 AI and cybersecurity 544
   References 547
Appendix A: Plan-do-check-act (PDCA) cycle 549
   A.1 Introduction 549
   A.2 Plan 549
   A.3 Do 550
   A.4 Check 551
   A.5 Act 552
   A.6 About 553
   A.7 When to use PDCA 555
   A.8 PDCA procedure 555
   A.9 Using the process approach to create a well-designed process 555
   References 558
Appendix B: Cumulative sum control chart (CUSUM) 559
   B.1 Introduction 559
   B.2 Method 561
   B.3 Control chart formula 564
   B.4 Estimating the target value 564
   B.5 Estimating sigma—sample range 565
   B.6 Estimating sigma—mean of standard deviations 565
   B.7 Estimating sigma—weighted approach 566
   B.8 CUSUM charts 566
   References 567
Appendix C: Basics of heat transfer 569
   C.1 Introduction 569
   C.2 Heat transfer mechanisms 570
   C.3 Fourier law of heat conduction 570
   C.4 Heat equation (temperature determination) 570
   C.5 The heat equation derivation 571
   C.6 Thermal hydraulics dimensionless numbers 572
   C.7 Definition of symbols 573
   C.8 Radiation heat transfer introduction 573
   C.9 Absorption and emissivity 574
   C.10 Gray body radiation heat transfer 575
   C.11 Radiation view factors 575
   C.12 Heat transfer between two finite gray bodies 576
   C.13 Some definitions and symbols in radiation 576
   C.14 Forced laminar flow over an isothermal plate 578
Appendix D: Permafrost phenomena 579
   D.1 Introduction 579
   D.2 What is permafrost made of? 579
   D.3 How does climate change affect permafrost? 580
   D.4 Study and classification of permafrost 581
   D.5 Permafrost extent 583
   D.6 Continuity of coverage 583
   D.7 Alpine permafrost 584
   D.8 Subsea permafrost 585
   References 585
Appendix E: Glossary 587
Index 589
About the Authors
Dr. Bahman Zohuri is an Adjunct Professor at Golden Gate University, San Francisco, California, and an Associate Professor in Electrical Engineering and Computer Science at the University of New Mexico at Albuquerque. After leaving the defense industry, where he worked as a Chief Scientist, he started his own consulting company, Galaxy Advanced Engineering, Inc., in 1991. After graduating from the University of Illinois with a degree in Physics and Applied Mathematics, he joined Westinghouse Electric Corporation, where he performed thermal hydraulic and natural circulation analysis for the Inherent Shutdown Heat Removal System (ISHRS) in the core of a Liquid Metal Fast Breeder Reactor (LMFBR), a fully inherent secondary shutdown system for secondary-loop heat exchange. These designs were used in nuclear safety and reliability engineering for the Actuated Shutdown System. Around 1978 he designed the Mercury Heat Pipe and Electromagnetic Pumps for large pool concepts of the LMFBR for heat rejection purposes, for which he received a patent. He was then transferred within Westinghouse, where he was responsible for the dynamic analysis and the method of launch and handling of the MX missile out of its canister. He later became a consultant at Sandia National Laboratories after leaving the United States Navy.

Dr. Zohuri earned his Bachelor’s and Master’s degrees in Physics from the University of Illinois and his second Master’s degree in Mechanical Engineering, as well as his Doctorate in Nuclear Engineering, from the University of New Mexico. He has been awarded three patents and has published more than 40 textbooks and numerous journal publications. Recently he has been involved with cloud computing, data warehousing, and data mining using fuzzy and Boolean logic, and has published several books on these subjects as well.

Dr. Patrick McDaniel is currently a Research Professor in the Department of Nuclear Engineering at the University of New Mexico. Dr. McDaniel began his career as a pilot and maintenance officer in the U.S. Air Force. After leaving the Air Force and obtaining his doctorate at Purdue University, he worked at Sandia National Laboratories in fast reactor safety, integral cross-section measurements, nuclear weapons vulnerability, space nuclear power, and nuclear propulsion. He left Sandia to become the technical leader for the Satellite Assessment Center of Phillips Laboratory, which later became part of the Air Force Research Laboratory (PL/AFRL). After 10 years at PL/AFRL, he returned to Sandia to lead and manage the Defense Advanced Research Projects Agency’s (DARPA) Stimulated Isomer Energy Release project, a $10M per year effort. While at Sandia, he worked on the Yucca Mountain Project and DARPA’s classified UER-X program. Having taught in the graduate Nuclear Engineering program at the University of New Mexico for 25 years, he joined the faculty there full time when he retired from Sandia in early 2009. He has worked on multiple classified and unclassified projects applying nuclear engineering to high-energy systems. Dr. McDaniel holds a PhD in Nuclear Engineering from Purdue University, a Master’s degree in Resource Management from the Industrial College of the Armed Forces, a Master’s degree in Mechanical Engineering from CalTech, and a Bachelor’s degree in Engineering Science from the United States Air Force Academy.
Preface
In this book, the focus is on energy. Energy is essential for our existence; just looking around us is enough to show its importance in our daily life. In the future, this may be even more important, because on one hand the demand for energy will go up, and on the other hand climate change will place more pressure on the way that energy is produced and consumed. Therefore, this book discusses the impact of population growth on the demand for energy as well as the different sources of energy.

This reference shows why energy management is important for all organizations and gives an overview and explanation of the process involved. It describes the main steps for introducing and maintaining an energy management system (EnMS). It also investigates opportunities in the energy management profession and how to develop a career. This reference is also designed for individuals who need a high-level introduction to managing energy, the informed layperson, and anyone in the early stages of an energy management career. It deals with different sources of renewable energy, with each chapter explaining an innovative approach to these sources from the research and technology point of view.

Attention to population growth is on an upward trend, and the demand for energy is on the rise. Seven billion people shape the world’s energy system and have a direct impact on the fundamental drivers of energy demand. Energy impacts the economy as well as security and environmental goals. Energy solutions can vary over time and under different circumstances. Think about how access to energy affects your own life, and how that translates to billions of people around the world. Global energy demand will continue to rise through 2040, reflecting its fundamental link to growing prosperity and better living standards for an increasing population worldwide.

In the years just before and during World War II, nuclear research focused mainly on the development of defense weapons. Later, scientists concentrated on peaceful applications of nuclear technology. An important use of nuclear energy is the generation of electricity. After years of research, scientists have successfully applied nuclear technology to many other scientific, medical, and industrial purposes. Governments have been deeply involved in the development of nuclear energy. Some of them initiated and led the development of nuclear energy since its military beginnings in World War II, because of its strategic nature and the scope of its risks and benefits. Governments later supported the development of civilian nuclear energy, primarily for the generation of electricity. In the postwar period, governments played an increasing overall role in the economies of the industrial countries. Science and technology were essential instruments of government action, and nuclear energy was a highly visible symbol of their successful application.
Below is a summary of each chapter:

Chapter 1 of this volume discusses population growth driving the demand for energy, where seven billion people shape the world’s energy system and have a direct impact on the fundamental drivers of energy demand. Energy impacts the economy as well as security and environmental goals. Energy solutions can vary over time and under different circumstances. Think about how access to energy affects your own life, and how that translates to billions of people around the world. Global energy demand will continue to rise through 2040, reflecting its fundamental link to growing prosperity and better living standards for an increasing population worldwide.

Chapter 2 builds up our knowledge of nuclear power and compares today’s technologies for this source of energy, serving the demand for electricity driven by our modern life and population growth. An important use of nuclear energy is the generation of electricity. After years of research, scientists have successfully applied nuclear technology to many other scientific, medical, and industrial purposes. Governments have been deeply involved in the development of nuclear energy. Some of them initiated and led the development of nuclear energy since its military beginnings in World War II, because of its strategic nature and the scope of its risks and benefits. Governments later supported the development of civilian nuclear energy, primarily for the generation of electricity. In the postwar period, governments played an increasing overall role in the economies of the industrial countries. Science and technology were essential instruments of government action, and nuclear energy was a highly visible symbol of their successful application.

Chapter 3 adds more background on nuclear energy research and a roadmap for the development of this source of electricity to serve the rising demand, since so much of what we deal with in our daily life requires electricity, such as charging our modern battery-powered cars, smartphones that require constant charging, or even robots and robotic systems that need electricity to run and operate. This chapter is based on an April 2010 report to Congress prepared by the United States Department of Energy (DOE), drawing on an ongoing nuclear energy research and development roadmap among the national laboratories in conjunction with industry and companies involved in such activities. In this report, they characterized the current prospects of nuclear power in a world confronted with burgeoning demand for energy, higher energy prices, energy supply and security concerns, and the need for new sources of renewable energy.

Chapter 4 covers the building of the new generation of nuclear power known as Generation IV (i.e., GEN IV), which is more efficient and cost-effective for utility companies, by introducing a technique known as a combined cycle power conversion system based on the open Brayton cycle. This chapter presents an innovative approach to combined cycle power conversion for small modular reactors designed around liquid metal fast breeder reactor infrastructure, and the need for a nuclear power plant (NPP) for the production of electricity. The foundation has been structured, and consequently the technology of ongoing research makes production of electricity from the NPP more cost-effective. For nuclear reactors to be more competitive with fossil and gas fuel power plants, they need to be as efficient as traditional power plants in terms of output thermal efficiency. As this chapter suggests, utilizing combined cycles to drive and produce electricity via nuclear fuel makes more sense to owners, namely electricity companies, in terms of return on investment, total cost of ownership, and efficiency. Based on the results of modeling a combined cycle Brayton–Rankine power conversion system presented in this chapter, the base model reactor chosen for this purpose was the Molten Salt Reactor (MSR). The Rankine bottoming cycle appears to offer significant advantages over the recuperated Brayton cycle. The overall cycle in the modeling was optimized as a unit, and the lower-pressure Rankine systems did seem to be more efficient. The combined cycle requires far less circulating water for a heat dump than current power plants.

Chapter 5 covers aspects of nuclear energy other than the fission reaction as a source of energy to produce electricity to meet demand. Energy demand is expected to more than double by 2050 as the combined effect of population increase and growing energy consumption per capita in developing countries continues. Fossil fuels presently satisfy 80% of the primary energy demand, but their impact on the environment through greenhouse gas emission is unacceptable. Energy sources that can prove their long-term sustainability and security of supply must replace fossil fuels. The solution to the energy problem can come only from a portfolio of options that includes improvements in energy efficiency and, to varying degrees among different countries, renewable energy, nuclear fission, and carbon capture and sequestration. To ensure sustainability and security of supply, fusion has the following advantages: fuels are widely available and virtually unlimited; there is no production of greenhouse gases; it is intrinsically safe, as no chain reaction is possible; and it is environmentally responsible—with a proper choice of materials for the reaction chamber, radioactivity decays within a few tens of years, and at around 100 years after reactor shutdown, all the materials can be recycled in a new reactor.

Chapter 6 provides options other than nuclear power to generate electricity as alternative sources. Globally, as well as in the United States of America, electricity is produced with diverse energy sources and technologies. Every country, applying its innovative technologies to produce the electric power demanded by population growth, uses many different energy sources and technologies to generate electricity; that includes the United States, under DOE guidelines in collaboration with industries and universities. The sources and technologies have changed over time, and some are used more than others. This chapter discusses sources of electrical power generation other than NPPs, since that subject has been extensively discussed in Chapters 2 through 4. Here, apart from nuclear power, we consider other clean sources of energy that generate electricity without producing any carbon monoxide or carbon dioxide, to meet electricity demand.

Chapter 7 explains the economic side of producing electricity and predicting the cost of such a source of energy as a renewable source, based on demand by industry and the growth of population globally. Most of the electricity in the United States today is produced using steam turbines. A turbine converts the kinetic energy of a moving fluid (liquid or gas) to mechanical energy. In a steam turbine, steam is forced against a series of blades mounted on a shaft. The steam rotates the shaft connected to a generator. The generator, in turn, converts its mechanical energy to electrical energy based on the relationship between magnetism and electricity. In steam turbines powered by fossil fuels (coal, natural gas, and petroleum), the fuel is burned in a furnace to heat water in a boiler to produce steam. However, considering the rise in demand for electricity, is gas-turbine-generated electricity enough to meet this demand? Or, even in conjunction with solar and wind, is meeting the demand still an uphill battle? In this chapter we assess the situation and consider the future fate of NPPs given the advanced research and development of the Generation IV International Forum (GIF).

Chapter 8 presents technologies and mechanisms for storing energy when the demand for electricity drops during off-peak hours, so that the stored energy can be used to meet the demand of peak hours. Today’s world is at a turning point. Resources are running low, pollution is increasing, and the climate is changing. As we are about to run out of fossil fuels in the next few decades, we are keen to find substitutes that will guarantee our acquired wealth and further growth on a long-term basis. Modern technology is already providing us with such alternatives as wind turbines, photovoltaic cells, biomass plants, and more. But these technologies have flaws. Compared to traditional power plants, they produce much smaller amounts of electricity, and even more problematic is the inconsistency of the production. The global demand for electricity is huge, and it is growing by approximately 3.6% annually, but the sun is not always shining, nor is the wind always blowing. For technical reasons, however, the amount of electricity fed into the power grid must always remain at the same level as demanded by the consumers, to prevent blackouts and damage to the grid. This leads to situations where production is higher than consumption, or vice versa. This is where storage technologies come into play — they are the key element to balance out these flaws.

Chapter 9 shows why energy management is important for all organizations and gives an overview and explanation of the process involved. It describes the main steps for introducing and maintaining an EnMS. It also investigates opportunities in the energy management profession and how to develop a career. This chapter is also designed for individuals who need a high-level introduction to managing energy, the informed layperson, and anyone in the early stages of an energy management career. This chapter brings together information on a range of different energy topics and includes recent and historical energy statistics. Each energy insight provides a brief examination of a topic as described in the previous chapters of this book, for those who are interested in exploring the subject in more depth.

Chapter 10 covers the general aspects of heat pipes. The heat pipe is one of the remarkable achievements of thermal physics and heat transfer engineering in this century because of its unique ability to transfer heat over large distances without considerable losses. The main applications of heat pipes deal with the problems of environmental protection and energy and fuel savings. Heat pipes have emerged as an effective and established thermal solution, particularly in high heat flux applications and in situations where there is any combination of nonuniform heat loading, limited airflow over the heat-generating components, and space or weight constraints. This chapter briefly introduces heat pipe technology and then highlights its basic applications as a passive thermal control device.

Chapter 11 covers the fundamentals of thermodynamics required to understand electrical power generation systems and the application of these principles to nuclear reactor power plant systems. This chapter begins with fundamental definitions of units and dimensions, thermodynamic variables, and the laws of thermodynamics, progressing to sections on specific applications of the Brayton and Rankine cycles for power generation and projected reactor systems design issues. It is not a traditional general thermodynamics text, per se, but a practical thermodynamics volume intended to explain the fundamentals and apply them to the challenges facing actual NPP systems, where thermal hydraulics comes into play. There have been significant new findings for intercooled systems, and they are included in this volume. New technology plans for using a nuclear Air-Brayton system as a storage system for a low-carbon grid are presented, along with component sizes and performance criteria for small modular reactors. Written in a lucid, straightforward style while retaining scientific rigor, the content is accessible to upper-division undergraduate students and aimed at practicing engineers in nuclear power facilities and engineering scientists and technicians in industry, academic research groups, and national laboratories. The book is also a valuable resource for students and faculty in various engineering programs concerned with nuclear reactors.

Chapter 12 covers thermal hydraulic analysis of an NPP from the reactor point of view. NPPs currently generate more than 20% of the central station electricity produced in the United States. The United States currently has 104 operating power-producing reactors, with 9 more planned. France has 58, with 1 more planned. China has 13, with 43 planned. Japan has 54, with 3 more planned. In addition, Russia has 32, with 12 more planned. Production of electricity via nuclear power has certainly come into its own and is among the safest, cleanest, and greenest forms of electricity currently in use on this planet. However, many current thermodynamic texts ignore nuclear energy and use few examples of nuclear power systems. Nuclear energy presents some interesting thermodynamic challenges, and it helps to introduce them at the fundamental level. Research activities are currently underway worldwide to develop GEN IV nuclear reactor concepts with the objective of improving thermal efficiency and increasing the economic competitiveness of GEN IV NPPs compared to modern thermal power plants. Our goal here is to introduce the thermal aspects of nuclear power reactors as they apply to a variety of issues related to nuclear reactor thermal hydraulics and safety, which deal with energy production and utilization; therefore, some general understanding of NPPs is essential. That is true for any textual introduction to this science; yet, by considering concrete systems, it is easier to give insight into the fundamental laws of the science and to provide an intuitive feeling for further study.

Chapter 13 goes over energy storage driving renewable energy. Electricity markets are changing rapidly because of (1) the addition of wind and solar and (2) the goal of a low-carbon electricity grid. At times, these changes result in high electricity prices; at other times, in very low or negative electricity prices. California has seen its first month where, more than 20% of the time (mid-day), the wholesale price of electricity was zero or negative. This creates large incentives for coupling heat storage to advanced reactors to enable variable electricity and industrial-heat output (maximizing revenue) while the reactor operates at base load (minimizing cost). Recent studies have examined coupling various types of heat storage to Rankine and Brayton power cycles. However, there has been little examination of heat-storage options between (1) the reactor and (2) the power-conversion system or industrial customer. Heat-storage systems can be incorporated into sodium-, helium-, and salt-cooled reactors. Salt-cooled reactors include the fluoride salt–cooled high-temperature reactor, with its solid fuel and clean coolant, and the MSR, with its fuel dissolved in the salt. For sodium and salt reactors, it is assumed that a heat-storage system would be in the secondary loop between the reactor and the power cycle. For helium-cooled reactors, heat storage can be in the primary or secondary loop.

Finally, Chapter 14 covers cyber resilience and the future of electric power systems, as well as plant control systems. Preventing and mitigating cyber threats is a concern for most organizations, especially those in critical infrastructure sectors. The electrical and mechanical equipment of NPPs is very important to nuclear safety. For computer-based equipment, appropriate standards and practices for the development and testing of computer hardware and software shall be established and implemented throughout the service life of the system, and in particular throughout the software development cycle. Security systems do not need more tools; they need more rules, because fighting new threats with more tools just adds complexity and more degrees of freedom that these new tools always bring on board. It is time to rethink our approach to cybersecurity.

The book also includes appendices, which we believe provide extra information that complements the book. In the end, we hope to bring you data and information that build your knowledge and give you the power of anticipating different aspects of technology and how to meet the demand for each of them.

Bahman Zohuri
Albuquerque, New Mexico, USA

Patrick McDaniel
Albuquerque, New Mexico, USA
Acknowledgment
I am indebted to many people who aided me, encouraged me, and supported me beyond my expectations. Some are not around to see the results of their encouragement in the production of this book, yet I hope they know my deepest appreciation. I especially want to thank my friends Bill Kemp, Dr. Patrick McDaniel, and others, to whom I am deeply indebted. They have continuously provided me their support without hesitation and have always kept me going in the right direction. Above all, I offer very special thanks to my late mother and father, and to my children, in particular my son Sasha Zohuri and my daughters Dr. Natasha and Natalie Zohuri, as well as my grandson Daryoush Nikpour. They have provided constant interest and encouragement, without which this book would not have been written. Their patience with my many absences from home and long hours in front of the computer to prepare the manuscript is especially appreciated.

B. Zohuri
Albuquerque, New Mexico, USA

I would like to acknowledge my co-author, Dr. Bahman Zohuri, who has once again done a massive job in pulling this text together. His expertise has enabled me to provide challenging ideas in a text format. I would also like to acknowledge Professor Cassiano Ricardo Endres De Oliveira, who has offered me many opportunities to participate in the education of future engineers at the University of New Mexico.

P. McDaniel
Albuquerque, New Mexico, USA
CHAPTER 1
Population growth driving energy demand

Seven billion people shape the world’s energy system and have a direct impact on the fundamental drivers of energy demand. Energy impacts the economy as well as security and environmental goals. Energy solutions can vary over time and circumstances. Think about how access to energy affects your own life, and how that translates to billions of other people around the world. Global energy demand will continue to rise through 2040, reflecting its fundamental link to growing prosperity and better living standards for an increasing population worldwide.
1.1 Introduction
Energy efficiency improvements will help curb the growth in global energy demand to about 25% over the period to 2040, while global economic output nearly doubles. To put this in perspective, if world energy demand grew as fast as estimated gross domestic product (GDP), energy demand growth could be about four times the projected amount. Emerging markets in non-OECD (Organization for Economic Co-operation and Development) nations will account for essentially all energy demand growth, led by the expanding economies of the Asia-Pacific region.

Note that the OECD is an intergovernmental economic organization with 36 member countries (see Fig. 1.1) [1], founded in 1961 to stimulate economic progress and world trade. It is a forum of countries describing themselves as committed to democracy and the market economy, providing a platform to compare policy experiences, seek answers to common problems, identify good practices, and coordinate the domestic and international policies of its members. Most OECD members are high-income economies with a very high Human Development Index (HDI) and are regarded as developed countries (see Fig. 1.2). As of 2017, the OECD member states collectively comprised 62.2% of global nominal GDP (US$49.6 trillion) [2] and 42.8% of global GDP (Int$54.2 trillion) at purchasing power parity [3]. The OECD is an official United Nations observer.

Fig. 1.1 World Bank high-income economies in 2016.

Note also that the HDI is a statistic (composite index) of life expectancy, education, and per capita income indicators, which is used to rank countries into four tiers of human development. A country scores a higher HDI when lifespan is longer, education level is higher, and GDP per capita is higher. The HDI was developed by Pakistani economist Mahbub ul Haq and Indian economist Amartya Sen, and was later used by the United Nations Development Program to measure countries’ development.
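To see where the "about four times" comparison at the start of this section comes from, here is a minimal back-of-envelope sketch. The round figures are the ones quoted above, and the calculation is illustrative rather than part of the Outlook's underlying model:

```python
# Back-of-envelope check of the "about four times" comparison above.
demand_growth = 0.25   # projected energy demand growth to 2040 (~25%)
gdp_growth = 1.00      # global economic output nearly doubles (~100%)

# If energy demand tracked GDP one-for-one (no efficiency gains),
# demand growth would match gdp_growth instead of demand_growth:
print(f"Unconstrained growth ~= {gdp_growth / demand_growth:.0f}x projected")  # ~4x

# Equivalently, the energy intensity of GDP (energy per unit of output) falls:
intensity_change = (1 + demand_growth) / (1 + gdp_growth) - 1
print(f"Implied energy-intensity change: {intensity_change:+.0%}")  # ~ -38%
```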
Fig. 1.2 Human Development Index (world maps by country and grouped by quartiles, based on 2015 and 2016 data, published on March 21, 2017).

1.2 Energy demand projection
A significant energy transition is underway, and many factors will shape the world’s energy future. These include government ambitions and policies that seek to promote prosperity while also addressing the risks of climate change. The recent Paris Agreement [4] on climate change provided significant insight into governments’ intentions to reduce greenhouse gas (GHG) emissions through the inclusion in the agreement of nationally determined contributions (NDCs). Policies adopted to support NDCs will likely affect the supply and use of energy across society.
NDCs are a cornerstone of the Paris Agreement on climate change. They set out the actions that countries plan to undertake to achieve the agreement’s objectives, focused on limiting the rise in average global temperatures to well below 2°C, ideally to 1.5°C. As part of the Paris Agreement, the untapped potential for climate action through renewable energy in NDCs is taken under consideration. Renewable energy—increasingly recognized as a key climate solution—features prominently in the first round of NDCs arising from the 2015 agreement. Of the 194 Parties to the United Nations Framework Convention on Climate Change that submitted NDCs, 145 referred to renewables as a way to mitigate climate change, while 109 cited specific renewable energy targets. Countries have the opportunity, however, to significantly strengthen their targets for renewables in the next round of NDCs. The International Renewable Energy Agency (IRENA) has analyzed NDCs in relation to national energy plans and actual deployment trends. In many cases, NDCs have not kept up with the recent, rapid growth in renewables, the agency’s report finds. Even countries that lacked targets in their NDCs had ambitious plans for renewables in the energy sector. Fig. 1.3 illustrates the total investment needed by 2030 for the implementation of renewable energy targets in current NDCs, in USD billion. Over USD 1.7 trillion would be needed by 2030 to implement the renewable energy targets contained in NDCs worldwide. At least 1.3 terawatts (TW) of renewable power capacity would be added globally by 2030 as a result of NDC implementation, amounting to a 76% increase.
Fig. 1.3 Total investment by 2030 in renewable energy in current NDCs (USD billion), broken down into unconditional and conditional targets by region (Asia, Africa, Latin America, the Middle East, SIDS, and others, i.e., Eurasia, Europe, North America, and Oceania).
However, such growth expectations lag behind actual trends, as well as falling short of the ambitions expressed in national energy plans. The cost-effective potential for renewables, meanwhile, is much higher than what is captured in current NDCs. Rapid deployment of renewables, coupled with energy efficiency, could achieve around 90% of the emission reductions in the energy sector needed by 2050, while at the same time advancing economic growth and development. Upgraded NDCs could build on recent growth rates, pick up targets from national energy plans, and more closely reflect the cost-effective potential for renewables. This would strengthen the effectiveness of the Paris Agreement and help significantly to limit the global temperature rise, IRENA’s report finds.

Consequently, to support economic progress and make substantial progress on the climate goals identified in the Paris Agreement, well-designed and transparent policy approaches that carefully weigh costs and benefits are needed. Such policies are likely to help manage the risks of climate change while also enabling societies to pursue other high-priority goals—including clean air and water; access to reliable, affordable energy; and economic progress for all people. Technology will also be vital to improve living standards while addressing climate risks. Advances continue to reshape the energy playing field. Many technologies not prevalent 5 to 10 years ago have a more significant role today, and their impacts will continue to expand. Examples include wind and solar power, unconventional oil and gas development, and electric cars. Meeting the dual challenge of mitigating the risks of climate change while boosting standards of living will require additional technology advances.

While policies and technologies help shape living standards and the evolution of energy, they also disrupt the status quo and can cause uncertainty and unexpected consequences. Accordingly, as part of the Outlook development process, sensitivities are developed and used to improve the understanding of possible energy outcomes.

Each year, energy companies such as ExxonMobil analyze and update their long-term view of energy supply and demand. Why is this important? Simply because energy is fundamental to modern life. Thus, in its 2018 Outlook for Energy, with a view to 2040, ExxonMobil included several sensitivities on specific areas of interest to provide greater perspective on how changes to the base Outlook assumptions could affect the energy landscape.

This year’s Outlook also includes a new section, “Pursuing a 2°C Pathway.” This section utilizes work coordinated by the Energy Modeling Forum at Stanford University [5]. It provides a view of potential pathways toward a 2°C climate goal, and the implications such pathways might have in terms of global energy intensity, the carbon intensity of the world’s energy mix, and global demand for various energy sources. The section concludes with a discussion of the need to pursue practical, cost-effective solutions to address multiple goals simultaneously.

The Outlook anticipates significant changes through 2040 across the world to boost living standards, reshape the use of energy, broaden access to abundant energy supplies, and accelerate decarbonization of the world’s energy system to address the risk of climate change. ExxonMobil also published the 2018 Outlook for Energy data as downloadable Excel pages on the company website.
1.3 A role for everyone
Key trends that will play a defining role in our global energy landscape through 2040 are:
1. Energy powers modern economies and living standards. By 2030, the world’s economic middle class will likely expand from 3 billion to more than 5 billion people. This growth will coincide with vastly improved living standards, resulting in rising energy use in many developing countries as people develop modern businesses and gain access to cars, appliances, and air-conditioned homes.
2. Global energy needs rise about 25%, led by non-OECD nations. Despite efficiency gains, global energy demand will likely increase nearly 25%. Nearly all growth will be in non-OECD countries (e.g., China and India), where demand will likely increase about 40%, or about the same amount of energy used in the Americas today.
3. Electricity demand nearly doubles in non-OECD nations. Human activity continues to be dependent on reliable supplies of electricity. Global electricity demand will rise by 60% between 2016 and 2040, led by a near doubling of power demand in non-OECD countries.
4. Electricity from solar and wind increases about 400%. Among the most rapidly expanding energy supplies will be electricity from solar and wind, together growing about 400%. The combined share of solar and wind in global electricity supplies is likely to triple by 2040, helping the CO2 intensity of delivered electricity to fall more than 30%.
5. Natural gas expands its role to meet a wide variety of needs. The abundance and versatility of natural gas make it a valuable energy source to meet a wide variety of needs while also helping the world shift to less carbon-intensive sources of energy. Natural gas use is likely to increase more than any other energy source, with about half its growth going to electricity generation.
6. Oil plays a leading role to aid mobility and modern products. More electric cars and efficiency improvements in conventional engines will likely lead to a peak in liquid fuels use by the world’s light-duty vehicle fleet by 2030. However, oil will continue to play a leading role in the world’s energy mix, with growing demand driven by commercial transportation and the chemical industry.
7. Decarbonization of the world’s energy system will accelerate. As the world’s economy nearly doubles by 2040, energy efficiency gains and a shift to less carbon-intensive sources of energy will contribute to a nearly 45% decline in the carbon intensity of global GDP. Global energy-related CO2 emissions will likely peak by 2040 at about 10% above the 2016 level (a consistency check appears in the sketch after this list).
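The decarbonization numbers in item 7 can be sanity-checked with simple arithmetic. The following minimal sketch assumes only the two rounded factors quoted in the item; it is an illustration, not part of the Outlook's methodology:

```python
# Consistency check for item 7 above (illustrative round numbers).
# Emissions scale as (size of the economy) x (CO2 per unit of GDP).
gdp_factor = 2.00        # world economy nearly doubles by 2040
intensity_factor = 0.55  # carbon intensity of GDP falls nearly 45%

emissions_factor = gdp_factor * intensity_factor
# ~1.10, i.e., emissions peak about 10% above the 2016 level
print(f"2040 CO2 emissions = {emissions_factor:.2f}x the 2016 level")
```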
1.4 Behind the scenes: how we forecast to 2040 Any energy produced company such as ExxonMobil uses a data-driven technique, from bottom-up approach to produce a most-likely view of future energy demand and supply and generates Use Cases as it can be seen listed below as a foundation for such forecasting. They create a starting point for our projections using International Energy Agency (IEA) annual data, along with third-party data and recent energy trends. • Economic growth Since population and living standards drive energy demand, we forecast demographic and economic trends for about 100 regions covering the world. • Demand for services These drivers, along with consumer preferences, help us determine demand for energy across 15 sectors, covering needs for personal mobility, electricity in buildings, production of steel, cement and chemicals, plus many others. • Energy sources We then match the demand for energy services with about 20 types of energy (e.g., diesel), taking into account potential evolution of technology, policies, infrastructure, and more. • Policy/tech changes We actively monitor changes in technology and policies and compare our views of the Outlook to a variety of third-party estimates.
• Test uncertainty. We also run sensitivities (i.e., changes to our base assumptions) to assess the impact on our forecast if things were to play out differently. A minimal sketch of this bottom-up, sensitivity-tested aggregation is shown below.
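To make the structure of such a forecast concrete, here is a minimal sketch in Python. All region names, sectors, demand levels, and growth rates are illustrative placeholders, not ExxonMobil's actual model inputs; the point is only the shape of the calculation: project demand cell by cell, then aggregate and stress-test.

```python
# Hypothetical bottom-up demand forecast (illustrative values only).
BASE_YEAR, END_YEAR = 2016, 2040

# Base-year demand in quadrillion BTUs, keyed by (region, sector).
base_demand = {
    ("non-OECD", "electricity generation"): 120.0,
    ("non-OECD", "transportation"): 60.0,
    ("OECD", "electricity generation"): 90.0,
    ("OECD", "transportation"): 55.0,
}

# Assumed annual growth rates: economic drivers net of efficiency gains.
annual_growth = {
    ("non-OECD", "electricity generation"): 0.027,
    ("non-OECD", "transportation"): 0.015,
    ("OECD", "electricity generation"): 0.003,
    ("OECD", "transportation"): -0.002,
}

def forecast(year):
    """Project each (region, sector) cell forward with compound growth."""
    t = year - BASE_YEAR
    return {cell: d * (1.0 + annual_growth[cell]) ** t
            for cell, d in base_demand.items()}

def total_with_shock(year, shock):
    """Sensitivity: total demand if every growth rate shifts by `shock`."""
    t = year - BASE_YEAR
    return sum(d * (1.0 + annual_growth[cell] + shock) ** t
               for cell, d in base_demand.items())

print(sum(forecast(END_YEAR).values()))   # base-case total in 2040
print(total_with_shock(END_YEAR, 0.005))  # upside sensitivity
```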
1.4.1 Global energy demand varies by sector
Fig. 1.4 charts how global energy demand varies by industrial sector and summarizes that demand sector by sector. The key points are:
• Energy used in each sector reflects economic supply options and their general fitness for purpose.
• Electricity generation is the largest and fastest-growing demand sector, reflecting strong growth in global electricity demand.
• A wide variety of energy types will support electricity generation, with natural gas, renewables, and nuclear increasing their share.
• Natural gas demand increases significantly and gains share in all sectors.
• Oil demand grows to support commercial transportation and chemical needs.
Fig. 1.4 Illustration of global energy demand varies by sector.
Fig. 1.5 Illustration of energy demand shifts toward non-OECD.
1.4.2 Energy demand shifts toward non-OECD
Fig. 1.5 depicts the shift of energy demand toward non-OECD countries; the key points are:
• Global demand reaches 680 quadrillion British thermal units (BTUs) in 2040, up nearly 25%.
• The non-OECD share of global energy demand reaches about 70% in 2040, as efficiency gains and slowing economic growth in the United States and OECD nations help keep energy demand relatively flat.
• China and India contribute about 45% of world energy demand growth.
• The combined share of energy used in the United States and in European OECD nations will decline from about 30% in 2016 to close to 20% in 2040, similar to China's share of world energy demand.
1.4.3 Global energy mix shifts to lower-carbon fuels
Fig. 1.6 illustrates the shift of the global energy mix toward lower-carbon fuels; the high-level points are:
• Renewables and nuclear see strong growth, contributing close to 40% of incremental energy supplies to meet demand growth.
Fig. 1.6 Illustration of global energy mix shifts to lower-carbon fuels.
• Natural gas grows the most of any energy type, reaching a quarter of all demand.
• Oil will continue to play a leading role in the world's energy mix, with growing demand driven by commercial transportation needs and feedstock requirements for the chemicals industry.
• Coal use remains significant in parts of the world but loses substantial share as the world transitions toward energy sources with lower emissions.
1.5 Transportation energy projections
Advancements in transportation have shrunk our world while opening up new vistas and possibilities. As billions of people join the global middle class over the next quarter century, the result will be greater travel, more cars on the road, and increased commercial activity. Global transportation-related energy demand is projected to increase by close to 30%. At the same time, total miles traveled per year by cars, sport utility vehicles (SUVs), and light trucks will increase about 60%, reaching about 14 trillion in 2040. As personal mobility increases, average new-car fuel economy (including SUVs and light trucks) will improve as well, rising from about 30 miles per gallon now to close to 50 miles per gallon in 2040.
Fig. 1.7 Illustration of transportation energy demand growth driven by commerce.
The growth in transportation energy demand is expected to account for about 60% of the growth in liquid fuels demand. Liquids demand for light-duty vehicles is expected to be relatively flat to 2040, reflecting better fleet fuel economy and significant growth in electric cars.
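The flat light-duty trajectory follows directly from the figures quoted above; here is a back-of-envelope check, a sketch using the rounded numbers in the text rather than the Outlook's actual model:

```python
# Light-duty fuel use scales with miles traveled divided by fuel economy.
miles_growth = 1.60               # total vehicle-miles rise ~60% by 2040
mpg_2016, mpg_2040 = 30.0, 50.0   # approximate average new-car fuel economy

fuel_ratio = miles_growth * (mpg_2016 / mpg_2040)
print(f"2040 light-duty fuel use vs. 2016: {fuel_ratio:.2f}x")  # ~0.96x
```

The roughly 0.96x result is consistent with the "relatively flat" demand described above, even before electric vehicles are counted.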
1.5.1 Transportation energy demand growth driven by commerce
Fig. 1.7 plots global transportation sector demand in million barrels per day of oil equivalent (MBDOE); the high-level points are [6-8]:
• Global transportation-related energy demand grows close to 30% from 2016 to 2040.
• Personal mobility demands continue to increase, but higher efficiency and more electric vehicles lead to a peak and decline in light-duty vehicle energy demand.
• Growth in economic activity and personal income drives increasing trade of goods and services, leading to higher energy demand in the commercial transportation sector.
• Heavy-duty vehicles are the largest growth sector by volume, but aviation grows the most by percentage.
1.5.2 Global transportation energy demand relative to GDP
Fig. 1.8 plots global transportation energy demand relative to GDP from 1990 to the projection year 2040; the high-level points are:
Fig. 1.8 Illustration of global transportation energy demand relative to GDP.
• Growth in personal mobility (vehicle-miles traveled) and commercial transportation services (ton-miles of freight and passenger-miles of air travel) has tracked with GDP.
• Continued economic growth, particularly in non-OECD countries, will result in increased demand for all transportation services.
• Recent trends show a decoupling of economic growth and transportation energy demand, reflecting growing efficiency.
• Significant increases in future fuel economy across all transportation modes will lead to a further decoupling of transportation services and energy demand.
1.5.3 Commercial transportation grows in all aspects
Fig. 1.9 plots the growth of commercial transportation across all modes, in MBDOE; the high-level points are:
• Economic and population growth is concentrated in non-OECD countries, which leads to the biggest growth in commercial transportation services in these regions.
• Asia-Pacific leads growth, rising to 40% of the sector's total energy demand.
• Efficiency gains resulting from improvements in fuels, engine design, aerodynamics, body design, and logistics across commercial modes of travel lead to significant reductions in the rate of energy demand growth.
• Electrification in most commercial transportation grows slowly due to upfront costs, range limitations, payload requirements, and infrastructure development.
Fig. 1.9 Illustration of commercial transportation grows in all aspects.
1.5.4 Access to personal mobility increases
Fig. 1.10 charts access to personal mobility as vehicles per thousand people; the high-level points are:
• As incomes rise, individuals want more personal mobility, so demand for cars and motorcycles increases.
• Motorcycles offer a lower-cost entry point to personal mobility, with ownership particularly high in Asia-Pacific.
• Car ownership significantly increases in non-OECD countries, with Asia-Pacific leading the growth.
• In the OECD, while total vehicle ownership increases significantly, the number of cars per 1000 people increases only about 10%.
1.5.5 Efficiency mitigates light-duty demand growth
Fig. 1.11 charts how efficiency mitigates light-duty demand growth; the high-level points are:
• Increasing access to vehicles drives a worldwide increase in personal mobility-related energy demand growth.
• Assuming the current fleet mix and fuel efficiency, there would be a significant increase in energy demand for personal mobility.
Fig. 1.10 Illustration of access to personal mobility increases.
Fig. 1.11 Illustration of efficiency mitigating light-duty demand growth.
• However, major gains in the fuel efficiency of conventional vehicles lead to a major reduction in the energy needed.
• Changes in the fleet mix (e.g., increasing hybrids and electric vehicles) play a much smaller role in limiting energy demand for light-duty vehicles.
Fig. 1.11 presents global light-duty vehicle transportation demand in MBDOE.
1.5.6 Electric vehicles grow rapidly
Fig. 1.12 charts the rapid growth of the worldwide electric vehicle fleet; the high-level points are:
• Currently, there are approximately 2 million electric vehicles in the global fleet, or about 0.2% of the total.
• Recently, some car manufacturers and governments have announced plans to limit future vehicle sales to those with an electric motor, including hybrids, plug-in hybrids, and battery electric vehicles.
• The electric vehicle fleet will see strong growth driven by decreasing battery costs, increasing model availability, and continued support from government policies.
• Future battery costs and government policies are uncertain, so there is a wide range of perspectives on future electric vehicle growth, with third-party estimates for 2040 ranging from a factor of three higher to a factor of three lower than the Outlook.
Fig. 1.12 Illustration of electric vehicles grow rapidly.
Fig. 1.13 Illustration of liquids demand trajectory uncertain but resilient.
Fig. 1.12 presents the worldwide electric vehicle fleet in millions of cars.
1.5.7 Liquid demand trajectory uncertain but resilient
Fig. 1.13 presents the uncertain but resilient world liquids demand trajectory, in MBDOE. The high-level points are (a worked illustration of the first sensitivity follows the figure note below):
• Sensitivities help assess potential impacts to light-duty liquids demand using alternate assumptions around electric vehicle penetration, changes in fuel efficiency, or broader mobility trends.
• For every additional 100 million electric vehicles on the road in 2040, liquids demand could fall by ∼1.2 million barrels per day; if the entire light-duty fleet were electrified in 2040, total liquids demand could be approximately the same as in 2013.
• Alternatively, recent consumer preferences have slowed the increase in fuel efficiency of new vehicle sales in both the OECD and non-OECD.
• While the Outlook forecasts that new-car fuel efficiency trends will be well aligned with government policies, a continuation of recent trends in consumer preferences could add more than 2 million barrels per day of liquids demand by 2040.
Note that in Fig. 1.13 the shaded ranges indicate potential shifts in demand relative to the base Outlook reported by ExxonMobil Corporation in 2018.
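The ∼1.2 million barrels per day per 100 million electric vehicles quoted above can be reproduced with round numbers. The per-vehicle figures below are hypothetical values chosen to match the quoted sensitivity, not the Outlook's inputs:

```python
# Back-solving the EV displacement sensitivity (illustrative assumptions).
miles_per_year = 9_500        # assumed annual miles per displaced vehicle
mpg = 50.0                    # assumed fuel economy of the displaced fleet
GALLONS_PER_BARREL = 42.0
ev_count = 100e6              # 100 million electric vehicles

barrels_per_vehicle = (miles_per_year / mpg) / GALLONS_PER_BARREL  # per year
displaced_mbd = ev_count * barrels_per_vehicle / 365 / 1e6
print(f"~{displaced_mbd:.1f} million barrels/day displaced")  # ~1.2
```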
1.6 Residential and commercial energy projections As populations grow and prosperity rises around the world, we will need more energy to power homes, offices, schools, shopping centers, and hospitals. Combined residential and commercial energy demand is projected to rise by more than 20% through 2040. About 90% of this demand growth will be met by electricity. Led by the growing economies of non-OECD nations, average worldwide household electricity use will rise about 30% between 2016 and 2040. Energy efficiency plays a big role within the residential and commercial sectors as modern appliances, advanced materials, and policies shape the future.
1.6.1 Residential and commercial demand shifts to non-OECD
Fig. 1.14 illustrates the shift of residential and commercial demand to non-OECD countries. The high-level points are:
• Growth in households, rising prosperity, and expanding commercial activity will spur demand for lighting, heat, and power in homes and offices.
• Residential and commercial energy demand will rise over 20% by 2040, consistent with overall population growth.
Fig. 1.14 Illustration of residential and commercial demand shifts to non-OECD.
Fig. 1.15 Illustration of residential energy use reflects efficiency gains.
• Essentially all growth will be in non-OECD nations, where demand will rise close to 40%.
• Africa and China will each account for about 30% of the increase in residential and commercial energy demand.
Note that in Fig. 1.14 each non-OECD region is shown in a different color.
1.6.2 Residential energy use reflects efficiency gains
Fig. 1.15 presents residential energy use, categorized by type of energy source, reflecting efficiency gains. The top-level points are:
• Household energy use continues to improve, reflecting more efficient buildings, appliances, and consumer products.
• Demand for electricity is growing across all regions.
• People in Africa and Asia-Pacific still rely on biomass products to a large degree; more than 2.5 billion people worldwide lack access to modern energy for cooking, and about 1 billion people lack access to electricity.
1.6.3 Electricity demand surges
Fig. 1.16 presents the surge in electricity demand within world residential and commercial energy demand, in quadrillion BTUs. The top-level points are:
Fig. 1.16 Electricity demand surges.
• Energy shifts reflect rising living standards and increasing urbanization through 2040.
• Electricity use increases 70%, accounting for nearly all the growth in total energy demand from 2016 to 2040; electricity reaches a share of 40% in 2040.
• Natural gas use grows about 20%, keeping its share around 20% through 2040.
• Oil demand decreases, though usage of liquefied petroleum gas increases as a cooking fuel replacing biomass.
• Biomass demand peaks, aided by growing access to modern energy in non-OECD nations.
1.6.4 Household electricity up in non-OECD
Fig. 1.17 presents residential electricity intensity curves showing household electricity use rising in non-OECD countries. The top-level points are (a quick derived check follows the figure):
• Residential electricity use will rise about 75% by 2040, driven by a nearly 150% increase in non-OECD nations.
• Electricity use per household will rise about 30% globally, as household use in non-OECD countries rises about 70%.
• Electricity use per household in OECD nations will be flat-to-down as efficiencies help limit electricity requirements.
• Residential electricity use in Africa and India is likely to increase about 250%, though both areas will continue to lag in terms of electricity per household.
Fig. 1.17 is in terms of megawatt-hours per household per year.
Fig. 1.17 Household electricity up in non-OECD.
1.7 Industrial energy projections
Energy and industry have a long history together, and their futures remain intertwined. Energy fuels industries of all kinds, from microchip manufacturing to skyscraper construction, food processing to pharmaceuticals, agriculture to zero-emission vehicle production. Consumer demand for the many and varied products that industries offer has provided the impetus to unlock new sources of energy supply, from the industrial revolution to the shale revolution. As global prosperity continues to expand, industrial energy demand will increase. Most of the growth occurs in emerging markets. The chemicals industry is the industrial subsector with the highest rate of growth, as demand for plastics and other petrochemical products outpaces GDP in many regions. Industrial energy demand growth would be much higher if not for the persistent pursuit of energy efficiency improvements. The Outlook anticipates technology advances, as well as the increasing shift toward cleaner-burning fuels such as electricity and natural gas.
1.7.1 Industrial undergirds global economic expansion
Fig. 1.18 shows how industry undergirds global economic expansion; the top-level points are:
Fig. 1.18 Illustration of industry undergirds global economic expansion.
• The industrial sector includes the energy used to build cities, power factories, refine fuels, and produce food.
• Manufacturing jobs contribute to rising prosperity, while also making products to meet consumer preferences for cars, clothing, cosmetics, and computers.
• Almost half of the world's energy is used for industrial activity.
• Overall, industrial energy demand rises about 20% from 2016 to 2040; the chemicals sector grows 40%.
• Improving industrial energy efficiency conserves fuel and reduces emissions.
1.8 Oil, gas, and electricity fuel industrial growth
Fig. 1.19 illustrates how oil, gas, and electricity fuel industrial growth; the related top-level points are:
• Industry uses energy both as a fuel and as a feedstock for chemicals, asphalt, lubricants, waxes, and other specialty products.
• Industrial fuel powers boilers, motors, compressors, robots, forklifts, and cranes.
• Oil, natural gas, and electricity each contribute about one-third of industrial energy growth; oil growth is mostly due to its use as a chemical feedstock.
• Use of coal and oil as industrial fuels declines in favor of natural gas and electricity, as companies strive to reduce their direct emissions.
• Coal continues to play a role in steel and cement manufacturing.
Fig. 1.19 Illustration of oil, gas, and electricity fuel industrial growth.
1.9 Heavy industry migrates to emerging markets
For some emerging economies, large energy deposits can provide a fast boost to growth. But to move up toward middle-income status or beyond, manufacturing has almost always been a necessary step for emerging markets (EMs) to build a modern economy. As the chart in Fig. 1.20 shows, the world's biggest EMs (with a red background) are moving up the ranking of the world's biggest manufacturers, with China, Brazil, India, and South Korea all improving decade on decade. Russia and Indonesia appear only in the 2010 ranking, but what is surprising is the resilience of manufacturing in developed economies such as Spain and Canada, holding off the rise of Taiwan and Turkey to stay in the top 15. The chart in Fig. 1.21 shows the greater EM dependency on manufacturing, with emerging markets (in red) toward the top of the chart: China, South Korea, and Indonesia have a quarter to a third of GDP based on manufacturing. The world's biggest manufacturer, the United States, has a more modest 12%. Given that advanced economies made the transition to services much earlier, you might expect to see several of the world's big economies with manufacturing as a smaller part of GDP. But what is interesting is that although manufacturing is important to EMs for growth, for many it is already shrinking as a share of GDP. McKinsey's chart shows that
Fig. 1.20 Top manufacturing countries by decade. Source: IHS Global Insight; McKinsey Global Institute analysis.
Fig. 1.21 Manufacturing's share of GDP has fallen in all but the poorest economies. Source: World Bank; McKinsey Global Institute analysis.
Fig. 1.22 Illustration of heavy industry migrates to emerging markets.
only for low-income countries (with gross national income per capita below $1005 per year) is manufacturing rising as a share of GDP. For middle-income countries, it is already on the slide. The decline in manufacturing's share of GDP in middle-income countries reflects the more rapid expansion of services, even though incomes are still far below those of the advanced economies. McKinsey suggests this is partly due to the decline in prices of durable goods: manufacturing productivity gains are passed on to consumers in the form of lower prices.
Fig. 1.22 presents the migration of heavy industry to emerging markets, with top-level notes as listed below.
• Steel, cement, and manufacturing are essential to urban infrastructure development.
• Heavy industry demand rises steadily in emerging markets in Asia, Africa, the Middle East, and Latin America.
• China's path forward mirrors the mature regions, as its economy transitions to higher-value manufacturing and services after a decade of soaring, energy-intensive growth.
• Demand grows by 75% in the emerging markets, but is essentially flat in the mature regions and China.
1.10 Heavy industry energy evolves toward cleaner fuels
Fig. 1.23 shows how heavy industry energy evolves toward cleaner fuels, as growth in quadrillion BTUs; the related top-level points are:
• New industry is attracted to regions with access to abundant, affordable energy, an able workforce, and balanced policies.
• Electricity and natural gas are manufacturers' fuels of choice because of their convenience, versatility, and lower direct emissions.
• Climate policies boost natural gas demand in mature markets; air quality management spurs switching from coal to natural gas in China.
• Abundant natural gas supplies give manufacturers a competitive edge in Africa, the Middle East, and parts of Latin America.
• Coal's use declines in China but doubles in coal-producing India and emerging Asia.
Fig. 1.23 Illustration of heavy industry energy evolves toward cleaner fuels.
Fig. 1.24 Consumer demand propels chemicals growth.
1.11 Consumer demand propels chemicals growth
Fig. 1.24 illustrates how consumer demand propels chemicals growth, using the index shown on the figure; the top-level points are:
• Consumer demand for plastics, fertilizer, and other chemical products increases with rising incomes.
• Olefins and aromatics are basic building blocks for plastics, adhesives, and other consumer products; consumer demand outpaces GDP growth.
• Manufacturers see plastics as lightweight, durable materials that can improve the performance of their products, from packaging to auto parts to medical devices.
• The chemicals sector uses energy both as a fuel and as a feedstock.
• Chemicals energy demand grows by 40% from 2016 to 2040.
1.12 Rising prosperity lifts chemicals energy demand
Fig. 1.25 shows how rising prosperity lifts chemicals energy demand across countries and regions; the top-level points are:
• Since chemicals production is energy-intensive, there is usually a competitive advantage for manufacturers to locate plants near low-cost feedstock and fuel sources.
• The US chemicals industry expands using abundant, low-cost natural gas liquids, which are largely a byproduct of unconventional oil and natural gas production.
Fig. 1.25 Illustration of rising prosperity lifts chemicals energy demand.
• Asia-Pacific's petrochemical production grows as rising incomes stoke consumer demand.
• Affordable energy (feedstock and fuel) supplies prompt investment in the Middle East, Africa, and Latin America; chemicals industry energy demand more than doubles in each region.
• Mature regions remain important contributors to global chemicals production.
1.13 Chemical production relies on oil and natural gas
Fig. 1.26 presents the reliance of chemicals production on oil and natural gas; the top-level points are:
• Feedstock comprises about two-thirds of chemicals energy demand; fuel, one-third.
• Oil and natural gas account for about 75% of chemicals energy demand today, and nearly all of the growth from 2016 to 2040.
• Naphtha and natural gas liquids are primarily used as feedstock; natural gas is used as both a feedstock (notably for fertilizer) and a fuel.
• Natural gas liquids consumption about doubles from 2016 to 2040, as unconventional oil and natural gas production in the United States expands supply.
• Naphtha remains the dominant feedstock in Asia; the Middle East relies on natural gas liquids and natural gas.
Fig. 1.26 Illustration of chemical production relies on oil and natural gas.
1.14 Electricity and power generation projections
Demand for electricity continues to rise, as electricity powers a wide range of applications, from lighting to home appliances to global e-commerce and digital services. Power generation uses the broadest array of fuels: coal, natural gas, nuclear, and renewables such as hydroelectricity, solar, and wind. As electricity use rises, the types of fuels used to generate electricity will shift, globally and regionally. Policies seeking to address climate change and air quality will influence the choice of sources, with wind and solar, natural gas, and nuclear fueling growth in power generation.
1.14.1 Electricity source shift
Fig. 1.27 illustrates the electricity source shift, visible in the slope of each plot. As the figure shows, nuclear electricity production, whether driven by fission [6-8] or eventually by fusion [9-11], is on the rise as a source of energy.
• Global electricity demand grows by 60% from 2016 to 2040, driven by demand in the residential and commercial, industrial, and transportation sectors.
• The industrial share of demand declines as China's economy shifts from heavy industry to services and lighter manufacturing; transportation's share doubles to 2% in 2040.
Fig. 1.27 Illustration of electricity source shift.
• The world shifts to lower-carbon sources for electricity generation, led by natural gas, renewables such as wind and solar, and nuclear.
• Coal provides less than 30% of the world's electricity in 2040, down from about 40% in 2016.
The figure also shows that oil demand for electricity generation is flattening out. A quick derived check of the coal share figures follows.
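A falling coal share does not by itself mean less coal-fired generation, because total electricity demand grows 60%; here is a quick derived check on the rounded shares above, as illustrative arithmetic rather than an Outlook output:

```python
# Does coal-fired generation fall in absolute terms? Normalize 2016 = 1.0.
total_2016, total_2040 = 1.00, 1.60        # electricity demand grows ~60%
coal_share_2016, coal_share_2040 = 0.40, 0.30

coal_ratio = (total_2040 * coal_share_2040) / (total_2016 * coal_share_2016)
print(f"coal generation 2040 vs 2016: up to ~{coal_ratio:.2f}x")
# ~1.2x: absolute coal generation can still grow even as its share falls.
```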
1.14.2 Natural gas and renewables dominate growth
Fig. 1.28 illustrates how natural gas and renewable energy dominate growth; the top-level points are:
• Wind and solar grow significantly, supported by policies to reduce CO2 emissions as well as cost reductions, and lead growth as sources for electricity generation.
• Natural gas grows significantly, with growing demand from OECD countries, China, and countries where natural gas is domestically available.
• Nuclear demand grows, with more than 50% of this growth coming from China.
• Hydropower makes up more than 80% of growth in the other renewables category.
• Coal-fired generation grows in many Asia-Pacific countries due to electricity demand growth as well as favorable economics and supportive policy environments.
Comparing Fig. 1.28 with Fig. 1.27 from a different angle, we can again see that nuclear energy is on the rise going forward, provided that its total cost of ownership and return on investment, along with public safety, make it feasible.
Fig. 1.28 Illustration of natural gas and renewables dominate growth.
1.15 Renewable penetration increases across all regions
Boosted by a strong solar photovoltaic (PV) market, renewables accounted for almost two-thirds of net new power capacity around the world in 2016, with almost 165 gigawatts (GW) coming online. This was another record year, largely as a result of booming solar PV deployment in China and around the world, driven by sharp cost reductions and policy support. In 2016, new solar PV capacity around the world grew by 50%, reaching over 74 GW, with China accounting for almost half of this expansion. For the first time, solar PV additions rose faster than those of any other fuel, surpassing the net growth in coal (see Fig. 1.29). This deployment was accompanied by the announcement of record-low auction prices, as low as 3 cents per kilowatt-hour. Low announced prices for solar and wind were recorded in a variety of places, including India, the United Arab Emirates, Mexico, and Chile. Since the industrialization of nuclear technology across commercial and private industries, research and development on Generation IV designs, in particular small modular reactors (SMRs), has reached a turning point as a new, clean source of energy [6-7]. Furthermore, wind and solar energy are also getting a big boost as clean, renewable sources.
Fig. 1.29 Electricity capacity charts by fuel.
These announced contract prices for solar PV and wind power purchase agreements are increasingly comparable to, or lower than, the generation cost of newly built gas and coal power plants. Fig. 1.30 presents the penetration of renewable energy, in the form of solar and wind, increasing across all regions around the globe; the top-level points are:
• Globally, wind and solar's share of delivered electricity grows significantly, from about 5% in 2016 to about 17% in 2040.
• Wind and solar see strong growth in North America and Europe and provide more than 20% of delivered electricity in 2040.
• Renewables growth in Asia-Pacific supports local air quality and energy diversity goals.
• The Middle East and Africa see growth in solar due to reduced costs and a favorable solar resource.
• While capacity utilization improves over time, intermittency still limits worldwide wind and solar utilization to about 30% and 20%, respectively, in 2040.
The future for renewables to 2022 is bright, with solar PV entering a new era. The record performance in 2016 forms the bedrock of the IEA's electricity forecast, which sees continued strong growth through 2022, with renewable electricity capacity forecast to expand by over 920 GW, an increase of 43%. This renewable forecast is 12% higher than the previous year's, thanks mostly to solar PV upward revisions in China and India. For the next 5 years, solar PV represents the largest annual capacity additions for renewables, well above wind and hydro. This marks a turning point and underpins the more optimistic solar PV forecast, which is revised up by over one-third compared to the previous year's report. This revision is driven by continuous technology cost reductions and unprecedented market dynamics in China as a consequence of policy changes (see Fig. 1.31).
Fig. 1.30 Illustration of renewables penetration increase across all regions.
Fig. 1.31 Renewable electricity capacity growth by technology.
Under an accelerated case—where government policy lifts barriers to growth—IEA analysis finds that renewable capacity growth could be boosted by another 30%, totaling an extra 1150 GW by 2022, led by China. China is the undisputed leader of renewable growth, responsible for over 40% of global renewable capacity additions, driven largely by concerns about air pollution and by the capacity targets outlined in the country's 13th 5-year plan to 2020. A short sketch connecting these capacity figures to delivered electricity follows.
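Nameplate capacity (GW) and delivered electricity (TWh) are linked by the standard capacity-factor conversion; here is a minimal sketch, using the utilization limits quoted earlier as assumed capacity factors (the blended factor in the second call is a hypothetical round number):

```python
# Convert nameplate capacity to annual generation: GW x hours x capacity factor.
HOURS_PER_YEAR = 8760

def annual_twh(capacity_gw, capacity_factor):
    """Annual generation in TWh for a given nameplate capacity."""
    return capacity_gw * capacity_factor * HOURS_PER_YEAR / 1000.0

print(annual_twh(74.0, 0.20))   # 2016's new solar PV at ~20% utilization: ~130 TWh/yr
print(annual_twh(165.0, 0.25))  # all 2016 renewable additions at an assumed blend
```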
1.16 Electricity generation highlights regional diversity
Renewable energy is an increasingly important source of electricity for countries around the world as population growth drives demand, and the balance of supply and demand plays a big role. With about 6% of the nation's electricity supplied by wind energy today, the Department of Energy has collaborated with industry, environmental organizations, academic institutions, and national laboratories to develop a renewed Wind Vision documenting the contributions of wind to date and envisioning a future in which wind continues to provide key contributions to the nation's energy portfolio. Building on and updating the 2008 20% Wind Energy by 2030 report [12], the Wind Vision report quantifies the economic, environmental, and social benefits of a robust wind energy future and the actions that wind stakeholders can take to make it a reality. Over the past 2 years, a team of researchers, academics, scientists, engineers, and wind industry experts revisited the findings of the 20% Wind by 2030 report and built upon them to conceptualize a new vision for wind energy through 2050. Taking into account all facets of wind energy (land-based, offshore, and distributed), the new Wind Vision report [13] defines the societal, environmental, and economic benefits of wind power in a scenario with wind energy supplying 10% of the country's electricity in 2020, 20% in 2030, and 35% in 2050. Fig. 1.32 illustrates the regional diversity of electricity generation across world regions; the top-level points are:
• About 60% of the growth in electricity demand will come from Asia-Pacific.
• The mix of electricity generation sources will vary significantly by region.
• The United States and Europe lead the shift away from coal, with significant gains in natural gas, wind, and solar.
• China's coal share of electricity generation falls, with nuclear, renewables, and natural gas meeting close to all electricity demand growth.
• The Middle East, Africa, and the rest of the world draw on natural gas where domestically available.
• Favorable economics drive coal-fired electricity in Asia-Pacific; India's use of coal for electricity more than doubles from 2016 to 2040.
Fig. 1.32 Electricity generation highlights regional diversity.
1.17 Natural gas is a key fuel for reliable electricity generation
For more than 20 years, I have been championing the use of fuel cell powered cars to connect the natural gas distribution network of this country with the electric distribution network, making them partners in providing clean, safe electrical energy at the point of use, and for export back into the national grid. Some early patents were awarded, but they were way ahead of their time (see Fig. 1.33). The technological foundation for this concept remains strong and vital—representing a disruptive force aimed at the heart of traditional utility system operations. However, there seems to be a potential partnership that can be forged between auto manufacturers, electric utilities, and natural gas utilities to make this all happen. In the United States, there are two centralized energy systems: the electric utility grid and the natural gas network. Both systems together touch just about every home. Surprisingly, most people never think about combining the two systems to create a resilient hybrid system, immune from major storm disruptions and terrorist attacks. We hear a great deal today about the smart grid. Such a hybrid system can be about as smart as it gets, and quite flexible and sophisticated too. When a big weather event strikes, the above-ground electric utility system can take an awful beating. Do not expect that to change anytime soon, because it is simply too expensive to retrofit all those lines underground (some estimates suggest at least 20 times as expensive as overhead construction). On the other hand, when was the last time your natural gas service was taken out during any kind of storm, winter or summer?
Fig. 1.33 Illustration of natural gas as key fuel for reliable electricity generation.
Would it not be reasonable to ask whether there is a way to convert your home's reliable natural gas service into electricity for your household? The best way to do this, in my opinion, is with a fuel cell—a device that electrochemically converts natural gas by stripping off the hydrogen portion of the methane molecule and combining it with oxygen to generate clean electricity and some waste heat. Fuel cell technology is very attractive. It has relatively high conversion efficiency (32%–42%); produces very small waste byproducts and some low-grade waste heat; comes in a highly modular design, allowing for size expansion; is proven technology; and uses a native and abundant resource—methane—available from natural and manmade sources.
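To get a feel for the sizing, here is a rough residential calculation. The average household load is an assumed round number; the efficiency is the midpoint of the 32%–42% range quoted above:

```python
# Rough sizing of a home fuel cell (illustrative assumptions).
avg_home_load_kw = 1.2      # assumed average household electric load
efficiency = 0.37           # midpoint of the quoted 32%-42% range
BTU_PER_KWH = 3412

gas_input_kw = avg_home_load_kw / efficiency
gas_btu_per_hr = gas_input_kw * BTU_PER_KWH
print(f"gas input: ~{gas_input_kw:.1f} kW thermal (~{gas_btu_per_hr:,.0f} BTU/hr)")
# ~3.2 kW thermal, ~11,000 BTU/hr: a small fraction of a typical furnace's draw.
```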
1.18 Different policy or technology choices can impact outcome
Fig. 1.34 illustrates how different policy or technology choices can impact the outcome, based on the sensitivity of global natural gas demand for electricity generation, in billions of cubic feet per day. The top-level points are:
• Natural gas is reliable and efficient for baseload electricity generation; its flexibility also makes it well suited to meet peak demand and back up intermittent renewables.
• The role of natural gas in the electricity generation mix varies by country: natural gas-rich regions rely heavily on natural gas-fired electricity, while importing regions balance the use of natural gas with other fuels.
Fig. 1.34 Illustration of different policy or technology choices can impact outcome.
• The Outlook reflects ExxonMobil's best views of technology improvements and policy evolution; sensitivities test the impact of alternate pathways on natural gas demand for electricity generation.
• An accelerated deployment of solar and wind due to swifter cost declines and/or even more generous, targeted-support policies could reduce natural gas demand.
• Conversely, stronger public sentiment against nuclear or coal and/or a shift toward more technology-neutral carbon abatement policies could increase the role of natural gas for baseload electricity generation.
Note that in Fig. 1.34 the shaded ranges indicate potential shifts in demand relative to the base Outlook report of 2018.
1.19 Meeting climate change goals through energy efficiency
Decoupling economic growth from energy consumption is vital to meeting climate change goals, so why discuss energy efficiency and climate change goals together? Energy efficiency is key to achieving the ambitions set out in the Nationally Determined Contributions (NDCs) announced as part of the Paris Agreement at COP21. NDCs can therefore be an important driver of energy efficiency, the benefits of which go beyond emissions reduction to include energy savings, economic benefits, and improved health.
IEA analysis indicates that NDC targets are not in line with limiting the increase of global average temperature to well below 2°C by the end of the century. NDCs are to be revised and strengthened every 5 years through a process of stock-taking and ratcheting-up, however, so it is vitally important to track the development of new energy efficiency policies and their implementation.
1.19.1 What are the opportunities?
Energy efficiency policies and technologies will play a key role in reducing emissions. Energy efficiency measures are among the most cost-effective actions to reduce emissions. Moreover, energy efficiency can be deployed quickly and is the one energy resource that all countries possess in abundance. In IEA modeling, energy efficiency makes the largest contribution to global emissions reduction. That contribution is driven by substantial efficiency gains in all end-use sectors through, for example, fuel-economy standards in the transport sector and highly efficient technologies to provide heat and steam in industry.
1.19.2 Key recommendations
Countries should set and enforce more ambitious energy efficiency policies, including mandatory minimum energy performance standards for products and vehicles. Strong energy efficiency policies are vital in order to address climate change and air pollution, improve energy security, and increase energy access. Mandatory energy efficiency policies and regulations, such as minimum energy performance standards and energy efficiency obligations, have resulted in significant energy and emissions savings worldwide. Given shortening timeframes for action, policies should urgently target sectors and measures with the greatest potential.
1.20 Energy supply projections
The supply mix to meet growing energy demand will be historically diverse—from the oil and natural gas in America's shale regions, to the deepwater fields off Brazil; from new nuclear reactors in China, to wind turbines and solar arrays in nations around the world. This diversification in global energy supply will grow over the next two-and-a-half decades. Society's push for lower-emission energy sources will drive substantial increases in renewables such as wind and solar. By 2040, nuclear and all renewables combined will be approaching 25% of global energy supplies. Oil grows and continues to be the primary source of energy for transportation and as a feedstock for chemicals. Natural gas also grows, with increasing use in power generation as utilities look to switch to lower-emissions fuels. Coal struggles to grow due to increased competition in power generation from renewables and natural gas, led by declines in OECD nations.
Fig. 1.35 Illustration of demands driven by transportation and chemicals.
1.21 Liquid supply projections
Liquids demand is expected to grow by about 20% over the next two-and-a-half decades, driven by the transportation and chemicals sectors. To meet this demand, supply growth will come from diverse sources, with technology advancements a key enabler. Technology enables growth in supply from tight oil and natural gas liquids, together reaching nearly 30% of global supply by 2040. Combined with growth in oil sands, energy markets shift, and North America becomes a net exporter. Fig. 1.35 illustrates liquids demand driven by transportation and chemicals, by region and sector, in MBDOE. The top-level points are:
• Global liquids demand grows about 20% from 2016 to 2040.
• The commercial transportation and chemicals sectors lead demand growth.
• Advances in light-duty vehicle efficiency lead to liquids demand decline in North America and Europe.
• Africa's liquids demand grows by about 30% as emerging economies advance.
• Asia-Pacific accounts for nearly 65% of the increase in global liquids demand to 2040, surpassing the combined liquids demand of North America and Europe by 2025.
1.22 Emissions Providing reliable, affordable energy to support prosperity and enhance living standards is coupled with the need to do so in ways that reduce impacts on the environment, including the risks of climate change.
The challenge of meeting global energy needs and managing the risks of climate change is real—and daunting. Real in that billions of people need reliable, affordable energy every day, and daunting in that people and governments in every nation have a variety of important goals and limited financial resources to address them. Progress on energy and climate objectives requires practical approaches that will contribute to both without imposing stifling economic costs. Governments bear a unique responsibility in this regard. A key challenge is to develop and implement policies that focus emission-reduction efforts on low-cost options. This approach will help promote better living standards while reducing emissions. The long-term nature of the climate challenge promises an evolution of available solutions as knowledge expands, technology advances, and markets adapt. Policies that promote innovation and the flexibility afforded by competition and free markets will be critical to help ensure nations pursue the most cost-effective opportunities to reduce global GHG emissions and meet people's energy needs. The peak in energy-related CO2 emissions is illustrated in Fig. 1.36, in billions of tonnes, with top-level points listed below:
• Global CO2 emissions rose close to 40% from 2000 to 2016, despite a roughly 10% decline in emissions in Europe and North America.
• Global CO2 emissions are likely to peak by 2040, at about 10% above 2016 levels.
• Combined CO2 emissions in Europe and North America fall about 15% by 2040 versus 2016.
Fig. 1.36 Illustration of energy-related CO2 emissions.
Fig. 1.37 Illustration of all sectors to restrain CO2 emissions growth.
• China contributed about 60% of the growth in emissions from 2000 to 2016; its emissions peak about 2030 and gradually decline toward the 2016 level by 2040.
• Emissions outside North America, Europe, and China rise about 35% from 2016 to 2040, with their share of global emissions reaching 50% by 2040.
The contribution of all sectors to restraining CO2 emissions growth is plotted in Fig. 1.37, with the top-level points listed below:
• Electricity generation accounts for about 40% of energy-related CO2 emissions; a shift to less carbon-intensive sources of electricity (e.g., wind, solar, nuclear, and natural gas) will help reduce the CO2 intensity of delivered electricity by more than 30%.
• Transportation represents about 25% of CO2 emissions, and this share is likely to grow modestly to 2040, driven by expanding commercial transportation activity.
• Light-duty vehicle CO2 emissions are expected to decline close to 10% from 2025 to 2040 as more efficient conventional vehicles and electric cars gain significant share.
• Industrial sector activities account for about 30% of CO2 emissions; over the Outlook period, efficiency gains and the growing use of less carbon-intensive energy will help reduce industrial CO2 emissions relative to GDP by about 50%.
Furthermore, the restraint of energy-related CO2 emissions is plotted in Fig. 1.38, with top-level points listed below:
• The primary driver of increasing global CO2 emissions between 2000 and 2016 was economic growth, as global GDP expanded about 55%.
Fig. 1.38 Restraining energy-related CO2 emissions.
• Improving energy efficiency across economies (energy use per unit of GDP) helped slow the growth in emissions, while the CO2 intensity of energy use remained fairly constant.
• As economic growth continues to drive CO2 emissions through 2040, efficiency gains and a shift to less CO2-intensive energy will each help substantially moderate emissions.
• As the world's economy nearly doubles by 2040, energy efficiency gains and a shift in the energy mix will contribute to a nearly 45% decline in the carbon intensity of global GDP.
The decomposition described in these bullets is the standard Kaya-style identity; a minimal worked sketch closes this section.
Energy demand as used in this Outlook refers to commercial and noncommercial energy (e.g., traditional biomass) consumed as a fuel or used as a feedstock for the production of chemicals, asphalt, lubricants, waxes, and other specialty products. Coal demand includes metallurgical coal. Gas demand includes flared gas. To avoid double counting, derived liquids (e.g., from gas-to-liquids) and synthetic gas (e.g., from coal-to-gas) are only accounted for in their final form (i.e., liquid or gas) and not in the energy type from which they were derived (i.e., gas or coal). The fuel and loss involved in the conversion process is accounted for in the energy industry subsector.
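The decomposition above is essentially the Kaya identity, CO2 = GDP × (energy/GDP) × (CO2/energy). A minimal sketch using the rounded figures quoted in this section, as illustrative arithmetic rather than an Outlook calculation:

```python
# Kaya-style check: emissions scale as GDP x carbon intensity of GDP,
# where carbon intensity = (energy/GDP) x (CO2/energy).
gdp_growth = 2.0               # world economy nearly doubles by 2040
carbon_intensity_ratio = 0.55  # carbon intensity of GDP falls ~45%

co2_ratio = gdp_growth * carbon_intensity_ratio
print(f"CO2 in 2040 vs 2016: ~{co2_ratio:.2f}x")
# ~1.1x, consistent with the projected peak about 10% above 2016 levels.
```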
1.23 Fuel cell car power plants
What if we used fuel cell powered cars in a very different way than just for transportation? A typical car fuel cell is about 45 kW to 50 kW in size. That capacity is much more
than an average home needs. So, I ask a simple question: Why can't your car power your house, both during normal conditions and when the utility system is unavailable due to widespread outages? Go one step further: Why not have all cars generate power wherever they are parked? Most people drive their car only about 4%–6% of the day, mostly for commuting. The rest of the time it just sits there. It could be put to work during that downtime.
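A back-of-envelope calculation suggests the scale of this idea. The fuel cell size and parked fraction come from the figures above; the household load is an assumed round number:

```python
# How many average homes could one parked fuel cell car serve?
fuel_cell_kw = 50.0        # typical car fuel cell size quoted above
avg_home_load_kw = 1.2     # assumed average household electric load
parked_fraction = 0.95     # cars are driven only ~4%-6% of the day

homes_at_full_output = fuel_cell_kw / avg_home_load_kw
print(f"~{homes_at_full_output:.0f} homes at average load, "
      f"with ~{parked_fraction:.0%} of the day available")  # ~42 homes
```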
References [1] List of OECD Member countries – Ratification of the Convention on the OECD, OECD, June (9) (2018). [2] World Economic Outlook Database, International Monetary Fund, April (17) (2018). [3] Report for Selected Country Groups and Subjects (PPP valuation of country GDP), IMF May (9) (2018). [4] UNFCC, The Paris Agreement, http://unfccc.int/paris_agreement/items/9485.php. [5] EMF was established at Stanford in 1976 to bring together leading experts and decisionmakers from government, industry, universities, and other research organizations to study important energy and environmental issues. For each study, the Forum organizes a working group to develop the study design, analyze and compare each model’s results and discuss key conclusions, https://emf.stanford. edu/about. EMF is supported by grants from the U.S. Department of Energy, the U.S. Environmental Protection Agency as well as industry affiliates including ExxonMobil, https://emf.stanford.edu/ industry-affiliates. [6] B. Zohuri, Hydrogen Energy: Challenges and Solutions for a Cleaner Future, 1st Ed., Springer Publishing Company, 2018. [7] B. Zohuri, Small Modular Reactors as Renewable Energy Sources, 1st Ed., Springer Publishing Company, 2018. [8] B. Zohuri, Nuclear Energy for Hydrogen Generation through Intermediate Heat Exchangers: A Renewable Source of Energy, 1st Ed., Springer Publishing Company, 2016. [9] B. Zohuri, Plasma Physics and Controlled Thermonuclear Reactions Driven Fusion Energy, 1st Ed., Springer Publishing Company, July (20) (2016). [10] B. Zohuri, Magnetic Confinement Fusion Driven Thermonuclear Energy, 1st Ed., Springer Publishing Company, July (20) (2017). [11] B. Zohuri, Plasma Physics and Controlled Thermonuclear Reactions Driven Fusion Energy, 1st Ed., Springer Publishing Company, July (20) (2016). [12] Office of Energy Efficiency & Renewable Energy, 20% Wind Energy by 2030: Increasing Wind Energy’s Contribution to U.S. Electricity Supply, https://www.energy.gov/eere/wind/20-windenergy-2030-increasing-wind-energys-contribution-us-electricity-supply. [13] Office of Energy Efficiency & Renewable Energy, Wind Vision: A New Era for Wind Power in the United States, https://www.energy.gov/eere/wind/maps/wind-vision.
CHAPTER 2
Nuclear power plant history from past to present and future In the years just before and during World War II (WW II), nuclear research focused mainly on the development of defense weapons. Later, scientists concentrated on peaceful applications of nuclear technology. An important use of nuclear energy is the generation of electricity. After years of research, scientists have successfully applied nuclear technology to many other scientific, medical, and industrial purposes. Governments have been deeply involved in the development of nuclear energy. Some of them initiated and led the development of nuclear energy since its military beginnings in WW II, because of its strategic nature and the scope of its risks and benefits. Governments later supported the development of civilian nuclear energy, primarily for the generation of electricity. In the postwar period, governments played an increasing overall role in the economies of the industrial countries. Science and technology were essential instruments of government action and nuclear energy was a highly visible symbol of their successful application.
2.1 Introduction
The concept of the atom has existed for many centuries. But we only recently began to understand the enormous power contained in its tiny mass. This is the history of our discoveries about atoms: we begin with the ideas of the Greek philosophers, then follow the path to the early scientists who discovered radioactivity, and finally reach the modern-day use of atoms as a valuable source of energy. Ancient Greek philosophers first developed the idea that all matter is composed of invisible particles called atoms. The word atom comes from the Greek word atomos, meaning indivisible. Scientists in the 18th and 19th centuries revised the concept based on their experiments. By 1900, physicists knew the atom contains large quantities of energy. British physicist Ernest Rutherford was called the father of nuclear science because of his contribution to the theory of atomic structure. In 1904 he wrote: "If it were ever possible to control at will the rate of disintegration of the radio elements, an enormous amount of energy could be obtained from a small amount of matter."
Albert Einstein developed his theory of the relationship between mass and energy 1 year later. The mathematical formula is E = mc², or "energy equals mass times the speed of light squared." It took almost 35 years for someone to prove Einstein's theory [1].
The United States operates more nuclear power plants (NPPs) than any other country. There are many different kinds of NPPs, and we will discuss a few important designs in this text. An NPP harnesses the energy inside atoms themselves and converts it to electricity, which all of us use. Section 2.2 of this chapter introduces the fission process and how it works. An NPP uses controlled nuclear fission. In this chapter, we will explore how an NPP operates and the manner in which nuclear reactions are controlled. There are several different designs for nuclear reactors. Most of them have the same basic function, but each implementation of this function separates one design from another. Several classification systems are used to distinguish between reactor types; common reactor types found throughout the world are grouped according to three classifications: (1) by moderator material, (2) by coolant material, and (3) by reaction type. As part of the history of nuclear energy and the early discoveries behind this fission-driven energy, we can state that no scientific progress ever really starts from scratch; rather, it builds on the work of countless other discoveries. Since we have to start somewhere, this story will start in Germany, in 1895, where a fellow named Roentgen was experimenting with cathode rays in a glass tube that he had sucked the air out of. At one point, he had the device covered but noticed that the photographic plates off to the side were lighting up when the device was energized. He realized that he was looking at a new kind of ray and called it what any reasonable physicist would call an unknown: the X-ray. He systematically studied these rays and took the first X-ray photo of his wife's hand 2 weeks later, thereby becoming the father of modern medical diagnostics. Soon after in France, in 1896, a man named Becquerel noticed that if he left uranium salts sitting on photographic plates, they would expose even though no cathode ray tube was energized. The energy must have been coming from inside the salts themselves. Marie Curie and her husband Pierre studied the phenomenon and isolated two new elements that exhibited this spontaneous energy production: polonium and radium. They named the phenomenon radioactivity. In the United Kingdom, Ernest Rutherford started studying radioactivity and discovered that there are two types of rays that differ from X-rays. He called them alpha and beta radiation. He later discovered the shocking fact that the vast majority of the mass of atoms is concentrated in their centers, and thus discovered the atomic nucleus. He is widely regarded today as the father of nuclear physics. He later discovered gamma radiation. In 1920, he theorized the existence of a neutral particle in the nucleus called a neutron, though there was no evidence yet that neutrons existed.
Fig. 2.1 Enrico Fermi, an Italian physicist.
In 1932, Chadwick read published results from the Curies' daughter, Irène Joliot-Curie, reporting that gamma radiation was found to knock protons out of wax. Disbelieving, he suspected they were seeing Rutherford's neutrons and performed experiments to prove this, thus discovering the neutron. In 1934, physicist Enrico Fermi (Fig. 2.1) conducted experiments in Rome that showed neutrons could split many kinds of atoms. The results surprised even Fermi himself. When he bombarded uranium with neutrons, he did not get the elements he expected; the elements were much lighter than uranium. He later led the team of scientists who created the first self-sustaining nuclear chain reaction at the University of Chicago, where he built the first man-made nuclear reactor. In the fall of 1938, German scientists Otto Hahn and Fritz Strassman fired neutrons from a source containing the elements radium and beryllium into uranium (atomic number 92). They were surprised to find lighter elements, such as barium (atomic number 56), in the leftover materials [1].
These elements had about half the atomic mass of uranium. In previous experiments, the leftover materials had been only slightly lighter than uranium. Although atoms are tiny, they have a large amount of energy holding their nuclei together. Certain isotopes of some elements can be split and will release part of their energy as heat. This splitting is called fission. The heat released in fission can be used to help generate electricity in power plants. Uranium-235 (U-235) is one of the isotopes that fissions easily. During fission, U-235 atoms absorb loose neutrons. This causes the U-235 to become unstable and split into two lighter atoms called fission products. The combined mass of the fission products is less than that of the original U-235; the reduction occurs because some of the matter changes into energy, which is released as heat. Two or three neutrons are released along with the heat. These neutrons may hit other atoms, causing more fission. This fission chain-reaction process is illustrated in Fig. 2.2. As a unique new source of energy with seemingly unlimited promise, nuclear energy enjoyed a very high priority in the economic and energy policies of the leading industrial countries for many decades. A series of fissions is called a chain reaction. If enough uranium is brought together under the right conditions, a continuous chain reaction occurs. This is called a self-sustaining chain reaction, and it creates a great deal of heat, which can be used to help generate electricity.
Fig. 2.2 Illustration of the fission chain reaction.
2.2 Fission reaction energy generation

There is a strategic as well as economic necessity for nuclear power in the United States and indeed most of the world. The strategic importance lies primarily in the fact that one large NPP saves more than 50,000 barrels of oil per day. At $30 to $40 per barrel (1982), such a power plant would pay for its capital cost in a few short years. For those countries that rely on oil but do not produce it, or that must reduce the importation of foreign oil, these strategic and economic advantages are obvious. For those countries that are oil exporters, nuclear power represents insurance against the day when oil is depleted; a modest start now will assure that they are not left behind when the time comes to rely on nuclear technology. The unit costs per kilowatt-hour for nuclear energy are now comparable to or lower than the unit costs for coal in most parts of the world. Other advantages are the absence of the environmental problems associated with coal- or oil-fired power plants and the near absence of issues of mine safety, labor problems, and transportation bottlenecks. Natural gas is a good, relatively clean-burning fuel, but it has availability problems in many countries and should, in any case, be conserved for small-scale industrial and domestic uses. Thus, nuclear power is bound to become the social choice relative to other societal risks and overall health and safety risks. Nuclear fission is the process of splitting atoms, or fissioning them, as illustrated in Fig. 2.2.
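As a back-of-the-envelope check on these figures, the short Python sketch below converts the roughly 200 MeV released per U-235 fission (quoted in the next section) into a fission rate, and tests the 50,000-barrels-per-day claim under assumed plant parameters. The 1000-MW(e) plant size, 33% oil-plant efficiency, and 6.1 GJ-per-barrel heating value are illustrative assumptions, not figures from the text.

```python
# Back-of-the-envelope checks for Section 2.2 (illustrative values only).

MEV_TO_J = 1.602e-13           # joules per MeV
E_FISSION_J = 200 * MEV_TO_J   # ~3.2e-11 J released per U-235 fission

# Fissions per second needed to sustain 1 GW of thermal power
power_w = 1.0e9
fissions_per_s = power_w / E_FISSION_J
print(f"Fissions/s for 1 GW(t): {fissions_per_s:.2e}")   # ~3.1e19

# Rough check of the "50,000 barrels of oil per day" claim for one large plant.
# Assumed: 1000 MW(e) plant, oil-plant efficiency ~33%, ~6.1 GJ per barrel.
plant_mwe = 1000.0
oil_plant_eff = 0.33
gj_per_barrel = 6.1
fuel_energy_per_day_gj = plant_mwe * 1e6 * 86400 / oil_plant_eff / 1e9
barrels_per_day = fuel_energy_per_day_gj / gj_per_barrel
print(f"Equivalent oil burn: {barrels_per_day:,.0f} barrels/day")  # ~43,000
```

The result lands in the tens of thousands of barrels per day, the same order as the figure quoted above.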
2.3 The first fission chain reaction

Early in WW II, the scientific community in the United States, including those Europeans now calling the United States their safe home, pursued the idea that uranium fission and the production of excess neutrons could be the source of extraordinary new weapons. They knew Lise Meitner's interpretation, in Sweden, of Hahn's experiments would likely be known in Germany. Clearly, there might now be a race commencing for the development and production of a new, super weapon based on the fission of $^{235}\mathrm{U}_{92}$ or $^{239}\mathrm{Pu}_{94}$. By early 1942, it was known that the two naturally occurring isotopes of uranium reacted with neutrons as follows:

$$^{235}\mathrm{U}_{92} + {}^{1}n_{0} \rightarrow \text{fission products} + (2.5)\,{}^{1}n_{0} + 200\ \mathrm{MeV\ energy}$$

$$^{238}\mathrm{U}_{92} + {}^{1}n_{0} \rightarrow {}^{239}\mathrm{U}_{92}$$

$$^{239}\mathrm{U}_{92} \rightarrow {}^{239}\mathrm{Np}_{93} + \beta_{-1}, \quad t_{1/2} = 23.5\ \text{minutes}$$

$$^{239}\mathrm{Np}_{93} \rightarrow {}^{239}\mathrm{Pu}_{94} + \beta_{-1}, \quad t_{1/2} = 2.33\ \text{days}$$
Each U-235 that undergoes fission produces an average of 2.5 neutrons. In contrast, some U-238 nuclei capture neutrons and become U-239.
Fig. 2.3 The first generations of a nuclear chain reaction [24].
The U-239 subsequently emits two beta particles to produce Pu-239, which is also fissile and produces energy by the same mechanism as the uranium. A flow sheet for uranium fission is shown in Fig. 2.3 [1]. The answers to two questions were critical to the production of plutonium for atomic bombs:
1. Is it possible, using natural uranium (99.3% U-238 and 0.7% U-235), to achieve a controlled chain reaction on a large scale? If so, some of the excess neutrons produced by the fission of U-235 would be absorbed by U-238 and produce fissile Pu-239.
2. How can the relatively small quantities of Pu-239 be separated, in a reasonable period of time, from the unreacted uranium and the highly radioactive fission-product elements?
Although fission had been observed on a small scale in many laboratories, no one had carried out a controlled chain reaction that would provide continuous production of plutonium for isolation. Enrico Fermi thought that he could achieve a controlled chain reaction using natural uranium. He had started this work with Leo Szilard at Columbia University but moved to the University of Chicago in early 1942.
The first nuclear reactor, called a pile, was a daring and sophisticated experiment that required nearly 50 tons of machined and shaped uranium and uranium oxide pellets, along with 385 tons (the equivalent of four railroad coal hoppers) of graphite blocks, machined on site. The pile itself was assembled in a squash court under the football field at the University of Chicago from the layered graphite blocks and uranium and uranium oxide lumps (Fermi's term) arranged roughly in a sphere with an anticipated 13-ft radius. Neutron-absorbing, cadmium-coated control rods were inserted in the pile. By slowly withdrawing the rods, neutron activity within the pile was expected to increase, and at some point, Fermi predicted, there would be one neutron produced for each neutron absorbed either in producing fission or by the control rods [1] (see Fig. 2.4). On December 2, 1942, with 57 of the anticipated 75 layers in place, Fermi began the experiment, and the first controlled nuclear chain reaction occurred. At around 3:20 p.m. the reactor went critical; that is, it produced one neutron for every neutron absorbed by the uranium nuclei. Fermi allowed the reaction to continue for the next 27 minutes before inserting the neutron-absorbing control rods. The energy-releasing nuclear chain reaction stopped, as Fermi predicted it would.
Fig. 2.4 CP-1—Graphite blocks with 3-inch diameter uranium cylinders inserted—part of a layer of CP-1, the first nuclear reactor. A layer of graphite blocks without inserted uranium is seen covering the active layer [24].
Fig. 2.5 The first controlled chain reaction, Stagg Field, Chicago, December 2, 1942. (Courtesy of the Argonne National Laboratory).
In addition to excess neutrons and energy, the pile also produced a small amount of Pu-239, the other known fissile material (see Fig. 2.5). The achievement of the first sustained nuclear reaction was the beginning of a new age in nuclear physics and the study of the atom. Humankind could now use the tremendous potential energy contained in the nucleus of the atom. However, while a controlled chain reaction had been achieved with natural uranium and could produce plutonium, it would still be necessary to separate U-235 from U-238 to build a uranium bomb [1]. On December 28, 1942, upon reviewing a report from his advisors, President Franklin Roosevelt approved building full-scale plants to produce both U-235 and Pu-239. This changed the effort to develop nuclear weapons from experimental work in academic laboratories administered by the US Office of Scientific Research and Development to a huge effort by private industry. This work, supervised by the US Army Corps of Engineers, was codenamed the Manhattan Project. It spread throughout the entire United States, with the facilities for uranium and plutonium production being located at Oak Ridge, Tennessee, and Hanford, Washington, respectively. Work on plutonium production continued at the University of Chicago, at what became known as the Metallurgical Laboratory or Met Lab. A new laboratory at Los Alamos, New Mexico, became the focal point for development of the uranium and plutonium bombs.
2.4 The first self-sustaining fission chain reaction

In 1939, Bohr came to America. He shared with Einstein the Hahn–Strassman–Meitner discoveries. Bohr also met Fermi at a conference on theoretical physics in Washington, DC. They discussed the exciting possibility of a self-sustaining chain reaction. In such a process, atoms could be split to release large amounts of energy. Scientists throughout the world began to believe a self-sustaining chain reaction might be possible. It would happen if enough uranium could be brought together under the proper conditions. The amount of uranium needed to make a self-sustaining chain reaction is called a critical mass.
Fig. 2.6 Picture of Fermi and Szilard.
Fermi and his associate, Leo Szilard, suggested a possible design for a uranium chain reactor in 1941. Their model consisted of uranium placed in a stack of graphite to make a cube-like frame of fissionable material (see Fig. 2.6). With neutrons available, everyone was shooting them at various nuclides. Soon enough, Hahn and Strassman shot them at uranium atoms and saw some strange behavior, which Lise Meitner and her nephew Frisch identified as the splitting of the atom, releasing much energy. They named it fission, after binary fission in biology. Szilard recognized fission as a potential way to form a chain reaction (which he had been considering for a long time). He and Fermi did some neutron multiplication studies and saw that it was indeed possible. They went home, knowing that the world was about to change forever. Szilard, Wigner, and Teller wrote a letter to President Roosevelt, warning of nuclear weapons, and had Einstein sign and send it (he was more famous). Roosevelt authorized a small study into uranium. In 1942, Fermi successfully created the first manmade nuclear chain reaction in a squash court under the stadium at the University of Chicago. The Manhattan Project kicked into full gear. Two types of bombs were pursued simultaneously, one made with enriched uranium and the other made with plutonium. Giant secret cities were built very quickly. The one in Oak Ridge, TN had a reactor that created the first gram quantities of plutonium for study, but its main task was to enrich uranium. The one in Hanford, WA was the site of the plutonium production reactors (the first high-power nuclear reactors) and plutonium extraction chemistry plants.
Fig. 2.7 Lise Meitner and Otto R. Frisch.
Another site, in Los Alamos, NM, was where the technology that turns weapons materials into weapons was developed. Both paths to the bomb were successful. The more uncertain design, the plutonium implosion device (like Fat Man), was successfully tested at the Trinity site in New Mexico in July 1945. As noted in Section 2.3, the scientific community knew that Lise Meitner's interpretation of Hahn's experiments would likely be known in Germany, and that a race for a super weapon based on the fission of 235U or 239Pu might already be under way (see Fig. 2.7). The decision was made to drop Little Boy and Fat Man on Hiroshima and Nagasaki, Japan, on August 6 and 9, 1945. The cities were devastated, with up to 250,000 people dead. Japan surrendered unconditionally 6 days later, on August 15, 1945. This was the first time the public realized that the United States had been developing bombs [2].
2.5 Nuclear criticality concept

A nuclear reactor works on the principle of a chain reaction: an initial neutron is absorbed by a fissile nuclide, and during the process of fission, additional neutrons are released to replace the neutron that was consumed.
If more neutrons are produced than are consumed, then the neutron population grows. If fewer neutrons are produced than are consumed, the neutron population shrinks. The number of fissions caused by the neutron population determines the energy released. In order to quantify this concept, let us define a multiplication factor keff as the ratio of neutron production to neutron consumption.
$$k_{\mathrm{eff}} = \frac{\text{Neutron Production}}{\text{Neutron Consumption}} \tag{2.1}$$
This equation is central to maintaining a controlled chain reaction in a NPP, whether it is a traditional Generation III (GEN III) plant, an advanced design in the form of a small modular reactor (SMR) of Generation IV (GEN IV), or an even smaller unit such as a micro nuclear reactor.
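A minimal sketch of this bookkeeping, with made-up neutron counts, shows how Eq. (2.1) separates the subcritical, critical, and supercritical states:

```python
# Illustrative use of Eq. (2.1): k_eff = neutron production / consumption.
# The production/consumption counts below are invented for demonstration.

def k_eff(production: float, consumption: float) -> float:
    """Effective multiplication factor, Eq. (2.1)."""
    return production / consumption

def classify(k: float, tol: float = 1e-6) -> str:
    """Name the reactor state implied by a given k_eff."""
    if abs(k - 1.0) < tol:
        return "critical (steady power)"
    return "supercritical (power rising)" if k > 1.0 else "subcritical (power falling)"

for prod, cons in [(100.0, 100.0), (101.0, 100.0), (99.0, 100.0)]:
    k = k_eff(prod, cons)
    print(f"k_eff = {k:.3f} -> {classify(k)}")
```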
2.6 Nuclear energy expands and stagnates for peaceful uses

Historically, the first nuclear reactor was only the beginning. Most early atomic research focused on developing an effective weapon for use in WW II; the work was done under the code name Manhattan Project. However, during the war and postwar time frame, some scientists worked on making breeder reactors, which would produce fissionable material in the chain reaction and therefore create more fissionable material than they used. Enrico Fermi led a group of scientists in initiating the first self-sustaining nuclear chain reaction. The historic event, which occurred on December 2, 1942, in Chicago, is recreated in the painting shown in Fig. 2.8. As fission energy expanded into peaceful applications, an experimental liquid-metal cooled reactor in Idaho called the Experimental Breeder Reactor (EBR)-I was attached to a generator in 1951, producing the first nuclear-generated electricity.
Fig. 2.8 Enrico Fermi and a group of scientists in Chicago.
But before civilian power plants came to be, Admiral Rickover pushed to use reactors to power submarines, since they would not need to refuel or to use oxygen for combustion. The USS Nautilus launched in 1954 as the first nuclear-powered submarine. Soon after, the Soviet Union opened the first nonmilitary, electricity-producing reactor [3]. Based on the submarine reactor design, the Shippingport reactor opened in 1957 as the first commercial reactor in the United States. Through the 1960s and 1970s, many nuclear reactors were built for making electricity, using designs very similar to those made for the submarines. They worked well and produced cheap, emission-free electricity with a very low mining and transportation footprint. A nuclear-powered future was envisioned by many. In 1974, France decided to make a major push for nuclear energy and ended up with 75% of its electricity coming from nuclear reactors. The United States built 104 reactors and got about 20% of its electricity from them. Eventually, labor shortages and construction delays started driving the cost of nuclear reactors up, slowing their growth. The 1979 Three Mile Island accident and the 1986 Chernobyl accident further slowed the deployment of nuclear reactors, and tighter regulations brought costs higher. The 1986 passive safety tests at EBR-II proved that advanced reactor designs (besides the ones originally used to make submarines) can be substantially safer: major failures were deliberately induced with no control rods inserted, and the reactor shut itself down automatically. In 1994, the Megatons to Megawatts treaty with Russia was signed to down-blend nuclear warheads into reactor fuel; eventually, 10% of US electricity came from dismantled nuclear weapons. In the late 1990s and 2000s, the phenomenal safety record of the US commercial reactor fleet (zero deaths) and the smooth operation of reactors, combined with ongoing worries about global climate change due to carbon emissions, brought about substantial talk of a "nuclear renaissance," in which new builds might start up substantially again. Meanwhile, strong interest in Asia strengthened, and ambitious plans to build large fleets were made to satisfy growing energy needs without adding more fossil fuel. In March 2011, a large earthquake and tsunami inundated the reactors at Fukushima Daiichi. Backup diesel generators failed, and the decay heat could not be removed. Fuel melted, and hydrogen built up and exploded (outside of containment). Radiation was released, but much of it went out to sea instead of into populated areas. No one is expected to die from the radiation dose.
2.7 Government and nuclear energy

After the war, the United States government encouraged the development of nuclear energy for peaceful civilian purposes. Congress created the Atomic Energy Commission (AEC) in 1946. The AEC authorized the construction of EBR-I at a site in Idaho, and the reactor generated the first electricity from nuclear energy on December 20, 1951.
Fig. 2.9 The Experimental Breeder Reactor I.
Fig. 2.9 shows the Experimental Breeder Reactor I, which generated enough electricity to light four 200-W bulbs on December 20, 1951. This milestone symbolized the beginning of the nuclear power industry. A major goal of nuclear research in the mid-1950s was to show that nuclear energy could produce electricity for commercial use. The first commercial electricity-generating plant powered by nuclear energy was located in Shippingport, Pennsylvania. It reached its full design power in 1957. Light-water reactors (LWRs) like Shippingport use ordinary water to cool the reactor core during the chain reaction; they were the best design then available for NPPs. Private industry became more and more involved in developing LWRs after Shippingport became operational, and federal nuclear energy programs shifted their focus to developing other reactor technologies. The nuclear power industry in the United States grew rapidly in the 1960s. Utility companies saw this new form of electricity production as economical, environmentally clean, and safe. In the 1970s and 1980s, however, growth slowed. Demand for electricity decreased and concern grew over nuclear issues, such as reactor safety, waste disposal, and other environmental considerations. Still, the United States had twice as many operating NPPs as any other country in 1991, more than one-fourth of the world's operating plants, and nuclear energy supplied almost 22% of the electricity produced in the United States.
Regulation of nuclear safety and security remains a core function of government. It should guarantee the existence of an independent, competent regulator with adequate resources and authority. The emphasis now is on the safety culture of organizations, beginning at the most senior levels. This brings in the need to ensure good governance. Nuclear regulation should be in line with modern regulatory practice across the government, allowing nuclear energy to compete fairly. Governments looking for a future contribution from nuclear energy should ensure that regulation is prepared to deal with issues of decommissioning, refurbishment, uprating, life extension, and new reactor designs.

In the 1980s and 1990s, problems with exclusive government ownership and control of production equipment appeared. Governments came under pressure to cut expenditures and diminish their direct involvement in the economy. Expanding international trade forced all industries to be more competitive. Markets were championed as an alternative to government direction and regulation. Simultaneously, environmental protection and the concept of sustainable development increased in importance in policy making, while the need to ensure security of energy supplies persisted or even increased.

At the end of 1991, 31 other countries also had NPPs in commercial operation or under construction. That is an impressive worldwide commitment to nuclear power technology. During the 1990s, the United States faced several major energy issues and developed several major goals for nuclear power:
• To maintain exacting safety and design standards;
• To reduce economic risk;
• To reduce regulatory risk; and
• To establish an effective high-level nuclear waste disposal program.
Several of these nuclear power goals were addressed in the Energy Policy Act of 1992, which was signed into law in October of that year. The United States is working to achieve these goals in a number of ways. For instance, the US Department of Energy (DOE) has undertaken a number of joint efforts with the nuclear industry to develop the next generation of NPPs. These plants are being designed to be safer and more efficient. There is also an effort under way to make nuclear plants easier to build by standardizing the design and simplifying the licensing requirements, without lessening safety standards. In the area of waste management, engineers are developing new methods and places to store the radioactive waste produced by nuclear plants and other nuclear processes. Their goal is to keep the waste away from the environment and people for very long periods of time.

Scientists are also studying the power of nuclear fusion. Fusion occurs when atoms join, or fuse, rather than split. Fusion is the energy that powers the Sun.
Fig. 2.10 In Oak Ridge, Tennessee, workers package isotopes.
On Earth, the most promising fusion fuel is deuterium, a form of hydrogen. It comes from water and is plentiful, and it is also likely to create less radioactive waste than fission. However, scientists are still unable to produce useful amounts of power from fusion and are continuing their research. See Fig. 2.10, where, in Oak Ridge, Tennessee, workers package isotopes, which are commonly used in science, industry, and medicine. In the current era of privatization and competitive markets, government still has an essential role in energy, electricity, and nuclear energy. While, in some countries, it may not exercise as much direct control through ownership and economic regulation as in the past, it still has the basic responsibility for creating policy frameworks within which market forces can function and public policy goals can be achieved. Thus, with fewer direct instruments, governments will need alternative policy measures.
2.8 Fundamentals of fission nuclear reactors

Today many nations are considering an expanded role for nuclear power in their energy portfolios. This expansion is driven by concerns about global warming, growth in energy demand, and the relative costs of alternative energy sources.
In 2008, 435 nuclear reactors in 30 countries provided 16% of the world's electricity. In January 2009, 43 reactors were under construction in 11 countries, with several hundred more projected to come on line globally by 2030. Concerns over energy resource availability, climate change, air quality, and energy security suggest an important role for nuclear power in future energy supplies. While the current Generation II and III NPP designs provide a secure and low-cost electricity supply in many markets, further advances in nuclear energy system design can broaden the opportunities for the use of nuclear energy. To explore these opportunities, the US DOE's Office of Nuclear Energy has engaged governments, industry, and the research community worldwide in a wide-ranging discussion on the development of next-generation nuclear energy systems known as "Generation IV." Nuclear reactors produce energy through a controlled fission chain reaction. While most reactors generate electric power, some can also produce plutonium for weapons and reactor fuel. Power reactors use the heat from fission to produce steam, which turns turbines to generate electricity. In this respect, they are similar to plants fueled by coal and natural gas. The components common to all nuclear reactors include a fuel assembly, control rods, a coolant, a pressure vessel, a containment structure, and an external cooling facility (see Fig. 2.11).
Fig. 2.11 A nuclear power plant. (Courtesy of R2 Controls).
Fig. 2.12 Types of nuclear reactors. (Courtesy of Chem Cases).
In a nuclear reactor, neutrons interact with the nuclei of the surrounding atoms. For some nuclei (e.g., U-235), an interaction with a neutron can lead to fission: the nucleus is split into two parts, giving rise to two new nuclei (the so-called fission products), energy, and a number of new highly energetic neutrons. Other possible interactions are absorption (the neutron is removed from the system) and simple collisions, in which the incident neutron transfers energy to the nucleus, either elastically (hard-sphere collision) or inelastically. The speed of the neutrons in the chain reaction determines the reactor type (see Fig. 2.12). Thermal reactors use slow neutrons to maintain the reaction. These reactors require a moderator to reduce the speed of neutrons produced by fission. Fast neutron reactors, also known as fast breeder reactors, use high-speed, unmoderated neutrons to sustain the chain reaction [24]. Thermal reactors operate on the principle that U-235 undergoes fission more readily with slow neutrons than with fast ones. Light water (H2O), heavy water (D2O), and carbon in the form of graphite are the most common moderators. Since slow-neutron reactors are highly efficient in producing fission in U-235, they use fuel assemblies containing either natural uranium (0.7% U-235) or slightly enriched uranium (0.9%–2.0% U-235) fuel. Rods composed of neutron-absorbing material such as cadmium or boron are inserted into the fuel assembly. The position of these control rods in the reactor core determines the rate of the fission chain reaction. The coolant is a liquid or gas that removes the heat from the core and produces steam to drive the turbines. In reactors using either light water or heavy water, the coolant also serves as the moderator. Reactors employing gaseous coolants (CO2 or He) use graphite as the moderator.
The pressure vessel, made of heavy-duty steel, holds the reactor core containing the fuel assembly, control rods, moderator, and coolant. The containment structure, composed of thick concrete and steel, inhibits the release of radiation in case of an accident and also secures components of the reactor from potential intruders. Finally, the most obvious components of many NPPs are the cooling towers, the external components that provide cool water for condensing the steam for recycling into the containment structure. Cooling towers are also employed with coal and natural gas plants.
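To see why the choice of moderator matters so much, the short sketch below estimates the number of elastic collisions needed to slow a fission neutron from about 2 MeV to thermal energy (about 0.025 eV). The mean logarithmic energy decrement ξ values used are standard textbook figures, not values from this text, and should be treated as illustrative:

```python
import math

# Rough number of elastic collisions to thermalize a fission neutron:
# n = ln(E0 / E) / xi, with xi the mean logarithmic energy decrement
# per collision (assumed textbook values).

E0, E_TH = 2.0e6, 0.025   # initial and thermal neutron energies, eV
XI = {
    "light water (H2O)": 0.920,
    "heavy water (D2O)": 0.509,
    "graphite (C)":      0.158,
}

total_decrement = math.log(E0 / E_TH)   # ~18.2
for moderator, xi in XI.items():
    print(f"{moderator:18s}: ~{total_decrement / xi:.0f} collisions")
```

The large collision count for graphite (roughly 100 or more, versus about 20 for light water) is one reason graphite-moderated piles such as CP-1 had to be so large.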
2.9 Reactor fundamentals

It is important to realize that while the U-235 in the fuel assembly of a thermal reactor is undergoing fission, some of the fertile U-238 present in the assembly is also absorbing neutrons to produce fissile Pu-239. Approximately one-third of the energy produced by a thermal power reactor comes from fission of this plutonium. Power reactors and those used to produce plutonium for weapons operate in different ways to achieve their goals. Production reactors produce less energy and thus consume less fuel than power reactors. The removal of fuel assemblies from a production reactor is timed to maximize the amount of plutonium in the spent fuel (see Fig. 2.13). Fuel rods are removed from production reactors after only several months in order to recover the maximum amount of plutonium-239. The fuel assemblies remain in the core of a power reactor for up to 3 years to maximize the energy produced. However, it is possible to recover some plutonium from the spent fuel assemblies of a power reactor. The power output or capacity of a reactor used to generate electricity is measured in megawatts of electricity, MW(e).
Fig. 2.13 The fate of plutonium in a thermal reactor. (Courtesy of Chem Cases).
However, due to the inefficiency of converting heat into electricity, this represents only about one-third of the total thermal energy, MW(t), produced by the reactor. Plutonium production is related to MW(t): a production reactor operating at 100 MW(t) can produce 100 g of plutonium per day, or enough for one weapon every 2 months. Another important property of a reactor is its capacity factor. This is the ratio of its actual output of electricity over a period of time to its output if it had been operated at its full capacity. The capacity factor is affected by the time required for maintenance and repair and for the removal and replacement of fuel assemblies. The average capacity factor for US reactors has increased from 50% in the early 1970s to over 90% today. This increase in production from existing reactors has kept electricity affordable.
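The capacity factor definition and the production-reactor rule of thumb quoted above lend themselves to a quick worked example; the 7.9-TWh annual output below is an invented illustration, not data from the text:

```python
# Capacity factor as defined in the text: actual energy delivered over a
# period divided by the energy at continuous full power.

def capacity_factor(energy_mwh: float, capacity_mw: float, hours: float) -> float:
    return energy_mwh / (capacity_mw * hours)

# Example: a 1000-MW(e) unit delivering 7.9 TWh over one year (8760 h)
cf = capacity_factor(7.9e6, 1000.0, 8760.0)
print(f"Capacity factor: {cf:.0%}")   # ~90%, today's US fleet average

# The text's production-reactor figure: 100 MW(t) -> ~100 g Pu-239 per day,
# i.e., roughly 1 g per MW(t)-day.
mwt, days = 100.0, 60.0
pu_kg = mwt * 1.0 * days / 1000.0
print(f"Pu produced: ~{pu_kg:.0f} kg in {days:.0f} days")  # ~6 kg, one weapon
```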
2.10 Thermal reactors

Currently, the majority of NPPs in the world are water-moderated thermal reactors. They are categorized as either light-water reactors (LWRs) or heavy-water reactors. LWRs use purified natural water (H2O) as the coolant/moderator, while heavy-water reactors employ heavy water, deuterium oxide (D2O). In LWRs, the water is either pressurized to keep it in superheated form (in a pressurized water reactor, PWR) or allowed to vaporize, forming a mixture of water and steam (in a boiling water reactor, BWR). In a PWR (Fig. 2.14), superheated water flowing through tubes in the reactor core transfers the heat generated by fission to a heat exchanger, which produces steam in a secondary loop to generate electricity. None of the water flowing through the reactor core leaves the containment structure. In a BWR (Fig. 2.15), the water flowing through the core is converted directly to steam and leaves the containment structure to drive the turbines. LWRs use low-enriched uranium as fuel. Enriched fuel is required because natural water absorbs some of the neutrons, reducing the number of nuclear fissions.
Fig. 2.14 Schematic of PWR. (In the pressurized water reactor, the water which flows through the reactor core is isolated from the turbine.) (Courtesy of DOE).
Fig. 2.15 Schematic of BWR. (In the boiling water reactor, the same water loop serves as moderator, coolant for the core, and steam source for the turbine.) (Courtesy of DOE).
All of the 103 NPPs in the United States are LWRs; 69 are PWRs and 34 are BWRs.
2.11 Nuclear power plants and their classifications

A NPP uses controlled nuclear fission. In this section, we will explore how a NPP operates and the manner in which nuclear reactions are controlled. There are several different designs for nuclear reactors. Most of them perform the same basic function, but the implementation of that function separates one design from another. There are several classification systems used to distinguish between reactor types. Common reactor types found throughout the world are briefly explained below according to three types of classification:
1. Classified by moderator material (i.e., LWR, graphite-moderated reactor, or heavy-water reactor).
2. Classified by coolant material (i.e., PWR, BWR, or gas-cooled reactor).
3. Classified by reaction type (i.e., fast neutron reactor, thermal neutron reactor, or liquid-metal fast breeder reactor).
For further details of these classifications, refer to the book by Zohuri and McDaniel [25].
2.12 Going forward with nuclear energy

Research in other nuclear areas also continued in the 1990s. Nuclear technology plays an important role in medicine, industry, science, and food and agriculture, as well as power generation. For example, doctors use radioisotopes to identify and investigate the causes of disease.
They also use them to enhance traditional medical treatments. In industry, radioisotopes are used for measuring microscopic thicknesses, detecting irregularities in metal casings, and testing welds. Archaeologists use nuclear techniques to date prehistoric objects accurately and to locate structural defects in statues and buildings. Nuclear irradiation is used in preserving food; it causes less vitamin loss than canning, freezing, or drying. Nuclear research has benefited mankind in many ways. But today, the nuclear industry faces huge, very complex issues. How can we minimize the risk? What do we do with the waste? The future will depend on advanced engineering, scientific research, and the involvement of an enlightened citizenry. In March 2013, the famous climate scientist James Hansen copublished a paper from NASA computing that, even with worst-case estimates of nuclear accidents and nuclear waste, nuclear energy as a whole has saved 1.8 million lives and counting by offsetting the air-pollution-related deaths that come from fossil fuel plants. In September 2013, Voyager 1 entered interstellar space, 36 years after its launch. It is powered by a plutonium-238 radioisotope thermoelectric generator. Fig. 2.16 shows the NASA Voyager at age 40, still reaching for the stars.
Fig. 2.16 Voyager I image.
The Voyager 1 and 2 spacecraft explored Jupiter and Saturn, with Voyager 2 going on to Uranus and Neptune, before starting their journeys toward interstellar space.
2.13 Small modular reactors

Chapter 1 of this book covered the different generations of nuclear power reactors, including the ones in operation in the United States at the present time under the Generation III (GEN III) configuration, as well as the conceptual designs of the six types of Generation IV (GEN IV) reactors, among which some of the conceptual SMR configurations fall. These are considered the new generation of NPPs, and their development is coordinated by the Generation IV International Forum. Small and medium-sized or modular reactors are an option to fulfill the need for flexible power generation for a wider range of users and applications. SMRs, deployable either as single or multi-module plants, offer the possibility of combining nuclear with alternative energy sources, including renewables; they provide flexible and affordable power generation and, most important, are very safe to operate (see the next section of this chapter), while the efficiency of their thermal output is under investigation by this author [4] as well as other scientists (Zohuri et al.) [5].

Global interest in small and medium-sized or modular reactors has been increasing due to their ability to meet the need for flexible power generation for a wider range of users and applications and to replace ageing fossil fuel-fired power plants. They also display an enhanced safety performance through inherent and passive safety features, offer better upfront capital cost affordability, and are suitable for cogeneration and nonelectric applications. In addition, they offer options for remote regions with less developed infrastructures and the possibility of synergetic hybrid energy systems that combine nuclear and alternate energy sources, including renewables. Many Member States are focusing on the development of SMRs, which are defined as advanced reactors that produce electricity of up to 300 MW(e) per module. These reactors have advanced engineered features, are deployable either as a single or multi-module plant, and are designed to be built in factories and shipped to utilities for installation as demand arises. There are about 50 SMR designs and concepts globally. Most of them are in various developmental stages, and some are claimed to be near-term deployable. There are currently four SMRs in advanced stages of construction in Argentina, China, and Russia, and several existing and newcomer nuclear energy countries are conducting SMR research and development. For this new generation of SMRs, the International Atomic Energy Agency (IAEA) is coordinating the efforts of its Member States to develop SMRs of various types by taking a systematic approach to the identification and development of key enabling technologies, with the goal of achieving competitive and reliable performance of such reactors.
The Agency also helps Member States address common infrastructure issues that could facilitate SMR deployment.
2.14 Small modular reactors: safety, security, and cost concerns

In the aftermath of the major accidents at Three Mile Island in 1979 and Chernobyl in 1986, and then the devastating failure of Japan's Fukushima NPP in March 2011, nuclear power largely fell out of favor, and some countries applied the brakes to their nuclear programs. Concerns about climate change and air pollution, as well as growing demand for electricity, led many governments to reconsider their aversion to nuclear power, which emits little carbon dioxide and had built up an impressive safety and reliability record. Some countries reversed their phaseouts of nuclear power, some extended the lifetimes of existing reactors, and many developed plans for new ones. Despite all these concerns and issues with respect to nuclear energy, we still face the question of why we need nuclear power as a clean source of energy, particularly when weighing the arguments for renewable sources of energy [6].

Today, roughly 60 nuclear plants are under construction worldwide, which will add about 60,000 MW of generating capacity, equivalent to a sixth of the world's current nuclear power capacity; however, some of this momentum was lost after the Fukushima episode of March 2011. Nuclear power's track record of providing clean and reliable electricity compares favorably with other energy sources. Low natural gas prices, mostly the result of newly accessible shale gas, have brightened the prospects that efficient gas-burning power plants could cut emissions of carbon dioxide and other pollutants relatively quickly by displacing old, inefficient coal plants, but the historical volatility of natural gas prices has made utility companies wary of putting all their eggs in that basket. Besides, in the long run, burning natural gas would still release too much carbon dioxide. Wind and solar power are becoming increasingly widespread, but their intermittent and variable supply makes them poorly suited for large-scale use in the absence of an affordable way to store electricity. Hydropower, meanwhile, has very limited prospects for expansion in the United States because of environmental concerns and the small number of potential sites [7].

One aspect of NPP safety that should be considered in the design and operation of such an energy source is reactor stability. Understanding the time-dependent behaviors of nuclear reactors and the methods of their control is essential to the operation and safety of NPPs. This chapter provides researchers and engineers in nuclear engineering with general yet comprehensive information on the fundamental theory of nuclear reactor kinetics and control and the state-of-the-art practice in actual plants, as well as an idea of how to bridge the two. The dynamics and stability of engineering equipment affect its economical, safe, and reliable operation. In this chapter, we discuss the knowledge that constitutes today's practice for the design of reactor power plants and their stability, as well as the techniques available to designers.
Stable power processes are never guaranteed, however. An assortment of unstable behaviors can wreck power apparatus, including mechanical vibration, malfunctioning control apparatus, unstable fluid flow, unstable boiling of liquids, or combinations thereof. Failures and weaknesses of safety management systems are the underlying causes of most accidents [8].

The safety and capital cost challenges involved with traditional NPPs may be considerable, but a new class of reactors in the development stage holds promise for addressing them. These reactors, called SMRs, produce anywhere from 10 to 300 MW, rather than the 1000 MW produced by a typical reactor. An entire reactor, or at least most of it, can be built in a factory and shipped to a site for assembly, where several reactors can be installed together to compose a larger nuclear power station. SMRs have attractive safety features too. Their design often incorporates natural cooling features that can continue to function in the absence of external power, and the underground placement of the reactors and the spent-fuel storage pools is more secure. As concisely stated by the IAEA in TECDOC-1524, and reproduced in the bullet points below, SMRs are highlighted as a viable alternative to the NPPs that have been used as desalination plant energy sources for the following reasons:
• SMRs have lower investment costs;
• Almost all SMR concepts appear to show increased availability (>90%);
• Because of inherent safety features, most SMRs have good potential for location near population centers, hence lowering transport costs.
Since SMRs are smaller than conventional nuclear plants, the construction costs for individual projects are more manageable, and thus the financing terms may be more favorable. And because they are factory-assembled, the on-site construction time is shorter. The utility company can build up its nuclear power capacity step by step, adding additional reactors as needed, which means that it can generate revenue from electricity sales sooner. This helps not only the plant owner but also customers, who are increasingly being asked to pay higher rates today to fund tomorrow's plants [7].

Generation IV reactors all have a combination of novel features, such as modular construction, rail transportability, and seismic isolation techniques, that are not present in current Generation II reactor designs and that first appeared on some Generation III and III+ reactor designs. The trend to modularization was adopted by the Westinghouse AP1000, in which factory-prefabricated modules, consisting of steel plates and additional structural elements such as tie rods and stiffening elements, are delivered to the site, assembled into larger crane-liftable modules, and then installed and filled with concrete. This type of construction technique is different from the traditional stick-build, in which all construction work is performed on site from the ground up.
The purpose of modular construction is to cut down on construction time and cost by transferring part of the construction process to a factory setting in which highly repetitive tasks can be automated or performed in a controlled environment. By the time the order book reaches "nth-of-a-kind," tooling and supply chains are in place, quality issues are ironed out, and vendors and contractors are far along the assembly and construction learning curve. Modular construction thus lends itself better to SMRs.
2.14.1 Safety concepts of the MSR

In current LWR licensing, there are two major guidelines for safety. One is used as a guide for the safety design of the reactor at the conceptual design stage, called the "General Design Criteria (GDC)." The other is used for accident analysis; it defines the events to be studied in licensing and provides the criteria for the analyzed results. Neither guideline has yet been issued for the molten salt reactor (MSR). As for the former, GDC for the fluoride salt-cooled high-temperature reactor are now being established in the framework of the American Nuclear Society [9]. The MSR has very high safety because of the following unique features, including practically no possibility of a severe accident (Furukawa et al., 2007) [10]:
1. Its primary loop and secondary loop are operated at a very low pressure (about 0.5 MPa), which essentially eliminates accidents such as system rupture due to high pressure.
2. The molten salt is chemically inert; that is, it does not react violently with air or water and is not flammable. The corrosion of Hastelloy N can be minimized by appropriate chemical control and maintenance of the molten salt.
3. A pressure increase in the primary loop is not a credible event, because the boiling temperature of the fuel salt is very high (about 1400°C) compared with the operating temperature (about 700°C).
4. Since there is no water within the containment, there is no possibility of high pressure from steam generation and no possibility of a hydrogen explosion under any accident conditions.
5. The fuel salt is drained to a drain tank through a freeze valve, if required. In case of a rupture in the primary loop, the spilled fuel salt is drained to an emergency drain tank without passing through a freeze valve.
6. The fuel salt can maintain criticality only where graphite exists in an appropriate fraction. In an accident, fuel salt that is drained to a drain tank cannot cause a recriticality accident (for an MSR without graphite moderator, the drain tank is appropriately designed to prevent recriticality).
7. The MSR has a large negative reactivity coefficient of fuel salt temperature, which can suppress an abnormal change in reactor power. Although the temperature coefficient of graphite is positive, it does not affect safety, because the heat capacity of graphite is large enough.
8. Since gaseous fission products (FPs) can be removed by separation from the fuel salt, the danger due to the release of radioactivity from the core under accident conditions can be minimized.
9. Since the fuel composition can be adjusted when necessary, the excess reactivity and the reactivity margin to be compensated by control rods are small. Therefore, reactivity requirements for control rods are also small.
10. The delayed neutron fraction of 233U is lower than that of 235U, and some delayed neutrons are generated outside the core. However, safe control of the reactor is possible because of the large negative reactivity coefficient with fuel salt temperature and the small reactivity insertions.
11. Since there is no airflow and no heat source within the core when the fuel salt is drained under accident conditions, a graphite fire cannot occur.

With the US federal budget under tremendous pressure, it is hard to imagine taxpayers funding demonstrations of a new nuclear technology. But if the United States takes a hiatus from creating new clean-energy options, be it SMRs, renewable energy, advanced batteries, or carbon capture and sequestration, Americans will look back in 10 years with regret. There will be fewer economically viable options for meeting the United States' energy and environmental needs, and the country will be less competitive in the global technology market. SMRs are unlikely to solve the economic and safety problems faced by nuclear power. According to the US DOE and some members of the nuclear industry, the next big thing in nuclear energy will be a small thing: the SMR. SMRs, "small" because they generate a maximum of about 30% as much power as typical current reactors, and "modular" because they can be assembled in factories and shipped to power plant sites, have been getting a lot of positive attention recently, as the nuclear power industry has struggled to remain economically viable in an era of flat demand and increasing competition from natural gas and other energy alternatives. SMRs have been touted as both safer and more cost effective than older, larger nuclear reactor designs. Proponents have even suggested that SMRs are so safe that some current Nuclear Regulatory Commission (NRC) regulations can be relaxed for them, arguing that they need fewer operators and safety officers, less robust containment structures, and less elaborate evacuation plans. Are these claims justified?
2.14.2 Economies of scale and the catch

SMR-based power plants can be built with a smaller capital investment than plants based on larger reactors. Proponents suggest that this will remove financial barriers that have slowed the growth of nuclear power in recent years. However, there is a catch: "affordable" does not necessarily mean "cost effective." Economies of scale dictate that, all other things being equal, larger reactors will generate cheaper power.
SMR proponents suggest that mass production of modular reactors could offset economies of scale, but a 2011 study concluded that SMRs would still be more expensive than current reactors. Even if SMRs could eventually be more cost effective than larger reactors due to mass production, this advantage will only come into play when many SMRs are in operation. But utilities are unlikely to invest in SMRs until they can produce competitively priced electric power. This Catch-22 has led some observers to conclude that the technology will require significant government financial help to get off the ground.
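One way to see the tension between "affordable" and "cost effective" is the classic six-tenths scaling rule for process-plant capital cost. The exponent and the dollar anchor below are illustrative assumptions, not figures from the 2011 study cited above:

```python
# Illustrative only: the "six-tenths rule," cost2 = cost1 * (size2/size1)**n
# with n ~ 0.6. Both n and the $/kW anchor are assumptions for demonstration.

def scaled_cost(ref_cost: float, ref_mw: float, mw: float, n: float = 0.6) -> float:
    return ref_cost * (mw / ref_mw) ** n

ref_mw, ref_cost = 1000.0, 6.0e9   # assumed: $6B for a 1000-MW(e) unit
for mw in (1000.0, 300.0, 100.0):
    cost = scaled_cost(ref_cost, ref_mw, mw)
    print(f"{mw:6.0f} MW(e): ${cost / 1e9:4.1f}B total, ${cost / (mw * 1e3):,.0f}/kW")
```

Under these assumptions, the smaller units cost less in total (more affordable) but more per kilowatt (less cost effective), which is exactly the Catch-22 described above.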
2.14.3 Are small modular reactors safer?

One of the chief selling points for SMRs is that they are supposed to be safer than current reactor designs. However, their safety advantages are not as straightforward as some proponents suggest.
• SMRs use passive cooling systems that do not depend on the availability of electric power. This would be a genuine advantage under many accident scenarios, but not all. Passive systems are not infallible, and credible designs should include reliable active backup cooling systems. But this would add to cost.
• SMRs feature smaller, less robust containment systems than current reactors. This can have negative safety consequences, including a greater probability of damage from hydrogen explosions. SMR designs include measures to prevent hydrogen from reaching explosive concentrations, but they are not as reliable as a more robust containment, which, again, would add to cost.
• Some proponents have suggested siting SMRs underground as a safety measure. However, underground siting is a double-edged sword: it reduces risk in some situations (such as earthquakes) and increases it in others (such as flooding). It can also make emergency intervention more difficult. And it, too, increases cost.
• Proponents also point out that smaller reactors are inherently less dangerous than larger ones. While this is true, it is misleading, because small reactors generate less power than large ones, and therefore more of them are required to meet the same energy needs. Multiple SMRs may actually present a higher risk than a single large reactor, especially if plant owners try to cut costs by reducing support staff or safety equipment per reactor.
2.14.4 Shrinking evacuation zones

Because of SMRs' alleged safety advantages, proponents have called for shrinking the size of the emergency planning zone surrounding an SMR plant from the current standard of 10 miles to as little as 1000 ft, making it easier to site the plants near population centers and in convenient locations such as former coal plants and military bases. However, the lessons of Fukushima, in which radiation levels high enough to trigger evacuation or long-term resettlement were measured as far as 20 to 30 miles from the accident, suggest that these proposals, which are based on assumptions and models that have yet to be tested in practice, may be overoptimistic.
2.14.5 Safety conclusions of nuclear power plants

• Unless a number of optimistic assumptions are realized, SMRs are not likely to be a viable solution to the economic and safety problems faced by nuclear power.
• While some SMR proponents are worried that the United States is lagging in the creation of an SMR export market, cutting corners on safety is a shortsighted strategy.
• Since safety and security improvements are critical to establishing the viability of nuclear power as an energy source for the future, the nuclear industry and the DOE should focus on developing safer reactor designs rather than weakening regulations.
• Congress should direct the DOE to spend taxpayer money only on support of technologies that have the potential to provide significantly greater levels of safety and security than currently operating reactors.
• The DOE should not be promoting the idea that SMRs do not require 10-mile emergency planning, nor should it be encouraging the NRC to weaken its other requirements just to facilitate SMR licensing and deployment.

As part of the discussion of the Reactor Safety Study (RSS) and its role in the design of Generation IV reactors for the future production of electricity, we need some basic understanding of how fissionable nuclear reactors fundamentally work. One fundamental issue we must pay attention to when operating a fissionable nuclear power plant is the physics of reactor kinetics. In reactor kinetics, we look at the fission reactivity due to neutron interactions with uranium (235U) or plutonium (239Pu) as example fuels: if the multiplication exceeds unity by more than a small amount, the reactor power will build up at a rapid rate. For constant power production, the effective multiplication factor keff must be kept at unity. Thus, a key quantity is the difference keff − 1. This quantity is usually expressed in terms of the reactivity ρ, which by definition is given as:

$$\rho = \frac{k_{\mathrm{eff}} - 1}{k_{\mathrm{eff}}} \tag{2.2}$$

From Eq. (2.2), we can conclude that:

$$k_{\mathrm{eff}} = \frac{1}{1 - \rho} \tag{2.3}$$

Of course, a runaway chain reaction is one in which keff rises appreciably above unity or, equivalently, the reactivity ρ is appreciably greater than zero. A major aspect of reactor safety is the avoidance of such an excursion. To put the effective multiplication factor in a different perspective, it describes all the possible events in the life of a neutron and effectively describes the state of a finite multiplying system, defined as follows:
$$k_{\mathrm{eff}} = \frac{\text{neutron production from fission in one neutron generation}}{\text{neutron absorption and leakage in the preceding neutron generation}} \tag{2.4}$$
Eq. (2.4) expresses the required condition for a stable, self-sustained fission chain reaction in a multiplying system (a nuclear reactor): exactly every fission initiates another fission. The minimum condition is for each nucleus undergoing fission to produce, on average, at least one neutron that causes fission of another nucleus. Also, the number of fissions occurring per unit time (the reaction rate) within the system must be constant. This condition can be expressed conveniently in terms of the multiplication factor. The effective multiplication factor is the ratio of the neutrons produced by fission in one neutron generation to the number of neutrons lost through absorption in the preceding neutron generation, as expressed mathematically in Eq. (2.4). The effective multiplication factor of a multiplying system is thus a measure of the change in the fission neutron population from one neutron generation to the subsequent generation.
• keff < 1. The system produces fewer neutrons than are needed to be self-sustaining, and the neutron population decreases exponentially in time. This condition is known as the subcritical state.
• keff = 1. The neutron population is constant in time, and the chain reaction is self-sustaining. This condition is known as the critical state.
• keff > 1. If the multiplication factor for a multiplying system is greater than 1.0, then the multiplying system produces more neutrons than are needed to be self-sustaining. The number of neutrons increases exponentially in time (with the mean generation time). This condition is known as the supercritical state.
In spite of possible misapprehension, it is worth noting that a bomb-like nuclear explosion cannot occur in a nuclear reactor. In a bomb, a critical mass of almost pure fissile material (235U or 239Pu) is brought together violently and compressed by the force of a chemical explosion, and the chain reaction develops fully within one-millionth of a second, quickly enough for much of the fuel to fission before the mass is disassembled, considering the critical mass for nuclear weapons with and without reflectors [23]. In this situation, a spherical mass of fissionable material will be less than critical if its radius is small compared to the mean free path for fission, λ. For 1-MeV neutrons in 235U, λ = M/(ρNAσf) ≈ 17 cm, where σf is the fission cross section and ρ is the uranium density; numerical values for both are tabulated in the public domain [23]. Although λ sets a crude scale for the dimensions, it does not alone determine the critical radius Rc. For a reasonable estimate of Rc, it is necessary to treat the geometry in greater detail and take into account two crucial nuclear parameters, which is beyond the scope of this book for the time being.
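The 17-cm estimate can be reproduced directly from λ = M/(ρNAσf). The fission cross section used below, roughly 1.2 barns at 1 MeV, is a typical tabulated value and is an assumption on our part, as is the density of metallic uranium:

```python
# Mean free path for fission, lambda = M / (rho * N_A * sigma_f),
# for ~1-MeV neutrons in U-235. Input values are assumed/tabulated, not exact.

N_A = 6.022e23        # Avogadro's number, atoms/mol
M = 235.0             # molar mass of U-235, g/mol
rho = 18.7            # density of metallic uranium, g/cm^3 (approx.)
sigma_f = 1.2e-24     # fission cross section, cm^2 (~1.2 barns at 1 MeV)

n_density = rho * N_A / M            # nuclei per cm^3, ~4.8e22
mfp = 1.0 / (n_density * sigma_f)    # equivalently M / (rho * N_A * sigma_f)
print(f"Mean free path for fission: ~{mfp:.0f} cm")   # ~17 cm, as in the text
```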
To continue our safety discussion, we note that the presence of the nonfissile material has two consequences pertinent to the issue of explosions: (1) the multiplication factor keff in a reactor is close to unity, whereas in a bomb it approaches 2; and (2) the average time between fission generations is greater in a reactor than in a bomb, because the most frequent neutron reactions in a reactor are elastic or inelastic scattering, not fission. This average time is the mean neutron lifetime l, which can be expressed as the sum of terms for the prompt and delayed components:

$$l = (1 - \beta)\,l_p + \beta\,\tau_e$$

where lp and τe are the mean lifetimes for prompt and delayed neutrons, respectively, and β is the delayed neutron fraction, which for 235U is about 0.65%. Note that the fission cross section for neutrons colliding with 238U is small for neutron energies below 2 MeV and is negligible below 1 MeV. As a result, the chain reaction builds up much more slowly in a reactor than in a bomb. Overall, the first "line of defense" against an explosion in a reactor is the negative feedback that prevents criticality accidents. This should suffice. However, if there are mistakes in the design or operation of the reactor and the chain reaction reaches too high a power level, there is time for the ultimate "negative feedback" to come into play: the partial disassembly of the reactor core, which stops the chain reaction after only a relatively small amount of energy has been produced (i.e., only a small fraction of the nuclei have fissioned). This is what happened in the Chernobyl accident, where most of the energy of the explosion came from chemical reactions, including steam interacting with hot metal. Such an accident can be very serious, but the consequences are not on the scale of the consequences of a nuclear explosion.

As part of the general requirements for achieving reactor safety, underlying the approach to safety for any sort of equipment are high standards in design, construction, and the reliability of components. In nuclear reactors, concern about possible accidents has led to particularly intense efforts to achieve high standards. Individual components of the reactor and associated equipment must be of a codified high quality. As described in an Organization for Economic Co-operation and Development report: "In the early years of water reactor development in the United States, a tremendous effort was put into development of very detailed codes and standards for nuclear plants, and these were widely adopted by other countries where nuclear plants were initially built under US licenses" [3, p. 62]. The efforts of the United States have since been supplemented by parallel efforts by other countries and the IAEA. In parallel, a nuclear reactor safety philosophy has developed, which includes a number of special features.
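As a numeric illustration of the kinetics argument above, the sketch below evaluates the lifetime formula l = (1 − β)lp + βτe with typical thermal-reactor values; the prompt lifetime and delayed-neutron mean lifetime are assumed, textbook-scale numbers, not figures from this text. It shows how the delayed component stretches the reactor's response time, which is the physical reason the chain reaction builds up so much more slowly in a reactor than in a bomb:

```python
# Mean neutron lifetime, l = (1 - beta) * l_p + beta * tau_e.
# beta is from the text; l_p and tau_e are assumed illustrative values.

beta = 0.0065      # delayed neutron fraction for U-235 (~0.65%)
l_p = 1.0e-4       # s, prompt neutron lifetime in a thermal reactor (assumed)
tau_e = 12.5       # s, mean delayed-neutron lifetime (assumed)

l = (1 - beta) * l_p + beta * tau_e
print(f"Mean neutron lifetime: {l:.3f} s")   # ~0.08 s vs 0.0001 s prompt-only

# A crude e-folding time estimate, T ~ l / (k_eff - 1), for a small excess
# reactivity shows why the delayed component makes a reactor controllable:
k = 1.001
print(f"e-folding time with delayed neutrons: ~{l / (k - 1):.0f} s")
print(f"e-folding time prompt-only:           ~{l_p / (k - 1):.1f} s")
```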
These results were presented in the form of graphs of the probability of occurrence as a function of the magnitude of the harm. To provide perspective, the RSS also compared the risks from reactor accidents to those from other sorts of accidents or natural mishaps. For these other accidents, there are few data on latent effects. In Fig. 2.17, the annual risks from 100 reactors, as estimated in the RSS, are compared with the annual risks from other causes, such as airplane accidents and dam failures. For example, Fig. 2.17 indicates an average of one airplane accident causing 100 or more fatalities every 3 years, whereas a nuclear reactor accident with this early toll was predicted to occur only once every 80,000 years.

Fig. 2.17 RSS comparison of the annual probabilities of accidents from 100 nuclear reactors with those from other man-caused events.
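Stepping back to the mean neutron lifetime l = (1 − β)l_p + βτ_e quoted earlier in this safety discussion, a minimal sketch (the prompt and delayed lifetimes are typical textbook values assumed for illustration, not tabulated in this chapter) shows numerically why delayed neutrons make a reactor controllable:

```python
# Mean neutron lifetime l = (1 - beta) * l_p + beta * tau_e for 235U,
# and the resulting power e-folding time for a slightly supercritical core.
beta = 0.0065    # delayed neutron fraction for 235U (~0.65%)
l_p = 1.0e-4     # prompt neutron lifetime, s (typical thermal reactor, assumed)
tau_e = 12.5     # effective mean lifetime of delayed-neutron precursors, s (assumed)

l = (1.0 - beta) * l_p + beta * tau_e
print(f"mean neutron lifetime l ~ {l:.3f} s")   # ~0.08 s, vs 1e-4 s prompt-only

# Simple one-lifetime point-kinetics estimate: n(t) = n0 * exp((k_eff - 1) * t / l)
k_eff = 1.001                              # hypothetical small excess reactivity
period = l / (k_eff - 1.0)                 # e-folding time of the neutron population
print(f"reactor period ~ {period:.0f} s")  # ~81 s: slow enough for control systems
```

Weighting in the delayed component stretches the effective lifetime from about 10⁻⁴ s to roughly 0.08 s, so a small excess reactivity produces a power rise on a timescale of seconds to minutes rather than microseconds.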
Based on the safety arguments above and the conclusions we have seen, the question that naturally arises is: why do we need NPPs? That question is the subject of the following section of this chapter.
2.15 Why we need nuclear power plants
Generally speaking, NPPs were originally perceived as a cheap and promising source of peaceful power after World War II, but the commercial use of nuclear energy has been controversial for decades. Environmentalists have constantly worried that NPPs and their radioactive waste pose a threat to nearby communities, and plant construction in the United States virtually began to die off after the early 1980s. The Carter administration in particular opposed research around the liquid-metal fast breeder reactor and its commercial implementation within the United States (i.e., the Clinch River Project in Oak Ridge, TN), yet the French government moved on to commercialization of this type of reactor, known as Phoenix II. The Three Mile Island episode in the United States, followed by the disaster at Chernobyl in the former Soviet Union, reinforced many negative images of NPPs and the industries involved in their design and commercialization. Yet in the decade prior to the Japanese nuclear crisis at Fukushima Daiichi in 2011, sentiment about nuclear power underwent a marked change. The alarming acceleration of global warming due to the burning of fossil fuels, along with concern about dependence on foreign fuel, has forced policy makers, climate scientists, and energy experts to look once again at nuclear power as an ultimate source of energy, and has encouraged nuclear engineers and scientists to look into a new generation of these plants with better life cycles and safety, plants that are more efficient and cost effective for their operating owners to build [4]. This momentum is also building on global population growth and the consequent increase in demand for electricity, which leads us to think about finding a new path to renewable sources of energy.
The major growth in the electricity production industry in the last 30 years has centered on the expansion of natural gas power plants based on gas turbine cycles. The most popular extension of the simple Brayton gas turbine has been the combined cycle (CC) power plant, with the air-Brayton cycle serving as the topping cycle and the steam-Rankine cycle serving as the bottoming cycle, for the new generation of NPPs known as GEN IV. The air-Brayton cycle is an open-air cycle and the steam-Rankine cycle is a closed cycle. The air-Brayton cycle for a natural-gas-driven power plant must be an open cycle, where the air is drawn in from the environment and exhausted with the products of combustion to the environment. This technique is suggested as an innovative approach to GEN IV NPPs in the form of SMRs. The hot exhaust from the air-Brayton cycle passes through a heat recovery steam generator (HRSG) prior to exhausting to the environment in a CC. The HRSG serves the same purpose as a boiler for the conventional steam-Rankine cycle [4].
In 2007 gas turbine CC plants had a total capacity of 800 GW and represented 20% of the installed capacity worldwide. They have far exceeded the installed capacity of nuclear plants, though in the late 1990s they had less than 5% of the installed capacity worldwide. There are a number of reasons for this. First, natural gas is abundant and cheap. Second, CC plants achieve the greatest efficiency of any thermal plant. And third, they require the least amount of waste heat cooling water of any thermal plant. A typical gas turbine plant consists of a compressor, combustion chamber, turbine, and an electrical generator. A CC plant takes the exhaust from the turbine and runs it through a HRSG before exhausting to the local environment. The HRSG serves the function of the boiler for a typical closed cycle steam plant. The steam plant consists of a steam turbine, a condenser, a water pump, an evaporator (boiler), and an electrical generator. In a CC plant, the gas turbine and steam turbine can be on the same shaft to eliminate the need for one of the electrical generators. However, a two-shaft, two-generator system provides a great deal more flexibility at a slightly higher cost. In addition to the closed loop for the steam, an open loop circulating water system is required to extract the waste heat from the condenser. The waste heat extracted by this "circulating" water system is significantly less per megawatt for a CC system, as the open Brayton cycle exhausts its waste heat directly to the air. The layout of the components of a typical CC power plant is given in Fig. 2.18. General Electric (GE) currently markets a system that will produce 61% efficiency at design power and better than 60% efficiency down to 87% of design power [11] for gas turbine CC plants.
Fig. 2.18 Typical gas turbine combined cycle power plant.
An approximate efficiency can be calculated for a CC power plant by the following simple argument [12]:

\[ \text{Brayton cycle efficiency:}\quad \eta_B = \frac{W_B}{Q_{in}} \]

\[ \text{Heat to Rankine cycle:}\quad Q_R = (1 - \eta_B)\, Q_{in} \]

\[ \text{Rankine cycle efficiency:}\quad \eta_R = \frac{W_R}{Q_R} \]

\[ \text{Overall efficiency:}\quad \eta_T = \frac{W_B + W_R}{Q_{in}} = \frac{\eta_B Q_{in} + \eta_R Q_R}{Q_{in}} = \frac{\eta_B Q_{in} + \eta_R (1 - \eta_B) Q_{in}}{Q_{in}} = \eta_B + \eta_R - \eta_B \eta_R \]
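As a numerical illustration of this result, a minimal sketch with assumed round cycle efficiencies (not design data) recovers the roughly 61% figure quoted above for modern CC plants:

```python
# Combined cycle efficiency: eta_T = eta_B + eta_R - eta_B * eta_R.
# The Rankine bottoming cycle only receives the heat the Brayton cycle rejects.
eta_B = 0.40   # Brayton (gas turbine) topping cycle efficiency, assumed
eta_R = 0.35   # Rankine (steam) bottoming cycle efficiency, assumed

eta_T = eta_B + eta_R - eta_B * eta_R
print(f"combined cycle efficiency ~ {eta_T:.0%}")   # ~61%, before pressure-loss corrections
```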
This efficiency must be corrected for pressure losses, and it assumes that all the heat in the Brayton exhaust is used in the HRSG. For a combustion gas turbine this is not usually possible if condensation of the water in the exhaust products is to be avoided. The detailed models developed in this effort give a more accurate answer. For the nuclear reactor system, the heat transfer is in the opposite direction. All reactor components and fluids in the primary and secondary loops must be at a higher temperature than the peak temperature of the gas exiting the heat exchanger. This severely restricts the peak temperature that can be achieved for the air entering the turbine. However, all is not lost. In a typical combustion system, there are pressure losses approaching 5% of the total pressure to complete the combustion process [13]. Heat exchangers can be built with significantly lower pressure drops than 5%, approaching 1% [14]. Therefore, the most straightforward method to overcome this severe temperature limitation is to borrow a technique from steam power plants and implement multiple reheat cycles. That is, the first heat exchanger heats the air to its peak temperature. Then the air is expanded through the first turbine. The air is then reheated to the same peak temperature and expanded through the second turbine. Based on the relative pressure losses that appear possible, up to five turbines might be considered, all of them driving the same compressor. Multiple compressors on concentric shafts driven by different sets of turbines might be possible, but that has not been considered here. For a nuclear system to take advantage of CC technology, there are a number of changes to the plant components that must be made. The most significant, of course, is that the combustion chamber must be replaced by a heat exchanger in which the working fluid from the nuclear reactor secondary loop is used to heat the air. The normal Brayton cycle is an internal combustion one where the working fluid is heated by the combustion of the fuel with the air in the combustion chamber. The walls of the combustion chamber can be cooled, and peak temperatures in the working fluid can be
significantly above the temperature that the walls of the chamber can tolerate for any length of time.
2.16 Methodology of combined cycle
The approach taken in the CC code developed for this effort is to model the thermodynamics of the components making up the power conversion systems as real components with nonideal efficiencies. Pressure drops are included for every component except the connected piping. The compressor is modeled with a small-stage polytropic efficiency to take into account state-of-the-art designs. The gas turbines are likewise modeled with a polytropic efficiency. The steam turbines are modeled with a simple overall thermal efficiency. Pressure drops in each of the heat exchangers are included. The input files specify the pressure drops, and the heat exchangers are designed to meet these specifications if possible.
Some scientists call NPPs a 100% renewable source of energy, and of course environmentalists argue that this is the wrong approach, simply because the cores of the fission-type NPPs that exist on the grid today, producing electricity for the net, contain uranium or plutonium as fuel. On the other side of the spectrum, researchers and scientists at national laboratories and universities around the globe who are working toward a fusion program to achieve breakeven passionately argue that NPPs of the fusion type are totally clean, so long as the source of energy comes in the form of two hydrogen isotopes, deuterium (D) and tritium (T), as the source of the fusion reaction and the energy derived from it. This is a dream that is still far from the reality of today's need and demand for electricity, yet it is not out of scope for the near future. The physics of plasmas for deriving energy via inertial confinement fusion [15] or magnetic confinement fusion [16] is consistent with such innovative approaches.
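Returning to the component models described at the start of this section, the sketch below illustrates what modeling a compressor with a small-stage (polytropic) efficiency means in practice; the pressure ratio, inlet temperature, and efficiency are illustrative assumptions rather than parameters of the actual CC code:

```python
# Compressor exit temperature using a polytropic (small-stage) efficiency:
# T_out / T_in = PR ** ((gamma - 1) / (gamma * eta_poly)) for an ideal gas.
gamma = 1.4      # ratio of specific heats for air
eta_poly = 0.90  # polytropic efficiency, assumed state-of-the-art value
PR = 15.0        # compressor pressure ratio, illustrative
T_in = 300.0     # compressor inlet temperature, K

T_out = T_in * PR ** ((gamma - 1.0) / (gamma * eta_poly))
print(f"compressor exit temperature ~ {T_out:.0f} K")   # ~709 K

# An ideal (isentropic) compressor would give a lower exit temperature:
T_ideal = T_in * PR ** ((gamma - 1.0) / gamma)
print(f"isentropic exit temperature ~ {T_ideal:.0f} K")  # ~650 K
```

The gap between the two exit temperatures is the extra compression work charged to real, stage-by-stage losses, which is exactly what the nonideal component models in the CC code are meant to capture.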
2.16.1 Why we still need nuclear power
"Nuclear power's track record of providing clean and reliable electricity compares favorably with other energy sources. Low natural gas prices, mostly the result of newly accessible shale gas, have brightened the prospects that efficient gas-burning power plants could cut emissions of carbon dioxide and other pollutants relatively quickly by displacing old, inefficient coal plants, but the historical volatility of natural gas prices has made utility companies wary of putting all their eggs in that basket. Besides, in the long run, burning natural gas would still release too much carbon dioxide. Wind and solar power are becoming increasingly widespread, but their intermittent and variable supply make them poorly suited for large-scale use in the absence of an affordable way to store electricity. Hydropower, meanwhile, has very limited prospects for expansion in the United States because of environmental concerns and the small number of potential sites."
"The United States must take a number of decisions to maintain and advance the option of nuclear energy. The NRC's initial reaction to the safety lessons of Fukushima
must be translated into action; the public needs to be convinced that nuclear power is safe. Washington should stick to its plan of offering limited assistance for building several new nuclear reactors in this decade, sharing the lessons learned across the industry. It should step up its support for new technology, such as SMRs and advanced computer-modeling tools. And when it comes to waste management, the government needs to overhaul the current system and get serious about long-term storage. Local concerns about nuclear waste facilities are not going to magically disappear; they need to be addressed with a more adaptive, collaborative, and transparent waste program." These are not easy steps, and none of them will happen overnight. But each is needed to reduce uncertainty for the public, the energy companies, and investors. A more productive approach to developing nuclear power, and to confronting the mounting risks of climate change, is long overdue. Further delay will only raise the stakes.
2.16.2 Is nuclear energy a renewable source of energy?
Assuming for the time being that we take the fission reaction as the foundation for present (GEN III) and future (GEN IV) nuclear power reactors, we can argue that nuclear energy is, to some degree, a clean source of energy. Although nuclear energy is considered clean energy, its inclusion in the renewable energy list is a subject of major debate. To understand the debate, we need to understand the definitions of renewable energy and nuclear energy first. However, until the future technology of these fission reactors manages to bring the price of electricity per kilowatt-hour down to the level of that generated by gas or fossil fuels, there is no chance to push these reactors beyond GEN III. Efforts toward reducing the price of electricity from nuclear fission power plants, especially using innovative GEN IV designs with a high-temperature baseline in conjunction with thermodynamic cycles such as Brayton and Rankine, are under way at national laboratories such as Idaho National Laboratory and at universities such as MIT, UC Berkeley, and the University of New Mexico, as well as by this author.
Renewable energy is defined as an energy source/fuel type that can regenerate and can replenish itself indefinitely. The five renewable sources used most often are biomass, wind, solar, hydro, and geothermal. Nuclear energy, on the other hand, is the result of heat generated through the fission process of atoms. All power plants convert heat into electricity using steam. At NPPs, the heat to make the steam is created when atoms split apart, a process called fission. The fission releases energy in the form of heat and neutrons. The released neutrons then go on to hit other nuclei and repeat the process, hence generating more heat. In most cases the fuel used for nuclear fission is uranium. One question we can raise here, in order to further understand whether or not we need present nuclear technology as a source of energy, is:
What is the difference between clean energy and renewable energy? Put another way, why is nuclear power in the doghouse when it comes to revamping the nation's energy mix? The issue came to the forefront during the debate over the Waxman–Markey energy and climate bill and its provisions for a national renewable energy mandate. To put it simply, Republicans tried and failed several times to pass amendments that would christen nuclear power as a "low-emissions" power source eligible for all the same government incentives and mandates as wind power and solar power. Many environmental groups are fundamentally opposed to the notion that nuclear power is a renewable form of energy, on the grounds that it produces harmful waste byproducts and relies on extractive industries to procure fuel like uranium. Even so, the nuclear industry and pronuclear officials from countries including France have been trying to brand the technology as renewable, on the grounds that it produces little or no greenhouse gases. Branding nuclear as renewable could also enable nuclear operators to benefit from some of the same subsidies and friendly policies offered to clean energies like wind, solar, and biomass.
2.16.3 Argument for nuclear as renewable energy
Most supporters of nuclear energy point to the low carbon emissions of nuclear energy as its major qualification to be defined as renewable energy. According to these proponents, if the goal of building a renewable energy infrastructure is to lower carbon emissions, then there is no reason not to include nuclear energy in that list [17]. But one of the most interesting arguments for including nuclear energy in the renewable energy portfolio came from Bernard L. Cohen, former professor at the University of Pittsburgh. Professor Cohen defined the term "indefinite" (the time span required for an energy source to be sustainable enough to be called renewable) in numbers by using the expected relationship between the Sun (the source of solar energy) and the Earth. According to Professor Cohen, if the uranium deposit could be proved to last as long as the relationship between the Earth and Sun is supposed to last (5 billion years), then nuclear energy should be included in the renewable energy portfolio [18]. In his paper, Professor Cohen claims that using breeder reactors (nuclear reactors able to generate more fissile material than they consume) it is possible to fuel the earth with nuclear energy indefinitely. Although the known uranium deposits could only supply nuclear energy for about 1000 years, Professor Cohen believes the actual amount of uranium available is far more than what is considered extractable right now. In his arguments, he includes uranium that could be extracted at a higher cost, uranium from sea water, and also uranium from the earth's crust eroded by river water. All of those possible uranium resources, if used in a breeder reactor, would be
enough to fuel the earth for another 5 billion years and hence render nuclear energy renewable.
2.16.4 Argument against nuclear energy as renewable energy
One of the biggest arguments against including nuclear energy in the list of renewables is the fact that uranium deposits on earth are finite, unlike solar and wind. To be counted as renewable, an energy source (fuel) should be sustainable for an indefinite period of time, according to the definition of renewable energy. Another major argument proposed by the opponents of including nuclear energy as renewable energy is the harmful nuclear waste from nuclear power reactors. Nuclear waste is considered a radioactive pollutant that goes against the notion of a renewable energy source. Yucca Mountain is one of the examples used quite often to prove this point. Most of the opponents in the United States also point to the fact that, while most renewable energy sources could render the United States energy independent, uranium would keep the country energy dependent, as the United States would still have to import uranium.
2.16.5 Today's safety of nuclear power plants
In the aftermath of the major accidents at Three Mile Island in 1979 and Chernobyl in 1986, and then the recent devastating failure of Japan's Fukushima NPP in March 2011, nuclear power largely fell out of favor, and some countries applied the brakes to their nuclear programs. Concerns about climate change and air pollution, as well as growing demand for electricity, had led many governments to reconsider their aversion to nuclear power, which emits little carbon dioxide and had built up an impressive safety and reliability record. Some countries reversed their phaseouts of nuclear power, some extended the lifetimes of existing reactors, and many developed plans for new ones. Despite all these concerns and issues with respect to nuclear energy, we still face the question of why we need nuclear power as a clean source of energy, particularly when we deal with renewable energy arguments [19]. Today, roughly 60 nuclear plants are under construction worldwide, which will add about 60,000 MW of generating capacity, equivalent to a sixth of the world's current nuclear power capacity; however, this momentum has diminished since Japan's Fukushima nuclear power episode of March 2011. Nuclear power's track record of providing clean and reliable electricity compares favorably with other energy sources. Low natural gas prices, mostly the result of newly accessible shale gas, have brightened the prospects that efficient gas-burning power plants could cut emissions of carbon dioxide and other pollutants relatively quickly by displacing old, inefficient coal plants, but the historical volatility of natural gas prices has made utility companies wary of putting all their eggs in that basket. Besides, in the long run, burning natural gas would still release too much carbon dioxide. Wind and
solar power are becoming increasingly widespread, but their intermittent and variable supply makes them poorly suited for large-scale use in the absence of an affordable way to store electricity. Hydropower, meanwhile, has very limited prospects for expansion in the United States because of environmental concerns and the small number of potential sites [19].
Reactor stability is one aspect of NPP safety that must be considered in the design and operation of such a source of energy. Understanding the time-dependent behaviors of nuclear reactors and the methods of their control is essential to the operation and safety of NPPs. This chapter provides researchers and engineers in nuclear engineering with general yet comprehensive information on the fundamental theory of nuclear reactor kinetics and control and the state-of-the-art practice in actual plants, as well as an idea of how to bridge the two. The dynamics and stability of engineering equipment affect its economical, safe, and reliable operation. In this chapter, we discuss the existing knowledge that constitutes today's practice for the design of reactor power plants and their stability, as well as the techniques available to designers. Stable power processes are never guaranteed, however. An assortment of unstable behaviors wrecks power apparatus, including mechanical vibration, malfunctioning control apparatus, unstable fluid flow, unstable boiling of liquids, or combinations thereof. Failures and weaknesses of safety management systems are the underlying causes of most accidents.
The safety and capital cost challenges involved with traditional NPPs may be considerable, but a new class of reactors in the development stage holds promise for addressing them. These reactors, called SMRs, produce anywhere from 10 to 300 MW, rather than the 1000 MW produced by a typical reactor. An entire reactor, or at least most of it, can be built in a factory and shipped to a site for assembly, where several reactors can be installed together to compose a larger nuclear power station. SMRs have attractive safety features too. Their design often incorporates natural cooling features that can continue to function in the absence of external power, and the underground placement of the reactors and the spent-fuel storage pools is more secure. Since SMRs are smaller than conventional nuclear plants, the construction costs for individual projects are more manageable, and thus the financing terms may be more favorable. And because they are factory-assembled, the on-site construction time is shorter. The utility company can build up its nuclear power capacity step by step, adding additional reactors as needed, which means that it can generate revenue from electricity sales sooner. This helps not only the plant owner but also customers, who are increasingly being asked to pay higher rates today to fund tomorrow's plants [20]. With the US federal budget under tremendous pressure, it is hard to imagine taxpayers funding demonstrations of a new nuclear technology. But if the United States takes a hiatus from creating new clean-energy options, be it SMRs, renewable energy, advanced batteries, or carbon capture and sequestration, Americans will look back in
10 years with regret. There will be fewer economically viable options for meeting the United States’ energy and environmental needs, and the country will be less competitive in the global technology market.
2.16.6 Summary
It seems that at the heart of the debate lies confusion over the exact definition of renewable energy and the requirements that need to be met in order to be one. A recent statement by Helene Pelosi, the Interim Director General of the International Renewable Energy Agency (IRENA), saying IRENA will not support nuclear energy programs because it is a long, complicated process, it produces waste, and it is relatively risky, proves that their decision has nothing to do with having a sustainable supply of fuel [21]. And if that is the case, then nuclear proponents will have to figure out a way to deal with the nuclear waste management issue and other political implications of nuclear power before they can ask IRENA to reconsider including nuclear energy in the renewable energy list [22].
One more strong argument against fission NPPs as a source of renewable energy comes from Dr. James Singmaster (August 3, 2009) and is republished here as follows:
"The basic problem of the climate crisis is the ever-expanding overload of heat energy in the closed biosphere of earth. Temperatures going up indicate the increasing heat energy overload. Everyone reading this should check out Dr. E. Chaisson's article titled "Long-Term Global Warming from Energy Usage" in EOS, Trans. Amer. Geophys. Union, V. 89, No. 28, Pgs. 253–4 (2008) to learn that nuclear energy, be it fission or fusion, being developed should be dropped, with money put into it being put to developing renewable energy supplies using the sun, wind, and hydrogen. The hydrogen needs to be generated from splitting water using sunlight with the best one or two of seven catalysts reported in the last two years. Or, with excess solar or wind collection generating electricity, that could be used to generate hydrogen by electrolysis of water. There is no way that nuclear power can avoid releasing trapped energy to increase the energy overload, so it should be forgotten. To remove some of the energy as well as some of the carbon overload in the biosphere, we need to turn to pyrolysis of massive ever-expanding organic waste streams to remake charcoal that will be removing some of both overloads. It will require using renewable energy, and the pyrolysis process expels about 50% of the carbon as small organic chemicals that can be collected, refined, and used for fuel that is a renewable one. For more about using pyrolysis, search my name on the GreenInc blog or google it for other blog comments on pyrolysis. Dr. J. Singmaster"
References
[1] DOE/NE-0088, The History of Nuclear Energy, U.S. Department of Energy, Office of Nuclear Energy, Science and Technology.
[2] R. Rhodes, The Making of the Atomic Bomb, Simon and Schuster, 1986.
[3] A. Weinberg, The First Nuclear Era, AIP Press, 1994.
[4] B. Zohuri, Combined Cycle Driven Efficiency for Next Generation Nuclear Power Plants: An Innovative Design Approach, Springer International Publishing, Switzerland, 2015.
[5] B. Zohuri, P. McDaniel, C. De Oliveira, Advanced nuclear open-air Brayton cycles for highly efficient power conversion, Nucl. Technol. 192 (1) (2015) 48–60.
[6] B. Zohuri, Hybrid Energy Systems: Driving Reliable Renewable Sources of Energy Storage, 1st Ed., Springer Publishing Company, 2018.
[7] E. Moniz, Why we still need nuclear power, November 2, 2011, http://energy.mit.edu/news/why-we-still-need-nuclear-power/.
[8] B. Zohuri, Neutronic Analysis for Nuclear Reactor Systems, Springer Publishing Company, November 3, 2016.
[9] G. Flanagan, ANS 20.1 Working Group, 2014 (2012).
[10] K. Furukawa, et al., Molten salt reactor for sustainable nuclear power MSR FUJI, IAEA TECDOC-1536 (Status of Small Reactor Designs Without On-Site Refueling), International Atomic Energy Agency, 2007, 821–856.
[11] GE Energy Flex Efficiency 50 Combined Cycle Power Plant, e-brochure, 2012.
[12] J.H. Horlock, Cogeneration-Combined Heat and Power (CHP), Krieger Publishing Company, Malabar, FL, 1997.
[13] J.D. Mattingly, Elements of Gas Turbine Propulsion, McGraw-Hill, Inc., New York, 1996.
[14] J.D. Mattingly, ibid.
[15] B. Zohuri, Inertial Confinement Fusion Driven Thermonuclear Energy, Springer Publishing Company, January 29, 2017.
[16] B. Zohuri, Plasma Physics and Controlled Thermonuclear Reactions Driven Fusion Energy, Springer Publishing Company, November 17, 2016.
[17] K. Johnson, Is nuclear power renewable energy?, Wall Street Journal, May 21, 2009.
[18] B.L. Cohen, Breeder reactors: a renewable energy source, Am. J. Phys. 51 (1983) 75.
[19] B. Zohuri, Hybrid Energy Systems: Driving Reliable Renewable Sources of Energy Storage, 1st Ed., Springer Publishing Company, 2018.
[20] A.P. Fraas, Heat Exchanger Design, 2nd Ed., John Wiley & Sons, New York, 1989.
[21] J. Kanter, Is nuclear power renewable?, New York Times, August 3, 2009.
[22] D. Chowdhury, Is nuclear energy renewable energy?, Stanford Physics Department, March 22, 2012.
[23] D. Bodansky, Nuclear Energy: Principles, Practices, and Prospects, Springer Publishing Company, 2004.
[24] F. Settle, Nuclear Chemistry and the Community, http://www.chemcases.com/nuclear/index.html.
[25] B. Zohuri, P.J. McDaniel, Thermodynamics in Nuclear Power Plant Systems, 2nd Ed., Springer Publishing Company, 2019.
CHAPTER 3
Nuclear energy research and development roadmap and pathways

The US Department of Energy (DOE), as part of its activities, prepared a report to Congress in April 2010 based on the ongoing nuclear energy research and development roadmap among the national laboratories under the auspices of DOE, in conjunction with the industry and companies involved in such activities. This report characterized the current prospects of nuclear power in a world confronted with burgeoning energy demand, higher energy prices, energy supply and security concerns, and the search for new sources of renewable energy. There will be many challenges associated with the potential future deployment of nuclear energy among the countries involved with this technology and energy mix on a long-term timescale. Studies carried out over the past decade, both within governments and the learned societies, include consideration of futures with a nuclear contribution to electricity generation capacity of up to 75 gigawatts (GW) by around the middle of the 21st century; they also include scenarios with much lower contributions from nuclear energy.
3.1 Introduction
The discovery of nuclear fission in 1939, which led to the Manhattan Project, was an event that opened up the prospect of an entirely new source of power utilizing the internal energy of the atom. The basic materials that can be used for the release of nuclear energy by fission are the elements uranium, plutonium, and thorium. Large quantities of these materials can be found in the earth's crust, though they are sometimes very expensive to extract due to their low concentration at the source. Such cost can be balanced by the electrical energy they produce, which makes extraction of these materials cost effective. During the present century, the world's consumption of energy has grown rapidly due to the increase in the earth's population and the per capita increase in the use of energy; the day-to-day operation of industry, agriculture, and transportation are a few examples we can mention. For example, in the United States, the population growth rate during the period from 1940 to 1970 was about 1.6% per annum, but the total consumption of energy increased at an average annual rate of approximately 3.5%. Naturally, our primary interest is in commercializing nuclear power, which in turn produces a larger and larger proportion of nuclear energy in the form of electrical power. In the United States the total consumption of electricity in industry,
commerce, and the home had been growing by some 7% each year from 1940 to about 1973. An annual growth rate of only 3% would require the construction, over the 25-year period ending in the year 2000, of new electrical power plants having a capacity equal to that of all the plants operating in 1975. This estimate did not consider the need for replacing obsolete facilities or possible shifts to electricity as an energy source to replace oil, for example, in electric vehicles [1].
In 2002, nuclear power supplied 20% of United States and 17% of world electricity consumption. Experts project worldwide electricity consumption will increase substantially in the coming decades, especially in the developing world, accompanied by economic growth and social progress. However, the official forecast calls for a mere 5% increase in nuclear electricity–generating capacity worldwide by 2020, and even this is questionable, while electricity use could grow by as much as 75% [2]. These projections of demand and new nuclear plant construction reflect both economic considerations and growing antinuclear sentiment within the public in key countries that have this technology at their disposal. The limited prospects for nuclear power and producing energy from it are attributable ultimately to four unresolved problems [2]:
1. Costs: Nuclear power has higher overall lifetime costs compared to natural gas with combined cycle turbine technology and coal, at least in the absence of a carbon tax or an equivalent "cap-and-trade" mechanism for reducing carbon emissions.
2. Safety: Nuclear power has perceived adverse safety, environmental, and health effects, heightened by the 1979 Three Mile Island and 1986 Chernobyl reactor accidents, but also by accidents at fuel cycle facilities in the United States, Russia, and Japan. There is also growing concern about the safe and secure transportation of nuclear materials and the security of nuclear facilities from terrorist attacks.
3. Proliferation: Nuclear power entails potential security risks, notably the possible misuse of commercial or associated nuclear facilities and operations to acquire technology or materials as a precursor to the acquisition of a nuclear weapons capability. Fuel cycles that involve the chemical reprocessing of spent fuel to separate weapons-usable plutonium and uranium-enrichment technologies are of special concern, especially as nuclear power spreads around the world.
4. Waste: Nuclear power has unresolved challenges in the long-term management of radioactive wastes. The United States and other countries have yet to implement final disposition of the spent fuel or high-level radioactive waste streams created at various stages of the nuclear fuel cycle. Since these radioactive wastes present some danger to present and future generations, the public and its elected representatives, as well as prospective investors in nuclear power plants, properly expect continuing and substantial progress toward a solution to the waste disposal problem. Successful operation of the planned disposal facility at Yucca
Mountain would ease, but not solve, the waste issue for the United States and other countries if nuclear power expands substantially. Aside from the global warming mitigation that the MIT report [1] argues for, nuclear power generation today is not an economically cost-effective and competitive choice. Taking into consideration the four aforementioned factors, we need government involvement in at least three areas: safety, proliferation, and waste. However, if we pursue carbon-dioxide-emission-free sources of energy for electricity, then the cost of ownership of a nuclear power plant per kilowatt can possibly be justified, making nuclear power an important and indeed vital future source of energy.
The generation of electricity from fossil fuels, notably natural gas and coal, is a major and growing contributor to the emission of carbon dioxide, a greenhouse gas that contributes significantly to global warming. We share the scientific consensus that these emissions must be reduced and believe that the United States will eventually join with other nations in the effort to do so. At least for the next few decades, there are only a few realistic options for reducing carbon dioxide emissions from electricity generation:
• Increase efficiency in electricity generation and use;
• expand use of renewable energy sources such as wind, solar, biomass, and geothermal;
• capture carbon dioxide emissions at fossil-fueled (especially coal) electric-generating plants and permanently sequester the carbon; and
• increase use of nuclear power.
Increasing energy demand, global climate change, and constrained energy supplies are likely to impact how energy affects your business in the future. Is your company prepared for the energy challenges that lie ahead? The ground-breaking report Energy Strategy for the Road Ahead reveals what 20 leading US companies recommend businesses should do now to prepare for the risks and opportunities of our energy future [2].
Market trends suggest that the demand for energy resources will rise dramatically over the next 25 years:
• Global demand for all energy sources is forecasted to grow by 57% over the next 25 years.
• US demand for all types of energy is expected to increase by 31% within 25 years.
• By 2030, 56% of the world's energy use will be in Asia.
• Electricity demand in the United States will grow by at least 40% by 2032.
• New power generation equal to nearly 300 (1000 MW) power plants will be needed to meet electricity demand by 2030.
• Currently, 50% of US electricity generation relies on coal, a fossil fuel, while 85% of US greenhouse gas emissions result from energy-consuming activities supported by fossil fuels.
Sources: Annual Energy Outlook (DOE/EIA-0383(2007)), International Energy Outlook 2007 (DOE/EIA-0484(2007)), Inventory of US Greenhouse Gas Emissions and Sinks: 1990–2005 (April 2007) (EPA 430-R-07-002).
If energy prices also rise dramatically due to increased demand and constrained supply, business impacts could include:
• Reduced profits due to high operating costs
• Decline of sales of energy-using products
• Loss of competitiveness in energy-intensive businesses
• Disruptions in supply chains as suppliers are unable to meet cost obligations or go bankrupt
Recent history also demonstrates that catastrophic weather events, terrorism, and shifting economic centers are not just events of our imagination but realities of our lifetime. Given this challenging landscape, what steps do US businesses need to take today to survive a potentially disruptive energy future? How can we also make the production of energy more cost effective, so that the total cost of ownership and return on investment make sense for these businesses?
Global Business Network (GBN), a member of the Monitor Group, in cooperation with the US Environmental Protection Agency (EPA), gathered senior executives from 20 major US companies to consider the potential energy impacts that US businesses may face over the next decade. Based on four plausible scenarios of the world in 2020, the report Energy Strategy for the Road Ahead identifies a set of strategies that will help businesses act now to prepare for future energy-related risks. Considering changes in global economic patterns and shifts in US policy and regulation toward climate change as key factors that would affect the shape of the future ahead, the following four scenarios were created by the corporate executives who participated in the GBN workshops:
• The same road—where the world continues much in the same direction it appears to be going now in regard to energy and environmental concerns around climate change.
• The long road—where the world undergoes a significant shift in the economic, geopolitical, and energy centers of gravity.
• The broken road—where the world continues much in the direction of today but is then hit by a severe event that overturns established systems and rules.
• The fast road—where reasoned decisions and investments about energy and climate risk are made early enough to make a difference.
"The world can expect energy prices to continue their generally upward spiral in the years ahead if global energy policies remain the same," the International Energy Agency (IEA) reported [3]. Rapid economic development in China and India, coupled with relatively consistent energy use in industrialized nations, will likely strain the world's ability to meet a
projected rise in energy demand of some 1.6% a year until 2030, the agency predicted in its annual World Energy Outlook report [4]. The IEA significantly increased its projections of future oil costs in this year’s report due to the changing outlook for demand and production costs. Now, it expects crude oil to average $100 per barrel over the next two decades and more than $200 per barrel in 2030, in nominal terms. Last year’s forecast estimated that a 2030 barrel would amount to only $108.
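The compound growth rates quoted in this chapter are easy to sanity-check numerically. A minimal sketch (the 22-year horizon for the IEA figure is an assumption for illustration):

```python
# Sanity checks of compound growth rates quoted in this chapter.

# 3%/yr electricity growth sustained over the 25 years ending in 2000:
factor = 1.03 ** 25
print(f"3%/yr for 25 years: x{factor:.2f}")   # ~2.1: new construction roughly
# equal to all the capacity existing at the start of the period.

# IEA's ~1.6%/yr world energy demand growth to 2030 (assuming ~22 years):
factor_iea = 1.016 ** 22
print(f"1.6%/yr for 22 years: x{factor_iea:.2f}")   # ~1.4: about a 40% rise
```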
3.2 Nuclear reactors for power production
In the United States, most reactor design and development for the generation of electrical power branched from early nuclear navy research, when it was realized that a compact nuclear power plant would have great advantages for a submarine nuclear propulsion system. Such a power plant on board would make possible long voyages across the oceans at high speed without the necessity of resurfacing at frequent intervals, and Argonne National Laboratory was assigned the task of designing such a reactor. Thus the first generation of the pressurized water reactor (PWR) was born, where the use of highly enriched uranium as the fuel and water under pressure as both moderator and coolant allowed a small version of this type of high-power reactor. Consequently, the first prototype of such a reactor, namely STR Mark 1, started operation at Arco, Idaho, in March 1953, and a production version of it was installed on the USS Nautilus, the first nuclear-powered submarine, after May 31, 1953. As a result of experience gained in the successful operation of the submarine reactors, the first commercial version of the PWR was designed at Shippingport, Pennsylvania, and went into operation on December 2, 1957, with a water pressure of 13.8 MPa, that is, 2000 psi, and steam produced in the heat exchanger at a temperature of about 254°C (490°F) and a pressure close to 4.14 MPa (600 psi). In order to make such a reactor cost effective and reduce the cost of the power produced, only a small number of the fuel elements are highly enriched in uranium-235 (235U) as an alloy with zirconium, the remainder being normal uranium as the dioxide. The change in core design required a bigger footprint for the commercialized version of the PWR, which was not an issue for a land-based facility. The output power of this reactor was about 60 MW electrical, equivalent to 230 MW thermal, and further enhancements in core design increased the power output to 150 MW electrical and 505 MW thermal. Pressurized water reactors using slightly enriched (2% to 4%) uranium dioxide as the fuel are now commonly used in the United States and other countries around the globe for commercial power generation. The most recent plants have electrical capacities in the neighborhood of 1000 MW (3000 MW thermal). A modification of the pressurized water concept produces steam directly by utilizing fission heat to boil water within the reactor core, rather than in an external heat exchanger.
Later on, other reactor designs based on different fuel materials, moderators, and coolants, with various electrical and thermal power outputs, were born; examples include:
• Boiling water reactor (BWR), initiated in 1953
• Water-cooled graphite-moderated reactor, in 1954
• High-temperature gas-cooled reactor (HTGR)
• Liquid metal fast breeder reactor
Basically, all commercial reactor power plants of present interest are systems for generating steam, utilizing the heat of nuclear fission to boil water and produce steam for a turbine; they are often referred to as nuclear steam supply systems, or NSSS. The steam is expanded in a turbine which drives a generator to produce electricity in the conventional manner. The exhaust steam from the turbine passes on to a condenser, where it is converted into liquid water, and this is returned as feed water to the steam generator of the NSSS. The proportion of the heat supplied in a power plant that is actually converted into electrical energy is called the thermal efficiency of the system; thus, in a nuclear installation,
\[ \text{Thermal efficiency} = \frac{\text{Electrical energy generated}}{\text{Heat produced in the reactor}} \tag{3.1} \]
The maximum possible value of the thermal efficiency is the ideal thermodynamic efficiency, which is given by the following relationship:

\[ \text{Ideal thermodynamic efficiency} = \frac{T_1 - T_2}{T_1} \tag{3.2} \]

where
T1 = the absolute temperature of the steam entering the turbine (K),
T2 = the absolute temperature at which heat is rejected to the condenser (K).
The ideal thermodynamic efficiency can be increased by having T1 as high as possible and T2 as low as possible. In practice, T2 is more or less fixed by the ambient temperature; the thermal efficiency of a steam electric plant is then largely determined by the steam temperature, which should be as high as feasible. Conditions in PWRs and BWRs are such that the steam temperature is lower than in modern fossil-fuel power plants in which the heat is produced by burning coal, oil, or gas. Hence, the thermal efficiencies of these reactor plants are only about 33%, compared with 40% for the best fossil-fuel facilities. With the HTGRs and fast-breeder reactors, however, the thermal efficiencies should equal those of the best fossil-fuel plants, that is, about 40%.
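As a quick numerical reading of Eq. (3.2), a minimal sketch with assumed round temperatures (saturated steam near the 254°C quoted earlier for Shippingport, and heat rejection near ambient) shows why LWR efficiencies sit near 33% while the ideal limit is higher:

```python
# Ideal thermodynamic efficiency, Eq. (3.2): eta = (T1 - T2) / T1.
T1 = 527.0   # steam entering the turbine, K (~254 C, assumed typical PWR value)
T2 = 300.0   # heat rejection temperature at the condenser, K (near ambient, assumed)

eta_ideal = (T1 - T2) / T1
print(f"ideal efficiency ~ {eta_ideal:.1%}")   # ~43%; real PWR/BWR plants reach ~33%
```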
3.3 Future of nuclear power plant systems
In response to the difficulties in achieving sustainability, a sufficiently high degree of safety, and a competitive economical basis for nuclear power, the DOE initiated the Generation IV program in 1999. Generation IV (GEN IV) refers to the broad division of nuclear designs into four categories, as follows:
1. Early prototype reactors [Generation I (GEN I)]
2. The large central station nuclear power plants of today [Generation II (GEN II)]
3. The advanced light water reactors (LWRs) and other systems with inherent safety features that have been designed in recent years [Generation III (GEN III)]
4. The next-generation systems to be designed and built two decades from now (GEN IV)
By 2000, international interest in the Generation IV project had resulted in a nine-country coalition that includes:
i. Argentina
ii. Brazil
iii. Canada
iv. France
v. Japan
vi. South Africa
vii. South Korea
viii. United Kingdom
ix. The United States
Participants are mapping out and collaborating on the research and development of future nuclear energy systems. Although the GEN IV program is exploring a wide variety of new systems, a few examples serve to illustrate the broad approaches reactor designers are developing to meet their objectives. The next-generation systems are based on three general classes of reactors:
1. Gas-cooled
2. Water-cooled
3. Fast-spectrum
All these categories and their brief designs were extensively explained in Chapter 1 of this book and are briefly covered in the next sections.
3.4 Next generation of nuclear power reactors for power production
Experts project that worldwide electricity consumption will increase substantially in the coming decades, especially in the developing world, accompanied by economic growth and social progress; this growth, together with rising electricity prices, has focused fresh attention on nuclear power plants. New, safer, and more economical
nuclear reactors could not only satisfy many of our future energy needs but could combat global warming as well. Today's existing nuclear power plants online in the United States provide a fifth of the nation's total electrical output. Taking into account the expected increase in energy demand worldwide and the growing awareness about global warming, climate change issues, and sustainable development, nuclear energy will be needed to meet future global energy demand. Nuclear power plant technology has evolved as distinct design generations, as mentioned in the previous section and briefly summarized here again as follows:
First generation (GEN I): prototypes and first realizations (∼1950–1970)
Second generation (GEN II): current operating plants (∼1970–2030)
Third generation (GEN III): deployable improvements to current reactors (∼2000 and on)
Fourth generation (GEN IV): advanced and new reactor systems (2030 and beyond)
The Generation IV International Forum, or GIF, was chartered in July 2001 to lead the collaborative efforts of the world's leading nuclear technology nations to develop next-generation nuclear energy systems to meet the world's future energy needs. Eight technology goals have been defined for GEN IV systems in four broad areas:
1. Sustainability,
2. economics,
3. safety and reliability, and, finally,
4. proliferation resistance and physical protection
These ambitious goals are shared by a large number of countries as they aim at responding to the economic, environmental, and social requirements of the 21st century. They establish a framework and identify concrete targets for focusing GIF R&D efforts. Fig. 3.1 represents the historical evolution of nuclear power generations from past to present and future. Table 3.1 further describes these ambitious goals for GEN IV nuclear energy systems in more detail.
The next generation (GEN IV) of nuclear energy systems is intended to meet the above goals while being at least as effective as the "third" generation in terms of economic competitiveness, safety, and reliability, to provide a sustainable development of nuclear energy. In principle, the GEN IV systems should be marketable or deployable from 2030 onward. The systems should also offer a true potential for new applications compatible with an expanded use of nuclear energy, in particular in the fields of hydrogen or synthetic hydrocarbon production, sea water desalination, and process heat production. It has been recognized that these objectives, widely and officially shared by a large number of countries, should be at the basis of an internationally shared research and
Fig. 3.1 Evolution of nuclear power (source: www.doe.gov). The timeline runs from Generation I early prototypes (Shippingport, Dresden, Magnox; ∼1950–1970), through Generation II commercial power reactors (PWRs, BWRs, CANDU; ∼1970–1990), Generation III advanced LWRs (CANDU 6, System 80+, AP600), and Generation III+ evolutionary designs (ABWR, ACR1000, AP1000, APWR, EPR, ESBWR), to Generation IV revolutionary designs expected around 2030 that are safer, more sustainable, more economical, and more proliferation resistant and physically secure.
Table 3.1 GEN IV nuclear energy systems.
• Sustainability-1: Generation IV nuclear energy systems will provide sustainable energy generation that meets clean air objectives and provides long-term availability of systems and effective fuel utilization for worldwide energy production.
• Sustainability-2: Generation IV nuclear energy systems will minimize and manage their nuclear waste and notably reduce the long-term stewardship burden, thereby improving protection for the public health and the environment.
• Economics-1: Generation IV nuclear energy systems will have a clear life-cycle cost advantage over other energy sources.
• Economics-2: Generation IV nuclear energy systems will have a level of financial risk comparable to other energy projects.
• Safety and reliability-1: Generation IV nuclear energy systems operations will excel in safety and reliability.
• Safety and reliability-2: Generation IV nuclear systems will have a very low likelihood and degree of reactor core damage.
• Safety and reliability-3: Generation IV nuclear energy systems will eliminate the need for offsite emergency response.
• Proliferation resistance and physical protection: Generation IV nuclear energy systems will increase the assurance that they are very unattractive and the least desirable route for diversion or theft of weapons-usable materials and provide increased physical protection against acts of terrorism.
development (R&D) program, which allows keeping open and consolidating the technical options and avoiding any early or premature down selection. In fact, because the next-generation nuclear energy systems will address needed areas of improvement and offer great potential, many countries share a common interest in advanced R&D that will support their development. Such development benefits from the identification of promising research areas and collaborative efforts that should be explored by the international research community. The collaboration on R&D by many nations on the development of advanced next-generation nuclear energy systems will in principle aid the progress toward the realization of such systems, by leveraging resources, providing synergistic opportunities, avoiding unnecessary duplication, and enhancing collaboration.
3.5 Technology roadmap for Generation IV nuclear energy systems
The technology roadmap defines and plans the necessary R&D to support the next generation of innovative nuclear energy systems known as GEN IV. The roadmap has been an international effort of 10 countries, including Argentina, Brazil, Canada, France, Japan, Republic of Korea, South Africa, Switzerland, the United Kingdom, and the United States, the International Atomic Energy Agency, and the OECD Nuclear Energy Agency. Beginning in 2001, over 100 experts from these countries and international organizations began work on defining the goals for new systems, identifying many promising concepts and evaluating them, and defining the R&D needed for the most promising systems. By the end of 2002, the work resulted in a description of the six most promising systems and their associated R&D needs, listed as follows:
1. Gas-cooled fast reactor (GFR)—features a fast neutron spectrum, helium-cooled reactor, and closed fuel cycle
2. Very-high-temperature reactor (VHTR)—a graphite-moderated, helium-cooled reactor with a once-through uranium fuel cycle
3. Supercritical water-cooled reactor (SCWR)—a high-temperature, high-pressure, water-cooled reactor that operates above the thermodynamic critical point of water
4. Sodium-cooled fast reactor (SFR)—features a fast-spectrum, sodium-cooled reactor and closed fuel cycle for efficient management of actinides and conversion of fertile uranium
5. Lead-cooled fast reactor (LFR)—features a fast-spectrum, lead/bismuth eutectic liquid metal–cooled reactor and a closed fuel cycle for efficient conversion of fertile uranium and management of actinides
6. Molten salt reactor (MSR)—produces fission power in a circulating molten salt fuel mixture with an epithermal spectrum reactor and a full actinide recycling fuel cycle
These systems offer significant advances in sustainability, safety and reliability, economics, proliferation resistance, and physical protection. These six systems feature
increased safety, improved economics for electricity production and new products such as hydrogen for transportation applications, reduced nuclear wastes for disposal, and increased proliferation resistance. In 2009, the expert group published an outlook on Generation IV R&D to provide a view of what GIF members hope to achieve collectively in the period 2010–2014. All GEN IV systems have features aiming at performance improvement, new applications of nuclear energy, and/or more sustainable approaches to the management of nuclear materials. High-temperature systems offer the possibility of efficient process heat applications and eventually hydrogen production. Enhanced sustainability is achieved primarily through adoption of a closed fuel cycle with reprocessing and recycling of plutonium, uranium, and minor actinides using fast reactors; this approach provides significant reduction in waste generation and uranium resource requirements. Table 3.2 summarizes the main characteristics of the six GEN IV systems. An extensive description of the six most promising nuclear power systems of GEN IV is given in Chapter 1, with artistic renderings of their infrastructure.
3.6 Power conversion study and technology options assessment
A study completed in September 2004 by a team of experts in the Nuclear Engineering Department at the University of California concluded, in its executive summary, that the electrical power conversion system (PCS) for the next-generation nuclear plant (NGNP) will take advantage of significantly higher reactor outlet temperatures to provide greater efficiency than can be achieved by the current generation of LWRs. In anticipation of the design, development, and procurement of an advanced PCS for the NGNP, the study was initiated to identify the major design and technology options and the trade-offs that must be considered in the evaluation of PCS options to support future research and procurement decisions.

Table 3.2 Summary of the six GEN IV systems.

System | Neutron spectrum | Coolant | Temp. (°C) | Fuel cycle | Size (MWe)
VHTR | Thermal | Helium | 900–1000 | Open | 250–300
SFR | Fast | Sodium | 550 | Closed | 30–150, 300–1500, 1000–2000
SCWR | Thermal/fast | Water | 510–625 | Open/closed | 300–700, 1000–2000
GFR | Fast | Helium | 850 | Closed | 1200
LFR | Fast | Lead | 480–800 | Closed | 20–180, 300–1200, 600–1000
MSR | Epithermal | Fluoride salt | 700–800 | Closed | 1000

GFR, Gas-cooled fast reactor; LFR, lead-cooled fast reactor; MSR, molten salt reactor; SCWR, supercritical water-cooled reactor; SFR, sodium-cooled fast reactor; VHTR, very-high-temperature gas reactor.
These PCS technology options affect cycle efficiency, capital cost, system reliability, maintainability, and technical risk, and therefore the cost of electricity from GEN IV systems. A reliable evaluation and estimate of actual costs requires an optimized, integrated PCS design. At this early stage of the NGNP project, it is useful to identify the technology options that will be considered in the design of proposed PCS systems, to identify the system performance and cost implications of those options, and to provide a general framework for evaluating the design choices and technology trade-offs.

The ultimate measure of the value of power conversion options is the cost of electricity produced, which is a function of capital and operating cost recovery, system efficiency, and system reliability. Cost is difficult to evaluate without detailed integrated designs, but it is possible to identify the factors that influence component and system performance, cost, and technical risk. (A minimal levelized-cost sketch illustrating how these factors combine is given after Table 3.3.)

In this study, several existing Brayton conversion system designs were examined to illustrate and evaluate the implications of the major design choices, to assess performance against the GEN IV economics and sustainability goals, and to identify areas of technical incompleteness or weakness. Several reference system designs were considered to provide a semiquantitative basis for comparison. The reference systems include the GT-MHR, PBMR, GTHTR-300, the Framatome indirect cycle design, and AHTR high-temperature Brayton cycle designs; where appropriate, Generation II, III, and III+ LWRs (two 1970s designs, the EPR, and the ESBWR) were also considered.

The design choices and technology options considered relevant for the assessment of NGNP power conversion options include the cycle type and operating conditions, such as the choice of working fluid, direct versus indirect cycles, system pressure, and interstage cooling and heating options. The cost and maintainability of the PCS are also influenced by the PCS layout and configuration, including distributed versus integrated PCS designs, single versus multiple shafts, shaft orientation, and the implications for the pressure boundary.

From the summary in Table 3.3, it is apparent that high-temperature gas reactor power conversion design efforts to date have resulted in very different design choices, driven by project-specific requirements and by performance or technical-risk considerations. In the review of existing designs and the evaluation of the major technology options, it immediately becomes apparent that an optimized design involves a complex trade-off among diverse factors, such as cost, efficiency, development time, maintainability, and technology growth path, which must be considered in an integrated PCS context before a final evaluation is made.
Table 3.3 Summary of PCS design features for representative gas reactor systems.

Feature | PBMR (horizontal) | GT-MHR | GTHTR300 | Framatome indirect | AHTR-IT
Thermal power (MWt) | 400 | 600 | 600 | 600 | 2400
Direct vs. indirect cycle | Direct | Direct | Direct | Indirect | Indirect
Recuperated vs. combined cycle | Recuperated | Recuperated | Recuperated | Combined | Recuperated
Intercooled vs. nonintercooled | Intercooled | Intercooled | Nonintercooled | Intercooled | Intercooled/reheat
Integrated vs. distributed PCS | Distributed | Integrated | Distributed | Distributed | Distributed (modular)
Single vs. multiple TM shafts | Single (previously multiple) | Single | Single | Single | Multiple (modular)
Synchronous vs. asynchronous | Reduction to synchronous | Asynchronous | Synchronous | Synchronous | Synchronous
Vertical vs. horizontal TM | Horizontal | Vertical | Horizontal | Horizontal | Vertical
Submerged vs. external generator | External | Submerged | Submerged | External | Submerged

PCS, Power conversion system.
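The cost-of-electricity figure of merit discussed above can be made concrete with a simple levelized-cost relation. The sketch below is a minimal illustration only: the overnight cost, fixed charge rate, O&M, and fuel figures are hypothetical placeholders, not values from the NGNP study.

```python
# Minimal levelized cost of electricity (LCOE) sketch for comparing PCS options.
# All numbers below are hypothetical placeholders, not values from the NGNP study.

def lcoe_usd_per_mwh(overnight_cost_usd_per_kw: float,
                     fixed_charge_rate: float,
                     om_usd_per_mwh: float,
                     fuel_usd_per_mwh_th: float,
                     thermal_efficiency: float,
                     capacity_factor: float) -> float:
    """Levelized cost = annualized capital per MWh + O&M + fuel/efficiency."""
    hours_per_year = 8760.0
    # Annualized capital charge, spread over the energy actually generated.
    capital = (overnight_cost_usd_per_kw * 1000.0 * fixed_charge_rate) / (
        hours_per_year * capacity_factor)
    # Fuel is purchased per thermal MWh, so divide by conversion efficiency.
    fuel = fuel_usd_per_mwh_th / thermal_efficiency
    return capital + om_usd_per_mwh + fuel

# Illustrative comparison: a higher-efficiency Brayton PCS tolerates the same
# capital cost at a lower electricity cost.
for eta in (0.33, 0.45, 0.50):
    print(f"eta = {eta:.2f}: LCOE = "
          f"{lcoe_usd_per_mwh(4000, 0.10, 10.0, 7.0, eta, 0.90):.1f} $/MWh")
```

The comparison makes the study's point numerically: higher efficiency shrinks the fuel term and spreads fixed costs over more salable energy, which is why efficiency, capital cost, and reliability must be traded off together rather than in isolation.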
General observations derived from the review of the reference systems, including comparisons with LWR systems where applicable, include the following:
• There are key PCS design choices that can have large effects on PCS power density and nuclear island size, making careful and detailed analysis of design trade-offs important in the comparison of PCS options.
• Considering the major construction inputs for nuclear plants—steel and concrete—high-temperature reactors (HTRs) appear able to break the economy-of-scale rules for LWRs and achieve similar material-inputs performance at much smaller unit sizes.
• For HTRs, a much larger fraction of total construction inputs goes into the nuclear island. To compete economically with LWRs, HTRs must find approaches that reduce the relative costs of nuclear-grade components and structures.
PCS technology options also include variations on the cycle operating conditions and the cycle type that can have an important impact on performance and cost. These options include the following:
• Working fluid choice: He, N2, CO2, or combinations have been considered. The physical characteristics of the working fluid influence cycle efficiency and component design.
• System pressure: Higher pressure leads to a moderate increase in efficiency and to smaller PCS components, but it increases the pressure-boundary cost—particularly for the reactor vessel—which introduces a component design and system cost and performance trade-off.
• Direct versus indirect cycle: Indirect cycles involve an intermediate heat exchanger (IHX), with a resulting efficiency reduction and more complex control requirements, but they facilitate maintenance.
• Interstage cooling (or heating) results in higher efficiency but greater complexity.
Some of the observations from this assessment of these factors include the following:
• Differences between He and N2 working fluids were not considered critical for turbomachinery design, because both involve similar departures from current combustion turbines; the primary difference is in the heat exchanger size needed to compensate for the lower thermal conductivity of N2.
• N2 allows 3600-rpm compressor operation at thermal powers at or below 600 MW(t), while He compressors must operate at higher speeds, requiring reduction gears, asynchronous generators, or multiple-shaft configurations. However, uprating to approximately 800 MW(t) would permit 3600-rpm He compressor operation, providing a potentially attractive commercialization approach. Turbomachinery tolerances for He systems do not appear to be a key issue.
• Direct versus indirect: The efficiency loss for an indirect cycle can be 2% to 4%, depending on design, and the IHX becomes a critical component at high temperatures. Direct cycles have an extended nuclear-grade pressure boundary, and maintainability is considered a key design issue for them.
• Interstage cooling, as well as bottoming cycles (Rankine), can yield significant efficiency improvements, but at the cost of complexity and of lower temperature differences for heat rejection, which affects the potential for dry cooling and for reduced environmental impact from heat rejection.
The PCS configuration and the physical arrangement of the system components have important effects on the volume of and material inputs into structures, on the pressure-boundary volume and mass, on gas inventories and storage volume, on the uniformity of flow to heat exchangers, on pressure losses, and on maintainability. The major factors considered in this study include the following:
• Distributed versus integrated PCS design approach: PCS components can be located inside a single pressure vessel (e.g., GT-MHR) or divided among multiple pressure vessels (e.g., PBMR, GTHTR-300). This is a major design choice, with important impacts on several areas of design and performance.
• Shaft orientation (vertical/horizontal): Orientation affects the compactness of the system and the optimal design of the ducting between the turbomachinery and the heat exchangers. Vertical turbomachinery provides a reduced PCS footprint area and building volume and can simplify the ducting arrangement to modular recuperator and intercooler heat exchangers.
• Single versus multiple shafts: A single shaft may include flexible couplings or reduction gears. In multiple-shaft systems, turbocompressors are separated from synchronous turbogenerators, allowing the compressors to operate at higher speed and reducing
the number of compressor stages required. Multiple shafts and flexible couplings reduce the weight of the individual turbomachines that the bearings must support.
• Pressure boundary design: The pressure vessels that contain the PCS typically have the largest mass of any PCS component and contribute a significant share (∼33%) of the total PCS cost. A rough sketch of how design pressure drives vessel mass follows.
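As a rough illustration of the pressure-boundary trade-off noted above, thin-walled pressure vessel theory says the required wall thickness, and hence the shell mass, scales linearly with design pressure. The geometry, allowable stress, and pressures below are illustrative assumptions, not NGNP vessel data.

```python
import math

# Thin-walled cylindrical vessel: hoop stress sigma = p*r/t, so the required
# wall thickness (and hence shell mass) grows linearly with design pressure.
# All numbers are illustrative assumptions, not NGNP design values.

def shell_mass_kg(pressure_mpa, radius_m, length_m, allow_stress_mpa, rho=7850.0):
    t = pressure_mpa * radius_m / allow_stress_mpa      # required thickness (m)
    area = 2.0 * math.pi * radius_m * length_m          # cylindrical shell area
    return rho * area * t                               # steel density ~7850 kg/m3

for p in (5.0, 7.0, 9.0):   # assumed helium PCS pressures, MPa
    m = shell_mass_kg(p, radius_m=3.0, length_m=20.0, allow_stress_mpa=130.0)
    print(f"{p:.0f} MPa -> shell mass ~{m / 1000:.0f} t")
```

The linear growth of vessel mass with pressure is what sets up the trade-off against the efficiency and compactness gains of higher system pressure.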
3.6.1 Heat exchanger components
The heat exchanger components and the required designs are summarized as follows:
• Heat exchanger designs have significant impacts on both the efficiency and the cost of the PCS. For a given heat exchanger type, higher effectiveness must be balanced against the increased size or pressure-drop implications. Using small passages increases heat exchanger surface area per unit volume, but those same small passages tend to reduce the heat transfer coefficient owing to laminar flow. Higher pressures may be utilized to force those flows back into the turbulent region, but higher pressures force construction of a more robust pressure boundary and increase pumping power.
• The recuperator effectiveness and the total heat exchanger (HX) pressure drop have a significant impact on the cycle efficiency, and there is significant leverage in optimizing the recuperator design for both high heat-transfer effectiveness and minimum pressure drop (a simple cycle model illustrating this leverage follows this list). For modular recuperators, careful attention must be paid to the module configuration and duct design to obtain equal flow rates to each module.
• Material limitations may restrict the operating temperatures of many components, including the reactor vessel and the heat exchangers, but because of the large flexibility of the Brayton cycle, high-efficiency systems can still be designed within these limitations. Fabrication techniques will probably differ among the intermediate, precooler and intercooler, and recuperator heat exchangers because of their different operating temperature ranges. It appears that transients could be tolerated by most of these heat exchanger designs.
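The leverage of recuperator effectiveness on cycle efficiency can be seen with a minimal ideal-gas model of a recuperated closed Brayton cycle. All inputs (temperatures, pressure ratio, component efficiencies, and loss fraction) are assumptions chosen for illustration, not parameters of any reference design.

```python
# Ideal-gas model of a recuperated closed Brayton cycle, illustrating the
# leverage of recuperator effectiveness and pressure losses on efficiency.
# Inputs are illustrative assumptions, not a specific NGNP design point.

def brayton_eff(t_in=900.0 + 273.15,   # turbine inlet temperature, K
                t_cold=35.0 + 273.15,  # compressor inlet temperature, K
                r=2.5,                 # compressor pressure ratio
                gamma=1.667,           # monatomic helium
                eta_c=0.89, eta_t=0.92,
                eps=0.95,              # recuperator effectiveness
                dp_frac=0.05):         # total fractional pressure loss
    k = (gamma - 1.0) / gamma
    t2 = t_cold * (1.0 + (r**k - 1.0) / eta_c)        # compressor outlet
    r_t = r * (1.0 - dp_frac)                          # turbine expansion ratio
    t4 = t_in * (1.0 - eta_t * (1.0 - r_t**(-k)))      # turbine outlet
    t2r = t2 + eps * (t4 - t2)                         # after recuperator
    w_net = (t_in - t4) - (t2 - t_cold)                # per unit cp*mdot
    q_in = t_in - t2r
    return w_net / q_in

for eps in (0.90, 0.95, 0.98):
    print(f"effectiveness {eps:.2f}: cycle efficiency ~{brayton_eff(eps=eps):.1%}")
```

With the assumed inputs, a few points of recuperator effectiveness move the cycle efficiency by several points, which is exactly the optimization leverage the bullet above describes.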
3.6.2 Turbomachinery
The turbomachinery used in the new generation of nuclear power systems (GEN IV) plays a significant role in their commercial deployment to meet the rising future demand for electricity.
• First-order estimates of key turbine and compressor design and performance characteristics can be made with low-level analysis (a sketch of such an estimate follows this list). For the reference systems, the key turbomachinery design parameters (speed, number of stages, stage diameters, blade heights, and blade clearances) will be similar to those of current commercial gas turbine engines.
• At lower reactor thermal powers, He compressors will require operation above 3600 rpm to achieve efficiency goals [800 MW(t) allows 3600-rpm operation].
• Maximum system temperatures in the reference designs are near the limit for uncooled turbines.
• For both direct and indirect designs, the seals, housing, and bearing components will be fundamentally different from those of current gas turbines, requiring extensive development with the associated cost and risk.
These observations illustrate the complex interactions of the many design choices that will be considered in the NGNP PCS. It is clear that detailed and integrated design efforts must be performed on candidate designs before quantitative evaluations are possible. The assessment described in this study helps illuminate those critical design choices and the resulting implications for the cost and performance of the future NGNP PCS design.
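The kind of low-level estimate referenced in the first bullet can be sketched by treating each axial-compressor stage as delivering roughly psi*U^2 of work (Euler work with an assumed stage loading coefficient psi), from which a stage count for a given pressure ratio falls out directly. The loading coefficient, tip diameter, and temperatures below are assumed values for illustration only.

```python
import math

# First-order axial-compressor stage count: each stage delivers roughly
# delta_h = psi * U^2 (Euler work with stage loading psi), so the stage count
# is cp * dT_total / (psi * U^2). Values of psi, tip diameter, and the
# operating conditions are illustrative assumptions.

def compressor_stages(cp, t_in, pressure_ratio, gamma, rpm, tip_dia_m,
                      psi=0.35, eta_c=0.89):
    k = (gamma - 1.0) / gamma
    dt_total = t_in * (pressure_ratio**k - 1.0) / eta_c   # total temp rise, K
    u = math.pi * tip_dia_m * rpm / 60.0                  # blade speed, m/s
    return cp * dt_total / (psi * u * u)

# Helium versus nitrogen at the same speed and geometry (cp in J/kg-K).
for gas, cp, gamma in (("He", 5193.0, 1.667), ("N2", 1040.0, 1.40)):
    n = compressor_stages(cp, t_in=308.0, pressure_ratio=2.5, gamma=gamma,
                          rpm=3600.0, tip_dia_m=1.5)
    print(f"{gas}: ~{n:.0f} stages at 3600 rpm")
```

Under these assumptions, helium's very high specific heat demands several times more stages than nitrogen at the same blade speed, which is the physical reason He compressors push toward higher shaft speeds, reduction gears, or multiple shafts.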
3.6.3 Advanced computational materials science proposed for GEN IV systems
A renewed interest in nuclear reactor technology has developed in recent years, in part as a result of international interest in sources of energy that do not produce CO2 as a by-product. One result of this interest was the establishment of the GIF, a group of international governmental entities whose goal is to facilitate bilateral and multilateral cooperation on the development of new nuclear energy systems.
Historically, the fusion and fission reactor programs have each taken advantage of and built on research carried out by the other. This leveraging can be expected to continue over the next 10 years as both the experimental and the modeling activities in support of the GEN IV program grow substantially. The GEN IV research will augment the fusion studies (and vice versa) in areas where similar materials and exposure conditions are of interest. However, in addition to the concerns common to both fusion and advanced fission reactor programs, designers of a future DT fusion reactor [5-7] have the unique problem of anticipating the effects of the 14 MeV neutron source term.
For example, advances in computing hardware and software should permit improved (and in some cases the first) descriptions of relevant properties in alloys based on ab initio calculations. Such calculations could provide the basis for realistic interatomic potentials for alloys, including alloy–He potentials, that can be applied in classical molecular dynamics simulations. These potentials must include a more detailed description of many-body interactions than is accounted for in the current generation, which is generally based on a simple embedding function. In addition, potentials used under fusion reactor conditions (very high PKA energies) should account for the effects of local electronic excitation and electronic energy loss. The computational cost of more complex potentials also requires the next generation of massively parallel computers. New results of ab initio and atomistic calculations can be coupled with ongoing advances in kinetic and phase-field models to dramatically improve predictions of the nonequilibrium, radiation-
induced evolution in alloys with unstable microstructures, including phase stability and the effects of helium on each microstructural component.
However, for all its promise, computational materials science is still a house under construction, and the current reach of the science is therefore limited. Theory and modeling can be used to develop understanding of known critical physical phenomena, and computer experiments can be, and have been, used to identify new phenomena and mechanisms and to aid in alloy design. However, it is questionable whether the science will be sufficiently mature in the foreseeable future to provide a rigorous scientific basis for predicting critical materials properties or for extrapolating well beyond the available validation database.
Two other issues remain even if the scientific questions appear to have been adequately answered: licensing and capital investment. Even a high degree of scientific confidence that a given alloy will perform as needed in a particular GEN IV or fusion environment is not necessarily transferable to the reactor licensing or capital-market regimes. The philosophy, codes, and standards employed for reactor licensing are properly conservative with respect to design data requirements. Experience with the US Nuclear Regulatory Commission suggests that only modeling results strongly supported by relevant, prototypical data will have an impact on the licensing process. Similarly, it is expected that investment on the scale required to build a fusion power plant (several billion dollars) could be obtained only if a very high level of confidence existed that the plant would operate long enough, and safely enough, to return the investment.
These latter two concerns appear to dictate that an experimental facility capable of generating a sufficient, if limited, body of design data under essentially prototypic conditions (i.e., with ∼14 MeV neutrons) will ultimately be required for the commercialization of fusion power. An aggressive theory and modeling effort will reduce the time and experimental investment required to develop advanced materials that can perform in a DT fusion reactor environment [5-7]. For example, the quantity of design data may be reduced to that required to confirm model predictions for key materials at critical exposure conditions. This will include some data at a substantial fraction of the anticipated end-of-life dose, which raises the issue of when such an experimental facility is required. Long lead times for the construction of complex facilities, coupled with the several years of irradiation needed to reach the highest doses, imply that the decision to build any fusion-relevant irradiation facility must be made on the order of 10 years before the design data are needed.
Two related areas of research can be used as reference points for the expressed need to obtain experimental validation of model predictions. Among the lessons learned from the Accelerated Strategic Computing Initiative (ASCI), the importance of code validation and verification was emphasized at workshops among the countries involved in such research.
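To make the modeling chain concrete, the toy script below runs a few velocity-Verlet steps of classical molecular dynamics with a Lennard-Jones pair potential, the simplest stand-in for the many-body alloy potentials discussed above. It is a pedagogical sketch in reduced units with 32 atoms; production radiation-damage simulations use far more sophisticated potentials and millions of atoms.

```python
import numpy as np

# Toy classical molecular dynamics with a Lennard-Jones pair potential, in
# reduced units. Purely illustrative: real radiation-damage MD uses many-body
# alloy potentials, cascade initial conditions, and vastly larger systems.

box, dt = 6.0, 0.002
grid = np.mgrid[0:4, 0:4, 0:2].reshape(3, -1).T          # 32 lattice sites
pos = (grid + 0.5) * np.array([box / 4, box / 4, box / 2])
vel = np.random.default_rng(0).normal(0.0, 0.5, size=pos.shape)
n = len(pos)

def lj_forces(pos):
    d = pos[:, None, :] - pos[None, :, :]
    d -= box * np.round(d / box)                          # periodic boundaries
    r2 = (d ** 2).sum(-1)
    np.fill_diagonal(r2, np.inf)                          # no self-interaction
    inv6 = r2 ** -3
    # F_ij = 24*(2 r^-12 - r^-6)/r^2 * d_ij (gradient of 4*(r^-12 - r^-6))
    f_over_r = 24.0 * (2.0 * inv6 ** 2 - inv6) / r2
    return (f_over_r[:, :, None] * d).sum(axis=1)

f = lj_forces(pos)
for _ in range(100):                                      # velocity-Verlet steps
    vel += 0.5 * dt * f
    pos = (pos + dt * vel) % box
    f = lj_forces(pos)
    vel += 0.5 * dt * f
print("mean kinetic energy per atom:", 0.5 * (vel ** 2).sum() / n)
```

The point of the sketch is structural: whatever interatomic potential the ab initio work delivers simply replaces the force routine, while the integration loop and boundary handling stay the same.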
Because of the significant challenges associated with structural materials applications in these advanced nuclear energy systems, the Workshop on Advanced Computational Materials Science: Application to Fusion and Generation IV Fission Reactors was convened by the DOE's Office of Science and the Office of Nuclear Energy, Science and Technology to ensure that research funded by these programs takes full advantage of ongoing advances in computational science and of the Department's investment in computational facilities. In particular, participants in the workshop were asked to
1. Examine the role of high-end computing in the prediction of materials behavior under the full spectrum of radiation, temperature, and mechanical loading conditions anticipated for the advanced structural materials required for future GEN IV fission and fusion reactor environments.
2. Evaluate the potential for experimentally validated computational modeling and simulation to bridge the gap between the data needed to support the design of these advanced nuclear technologies and both the available database and the data that can reasonably be obtained in currently available irradiation facilities.
The need to develop materials capable of performing in the severe operating environments expected in GEN IV reactors represents a significant challenge in materials science. There is a range of potential GEN IV fission reactor design concepts, and each concept has its own unique demands. Improved economic performance is a major goal of the GEN IV designs; as a result, most designs call for significantly higher operating temperatures than the current generation of LWRs in order to obtain higher thermal efficiency. In many cases, the desired operating temperatures rule out the use of the structural alloys employed today. The very high operating temperature (up to 1000°C) associated with the NGNP is a prime example of an attractive new system that will require the development of new structural materials.
DOE and Idaho National Laboratory (INL) established the NGNP project as required by Congress in Subtitle C of Title VI of the Energy Policy Act of 2005. The mission of the NGNP project was to develop, license, build, and operate a prototype modular HTGR plant that would generate high-temperature process heat for use in hydrogen production and other energy-intensive industries while generating electric power at the same time. As stipulated by the Energy Policy Act of 2005, prelicensing activities for the NGNP prototype began with the development of the NGNP Licensing Strategy Report to Congress, jointly issued by the Nuclear Regulatory Commission (NRC) and DOE in August 2008. Subsequent NRC interactions with DOE and INL centered primarily on the NRC's review and assessment of a series of NGNP white paper submittals describing the approaches that DOE and INL proposed to pursue in establishing the technical safety bases and criteria for licensing the NGNP prototype. NGNP prelicensing interactions began in 2006 and were suspended
in 2013, after DOE decided in 2011 not to proceed into the detailed design and license application phases of the NGNP project. DOE's decision cited impasses between DOE and the NGNP Industry Alliance over cost-sharing arrangements for the public–private partnership required by Congress. See Fig. 3.2 for a conceptual, artistic layout of a GEN IV power plant station.
The operating temperatures, neutron exposure levels, and thermomechanical stresses for proposed GEN IV fission reactors pose huge technological challenges for materials scientists and engineers. In addition, the transmutation products created in the structural materials by the high-energy neutrons produced in this generation of nuclear power reactors can profoundly influence the microstructural evolution and mechanical behavior of these materials.
3.6.4 Material classes proposed for GEN IV systems
Table 3.4 shows the materials suggested by DOE for the new generation of nuclear reactors. The types of materials proposed at the DOE workshop in March 2004 are tabulated as follows:
Fig. 3.2 Artistic layout of a GEN IV power plant station.
Table 3.4 Structural materials suggested for the GEN IV systems by the Department of Energy.

Systems (rows): GFR, LFR, MSR, SFR, SCWR (thermal spectrum), SCWR (fast spectrum), and VHTR.
Candidate structural material classes (columns): ferritic–martensitic stainless-steel alloys, austenitic stainless-steel alloys, oxide dispersion strengthened steels, Ni-based alloys, graphite, refractory alloys, and ceramics. Each system is assigned primary (P) and secondary (S) options among these material classes.

P, Primary; S, secondary.
3.6.5 Generation IV materials challenges
A summary of the challenges for this generation of nuclear power plants is presented here:
• Higher temperatures/larger temperature ranges
• Examples
• VHTR coolant outlet temperature near 1000°C
• GFR transient temperatures to 1600–1800°C, with a gradient across the core of ∼400°C
• LFR up to 800°C steady-state outlet
• Issues
• Creep
• Fatigue
• Toughness
• Corrosion/SCC
• Must drive modeling toward a predictive capability of materials properties in complex alloys across a wide temperature range
• High fluence/dose
• Examples
• LFR, SFR cladding
• SCWR core barrel
• GFR matrix
• Issues
• Swelling
• Creep, stress relaxation (a simple creep-rate sketch follows this list)
• Must drive modeling toward a predictive capability of materials properties in complex alloys to large radiation doses
• Unique chemical environments
• Examples
• Pb and Pb–Bi eutectic
• Supercritical water
• High-temperature oxidation in gas-cooled systems
• Molten salts
• Issues
• Corrosion
• SCC/IASCC
• Liquid metal embrittlement
• Must drive modeling toward a predictive capability of chemical interactions in complex alloys to large radiation doses
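The creep-rate sketch referenced above uses a generic Norton power law with Arrhenius temperature dependence. The prefactor, stress exponent, and activation energy are placeholder values, not data for any specific alloy; the point is only how steeply thermal creep accelerates across the GEN IV temperature range.

```python
import math

# Norton power-law creep, strain_rate = A * sigma^n * exp(-Q/(R*T)), showing
# why the wide GEN IV temperature ranges dominate the creep problem. A, n,
# and Q are generic placeholder values, not data for a specific alloy.

R = 8.314  # gas constant, J/mol-K

def creep_rate(sigma_mpa, temp_c, A=1.0e-8, n=5.0, Q=300e3):
    t_k = temp_c + 273.15
    return A * sigma_mpa**n * math.exp(-Q / (R * t_k))

base = creep_rate(100.0, 550.0)
for t in (550.0, 650.0, 800.0, 1000.0):
    print(f"{t:4.0f} C: creep rate x{creep_rate(100.0, t) / base:,.0f} "
          f"relative to 550 C")
```

Even with these placeholder constants, moving from SFR-like outlet temperatures to VHTR-like temperatures multiplies the creep rate by many orders of magnitude, which is why creep heads the issues list.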
3.7 Generation IV materials fundamental issues
The coevolution of all components of the microstructure, and their roles in the macroscopic response in terms of swelling, anisotropic growth, irradiation creep, and radiation-induced phase transformations, should be studied within the science of complex systems. Fig. 3.3 illustrates a top-level materials study from the microstructural and macroscopic viewpoints.

Fig. 3.3 Top-level materials study from the microstructure and macroscopic viewpoint.

In summary, we can conclude that:
• Six concepts have been identified with the potential to meet the GEN IV goals.
• These concepts operate in more challenging environments than current LWRs, and signifi-
cant material development challenges must be met for any of the GEN IV systems to be viable.
• Experimental programs cannot cover the breadth of materials and irradiation conditions for the proposed GEN IV reactor designs.
• Modeling and microstructural analysis can provide the basis for a material selection that is performed on an incomplete experimental database and that requires considerable judgment to carry out the necessary interpolation and extrapolation.
3.8 End of cheap oil and future of nuclear power
The dusk of cheap oil approaches while the dawn of a new generation of power plants (i.e., GEN IV) is on the horizon. Global production of conventional oil will begin to decline sooner than most people think, probably within 10 years. Recall that two sudden price increases, in 1973 and 1979, rudely awakened the industrial world to its dependence on cheap crude oil. The first, in 1973, came in response to an Arab oil embargo during the Arab–Israeli war, when the price tripled; it then nearly doubled again when the Shah of Iran was dethroned, sending the major economies into a spin. Emotional and political reactions led most analysts to predict a world shortage of crude oil and to argue that insufficient underground reserves for oil exploration would put the survival of the world economy on a critical path; yet even at the time, oil experts knew such claims had no scientific basis. Just a few years earlier, oil explorers had discovered enormous new oil provinces on the North Slope of Alaska and below the North Sea off the coast of Europe. The five Middle Eastern members of the Organization of Petroleum Exporting Countries (OPEC) were able to hike the price of crude oil not because oil was growing short but because they had managed to control 36% of the international market. Later, when output from Alaska and the North Sea increased and demand for crude oil sagged, prices dropped and OPEC's control of prices collapsed.
The next oil crunch will not be so temporary, because worldwide exploration, discovery, and production trends suggest that within the next decade the supply of conventional oil will not be able to keep up with demand. Whether this conclusion contradicts what the oil companies report is open to question. Dividing today's production rate of about 23.6 giga-barrels of oil (Gbo) per year into the official reserve figures, which show reserves growing, may suggest cheap crude oil for 43 more years. But there are three critical errors in that arithmetic:
• First, it relies on distorted estimates of reserves.
• Second, it pretends that production will remain constant.
• Third, and most important, conventional wisdom erroneously assumes that the last bucket of oil can be pumped from the ground just as quickly as the barrels of oil gushing from wells today.
In fact, the rate at which any well—or any country—can produce oil always rises to a maximum and then, when about half the oil is gone, begins falling gradually back to zero. A sketch of this bell-shaped production profile follows.
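This profile is the classic Hubbert model: cumulative production follows a logistic curve, so the production rate is bell-shaped and peaks when roughly half of the ultimately recoverable resource has been extracted. The recoverable total, peak year, and steepness below are assumed values used purely for illustration.

```python
import math

# Hubbert-style production model: cumulative output Q(t) follows a logistic
# curve, so the production rate dQ/dt is bell-shaped and peaks when about
# half the ultimately recoverable resource (URR) is gone. The URR, peak year,
# and steepness are illustrative assumptions, not a forecast.

def production_gbo_per_yr(year, urr_gbo=1800.0, peak_year=2005.0, k=0.06):
    x = math.exp(-k * (year - peak_year))
    # Derivative of the logistic Q(t) = URR / (1 + e^{-k (t - t_peak)})
    return urr_gbo * k * x / (1.0 + x) ** 2

for yr in (1970, 1990, 2005, 2020, 2040):
    print(f"{yr}: ~{production_gbo_per_yr(yr):.1f} Gbo/yr")
```

With these assumed parameters the peak rate comes out near the ~24 Gbo/yr scale quoted above, and the symmetric decline after the peak is the reason the "last bucket" cannot be pumped as quickly as today's barrels.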
3.9 The future of energy
The energy future will be very different. For all the uncertainties highlighted in various reports by experts in the field, we can be certain that the energy world will look very different in 2030 than it does today. The world energy system will be transformed, but not necessarily in the way we would like to see. We can be confident of some of the trends highlighted in reports on current global trends in energy supply and consumption, environmentally, economically, and socially. But those trends can and must be altered while there is still time to change the road we are on.
The growing weight of China, India, the Middle East, and other non-Organization for Economic Co-operation and Development (OECD) regions in energy markets and in CO2 emissions is something we need to take into consideration in order to deal with global warming. The rapidly increasing dominance of national oil companies and the emergence of low-carbon energy technologies appear to be a necessary, but not sufficient, part of the solution to the problem at hand. And while market imbalances could temporarily cause prices to fall back, it is becoming increasingly apparent that the era of cheap oil is over. Many of the key policy drivers (not to mention other, external factors), however, remain in doubt. It is within the power of all governments, of producing and consuming countries alike, acting alone or together, to steer the world toward a cleaner, cleverer, and more competitive energy system. Time is running out, and the time to act is now.
So, what we need to ask is, "Can nuclear power compete?" A variety of companies in the energy production business say the answer may be yes. Manufacturers have submitted new designs to the Nuclear Regulatory Commission's safety engineers, and that agency has already approved some as ready for construction if they are built on a previously approved site. Utilities, reactor manufacturers, and architecture/engineering firms have formed partnerships to build plants, pending final approvals. Swarms of students are enrolling in college-level nuclear engineering programs, and a rosy projection from industry and government predicts a surge in construction.
Like another moon shot, the launch of new reactors after a 35-year hiatus in orders is certainly possible, though not a sure bet. It would be easier this time, the experts say, because of technological progress over the intervening decades. But as with a project as large as a moon landing, there is another question: Would it be worthwhile? Fig. 3.4 shows a typical nuclear power plant sited among the backyards and farms where we live today. To answer this question, we need at least to resolve the four unresolved problems associated with nuclear power plants raised in the MIT report [1], which were mentioned at the beginning of this write-up.
Fig. 3.4 Typical nuclear plant in our backyard. (Source: www.wikipedia.org; also Figs. 3.5 and 3.7.)
In order to argue the first point, which is the cost of producing a nuclear power plant with its modern, present-day technologies in terms of total cost of ownership and return on investment, we need to understand the nature of the beast from the day it was born in the basement of the University of Chicago and was later shown to the world as the first nuclear explosion.
3.10 Nuclear power in the world today and time for change
The world faces serious difficulties in creating enough energy to meet supply and demand over the coming decades in the face of population growth. One challenge, beyond obtaining the energy that a growing population will need, is the problem of climate change caused by fossil fuel use and the resulting greenhouse effects on our day-to-day life around the globe. The need for a carbon-free energy option, together with the reduction in the cost of producing electricity from the new generation of nuclear power (GEN IV) due to increased thermal output efficiency per Eq. (4.1) and innovative suggestions by different researchers, makes nuclear energy, whether of the fission or the fusion type, attractive [5-8]. Current arguments about possibilities to prevent global warming and mitigate the greenhouse effects of fossil- and gas-fueled power plants have also opened our eyes to a potential revival of nuclear power, notwithstanding that some countries, such as Germany, have decided to abandon their nuclear power plants by shutting them down. The aftermath of a few past nuclear accidents, as recent as the Fukushima Daiichi accident in Japan in 2011, is among the obstacles that we need to overcome for a
time change to go forward with nuclear as a source of energy. The GEN IV innovative designs, with their increased safety, longer life cycles, and higher thermal efficiency, suggest that the time has come for nuclear power to help meet the demand for electricity at lower cost for a growing population and to counter the global warming driven by emissions from old fossil- and gas-fueled power plants; gasoline-burning cars add a further parameter to the global warming effect.
As part of the argument for taking a look at the GEN IV reactors as a new generation producing lower-cost electricity, the role of these nuclear power plants as a dependable source of energy seems to be one of the strongest points in favor of atomic energy, whether in the near term through the fission process [8] or in the longer term through the fusion process, both magnetic confinement fusion (MCF) [6] and inertial confinement fusion (ICF) [7]. To determine the future cost of electricity from nuclear power, the cost from currently operating power stations is taken into consideration, especially when weighing alternative energy sources against the increasing demand for electricity relative to today's supply. Although some scientists, researchers, and engineers in the energy field suggest solar and wind energy as part of the solution, the wind does not always blow and the sun does not always shine, whereas a safe nuclear power plant can produce energy 24 × 7.
Civil nuclear power can now boast more than 17,000 reactor-years of experience, and nuclear power plants are operational in 30 countries worldwide. In fact, through regional transmission grids, many more countries depend in part on nuclear-generated power; Italy and Denmark, for example, get almost 10% of their electricity from imported nuclear power [10]. Around 11% of the world's electricity is generated by about 450 nuclear power reactors. About 60 more reactors are under construction, equivalent to 16% of existing capacity, while an additional 150–160 are planned, equivalent to nearly half of existing capacity (see Fig. 3.5).
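Eq. (4.1), cited above, appears in the next chapter; as context for the efficiency argument, the relation at issue takes the standard thermodynamic form (a reminder of the definition, not a reproduction of that equation):

```latex
\eta_{\mathrm{th}} = \frac{W_{\mathrm{net}}}{Q_{\mathrm{in}}}
\;\le\; 1 - \frac{T_{\mathrm{cold}}}{T_{\mathrm{hot}}}
```

Raising the reactor outlet temperature raises T_hot and hence the Carnot ceiling on achievable efficiency, which is the thermodynamic basis for the GEN IV claim of cheaper electricity per unit of reactor heat.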
Fig. 3.5 Nuclear electricity production.
In 2016, nuclear plants supplied 2476 TWh of electricity, up from 2441 TWh in 2015. This was the fourth consecutive year that global nuclear generation rose, with output 130 TWh higher than in 2012 (see Fig. 3.6). Sixteen countries depend on nuclear power for at least one-quarter of their electricity (Fig. 3.6). France gets around three-quarters of its electricity from nuclear energy; Hungary, Slovakia, and Ukraine get more than half from nuclear, while Belgium, the Czech Republic, Finland, Sweden, Switzerland, and Slovenia get one-third or more. South Korea and Bulgaria normally get more than 30% of their electricity from nuclear, while in the United States, the United Kingdom, Spain, Romania, and Russia about one-fifth of electricity is from nuclear. Japan used to rely on nuclear power for more than one-quarter of its electricity and is expected to return to somewhere near that level [10].
There is a clear need for new generating capacity around the world, both to replace old fossil fuel units, especially coal-fired ones, which emit a lot of carbon dioxide, and to meet the increased demand for electricity in many countries. In 2015, 66.0% of electricity was generated from the burning of fossil fuels. Despite the strong support for and growth in intermittent renewable electricity sources in recent years, the fossil fuel contribution to power generation has remained virtually unchanged over the last 10 years (66.5% in 2005) [10].
The OECD International Energy Agency publishes annual scenarios related to energy. Its World Energy Outlook 2017 includes an ambitious "sustainable development scenario" consistent with the provision of clean and reliable energy and a reduction of air pollution, among other aims. In this decarbonization scenario, electricity generation from nuclear more than doubles by 2040, increasing to 5345 TWh, and capacity grows to 720 GWe. The World Nuclear Association has put forward a still more ambitious scenario: the Harmony programme proposes the addition of 1000 GWe of new nuclear capacity by 2050 to provide 25% of electricity then (10,000 TWh) from 1250 GWe of capacity (after allowing for 150 GWe of retirements). This would require adding 25 GWe per year from 2021, escalating to 33 GWe per year, which is not much different from the 31 GWe added in 1984 or the overall record of 201 GWe added in the 1980s. Providing one-quarter of the world's electricity from nuclear would substantially reduce carbon dioxide emissions and have a very positive effect on air quality [10].
Per World Nuclear Association statistics, the countries involved in nuclear power development worldwide are outlined here by continent:
• North America
Canada has 19 operable nuclear reactors, with a combined net capacity of 13.5 GWe. In 2016, nuclear generated 16% of the country's electricity.
Fig. 3.6 World electricity production by source 2017.
Fig. 3.6 Nuclear generation by country in 2016.
All but one of the country’s 19 nuclear reactors are sited in Ontario. In the first part of 2016 the government signed major contracts for the refurbishment and operating lifetime extension of six reactors at the Bruce generating station. The program will extend the operating lifetimes by 30–35 years. Similar refurbishment work enabled Ontario to phase out coal in 2014, achieving one of the cleanest electricity mixes in the world. Mexico has two operable nuclear reactors, with a combined net capacity of 1.6 GWe. In 2016, nuclear generated 6% of the country’s electricity. The United States has 99 operable nuclear reactors, with a combined net capacity of 99.6 GWe. In 2016, nuclear generated 20% of the country’s electricity. There had been four AP1000 reactors under construction, but two of these have been halted. One of the reasons for the hiatus in new build in the United States till date has been the extremely successful evolution in maintenance strategies. Over the past 15 years, improved operational performance has increased utilization of US nuclear power plants, with the increased output equivalent to 19 new 1000 MWe plants being built. The year 2016 saw the first new nuclear power reactor enter operation in the country for 20 years. Despite this, the number of operable reactors has reduced in recent years, from a peak of 104 in 2012. Early closures have been brought on by a combination of factors including cheap natural gas, market liberalization, over subsidy of renewable sources, and political campaigning. • South America Argentina has three reactors, with a combined net capacity of 1.6 GWe. In 2016, the country generated 6% of its electricity from nuclear. Brazil has two reactors, with a combined net capacity of 1.9 GWe. In 2016, nuclear generated 3% of the country’s electricity. • West and Central Europe Belgium has seven operable nuclear reactors, with a combined net capacity of 5.9 GWe. In 2016, nuclear generated 52% of the country’s electricity. Finland has four operable nuclear reactors, with a combined net capacity of 2.8 GWe. In 2016, nuclear generated 34% of the country’s electricity. A fifth reactor—a 1720 MWe EPR—is under construction and plans to build a Russian VVER-1200 unit at a new site (Hanhikivi) are well advanced. France has 58 operable nuclear reactors, with a combined net capacity of 63.1 GWe. In 2016, nuclear generated 72% of the country’s electricity. A 2015 energy policy had aimed to reduce the country’s share of nuclear generation to 50% by 2025. In November 2017, the French government postponed this target. The country’s Energy Minister said that the target was not realistic and that it would increase the country’s carbon dioxide emissions, endanger security of supply, and put jobs at risk.
One reactor is currently under construction in France—a 1750 MWe EPR at Flamanville. In Germany, seven nuclear power reactors continue to operate, with a combined net capacity of 9.4 GWe. In 2016, nuclear generated 13% of the country’s electricity. Germany is phasing out nuclear generation by about 2022 as part of its Energiewende policy. Energiewende, widely identified as the most ambitious national climate change mitigation policy, has yet to deliver a meaningful reduction in carbon dioxide (CO2) emissions. In 2011, the year after the policy was introduced, Germany emitted 731 Mt CO2 from fuel combustion; in 2015, the country emitted 730 Mt CO2 and remained the world’s sixth-biggest emitter of CO2. The Netherlands has a single operable nuclear reactor, with a net capacity of 0.5 GWe. In 2016, nuclear generated 3% of the country’s electricity. Spain has seven operable nuclear reactors, with a combined net capacity of 7.1 GWe. In 2016, nuclear generated 21% of the country’s electricity. Sweden has eight operable nuclear reactors, with a combined net capacity of 8.4 GWe. In 2016, nuclear generated 40% of the country’s electricity. The country is closing down some older reactors but has invested heavily in operating lifetime extensions and uprates. Switzerland has five operable nuclear reactors, with a combined net capacity of 3.3 GWe. In 2016, nuclear generated 34% of the country’s electricity. The United Kingdom has 15 operable nuclear reactors, with a combined net capacity of 8.9 GWe. In 2016, nuclear generated 20% of the country’s electricity. A UK government energy paper in mid-2006 endorsed the replacement of the country’s ageing fleet of nuclear reactors with new nuclear build. The government aims to have 16 GWe of new nuclear capacity operating by 2030. The placement of structural concrete at Hinkley Point C (two EPR units) has begun, ahead of full construction. • Central and East Europe, Russia Armenia has a single nuclear power reactor with a net capacity of 0.4 GWe. In 2016, nuclear generated 31% of the country’s electricity. Belarus has its first nuclear power plant under construction and plans to have the first of two Russian reactors operating by 2019. At present almost all of the country’s electricity is produced from natural gas. Bulgaria has two operable nuclear reactors, with a combined net capacity of 1.9 GWe. In 2016, nuclear generated 35% of the country’s electricity. The Czech Republic has six operable nuclear reactors, with a combined net capacity of 3.9 GWe. In 2016, nuclear generated 29% of the country’s electricity. Hungary has four operable nuclear reactors, with a combined net capacity of 1.9 GWe. In 2016 nuclear generated 50% of the country’s electricity. Romania has two operable nuclear reactors, with a combined net capacity of 1.3 GWe. In 2016, nuclear generated 17% of the country’s electricity.
Russia has 35 operable nuclear reactors, with a combined net capacity of 26.9 GWe. In 2016, nuclear generated 17% of the country's electricity. A government decree in 2016 specified the construction of 11 nuclear power reactors by 2030, in addition to those already under construction. At the start of 2018, Russia had seven reactors under construction, with a combined capacity of 5.9 GWe.
The strength of Russia's nuclear industry is reflected in its dominance of export markets for new reactors. The country's national nuclear industry is currently involved in new reactor projects in Belarus, China, Hungary, India, Iran, and Turkey, and to varying degrees as an investor in Algeria, Bangladesh, Bolivia, Indonesia, Jordan, Kazakhstan, Nigeria, South Africa, and Tajikistan, among others.
Slovakia has four operable nuclear reactors, with a combined net capacity of 1.8 GWe. In 2016, nuclear generated 54% of the country's electricity. A further two units are under construction, with both due to enter commercial operation before the end of the decade.
Slovenia has a single operable nuclear reactor, with a net capacity of 0.7 GWe. In 2016, Slovenia generated 35% of its electricity from nuclear.
Ukraine has 15 operable nuclear reactors, with a combined net capacity of 13.1 GWe. In 2016, nuclear generated 52% of the country's electricity.
• Asia
Bangladesh started construction on the first of two planned Russian VVER-1200 reactors in 2017. It plans to have the first unit in operation by 2023. The country currently produces virtually all of its electricity from fossil fuels.
China has 38 operable nuclear reactors, with a combined net capacity of 34.6 GWe. In 2016, nuclear generated 4% of the country's electricity. The country continues to dominate the market for new nuclear build. At the start of 2018, 20 of the 58 reactors under construction globally were in China. These include the world's first Westinghouse AP1000 units and a demonstration high-temperature gas-cooled reactor plant. China is commencing export marketing of the Hualong One, a largely indigenous reactor design. The strong impetus for developing new nuclear power in China comes from the need to improve urban air quality and reduce greenhouse gas emissions. The government's stated long-term target, as outlined in its Energy Development Strategy Action Plan 2014–2020, is 58 GWe of capacity by 2020, with 30 GWe more under construction.
India has 22 operable nuclear reactors, with a combined net capacity of 6.2 GWe. In 2016, nuclear generated 3% of the country's electricity. The Indian government is committed to growing its nuclear power capacity as part of its massive infrastructure development program. The government in 2010 set an ambitious target of 14.6 GWe of nuclear capacity online by 2024. At the start of 2018, six reactors were under construction in India, with a combined capacity of 4.4 GWe.
Japan has 42 operable nuclear reactors, with a combined net capacity of 40 GWe. At the start of 2018, only 5 reactors had been brought back online, with a further 21 in the process of restart approval, following the Fukushima accident in 2011. In the past, 30% of the country's electricity came from nuclear; in 2016, the figure was just 2%.
South Korea has 24 operable nuclear reactors, with a combined net capacity of 22.5 GWe. In 2016, nuclear generated 30% of the country's electricity. South Korea has four new reactors under construction domestically, as well as four in the United Arab Emirates. It plans two more, after which energy policy is uncertain. It is also involved in intense research on future reactor designs.
Pakistan has five operable nuclear reactors, with a combined net capacity of 1.4 GWe. In 2016, nuclear generated 4% of the country's electricity. Pakistan has two Chinese Hualong One units under construction.
• Africa
South Africa has two operable nuclear reactors and is the only African country currently producing electricity from nuclear. In 2016, nuclear generated 7% of the country's electricity. South Africa remains committed to plans for further capacity, but financing constraints are significant.
• Middle East
Iran has a single operable nuclear reactor, with a net capacity of 0.9 GWe. In 2016, nuclear generated 2% of the country's electricity.
The United Arab Emirates is building four 1450 MWe South Korean reactors at a cost of over $20 billion and is collaborating closely with the International Atomic Energy Agency and experienced international firms.
• Emerging nuclear energy countries
As outlined earlier, Bangladesh, Belarus, and the United Arab Emirates are all constructing their first nuclear power plants, and a number of other countries are moving toward the use of nuclear energy for power production. For more information, see the World Nuclear Association page on emerging nuclear energy countries.
3.11 Improved performance from existing reactors
The performance of nuclear reactors has improved substantially over time. Over the past 40 years, the proportion of reactors reaching high capacity factors has increased significantly. For example, 64% of reactors achieved a capacity factor higher than 80% in 2016, compared with 24% in 1976, whereas only 8% of reactors had a capacity factor lower than 50% in 2016, compared with 22% in 1976 [10]. Fig. 3.7 illustrates the long-term trends in capacity factors, while Fig. 3.8 plots the median capacity factor from 2007 to 2016 by the age of the reactors in operation today. The sketch below recalls how a capacity factor is computed.
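As a reminder of the metric, a capacity factor is actual generation divided by the energy the unit would have produced running at its rated power for the whole period. The numbers in the quick check below are round, assumed values, not data for any particular plant.

```python
# Capacity factor = actual generation / (rated capacity x hours in the period).
# Quick check with round, assumed numbers: a 1000 MWe unit that generated
# 7.0 TWh over one year.

def capacity_factor(generation_mwh: float, capacity_mw: float,
                    hours: float = 8760.0) -> float:
    return generation_mwh / (capacity_mw * hours)

cf = capacity_factor(7.0e6, 1000.0)
print(f"capacity factor = {cf:.1%}")   # ~79.9%
```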
Fig. 3.7 Long-term trends in capacity factors.
3.12 Other nuclear reactors
In addition to commercial nuclear power plants, there are about 225 research reactors operating in 50 countries, with more under construction. As well as being used for research and training, many of these reactors produce medical and industrial isotopes.
The use of reactors for marine propulsion is mostly confined to the major navies, where it has played an important role for five decades, providing power for submarines and large surface vessels. At least 140 ships, mostly submarines, are propelled by some 180 nuclear reactors, and over 13,000 reactor-years of experience have been gained with marine reactors. Russia and the United States have decommissioned many of their nuclear submarines from the Cold War era.
Fig. 3.8 Median capacity factor 2007–2016 by age of reactor.
Russia also operates a fleet of four large nuclear-powered icebreakers and has three more under construction. It is also completing a floating nuclear power plant with two 40 MWe reactors adapted from those powering icebreakers for use in remote regions [10].
3.13 Summary
In order for nuclear power to continue to be a viable energy option in any country, including the United States, nuclear safety, security, and safeguards must be maintained at the highest levels on a global scale. DOE will help to achieve consensus criteria for safe reactor operation through international organizations, such as the World Association of Nuclear Operators, and will seek to enhance safety standards for nuclear power, to promote appropriate infrastructure at the national and international levels, and to minimize proliferation risks from the expansion of nuclear power through its participation in the IAEA and related organizations.
References
[1] Massachusetts Institute of Technology, The Future of Nuclear Power: An Interdisciplinary MIT Study, MIT, 2003.
[2] http://www.energystar.gov/ia/business/GBN_Energy_Strategy.pdf?7d10-c2f8.
[3] International Energy Agency, http://www.iea.org/.
[4] http://www.iea.org/Textbase/npsum/WEO2008SUM.pdf.
[5] B. Zohuri, Plasma Physics and Controlled Thermonuclear Reactions Driven Fusion Energy, 1st ed., Springer, 2017.
[6] B. Zohuri, Magnetic Confinement Fusion Driven Thermonuclear Energy, 1st ed., Springer, 2017.
[7] B. Zohuri, Inertial Confinement Fusion Driven Thermonuclear Energy, 1st ed., Springer, 2017.
[8] B. Zohuri, Combined Cycle Driven Efficiency for Next Generation Nuclear Power Plants: An Innovative Design Approach, 1st ed., Springer, 2015.
[9] P. McDaniel, C. de Oliviera, B. Zohuri, J. Cole, A combined cycle power conversion system for the next generation nuclear power plant, Am. Nucl. Soc. Trans., San Diego, CA, November 2012.
[10] World Nuclear Association, Nuclear Power in the World Today, November 2020, http://www.world-nuclear.org/information-library/current-and-future-generation/nuclear-power-in-the-world-today.aspx.
CHAPTER 4
Small modular reactors and a modern power conversion approach
This chapter presents an innovative approach to combined-cycle power conversion systems for small modular reactors (SMRs), including designs built on liquid metal reactor infrastructure, and the need for nuclear power plants for the production of electricity. SMRs are a way to provide safe, clean, and affordable nuclear power options. The advanced SMRs currently under development in the United States represent a variety of sizes, technology options, and deployment scenarios. These advanced reactors, envisioned to vary in size from a couple of megawatts up to hundreds of megawatts, can be used for power generation, process heat, desalination, or other industrial uses. This chapter describes in depth how advanced SMRs offer multiple advantages, such as relatively small size, reduced capital investment, location flexibility, and provisions for incremental power additions. SMRs also offer distinct safeguards, security, and nonproliferation advantages.
4.1 Introduction
Worldwide, there are 435 nuclear power reactors in operation, totaling 367 GW(e) of generation capacity; 103 of them are in the United States, operating with Generation II (GEN II) and Generation III (GEN III) light water reactor (LWR) technology that uses ordinary water as both moderator and coolant. The next wave of nuclear plants takes GEN II concepts to the next level by improving both safety and efficiency, since the current generation of power plants is at the end of its life cycle. Utilities plan to build GEN III plants at the end of the decade, and conceptual studies of new designs, known as GEN IV and including SMRs, are now on the horizon, with better safety and efficiency in mind as part of the research and development for this roadmap.
In a GEN II pressurized water reactor (PWR), water circulates through the core (see Chapter 1), where it is heated by the fuel's chain reaction. The hot water is then piped to a steam generator, and the steam spins a turbine that produces electricity. The GEN III evolutionary PWR improves upon the GEN II PWR-type design primarily by enhancing safety features from the viewpoint of probabilistic risk assessment (PRA) and other mechanical and operational perspectives. Two separate 51-inch-thick concrete walls surround the reactor, the inner one lined with metal. Each is strong enough
to withstand the impact of a heavy object such as a commercial airplane, a consideration adopted in the aftermath of the September 11, 2001 attacks and the destruction of the twin towers in New York. The reactor vessel sits on a 20-ft slab of concrete with a leak-tight "core catcher" where the molten core would collect and cool in the event of a meltdown. There are also four built-in safeguards with independent pressurizers and steam generators, each capable of providing emergency cooling of the reactor core.
One of the new GEN III reactor concepts being considered is the pebble-bed fuel, a smooth graphite sphere about the size of a tennis ball, as shown in Fig. 4.1. Given the assumption that it could take years to assess the pros and cons of all six GEN IV designs mentioned in Chapter 1, and with the continuous rise of electricity demand worldwide due to population growth, Congress, at least in the United States, might not be willing to wait that long. In addition to replacing the aging fleet of GEN II reactors, which are coming to the end of their life cycles, the government wants to make progress on another front: the production of hydrogen as part of hybrid energy, to fuel the dream of exhaust-free cars running independently of gasoline and to reduce dependency on foreign oil [1]. As a result, the frontrunner for pebble-bed reactor (PBR) design and production implementation is the initial $1.25 billion demonstration plant in progress in Idaho.
Pebble bed reactor scheme New fuel pebbles Cooling gas
Heated fluid to turbine
Cold fluid from turbine Pump
Reinforced concrete
Spent fuel pebbles
1 mm
Fig. 4.1 Pebble powered reactor schematic. (A) Pebble-type HTGRs. (B) Sketch of a pebble-bed reactor.
Small modular reactors and a modern power conversion approach
which is a helium-cooled, graphite-moderated reactor whose extremely high outlet temperature will be around 1650 to 1830 °F. Also known as the very-high-temperature reactor (VHTR), it would be ideal for high thermal efficiency and for efficiently producing hydrogen [1-4]. However, the dawn of a new generation of nuclear reactors with a smaller footprint and lower construction and production costs belongs to the SMRs. These are considered part of GEN IV and have a separate roadmap aimed at energy security and greenhouse gas (GHG) emission reduction. The United States must develop and deploy clean, affordable, domestic energy sources as quickly as possible, not only to address GHG emissions but also to meet the growing demand for electricity going forward. Nuclear power is a key player in this effort and will continue to be a key component of a portfolio of technologies that will meet our energy goals. Thus, the Office of Nuclear Energy (NE), a branch of the Department of Energy (DOE), continuously pursues research, development, and demonstration activities to ensure that, through small and medium-sized GEN III reactors or modular reactors of GEN IV, nuclear energy remains a viable energy option, at least for the United States. The International Atomic Energy Agency (IAEA) also supports worldwide activities within this domain. Small- and medium-sized modular reactors are an option for flexible power generation serving a wider range of users and applications. SMRs, deployable either as single- or multimodule plants, offer the possibility of combining nuclear with alternative energy sources, including renewables. SMRs are excellent candidates for meeting the growth in electricity demand driven by population growth and, by virtue of being modular, allow supply to be expanded by adding modules to the infrastructure of the first reactor on a site. The construction cost of the series can be covered one module at a time, so the cost of ownership does not put the owner of the nuclear generation facility into long-term financial debt. See Fig. 4.2, an illustration of a small modular reactor building suggested by the NuScale company. Another advantage of these modular reactors is the lower cost of protecting the surrounding population from both man-made and natural disasters; we would like to avoid another Chernobyl, Three Mile Island, or the more recent Fukushima Daiichi accident. Fig. 4.3 illustrates these three major nuclear accident sites. The safety and capital cost challenges involved with traditional nuclear power plants may be considerable, but a new class of reactors in the development stage holds promise for addressing them. These reactors, called SMRs, produce anywhere from 10 to 300 megawatts, rather than the 1000 megawatts produced by a typical LWR. An entire reactor, or at least most of it, can be built in a factory and shipped to a site for
Fig. 4.2 NuScale power reactor building of SMR.
assembly, where several reactors can be installed together to compose a larger nuclear power station. SMRs have attractive safety features, too. Their designs often incorporate natural cooling features that can continue to function in the absence of external power, and underground placement of the reactors and the spent-fuel storage pools can make them more secure. Since SMRs are smaller than conventional nuclear plants, the construction costs for individual projects are more manageable, and thus the financing terms may be more favorable. Because they are factory assembled, the on-site construction time is shorter. The utility company can build up its nuclear power capacity step by step, adding reactors as needed, which means that it can generate revenue from electricity sales sooner. This helps not only the plant owner but also customers, who are increasingly being asked to pay higher rates today to fund tomorrow's plants. Thus, it is fair to say that SMRs provide very flexible and affordable power generation and are an excellent source of clean, low-carbon energy as well.
Fig. 4.3 Three major nuclear accident sites. (A) Chernobyl, (B) Three Mile Island, and (C) Fukushima.
As suggested by Zohuri et al. [6], an advanced version of these power plants using liquid metal fast reactor (LMFR) technology is a good way of recycling fuel discharged from reactors such as LWRs, producing enough plutonium-239 to be used as fuel in these SMRs, notwithstanding some arguments about this process: if the 239Pu is weapons grade, no one will burn it as fuel in any reactor; however, if the 239Pu composition is less than weapons grade, it can be used as reactor fuel. Research by McDaniel et al. [7] on a combined-cycle power conversion system for small modular LMFRs has suggested that the air-Brayton combined-cycle results obtained earlier for molten salt reactors and lead-cooled reactors can be extended to temperatures typical of liquid metal cooled fast reactors. Multiple reheat turbines, a recuperator, and a split compressor with an intercooler can be added to the Brayton cycle; adding these components provides a significant increase in efficiency. Advanced SMRs are a key part of the NE goal under DOE to develop safe, clean, and affordable nuclear power options. The advanced SMRs currently under development in the United States represent a variety of sizes, technology options, and deployment scenarios. These advanced reactors, envisioned to vary in size from a couple of megawatts up to hundreds of megawatts, can be used for power generation, process heat, desalination, or other industrial uses. SMRs can employ light water as a coolant or other, nonlight water coolants such as a gas, a liquid metal, or a molten salt. Advanced SMRs offer many advantages, such as relatively small size, reduced capital investment, the ability to be sited in locations not possible for larger nuclear plants, and provisions for incremental power additions. SMRs also offer distinct safeguards, security, and nonproliferation advantages. The Office of NE has long recognized the transformational value that advanced SMRs can provide to the nation's economic, energy security, and environmental outlook. Accordingly, the Department has provided substantial support to the development of light water-cooled SMRs, which are under licensing review by the Nuclear Regulatory Commission (NRC) and will likely be deployed in the next 10-15 years. The Department is also interested in the development of SMRs that use nontraditional coolants such as liquid metals, salts, and helium because of the safety, operational, and economic benefits they offer.
4.2 Industry opportunities for advanced nuclear technology development
The Department of Energy recently issued a multiyear cost-shared funding opportunity to support innovative, domestic, nuclear industry-driven concepts that have high potential to improve the overall economic outlook for nuclear power in the United States. This funding opportunity will enable the development of existing, new, and next-generation reactor designs, including SMR technologies.
The scope of the funding opportunity is very broad and solicits activities involved in finalizing the most mature SMR designs; developing manufacturing capabilities and techniques to improve the cost and efficiency of nuclear builds; developing plant structures, systems, components, and control systems; addressing regulatory issues; and meeting other technical needs identified by industry. The funding opportunity will provide awards sized and tailored to address a range of technical and regulatory issues impeding the progress of advanced reactor development. Initiated in FY2012, the SMR Licensing Technical Support (LTS) Program works with industry partners, research institutions, the national laboratories, and academia to accelerate the certification, licensing, and siting of domestic advanced SMR designs and to reduce economic, technical, and regulatory barriers to their deployment. FY2017 was the last year of planned funding for this successful program, but activities will be completed over the next several years as certification and licensing efforts conclude. As mentioned in Chapter 2, there are about 50 SMR designs and concepts globally. Most are in various developmental stages, and some are claimed to be near-term deployable. Besides the NE Office of DOE, the IAEA is coordinating the efforts of its member states to develop SMRs of various types by taking a systematic approach to the identification and development of key enabling technologies, with the goal of achieving competitiveness and reliable performance of such reactors. The agency also helps member states address common infrastructure issues that could facilitate the SMRs' deployment. SMRs offer a lower initial capital investment, greater scalability, and siting flexibility for locations unable to accommodate more traditional larger reactors. They also have the potential for enhanced safety and security compared to earlier designs. Deployment of advanced SMRs can help drive economic growth.
4.3 Benefits of small modular reactors
SMRs offer certain distinct benefits. To start with, their cost-effectiveness makes them an improved investment from both the total cost of ownership (TCO) and the return on investment (ROI) points of view. Their low initial costs come with the associated benefits discussed in the sections that follow.
4.4 Modularity
The term "modular" in the context of SMRs refers to the ability to fabricate major components of the nuclear steam supply system in a factory environment and ship them to
the point of use. Even though current large nuclear power plants incorporate factory-fabricated components (or modules) into their designs, a substantial amount of fieldwork is still required to assemble the components into an operational power plant. SMRs are envisioned to require limited on-site preparation and to substantially reduce the lengthy construction times typical of the larger units. SMRs provide simplicity of design, enhanced safety features, the economics and quality afforded by factory production, and more flexibility (in financing, siting, sizing, and end-use applications) compared to larger nuclear power plants. Additional modules can be added incrementally as the demand for energy increases.
4.5 Lower capital investment
SMRs can reduce a nuclear plant owner's capital investment through their lower plant capital cost: modular components and factory fabrication can reduce both construction costs and construction duration.
4.6 Siting flexibility
SMRs can provide power for applications where large plants are not needed or where sites lack the infrastructure to support a large unit. This would include smaller electrical markets, isolated areas, smaller grids, sites with limited water and acreage, or unique industrial applications. SMRs are expected to be attractive options for the replacement or repowering of aging/retiring fossil plants, or for complementing existing industrial processes or power plants with an energy source that does not emit greenhouse gases.
4.7 Greater efficiency
SMRs can be coupled with other energy sources, including renewables and fossil energy, to leverage resources and produce higher efficiencies and multiple energy end products while increasing grid stability and security. Some advanced SMR designs can produce higher-temperature process heat for either electricity generation or industrial applications.
4.8 Safeguards and security/nonproliferation
SMR designs have the distinct advantage of factoring in current safeguards and security requirements. Facility protection systems, including barriers that can withstand design-basis aircraft crash scenarios and other specific threats, are part of the engineering process being applied to new SMR designs. SMRs also provide safety and potential nonproliferation benefits to the United States and the wider international community.
Most SMRs will be built below grade for safety and security enhancements, addressing vulnerabilities to both sabotage and natural phenomena hazard scenarios. Some SMRs will be designed to operate for extended periods without refueling. These SMRs could be fabricated and fueled in a factory, sealed and transported to sites for power generation or process heat, and then returned to the factory for defueling at the end of the life cycle. This approach could help to minimize the transportation and handling of nuclear material. Light water-based SMRs are expected to be fueled with low enriched uranium, that is, approximately 5% U-235, which is similar to existing large nuclear power plants. The “security by design” concepts being applied to these technologies are expected to increase SMR resistance to theft and diversion of nuclear material. Also, reactor cores for these light water SMRs can be designed to burn plutonium as a mixed oxide (MOX) fuel. Further, SMRs based on nonlight water reactor coolants could be more effective at utilizing plutonium while minimizing the wastes requiring disposal.
4.9 Industry, manufacturing, and job growth
The case for SMR economic competitiveness is rooted in the concept that mass manufacture of modular parts and components will reduce the cost per kilowatt of electricity compared with current generating sources. There is both a domestic and an international market for SMRs, and US industry is well positioned to compete in these markets. DOE hopes that the development of standardized SMR designs will also result in an increased presence of US companies in the global energy market. If a sufficient number of SMR units are ordered, it will provide the necessary incentive to develop the appropriate factory capacity to further grow domestic and international sales of SMR power plants.
4.10 Economic development
SMR deployment to replace retiring electricity generation assets and to meet growing generating needs will result in significant growth in domestic manufacturing, the tax base, and high-paying factory, construction, and operating jobs. A 2010 study [5] on the economic and employment impacts of SMR deployment estimated that a prototypical 100 MWe SMR costing $500 million to manufacture and install would create nearly 7000 jobs and generate $1.3 billion in sales, $404 million in earnings (payroll), and $35 million in indirect business taxes. The report examines these impacts for multiple SMR deployment rates: low (1-2 units/year), moderate (30 units/year), high (40 units/year), and disruptive (85 units/year). The study indicates that significant economic impact would be realized by developing an SMR manufacturing enterprise at even moderate deployment levels.
If we consider the aforementioned benefits of SMR-type nuclear power plants, their cost-effectiveness and modest initial capital investment, in comparison to today's nuclear power plants, become apparent. Nevertheless, there are some major challenges facing any future nuclear power plant.
4.11 Cost of electricity from nuclear power
The newest estimates for the cost of nuclear power in the United States are (costs indicated are per delivered kWh):
• 11.2 US cents per kWh (MIT, 2003)
• 14.1 US cents per kWh (Keystone, June 2007)
• 18.4 US cents per kWh (Keystone midrange estimate)
In the United Kingdom, the cost of nuclear electricity was estimated to be 8.2 US cents per kWh (for an interest rate of 10%) and 11.5 US cents per kWh (for an interest rate of 15%). Here, transmission and distribution of the electricity (usually about 3 US cents per kWh) have to be added [6]. So, in the United Kingdom the cost of nuclear energy is estimated to be in the same range as indicated earlier. Comparison can be made with other technologies; for example, large wind power farms are estimated at 7 US cents per delivered kWh. Of course, most environmentalists argue that wind and solar energy are cheap ways of producing electricity and call them renewable sources of energy: "They are getting less expensive almost daily while nuclear power is getting more and more expensive. Isn't this alone a very strong indication that nuclear power is a technology from the past?" However, one needs to bear in mind that the wind does not blow 24 × 7, nor does the sun shine all the time; there will be cloudy days, and the sun never shines at night.
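As a quick check on how the delivered UK figures combine with the quoted transmission and distribution adder, a minimal sketch (plain arithmetic, using only the numbers quoted above):

```python
# Delivered cost = generation cost + transmission & distribution adder.
# UK generation estimates and the ~3 cents/kWh T&D adder are from the text.
T_AND_D = 3.0   # US cents per kWh

for case, busbar in (("10% interest", 8.2), ("15% interest", 11.5)):
    print(f"{case}: {busbar} + {T_AND_D} = {busbar + T_AND_D:.1f} cents/kWh delivered")
# -> 11.2 and 14.5 cents/kWh, in the same range as the US estimates above.
```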
4.12 Cost of nuclear technology is too high
The reactor types described for GEN IV are not yet available. Significant development will be required to turn these technologies into production applications; we are probably talking about a time frame of 30 to 50 years, if not longer. But we need a solution now! It is urgent. The costs of this development are very difficult to estimate, and therefore the costs indicated earlier might be (and probably are) wrong; in the history of nuclear power, the cost of development has always been tremendously underestimated. In addition, we will inevitably have to change our behavior: we should only use as much energy as we are able to produce with sustainable technologies. Demand for energy has to follow the available supply of sustainable energy. This will have many positive side effects, too.
4.13 Cooling water requirement for nuclear power reactors
As one can probably imagine, the question of the freshwater needs of the cooling systems of all power plants, whether nuclear, fossil, or concentrating solar, is not an easy one to answer in a general way; however, estimates can be found. In 2010 approximately 50% of the available surface freshwater in the United States was used to cool thermoelectric power plants, significantly more than was used for agriculture. Since 2010 the fraction used to cool thermoelectric power plants has gone down, presumably due to the introduction of gas turbine combined-cycle plants. Cooling water from nearby lakes or rivers is normally heated by at most about 30 °F (17 °C). For example, a typical 1000 MWe nuclear power reactor with a 30 °F ΔT needs approximately 476,500 gallons of water per minute (130,000 m³ of water per hour). If the temperature rise is limited to 20 °F, the cooling water need rises to 714,750 gallons per minute (195,000 m³/h). Nuclear power reactors are about 33% efficient; that is, for every three units of thermal energy generated by the reactor core, one unit of electrical energy goes out to the grid as electricity and two units of waste heat go into the environment. Two modes of cooling are used to remove the waste heat from electrical generation: once-through cooling using water from rivers, the ocean, or lakes, and evaporation in large cooling towers. In the latter, water from rivers, the ocean, or lakes is evaporated in the cooling tower. A typical 1000 MWe nuclear power reactor with a closed-cycle cooling system and a cooling tower consumes, by virtue of evaporation, about 2000 m³ of water per hour (about 10,300 gallons per minute). See Table 4.1 for a summary of the cooling water requirements for nuclear power units depending on the type of cooling applied; all figures indicated are in cubic meters of water per MW of electricity. (A short sketch after Table 4.1 reproduces the flow figures above from a simple energy balance.) With a new roadmap of research and development [12] for a new generation (GEN IV) of nuclear power ahead of us, these authors and others are suggesting an innovative approach to combined-cycle (CC) power conversion in the form of an air-Brayton topping cycle and a steam Rankine bottoming cycle. In this manner a significant portion of the waste heat is rejected directly to the atmosphere.
Table 4.1 Water withdrawal and consumption (evaporation) [7-8].

Cooling mode            Water withdrawal [m³ per MWe]    Water consumption [m³ per MWe]
Once-through cooling    830,000                          13,000
Pond cooling            17,000                           13,000
Cooling tower           27,000                           24,000
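As promised above, the quoted once-through flows follow from a simple energy balance on the cooling water. The sketch below, assuming the 33% plant efficiency stated above and standard water properties (assumed, not from the text), lands within a few percent of the quoted figures:

```python
# Condenser cooling-water flow from an energy balance:
#   waste_heat = m_dot * c_p * dT   =>   m_dot = waste_heat / (c_p * dT)
P_ELECTRIC = 1000e6   # W, typical large reactor (per the text)
EFFICIENCY = 0.33     # one unit of electricity per three thermal (per the text)
CP_WATER = 4186.0     # J/(kg K), assumed property value
M3H_TO_GPM = 4.403    # 1 m^3/h of water = 4.403 US gal/min

waste_heat = P_ELECTRIC * (1.0 / EFFICIENCY - 1.0)   # ~2000 MW rejected

for dT_F in (30.0, 20.0):
    dT_K = dT_F * 5.0 / 9.0
    m_dot = waste_heat / (CP_WATER * dT_K)           # kg/s of cooling water
    vol_m3h = m_dot * 3.6                            # at ~1000 kg/m^3
    print(f"dT = {dT_F:.0f} F: {vol_m3h:,.0f} m^3/h "
          f"= {vol_m3h * M3H_TO_GPM:,.0f} gal/min")
# ~105,000 m^3/h (30 F) and ~157,000 m^3/h (20 F): the same order as the
# figures quoted above; differences come from rounding and assumed properties.
```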
4.14 Next generation of nuclear power reactors for power production
At this point a modern power conversion system will be introduced for SMRs that attempts to address the aforementioned problems with nuclear power. It is based on the most efficient power conversion systems in use by the fossil fuel competitors of nuclear power. In fact, two power conversion systems will be considered: a nuclear air-Brayton combined cycle (NACC) and a nuclear air-Brayton recuperated cycle (NARC). These cycles have a number of advantages. They take advantage of the large industrial base for air-Brayton system components. They achieve an efficiency comparable to, or better than, any of those proposed for GEN IV systems. They also use significantly less cooling water than current LWR systems. Finally, they appear capable of addressing a problem that is emerging as a result of the high penetration of solar and wind systems into the electric grid.
4.15 Technology roadmap for Generation IV nuclear energy systems
Energy is broadly defined as the ability to produce a change from the existing conditions. Thus, the term energy implies that a capacity for action is present. Energy is evaluated by measuring certain effects, classified by descriptive names, that can be produced under controlled conditions. For example, a mass located at a certain position may have potential energy relative to some lower position. If the same mass is in motion, it may possess kinetic energy. If characteristics of its state such as temperature or pressure change, then its internal energy changes; the change in internal energy can be measured by the change in potential energy experienced by an external load. For the past half-century, fossil fuels, namely coal, oil, and natural gas, have supplied the major portion of the world's energy requirements. It has long been realized, however, that in the not-too-distant future these sources of energy will be largely exhausted. At the present time the total energy consumption for all countries is about 1 × 10¹⁷ Btu per year. Since the world's population is steadily growing and the power use per capita is increasing as well, the rate of energy utilization by the year 2030 could well be 5 to 10 times the current value. According to one estimate, the known coal, oil, gas, and oil shale that can be extracted at no more than twice the present cost would be equivalent to roughly 4 × 10¹⁹ Btu. Dividing this reserve by a consumption rate that rises toward several times 10¹⁷ Btu per year implies that in about 100 years the world's economically useful reserves of fossil fuels may approach exhaustion. The total amount of the basic raw materials for fission power plant fuel, such as uranium and thorium, in the earth's crust to a depth of three miles is very large, possibly something like 10¹² tons. However, much of this is present in minerals containing such a small proportion of the desired element that extraction would be very
expensive and not very cost-effective. In particular, high-grade ore reserves are believed to be on the order of 2 × 10⁶ tons; therefore, with advancing technology we need to reduce the cost of recovery from moderately low-grade ores to $100 or less per pound of metal. Development of plant layout and modularization concepts requires an understanding of both primary and secondary systems. The current design of LWRs cannot compete with high-efficiency gas turbine systems on the basis of total cost of ownership (TCO) and return on investment (ROI). A modern approach to power conversion for nuclear reactors is necessary to reduce the cost of ownership. The following points motivate such an innovative approach:
• Advanced power conversion systems are required to take advantage of the high-temperature capabilities of several GEN IV designs.
• Molten salt, liquid lead-bismuth, and high-temperature gas systems may reach reactor outlet temperatures in the range of 1175 to 1275 K.
• Brayton power conversion systems using air as a working fluid are a well-developed technology. Traditional studies of improved efficiency have centered on recuperated systems.
• Combined-cycle systems using a Rankine bottoming cycle have shown excellent performance with natural gas and coal gasification systems, giving above 60% overall thermodynamic efficiency.
To understand the limitations of current LWR systems, some basic concepts of the way nuclear electricity is currently generated are necessary. The following points describe the current process for generating electricity by nuclear fission:
• A nuclear reactor produces and controls the release of energy from fission.
• The energy released as heat is used to make steam.
• The steam is used to drive the turbines that produce electricity.
The components of a nuclear reactor include:
• Fuel
• Moderator
• Control rods
• Coolant
• Steam generator
• Containment structure
A typical top-level LWR is illustrated in Fig. 4.4. As part of the GEN III nuclear reactor family, we can also describe a PWR from the top level; a diagram is provided in Fig. 4.5. In the PWR, the water which passes over the reactor core and acts as both moderator and coolant does not flow to the turbine but is contained in a pressurized primary loop. The primary loop water produces steam in the secondary loop which drives the
Fig. 4.4 A typical light water reactor.
turbine. The obvious advantage of this is that a fuel leak in the core would not pass any radioactive contaminants to the turbine and condenser. Another advantage is that the PWR can operate at higher pressure and temperature, about 160 atm and about 315 °C. This provides a higher Carnot efficiency than the boiling water reactor (BWR), but the reactor is more complicated and more costly to construct. Most US reactors are pressurized water reactors [9]. Another class of GEN III nuclear power plant (NPP) is the BWR, illustrated in Fig. 4.6. In this case the flow sequence is as follows: the water passes through the reactor core, acting as moderator and coolant, and is converted to steam in the core to power the turbine directly. The disadvantage of this is that any fuel leak might
Fig. 4.5 A typical pressurized water reactor.
Fig. 4.6 A typical boiling water reactor.
make the water radioactive, and that radioactivity would reach the turbine and the rest of the loop. A typical operating pressure for such reactors is about 70 atmospheres, at which pressure the water boils at about 285 °C. This operating temperature gives a Carnot efficiency of only 42%, with a practical operating efficiency of around 32%, somewhat less than the PWR (see the sketch after Fig. 4.7). For both the BWR and PWR, the steam generation arrangement is presented in Fig. 4.7 below. The turbine power cycle is a Rankine cycle, characterized as follows:
• Rankine (steam) power cycle
• It directly employs steam to drive the turbines
• Associated problems include lower operating temperatures (lower efficiency), turbine blade fouling, larger equipment, and wet cooling
Fig. 4.7 Nuclear power plant steam generation.
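As promised above, a minimal sketch of the Carnot arithmetic. The condenser temperature of about 50 °C is an assumption (the text gives only the hot-side temperatures); with it, the quoted 42% BWR limit is reproduced:

```python
# Carnot limit: eta = 1 - T_cold / T_hot (absolute temperatures).
# The ~50 C condenser temperature is an assumption; the text gives only
# the hot-side temperatures (285 C for the BWR, 315 C for the PWR).
T_COLD = 50.0 + 273.15   # K, assumed heat-rejection temperature

for name, t_hot_c in (("BWR", 285.0), ("PWR", 315.0)):
    eta = 1.0 - T_COLD / (t_hot_c + 273.15)
    print(f"{name}: Carnot limit = {eta:.0%}")
# BWR -> ~42%, matching the text; practical efficiencies (~32%) sit
# well below the Carnot limit because real cycles are irreversible.
```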
Fig. 4.8 Rankine cycle process.
These steps are depicted in Fig. 4.8. The drawback of current NPP designs can be explained from the viewpoint of thermodynamics and steam generation, which is the driving factor for the innovative combined-cycle approach [10-12]. A saturation dome is a graphical representation of the combination of liquid and vapor for a simple fluid. It can be used to find either the pressure or the specific volume as long as one already has at least one of these properties. A saturation dome uses the projection of a P-v-T diagram (pressure, specific volume, and temperature) onto the P-v plane. This gives a P-v diagram at constant temperature. The points that create the left-hand side of the dome represent the saturated liquid states, while the points on the right-hand side represent the saturated vapor states (commonly referred to as the "dry" region). To the left of the dome there is compressed liquid, and to the right there is superheated gas [10]. This is depicted graphically in Fig. 4.9. Within the dome itself there is a liquid-vapor mixture; this two-phase region is commonly referred to as the "wet" region. The percentage of liquid and vapor can be calculated using the vapor quality [10]. The triple-state line is where the three phases (solid, liquid, and gas) exist in equilibrium. The point at the very top of the dome is called the critical point, where the saturated liquid and saturated vapor lines meet; past this point, a distinct liquid-vapor transformation cannot occur. It is also where the critical temperature and critical pressure meet, and beyond it, it is impossible to distinguish between the liquid and vapor phases. A saturation state is a point where a phase change begins or ends. For example, the saturated liquid line represents the points where any further addition of energy will cause a small portion of the liquid to convert to vapor. Likewise, along the saturated vapor line, any removal of energy will cause some of the vapor to condense back into a liquid, producing a mixture. When a substance reaches the saturated liquid line, it is commonly
Fig. 4.9 P-v-T in a three-dimensional diagram.
said to be at its boiling point. The temperature will remain constant while the mixture is at constant pressure underneath the saturation dome (boiling water at atmospheric pressure stays at a constant 212 °F) until it reaches the saturated vapor line, where the mixture has converted completely to vapor. Further heating of the saturated vapor will result in a superheated vapor state, because the vapor will then be at a temperature higher than the saturation temperature (212 °F for water) for a given pressure. The vapor quality refers to the vapor-liquid mixture contained underneath the dome and is defined as the fraction of the total mixture which is vapor, based on mass [10]. A fully saturated vapor has a quality of 100%, while a saturated liquid has a quality of 0%. Quality can be estimated graphically, as it is related to the specific volume, or how far horizontally across the dome the point lies. At the saturated liquid state the specific volume is denoted $\upsilon_f$, while at the saturated vapor state it is denoted $\upsilon_g$. The quality $x$ can then be calculated by Eq. (4.1) as

$$x = \frac{\upsilon - \upsilon_f}{\upsilon_g - \upsilon_f} \tag{4.1}$$
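Eq. (4.1) can be applied directly. The sketch below uses approximate steam-table values for saturated water at atmospheric pressure; those property values are assumptions, not from the text:

```python
# Vapor quality from Eq. (4.1): x = (v - v_f) / (v_g - v_f)
# Saturated water at 1 atm (approximate steam-table values, assumed):
V_F = 0.001043   # m^3/kg, saturated liquid
V_G = 1.673      # m^3/kg, saturated vapor

def quality(v):
    """Vapor mass fraction of a mixture with specific volume v (m^3/kg)."""
    return (v - V_F) / (V_G - V_F)

print(quality(V_F))   # 0.0  -> saturated liquid (left edge of the dome)
print(quality(0.5))   # ~0.3 -> 30% vapor by mass, inside the "wet" region
print(quality(V_G))   # 1.0  -> saturated vapor (right edge of the dome)
```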
Considering the saturation dome thermodynamics of Fig. 4.9, we can easily see where current GEN III nuclear power plants suffer a drawback with respect to higher temperatures and efficiencies. Fig. 4.10 compares a typical Rankine cycle for a typical LWR versus a coal plant. As can be seen from Fig. 4.10, the following points limit the thermal efficiencies of GEN III NPPs relative to those of typical coal power plants (CPPs):
• LWRs are trapped in the vapor dome of the steam-water system.
• Coal plants have a separate heat exchanger called a superheater that allows them to go to higher temperatures and better efficiency.
• This has been tried with nuclear plants by adding a combustion superheater and by designing another heat exchanger in the core. Neither worked very well.
• One GEN IV concept is to go to supercritical water at ∼25 MPa, but this will require significant in-core development.
Fig. 4.10 Rankine cycle comparison between LWR and coal power plants.
Fig. 4.11 Sketch of the jet engine components and corresponding thermodynamic states.
These issues are simple, yet they are very important to consider when addressing the requirements for a modern design approach that can shape the future of GEN IV NPPs.
4.16 Open air-Brayton gas power cycle
Before addressing the combined cycle, it is worthwhile to discuss the air-Brayton cycle first. The Brayton cycle (or Joule cycle) represents the operation of a gas turbine engine. The cycle consists of four processes, as shown in Fig. 4.11 alongside a sketch of an engine:
• a-b: Adiabatic, quasistatic (or reversible) compression in the inlet and compressor.
• b-c: Constant-pressure fuel combustion (idealized as constant-pressure heat addition).
• c-d: Adiabatic, quasistatic (or reversible) expansion in the turbine and exhaust nozzle, in which we
  - take some work out of the air and use it to drive the compressor, and
  - take the remaining work out and use it to accelerate fluid for jet propulsion or to turn a generator for electrical power generation.
• d-a: Constant-pressure cooling of the air back to its initial condition.
The components of a Brayton cycle device for jet propulsion are shown in Fig. 4.12. We will typically represent these components schematically, as in Fig. 4.13. In practice, real Brayton cycles take one of two forms. Fig. 4.14A shows an "open" cycle, where the working fluid enters and then exits the device; this is the way a jet propulsion cycle works. Fig. 4.14B shows the alternative, a closed cycle, which recirculates the working fluid. Closed cycles are used, for example, in space power generation. The major growth in the electricity production industry in the past 30 years has centered on the expansion of natural gas power plants based on gas turbine cycles. The most popular extension of the simple Brayton gas turbine has been the combined-cycle
Fig. 4.12 Schematics of typical military gas turbine engines. Top: turbojet with afterburning, bottom: GE F404 low bypass ratio turbofan with afterburning (Mattingly). (Source: Adapted from J.D. Mattingly, Elements of Gas Turbine Propulsion, McGraw-Hill, Inc., New York, 1996 [13]).
Fig. 4.13 Thermodynamic model of gas turbine engine cycle for power generation.
power plant, with the air-Brayton cycle serving as the topping cycle and the steam Rankine cycle serving as the bottoming cycle. The air-Brayton cycle is an open air cycle, and the steam Rankine cycle is a closed cycle. The air-Brayton cycle for a natural gas-driven power plant must be an open cycle, where the air is drawn in from the environment and exhausted with the products of combustion to the environment. In a combined cycle, the hot exhaust from the air-Brayton cycle passes through a heat recovery steam generator (HRSG) prior to exhausting to the environment. The HRSG serves the same purpose as a boiler for the conventional steam Rankine cycle (Fig. 4.15). In 2007 gas turbine combined cycle (GTCC) plants had a total capacity of 800 GW and represented 20% of the installed capacity worldwide. They have far exceeded the installed capacity of nuclear plants, though in the late 1990s they accounted for less than 5% of the installed capacity worldwide. There are several reasons for this. First, natural gas is abundant and cheap. Second, combined cycle plants achieve the greatest efficiency of
Fig. 4.14 Options for operating Brayton cycle gas turbine engines. (A) Open cycle operation. (B) Closed cycle operation.
Fig. 4.15 Working principle of combined-cycle gas turbine (CCGT) plant.
any thermal plant. And third, they require the least amount of waste-heat cooling water of any thermal plant. A typical gas turbine plant consists of a compressor, combustion chamber, turbine, and electrical generator. A combined-cycle plant takes the exhaust from the turbine and runs it through an HRSG before exhausting to the local environment. The HRSG serves the function of the boiler for a typical closed-cycle steam plant. The steam plant consists of a steam turbine, a condenser, a water pump, an evaporator (boiler), and an electrical generator. In a combined-cycle plant, the gas turbine and steam turbine can be on the same shaft to eliminate the need for one of the electrical generators. However, a two-shaft, two-generator system provides a great deal more flexibility at a slightly higher cost. In addition to the closed loop for the steam, an open-loop circulating water system is required to extract the waste heat from the condenser. The waste heat extracted by this "circulating" water system is significantly less per megawatt for a combined-cycle system, as the open Brayton cycle exhausts its waste heat directly to the air.
4.17 Modeling the nuclear Air-Brayton cycles
This effort was undertaken to investigate the possibility of using a nuclear reactor-driven heat exchanger, or group of heat exchangers, to drive a Brayton-like cycle gas turbine as an open-cycle power conversion system. Since in a nuclear reactor-driven system the core fuel elements of the reactor must exist at a higher temperature than
any other component in the system, such a system is usually severely limited in the peak temperatures that can be produced in the gas turbine working fluid. To provide a benchmark for near-term systems, a peak turbine inlet temperature of 510 °C was chosen to represent a power conversion system for a near-term sodium-cooled fast reactor (SFR). A molten salt system was chosen as an advanced system, with a peak turbine inlet temperature of 660 °C. These turbine inlet temperatures are roughly half of the state-of-the-art aircraft jet engine turbine inlet temperatures. Given this limitation, it is necessary to use several turbines with a reheat heat exchanger between each. The baseline for this study was three turbines operating with turbine inlet temperatures as specified. (A three-turbine system with two reheat cycles is typical of steam plants today.) It is interesting to estimate the effects on efficiency and power that multiple turbines can have. Consider first a simplified model for the single turbine characteristic of current jet engines and power plants. We will assume isentropic compression and expansion processes and ideal heat exchangers. We have

$$W_C = \dot{m}C_p(T_2 - T_1), \qquad W_T = \dot{m}C_p(T_3 - T_4), \qquad Q = \dot{m}C_p(T_3 - T_2)$$

where $T_1$ is the inlet temperature, $T_2$ the compressor exit temperature, $T_3$ the burner exit temperature, $T_4$ the turbine exit temperature, $\dot{m}$ the mass flow rate, and $C_p$ the air specific heat. For isentropic processes,

$$\frac{T_2}{T_1} = \left(\frac{p_2}{p_1}\right)^{(\gamma-1)/\gamma} = r, \qquad \frac{T_3}{T_4} = \left(\frac{p_3}{p_4}\right)^{(\gamma-1)/\gamma} = r$$

where $\gamma$ is the ratio of specific heats and $p_x/p_y$ is the pressure ratio between points $x$ and $y$. The thermodynamic efficiency $\eta$ is then

$$\eta = \frac{W_T - W_C}{Q} = \frac{\dot{m}C_p T_4(r-1) - \dot{m}C_p T_1(r-1)}{\dot{m}C_p(T_4 r - T_1 r)} = \frac{(T_4 - T_1)(r-1)}{(T_4 - T_1)\,r} = 1 - \frac{1}{r} \tag{4.2}$$

So, for a single turbine system the efficiency is simply a function of the compressor pressure ratio. This is the classic result. However, it only holds true until the compressor exit temperature reaches the burner exit temperature, which would occur at a pressure ratio of about 28.7 for the SFR conditions. Of course, by the time the efficiency got to that point, the input power would have gone to zero. We will see that this is the advantage of multiple turbines and reheat cycles. The efficiency is not as good,
but the power increases with the number of turbines for the same air-mass flow rate. Consider now the multiple turbine case, assuming all of the turbines have identical expansion ratios. Now

$$W_T = n\,\dot{m}C_p(T_3 - T_4), \qquad W_C = \dot{m}C_p(T_2 - T_1), \qquad Q = \dot{m}C_p\left[T_3 - T_2 + (n-1)(T_3 - T_4)\right]$$

so that

$$\eta = \frac{n(T_3 - T_4) - (T_2 - T_1)}{T_3 - T_2 + (n-1)(T_3 - T_4)}$$

With the full compression ratio across the single compressor and the expansion split evenly over the $n$ turbines,

$$\frac{T_2}{T_1} = \left(\frac{p_2}{p_1}\right)^{(\gamma-1)/\gamma} = r, \qquad \frac{T_3}{T_4} = \left(\frac{p_3}{p_4}\right)^{(\gamma-1)/n\gamma} = r^{1/n}, \qquad T_3 = T_4\,r^{1/n}$$

where $p_3/p_4$ is the overall expansion pressure ratio, and the efficiency becomes

$$\eta = \frac{T_4\,n(r^{1/n}-1) - T_1(r-1)}{T_4\,r^{1/n} - T_1 r + (n-1)T_4(r^{1/n}-1)} = \frac{\left(1 - 1/r^{1/n}\right) - \dfrac{T_1}{nT_3}(r-1)}{1 - \dfrac{n-1}{n\,r^{1/n}} - \dfrac{T_1}{nT_3}\,r} \tag{4.3}$$
Plotting the efficiencies as a function of compressor pressure ratio for the SFR gives Fig. 4.16.
Fig. 4.16 Ideal air-Brayton efficiencies for different numbers of turbines for an SFR.
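Eqs. (4.2) and (4.3) are easy to evaluate numerically. The sketch below assumes a 300 K compressor inlet temperature (not stated in the text) and the 783 K SFR turbine inlet temperature used above; it reproduces both the limiting pressure ratio of about 28.7 and the behavior plotted in Fig. 4.16:

```python
# Ideal air-Brayton efficiency, Eqs. (4.2) and (4.3).
GAMMA = 1.4
T1, T3 = 300.0, 783.0   # K; compressor inlet (assumed) and SFR turbine inlet

def eta_ideal(cpr, n=1):
    """Ideal efficiency for n turbines at compressor pressure ratio cpr."""
    r = cpr ** ((GAMMA - 1.0) / GAMMA)
    if n == 1:
        return 1.0 - 1.0 / r                              # Eq. (4.2)
    rn = r ** (1.0 / n)
    num = (1.0 - 1.0 / rn) - T1 / (n * T3) * (r - 1.0)
    den = 1.0 - (n - 1.0) / (n * rn) - T1 / (n * T3) * r
    return num / den                                      # Eq. (4.3)

# Single-turbine limit: compressor exit reaches turbine inlet temperature.
print((T3 / T1) ** (GAMMA / (GAMMA - 1.0)))   # ~28.7, as quoted in the text
for n in (1, 2, 3, 4):
    print(n, round(eta_ideal(15.0, n), 3))    # CPR = 15, near the multiturbine peak
```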
Several observations are worthwhile. The single turbine performs as much as 20% better than the multiple turbines. However, remember that by the time it gets to an expansion ratio of 28.7, it is producing no power, but doing it very efficiently. The multiple turbine efficiencies peak around a pressure ratio of 15 and are fairly flat from 8 to 25. Of course, this analysis was for isentropic compressors and turbines, and actual systems are likely to perform at lower efficiencies. Now consider a very simple model for calculating the efficiency of a GTCC power conversion system. An approximate efficiency can be calculated for a combined cycle power plant by the following simple argument [15].
The Brayton topping cycle, with efficiency $\eta_B$, produces work

$$W_B = \eta_B Q_{in}$$

and passes its reject heat to the Rankine cycle,

$$Q_R = (1 - \eta_B)\,Q_{in}$$

which, with Rankine cycle efficiency $\eta_R$, in turn produces

$$W_R = \eta_R Q_R$$

The overall efficiency is therefore

$$\eta_T = \frac{W_B + W_R}{Q_{in}} = \frac{\eta_B Q_{in} + \eta_R(1 - \eta_B)Q_{in}}{Q_{in}} = \eta_B + \eta_R - \eta_B\eta_R \tag{4.4}$$

So, for a 40% efficient air-Brayton topping cycle and a 40% efficient Rankine bottoming cycle, the system efficiency would be given by

$$\eta_T = 0.4 + 0.4 - 0.4 \times 0.4 = 0.64$$
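Eq. (4.4) in executable form, as a minimal sketch:

```python
# Combined-cycle efficiency, Eq. (4.4): eta_T = eta_B + eta_R - eta_B * eta_R
def combined_efficiency(eta_b, eta_r):
    """eta_b: Brayton topping efficiency; eta_r: Rankine bottoming efficiency."""
    return eta_b + eta_r - eta_b * eta_r

print(combined_efficiency(0.40, 0.40))   # 0.64, the example above
print(combined_efficiency(0.40, 0.35))   # 0.61, near what real GTCC plants achieve
```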
For GTCC plants, GE currently markets a system that will produce 61% efficiency at design power and better than 60% efficiency down to 87% of design power [14]. Before getting into realistic estimates for NACC systems, it is useful to take a look at ideal recuperated systems. Recuperators have been used in the Navy's WR-21 system and the Army's M1A1 main battle tank [15]. A recuperator takes the exhaust from the turbine and uses it to heat the compressed air coming out of the compressor before it goes to the combustion chamber or burner. If we assume an ideal recuperator, it will raise the temperature of the air going into the burner to the exhaust temperature of the
turbine. Of course, this takes an infinitely large heat exchanger, but it provides a bounding estimate for the achievable efficiency. The analysis is as follows:

$$W_C = \dot{m}C_p(T_2 - T_1), \qquad W_T = \dot{m}C_p(T_3 - T_4), \qquad Q = \dot{m}C_p(T_3 - T_4)$$

since with ideal recuperation heat need only be added between the turbine exhaust temperature $T_4$ and the turbine inlet temperature $T_3$. Then

$$\eta = \frac{\dot{m}C_p T_4\left(\dfrac{T_3}{T_4} - 1\right) - \dot{m}C_p T_1\left(\dfrac{T_2}{T_1} - 1\right)}{\dot{m}C_p T_4\left(\dfrac{T_3}{T_4} - 1\right)} = \frac{T_4(r-1) - T_1(r-1)}{T_4(r-1)} = 1 - \frac{T_1}{T_4} = 1 - \frac{T_1}{T_3}\,r \tag{4.5}$$

using $T_3 = rT_4$. For the multiple turbine case, this becomes

$$\eta = 1 - \frac{1}{n}\,\frac{T_1}{T_3}\,\frac{r-1}{1 - 1/r^{1/n}} \tag{4.6}$$
And as they say in academia, the "proof is left to the student." The resulting efficiencies are plotted in Fig. 4.17. For the multiple reheat turbines, the ideal efficiencies look better than for the single turbine. Of course, the hardest thing to develop to approach ideal performance is an air-to-air recuperator heat exchanger.
Fig. 4.17 Ideal recuperated air-Brayton efficiencies for N turbines for an SFR.
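Eqs. (4.5) and (4.6) can be evaluated the same way, again assuming a 300 K inlet (an assumption) and the 783 K SFR turbine inlet temperature; the trend matches Fig. 4.17, with the reheat machines ahead of the single turbine:

```python
# Ideal recuperated air-Brayton efficiency, Eqs. (4.5) and (4.6).
GAMMA = 1.4
T1, T3 = 300.0, 783.0   # K; compressor inlet (assumed) and SFR turbine inlet

def eta_recup(cpr, n=1):
    """Ideal recuperated efficiency for n turbines."""
    r = cpr ** ((GAMMA - 1.0) / GAMMA)
    if n == 1:
        return 1.0 - (T1 / T3) * r                                      # Eq. (4.5)
    return 1.0 - (T1 / (n * T3)) * (r - 1.0) / (1.0 - r ** (-1.0 / n))  # Eq. (4.6)

for n in (1, 2, 3, 4):
    print(n, round(eta_recup(6.0, n), 3))
# Efficiency now falls as the pressure ratio rises (see Eq. 4.5), so
# recuperated designs favor modest pressure ratios, and the multiple
# reheat turbines outperform the single turbine, as in Fig. 4.17.
```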
4.18 Currently proposed power conversion systems for small modular reactors
Before going further, it is worthwhile to note that the modern air-Brayton power conversion systems are not the only ones being proposed for SMRs. In the 1950s, when nuclear power was first being considered as a possible source of submarine propulsion and electric power generation, the state-of-the-art power conversion systems for power plants were all steam boilers. Thus, the earliest power plants, and most that exist today, use their nuclear heat to boil water. Some of the SMRs that have been proposed follow this technology and are simply smaller versions of the current generation of 1-gigawatt LWRs; the SMRs proposed by Holtec and BWXT fall into this category. Primarily with safety in mind, the NuScale reactor uses a natural convection water system to extract heat from the reactor core. This is a concept pioneered in nuclear submarines to reduce the noise of the reactor pumps for more stealthy operation. It is the most advanced SMR in terms of design and the licensing process, and it currently estimates a thermodynamic efficiency of 31%, somewhat lower than current GEN II or III LWRs. The newest power conversion system of interest for an SMR is the supercritical CO2 system. The seminal report describing this technology for nuclear reactors was published in 2004 by V. Dostal, M. J. Driscoll, and P. A. Hejzlar of MIT [16]. The concept was picked up by researchers at Sandia National Laboratories and is currently being developed there by DOE/NE. The high operating pressures (the CO2 critical pressure is ∼7.4 MPa) enable very small turbines and heat transfer equipment. Efficiency predictions will be discussed in comparison with air-Brayton systems subsequently. Currently, no SMRs advocate this technology as a baseline, but it may be adopted for SMRs as it matures. A fourth power conversion system, developed for current fossil steam systems, is the supercritical water power conversion system. SMRs in the 30 to 150 MWe power class have been proposed with this power conversion system. Its main advantage is that the working fluid does not change phase. This requires the system to operate above the water critical point at 647 K and 22.1 MPa. Though this is more than double the pressure of conventional steam systems, the technology has thoroughly penetrated the fossil steam market. Other possibilities for SMR power extraction and conversion systems are the gas-cooled reactors. These are not easily adapted to the small size of an SMR, as gas heat extraction in the core requires a large surface area, implying a large core. The efficiencies of these systems can be compared with the liquid metal/molten salt air-Brayton systems proposed here, but the development of more compact cores will require significant time to catch up with the liquid coolant systems.
4.19 Advanced Air-Brayton power conversion systems
Advanced air-Brayton power conversion systems are modeled after current generation gas turbine systems and take advantage of much of the technology developed for those systems. At the time that nuclear power was being developed, the gas turbine was also going through a rapid development period. The development of jet-powered aircraft provided a very strong incentive for advances in gas turbine technology. This technology was adapted to stationary electric power plants to provide peaking power using kerosene as a fuel. The gas turbine plants could start up and shut down much faster than the massive steam boiler plants, even if the fuel was more expensive. With the advent of cheap natural gas, baseload gas turbine plants came into their own. Then it was observed that the increased temperatures available to the gas turbines allowed their exhaust to be used to heat water in a conventional boiler, and the combined cycle was invented. Currently, GTCC power plants achieve efficiencies a little over 60%. The efficiency of a gas turbine plant is the driving performance measure because 85% of the cost of producing electricity is the cost of the fuel consumed. A system diagram for a typical GTCC is provided in Fig. 4.18. It is made up of a topping air-Brayton cycle and a bottoming Rankine cycle. The ambient air is taken in through the air compressor, combined with the fuel, and burned in the combustion chamber. It is then expanded through the turbine, passed through the heat recovery steam generator, and exhausted to the atmosphere. The bottoming steam cycle starts with liquid water at the entrance to the pump, where it is raised to high pressure. It then passes through the heat recovery steam generator, where it is vaporized in much
Fig. 4.18 Simplified gas turbine combined-cycle system.
the same fashion that it would be in a conventional boiler. The steam is then expanded and reheated through a series of typically three turbines. The exhaust goes to a condenser that extracts the waste heat and condenses the steam to water to start the cycle over again. Note that a circulating water system is required to extract the waste heat from the condenser and deposit it in the environment in some fashion; this loop is not shown in Fig. 4.18. Also note that the system diagram is very simplified, presenting the major components only. Any real system would at least have several feedwater heaters to improve the efficiency of the steam cycle slightly. It is also worth pointing out that the steam cycle is a closed cycle, in which the working fluid is used continuously, while for the air-Brayton cycle the air is used once and exhausted to the environment. The simplest NACC system looks exactly like the GTCC system, except that a heat exchanger is substituted for the combustion chamber and another fluid loop is added to carry heat from the reactor to this heat exchanger. Actually, for the systems considered here, the heat transfer fluid going through the reactor passes through an intermediate heat exchanger, passing its heat to a similar working fluid that then flows through the heat exchanger that drives the air-Brayton cycle. For a sodium-cooled reactor, this means the primary sodium passes through the reactor and then through a sodium-to-sodium intermediate heat exchanger. The heated secondary sodium then goes to the sodium-to-air heat exchanger that drives the air turbine. The recuperated air-Brayton system (NARC) simply replaces the HRSG with a heat exchanger that preheats the compressed air before it enters the primary heat exchanger, recovering some of the waste heat in the turbine exhaust. After passing through the recuperator heat exchanger, the exhaust is vented through a stack to the atmosphere. The waste heat from a NARC system is thus deposited directly in the atmosphere without passing through a circulating water system. It is useful to contrast the primary heat exchanger (sodium-to-air) with a combustion chamber. The heat exchanger cannot heat the working fluid to as high a temperature as the combustion chamber. In the heat exchanger, the temperature change going from the solid material to the gas involves a temperature drop, so the gas temperature must always be below the temperature of the solid heat exchanger material. In the combustion case, the temperature drop is in the other direction: the gas is at a higher temperature than the combustion chamber material. Since the combustion chamber can be cooled, this temperature drop can be quite significant. The gas temperature impinging on the turbine, the prime indicator of thermodynamic efficiency, will therefore always be lower for the NACC or NARC systems than for the GTCC system. There is a slight compensation, though, in that the pressure drop can be lower for the heat exchanger than for the combustion chamber. Typical pressure drops in combustion chambers are on the order of 3%-5%, whereas heat exchanger pressure drops can be designed to be less than 1%. The other difference is that the NACC/NARC systems do not change the working fluid. Combustion systems use up the oxygen in the compressed air, and although they typically do not reach stoichiometric temperatures (which would imply that all of the oxygen is
burned), most of the oxygen is burned. Some two-turbine gas turbine systems have been proposed. For the NACC/NARC systems the air is only heated. Borrowing a trick from steam systems, the air can be expanded through the first turbine and then reheated and expanded again. For steam cycles, the steam is typically expanded through as many as three turbines. Since the pressure drop through the primary heat exchanger can be as little as one-fifth that through a combustion chamber, it makes sense to consider as many as four reheats and five expansions of the air passing through the turbines. With these thoughts in mind, a typical system diagram for a two-turbine NACC system is presented in Fig. 4.19. A typical system diagram for a two-turbine nuclear air-Brayton recuperated cycle system is presented in Fig. 4.20. The two-turbine systems are the simplest multiturbine NACC/NARC systems that we will consider. Note that the air compressor, air turbines, steam turbines, and generator are all on the same shaft. Another configuration would have the air turbines and the steam turbines on different shafts; this would necessitate two smaller generators. For the discussion here, the differences between these configurations will not be considered. However, one configuration change that is of interest is the conversion of the last air turbine to a power, or free, turbine. In this case the first turbine is connected to the air compressor shaft and drives the compressor. The power, or free, turbine is not connected to this shaft but only to the generator. This is a common configuration in what are called turboshaft engines. The difference that will be considered here is that the working fluid (air) will pass through another heat exchanger prior to entering the power turbine. In combustion systems this is not done because the air can't be reburned.
Fig. 4.19 Two turbine nuclear air-Brayton combined-cycle system.
Fig. 4.20 Two turbine nuclear air-Brayton recuperated cycle system.
Additionally, it is possible to add a recuperator to the NACC system after the air exits the HRSG and before it is exhausted. This is not done with standard GTCC systems but will be considered here in order to improve a near-term system's efficiency. Before getting into the design and analysis of components and cycles, it is worth pointing out that other combined cycles have been proposed. The steam cycle has been replaced by an organic cycle in at least one design, where the organic fluid chosen is toluene. Since the bottoming cycle is a closed cycle, water is not unique as a working fluid, and the organic cycle has some advantages. Another combined cycle, with two closed cycles, has been proposed for space power plants: the topping cycle uses a mixture of helium and xenon as its working fluid, and the bottoming cycle uses isobutane. This cycle appears to be significantly more efficient than other proposed power conversion cycles, which for space power systems significantly reduces the size of the radiator used to dump waste heat.
4.20 Design equations and design parameters
Two designs will be developed: one for near-term systems with sodium cooling and one for advanced systems with molten salt cooling. These systems are representative of what a 50 MWe SMR based on air-Brayton power conversion might look like. State points, mass flow rates, and component sizes will be estimated.
Fig. 4.21 Estimated volume versus reactor thermal power for liquid metal reactors.
4.20.1 Reactors
It would seem that a chapter on SMRs would go into detailed design of the reactor core and heat removal system. However, the reactors of interest here are all of the liquid metal type: sodium-cooled, lead-cooled, lead-bismuth cooled, and molten salt cooled. (The molten salt liquid-fuel variant is not being considered, as its sizing has not been established very definitively.) Many prototypes have been built, and their characteristics are well documented in the book by Waltar et al. [17]. A very simple approach is taken to the nuclear core for these systems. A pool-type liquid metal arrangement is assumed, and a simple linear regression is fitted to the power level versus size data from Waltar et al. for the liquid metal systems that have been built around the world. The fitted curve is presented in Fig. 4.21. This is a somewhat crude approach but is based on actual experience. Certainly it would be desirable to beat the curve, but the curve is used only to show a comparison between the reactor part and the power conversion part of the plant. The other parameter of interest for fitting the reactor into the modeling here is the temperature of the hot air at the entrance to the first turbine. Current technology for sodium, and possibly lead or lead-bismuth systems, seems capable of achieving a temperature of 510 °C (783 K). Molten salt systems are expected to be able to achieve higher temperatures, so a temperature of 660 °C (933 K) is estimated for them. The following analysis will consider a near-term possible temperature of 783 K and a developmental temperature of 933 K. HTGRs can theoretically reach temperatures much higher than these, but analyzing an HTGR is beyond the goals here. The vessel volume will then be estimated at 1.04 times the thermal power of the reactor in
149
150
Introduction to energy essentials
megawatts plus 67 m3. A near-term SMR reactor will deliver hot air to the first turbine at 783 K, and an advanced SMR will deliver hot air to the first turbine at 933 K.
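To make the sizing rule concrete, here is a minimal Python sketch of the vessel-volume estimate just described (V ≈ 1.04·P_th + 67 m³). The function name and sample power level are ours, chosen for illustration, not taken from the source.

```python
def reactor_vessel_volume(p_thermal_mwt: float) -> float:
    """Pool-type liquid metal vessel volume (m^3) from the linear fit
    described above: 1.04 times the thermal power in MW, plus 67 m^3."""
    return 1.04 * p_thermal_mwt + 67.0

# Example: the ~164 MWt two-turbine sodium system of Table 4.2
print(reactor_vessel_volume(164.3))  # ~237.9 m^3, close to the 236 m^3 tabulated
```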
4.20.2 Air compressors and turbines
For all of the analyses to follow, the concept of a "rubber" engine applies. That is, the equipment is rebuilt every time to match the desired conditions. This is different from considering a given "solid" engine and looking at its performance under different conditions. Since the market for gas turbines is so large, the approach taken here is to look at the desired characteristics of a component and then see if any readily available components meet that requirement. Even if they do not meet it exactly, if they are close, it may be worthwhile to sacrifice some performance for a readily available, developed component. Thus, each component discussed here is designed to optimize the performance of the specific power plant considered. With this in mind, the basic requirement for a compressor is to increase the pressure of the working fluid. When it does this, it also heats the working fluid, and this heating defines the work required to drive the compressor. The defining equations are
$$T_{out} = T_{in}\,\mathrm{CPR}^{\frac{\gamma-1}{\gamma e_c}} \quad (4.7)$$

where

CPR = compressor pressure ratio
$e_c$ = the polytropic efficiency for the compressor
$\gamma$ = ratio of specific heats for air

$$W_{comp} = C_p\,(T_{out} - T_{in}) \quad (4.8)$$

where $C_p$ = air constant-pressure specific heat.
Note all calculations follow the standard practice of performing analyses on a per unit mass basis. The efficiency of turbines and compressors will be calculated using a quantity called polytropic, or small stage, efficiency. This efficiency is taken as independent of the pressure ratio. This allows the comparison of performance across multiple pressure ratios. It is also worth pointing out that the specific heat and ratio of specific heats are not constant for air, so an average value must be chosen based on the two temperatures at the start and finish of a process. This value is solved for iteratively by estimating the final temperature and then updating the specific heat and ratio of specific heats until all converge. The classic thermodynamic efficiency can be calculated for a compressor or turbine by calculating an ideal temperature that would be produced with a polytropic efficiency
of 1.0 and then comparing the two temperature changes. For instance, for a compressor we would have

$$T_{out,ideal} = T_{in}\,\mathrm{CPR}^{\frac{\gamma-1}{\gamma}} \quad (4.9)$$

$$e_{th} = \frac{W_{ideal}}{W_{actual}} = \frac{T_{out,ideal} - T_{in}}{T_{out} - T_{in}} \quad (4.10)$$

The governing equations for a turbine are

$$T_{out} = T_{in}\left(\frac{1.0}{\mathrm{CPR}}\right)^{\frac{(\gamma-1)e_t}{\gamma}} \quad (4.11)$$

where $e_t$ = polytropic efficiency for the turbine, and

$$W_t = C_p\,(T_{in} - T_{out}) \quad (4.12)$$
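A minimal sketch of Eqs. (4.7)–(4.8) with the iterative property update described above might look like the following. The linear cp(T) fit is a placeholder for real air-property data, and the starting guess and tolerance are arbitrary choices of ours.

```python
R_AIR = 287.0  # gas constant for air, J/(kg*K)

def cp_air(t_kelvin: float) -> float:
    # Placeholder linear fit for air specific heat, J/(kg*K), ~300-1000 K.
    return 1002.0 + 0.14 * (t_kelvin - 300.0)

def compressor_exit(t_in: float, cpr: float, e_c: float, tol: float = 1e-6):
    """Solve Eq. (4.7) for the exit temperature, updating the average
    specific heat and gamma until convergence, then Eq. (4.8) for work."""
    t_out = t_in * cpr ** 0.3  # crude starting guess
    while True:
        cp = cp_air(0.5 * (t_in + t_out))  # average over the process
        gamma = cp / (cp - R_AIR)
        t_new = t_in * cpr ** ((gamma - 1.0) / (gamma * e_c))  # Eq. (4.7)
        if abs(t_new - t_out) < tol:
            return t_new, cp * (t_new - t_in)  # exit T (K), work (J/kg)
        t_out = t_new

t_out, w_comp = compressor_exit(t_in=300.0, cpr=7.541, e_c=0.93)
```

A turbine routine is analogous, using Eq. (4.11), where the polytropic efficiency multiplies rather than divides the exponent.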
In addition to estimating the thermodynamic performance for a particular power conversion system, an attempt was made to estimate the size of the components, particularly the heat exchangers. To estimate the sizes for the compressors and turbines the actual mass flow must be calculated. This of course will depend on the power generated. The net electrical power generated by the air-Brayton part of the cycle will be estimated as:
$$P(e)_{net} = \dot m \sum_i W_{t,i} - \dot m \sum_j W_{c,j} \quad (4.13)$$
This equation is then solved for the mass flow rate given the desired electrical power from the air-Brayton part of the cycle. The compressors are sized based on a hub-to-tip ratio of 0.5, a blade solidity of 0.05, and an entering Mach number of 0.4. This is enough information to calculate a compressor radius. The pressure ratio increase per stage is set at 1.25. Then the length of the blades is calculated based on the density entering a given stage for a constant axial velocity. The width of the stage is then calculated as 0.784 times the height of the blades. This gives a length for the compressor and subsequently a volume. The air turbine sizes are estimated in much the same way, with a hub-to-tip ratio of 0.7, a blade solidity of 0.05, an exit Mach number of 0.3, and a pressure drop factor per stage of 2.5. There are many possible designs for compressors and turbines, but these parameters seem to be about average for current designs. The volumes of compressors and turbines are not major contributors to the overall system volume in the end, so these approximations seem reasonably adequate. Perhaps the most critical dimension estimated is the radius of the compressor, as it is used to estimate the polytropic efficiency based on a correlation developed by Wilson and Korakianitis [18].
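Inverting Eq. (4.13) for the mass flow is the first step of this sizing chain. A sketch follows; the per-unit-mass works are made-up placeholders, not values from the underlying model.

```python
def air_mass_flow(p_net_watts: float, w_turbines_j_kg: list, w_compressors_j_kg: list) -> float:
    """Invert Eq. (4.13): m_dot = P(e)_net / (sum_i W_t,i - sum_j W_c,j)."""
    specific_net = sum(w_turbines_j_kg) - sum(w_compressors_j_kg)
    return p_net_watts / specific_net

# Illustrative: 37.9 MWe from the Brayton side (Table 4.2, three turbines),
# with invented specific works for three turbines and one compressor.
m_dot = air_mass_flow(37.9e6, [90e3, 90e3, 90e3], [131e3])
print(m_dot)  # ~273 kg/s; Table 4.2 lists 272.9 kg/s for this case
```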
$$e_c = 0.862 + 0.015\ln(\dot m) - 0.0053\ln(r_c) \quad (4.14)$$

where

$\dot m$ = the mass flow rate in kg/s
$r_c$ = compressor pressure ratio

Wilson also developed a similar correlation for the polytropic efficiency of air turbines, given by

$$e_t = 0.7127 + 0.03\ln(d_m) - 0.0093\ln(1/r_t) \quad (4.15)$$

where

$d_m$ = rotor mean diameter in mm
$r_t$ = pressure ratio for the turbine
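These two correlations are straightforward to encode. A sketch follows, with sample inputs at the Table 4.2 scale (our choice of numbers).

```python
import math

def e_c_wilson(m_dot_kg_s: float, r_c: float) -> float:
    """Compressor polytropic efficiency, Eq. (4.14)."""
    return 0.862 + 0.015 * math.log(m_dot_kg_s) - 0.0053 * math.log(r_c)

def e_t_wilson(d_mean_mm: float, r_t: float) -> float:
    """Air turbine polytropic efficiency, Eq. (4.15)."""
    return 0.7127 + 0.03 * math.log(d_mean_mm) - 0.0093 * math.log(1.0 / r_t)

print(e_c_wilson(272.9, 7.541))  # ~0.935 for the three-turbine sodium case
```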
The sensitivity of the overall system size to air compressor and turbine performance will not be addressed, as these components are a small part of the volume of the overall systems. Steam turbine systems are modeled similarly to air turbines, except that a simple thermodynamic efficiency of 0.95 is assumed in all cases. This is in the range of recent state-of-the-art turbines [8,9]. A hub-to-tip ratio of 0.6, a blade solidity of 0.05, and an exit Mach number of 0.3 are used to compute the turbine diameter. A pressure drop factor per stage of 1.5 and a stage width of 0.4 times the blade height are used to estimate the turbine length.
4.20.3 Heat exchangers
There are numerous heat exchangers in each of the systems to follow. In many cases they make up the largest fraction of the system volume. The largest component for the NACC system is generally the heat recovery steam generator, which includes an economizer, an evaporator, and generally three superheaters. For the NARC systems the largest component is generally the recuperator, as it is an air-to-air heat exchanger. The heat transfer in each heat exchanger is characterized by the classic effectiveness equation

$$\mathrm{Eff} = \frac{C_i\,(T_{max,i} - T_{min,i})}{C_{min}\,(T_{hot,in} - T_{cold,in})} \quad (4.16)$$

where

Eff = heat exchanger effectiveness
$C_{min}$ = the smaller of the two fluids' products of mass flow rate and specific heat
$T_{hot,in}$ = the temperature of the hot fluid entering the heat exchanger
$T_{cold,in}$ = the temperature of the cold fluid entering the heat exchanger
$C_i$ = the mass flow rate times the specific heat for the ith fluid, either hot or cold
$T_{max,i}$ = the maximum temperature for the ith fluid
$T_{min,i}$ = the minimum temperature for the ith fluid

The baseline effectiveness for all heat exchangers is assumed to be 0.95. A pressure drop of 1.0% is also assumed for both fluids. Rather than using pressure drops, the parameter of interest will be defined as the pressure ratio: a 1% pressure drop means the exit pressure from the heat exchanger is 99% of the inlet pressure. The size of the heat exchanger can then be calculated with at least one fluid achieving the 1% pressure drop. This applies to all heat exchangers, be they liquid metal-to-air, molten salt-to-air, air-to-air, air-to-water, or air-to-steam. The overall size required to achieve a 0.95 effectiveness also depends on the flow path within the heat exchanger, the ratio of the minimum-to-maximum mass flow rates times heat capacities, and the number of heat transfer units for a specific design. Flow paths can be counterflow or cross flow, and there are several varieties of cross flow of interest. Transfer units are defined as

$$N_{tu} = \frac{AU}{C_{min}} \quad (4.17)$$

where

A = heat transfer area
$C_{min}$ = as defined above

and U is defined by

$$\frac{1}{U} = \frac{1}{\eta_{hot}h_{hot}} + \frac{1}{k_a} + \frac{1}{\eta_{cold}h_{cold}} \quad (4.18)$$

where

$h_{hot}$, $h_{cold}$ = heat transfer coefficients for the hot and cold surfaces
$\eta_{hot}$, $\eta_{cold}$ = surface efficiencies for the hot and cold surfaces (fins, etc.)
$k_a$ = ratio of wall thermal conductivity to its thickness

The hot surface area has been assumed to be the same size as the cold surface area and the wall surface area; if that is not the case, then an average must be taken. The functional relationship is then

$$\mathrm{Eff} = f(N_{tu},\, C_{min}/C_{max},\, \text{flow path}) \quad (4.19)$$
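For one concrete instance of Eq. (4.19), the sketch below uses the standard counterflow effectiveness relation (the cross-flow unmixed relations tabulated by Kays and London are messier but follow the same pattern) and inverts Eq. (4.17) numerically for the required area. This is our simplification for illustration, not the configuration actually used in the study; all inputs are invented.

```python
import math

def eff_counterflow(ntu: float, c_r: float) -> float:
    """Counterflow effectiveness, one instance of Eq. (4.19):
    Eff = f(N_tu, C_min/C_max, flow path)."""
    if abs(c_r - 1.0) < 1e-9:
        return ntu / (1.0 + ntu)
    x = math.exp(-ntu * (1.0 - c_r))
    return (1.0 - x) / (1.0 - c_r * x)

def area_for_effectiveness(eff_target: float, c_min_w_k: float,
                           c_r: float, u_w_m2k: float) -> float:
    """Bisect on N_tu, then A = N_tu * C_min / U from Eq. (4.17)."""
    lo, hi = 1e-6, 200.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if eff_counterflow(mid, c_r) < eff_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi) * c_min_w_k / u_w_m2k

# Illustrative numbers only: a 0.95-effective exchanger with C_min/C_max = 0.8
print(area_for_effectiveness(0.95, 3.0e5, 0.8, 50.0))  # heat transfer area, m^2
```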
These functional relationships have been taken from the text by Kays and London [19]. For the systems considered here, the following configurations taken from Kays and London will be used.

4.20.3.1 Primary heat exchangers—sodium to air, molten salt to air
Type: Cross flow unmixed
Surfaces: Louvered plate-fin
Pitch: 437 per meter
Heat transfer area per volume: 1204 m²/m³

4.20.3.2 Economizer—air to water
Type: Cross flow unmixed
Surfaces: Louvered plate-fin
Pitch: 437 per meter
Heat transfer area per volume: 1204 m²/m³

4.20.3.3 Superheaters—air to steam
Type: Cross flow unmixed
Surfaces: Louvered plate-fin
Pitch: 437 per meter
Heat transfer area per volume: 1204 m²/m³

4.20.3.4 Condenser—steam to water
Type: Cross flow unmixed
Surfaces: Louvered plate-fin
Pitch: 437 per meter
Heat transfer area per volume: 1204 m²/m³

4.20.3.5 Recuperator—air to air
Type: Cross flow unmixed
Surfaces: Plate-fin
Pitch: 1789 per meter
Heat transfer area per volume: 4372 m²/m³

4.20.3.6 Intercooler—water to air
Type: Cross flow unmixed
Surfaces: Plate-fin
Pitch: 1789 per meter
Heat transfer area per volume: 4372 m²/m³
4.20.4 Pumps and generators
The efficiency for water pumps was simply assumed to be around 80%, and the generators were assumed to be 99% efficient. A mechanical efficiency of 99% was assumed to account for frictional losses in the turbocompressors.
4.20.5 Connections and uncertainty
The size of each component was increased to allow for uncertainty and connections. The increase factors are as follows:

Reactor: Estimated volume = 1.00 × Calculated volume
Compressors: Estimated volume = 1.20 × Calculated volume
Air turbines: Estimated volume = 1.10 × Calculated volume
Primary HXs: Estimated volume = 1.10 × Calculated volume
Superheater: Estimated volume = 1.20 × Calculated volume
Steam turbines: Estimated volume = 1.20 × Calculated volume
Evaporator: Estimated volume = 1.20 × Calculated volume
Economizer: Estimated volume = 1.20 × Calculated volume
Condenser: Estimated volume = 1.20 × Calculated volume
Recuperator: Estimated volume = 1.05 × Calculated volume
Intercooler: Estimated volume = 1.05 × Calculated volume
4.20.6 Validation
Combining all of these equations into a computer model, the NACC code was developed [4]. To validate its methods, an attempt was made to model a current-generation gas turbine system. The most widely used GTCC system in 2001 was the GE S107F configuration [20]. It had an advertised efficiency of 56.5%. Most of the data required to estimate its performance were available; the only quantity that had to be estimated was the gas turbine inlet temperature. This was estimated based on the pressure ratio for the turbine, assuming a near-atmospheric pressure at the exit of the HRSG. When this information was input to the NACC code, it calculated an efficiency of 56.55%. That seemed a little too serendipitous, so turbine inlet temperatures were estimated for 13 GE gas turbines, ranging from 1440 to 1659 K. These gave a range of efficiencies for the S107F of 53.9% to 58.97%. The NACC code therefore appears to be accurate in estimating efficiencies to within 2%; the simple formula given in Eq. (4.4) can be off by 7%.
4.21 Predicted performance of small modular NACC systems
To assess the performance of small modular NACC systems, two technology levels were considered. A near-term system was represented by a sodium reactor with an input temperature of 783 K to the first turbine. An advanced system was represented by a molten salt reactor with an input temperature of 948 K to the first turbine. Both systems were designed to produce 50 MWe. Up to four reheat cycles, or five primary heat exchangers and turbines, were considered for both systems. The input and exit temperatures for all turbines were the same. The performance characteristics for the near-term sodium systems are given in Table 4.2.
Table 4.2 Performance characteristics of a sodium air-Brayton combined-cycle system.

| Characteristics | 2 Turbines | 3 Turbines | 4 Turbines | 5 Turbines |
| --- | --- | --- | --- | --- |
| Electrical power (MWe) | 50 | 50 | 50 | 50 |
| Efficiency (%) | 30.44 | 34.10 | 35.92 | 37.14 |
| Thermal power (MWt) | 164.3 | 146.6 | 139.2 | 134.6 |
| CPR | 4.957 | 7.541 | 9.703 | 10.28 |
| T (turbine inlet) (K) | 783 | 783 | 783 | 783 |
| T (turbine exit) (K) | 648 | 668 | 683 | 703 |
| Mass flow rate, air (kg/s) | 333.2 | 272.9 | 245.2 | 227.4 |
| Mass flow rate, water (kg/s) | 7.6 | 8.3 | 8.9 | 9.6 |
| Brayton power (MWe) | 39.2 | 37.9 | 36.7 | 35.2 |
| Rankine power (MWe) | 10.8 | 12.1 | 13.3 | 14.8 |
| Water heat dump (MWt) | 15.8 | 17.7 | 19.6 | 21.5 |
| Reactor size (m³) | 236 | 218 | 210 | 206 |
| HRSG (m³) | 39 | 35 | 35 | 36 |
| Recuperator (m³) | 0 | 0 | 0 | 0 |
| Brayton system (m³) | 10.9 | 10.8 | 11.6 | 12.9 |
| Rankine system (m³) | 46 | 43 | 44 | 46 |
| System volume (m³) | 304 | 282 | 276 | 275 |
Note that the efficiencies are not much better than those of current LWR systems; the limitation on temperature is severe when compared to a GTCC. Even though the efficiencies are no better than those of a current LWR, the heat dumps to environmental water (15.8–21.5 MW) are significantly less than those of a 35% efficient LWR, which would have to dump 92.8 MW of heat to produce 50 MWe. The performance characteristics for an advanced air-Brayton system based on a molten salt coolant are given in Table 4.3. Now the efficiencies are significantly better but still not up to the gas turbine systems. However, it is probably not a good idea to compare a small modular nuclear reactor against a gas turbine system on any performance measure other than the cost of electricity. Efficiency is everything for a gas turbine system, but it is not that significant for a nuclear system, as the cost of fuel is a much smaller fraction of the cost of electricity for a nuclear system. It is also worth pointing out that these are the highest-pressure systems that will be addressed here. The peak pressure is about 17 atm, or about 1.7 MPa. This is significantly less than current LWR system pressures and quite a bit less than the 7 MPa+ being proposed for supercritical CO2 systems. The water heat dumps for these systems are larger than for the sodium systems because a larger fraction of the energy is derived from the bottoming cycle. The efficiency of the near-term sodium combined cycles can be improved by adding a recuperator after the HRSG. This is a relatively small recuperator compared to that of a NARC-type system, but it does boost the efficiency. Its performance characteristics are given in Table 4.4.
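The heat-dump comparison above is a one-line energy balance. A sketch for checking such figures:

```python
def total_waste_heat_mw(p_electric_mw: float, efficiency: float) -> float:
    """Total heat a plant must reject: Q = P_e * (1 - eta) / eta."""
    return p_electric_mw * (1.0 - efficiency) / efficiency

# A 35% efficient LWR producing 50 MWe must dump ~92.9 MW (the text's 92.8 MW):
print(total_waste_heat_mw(50.0, 0.35))
# The NACC systems reject most of their waste heat to air through the HRSG
# exhaust, which is why their *water* heat dumps in Table 4.2 are only 15.8-21.5 MW.
```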
Table 4.3 Performance characteristics of a molten salt air-Brayton combined-cycle system.

| Characteristics | 2 Turbines | 3 Turbines | 4 Turbines | 5 Turbines |
| --- | --- | --- | --- | --- |
| Electrical power (MWe) | 50 | 50 | 50 | 50 |
| Efficiency (%) | 43.24 | 45.36 | 46.72 | 47.48 |
| Thermal power (MWt) | 115.6 | 110.2 | 107.0 | 105.3 |
| CPR | 9.301 | 11.917 | 14.478 | 16.928 |
| T (turbine inlet) (K) | 948 | 948 | 948 | 948 |
| T (turbine exit) (K) | 728 | 783 | 813 | 833 |
| Mass flow rate, air (kg/s) | 172.6 | 147.9 | 135 | 127.9 |
| Mass flow rate, water (kg/s) | 9.5 | 10.5 | 10.7 | 10.8 |
| Brayton power (MWe) | 35.3 | 32.4 | 31.3 | 30.7 |
| Rankine power (MWe) | 14.7 | 17.6 | 18.7 | 19.3 |
| Water heat dump (MWt) | 22.4 | 25.3 | 26 | 26.4 |
| Reactor size (m³) | 186 | 181 | 178 | 176 |
| HRSG (m³) | 31 | 34 | 35 | 36 |
| Recuperator (m³) | 0 | 0 | 0 | 0 |
| Brayton system (m³) | 66 | 7.3 | 8.8 | 19.5 |
| Rankine system (m³) | 41 | 45 | 47 | 48 |
| System volume (m³) | 243 | 243 | 243 | 245 |
Table 4.4 Performance characteristics of a sodium air-Brayton combined-cycle system with an added recuperator.

| Characteristics | 2 Turbines | 3 Turbines | 4 Turbines | 5 Turbines |
| --- | --- | --- | --- | --- |
| Electrical power (MWe) | 50 | 50 | 50 | 50 |
| Efficiency (%) | 40.70 | 40.98 | 41.00 | 40.98 |
| Thermal power (MWt) | 122.9 | 122.0 | 122.0 | 122.0 |
| CPR | 2.598 | 2.556 | 2.609 | 2.788 |
| T (turbine inlet) (K) | 783 | 783 | 783 | 783 |
| T (turbine exit) (K) | 713 | 730 | 743 | 748 |
| Mass flow rate, air (kg/s) | 351.7 | 309.2 | 293 | 277 |
| Mass flow rate, water (kg/s) | 16.4 | 16.1 | 16.5 | 16 |
| Brayton power (MWe) | 24.3 | 24 | 23 | 23.5 |
| Rankine power (MWe) | 25.7 | 26 | 27 | 26.5 |
| Water heat dump (MWt) | 29.8 | 36.6 | 37.4 | 36.5 |
| Reactor size (m³) | 194 | 194 | 194 | 194 |
| HRSG (m³) | 59 | 57 | 57 | 55 |
| Recuperator (m³) | 74 | 65 | 62 | 58 |
| Brayton system (m³) | 82 | 76 | 75 | 73 |
| Rankine system (m³) | 77 | 75 | 76 | 73 |
| System volume (m³) | 374 | 364 | 364 | 360 |

Note that the efficiency is not a strong function of the number of turbines or reheat cycles, and the two-turbine system is competitive with the three-, four-, and five-turbine systems. The system pressures are also lower, as is characteristic of any recuperated system. Once again, the heat dumps are a little larger because the bottoming cycle is doing more work. The advanced molten salt system could not accept a recuperator because the compressor exit temperatures were greater than the final turbine exit temperatures.
4.22 Performance variation of small modular NACC systems
Given the performance described in Tables 4.2 to 4.4, it is worth considering the sensitivity of these systems to some of the assumptions made in the analysis. Since it gets rather messy to assess the sensitivity of three different systems with four turbine configurations each, a three-turbine system will be chosen to assess sensitivity, and the recuperator will be included in the near-term system. The most obvious sensitivity is to the temperature of the air entering the turbines. This has been addressed already by considering two baseline temperatures, and linear interpolation between the two should be adequate. The two remaining most sensitive parameters driving the performance of these NACC systems are the steam pressure in the bottoming cycle and the effectiveness of the recuperator in the air-Brayton system. The sensitivity to recuperator effectiveness will be addressed when discussing NARC systems. The sensitivity of thermodynamic efficiency to the steam pressure for both systems is presented in Fig. 4.22.

Fig. 4.22 Thermodynamic efficiency versus steam pressure for air-Brayton combined cycles.

Since the sodium system is recuperated and the molten salt system is not, the effects of steam pressure on each run in opposite directions. However, it is not a big effect. The effect of steam pressure on the HRSG size is presented in Fig. 4.23.

Fig. 4.23 Variation of HRSG size versus steam pressure for air-Brayton combined cycles.

Once again, the effect of recuperation tends to moderate the increase in size of the HRSG with steam pressure, though this time the variation for both systems is in the same direction, increasing with increased pressure. The effect on the overall system size is presented in Fig. 4.24. The size of the HRSG is not large enough to influence the overall system size greatly for different steam pressures; it and the recuperator are smaller than the estimate for the reactor and primary heat transport system.
Fig. 4.24 System size versus steam pressure for air-Brayton combined cycles.
Fig. 4.25 System thermodynamic efficiency versus steam turbine efficiency.
Since the baseline steam turbines were assumed to be quite efficient, it was considered useful to see the effect of this assumption on the overall system. The sensitivity of the overall system thermodynamic efficiency is presented in Fig. 4.25. The system efficiency decreases 0.18% for every 1% decrease in the steam turbine efficiency for the near-term sodium system, and it decreases 0.12% for every 1% decrease in the steam turbine efficiency for the advanced molten salt system. Thus, the steam turbine efficiency is not a major determinant of overall efficiency. Next, consider the effect of the primary heater pressure drop, or pressure ratio, on heater size and system size. The size of the heaters decreases by 35%–37% if the allowed pressure ratio goes from 0.99 to 0.95, as described in Fig. 4.26. This has a larger impact on the overall system size than might be expected, as the pressure losses in the heaters affect the sizes of everything downstream. It is a significantly bigger effect for the near-term sodium system than for the advanced molten salt system. The magnitude of this effect is plotted in Fig. 4.27. The primary heater pressure ratio affects the system efficiency in a linear manner, with a decrease in system efficiency of 0.73% for every 1% decrease in the pressure ratio for the near-term system and a 0.35% decrease for every 1% decrease in pressure ratio for the advanced system. Next consider the HRSG. This consists of the economizer, the evaporator, and the superheaters. It is simpler to lump all of their effects together, as all are heat exchangers and the pressure drop and effectiveness are the parameters of interest. In Figs. 4.28 and 4.29 all of these heat exchanger parameters are varied together.
Fig. 4.26 Primary heater relative volume versus pressure ratio.
Fig. 4.28 gives the variation in the relative volume for the HRSG as a function of all of the pressure ratios. Fig. 4.29 gives the relative variation in volume for the HRSG as a function of the effectiveness of all of its components.
Fig. 4.27 Relative system volume versus primary heater pressure ratio.
Fig. 4.28 HRSG relative size versus the pressure ratios for all of its components.
The relative variation in the size of the HRSG is the same for both the near-term and advanced systems, resulting in overlapping curves in Figs. 4.28 and 4.29. The pressure drop through the HRSG components affects the system efficiency in exactly the same manner as the pressure drop through the primary heaters: 0.73% per 1% for sodium and 0.34% per 1% for molten salt. The system efficiency is affected less by the effectiveness of the HRSG than by the pressure drops through it. The loss in system efficiency for a 1% change in effectiveness is 0.31% for the near-term system and 0.30% for the advanced system.
Fig. 4.29 HRSG relative volume as a function of its components' effectiveness.
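The quoted sensitivities are simple linear slopes and can be applied directly. In the sketch below we read the quoted percentages as percentage points of system efficiency; the text does not spell out the units, so that reading is our assumption.

```python
def system_efficiency_after(base_eff_pct: float, slope: float, delta_pct: float) -> float:
    """Apply a quoted linear sensitivity: each 1% decrease in a component
    pressure ratio (or effectiveness) costs `slope` points of system
    efficiency. Treating the quoted figures as percentage points is an
    assumption on our part."""
    return base_eff_pct - slope * delta_pct

# Sodium NACC, three turbines (34.10% base, Table 4.2): relaxing the HRSG
# pressure ratio from 0.99 to 0.95 (a 4% drop) at 0.73 points per percent:
print(system_efficiency_after(34.10, 0.73, 4.0))  # ~31.2%
```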
This concludes the sensitivity analysis for NACC systems. The sensitivity to recuperator performance will be addressed in the next section when NARC systems are discussed.
4.23 Predicted performance for small modular NARC systems
Small modular NARC systems are interesting for a number of reasons. They can achieve higher thermodynamic efficiencies than NACC systems, with an advanced intercooled system having a predicted efficiency of greater than 50%. They also can be built without a water heat dump for waste heat, making them locatable anywhere on the planet. Consider a typical near-term NARC system that produces 50 MWe, characterized in Table 4.5.

Table 4.5 Performance characteristics of a sodium air-Brayton recuperated system.

| Characteristics | 2 Turbines | 3 Turbines | 4 Turbines | 5 Turbines |
| --- | --- | --- | --- | --- |
| Electrical power (MWe) | 50 | 50 | 50 | 50 |
| Efficiency (%) | 42.61 | 42.58 | 42.30 | 41.91 |
| Thermal power (MWt) | 117.3 | 117.4 | 118.2 | 119.3 |
| CPR | 2.497 | 2.67 | 2.795 | 2.935 |
| T (turbine inlet) (K) | 783 | 783 | 783 | 783 |
| T (turbine exit) (K) | 703 | 725 | 738 | 745 |
| Mass flow rate, air (kg/s) | 602.9 | 558.6 | 538 | 521.2 |
| Mass flow rate, water (kg/s) | 0 | 0 | 0 | 0 |
| Brayton power (MWe) | 50 | 50 | 50 | 50 |
| Rankine power (MWe) | 0 | 0 | 0 | 0 |
| Water heat dump (MWt) | 0 | 0 | 0 | 0 |
| Reactor size (m³) | 188 | 188 | 189 | 190 |
| HRSG (m³) | 0 | 0 | 0 | 0 |
| Recuperator (m³) | 107 | 97 | 93 | 89 |
| Brayton system (m³) | 145 | 118 | 117 | 117 |
| Rankine system (m³) | 0 | 0 | 0 | 0 |
| System volume (m³) | 321 | 315 | 314 | 315 |

Note that efficiency drops off slightly with an increase in the number of turbines, although it is about 1% better than the best recuperated NACC system and 4% better than the best basic NACC system. The pressures are very comparable to the recuperated near-term NACC system, and the relative volumes are larger than the basic NACC system but smaller than the recuperated NACC system. Mass flow rates are almost double those of the NACC systems. The performance characteristics of an advanced NARC system are given in Table 4.6. A comparison of the advanced NARC system with the advanced NACC system shows many of the same changes that the near-term comparison showed. However, the advanced NACC system could not use a recuperator, so the NARC system operates at a much lower pressure and gets about a 2% better thermodynamic efficiency.
Table 4.6 Performance characteristics of a molten salt air-Brayton recuperated system.

| Characteristics | 2 Turbines | 3 Turbines | 4 Turbines | 5 Turbines |
| --- | --- | --- | --- | --- |
| Electrical power (MWe) | 50 | 50 | 50 | 50 |
| Efficiency (%) | 49.55 | 49.40 | 49.24 | 48.99 |
| Thermal power (MWt) | 100.9 | 101.2 | 101.5 | 102.1 |
| CPR | 2.847 | 3.077 | 3.345 | 3.365 |
| T (turbine inlet) (K) | 948 | 948 | 948 | 948 |
| T (turbine exit) (K) | 855 | 868 | 883 | 898 |
| Mass flow rate, air (kg/s) | 377.9 | 348.7 | 326.2 | 326.1 |
| Mass flow rate, water (kg/s) | 0 | 0 | 0 | 0 |
| Brayton power (MWe) | 50 | 50 | 50 | 50 |
| Rankine power (MWe) | 0 | 0 | 0 | 0 |
| Water heat dump (MWt) | 0 | 0 | 0 | 0 |
| Reactor size (m³) | 171 | 172 | 172 | 173 |
| HRSG (m³) | 0 | 0 | 0 | 0 |
| Recuperator (m³) | 62 | 56 | 51 | 51 |
| Brayton system (m³) | 73 | 70 | 67 | 70 |
| Rankine system (m³) | 0 | 0 | 0 | 0 |
| System volume (m³) | 249 | 246 | 244 | 248 |
4.24 Performance variation of small modular NARC systems
There are only two components that drive the sensitivities of NARC systems: the primary heat exchanger and the recuperator. Both are sensitive to the pressure drop and the effectiveness of the heat exchange process. Start with the primary heat exchanger. Its relative volume as a function of the primary heat exchanger pressure ratio is described in Fig. 4.30. As expected, the effect on relative volume is the same for both the near-term and advanced systems; however, a 5% reduction in pressure ratio produces about a 5% greater savings in relative volume for the NARC system than for the NACC system.
Fig. 4.30 Primary heater relative volume as a function of its pressure ratio.
Fig. 4.31 System relative volume as a function of primary heater pressure ratio.
The change in primary heater pressure ratio has a greater effect on the thermodynamic efficiency of the NARC systems than it did for the NACC systems. In this case the system thermodynamic efficiency for the near-term system decreases 1.3% for every 1.0% decrease in the primary heater pressure ratio; for the advanced system the decrease is 1% for each 1% decrease in primary heater pressure ratio. This is the largest change in system efficiency observed when considering the variation in component properties. The change in system relative volume as the primary heater pressure ratio is varied is described in Fig. 4.31. As before for the NACC systems, the variation in primary heat exchanger pressure ratio has a much bigger impact on overall system volume for the near-term sodium system than it does for the advanced molten salt system. Fig. 4.32 describes the effect of primary heat exchanger effectiveness on the size of the heat exchangers. Reducing the effectiveness by 20% reduces the size of the heat exchangers by 60%; this is essentially the same reduction achieved for the NACC systems. Contrary to the effect of the primary heat exchanger pressure ratio, a change in the effectiveness of the primary heat exchanger has a negligible effect on the system volume: the primary heat exchanger effectiveness affects only its own size and does not affect the size of other system components. Once again, the two curves for the near-term and advanced systems are identical. Given that the temperatures into the turbines were specified for this analysis, it is impossible to determine the effect of the primary heat exchanger effectiveness on the system thermodynamic efficiency.
Fig. 4.32 Relative volume of primary heat exchanger versus its effectiveness.
The effects of the recuperator pressure ratio and effectiveness on system size and efficiency for a NARC system are significant. First consider the pressure ratio; its effect on recuperator size is described in Fig. 4.33. Unlike the primary heat exchangers, the variation in recuperator pressure ratio has very little effect on the system size: less than 1% for the 5% variation in pressure ratio.
Fig. 4.33 Recuperator relative volume versus recuperator pressure ratio.
Fig. 4.34 Recuperator relative volume as a function of its effectiveness.
The thermodynamic efficiency varies quite linearly with the recuperator pressure ratio. The system efficiency decreases 0.9% for every 1% decrease in the pressure ratio for the near-term sodium system and 0.67% for every 1% for the advanced molten salt system. The recuperator relative volume as a function of its effectiveness is described in Fig. 4.34. There is a dramatic drop in size from decreasing the required effectiveness by 5%: the 0.9-effective recuperator is only 20% the size of the 0.95-effective recuperator. Another 10% decrease in size can be achieved by lowering the effectiveness another 5%, but beyond that the drop in size levels out and is not as dramatic. Note once again that the curves for the near-term and advanced systems are coincident. The effect of adjusting the recuperator effectiveness on the overall NARC system is described in Fig. 4.35. For these cases, the recuperators are not a large fraction of the total system volume. The design chosen for the recuperators is very aggressive, and as a recuperator becomes a larger part of the volume of the system, the impact of its change on the system volume will be much more significant. The effect of the recuperator effectiveness on the system thermodynamic efficiency is 0.77% per 1% for the near-term sodium system and 0.89% per 1% for the advanced molten salt system.
4.25 Predicted performance for small modular intercooled NARC systems
All air-Brayton systems lose efficiency as more work is required to compress the air. As the air is compressed, it heats up, and it takes increasingly more work to compress it further.
Fig. 4.35 Relative system volume as a function of recuperator effectiveness.
One solution to overcoming this limitation is to split the compressor in two and cool the air after it leaves the first half. This results in the second half of the compressor working on cooler air and delivering cooler air to the primary heat exchangers. Since the air must now be heated more in the primary heat exchangers, the efficiency goes down; but if the system is recuperated, the recuperator can usually put back the lost heat. In other words, intercooling is a good way to improve efficiency, but to achieve the desired improvement the system must have a recuperator. The system characteristics for a near-term intercooled NARC system are presented in Table 4.7. Once again, the thermodynamic efficiency does not vary significantly with the number of reheat cycles or turbines involved. The efficiencies are about 3% higher than the nonintercooled case, and the pressures have increased by 0.75 to 1.5 atm. The system sizes are very comparable to the nonintercooled NARC system, with the recuperator shrinking to make room for the intercooler. An advanced molten salt intercooled NARC system is characterized in Table 4.8. This is the system that finally achieves an efficiency greater than 50%. Once again the system efficiency is not greatly dependent on the number of reheat cycles or turbines. The efficiencies are up by ∼2.5%, the pressures are up by 1–2 atm, and the mass flows are about two-thirds those of the nonintercooled advanced NARC system. The system volumes are very comparable, with the intercooled systems slightly smaller.
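Before the tables, a quick illustration of why splitting the compression helps. The sketch below compares single-stage compression with an evenly split, intercooled pair under constant, illustrative air properties; the pressure ratio and efficiency chosen are ours, for demonstration only.

```python
CP, GAMMA, E_C = 1005.0, 1.4, 0.90  # illustrative constant air properties

def comp_work(t_in_k: float, pr: float) -> float:
    """Compression work per Eqs. (4.7)-(4.8), J/kg, constant properties."""
    t_out = t_in_k * pr ** ((GAMMA - 1.0) / (GAMMA * E_C))
    return CP * (t_out - t_in_k)

pr_total, t0 = 4.6, 300.0
single_stage = comp_work(t0, pr_total)
# Split the pressure ratio evenly and cool back to t0 between the halves:
half = pr_total ** 0.5
intercooled = 2.0 * comp_work(t0, half)
print(single_stage, intercooled)  # the intercooled path needs ~12% less work
```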
Table 4.7 Performance characteristics of a sodium air-Brayton intercooled NARC system.

| Characteristics | 2 Turbines | 3 Turbines | 4 Turbines | 5 Turbines |
| --- | --- | --- | --- | --- |
| Electrical power (MWe) | 50 | 50 | 50 | 50 |
| Efficiency (%) | 45.11 | 45.48 | 45.51 | 45.38 |
| Thermal power (MWt) | 110.8 | 109.9 | 109.9 | 110.2 |
| CPR | 3.251 | 3.595 | 4.119 | 4.429 |
| T (turbine inlet) (K) | 783 | 783 | 783 | 783 |
| T (turbine exit) (K) | 680 | 705 | 720 | 730 |
| Mass flow rate, air (kg/s) | 454.6 | 398.1 | 370.1 | 353.3 |
| Mass flow rate, water (kg/s) | 0 | 0 | 0 | 0 |
| Brayton power (MWe) | 50 | 50 | 50 | 50 |
| Rankine power (MWe) | 0 | 0 | 0 | 0 |
| Water heat dump (MWt) | 24.1 | 24 | 24.1 | 24.3 |
| Reactor size (m³) | 181 | 181 | 180 | 181 |
| HRSG (m³) | 0 | 0 | 0 | 0 |
| Recuperator (m³) | 81 | 69 | 63 | 60 |
| Brayton system (m³) | 127 | 111 | 105 | 102 |
| Rankine system (m³) | 0 | 0 | 0 | 0 |
| System volume (m³) | 316 | 299 | 292 | 290 |
Table 4.8 Performance characteristics of a molten salt air-Brayton intercooled system.

| Characteristics | 2 Turbines | 3 Turbines | 4 Turbines | 5 Turbines |
| --- | --- | --- | --- | --- |
| Electrical power (MWe) | 50 | 50 | 50 | 50 |
| Efficiency (%) | 52.01 | 52.36 | 52.46 | 52.43 |
| Thermal power (MWt) | 96.1 | 95.5 | 95.3 | 95.4 |
| CPR | 3.949 | 4.594 | 5.138 | 5.381 |
| T (turbine inlet) (K) | 948 | 948 | 948 | 948 |
| T (turbine exit) (K) | 808 | 843 | 863 | 878 |
| Mass flow rate, air (kg/s) | 282.6 | 249.5 | 230.8 | 223.9 |
| Mass flow rate, water (kg/s) | 0 | 0 | 0 | 0 |
| Brayton power (MWe) | 50 | 50 | 50 | 50 |
| Rankine power (MWe) | 0 | 0 | 0 | 0 |
| Water heat dump (MWt) | 17.9 | 17.8 | 17.9 | 17.9 |
| Reactor size (m³) | 167 | 166 | 166 | 166 |
| HRSG (m³) | 0 | 0 | 0 | 0 |
| Recuperator (m³) | 46 | 38 | 36 | 35 |
| Brayton system (m³) | 75 | 67 | 64 | 64 |
| Rankine system (m³) | 0 | 0 | 0 | 0 |
| System volume (m³) | 246 | 237 | 233 | 234 |
4.26 Performance variation of small modular intercooled NARC systems
Since the recuperator and primary heat exchanger have already been addressed for the NARC system, only the sensitivities to the intercooler pressure ratio and effectiveness will be addressed here. The intercooler pressure ratio has only a moderate effect on its size, as described in Fig. 4.36. Note also that the effect is the same for both the near-term and advanced systems. The system thermodynamic efficiencies once again look linear as a function of intercooler pressure ratio, with the near-term system decreasing 0.31% for every 1% decrease in the pressure ratio and the advanced system decreasing 0.24% for every 1% decrease. Somewhat similar to the recuperator, the intercooler volume decreases rather significantly as a function of its effectiveness; this sensitivity is described in Fig. 4.37. In dropping 20% in effectiveness, the size of the intercooler decreases by 81%. However, this does not have a significant effect on the overall system size, as the recuperator compensates somewhat: the overall system decreases in size by less than 1.5% for both systems as the intercooler size drops from its nominal value to 19% of that value. The thermodynamic efficiency of an intercooled NARC system is least sensitive to the effectiveness of the intercooler. For every 1% decrease in the effectiveness of the near-term intercooler, the thermodynamic efficiency decreases only 0.084%; for the advanced system, a 1% decrease in effectiveness generates only a 0.083% decrease in the thermodynamic efficiency of the whole system.
Fig. 4.36 Intercooler relative volume as a function of its pressure ratio.
Fig. 4.37 Intercooler relative volume as a function of its effectiveness.
4.27 Discussion
The performance of NACC and NARC systems has been estimated for a near-term sodium-cooled system and an advanced molten salt-cooled system. Both achieve reasonable thermodynamic efficiencies, significantly better than current LWR systems. However, efficiency may not be the driver in the development of advanced SMRs. The sensitivities of efficiency and size have been considered for all of the major components of NACC and NARC systems. Nuclear air-Brayton systems are very similar to gas turbine systems and can adapt much of the technology from those systems, and the industrial base for gas turbines is very large. NACC and NARC systems require significantly less environmental water to absorb their waste heat, with the simple recuperated system requiring zero environmental water. Since the air working fluid of a nuclear air-Brayton system is not consumed, it may be reheated and expanded several times. This adds a great deal of flexibility to NACC and NARC systems. Additional advantages will be discussed in the next subchapter.
4.28 Intermittent renewable energy systems and other challenges
The next major shift in the electrical energy market faced by advanced nuclear power plants is the onset of extensive renewable energy systems. Solar and wind systems are increasing at a phenomenal rate. From an economic standpoint, they have some of the same characteristics as nuclear: they are very capital intensive with a very low, or negligible, cost for their source of energy. However, once again there is not a Renewable Regulatory Agency to prescribe construction standards, so capital construction is not as uncertain as it is for similar nuclear installations. The real problem they present for nuclear plants on the grid is that they do not control their sources of energy. This makes them intermittent and not easily capable of matching output to demand. At the rate that renewable systems are being brought into the marketplace, they have saturated the production of electricity at certain times and certain periods of the year. The infamous "duck curve" has been observed in the California market by several organizations (Fig. 4.38) [21].

Fig. 4.38 The California duck curve.

The day-ahead prices for 2017 appear to form the outline of a duck. Note that for a significant part of the day the prices are negative. That is because the solar energy systems are subsidized, and someone besides the customers is willing to pay for the generation capacity. A similar problem occurs in Germany with regard to wind energy in the winter on the North Sea. Several of Germany's neighbors get free electricity during parts of some months due to German overproduction when strong winds occur. There are also places in the United States where wind energy saturates the electricity market for short periods. The obvious answer to this intermittent overproduction is some kind of storage. Unfortunately, it is very expensive to store electrical energy at this time, and a storage capability adds to the capital investment of a solar or wind power station [21].

Another challenge to new nuclear builds is the lack of cooling water to get rid of the waste heat required by current thermodynamic cycles. Typical nuclear plants have efficiencies in the range of 33% to 35%. This means that they must get rid of 65% to 67% of the energy produced. They do this by heating environmental water or by vaporizing water in the cooling towers that have become the symbols of nuclear plants. Conventional coal and gas plants are slightly more efficient, but still generally reject more energy than
they produce in terms of electricity [3,4]. The latest combined cycle plants do achieve efficiencies approaching 60% or better and are finally able to produce more electrical energy than the waste heat they have to dump. In all cases the waste heat goes into a circulating water system that either heats environmental water and then returns it to the environment to be cooled by atmospheric processes, or evaporates it in a cooling tower. In either case the atmosphere becomes the ultimate heat sink. Currently slightly under 50% of the fresh water in the United States is used to cool power plants. Not all of that fresh water is consumed, but this is still an amazing statistic. More water is consumed to cool power plants than is used for agricultural purposes. Construction of a new power plant is limited by the requirement to have a water heat dump nearby. This is why, for instance, all of Japan's nuclear power plants are built near the coast, and all of the US nuclear power plants include a water heat dump. The Palo Verde power station in Arizona is limited from expanding due to a water shortage, though there is an increased electrical energy demand in its market area, which includes parts of California. The final problem facing a new generation of nuclear power plants is the lack of a waste repository for commercial spent nuclear fuel and of a reprocessing capability to recover the plutonium and remaining U-235 in spent fuel. For a number of years it was thought that the best way to use the US uranium reserves was to build a series of breeder reactors that would produce more nuclear fuel than they consumed. This definitely requires a reprocessing capability to recover the plutonium from the once-through fuel. Since the United States has forgone that capability for the near future, the next best thing is to achieve as high a conversion ratio as possible in the reactors that are built. Basically, the idea is to design the cores so as to burn the plutonium in place, in the fuel elements in which it was produced. For a 3%–5% enriched fuel element in a current LWR at the end of its 3- to 4-year burn cycle, the bred plutonium is producing as much energy as the remaining U-235 in the element. If 19.75% enriched fuel elements are loaded in a fast reactor, it is possible to extend the refueling cycle significantly and burn more of the plutonium that has been produced. Conversion ratios exceeding 0.9 may be achievable. A waste repository, and hopefully a reprocessing capability, will still be required, but the magnitude of both can be significantly reduced by developing future SMRs with very high conversion ratios.
4.29 Dealing with the intermittency of renewable energy systems
Based on the intermittency of the common renewable sources for electrical energy, it should be obvious that some form of energy storage is needed. An advanced NACC or NARC system can help address this need. There are two direct ways that energy can be stored: as electrical charge or as heat. Electrical charge can be stored in batteries, and the US Department of Energy is addressing this issue with a goal of achieving ∼$150/kWh(e). On the other hand, the goal for heat storage is ∼$15/kWh(t) [20]. Given the order-of-magnitude difference in planned cost, it seems reasonable to consider ways of making use of heat storage. It should be pointed out that both methods of storage can use excess electricity. Obviously excess electricity can be used to charge batteries, but it can also be used to heat firebrick or to electrolyze water to produce hydrogen as a storage medium.
4.30 Energy storage as heat or electrical charge
Excess electrical energy can be stored either as electrical charge or as heat. Obviously electrical energy can charge a battery, but it can also be used with a resistance heater to heat firebrick [22]. Typically, both of these processes are in the 95% to 99% efficiency range. Of course, the recovery of electrical energy from a battery is more straightforward if that is the only application for that electrical energy, and it is fairly efficient depending on the depth of charge. Storing energy as heat, or as recoverable heat, allows the heat to be used directly in industrial processes as well as to generate electricity. Storing energy in electrolyzed hydrogen is not as efficient as storing it in firebrick, as the best processes currently available for electrolyzing water are only about 80% efficient [22]. However, storing energy in the form of hydrogen has some additional advantages. Hydrogen itself is useful in many industrial processes. Hydrogen can be made portable by storing it in pressurized tanks mounted on a tractor trailer. And hydrogen can generally be stored for a longer length of time without losing its energy value; a firebrick store will lose a small percentage of its heat daily through insulation leakage. The goal here is not to put quantitative estimates on all of the possible ways of storing excess electrical energy, but to show how firebrick and hydrogen can be used as heat storage media with nuclear air-Brayton systems. The economic utility of this type of storage will depend on many factors that cannot at this time be accurately quantified. However, an engineering path can be described for using hydrogen as a heat storage mechanism that will work well with SMRs and produce economic benefits in the electrical energy markets of the future.
4.31 Energy storage as heat—two approaches
Now consider a comparison of two approaches to storing enough heat for a daily reservoir of 1000 MWh. This corresponds roughly to 100 MW(t) for 10 hours of energy production. For a 50 MWe SMR operating at a thermal efficiency close to 50%, this should be a reasonable daily heat store coupled to a grid with a significant penetration of intermittent renewable energy sources. Consider providing the heat at 900, 1100, and 1700 K. Based on a reasonable set of design calculations, a firebrick heat store at 1200 °C, 4.1 m in diameter, and 18 m high could store 250 MWh of useable heat [22]. This gives a volume of 237.6 m³. A store this size would drive a turbine at a 783 K inlet temperature for 270 minutes, at a 900 K inlet temperature for 250 minutes, and at 1100 K for about 215 minutes. Obviously, it could not serve as a 1700 K source of heat. A similar store for 1000 MWh to drive a turbine at 933 K would require six of these, or approximately 1426 m³. To store the same amount of energy in the form of hydrogen gas at 2900 psi and 300 K requires about 1856 m³. This can be calculated using 241.8 MJ/kg-mole for the lower heating value of hydrogen to get a mass of 14,888 kg-moles to produce 3.6 × 10⁶ MJ of energy. Or the hydrogen store would require about 62% more volume; for the 1100 K requirement it would require 39% more volume than the heat store. The efficiency of the resistance heaters for the firebrick store should be in the range of 95% to 99%, and the daily losses have been estimated at 2.5% [21]. Thus, the electrical-to-heat output should be in the range of 92.6% to 96.5%. The electrical-to-heat output for electrolyzing water and burning hydrogen should be in the 80% range, based purely on the efficiency of electrolyzing hydrogen. The firebrick storage should be more efficient and require less volume but is not as flexible. Conceptually the hydrogen combustion is easier to understand, so it will be used as representative of heat storage that can be used with an air-Brayton system.
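The hydrogen-store volume quoted above can be reproduced with the ideal gas law. The sketch below does so; real hydrogen at roughly 200 atm has a compressibility factor above 1, so an ideal-gas estimate slightly understates the tankage, but it matches the figure used here.

```python
R = 8.314  # J/(mol*K)
PSI_TO_PA = 6894.76

# Quantities quoted above:
energy_mj = 3.6e6              # 1000 MWh of heat, in MJ
lhv_mj_per_kmol = 241.8        # lower heating value of hydrogen
kmol_h2 = energy_mj / lhv_mj_per_kmol   # ~14,888 kg-moles

p_pa, t_k = 2900.0 * PSI_TO_PA, 300.0   # 2900 psi, 300 K
volume_m3 = kmol_h2 * 1000.0 * R * t_k / p_pa   # ideal-gas volume
print(round(kmol_h2), round(volume_m3))  # ~14888 kg-mol, ~1857 m^3
```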
4.32 Hydrogen combustion to augment NACC output
Since the air that passes through the turbines in a nuclear air-Brayton system has not been burned, it is possible to add a combustion chamber after the last nuclear heat exchanger in the chain of heat exchangers and turbines. This can be done to boost the temperature entering the last turbine before the gases are passed to the heat recovery steam generator. The air can only be burnt once, and once it has been heated by combustion it is too hot to pass through another nuclear sodium-to-air heat exchanger; combustion can only be added before the last air turbine. If combustion is to take place before the last turbine, and that turbine is going to extract as much energy as possible from the hot gases, it will be useful for that turbine to have a greater pressure drop than the other turbines in the airstream. Up to now all turbines have been designed to have the same pressure ratio. With combustion added before the last turbine, there is an advantage for this turbine to have a larger pressure drop, or expansion ratio. A larger expansion ratio results in a lower turbine exit temperature, so consider Table 4.9, where the overall efficiency is calculated as a function of turbine exit temperatures for the near-term system. The combustion burn temperature has been chosen as 1100 K, the approximate limit for an uncooled turbine [18]. All effects of importance can be demonstrated by the nominal three-turbine system as before.

Column definitions:
Turbines 1,2 exit temp (K): The exit temperature in kelvins for the first two turbines for an input temperature of 783 K under normal operation (no H2 burn).
Turbine 3 exit temp (K): The exit temperature in kelvins for the last turbine for an input temperature of 783 K under normal operation (no H2 burn).
Base efficiency: Thermodynamic efficiency for the combined cycle without combustion.
Burn efficiency: Thermodynamic efficiency for the burning of the H2.
Combined efficiency: Thermodynamic efficiency for the whole system with the combustion turned on. (Not a particularly useful number due to the mixed energy sources.)
Brayton gain: The ratio of the air-Brayton cycle power with combustion to that without combustion.
Overall gain: The ratio of the system power with combustion augmentation to that without combustion augmentation, assuming the pinch point temperature spread is kept constant.
Steam flow ratio: The ratio of the steam flow with combustion to that without combustion.

Table 4.9 Performance parameters for near-term NACC system w/H2 burn to 1100 K.

| Turbines 1,2 exit temp (K) | Turbine 3 exit temp (K) | Base efficiency (%) | Burn efficiency (%) | Combined efficiency (%) | Brayton gain | Overall gain | Steam flow ratio |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 665.5 | 665.5 | 34.1 | 71.0 | 48.6 | 1.379 | 2.385 | 3.697 |
| 668 | 658 | 33.8 | 71.0 | 48.6 | 1.402 | 2.417 | 3.974 |
| 673 | 653 | 33.6 | 71.0 | 48.5 | 1.421 | 2.448 | 4.197 |
| 675.5 | 645.5 | 33.2 | 71.1 | 48.5 | 1.445 | 2.486 | 4.613 |
| 680.5 | 640.5 | 32.8 | 71.1 | 48.4 | 1.464 | 2.522 | 4.965 |
Several things are worth noting in Table 4.9. First, the system efficiency without a burn is highest if all of the turbines have identical expansion ratios, or exit temperatures. The table was only taken to the point where the normal efficiency had dropped by approximately 1%. It could have been taken further, with the observed trends continuing, but a 1% drop gives a good idea of the expected performance. Second, the thermodynamic efficiency of burning the hydrogen is greater than 70%. This is more efficient than any gas turbine combined cycle system that could be brought on line to provide peaking power. And third, the overall power gain is exceptional if the pinch point temperature difference in the steam cycle can be kept constant. This is a nontrivial requirement. Combustion of the air flowing through the air-Brayton part of the cycle can occur with the mass flow of air held constant. The nozzle entering the third turbine must be expanded to deal with the increase in temperature of the combustion products and their slight increase in added mass; the capability to do this has been demonstrated. However, to hold the pinch point temperature of the steam cycle constant, the mass flow of steam must be increased significantly. This may or may not be accomplished easily. If the steam turbines are operated at partial power during normal operation, this may cause a significant drop in the normal efficiency. If the pinch point temperature is allowed to increase, this will reduce the added power achieved as a result of combustion, as the exhaust temperature will increase, resulting in more heat being dumped to the atmosphere. The minimum power increase is probably given by the Brayton gain factor with the steam cycle operating at normal mass flow but higher temperatures. Thus, the power gain is somewhere between ∼40% and ∼140%. At this point it is not clear which approach is optimal; further investigation will be required.
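The gain and burn-efficiency columns are simple ratios, and the bookkeeping is worth making explicit. In the sketch below the powers and hydrogen heat are back-calculated illustrative numbers chosen to be consistent with row 1 of Table 4.9, not values taken from the underlying model.

```python
def burn_efficiency(p_with_mwe: float, p_without_mwe: float, q_h2_mwt: float) -> float:
    """Marginal efficiency of the hydrogen burn: extra electrical output
    per unit of chemical heat added."""
    return (p_with_mwe - p_without_mwe) / q_h2_mwt

def overall_gain(p_with_mwe: float, p_without_mwe: float) -> float:
    """'Overall gain' column: system power with combustion over without."""
    return p_with_mwe / p_without_mwe

# Consistent with row 1 of Table 4.9 (overall gain 2.385, burn eff. 71.0%):
# a 50 MWe base rising to ~119 MWe on ~97 MWt of hydrogen heat.
print(overall_gain(119.25, 50.0), burn_efficiency(119.25, 50.0, 97.5))
```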
Consider now a hydrogen burn that brings the turbine inlet temperature for turbine 3 to 1700 K, the rough state of the art for cooled turbine blades [18]. The data are reported in Table 4.10 with the same meanings for each of the columns.

Table 4.10 Performance parameters for near-term NACC system w/H2 burn to 1700 K.

| Turbines 1,2 exit temp (K) | Turbine 3 exit temp (K) | Base efficiency (%) | Burn efficiency (%) | Combined efficiency (%) | Brayton gain | Overall gain | Steam flow ratio |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 665.5 | 665.5 | 34.1 | 73.6 | 60.0 | 2.099 | 5.288 | 6.176 |
| 668 | 658 | 33.8 | 73.8 | 60.1 | 2.167 | 5.398 | 6.724 |
| 673 | 653 | 33.6 | 73.9 | 60.2 | 2.220 | 5.499 | 7.165 |
| 675.5 | 645.5 | 33.2 | 74.1 | 60.3 | 2.290 | 5.625 | 7.984 |
| 680.5 | 640.5 | 32.8 | 74.2 | 60.4 | 2.347 | 5.744 | 8.675 |
Once again, the chemical burn efficiency is greater than 70%. The Brayton gain by itself is greater than 100%, and the overall gain is greater than ∼400%, with steam flow ratios approaching a factor of ∼9. Whether this is even close to possible needs to be investigated, but the Brayton gain is well worth pursuing. An alternative to significantly increasing the steam mass flow may be adding an additional turbine to the drive train during periods of high output demand. The calculations for an advanced NACC system using a molten salt coolant are given in Tables 4.11 and 4.12. The base system efficiencies do not fall off as fast, and the gains are more modest and more likely achievable. In all cases, though, the burn efficiencies remain above 70%, making this configuration more efficient than any current or planned gas turbine combined cycle system.

Table 4.11 Performance parameters for advanced NACC system w/H2 burn to 1100 K.

| Turbines 1,2 exit temp (K) | Turbine 3 exit temp (K) | Base efficiency (%) | Burn efficiency (%) | Combined efficiency (%) | Brayton gain | Overall gain | Steam flow ratio |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 767.5 | 767.5 | 46.5 | 74.6 | 51.6 | 1.133 | 1.368 | 1.598 |
| 770.5 | 762.5 | 46.3 | 74.6 | 51.5 | 1.139 | 1.372 | 1.624 |
| 775 | 755 | 46.2 | 74.6 | 51.5 | 1.143 | 1.376 | 1.640 |
| 777.5 | 747.5 | 46.1 | 74.6 | 51.4 | 1.148 | 1.382 | 1.668 |
| 780 | 740 | 45.9 | 74.5 | 51.3 | 1.154 | 1.387 | 1.700 |
| 785 | 735 | 45.8 | 74.5 | 51.3 | 1.158 | 1.392 | 1.723 |
| 787.5 | 727.5 | 45.6 | 74.5 | 51.2 | 1.164 | 1.398 | 1.760 |
| 792.5 | 722.5 | 45.5 | 74.5 | 51.1 | 1.168 | 1.403 | 1.787 |
Table 4.12 Performance parameters for advanced NACC system w/H2 burn to 1700 K.

| Turbines 1,2 exit temp (K) | Turbine 3 exit temp (K) | Base efficiency (%) | Burn efficiency (%) | Combined efficiency (%) | Brayton gain | Overall gain | Steam flow ratio |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 767.5 | 767.5 | 46.5 | 74.1 | 61.0 | 1.660 | 2.864 | 2.850 |
| 770.5 | 762.5 | 46.3 | 74.3 | 61.1 | 1.687 | 2.893 | 2.933 |
| 775 | 755 | 46.2 | 74.4 | 61.2 | 1.707 | 2.916 | 2.993 |
| 777.5 | 747.5 | 46.1 | 74.5 | 61.3 | 1.734 | 2.947 | 3.089 |
| 780 | 740 | 45.9 | 74.7 | 61.4 | 1.762 | 2.979 | 3.196 |
| 785 | 735 | 45.8 | 74.8 | 61.4 | 1.783 | 3.006 | 3.272 |
| 787.5 | 727.5 | 45.6 | 74.9 | 61.5 | 1.812 | 3.041 | 3.398 |
| 792.5 | 722.5 | 45.5 | 75.0 | 61.6 | 1.834 | 3.070 | 3.490 |
4.33 Hydrogen combustion to augment NARC output
Now consider using hydrogen combustion to augment the output of a recuperated NARC system. This is a little more complicated. When the exhaust temperature of the last turbine is increased, the recuperator temperatures are increased. This means that the temperature of the air going into the first sodium-to-air heat exchanger is increased, so the reactor does not have to produce as much heat for this heat exchanger. In fact, there is a maximum burn temperature for the hydrogen at which there is no longer any heat transfer in the first heat exchanger; that is, the recuperator provides all of the heat required for the first turbine. The reactor is then only providing heat to the second and third heat exchangers, and so its power output must be decreased. It still serves as a preheater for the combustion of the hydrogen before the last turbine. Consider what this looks like in Table 4.13.

New column definitions:
Burn gain: The ratio of the power provided with hydrogen combustion to the base power.
Burn temp (K): The maximum burn temperature that provides air at 783 K to the first turbine.
RX power: The ratio of the power required from the reactor during the H2 burn to its base power.

Table 4.13 Performance parameters for near-term NARC system w/H2 burn.

| Turbines 1,2 exit temp (K) | Turbine 3 exit temp (K) | Base efficiency (%) | Burn efficiency (%) | Combined efficiency (%) | Burn gain | Burn temp (K) | RX power |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 725.5 | 725.5 | 41.7 | 59.6 | 42.6 | 1.077 | 866.6 | 0.608 |
| 735.5 | 705.8 | 41.8 | 67.2 | 43.7 | 1.135 | 890.4 | 0.505 |
| 745.5 | 685.5 | 41.7 | 72.6 | 45.0 | 1.211 | 915.4 | 0.402 |
| 765.5 | 655.5 | 40.9 | 78.8 | 47.2 | 1.390 | 958.7 | 0.220 |
Table 4.14 Performance parameters for advanced NARC system w/H2 burn.

| Turbines 1,2 exit temp (K) | Turbine 3 exit temp (K) | Base efficiency (%) | Burn efficiency (%) | Combined efficiency (%) | Burn gain | Burn temp (K) | RX power |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 867.5 | 867.5 | 49.4 | 61.4 | 50.1 | 1.076 | 1065.3 | 0.609 |
| 882.5 | 842.5 | 49.5 | 68.5 | 51.1 | 1.129 | 1096.0 | 0.508 |
| 900 | 810 | 49.4 | 74.9 | 52.6 | 1.220 | 1138.0 | 0.380 |
| 922.5 | 762.5 | 48.5 | 81.1 | 54.8 | 1.409 | 1204.2 | 0.203 |
Note that the peak efficiency for normal operation in this case does not occur when all of the turbines have the same exit temperature. The normal-operation efficiency goes up 0.1 percentage points and then starts dropping; it recovers to the same initial efficiency when the exit temperature for the third turbine is 92% of that for the first two turbines. Operating at the peak normal efficiency, the burn gain is only 13.5%. However, if the normal efficiency can be relaxed by approximately 1%, the burn gain can be increased to ∼40%. When this is done, however, the reactor power must be decreased to approximately 22% of normal. The system is producing 39% more power, but the reactor has dropped to 22% of normal, so the hydrogen burn is producing 117% of normal, or approximately 64 MWe. However, it is burning this hydrogen with a 78.8% thermal efficiency, greater than any achieved by current gas turbine combined cycle systems. This occurs because the air has been preheated by the nuclear reactor prior to entering the combustion chamber. The performance of an advanced NARC system using a molten salt coolant is described in Table 4.14. All of the same phenomena are present. The optimum efficiency occurs with a ratio of the exit temperature of the third turbine to that of the first two of 95%, and this ratio can decrease to 90% before the efficiency drops off to the all-turbines-equal case. Giving up 1% efficiency for normal operation allows the peak power gain during combustion to be 40.9% of normal. This will require a combustion temperature of 1205 K, requiring a modestly cooled turbine by current technology. The efficiency of the burn remains high, achieving over 80% for the 1205 K combustion temperature. Once again, the reactor power must be turned down to about 20% of normal to allow the combustion to work.
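The reactor-turndown bookkeeping above is simple enough to show directly; a minimal sketch, using the last row of Table 4.13:

```python
def h2_share_of_normal(burn_gain: float, rx_power_fraction: float) -> float:
    """During a NARC burn the plant delivers burn_gain x normal output while
    the reactor supplies only rx_power_fraction x normal, so the hydrogen
    burn supplies the difference (in units of normal plant output)."""
    return burn_gain - rx_power_fraction

# Last row of Table 4.13: gain 1.390 with the reactor turned down to 22%.
print(h2_share_of_normal(1.390, 0.220))  # ~1.17, the "117% of normal" above
```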
4.34 Hydrogen combustion to augment intercooled NARC output
Intercooled NARC systems offer greater efficiency than the simply recuperated systems but behave very similarly in many ways. Consider Table 4.15 for a near-term sodium-cooled NARC system with an intercooler. The normal system efficiency is slightly better, the burn efficiency is slightly better, and the burn gain is better. The reactor power turndown requirement is not as severe but is still very significant.
Table 4.15 Performance parameters for near-term intercooled NARC system w/H2 burn.
Turbines 1,2   Turbine 3     Base            Burn            Combined burn   Burn    Burn     RX
exit temp-K    exit temp-K   efficiency (%)  efficiency (%)  efficiency (%)  gain    temp-K   power
705.5          705.5         44.6            67.3            46.0            1.097    892.2   0.620
720.5          680.5         44.7            73.9            47.4            1.169    923.8   0.514
730.5          660.5         44.6            77.7            48.6            1.242    950.4   0.435
748            618           43.7            83.4            51.1            1.447   1011.6   0.285
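As a quick cross-check of the four tables (Table 4.16 appears just below), the sketch below computes, for the highest-gain row of each, the ratio of the turbine 3 exit temperature to that of turbines 1 and 2; the values are transcribed from the tables, and the last two ratios reproduce the 83% and 80% figures discussed in the surrounding text.

# Exit-temperature ratio vs. burn gain for the highest-gain row of each table.
# Values transcribed from Tables 4.13-4.16; the intercooled cases should
# reproduce the 83% and 80% figures quoted in the text.
last_rows = {
    "4.13 near-term NARC":             (765.5, 655.5, 1.390),
    "4.14 advanced NARC":              (922.5, 762.5, 1.409),
    "4.15 near-term intercooled NARC": (748.0, 618.0, 1.447),
    "4.16 advanced intercooled NARC":  (902.5, 722.5, 1.448),
}
for name, (t12, t3, gain) in last_rows.items():
    ratio = t3 / t12  # turbine 3 exit temperature relative to turbines 1 and 2
    print(f"Table {name}: T3/T12 = {ratio:.0%}, burn gain = {gain:.3f}")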
The ratio of the turbine 3 exhaust temperature to the turbine 1 and 2 exhaust temperatures required to achieve a >40% increase in power is only 83%. The bigger turbine is working better. Consider the advanced molten salt system in Table 4.16. In this case everything looks better once again. Now the exhaust temperature for the larger third turbine is only 80% of the exhaust temperature for the other two turbines. The combustion burn is still producing 117% of the normal reactor power. The optimum burn temperature of 1269 K can still be accomplished with a heat storage reservoir at 1200 °C that is slightly smaller than the volume of the stored hydrogen.

Table 4.16 Performance parameters for advanced intercooled NARC system w/H2 burn.
Turbines 1,2   Turbine 3     Base            Burn            Combined burn   Burn    Burn     RX
exit temp-K    exit temp-K   efficiency (%)  efficiency (%)  efficiency (%)  gain    temp-K   power
842.5          842.5         52.4            68.5            53.4            1.094   1098.2   0.621
865            805           52.5            75.7            54.9            1.173   1147.1   0.503
875            785           52.4            78.5            55.7            1.226   1174.6   0.445
902.5          722.5         51.5            84.7            58.4            1.448   1268.7   0.276

4.35 Conclusions
A modern nuclear air-Brayton power conversion system on an SMR can address most of the problems facing the nuclear industry in the future. It will take advantage of a well-established industry for producing power conversion components. If the SMR is a high conversion ratio fast reactor, it can reduce future difficulties by burning a much larger fraction of our natural uranium resources. The much smaller, and in some cases zero, waste heat rejection requirements for nuclear air-Brayton systems will mitigate some of the consumption of surface fresh water in the United States. Nuclear air-Brayton power conversion systems can also deal with increased power demands above their normal base-load operating power much more easily than current LWR systems. They can work off stored heat or stored hydrogen. In fact, they can be made to work off natural gas or any other combustible if the combustion chamber is so designed. If so, they will achieve a burn efficiency exceeding that of any gas turbine combined cycle system that currently
exists or is likely to be built in the near future. The NACC systems can operate efficiently with a power gain from ∼10% up to a factor of almost 800%, depending on how the steam bottoming cycle is handled. The NARC systems can operate with a power gain of ∼10% up to 45%, depending on how the system is designed. It is clear that nuclear air-Brayton SMRs can operate economically on an electric grid with significant penetration of intermittent renewable energy systems.
References
[1] B. Zohuri, Hybrid Energy Systems: Driving Reliable Renewable Sources of Energy Storage, Springer Publishing Company, 2017.
[2] B. Zohuri, Nuclear Energy for Hydrogen Generation through Intermediate Heat Exchangers: A Renewable Source of Energy, Springer Publishing Company, 2016.
[3] B. Zohuri, Combined Cycle Driven Efficiency for Next Generation Nuclear Power Plants: An Innovative Design Approach, 1st Ed., Springer Publishing Company, 2014.
[4] B. Zohuri, P. McDaniel, Combined Cycle Driven Efficiency for Next Generation Nuclear Power Plants: An Innovative Design Approach, 2nd Ed., Springer Publishing Company, 2018.
[5] Economic and Employment Impacts of Small Modular Reactors, Energy Policy Institute of the Center for Advanced Energy Studies, June 2010.
[6] S. Thomas, P. Bradford, A. Froggatt, D. Milborrow, The Economics of Nuclear Power, report published by greenpeace.org, 2007.
[7] The table above has been extracted from an Australian study titled "Water Requirements of Nuclear Power Stations."
[8] Another source of information is the Union of Concerned Scientists report titled "Got Water?".
[9] Nuclear Energy, Report to Congress, U.S. Department of Energy, Office of Nuclear Energy, April 2010.
[10] B. Zohuri, P. McDaniel, Thermodynamics in Nuclear Power Plant Systems, 2nd Ed., Springer Publishing Company, 2017.
[11] J.H. Horlock, Cogeneration-Combined Heat and Power (CHP), Krieger Publishing Company, Malabar, FL, 1997.
[12] R.W. Haywood, Analysis of Engineering Cycles, 4th Ed., Pergamon Press, Oxford, 1991.
[13] J.D. Mattingly, Elements of Gas Turbine Propulsion, McGraw-Hill, Inc., New York, 1996.
[14] T.M.A. Blumberg, T. Morosuk, G. Tsatsaronis, Comparative exergoeconomic evaluation of the latest generation of combined-cycle power plants, Energy Convers. Manage. 153 (2017) 616–626.
[15] D.G. Wilson, The Design of High-Efficiency Turbomachinery and Gas Turbines, MIT Press, Cambridge, MA, 1984.
[16] V. Dostal, M.J. Driscoll, P.A. Hejzlar, A Supercritical Carbon Dioxide Cycle for Next Generation Nuclear Reactors, Tech. Rep. MIT-ANP-TR-100, Massachusetts Institute of Technology, Cambridge, MA, 2004.
[17] A.E. Waltar, D.R. Todd, P.V. Tsvetkov, Fast Spectrum Reactors, Springer Science, New York, 2012.
[18] D.G. Wilson, T. Korakianitis, The Design of High-Efficiency Turbomachinery and Gas Turbines, 2nd Ed., Prentice-Hall, Upper Saddle River, NJ, 1998.
[19] W.M. Kays, A.L. London, Compact Heat Exchangers, 3rd Ed., Krieger Publishing Company, Malabar, FL, 1998.
[20] D.L. Chase, Combined Cycle Development and Future, GER-4206, 2001.
[21] J. Buongiorno, M. Corradini, J. Parsons, D. Petti, The Future of Nuclear Energy in a Carbon-Constrained World, an Interdisciplinary Study, Massachusetts Institute of Technology, 2018.
[22] C.W. Forsberg, D.C. Stack, D. Curtis, G. Haratyk, N.A. Sepulveda, Converting excess low-price electricity into high-temperature stored heat for industry and high-value electricity production, Electr. J. 30 (2017) 42–52.
[23] G. Schiller, A. Ansar, M. Lang, O. Patz, High temperature electrolysis using metal supported solid oxide electrolyser cells (SOEC), J. Appl. Electrochem. 39 (2009) 293–301.
CHAPTER 5
Thermonuclear fusion reaction driving electrical power generation
Energy demand is expected to more than double by 2050 as the combined effect of population growth and increasing energy consumption per capita in developing countries. Fossil fuels presently satisfy 80% of primary energy demand, but their impact on the environment through greenhouse gas emissions is unacceptable. Energy sources that can prove their long-term sustainability and security of supply must replace fossil fuels. The solution to the energy problem can come only from a portfolio of options that includes improvements in energy efficiency and, to degrees varying among countries, renewable energy, nuclear fission, and carbon capture and sequestration. Fusion has advantages that ensure sustainability and security of supply: its fuels are widely available and virtually unlimited; it produces no greenhouse gases; it is intrinsically safe, as no chain reaction is possible; and it is environmentally responsible: with a proper choice of materials for the reaction chamber, radioactivity decays within a few tens of years, and about 100 years after reactor shutdown all the materials can be recycled in a new reactor.
5.1 Introduction
The greatest increase in demand for energy is envisaged to come from developing countries where, with rapid urbanization, large-scale electricity generation will be required. With environmental requirements for zero or low CO2 emission sources and the need to invest in a sustainable energy mix, new energy sources must be developed. Fusion will be available as a future energy option by the middle of this century and should be able to acquire a significant role in providing a sustainable, secure, and safe solution to tackle European and global energy needs. Fusion is the process that powers the sun and the stars. It is the energy that makes all life on earth possible. It is called "fusion" because the energy is produced by fusing together light atoms, such as hydrogen, at the extremely high pressures and temperatures that exist at the center of the sun (15 million °C). At the high temperatures experienced in the sun, any gas becomes plasma, the fourth state of matter, with solid, liquid, and gas being the other three (see Fig. 5.1). Plasma can be described as an "electrically charged gas" in which the negatively charged electrons in atoms are completely separated from the positively charged atomic nuclei (or ions). Although plasma is rarely found on earth, it is estimated that more than 99% of the universe exists as plasma.
Fig. 5.1 Fusion reaction inside the sun.
Fusion power is a theoretical form of power generation in which energy would be generated by using nuclear fusion reactions to produce heat for electricity generation. In a fusion process, two lighter atomic nuclei combine to form a heavier nucleus and, at the same time, release energy. This is the same process that powers stars like our sun. Devices designed to harness this energy are known as fusion reactors. With the reduction of CO2 emissions driving future energy policy, fusion could begin market penetration around 2050 and supply up to 30% of electricity production by 2100; it is especially expected to contribute to base-load generation. In order to replicate this process on earth, gases need to be heated to extremely high temperatures of about 150 million degrees Celsius, whereby atoms become completely ionized. The fusion reaction that is easiest to accomplish, shown in Fig. 5.2, is the reaction between two hydrogen isotopes: deuterium, extracted from water, and tritium, produced during the fusion reaction through contact with lithium. When deuterium and tritium nuclei fuse, they form a helium nucleus, a neutron, and a lot of energy.
Fig. 5.2 Fusion reaction driving energy production.
Fusion of light nuclei is the energy source that powers the sun. A fusion power plant utilizes the fusion reaction between tritium and deuterium. The process yields a helium nucleus and a neutron, whose energy is harvested for electricity production. Deuterium is widely available, but tritium exists only in tiny quantities. The fusion reactor has to produce it via a reaction between the neutron and lithium. Lithium, again, is abundant in the earth's crust and in seawater. The global deuterium and lithium resources can satisfy the world's energy demand for millions of years, making fusion a virtually unlimited source of energy as long as we have ocean water around us; the corresponding nuclear reactions are illustrated in Fig. 5.3 [1]. Atomic nuclei are positively charged and repel each other. They only fuse if they collide fast enough to overcome the repelling force. As particle speed corresponds to temperature, the fusion fuels have to be heated to about 200 million °C, more than 10 times hotter than the core of the sun. At these temperatures, atoms dissolve into nuclei and electrons, forming a gas of charged particles called plasma. The hot fusion plasma must not touch the reactor wall, and it is therefore confined by means of magnetic fields.
Fig. 5.3 Fusion reaction as an unlimited energy source.
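For reference, the two nuclear reactions just described can be written explicitly; the energy-partition figures (3.5 MeV to the helium nucleus and 14.1 MeV to the neutron) are the standard values for the D–T reaction and are quoted here for completeness rather than taken from this text:

\[ \mathrm{D} + \mathrm{T} \rightarrow {}^{4}\mathrm{He}\ (3.5\ \mathrm{MeV}) + \mathrm{n}\ (14.1\ \mathrm{MeV}) \]
\[ \mathrm{n} + {}^{6}\mathrm{Li} \rightarrow {}^{4}\mathrm{He} + \mathrm{T} + 4.8\ \mathrm{MeV} \]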
The technology of confining hot plasmas in a doughnut-shaped chamber is routine in fusion experiments worldwide. The basis for confining the hot fusion plasma produced by the nuclear reaction between deuterium and tritium lies in two different technical methods:
1. Magnetic confinement-driven fusion, or magnetic confinement fusion (MCF) [2].
2. Inertial confinement-driven fusion, or inertial confinement fusion (ICF) [3].
Each of these technical approaches and processes is described in more detail in subsequent sections of this chapter. Scientists have built devices able to produce temperatures more than 10 times higher than those in the sun. To reach these temperatures there must first be powerful heating, and thermal losses must be minimized by keeping the hot fuel particles away from the walls of the container. This is achieved by creating a magnetic "cage" made of strong magnetic fields that prevent the particles from escaping. For energy production this plasma has to be confined for a sufficiently long period for fusion to occur. The simplest such configuration, illustrated in Fig. 5.4, is known as the tokamak; it was originally suggested by Russian scientists and has the form of a donut. In a tokamak the plasma is held in a doughnut-shaped vessel. Using special coils, a magnetic field is generated, which causes the plasma particles to run around in spirals, without touching the wall of the chamber, as suggested in Fig. 5.4 by the European Fusion Development Agreement (EFDA). EFDA has been succeeded by EUROfusion, a consortium of national fusion research institutes located in the European Union and Switzerland. The European Union has a strongly coordinated nuclear fusion research program.
Fig. 5.4 Simple tokamak infrastructure illustration. (Courtesy: EFDA).
Fig. 5.5 JET assembly. (Courtesy: EFDA).
The most developed configuration at present is the tokamak, a Russian word for a torus-shaped magnetic chamber. Scientists have succeeded in producing, in fusion devices, plasmas with temperatures 10 times higher than that of the sun's core. Megawatts of power have been produced for a few seconds. In Europe, this has been achieved in the Joint European Torus (JET), the world's largest fusion device, which currently holds the world record for fusion power (see Fig. 5.5). Nearly 2000 scientists and engineers are currently working on a broad range of fusion R&D projects in more than 20 laboratories, including JET. Fusion energy has the potential to provide a sustainable solution to European and global energy needs. The International Thermonuclear Experimental Reactor (ITER), whose name also means "the way" in Latin, is an international collaboration on an experimental facility. It is the world's largest energy project and aims to demonstrate, via magnetic confinement fusion (MCF), that fusion can be part of the solution to improving our energy mix to meet global energy needs.
Fig. 5.6 Schematic of the stages of inertial confinement fusion using lasers.
ITER is an international nuclear fusion research and engineering megaproject, which will be the world's largest magnetic confinement plasma physics experiment. On the other hand, at the National Ignition Facility (NIF), operated by Lawrence Livermore National Laboratory (LLNL), the idea of ICF grew out of the decades-long effort to generate fusion burn and gain in the laboratory, using high-energy, high-power lasers to heat and compress a fuel target, typically in the form of a pellet that most often contains a mixture of the two hydrogen isotopes deuterium (D) and tritium (T), whose fusion reaction is illustrated in Figs. 5.2 and 5.3. The process takes place as a laser-driven pellet implosion, shown schematically in Fig. 5.6. In Fig. 5.6, the blue arrows represent radiation, the orange arrows blow-off, and the purple arrows inwardly transported thermal energy, as described in the following stages:
1. Laser beams or laser-produced X-rays rapidly heat the surface of the fusion target, forming a surrounding plasma envelope.
2. Fuel is compressed by the rocket-like blow-off of the hot surface material.
3. During the final part of the capsule implosion, the fuel core reaches 20 times the density of lead and ignites at 100,000,000 °C.
4. Thermonuclear burn spreads rapidly through the compressed fuel, yielding many times the input energy.
As we stated in previous chapters on nuclear fission processes, current nuclear power plants, which use fission, or the splitting of atoms, to produce energy have been pumping out electric power for more than 50 years. But nuclear fusion burn and gain have not yet been demonstrated to be viable for electricity production. For fusion burn and gain to occur, a special fuel consisting of the hydrogen isotopes deuterium and tritium must first "ignite." A primary goal for NIF is to achieve fusion ignition, in which the energy generated from the reaction outstrips the rate at which X-ray radiation losses and electron conduction cool the implosion. NIF was designed to produce extraordinarily high temperatures and pressures: tens of millions of degrees and pressures many billion times greater than earth's atmosphere. These conditions currently exist only in the cores of stars and planets and in nuclear
Fig. 5.7 Illustration of NIF's 192 beams directed into a gold cylinder known as a hohlraum.
weapons. In a star, strong gravitational pressure sustains the fusion of hydrogen atoms. The light and warmth that we enjoy from the sun, a star 93 million miles away, are reminders of how well the fusion process works and the immense energy it creates. Replicating the extreme conditions that foster the fusion process has been one of the most demanding scientific challenges of the past half-century. Physicists have pursued a variety of approaches to achieve nuclear fusion in the laboratory and to harness this potential source of unlimited energy for future power plants. Referring to Fig. 5.7, all of the energy of NIF's 192 beams is directed inside a gold cylinder called a hohlraum, which is about the size of a dime. A tiny capsule inside the hohlraum contains atoms of deuterium (hydrogen with one neutron) and tritium (hydrogen with two neutrons) that fuel the ignition process. In this process the capsule and its deuterium–tritium fuel will be compressed to a density 100 times that of solid lead and heated to more than 100 million degrees Celsius, hotter than the center of the sun. These conditions are just those required to initiate thermonuclear fusion, the energy source of stars.
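To get a feel for what "100 times the density of solid lead" implies geometrically, the sketch below estimates the radial convergence a spherical capsule would need. It is a minimal illustration: the initial DT ice density (~0.25 g/cm³) and the 1-mm initial fuel radius are assumptions chosen for illustration, not NIF design values.

# Rough convergence-ratio estimate for a spherical ICF implosion (sketch).
# Assumptions (illustrative only): initial DT ice density ~0.25 g/cm^3 and
# a 1-mm initial fuel radius; target density = 100x solid lead.

RHO_LEAD = 11.34            # g/cm^3, solid lead
rho_initial = 0.25          # g/cm^3, assumed cryogenic DT ice density
rho_final = 100 * RHO_LEAD  # ~1134 g/cm^3, as quoted in the text

# Mass conservation for a uniform sphere: rho * r^3 = const, so the
# radius must shrink by the cube root of the density ratio.
density_ratio = rho_final / rho_initial
convergence = density_ratio ** (1.0 / 3.0)

r0_mm = 1.0                 # assumed initial fuel radius
print(f"density ratio      : {density_ratio:,.0f}x")                 # ~4,500x
print(f"radial convergence : {convergence:.0f}x")                    # ~17x
print(f"final radius       : {r0_mm / convergence * 1000:.0f} um")   # ~60 um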
5.2 Magnetic confinement fusion (MCF)
MCF is an approach to generating thermonuclear fusion power that uses magnetic fields to confine the hot fusion fuel in the form of a plasma. Magnetic confinement is one of two major branches of fusion energy research, the other being inertial confinement fusion. The magnetic approach dates to the 1940s and has seen the majority of development since then. It is usually considered more promising for practical power production.
Fig. 5.8 The reaction chamber of the Tokamak à configuration variable (TCV).
Fusion reactions combine light atomic nuclei such as hydrogen to form heavier ones such as helium, producing energy. In MCF the reaction takes place in the reaction chamber of a machine such as the Tokamak à configuration variable (TCV), an experimental tokamak fusion reactor at the École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland, which has been used in research since it was built in 1992. The characteristic torus-shaped chamber is clad with graphite to help it withstand the extreme heat (the shape is distorted by the camera's fisheye lens), as illustrated in Fig. 5.8. As stated earlier, the TCV, literally "variable configuration tokamak," is a Swiss research fusion reactor of the École Polytechnique Fédérale de Lausanne. Its distinguishing feature over other tokamaks is that its torus section is 3 times higher than it is wide.
This allows studying several shapes of plasmas, which is particularly relevant since the shape of the plasma is linked to the performance of the reactor. The TCV started up in November 1992. In order to overcome the electrostatic repulsion between the nuclei, they must have a temperature of several tens of millions of degrees, under which conditions they no longer form neutral atoms but exist in the plasma state. In addition, sufficient density and energy confinement are required, as specified by the Lawson criterion. At these temperatures, no material container could withstand the extreme heat of the plasma. MCF attempts to create these conditions by using the electrical conductivity of the plasma to contain it with magnetic fields. The basic concept can be thought of in a fluid picture as a balance between magnetic pressure and plasma pressure, or in terms of individual particles spiraling along magnetic field lines, as depicted in Fig. 5.9. In physics, the motion of an electrically charged particle such as an electron or ion in a plasma in a magnetic field can be treated as the superposition of a relatively fast circular motion around a point called the guiding center and a relatively slow drift of this point. The drift speeds may differ for various species depending on their charge states, masses, or temperatures, possibly resulting in electric currents or chemical separation. Fig. 5.9 shows (A) no disturbing force, (B) an electric field E, (C) an independent force F (e.g., gravity), and (D) an inhomogeneous magnetic field, grad H [3]. Developing a suitable arrangement of fields that contain the fuel ions without introducing turbulence or leaking the fuel at a profuse rate has proven to be a difficult problem. The development of magnetic fusion energy (MFE) has gone through three distinct phases. In the 1950s it was believed MFE would be relatively easy to achieve, and this developed into a race to build a suitable machine. By the late 1950s, it was clear that turbulence and instabilities in the plasma were a serious problem, and during the 1960s, "the doldrums," effort turned toward a better understanding of the physics of plasmas. In 1968, a Soviet team invented the tokamak magnetic confinement device, which demonstrated performance 10 times better than the best alternatives. Since then the MFE field has been dominated by the tokamak approach. Construction of a 500-MW power-generating fusion plant using this design, the ITER, began in France in 2007 and is scheduled to begin operation in 2025. Note that magnetic field effects also present an attractive opportunity to impose control over excited electronic states of molecules, drawing attention from many areas of technology. For example, in optical imaging, magnetic control over probes' emissivity may serve to enhance spatial resolution, while in the medical field the ability to magnetically modulate sensitized generation of singlet oxygen could be a valuable asset for photodynamic therapy. The key to such applications are molecules with strong optical transitions in the visible or near-infrared (NIR) spectral region and excited states responsive to magnetic fields.
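For readers who want the quantitative versions of the two ideas just invoked, the guiding-center E × B drift and the Lawson condition can be written compactly. The numerical threshold below is the commonly cited approximate triple-product requirement for D–T fuel, quoted here as general background rather than taken from this text:

\[ \mathbf{v}_{E\times B} = \frac{\mathbf{E} \times \mathbf{B}}{B^{2}}, \qquad n\,T\,\tau_{E} \gtrsim 3 \times 10^{21}\ \mathrm{keV\,s\,m^{-3}} \]

where n is the plasma density, T the temperature, and τ_E the energy confinement time.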
Fig. 5.9 Charged particle drifts in a homogeneous magnetic field.
5.2.1 Magnetic mirrors
A major area of work in the early years of fusion energy research was the magnetic mirror. Most early mirror devices attempted to confine plasma near the focus of a nonplanar magnetic field generated in a solenoid with the field strength increased at either end of the tube. In order to escape the confinement area, nuclei had to enter a small annular area near each magnet. It was known that nuclei would escape through this area, but it was felt this could be overcome by adding and heating fuel continually. In 1954, Edward Teller gave a talk in which he outlined a theoretical problem suggesting that the plasma would also quickly escape sideways through the confinement fields. This would occur in any machine with convex magnetic fields, which existed in
the center of the mirror area. Existing machines were having other problems, and it was not obvious whether this effect was occurring. In 1961, a Soviet team conclusively demonstrated this flute instability was indeed occurring, and when a US team stated they were not seeing this issue, the Soviets examined their experiment and noted this was due to a simple instrumentation error. The Soviet team also introduced a potential solution, in the form of "Ioffe bars." These bent the plasma into a new shape that was concave at all points, avoiding the problem Teller had pointed out. This demonstrated a clear improvement in confinement. A UK team then introduced a simpler arrangement of these magnets they called the "tennis ball," which was taken up in the United States as the "baseball." Several baseball-series machines were tested and showed much-improved performance. However, theoretical calculations showed that the maximum amount of energy they could produce would be about the same as the energy needed to run the magnets. As a power-producing machine, the mirror appeared to be a dead end. In the 1970s, a solution was developed. By placing a baseball coil at either end of a large solenoid, the entire assembly could hold a much larger volume of plasma and thus produce more energy. Plans began for a large device of this "tandem mirror" design, which became the Mirror Fusion Test Facility (MFTF). Since this layout had never been tried before, a smaller machine, the Tandem Mirror Experiment (TMX), was built first to test it. TMX demonstrated a new series of problems that suggested MFTF would not reach its performance goals, and during construction, MFTF was modified to MFTF-B. However, due to budget cuts, one day after the construction of MFTF was completed it was mothballed. Mirrors have seen little development since that time.
5.2.2 Toroidal machines
The physics of magnetically confined plasmas has had much of its development as part of the program to develop fusion energy and is an important element in the study of space and astrophysical plasmas. Closely related areas of physics include Hamiltonian dynamics, kinetic theory, and fluid turbulence. A number of topics in physics have been developed primarily through research on magnetically confined plasmas. The physics that underlies the magnetic confinement of plasmas is reviewed here to make it more accessible to those beginning research on plasma confinement and to interested physicists. As part of these theoretical efforts, a few magnetic confinement chambers with toroidal machine configurations have been proposed; they are briefly described below, and the ITER design team has summarized the physics of the tokamaks among them.
5.2.2.1 Z-pinch machine
The first real effort to build a controlled fusion reactor used the pinch effect in a toroidal container. A large transformer wrapping the container was used to induce a current in the plasma inside. This current creates a magnetic field that squeezes the plasma into
a thin ring, thus "pinching" it. The combination of Joule heating by the current and adiabatic heating as it pinches raises the temperature of the plasma to the required range in the tens of millions of kelvin. First built in the United Kingdom in 1948 and followed by a series of increasingly large and powerful machines in the United Kingdom and the United States, all early machines proved subject to powerful instabilities in the plasma. Notable among them was the kink instability, which caused the pinched ring to thrash about and hit the walls of the container long before it reached the required temperatures. The concept was so simple, however, that herculean effort was expended to address these issues. This led to the "stabilized pinch" concept, which added external magnets to "give the plasma a backbone" while it compressed. The largest such machine was the UK's Zero Energy Thermonuclear Assembly (ZETA) reactor, completed in 1957, which appeared to successfully produce fusion. Only a few months after its public announcement in January 1958, these claims had to be retracted when it was discovered that the neutrons being seen were created by new instabilities in the plasma mass. Further studies showed any such design would be beset with similar problems, and research using the Z-pinch approach largely ended. See Fig. 5.10, which shows the United Kingdom's ZETA fusion reactor chamber, a device proposed to confine a fusion plasma magnetically. ZETA was a major experiment in the early history of fusion power research. Based on the pinch plasma confinement technique and built at the Atomic Energy Research Establishment in England, ZETA was larger and more powerful than any fusion machine in the world at that time. Its goal was to produce large numbers of fusion reactions, although it was not large enough to produce net energy. ZETA went into operation in August 1957, and by the end of the month it was giving off bursts of about a million neutrons per pulse. Measurements suggested the fuel was reaching between 1 and 5 million kelvin, a temperature that would produce nuclear fusion reactions, explaining the quantities of neutrons being seen. Early results were leaked to the press in September 1957, and the following January an extensive review was released. Front-page articles in newspapers around the world announced it as a breakthrough toward unlimited energy, a scientific advance for Britain greater than the recently launched Sputnik had been for the Soviet Union. US and Soviet experiments had also given off similar neutron bursts at temperatures that were not high enough for fusion. This led Lyman Spitzer to express his skepticism of the results, but his comments were dismissed by UK observers as jingoism. Further experiments on ZETA showed that the original temperature measurements were misleading; the bulk temperature was too low for fusion reactions to create the number of neutrons being seen. The claim that ZETA had produced fusion had to be publicly withdrawn, an embarrassing event that cast a chill over the entire fusion establishment. The neutrons were later explained as being the product of instabilities in the fuel. These
Fig. 5.10 United Kingdom ZETA fusion reactor assembly.
instabilities appeared inherent to any similar design, and work on the basic pinch concept as a road to fusion power ended by 1961. In spite of ZETA’s failure to achieve fusion, the device went on to have a long experimental lifetime and produced numerous important advances in the field. In one line of development, the use of lasers to more accurately measure the temperature was developed on ZETA and was later used to confirm the results of the Soviet tokamak approach. In another, while examining ZETA test runs, it was noticed that the plasma self-stabilized after the power was turned off. This has led to the modern reversed field pinch concept. More generally, studies of the instabilities in ZETA have led to several important theoretical advances that form the basis of modern plasma theory.
Fig. 5.11 Example of a stellarator design.
5.2.2.2 Stellarator confinement system
An early attempt to build a magnetic confinement system was the stellarator, introduced by Lyman Spitzer in 1951. Essentially the stellarator consists of a torus that has been cut in half and then attached back together with straight "crossover" sections to form a figure 8. This has the effect of propagating the nuclei from the inside to the outside as they orbit the device, thereby canceling out the drift across the axis, at least if the nuclei orbit fast enough (see Fig. 5.11). The configuration in Fig. 5.11 is a fundamental example of a stellarator design, as used in the Wendelstein 7-X experiment: a series of magnet coils (blue) surrounds the plasma (yellow). A magnetic field line is highlighted in green on the yellow plasma surface. Not long after the construction of the earliest figure-8 machines, it was noticed the same effect could be achieved in a completely circular arrangement by adding a second set of helically wound magnets on either side. This arrangement generated a field that extended only partway into the plasma, which proved to have the significant advantage of adding "shear," which suppressed turbulence in the plasma. However, as larger devices were built on this model, it was seen that plasma was escaping from the system much more rapidly than expected, much more rapidly than it could be replaced. By the mid-1960s it appeared the stellarator approach was a dead end. In addition to the fuel loss problems, it was also calculated that a power-producing machine based on this system would be enormous, the better part of a thousand feet long. When the tokamak was introduced in 1968, interest in the stellarator vanished, and the latest design at Princeton University, the Model C, was eventually converted to the symmetrical tokamak (ST). A stellarator is a device used to confine hot plasma with magnetic fields to sustain a controlled nuclear fusion reaction. The name refers to the possibility of harnessing the power source of the sun, a stellar object. It is one of the earliest fusion power devices,
Fig. 5.12 Helically symmetric experiment (HSX) stellarator infrastructure illustration.
along with the Z-pinch and the magnetic mirror. See Fig. 5.12, which presents the helically symmetric experiment (HSX) stellarator. The stellarator was invented by Lyman Spitzer of Princeton University in 1951, and much of its early development was carried out by his team at what became the Princeton Plasma Physics Laboratory (PPPL). The basic concept is to lay out the magnetic fields so that particles circulating around the long axis of the machine follow twisting paths, which cancels out the instabilities seen in purely toroidal machines. This would keep the fuel confined long enough to allow it to be heated to the point where fusion would take place. The first Model A started operation in 1953 and proved the basic layout worked. Larger models followed, but these demonstrated poor performance, suffering from a problem known as pump-out that caused them to lose plasma at rates far worse than theoretical predictions. By the early 1960s, any hope of quickly producing a commercial machine faded, and attention turned to studying the fundamental theory of high-energy plasmas. By the mid-1960s, Spitzer was convinced that the stellarator was matching the Bohm diffusion rate, which suggested it would never be a practical fusion device. The release of information on the USSR's tokamak design in 1968 indicated a leap in performance. This led to the Model C stellarator being converted to the ST as a way to confirm or deny these results. ST confirmed them, and large-scale work on the stellarator concept ended as the tokamak got most of the attention. The tokamak ultimately proved to have similar problems to the stellarators, but for different reasons. Since the
1990s, this has led to renewed interest in the stellarator design [2]. New methods of construction have increased the quality and power of the magnetic fields, improving performance. A number of new devices have been built to test these concepts. Major examples include Wendelstein 7-X in Germany, the HSX in the United States, and the Large Helical Device in Japan. Stellarators have seen renewed interest since the turn of the millennium as they avoid several problems subsequently found in the tokamak. Newer models have been built, but these remain about two generations behind the latest tokamak designs.
5.2.2.3 Tokamak confinement system
In the late 1950s, Soviet researchers noticed that the kink instability would be strongly suppressed if the twists in the path were strong enough that a particle traveled around the circumference of the inside of the chamber more rapidly than around the chamber's length. This would require the pinch current to be reduced and the external stabilizing magnets to be made much stronger. This physical process and these phenomena are illustrated in Fig. 5.13. In 1968 Russian research on the toroidal tokamak was first presented in public, with results that far outstripped existing efforts from any competing design, magnetic or not. Since then the majority of effort in magnetic confinement has been based on the tokamak principle. In the tokamak a current is periodically driven through the plasma itself, creating a field "around" the torus that combines with the toroidal field to produce a winding field in some ways similar to that in a modern stellarator, at least in that nuclei move from the inside to the outside of the device as they flow around it. The Tokamak Fusion Test Reactor (TFTR), illustrated in Fig. 5.14, was an experimental tokamak built at the Princeton Plasma Physics Laboratory (PPPL) circa 1980, entering service in 1982. TFTR was designed with the explicit goal of reaching scientific breakeven, the point where the heat being released from the fusion reactions in the plasma is equal to or greater than the heating being supplied to the plasma by external devices to warm it up. The TFTR never achieved this goal, but it did produce major advances in confinement time and energy density. It was the world's first magnetic fusion device to perform extensive scientific experiments with plasmas composed of 50/50 deuterium/tritium (DT), the fuel mix required for practical fusion power production, and also the first to produce more than 10 MW of fusion power. It set several records for power output, maximum temperature, and fusion triple product. TFTR shut down in 1997 after 15 years of operation. PPPL used the knowledge from TFTR to begin studying another approach, the spherical tokamak, in its National Spherical Torus Experiment. The Japanese JT-60 is very similar to the TFTR, both tracing their design to key innovations introduced by Shoichi Yoshikawa during his time at PPPL in the 1970s.
Fig. 5.13 Tokamak magnetic fields.
By the late 1970s, newer machines had reached all of the conditions needed for practical fusion, although not at the same time nor in a single reactor. With the goal of breakeven now in sight, in the late 1970s a new series of machines was designed that would run on a fusion fuel of deuterium and tritium. These machines, notably the Joint European Torus (JET) and TFTR, had the explicit goal of reaching breakeven. Instead, they demonstrated new problems that limited their performance. Solving these would require a much larger and more expensive machine, beyond the abilities of any one country. After an initial agreement between Ronald Reagan and Mikhail Gorbachev in November 1985, the ITER effort (see Fig. 5.15) emerged and has remained the primary international effort to develop practical fusion power. Many smaller designs, and offshoots like the spherical tokamak, continue to be used to investigate performance parameters and other issues.
Fig. 5.14 Tokamak fusion test reactor (TFTR) assembly. (Courtesy: Princeton Plasma Physics Laboratory).
Fig. 5.15 is a cutaway diagram of the ITER, the largest tokamak in the world, which began construction in 2013 and is projected to begin operation in 2035. It is intended as a demonstration that a practical fusion reactor is possible and will produce 500 MW of power. The blue human figure at the bottom shows scale. Furthermore, in 1991, START was built at Culham, UK, as the first purpose-built spherical tokamak. This was essentially a spheromak with an inserted central rod. START produced impressive results, with β values of approximately 40%, 3 times those produced by standard tokamaks at the time. The concept has been scaled up to higher plasma currents and larger sizes, with the experiments NSTX (US), MAST (UK), and Globus-M (Russia) currently running. Spherical tokamaks have improved stability properties compared to conventional tokamaks, and as such the area is receiving considerable experimental attention. However, spherical tokamaks to date have operated at low toroidal field and as such are impractical for fusion neutron devices.
5.2.2.4 Other systems
Some more novel configurations produced in toroidal machines are the reversed field pinch (see Fig. 5.16) and the levitated dipole, discussed below. A reversed field pinch (RFP) is a device used to produce and contain near-thermonuclear plasmas. It is a toroidal pinch that uses a unique magnetic field configuration as a scheme to magnetically confine a plasma, primarily to
Fig. 5.15 Cutaway diagram of the ITER configuration.
Fig. 5.16 The Q profile in a reversed field pinch.
Fig. 5.17 The poloidal field in a reversed field pinch.
study magnetic fusion energy. Its magnetic geometry is somewhat different from that of the more common tokamak. As one moves out radially, the portion of the magnetic field pointing toroidally reverses its direction, giving rise to the term "reversed field." This configuration can be sustained with comparatively lower fields than those of a tokamak of similar power density. One of the disadvantages of this configuration is that it tends to be more susceptible to nonlinear effects and turbulence. This makes it a perfect laboratory for nonideal (resistive) magnetohydrodynamics. RFPs are also used in the study of astrophysical plasmas, as they share many features (see Fig. 5.17). The largest reversed field pinch device presently in operation is the RFX (R/a = 2/0.46) in Padua, Italy. Others include the MST (R/a = 1.5/0.5) in the United States, EXTRAP T2R (R/a = 1.24/0.18) in Sweden, TPE-RX (R/a = 0.51/0.25) in Japan, and KTX (R/a = 1.4/0.4) in China. As a characteristic of the RFP, unlike the tokamak, which has a much larger magnetic field in the toroidal direction than in the poloidal direction, an RFP has comparable field strength in both directions (though the sign of the toroidal field reverses). Moreover, a typical RFP has a field strength of approximately one-half to one-tenth that of a comparable tokamak. The RFP also relies on driving current in the plasma to reinforce the field from the magnets through the dynamo effect. The reversed field pinch works toward a state of minimum energy. The magnetic field lines coil loosely around a center torus. They coil outward. Near the plasma edge, the toroidal magnetic field reverses and the field lines coil in the reverse direction. Internal fields are bigger than the fields at the magnets.
1. Advantages: Due to the lower overall fields, an RFP reactor might not need superconducting magnets. This is a large advantage over tokamaks, since superconducting magnets are delicate and expensive and so must be shielded from the neutron-rich fusion environment. RFPs are susceptible to surface instabilities and so require a close-fitting shell. Some experiments (such as the Madison Symmetric Torus) use their close-fitting shell as a magnetic coil by driving current through the shell itself. This is attractive from a reactor standpoint, since a solid copper shell, for example, would be fairly robust against high-energy neutrons compared with superconducting
magnets. There is also no established β limit for RFPs. There exists a possibility that a reversed field pinch could achieve ignition solely with ohmic power (by driving current through the plasma and generating heat from electrical resistance, rather than through electron cyclotron resonance heating), which would be much simpler than tokamak designs, though it could not be operated in steady state.
2. Disadvantages: Typically, RFPs require a large amount of current to be driven, and although promising experiments are underway, there is no established method of replacing ohmically driven current, which is fundamentally limited by the machine parameters. RFPs are also prone to tearing modes, which lead to overlapping magnetic islands and therefore rapid transport from the core of the plasma to the edge. These problems are areas of active research in the RFP community. The plasma confinement in the best RFPs is only about 1% as good as in the best tokamaks. One reason for this is that all existing RFPs are relatively small. MST was larger than any previous RFP device, and thus it tested this important size issue [1]. The RFP is believed to require a shell with high electrical conductivity very close to the boundary of the plasma. This requirement is an unfortunate complication in a reactor. The Madison Symmetric Torus was designed to test this assumption and to learn how good the conductor must be and how close to the plasma it must be placed. In RFX, the thick shell was replaced with an active system of 192 coils, which cover the entire torus with their saddle shape and respond to the magnetic push of the plasma. Active control of plasma modes is also possible with this system. The other system is the Levitated Dipole Experiment (LDX). A levitated dipole is a type of nuclear fusion reactor design using a superconducting torus that is magnetically levitated inside the reactor chamber. The name refers to the magnetic dipole that forms within the reaction chamber, similar to Earth's or Jupiter's magnetospheres. It is believed that such an apparatus could contain plasma more efficiently than other fusion reactor designs. The LDX was funded by the US Department of Energy's Office of Fusion Energy (see Fig. 5.18). The machine was run in a collaboration between MIT and Columbia University. Funding for the LDX ended in November 2011 to concentrate resources on tokamak designs. The bulk plasma behavior inside the LDX is that of single particles corkscrewing along the field lines, flowing around the dipole electromagnet. This leads to a giant encapsulation of the electromagnet. As material passes through the center, the density spikes, because a lot of plasma is trying to squeeze through a limited area. This is where most of the fusion reactions occur. This behavior has been called a turbulent pinch and is presented in Fig. 5.19. In large amounts, the plasma formed two shells around the dipole: a low-density shell, occupying a large volume, and a high-density shell, closer to the dipole. This is
Fig. 5.18 A picture of the LDX chamber from January 25, 2010.
shown in Fig. 5.19. The plasma was trapped fairly well. It gave a maximum β number of 0.26; a value of 1 is ideal. Single-ion motion inside the LDX is depicted in Fig. 5.20. This experiment needed a very special free-floating electromagnet, which created the unique "toilet-bowl" magnetic field. The magnet was originally made of two counter-wound rings of current. Each ring contained a 19-strand niobium-tin Rutherford cable (common in superconducting magnets). These looped around inside an Inconel magnet, a magnet that looked like an oversized donut. The donut was charged using induction. Once charged, it generated a magnetic field for roughly an 8-h period.
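For context on the LDX β value of 0.26 quoted above, plasma β is the standard dimensionless ratio of plasma pressure to magnetic pressure; the definition below is the textbook one and is not specific to the LDX:

\[ \beta = \frac{p}{B^{2}/2\mu_{0}} = \frac{n k_{B} T}{B^{2}/2\mu_{0}} \]

A β of 0.26 therefore means the plasma pressure reached roughly a quarter of the confining magnetic pressure.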
Fig. 5.19 Bulk plasma behavior inside the LDX.
Fig. 5.20 Single ion motion inside the LDX.
Overall, the ring weighed 450 kg and levitated 1.6 m above a superconducting ring. The ring produced roughly a 5-tesla field. The superconductor was encased inside liquid helium, which kept the electromagnet below 10 K. This design is similar to the D20 dipole experiment at Berkeley and the RT-1 experiment at the University of Tokyo. From a diagnostics point of view, the machine was monitored using diagnostics fairly standard across fusion research, including a flux loop, as depicted in Fig. 5.21. The magnetic field passes through the wire loop. As the field varied inside the loop, it generated a current. This was measured, and from the signal the magnetic flux was determined.
5.2.2.5 Compact toroid
Compact toroids, such as the spheromak and the field-reversed configuration, attempt to combine the good confinement of closed-magnetic-surface configurations with the simplicity of machines without a central core. An early experiment of this type in the 1970s was Trisops, which fired two theta-pinch rings toward each other.
Fig. 5.21 A magnetic flux loop.
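As a minimal sketch of how a flux-loop diagnostic like that in Fig. 5.21 is reduced to data, the code below integrates a loop voltage trace to recover the enclosed magnetic flux via Faraday's law; the voltage samples here are synthetic stand-ins, not LDX measurements.

# Recovering magnetic flux from a flux-loop voltage trace (sketch).
# Faraday's law: V(t) = -dPhi/dt, so Phi(t) = Phi(0) - integral of V dt.
# The voltage trace below is synthetic, for illustration only.
import numpy as np

t = np.linspace(0.0, 1.0, 1001)    # s, sample times
v = 0.5 * np.exp(-t / 0.3)         # V, assumed decaying loop voltage

# Cumulative trapezoidal integration of V(t), starting from Phi(0) = 0.
dt = np.diff(t)
v_mid = 0.5 * (v[:-1] + v[1:])
phi = np.concatenate(([0.0], -np.cumsum(v_mid * dt)))  # Wb, flux change

print(f"flux change after 1 s: {phi[-1]:.4f} Wb")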
5.3 Inertial confinement fusion (ICF)
The mechanism of ICF in action is very similar to the process that takes place in a hydrogen bomb. In a hydrogen bomb, the fusion fuel is compressed and heated with a separate fission bomb (see the Teller–Ulam design). A variety of mechanisms transfers the energy of the fission "trigger's" explosion into the fusion fuel. The requirement of a fission bomb makes the method impractical for power generation. Not only would the triggers be prohibitively expensive to produce, but there is a minimum size to which such a bomb can be built, defined roughly by the critical mass of the plutonium fuel used. Generally, it seems difficult to build nuclear devices smaller than about 1 kiloton in yield, which would make it a difficult engineering problem to extract power from the resulting explosions. As the explosion size is scaled down, so too is the amount of energy needed to start the reaction. Studies from the late 1950s and early 1960s suggested that scaling down into the megajoule energy range would require energy levels that could be delivered by any number of means. This led to the idea of using a device that would "beam" the energy at the fusion fuel, ensuring mechanical separation. By the mid-1960s, it appeared that the laser would develop to the point where the required energy levels would be available. Generally speaking, ICF systems use a single laser, the driver, whose beam is split up into a number of beams that are subsequently individually amplified by a trillion times or more. These are sent into the reaction chamber (called a target chamber) by a number of mirrors, positioned so as to illuminate the target evenly over its whole surface. The heat applied by the driver causes the outer layer of the target to explode, just as the outer layers of an H-bomb's fuel cylinder do when illuminated by the X-rays of the fission device. The material exploding off the surface causes the remaining material on the inside to be driven inward with great force, eventually collapsing into a tiny near-spherical ball. In modern ICF devices the density of the resulting fuel mixture is as much as 100 times the density of lead, around 1000 g/cm³. This density is not high enough to create any useful rate of fusion on its own. However, during the collapse of the fuel, shock waves also form and travel into the center of the fuel at high speed. When they meet their counterparts moving in from the other sides of the fuel in the center, the density of that spot is raised much further. Given the correct conditions, the fusion rate in the region highly compressed by the shock wave can give off significant amounts of highly energetic alpha particles. Due to the high density of the surrounding fuel, they move only a short distance before being "thermalized," losing their energy to the fuel as heat. This additional energy will cause additional fusion reactions in the heated fuel, giving off more high-energy particles. This process spreads outward from the center, leading to a kind of self-sustaining burn known as ignition.
5.3.1 How inertial confinement fusion (ICF) works
Since the late 1940s, researchers have used magnetic fields to confine hot, turbulent mixtures of ions and free electrons called plasmas, so they can be heated to temperatures of 100 to 300 million kelvin (180 million to 540 million degrees Fahrenheit). Under those conditions, positively charged deuterium nuclei (containing one neutron and one proton) and tritium nuclei (two neutrons and one proton) can overcome the repulsive electrostatic force that keeps them apart and "fuse" into a new, heavier helium nucleus with two neutrons and two protons. The helium nucleus has a slightly smaller mass than the sum of the masses of the two hydrogen nuclei, and the difference in mass is released as kinetic energy according to Albert Einstein's famous formula E = mc². The energy is converted to heat as the helium nucleus, also called an alpha particle, and the extra neutrons interact with the material around them. In the 1970s, scientists began experimenting with powerful laser beams to compress and heat the hydrogen isotopes to the point of fusion, a technique called ICF.
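The E = mc² bookkeeping for the D–T reaction can be checked directly from standard atomic masses; the sketch below uses commonly tabulated values (rounded), not numbers taken from this text:

# Energy release of the D-T fusion reaction from the mass defect (sketch).
# Atomic masses in unified atomic mass units (u), rounded standard values.
M_D = 2.014102      # deuterium
M_T = 3.016049      # tritium
M_HE4 = 4.002602    # helium-4
M_N = 1.008665      # neutron
U_TO_MEV = 931.494  # energy equivalent of 1 u, in MeV

mass_defect = (M_D + M_T) - (M_HE4 + M_N)  # u
energy_mev = mass_defect * U_TO_MEV        # E = mc^2 in convenient units

print(f"mass defect : {mass_defect:.6f} u")   # ~0.018884 u
print(f"energy      : {energy_mev:.1f} MeV")  # ~17.6 MeV per reaction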
Fig. 5.22 Schematic of the four main stages of a direct-drive target implosion. (A) Early time, (B) acceleration phase, (C) deceleration phase, and (D) peak compression. (Courtesy: Lawrence Livermore National Laboratory).
In the "direct drive" approach to ICF, powerful beams of laser light are focused on a small spherical pellet containing micrograms of deuterium and tritium, as illustrated in Fig. 5.22. The rapid heating caused by the laser "driver" makes the outer layer of the target explode. In keeping with Isaac Newton's third law (for every action there is an equal and opposite reaction), the remaining portion of the target is driven inward in a rocket-like implosion, causing compression of the fuel inside the capsule and the formation of a shock wave, which further heats the fuel in the very center and results in a self-sustaining burn. The fusion burn propagates outward through the cooler, outer regions of the capsule much more rapidly than the capsule can expand. Instead of magnetic fields, the plasma is confined by the inertia of its own mass, hence the term inertial confinement fusion. The laser intensity increases during the acceleration phase (Fig. 5.22B). Ablation-surface modulations grow exponentially because of the Rayleigh–Taylor (RT) instability, while the main shock within the DT gas converges toward the target center. Exponential growth continues until the perturbation amplitude reaches ∼10% of the perturbation wavelength, when the instability growth becomes nonlinear. The greatest concern during the acceleration phase is the integrity of the shell. The ablation-surface modulations grow at a rate that depends in part on the shell adiabat α, defined as the electron pressure divided by the Fermi-degenerate pressure that the shell would have at absolute zero temperature. Larger adiabats result in thicker, lower-density imploding shells, larger ablation velocities, and better overall stability, but at the cost of lower overall performance. (The ablation velocity is the rate at which the ablation surface moves through the shell.) [3]. In the "indirect-drive" method, the approach attempted first at NIF, the lasers heat the inner walls of a gold cavity called a hohlraum containing the pellet, creating a superhot plasma that radiates a uniform "bath" of soft X-rays. The X-rays rapidly heat the outer surface of the fuel pellet, causing a high-speed ablation, or "blow-off," of the surface material and imploding the fuel capsule in the same way as if it had been hit with the lasers directly. Symmetrically compressing the capsule with radiation forms a central "hot spot" where fusion processes set in; the plasma ignites and the compressed fuel burns before it can disassemble (see Fig. 5.23). NIF will be the first laser in which the energy released from the fusion fuel will outstrip the rate at which X-ray radiation losses and electron conduction cool the implosion, a condition known as ignition. Unlocking the stored energy of atomic nuclei will produce 10 to 100 times the amount of energy required to initiate the self-sustaining fusion burn. Creating inertial confinement fusion and energy gain in the NIF target chamber will be a significant step toward making fusion energy viable in commercial power plants. LLNL scientists also are exploring other approaches to developing ICF as a commercially viable energy source in a fast ignition (FI) scheme. The approach being taken by NIF to achieve thermonuclear ignition and burn is called the "central hot spot" scenario. This technique relies on simultaneous compression and ignition of a spherical fuel capsule in an implosion, roughly as in a diesel engine. Although the hot-spot approach has a high probability of success, there is also considerable interest in a modified approach called FI, in which compression is separated from the ignition phase. Fast ignition uses the same hardware as the hot-spot approach but adds a high-intensity, ultrashort-pulse laser to provide the "spark" that initiates ignition.
[Fig. 5.23 panels: (1) Laser beams rapidly heat the inside surface of the hohlraum. (2) X-rays from the hohlraum create a rocket-like blow-off of the capsule surface, compressing the inner fuel portion of the capsule. (3) During the final part of the implosion, the fuel core reaches 100 times the density of lead and ignites at 100,000,000°C. (4) Thermonuclear burn spreads rapidly through the compressed fuel, yielding many times the input energy.]
Fig. 5.23 Indirect soft X-ray hohlraum-driven compression of a fusion pellet. (Courtesy: Lawrence Livermore National Laboratory).
provide the "spark" that initiates ignition. A deuterium–tritium (DT) target is first compressed to high density by lasers, and then the short-pulse laser beam delivers energy to ignite the compressed core—analogous to a spark plug in an internal combustion engine. Note that in radiation thermodynamics, a hohlraum (a nonspecific German word for a "hollow space" or "cavity") is a cavity whose walls are in radiative equilibrium with the radiant energy within the cavity. This idealized cavity can be approximated in practice by making a small perforation in the wall of a hollow container of any opaque material. The radiation escaping through such a perforation will be a good approximation to black-body radiation at the temperature of the interior of the container. An advantage of the FI approach is that the density and pressure requirements are less than in central hot-spot ignition, so in principle fast ignition will allow some relaxation of the need to maintain precise, spherical symmetry of the imploding fuel capsule. In addition, FI uses a much smaller mass ignition region, resulting in reduced energy input, yet provides an improved energy gain estimated to be as much as a factor of 10 to 20 over the central hot-spot approach. With reduced laser-driver energy, substantially increased fusion energy gain—as much as 300 times the energy input—and lower capsule symmetry requirements, the fast-ignition approach could provide an easier development pathway toward an eventual inertial fusion energy power plant.
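Before moving on, the shell adiabat α defined earlier in this section can be made concrete. The sketch below is a minimal illustration of ours, not from the book: it assumes fully ionized, equimolar DT with one free electron per ion, and the sample electron pressure used to form α is hypothetical.

import math

# Physical constants (SI)
HBAR = 1.0545718e-34   # J*s
M_E  = 9.1093837e-31   # kg
AMU  = 1.6605390e-27   # kg
M_DT = 2.5 * AMU       # average ion mass of an equimolar D-T mix

def fermi_pressure(rho_g_cc):
    """Electron Fermi-degenerate pressure (Pa) of fully ionized DT at mass
    density rho (g/cm^3), i.e., the zero-temperature pressure that appears
    in the shell-adiabat definition alpha = P_e / P_F."""
    n_e = (rho_g_cc * 1e3) / M_DT                               # electrons per m^3
    e_f = (HBAR**2 / (2 * M_E)) * (3 * math.pi**2 * n_e)**(2/3)  # Fermi energy
    return 0.4 * n_e * e_f                                      # P_F = (2/5) n_e E_F

rho = 1.0                                   # g/cm^3, illustrative density
p_f = fermi_pressure(rho)
print(f"P_F({rho} g/cc) = {p_f/1e11:.2f} Mbar")    # ~2.2 Mbar

p_e = 4.4e11                                # Pa, hypothetical electron pressure
print(f"alpha = {p_e / p_f:.1f}")                  # ~2.0

At 1 g/cm³ this gives P_F of roughly 2.2 Mbar, so a shell driven at, say, twice that electron pressure sits on an adiabat of α ≈ 2; α = 1 corresponds to the coldest, most compressible shell possible.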
5.3.2 How fast ignition (FI) works
In the compression stage, X-rays generated by laser irradiation of the hohlraum wall deposit their energy directly on the outside of a spherical shell, the ablator shell, which rapidly heats and expands outward. This action drives the remaining shell inward, compressing the fuel to form a uniform dense assembly. To ignite the fuel assembly, about 20 kJ of energy must be deposited in a 35-μm spot in a few picoseconds (trillionths of a second), heating the fuel to the ignition temperature and initiating thermonuclear burn. The leading approach to FI uses a hollow cone of high-density material inserted into the fuel capsule to allow clean entry of the second laser beam to the compressed fuel assembly (see Fig. 5.24). Fig. 5.24 also compares the density and temperature profiles of a conventional central hot-spot ICF target and an FI target.
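The quoted numbers already imply why a petawatt-class ignitor laser is needed. Here is a back-of-the-envelope sketch of ours, not from the book; the 10-ps pulse duration is an assumed stand-in for "a few picoseconds," and the 35-μm figure is treated as the spot diameter.

import math

E_ignite = 20e3      # J, energy to be deposited (from the text)
spot_d   = 35e-6     # m, hot-spot diameter (from the text)
tau      = 10e-12    # s, assumed value for "a few picoseconds"

power = E_ignite / tau               # required laser power
area  = math.pi * (spot_d / 2)**2    # focal-spot area
intensity = power / area             # on-target intensity, W/m^2

print(f"power     ~ {power/1e15:.1f} PW")
print(f"intensity ~ {intensity/1e4/1e20:.1f}e20 W/cm^2")

Under these assumptions the ignitor must deliver about 2 PW at an intensity of order 10^20 W/cm², which is why FI requires an ultrashort-pulse petawatt laser in addition to the compression driver.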
Fig. 5.24 Illustration of conventional ICF versus fast ignition. (Courtesy: Lawrence Livermore National Laboratory).
The physics basis of FI, however, is not currently as mature as that of the central hot-spot approach. The coupling efficiency from a short-pulse laser to the FI hot spot is a critical parameter dependent on very challenging and novel physics. Fast-ignition researchers must resolve these physics problems to justify advancement to the next stage. Success in demonstrating efficient transport of a high-energy pulse into dense plasma, development of a target design for the compression phase, and definition of a power plant concept could lead to a new energy source for the nation and the world. Because modern thermonuclear weapons use the fusion reaction to generate their immense energy, scientists will use NIF ignition experiments to examine the conditions associated with the inner workings of nuclear weapons. Ignition experiments also can be used to help scientists better understand the hot, dense interiors of large planets, stars, and other astrophysical phenomena. In summary, as stated earlier, a more recent development is the concept of FI, which may offer a way to directly heat the high-density fuel after compression, thus decoupling the heating and compression phases of the implosion. In this approach the target is first compressed "normally" using a driver laser system; then, when the implosion reaches maximum density (at the stagnation point or "bang time"), a second, ultrashort-pulse, ultrahigh-power petawatt (PW) laser delivers a single pulse focused on one side of the core, dramatically heating it and, if all goes well, starting fusion ignition. The two types of fast ignition are the "plasma bore-through" method and the "cone-in-shell" method. In the first method the petawatt laser is simply expected to bore straight through the outer plasma of an imploding capsule and to impinge on and heat the dense core, whereas in the cone-in-shell method the capsule is mounted on the end
of a small high-Z (high atomic number) cone such that the tip of the cone projects into the core of the capsule. In this second method, when the capsule is imploded, the petawatt laser has a clear view straight to the high-density core and does not have to waste energy boring through a "corona" plasma; however, the presence of the cone affects the implosion process in significant ways that are not fully understood. Several projects are currently underway to explore the fast-ignition approach, including upgrades to the OMEGA laser at the University of Rochester, the GEKKO XII device in Japan, and an entirely new £500 million facility, known as HiPER, proposed for construction in the European Union. If successful, the fast-ignition approach could dramatically lower the total amount of energy that must be delivered to the target; whereas NIF uses UV beams of 2 MJ, HiPER's driver is 200 kJ and its heater 70 kJ, yet the predicted fusion gains are nevertheless even higher than for NIF. The bottom-line recipe for a small star suggested by LLNL falls into the following steps:
• Take a hollow, spherical plastic capsule about 2 mm in diameter (about the size of a small pea).
• Fill it with 150 μg (less than 1 millionth of a pound) of a mixture of deuterium and tritium, the two heavy isotopes of hydrogen.
• Take a laser that for about 20 billionths of a second can generate 500 trillion watts—the equivalent of 5 trillion 100-W light bulbs.
• Focus all that laser power onto the surface of the capsule.
• Wait 10 billionths of a second.
• Result: one miniature star.
By following the LLNL recipe, we would make a miniature star that lasts for a tiny fraction of a second. During its brief lifetime, it will produce energy the way the stars and the sun do, by nuclear fusion. Our little star will produce 10 to 100 times more energy than we used to ignite it.
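A quick check on that 150-μg fuel load: each D–T reaction releases 17.6 MeV, so the fill carries a well-defined maximum yield. The sketch below is ours, not LLNL's, and makes the idealized assumption of complete burn.

# Upper bound on the fusion yield of a 150-microgram DT fill,
# assuming complete burn (real capsules burn only a fraction).
MEV = 1.602176634e-13      # J per MeV
AMU = 1.66053907e-27       # kg

e_per_reaction = 17.6 * MEV             # D + T -> He-4 + n releases 17.6 MeV
m_per_reaction = (2.014 + 3.016) * AMU  # one D and one T consumed per reaction

fuel_mass   = 150e-9                    # kg (150 micrograms)
n_reactions = fuel_mass / m_per_reaction
yield_j     = n_reactions * e_per_reaction
print(f"max yield ~ {yield_j/1e6:.0f} MJ")   # ~51 MJ

The ~51 MJ upper bound is consistent with the figures quoted later in the chapter: a typical predicted release of about 20 MJ corresponds to burning roughly 40% of the fuel, and the target chamber itself is designed for at most about 45 MJ.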
5.3.3 Issues with successful achievement
The primary problems with increasing ICF performance since the early experiments in the 1970s have been delivering energy to the target, controlling the symmetry of the imploding fuel, preventing premature heating of the fuel (before maximum density is achieved), preventing premature mixing of hot and cool fuel by hydrodynamic instabilities, and forming a "tight" shock-wave convergence at the center of the compressed fuel. In order to focus the shock wave on the center of the target, the target must be made with extremely high precision and sphericity, with aberrations of no more than a few micrometers over its surface (inner and outer). Likewise, the aiming of the laser beams must be extremely precise, and the beams must arrive at the same time at all points on the target. Beam timing is a relatively simple issue, though, and is solved by using delay
lines in the beams' optical path to achieve picosecond levels of timing accuracy. The other major problems plaguing the achievement of high symmetry and high temperatures and densities of the imploding target are the so-called "beam–beam" imbalance and beam anisotropy. These problems arise, respectively, when the energy delivered by one beam is higher or lower than that of the other beams impinging on the target, and when "hot spots" within a beam diameter induce uneven compression on the target surface, thereby seeding Rayleigh–Taylor (RT) instabilities [4] in the fuel, prematurely mixing it and reducing heating efficacy at the time of maximum compression. The Richtmyer–Meshkov instability also forms during the process as shock waves are generated, as illustrated in Fig. 5.25. All of these problems have been mitigated to varying degrees in the past two decades of research by using various beam-smoothing techniques and beam-energy diagnostics to balance beam-to-beam energy; however, RT instability remains a major
Fig. 5.25 Step-by-step demonstration of a laser-driven fusion pellet implosion.
issue. Target design has also improved tremendously over the years. Modern cryogenic hydrogen-ice targets freeze a thin layer of deuterium onto the inside of a plastic sphere while irradiating it with a low-power IR laser to smooth the inner surface; a microscope-equipped camera allows the layer to be closely monitored, ensuring its "smoothness" [6]. Cryogenic targets filled with a deuterium–tritium (DT) mixture are "self-smoothing" due to the small amount of heat created by the decay of the radioactive tritium isotope. This is often referred to as "beta-layering" [7]. Certain targets are surrounded by a small metal cylinder that is irradiated by the laser beams instead of the target itself, an approach known as "indirect drive" [7]. In this approach the lasers are focused on the inner side of the cylinder, heating it to a superhot plasma that radiates mostly in X-rays. The X-rays from this plasma are then absorbed by the target surface, imploding it in the same way as if it had been hit with the lasers directly. The absorption of thermal X-rays by the target is more efficient than the direct absorption of laser light; however, these hohlraums or "burning chambers" also take up considerable energy to heat on their own, thus significantly reducing the overall efficiency of laser-to-target energy transfer. They remain a debated feature even today; the equally numerous "direct-drive" designs do not use them. Most often, indirect-drive hohlraum targets are used to simulate thermonuclear weapons tests, because the fusion fuel in a weapon is also imploded mainly by X-ray radiation. A variety of ICF drivers are being explored. Lasers have improved dramatically since the 1970s, scaling up in energy and power from a few joules and kilowatts to megajoules (see the NIF laser in Section 5.3.4) and hundreds of terawatts, using mostly frequency-doubled or -tripled light from neodymium glass amplifiers. Heavy-ion beams are particularly interesting for commercial generation, as they are easy to create, control, and focus. On the downside, it is very difficult to achieve the very high energy densities required to implode a target efficiently, and most ion-beam systems require the use of a hohlraum surrounding the target to smooth out the irradiation, further reducing the overall coupling of the beam's energy to the imploding target. Fig. 5.19 shows an early example: a foam-filled cylindrical target with machined perturbations being compressed by the Nova laser in a 1995 shot; the image shows the compression of the target as well as the growth of the Rayleigh–Taylor instabilities.
Fig. 5.26 is an illustration of the mockup of a gold-plated NIF hohlraum, while Fig. 5.27 is an illustration of an ICF fuel microcapsule, sometimes called a "microballoon," of the size to be used on the NIF, which can be filled with either deuterium (D) and tritium (T) gas or DT ice. The capsule can be either inserted in a hohlraum (as earlier) and imploded in the indirect-drive mode or irradiated directly with laser energy in the direct-drive configuration. Microcapsules used on previous laser systems were significantly smaller owing to the less powerful irradiation earlier lasers were capable of delivering to the target.
5.3.4 National ignition laser facility
The NIF is a large laser-based ICF research device located at the Lawrence Livermore National Laboratory in Livermore, California. NIF uses lasers to heat and compress a small amount of hydrogen fuel with the goal of inducing nuclear fusion reactions. NIF's mission is to achieve fusion ignition with high energy gain and to support nuclear
Fig. 5.26 Mockup of a gold-plated NIF hohlraum. (Courtesy: Lawrence Livermore National Laboratory).
Fig. 5.27 National Ignition Facility deuterium–tritium (DT) fuel capsule. (Courtesy: Lawrence Livermore National Laboratory).
weapon maintenance and design by studying the behavior of matter under the conditions found within nuclear weapons. NIF is the largest and most energetic ICF device built to date and the largest laser in the world (see Fig. 5.28). Construction on the NIF began in 1997, but management problems and technical delays slowed progress into the early 2000s. Progress after 2000 was smoother, but
Fig. 5.28 The National Ignition Facility, located at Lawrence Livermore National Laboratory. (Courtesy: Lawrence Livermore National Laboratory).
compared to initial estimates, NIF was completed 5 years behind schedule and was almost 4 times more expensive than originally budgeted. Construction was certified complete on March 31, 2009 by the US Department of Energy, and a dedication ceremony took place on May 29, 2009. The first large-scale laser target experiments were performed in June 2009 [5], and the first "integrated ignition experiments" (which tested the laser's power) were declared completed in October 2010. Bringing the system to its full potential was a lengthy process that was carried out from 2009 to 2012. During this period a number of experiments were worked into the process under the National Ignition Campaign, with the goal of reaching ignition just after the laser reached full power, sometime in the second half of 2012. The campaign officially ended in September 2012, having reached about 1/10 of the conditions needed for ignition. Experiments since then have pushed this closer to 1/3, but considerable theoretical and practical work is required if the system is ever to reach ignition. Since 2012, NIF has been used primarily for materials science and weapons research. Fig. 5.29 shows the target assembly for NIF's first integrated ignition experiment mounted in the cryogenic target positioning system, or cryoTARPOS. The two triangle-shaped arms form a shroud around the cold target to protect it until they open 5 s before a shot. ICF devices use drivers to rapidly heat the outer layers of a target in order to compress it. The target is a small spherical pellet containing a few milligrams of fusion fuel, typically a mix of deuterium (D) and tritium (T). The energy of the laser heats
Fig. 5.29 Ignition target assembly. (Courtesy: Lawrence Livermore National Laboratory).
the surface of the pellet into a plasma, which explodes off the surface. The remaining portion of the target is driven inward, eventually compressing it into a small point of extremely high density. The rapid blow-off also creates a shock wave that travels toward the center of the compressed fuel from all sides. When it reaches the center of the fuel, a small volume is further heated and compressed to a greater degree. When the temperature and density of that small spot are raised high enough, fusion reactions occur and release energy. The fusion reactions release high-energy particles, some of which, primarily alpha particles, collide with the surrounding high-density fuel and heat it further. If this process deposits enough energy in a given area, it can cause that fuel to undergo fusion as well. However, the fuel is also losing heat through X-ray losses and hot electrons leaving the fuel area, so the rate of alpha heating must be greater than these losses, a condition known as bootstrapping. Given the right overall conditions of the compressed fuel—high enough density and temperature—this bootstrapping process will result in a chain reaction, burning outward from the center where the shock wave started the reaction. This is a condition known as ignition, which will lead to a significant portion of the fuel in the target undergoing fusion and releasing large amounts of energy. To date, most ICF experiments have used lasers to heat the target. Calculations show that the energy must be delivered quickly in order to compress the core before it disassembles. The laser energy also must be focused extremely evenly across the target's outer surface in order to collapse the fuel into a symmetric core. Although other drivers have been suggested, notably heavy ions driven in particle accelerators, lasers are currently the only devices with the right combination of features. NIF aims to create a single 500-terawatt (TW) peak flash of light that reaches the target from numerous directions at the same time, within a few picoseconds. The design uses 192 beamlines in a parallel system of flashlamp-pumped, neodymium-doped phosphate glass lasers. To ensure that the output of the beamlines is uniform, the initial laser light is amplified from a single source in the injection laser system (ILS). This starts with a low-power flash of 1053-nanometer (nm) infrared light generated in an ytterbium-doped optical fiber laser known as the master oscillator. The light from the master oscillator is split and directed into 48 preamplifier modules (PAMs). Each PAM contains a two-stage amplification process. The first stage is a regenerative amplifier in which the pulse circulates 30 to 60 times, increasing in energy from nanojoules to tens of millijoules. The light then passes four times through a circuit containing a neodymium glass amplifier similar to (but much smaller than) the ones used in the main beamlines, boosting the nanojoules of light created in the master oscillator to about 6 J. According to LLNL, the design of the PAMs was one of the major challenges during construction. Improvements to the design since then have allowed them to surpass their initial design goals.
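The gain budget implied by these numbers is easy to tabulate. The sketch below is our own bookkeeping from the round figures just quoted; the seed energy into each PAM and the assumption that each PAM's 6-J output is split evenly among 4 beamlines (192 beams / 48 PAMs) are illustrative guesses, not NIF specifications.

# Rough gain bookkeeping for the NIF amplifier chain, using the round
# numbers quoted in the text. Split factors are illustrative assumptions.
pam_in    = 1e-8      # J, a few nanojoules into each preamplifier module (assumed)
pam_out   = 6.0       # J out of each PAM (from the text)
beams     = 192
pams      = 48
total_out = 4e6       # J, nominal total energy after the main amplifiers

seed_per_beam = pam_out * pams / beams   # ~1.5 J if each PAM feeds 4 beamlines
per_beam_out  = total_out / beams        # ~20.8 kJ per beamline

print(f"front-end gain (PAM): {pam_out / pam_in:.1e}x")               # ~6e8
print(f"seed per beamline   : {seed_per_beam:.1f} J")
print(f"main-amplifier gain : {per_beam_out / seed_per_beam:,.0f}x")  # ~14,000x
print(f"output per beamline : {per_beam_out / 1e3:.1f} kJ")

Four passes through the main amplifier section, enabled by the optical switch described below, supply this final factor of roughly ten thousand.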
Fig. 5.30 Illustration of NIF laser beamline. (Courtesy: Lawrence Livermore National Laboratory).
Fig. 5.30 shows the beam path of a NIF laser beam, one of 192 similar beamlines. On the left-hand side are the amplifiers and optical switch, and on the right-hand side are the final spatial filter, switchyard, and optical frequency converter. The main amplification takes place in a series of glass amplifiers located at one end of the beamlines. Before firing, the amplifiers are first optically pumped by a total of 7680 xenon flash lamps (the PAMs have their own smaller flash lamps as well). The lamps are powered by a capacitor bank which stores a total of 422 MJ (117 kWh) of electrical energy. When the wave-front passes through them, the amplifiers release some of the light energy stored in them into the beam. To improve the energy transfer, the beams are sent through the main amplifier section four times, using an optical switch located in a mirrored cavity. In total these amplifiers boost the original 6 J provided by the PAMs to a nominal 4 MJ. Given the time scale of a few billionths of a second, the peak UV power delivered to the target is correspondingly very high, 500 TW. Near the center of each beamline, and taking up the majority of the total length, are spatial filters. These consist of long tubes with small telescopes at the end that focus the laser beam down to a tiny point in the center of the tube, where a mask cuts off any stray light outside the focal point. The filters ensure that the image of the beam when it reaches the target is extremely uniform, removing any light that was misfocused by imperfections in the optics upstream. Spatial filters were a major step forward in ICF work when they were introduced in the Cyclops laser, an earlier LLNL experiment. The total length of the path the laser beam propagates from one end to the other, including switches, is about 1500 m (4900 ft). The various optical elements in the beamlines are generally packaged into line replaceable units (LRUs), standardized boxes about the size of a vending machine that can be dropped out of the beamline for replacement from below. See Fig. 5.31 for NIF's basic layout: the laser pulse is generated in the room just right of center and is sent into the beamlines (blue) on either side. After several passes through the beamlines, the light is sent into the "switchyard" (red), where it is aimed into the target chamber (silver).
Fig. 5.31 NIF's basic layout and infrastructure configuration. (Courtesy: Lawrence Livermore National Laboratory).
After the amplification is complete, the light is switched back into the beamline, where it runs to the far end of the building to the target chamber. The target chamber is a 10-m diameter (33 ft) multipiece steel sphere weighing 130,000 kg (290,000 lb). Just before reaching the target chamber, the light is reflected by various mirrors in the switchyard and target area in order to impinge on the target from different directions. Since the length of the overall path from the master oscillator to the target is different for each of the beamlines, optics are used to delay the light in order to ensure all of them reach the center within a few picoseconds of each other. NIF normally directs the laser into the chamber from the top and bottom. The target area and switchyard system can be reconfigured by moving half of the 48 beamlines to alternate positions closer to the equator of the target chamber. One important aspect of any ICF research project is ensuring that experiments can actually be carried out on a timely basis. Previous devices generally had to cool down for many hours to allow the flashlamps and laser glass to regain their shapes after firing (due to thermal expansion), limiting use to one or fewer firings a day. One of the goals for NIF is to reduce this time to less than 4 h in order to allow 700 firings a year.
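Those picosecond arrival tolerances translate directly into millimeter-scale path-length matching across beamlines that are each about 1500 m long, which is what the delay optics mentioned above must trim out. A small illustrative calculation of ours:

# Path-length tolerance implied by picosecond-level beam simultaneity:
# light covers about 0.3 mm per picosecond in vacuum.
C = 2.998e8  # m/s, speed of light

for dt_ps in (1, 5, 10):
    mismatch_mm = C * (dt_ps * 1e-12) * 1e3
    print(f"{dt_ps:>2} ps arrival spread  ->  {mismatch_mm:.2f} mm of optical path")

Matching 192 beam paths of roughly 1500 m to about a millimeter, around a part in a million, is why the trim delays are built into the optics rather than handled electronically.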
The name NIF refers to the goal of igniting the fusion fuel, a long-sought threshold in fusion research. In existing (nonweapon) fusion experiments the heat produced by the fusion reactions rapidly escapes from the plasma, meaning that external heating must be applied continually in order to keep the reactions going. Ignition refers to the point at which the energy given off in the fusion reactions currently underway is high enough to sustain the temperature of the fuel against those losses. This causes a chain reaction that allows the majority of the fuel to undergo a nuclear burn. Ignition is considered a key requirement if fusion power is to ever become practical. NIF is designed primarily to use the indirect-drive method of operation, in which the laser heats a small metal cylinder instead of the capsule inside it. The heat causes the cylinder, known as a hohlraum (German for "hollow room," or cavity), to reemit the energy as intense X-rays, which are more evenly distributed and symmetrical than the original laser beams. Experimental systems, including the OMEGA and Nova lasers, validated this approach through the late 1980s. In the case of the NIF, the large delivered power allows for the use of a much larger target; the baseline pellet design is about 2 mm in diameter, chilled to about 18 K (−255°C) and lined with a layer of frozen DT fuel. The hollow interior also contains a small amount of DT gas. In a typical experiment, the laser will generate 3 MJ of infrared laser energy of a possible 4 MJ. About 1.5 MJ of this is left after conversion to UV, and about 15% of this is lost in the X-ray conversion in the hohlraum. About 15% of the resulting X-rays, about 150 kJ, will be absorbed by the outer layers of the target. The resulting inward-directed compression is expected to compress the fuel in the center of the target to a density of about 1,000 g/cm3 (or 1,000,000 kg/m3); for comparison, lead has a normal density of about 11 g/cm3 (11,340 kg/m3). The pressure is the equivalent of 300 billion atmospheres. It is expected this will cause about 20 MJ of fusion energy to be released, resulting in a net fusion energy gain of about 15 (G = fusion energy/UV laser energy). Improvements in both the laser system and hohlraum design are expected to improve the energy absorbed by the capsule to about 420 kJ, which, in turn, could generate up to 100–150 MJ of fusion energy. However, the baseline design allows for a maximum of about 45 MJ of fusion energy release, due to the design of the target chamber. This is the equivalent of about 11 kg of TNT exploding. Fig. 5.32 is a Sankey diagram of the laser energy to hohlraum X-ray to target capsule energy-coupling efficiency; note that the "laser energy" shown is after conversion to ultraviolet (UV), which loses about 50% of the original IR power. These output energies are still less than the 422 MJ of input energy required to charge the system's capacitors that power the laser amplifiers. The net wall-plug efficiency of NIF (UV laser energy out divided by the energy required to pump the lasers from an external source) is less than 1%, and the total wall-to-fusion efficiency is under
Fig. 5.32 Laser energy-driven fusion hohlraum. (Courtesy: Lawrence Livermore National Laboratory).
10% at its maximum performance. An economical fusion reactor would require that the fusion output be at least an order of magnitude more than this input. Commercial laser fusion systems would use the much more efficient diode-pumped solid-state lasers, where wall-plug efficiencies of 10% have been demonstrated, and efficiencies of 16–18% are expected with advanced concepts under development. NIF is also exploring new types of targets. Previous experiments generally used plastic ablators, typically polystyrene (CH). NIF's targets also are constructed by coating a plastic form with a layer of sputtered beryllium or beryllium–copper alloys and then oxidizing the plastic out of the center. In comparison to traditional plastic targets, beryllium targets offer higher overall implosion efficiencies for the indirect-drive mode, where the incoming energy is in the form of X-rays. Although NIF was primarily designed as an indirect-drive device, the energy in the laser is high enough to be used as a direct-drive system as well, where the laser shines directly on the target. Even at UV wavelengths the power delivered by NIF is estimated to be more than enough to cause ignition, resulting in fusion energy gains of about 40 times, somewhat higher than the indirect-drive system. A more uniform beam layout suitable for direct-drive experiments can be arranged through changes in the switchyard that move half of the beamlines to locations closer to the middle of the target chamber. It has been shown, using scaled implosions on the OMEGA laser and computer simulations, that NIF should also be capable of igniting a capsule using the so-called "polar direct drive" (PDD) configuration, where the target is irradiated directly by the laser, but only from the top and bottom, with no changes to the NIF beamline layout. In this configuration the target suffers either a "pancake" or "cigar" anisotropy on implosion, reducing the maximum temperature at the core.
Other targets, called Saturn targets, are specifically designed to reduce the anisotropy and improve the implosion. They feature a small plastic ring around the "equator" of the target, which quickly vaporizes into a plasma when hit by the laser. Some of the laser light is refracted through this plasma back toward the equator of the target, evening out the heating. Ignition with gains of just over 35 times is thought to be possible using these targets at NIF, producing results almost as good as the fully symmetric direct-drive approach.
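Before leaving NIF's numbers, the energy-coupling chain and wall-plug bookkeeping quoted earlier in this section can be retraced in a few lines. This is a minimal sketch of our own using the chapter's rounded figures; because those figures are approximate, the computed capsule coupling (~190 kJ) comes out somewhat above the ~150 kJ quoted in the text.

# Indirect-drive energy budget, retraced from the rounded numbers in the text.
ir_energy  = 3.0e6    # J of IR in a typical shot (of a possible 4 MJ)
uv_energy  = 1.5e6    # J remaining after conversion to UV (~50% loss)
xray_eff   = 0.85     # ~15% lost converting UV to X-rays in the hohlraum
absorb_eff = 0.15     # ~15% of those X-rays absorbed by the capsule
fusion_out = 20e6     # J, expected fusion release

capsule = uv_energy * xray_eff * absorb_eff
print(f"capsule coupling : {capsule/1e3:.0f} kJ")          # ~190 kJ (text: ~150 kJ)
print(f"target gain G    : {fusion_out/uv_energy:.1f}")    # ~13 (text: 'about 15')

# Wall-plug bookkeeping: the flash lamps draw 422 MJ from the capacitor bank.
bank = 422e6  # J
print(f"capacitor bank   : {bank/3.6e6:.0f} kWh")          # ~117 kWh
print(f"wall-plug eff.   : {uv_energy/bank*100:.2f}%  (UV out / electrical in)")

The sub-1% wall-plug figure is exactly why the text points to diode-pumped solid-state lasers for any commercial system.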
References
[1] B. Zohuri, Plasma Physics and Controlled Thermonuclear Reactions Driven Fusion Energy, first ed., Springer Publishing Company, November 17, 2016.
[2] B. Zohuri, Magnetic Confinement Fusion Driven Thermonuclear Energy, first ed., Springer Publishing Company, February 27, 2017.
[3] B. Zohuri, Inertial Confinement Fusion Driven Thermonuclear Energy, first ed., Springer Publishing Company, January 29, 2017.
[4] A.C. Hayes, G. Jungman, J.C. Solem, P.A. Bradley, R.S. Rundberg, Prompt beta spectroscopy as a diagnostic for mix in ignited NIF capsules, Mod. Phys. Lett. A 21 (13) (2006) 1029.
[5] Inertial Confinement Fusion Program Activities, April 2002. Archived May 11, 2009, at the Wayback Machine.
[6] Inertial Confinement Fusion Program Activities, March 2006. Archived May 11, 2009, at the Wayback Machine.
[7] J. Lindl, B. Hammel, Recent Advances in Indirect Drive ICF Target Physics, 20th IAEA Fusion Energy Conference, Lawrence Livermore National Laboratory, 2004 (retrieved August 23, 2004).
CHAPTER 6
Other electrical power generation energy sources
Globally, as well as in the United States, electricity is produced with diverse energy sources and technologies. Driven by population growth, every country applies its own innovative technologies and draws on many different energy sources to generate the electric power it demands; in the United States this work proceeds under Department of Energy (DOE) guidelines, in collaboration with industry and universities. The sources and technologies have changed over time, and some are used more than others. In this chapter we discuss sources of electrical power generation other than nuclear power plants, since that subject was discussed extensively in Chapters 2 through 4; there, nuclear power was treated as a clean source of energy that meets electricity demand without producing carbon monoxide or carbon dioxide. Here we consider the other sources of energy used to generate electricity.
6.1 Introduction
The three major categories of energy for electricity generation are fossil fuels (coal, natural gas, and petroleum), nuclear energy, and renewable energy sources, as presented in Fig. 6.1 for the United States. Most electricity is generated with steam turbines using fossil fuels, nuclear, biomass, geothermal, and solar thermal energy. Other major electricity generation technologies include gas turbines, hydro turbines, wind turbines, and solar photovoltaics. In this chapter we introduce each of these energy sources following the descriptions provided by the US Energy Information Administration (EIA), to which all credit for this chapter goes; we encourage our readers to visit the EIA website from time to time for updates and for newly introduced, innovative energy sources, since technology in this field is changing rapidly [1]. The EIA collects, analyzes, and disseminates independent and impartial energy information to promote sound policymaking, efficient markets, and public understanding of energy and its interaction with the economy and the environment. EIA provides a wide range of information and data products covering energy production, stocks, demand, imports, exports, and prices and prepares analyses and special reports on topics of current interest.
Fig. 6.1 Source of US electricity generation as of 2017. Note: Electricity generation from utility-scale facilities. (Source: Adapted from US Energy Information Administration, Electric Power Monthly. February 2018, preliminary data).
Per its organization chart (see Fig. 6.2), EIA has four assistant administrators directing program functions for energy statistics, energy analysis, communications, and resource and technology management. EIA celebrated its 30th anniversary in April 2008, and the agency receives funding for its activities through an annual appropriation from Congress. A breakdown of electricity generation by energy source is presented in Fig. 6.3. Generation data consist of both utility and nonutility sources. Nuclear electricity is the electricity generated by the use of the thermal energy released from the fission of nuclear fuel in a reactor. Renewable energy represents the energy resources that are naturally replenishing but flow limited. They are virtually inexhaustible in duration but limited in the amount of energy that is available per unit of time. Renewable energy resources include hydro (conventional hydroelectric power), geothermal, solar, tidal action, ocean thermal, wave action, wind, and biomass. Conventional thermal electricity represents the electricity generated by an electric power plant using coal, petroleum, or gas as its source of energy. Hydroelectric pumped storage represents the hydroelectricity that is generated during peak loads by using water previously pumped into an elevated storage reservoir
Fig. 6.2 US Energy Information Administration organization chart.
Fig. 6.3 Breakdown of electricity generation by energy source.
during off-peak periods when excess generating capacity is available to do so. When additional generating capacity is needed, the water can be released from the reservoir through a conduit to turbine generators located in a power plant at a lower level. In the hydroelectric pumped storage calculation, we consider the pumped-storage facility's production minus the energy used for pumping. Hydroelectricity represents the electricity generated by an electric power plant whose turbines are driven by falling water. It includes electric utility and industrial generation of hydroelectricity, unless otherwise specified. Generation is reported on a net basis, that is, on the amount of electric energy generated after the electric energy consumed by station auxiliaries and the losses in the transformers that are considered integral parts of the station are deducted. Hydroelectric capacity excludes hydroelectric pumped storage capacity where separately reported. Biomass—wood and waste—represents the renewable energy resource constituted by organic, nonfossil material of biological origin. Fig. 6.4 shows the countries with the highest electricity generation.
Fig. 6.4 Countries with the highest electricity generation.
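The pumped-storage accounting convention just described ("production minus the energy used for pumping") is easy to make explicit. A minimal sketch with made-up numbers, not EIA data:

# EIA-style net-generation accounting for a pumped-storage facility:
# output is credited net of the electricity consumed for pumping.
def net_pumped_storage(generated_mwh: float, pumping_mwh: float) -> float:
    """Net generation (MWh) credited to a pumped-storage plant."""
    return generated_mwh - pumping_mwh

# Illustrative values only: round-trip losses usually make the result negative.
print(net_pumped_storage(generated_mwh=800.0, pumping_mwh=1_000.0))  # -200.0

Because of round-trip losses, such a facility typically consumes more energy than it returns, which is why its net generation is often negative in the statistics even though it provides valuable peak capacity.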
As we stated at the beginning, the three major categories of energy for electricity generation are fossil fuels (coal, natural gas, and petroleum), nuclear energy, and renewable energy sources.
1. Fossil fuels are the largest sources of energy for electricity generation.
a. Natural gas was the largest source—about 32%—of US electricity generation in 2017. Natural gas is used in steam turbines and gas turbines to generate electricity.
b. Coal was the second largest energy source for US electricity generation in 2017—about 30%. Nearly all coal-fired power plants use steam turbines. A few coal-fired power plants convert coal to a gas for use in a gas turbine to generate electricity.
c. Petroleum was the source of less than 1% of US electricity generation in 2017. Residual fuel oil and petroleum coke are used in steam turbines. Distillate—or diesel—fuel oil is used in diesel-engine generators. Residual fuel oil and distillates can also be burned in gas turbines.
2. Nuclear energy provides one-fifth of US electricity.
a. Nuclear energy was the source of about 20% of US electricity generation in 2017. Nuclear power plants use steam turbines to produce electricity from nuclear fission. See Fig. 6.5, which presents US electricity generation by major energy source between 1950 and 2017.
3. Renewable energy sources provide nearly 20% of US electricity. A variety of renewable energy sources are used to generate electricity and were the source of about 17% of total US electricity generation in 2017.
a. Hydropower plants produced about 7% of total US electricity generation and about 44% of electricity generation from renewable energy in 2017. Hydropower plants use flowing water to spin a turbine connected to a generator.
b. Wind energy was the source of about 6% of total US electricity generation and about 37% of electricity generation from renewable energy in 2017. Wind turbines convert wind energy into electricity.
c. Biomass, the source of about 2% of total US electricity generation in 2017, is burned directly in steam-electric power plants, or it can be converted to a gas that can be burned in steam generators, gas turbines, or internal combustion engine generators.
d. Solar energy provided about 1% of total US electricity in 2017. Photovoltaic (PV) and solar-thermal power are the two main types of solar electricity
[Fig. 6.5 chart: US electricity generation in billion kilowatthours (0–4,500), 1950–2010, by source: coal, natural gas, nuclear, renewables, and petroleum and other.]
Fig. 6.5 Electricity generation by major energy source (1950–2017). Note: Electricity generation from utility-scale facilities. (Source: Adapted from US Energy Information Administration, Monthly Energy Review (Table 7.2a). March 2018, preliminary data for 2017).
generation technologies. PV conversion produces electricity directly from sunlight in a photovoltaic cell. Most solar-thermal power systems use steam turbines to generate electricity. See Fig. 6.6, which presents US electricity generation from renewable energy sources between 1950 and 2017.
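The percentages quoted in the list above can be cross-checked against one another. The short sketch below is ours; all inputs are the rounded shares from the text. It verifies that the major categories account for roughly all generation and that the hydro figures imply a renewables total close to the quoted ~17%.

# Consistency check on the rounded 2017 US generation shares quoted above.
mix = {"natural gas": 32, "coal": 30, "nuclear": 20, "renewables": 17, "petroleum": 1}
print(f"sum of major categories: {sum(mix.values())}%")   # ~100%

# Hydro is ~7% of all generation and ~44% of renewable generation,
# which implies a renewables total of about 7 / 0.44 = 15.9%.
implied_renewables = 7 / 0.44
print(f"implied renewables share: {implied_renewables:.1f}%")

The small spread between 15.9% and 17% simply reflects rounding in the source figures.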
6.2 What is natural gas?
Natural gas occurs deep beneath the earth's surface. Natural gas consists mainly of methane, a compound with one carbon atom and four hydrogen atoms. Natural gas also contains small amounts of hydrocarbon gas liquids and nonhydrocarbon gases. We use natural gas as a fuel and to make materials and chemicals. Natural gas and crude oil are mixtures of different hydrocarbons. Hydrocarbons are molecules of carbon and hydrogen in various combinations. Hydrocarbon gas liquids
[Fig. 6.6 chart: US electricity generation in billion kilowatthours (0–550), 1950–2010, by renewable source: hydroelectric, biomass, geothermal, wind, and solar.]
Fig. 6.6 Electricity generation by renewable energy source (1950–2017). Note: Electricity generation from utility-scale facilities. Hydroelectric is conventional hydropower. (Source: Adapted from US Energy Information Administration, Monthly Energy Review (Table 7.2a). March 2018, preliminary data for 2017).
(HGL) are hydrocarbons that occur as gases at atmospheric pressure and as liquids under higher pressures. HGL can also be liquefied by cooling. The specific pressures and temperatures at which the gases liquefy vary by the type of HGL. HGL may be described as being light or heavy according to the number of carbon atoms and hydrogen atoms in an HGL molecule. HGL are categorized chemically as:
• Alkanes or paraffins
• Ethane—C2H6
• Propane—C3H8
• Butanes—normal butane and isobutane—C4H10
• Natural gasoline or pentanes plus—C5H12 and heavier
• Alkenes or olefins
• Ethylene—C2H4
• Propylene—C3H6
• Butylene and isobutylene—C4H8
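For readers who want this taxonomy in machine-readable form, the two chemical families above can be captured in a small lookup table. A minimal sketch of ours, with formulas written as plain text:

# The HGL taxonomy above, encoded as a small lookup table.
HGL = {
    "alkanes (paraffins)": {
        "ethane": "C2H6",
        "propane": "C3H8",
        "butanes (normal butane and isobutane)": "C4H10",
        "natural gasoline (pentanes plus)": "C5H12 and heavier",
    },
    "alkenes (olefins)": {
        "ethylene": "C2H4",
        "propylene": "C3H6",
        "butylene and isobutylene": "C4H8",
    },
}

for family, members in HGL.items():
    print(family)
    for name, formula in members.items():
        print(f"  {name}: {formula}")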
1. Hydrocarbon gas liquids are from natural gas and crude oil. HGL are found in raw natural gas and crude oil, and they are extracted when natural gas is processed at natural gas processing plants and when crude oil is refined into petroleum products. Natural gas plant liquids (NGPL), which account for most of HGL production in the United States, fall solely into the alkanes category. Refinery production accounts for the remainder of US alkanes production, and it accounts for all of the olefins production data that are published by EIA. Greater volumes of olefins are produced at petrochemical plants from HGL and heavier feedstock. EIA does not collect or report petrochemical production data (see Fig. 6.7).
[Fig. 6.7 chart: US production in million barrels per day (0–4.5), 2000–2016, by source: petroleum refining and natural gas processing.]
Fig. 6.7 US hydrocarbon gas liquids production by source (2000–2016). (Source: Adapted from US Energy Information Administration, Petroleum Supply Annual. September 2017).
2. Hydrocarbon gas liquids have many uses. Because HGL straddle the gas/liquid boundary, their versatility and high energy density in liquid form make them useful for many purposes:
• Feedstock in petrochemical plants to make chemicals, plastics, and synthetic rubber
• Fuels for heating, cooking, and drying
• Fuels for transportation
• Additives for motor gasoline production
• Diluent (a diluting or thinning agent) for transportation of heavy crude oil
In 2016, total HGL use accounted for about 13% of total US petroleum consumption (Fig. 6.8). HGL were initially considered a nuisance product but are now high-value products. Shortly before World War I, a problem in a natural gas pipeline occurred. A section of a pipeline in a natural gas field ran under a cold stream, and the low temperature caused liquids to form and sometimes block the flow of natural gas in the pipeline. This experience led engineers to treat natural gas before it entered natural gas transmission
[Fig. 6.8 pie chart, total = 4.14 million barrels per day: propane 36%, ethane 31%, natural gasoline 10%, normal butane 8%, isobutane 8%, refinery olefins 7%.]
Fig. 6.8 US hydrocarbon gas liquids production by type, 2016. Note: Refinery olefins include butylene, ethylene, isobutylene, and propylene. (Source: Adapted from US Energy Information Administration, Petroleum Supply Annual. September 2017).
[Fig. 6.9 diagram: NGPL + LRG = HGL = NGL + refinery olefins. Oil and natural gas wells feed a field/lease separator, which yields wet gas and crude oil/lease condensate (plus water). Wet gas goes to a natural gas processing plant/fractionator, producing dry gas and natural gas plant liquids (NGPL: ethane, propane, normal butane, isobutane, natural gasoline). Crude oil and plant condensate go to a refinery/condensate splitter, producing liquefied refinery gases (LRG: the same alkanes plus refinery olefins—ethylene, propylene, butylene, isobutylene). Natural gas liquids (NGL) comprise the alkanes from both streams.]
Fig. 6.9 Source, production, and types of hydrocarbon gas liquids. (Source: Adapted from US Energy Information Administration).
pipelines. Natural gas processing facilities were built to cool and compress natural gas, which separated the hydrocarbon gases as liquids from the natural gas. The HGL then became marketable commodities as fuels and as feedstock for making other petroleum products and petrochemicals. Fig. 6.9 presents all of the sources, production plants, and types of hydrocarbon gas liquids. For further information, refer to the EIA website [2].
6.2.1 How did natural gas form?
Millions of years ago, the remains of plants and animals (diatoms) decayed and built up in thick layers, sometimes mixed with sand and silt. Over time, these layers were buried under sand, silt, and rock. Pressure and heat changed some of this organic material into
[Fig. 6.10 panels: 300–400 million years ago, tiny sea plants and animals died and were buried on the ocean floor, covered over time by layers of silt and sand; 50–100 million years ago, the deeply buried remains were turned into oil and gas by enormous heat and pressure; today, wells are drilled down through layers of sand, silt, and rock to reach the rock formations that contain oil and gas deposits.]
Fig. 6.10 Petroleum and natural gas formation depiction.
coal, some into oil (petroleum), and some into natural gas. In some places, the natural gas moved into large cracks and spaces between layers of overlying rock. In other places, natural gas occurs in the tiny pores (spaces) within some formations of shale, sandstone, and other types of sedimentary rock, where it is referred to as shale gas or tight gas. Natural gas also occurs in coal deposits, where it is called coalbed methane. See Fig. 6.10 for the formation of petroleum and natural gas in nature.
6.2.2 How do we get natural gas?
The search for natural gas begins with geologists who study the structure and processes of the earth. They locate the types of rock that are likely to contain natural gas deposits. Some of these areas are on land, and some are offshore and deep under the ocean floor. Geologists often use seismic surveys on land and in the ocean to find the right places to drill wells. Seismic surveys on land use echoes from a vibration source at the surface of the earth, usually a vibrating pad under a special type of truck. Geologists can also use small amounts of explosives as a vibration source. Seismic surveys conducted in the ocean rely on blasts of sound that create sonic waves to explore the geology beneath the ocean floor. If a site seems promising, an exploratory well is drilled and tested. Once a formation is proven to be economic for production, one or more production (or development) wells are drilled down into the formation, and natural gas flows up through the wells to the surface. In the United States and a few other countries, natural gas is produced directly from shale and other types of rock formations that contain natural gas in pores within the rock. The rock formation is fractured by forcing water, chemicals, and sand down a well. This releases the natural gas from the rock, and the natural gas flows up the well to the surface. Wells drilled to produce oil may also produce associated natural gas. Fig. 6.11 shows operators preparing a hole for explosive charges used in seismic exploration. Because natural gas is colorless, odorless, and tasteless, distributors add mercaptan (a chemical that smells like sulfur) to give natural gas a distinct unpleasant odor (it
Fig. 6.11 Gas or oil exploration. (Source: Stock photography (copyrighted)).
smells like rotten eggs). This added odor serves as a safety measure to help detect leaks in natural gas pipelines. The natural gas withdrawn from a well is called wet natural gas because it usually contains liquid hydrocarbons and nonhydrocarbon gases. Methane and other useful gases are separated from the wet natural gas near the site of the well or at a natural gas processing plant. The processed natural gas is called dry- or consumer-grade natural gas. This natural gas is sent through pipelines to underground storage fields or to distribution companies and then to consumers. Coal may contain coalbed methane, which can be captured when coal is mined. Coalbed methane can be added to natural gas pipelines without any special treatment. Another source of methane is biogas, which forms in landfills and in vessels called digesters. Most of the natural gas consumed in the United States is produced in the United States. Some natural gas is imported from Canada and Mexico in pipelines. A small amount of natural gas is also imported as liquefied natural gas.
6.3 Coal
Coal was the second largest energy source for US electricity generation in 2017—about 30%. Nearly all coal-fired power plants use steam turbines. A few coal-fired power plants
[Fig. 6.12 panels, "How coal was formed": 300 million years ago (before the dinosaurs), many giant plants died in swamps; over millions of years, the dead plants were buried under water and dirt; by 100 million years ago, heat and pressure had turned the dead plants into coal beneath rocks and dirt.]
Fig. 6.12 Illustration of the coal formation process. (Source: Adapted from National Energy Education Development Project (public domain)).
convert coal to a gas for use in a gas turbine to generate electricity. Coal is a combustible black or brownish-black sedimentary rock with a high amount of carbon and hydrocarbons. Coal is classified as a nonrenewable energy source because it takes millions of years to form. Coal contains the energy stored by plants that lived hundreds of millions of years ago in swampy forests. The plants were covered by layers of dirt and rock over millions of years. The resulting pressure and heat turned the plants into the substance we call coal (see Fig. 6.12).
6.3.1 Types of coal
Coal is classified into four main types, or ranks: anthracite, bituminous, subbituminous, and lignite. The ranking depends on the types and amount of carbon the coal contains and on the amount of heat energy the coal can produce. The rank of a coal deposit is determined by the amount of pressure and heat that acted on the plants over time. Anthracite contains 86%–97% carbon and generally has the highest heating value of all ranks of coal. Anthracite accounted for less than 1% of the coal mined in the United States in 2016. All of the anthracite mines in the United States are in northeastern Pennsylvania. Anthracite is mainly used by the metals industry. Bituminous coal contains 45%–86% carbon. Bituminous coal in the United States is between 100 million and 300 million years old. Bituminous coal is the most abundant rank of coal found in the United States, and it accounted for 44% of total US coal production in 2016. Bituminous coal is used to generate electricity and is an important fuel and raw material for making iron and steel. West Virginia, Kentucky, Illinois, Pennsylvania, and Indiana were the five main bituminous coal-producing states in 2016, accounting for 74% of total bituminous production.
Subbituminous coal typically contains 35%–45% carbon, and it has a lower heating value than bituminous coal. Most subbituminous coal in the United States is at least 100 million years old. About 45% of total US coal production in 2016 was subbituminous, and nearly 90% was produced in Wyoming. Lignite contains 25%–35% carbon and has the lowest energy content of all coal ranks. Lignite coal deposits tend to be relatively young and were not subjected to extreme heat or pressure. Lignite is crumbly and has high moisture content, which contributes to its low heating value. Lignite accounted for 10% of total US coal production in 2016, and about 53% was mined in Texas and 38% in North Dakota where it is mostly used to generate electricity. A facility in North Dakota also converts lignite to synthetic natural gas and pipes it to natural gas consumers in the eastern United States.
6.3.2 Coal explained: coal prices and outlook
The price of coal varies by coal rank and grade, mining method, and geographic region. Coal is classified into four main ranks (lignite, subbituminous, bituminous, and anthracite), depending on the amounts and types of carbon it contains and the amount of heat energy it can produce. Prices are generally higher for coal with high heat content. The average annual sale prices of coal at mines producing each of the four major ranks of coal in 2015, in dollars per short ton (2,000 pounds), were:
• Bituminous—$51.57
• Subbituminous—$14.63
• Lignite—$22.36
• Anthracite—$97.91
Coal prices at surface mines are generally lower than prices at underground mines. In locations where coal beds are thick and near the surface, such as in Wyoming, mining costs and coal prices tend to be lower than in locations where the beds are thinner and deeper, such as in Appalachia. The higher cost of coal from underground mines reflects the more difficult mining conditions and the need for more miners. When coal is burned, it releases impurities that can combine with oxygen to form sulfur dioxide (SO2). When SO2 combines with moisture in the atmosphere, it produces acid rain that can harm forests and lakes. Because of environmental regulations limiting sulfur emissions, low-sulfur coals can command a higher price than high-sulfur coals.
6.3.3 Coal transportation costs can be significant
Once coal is mined, it must be transported to consumers. Transportation costs add to the delivered price of coal. In some cases, such as in long-distance shipments of Wyoming coal to power plants in the East, transportation costs are more than the price of coal at the mine. Most coal is transported by train, barge, truck, or a combination of these modes. All of these transportation modes use diesel fuel. Increases in oil and diesel fuel prices can
Fig. 6.13 Average annual prices of coal delivered to end-use sectors 2008–2015. (Source: Adapted from US Energy Information Administration, Annual Coal Report. November 2016).
significantly affect the cost of transportation, which affects the final delivered price of coal (see Fig. 6.13). In 2015, the average sales price of coal at the mine was $31.83 per ton, and the average delivered coal price to the electric power sector was $42.58 per ton, resulting in an average transportation cost of $10.75 per ton or about 25% of the total delivered price.
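The 2015 figures just quoted make the transportation share a one-line calculation. A quick sketch of ours, using only the numbers from the text:

# Transportation's share of the delivered coal price (2015 figures from the text).
mine_price      = 31.83   # $/ton at the mine
delivered_price = 42.58   # $/ton delivered to the electric power sector

transport = delivered_price - mine_price
print(f"transport cost : ${transport:.2f}/ton")                   # $10.75
print(f"share of price : {transport/delivered_price*100:.0f}%")   # ~25%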
6.3.4 Most coal is purchased for power plants
About 93% of the coal consumed in the United States is used to generate electricity. In 2015, about 33% of total US electricity generation was from coal. When based only on the cost per million British thermal units (Btu), coal has been the least expensive fossil fuel used to generate electricity since 1976.
6.3.5 The price of coal can depend on the type of transaction
Most of the coal sold for electric power generation is sold through long-term contracts. Supplies are supplemented with spot purchases of coal. Spot purchases are shipments of fuel purchased for delivery within one year. Spot prices can fluctuate based on short-term market conditions, while contract prices tend to be more stable. In 2015, the average delivered price of coal to the electric power sector was $42.58 per ton, which includes both spot and contract purchases.
6.3.6 A more expensive coal is used to make iron and steel
In addition to producing electricity, coal is also used to produce coal coke, or coke, which is used in smelting iron ore to make steel. Coke is made by baking certain types of coal in special high-temperature ovens. The coal does not contact the air until almost all of the impurities are released as gases. The resulting coke is mostly carbon. Coal used to make coke must be low in sulfur and requires more thorough cleaning than coal used in power plants, which makes the coal more expensive (see Fig. 6.14). In 2015, the average delivered price of coal used to make coke was about $118.69 per ton—about three times higher than the price of coal delivered to the electric power sector.
6.3.7 The outlook for coal prices in the United States
In the Annual Energy Outlook 2017 (AEO2017), the US Energy Information Administration (EIA) projects that coal prices will generally increase through the year 2050.
Fig. 6.14 Coal application to make iron and steel. (Source: stock photography, copyrighted.)
The amount by which coal prices increase depends on projections for coal demand and coal mining productivity. The implementation of the US Environmental Protection Agency's Clean Power Plan is a major factor in the projections for coal demand in AEO2017.
6.4 Petroleum
Petroleum was the source of less than 1% of US electricity generation in 2017. Residual fuel oil and petroleum coke are used in steam turbines. Distillate—or diesel—fuel oil is used in diesel-engine generators. Residual fuel oil and distillates can also be burned in gas turbines.
6.4.1 What is crude oil and what are petroleum products?
Crude oil is a mixture of hydrocarbons that formed from plants and animals that lived millions of years ago. Crude oil is a fossil fuel, and it exists in liquid form in underground pools or reservoirs, in tiny spaces within sedimentary rocks, and near the surface in tar (or oil) sands. Petroleum products are fuels made from crude oil and other hydrocarbons contained in natural gas. Petroleum products can also be made from coal, natural gas, and biomass.
6.4.2 Products made from crude oil
After crude oil is removed from the ground, it is sent to a refinery, where different parts of the crude oil are separated into usable petroleum products. These petroleum products include gasoline, distillates such as diesel fuel and heating oil, jet fuel, petrochemical feedstocks, waxes, lubricating oils, and asphalt (Fig. 6.15). A US 42-gallon barrel of crude oil yields about 45 gallons of petroleum products in US refineries because of refinery processing gain. This increase in volume is similar to what happens to popcorn when it is popped.
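The processing gain just described is easy to quantify; a small illustrative sketch (the 42- and 45-gallon figures are the ones from the text):

```python
# Refinery processing gain: product volume out exceeds crude volume in,
# because many refined products are less dense than the crude input.
crude_in_gal = 42.0      # one US barrel of crude oil
products_out_gal = 45.0  # approximate product yield quoted above

gain = products_out_gal - crude_in_gal
print(f"gain: {gain:.0f} gal (~{gain / crude_in_gal:.0%} volume increase)")
# gain: 3 gal (~7% volume increase)
```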
6.4.3 Nuclear energy provides one-fifth of US electricity
Nuclear energy was the source of about 20% of US electricity generation in 2017. Nuclear power plants use steam turbines to produce electricity from nuclear fission. Nuclear energy is energy in the core of an atom. Atoms are the tiny particles in the molecules that make up gases, liquids, and solids. Atoms themselves are made up of three particles called protons, neutrons, and electrons. An atom has a nucleus (or core) containing protons and neutrons, which is surrounded by electrons. Protons carry a positive electrical charge, and electrons carry a negative electrical charge. Neutrons do not have an electrical charge. Enormous energy is present in the bonds that hold the nucleus together. This nuclear energy can be released when those bonds are broken. The bonds can be broken through nuclear fission, and this energy can be used to produce electricity.
Fig. 6.15 Petroleum products made from a barrel of crude oil, 2017 (gallons).
The use of heat pipes in higher power (>150 W) cooling applications has been limited to custom applications requiring either low thermal resistance or a severely restricted enclosure area. The cost of these larger diameter heat pipes was high due to a limited number of manufacturers and handmade assembly times.

A new and valuable addition to the heat transfer community, a heat transport device known as the loop heat pipe (LHP), is discussed in this work. This body of research is very important as the LHP is becoming increasingly prevalent in heat transfer applications. U.S. commercial use of the LHP will begin on the next generation of communications satellites being developed and built by Hughes Space and Communications Company. These satellites take advantage of the passive nature of the LHP, which requires no external means of pumping, along with its ability to transport large quantities of heat over significant distances. This device comes to the heat transfer community at an ideal time, as the aerospace industry is demanding higher and higher power payloads, and this increasing power must be handled by the most efficient means possible. The LHP is also being investigated for use in ground-based applications such as solar collectors and computer cooling. This dissertation focuses on experimentation conducted with a space-based satellite application in mind; however, the results are applicable to other implementations as well.

The LHP is a descendant of the conventional heat pipe. It retains the advantages of the conventional heat pipe while overcoming some of the conventional heat pipe's inherent disadvantages. This dissertation serves as a complete body of work on this new device: from background and a literature review on the development and history of the LHP, to important computer simulation and experimental work, both ground-based and space-based, performed on the LHP in an effort to gain a thorough
understanding of the workings of the LHP and to investigate novel applications for it, such as the ability to control the temperature of an entire spacecraft payload with a minute fraction of the heater power once required. The LHP introduces important new opportunities to the heat transfer community, and the research presented here furthers the knowledge and understanding of this breakthrough device.

The applications of heat pipes can be as diverse as their structures and shapes. These unique heat transfer devices are used in many fields of industry and have played very important roles in everything from simple heat exchangers to electronics, space applications, nuclear reactors, and oil lines, and even in constructing ice pontoons through marshes, the foundations of drilling towers, and roads in permafrost regions. Reference [16] surveys a wide variety of industrial heat pipe applications and the companies and manufacturers involved in the unique design and application of such devices. For example, within the United States, development is in progress on a drill for ultra-deep boring that takes the form of a miniature fast-neutron reactor cooled by means of heat pipes. Another application can be seen in centrifugal heat pipes used for cooling asynchronous motors with short-circuited cast rotors; such motors are used mainly in mechanical engineering. With the use of centrifugal heat pipes in a rotor, it has become possible to control the motor speed electrically, eliminating the need for complex transmissions and gear trains [16].

Investigations are currently in progress in the USSR to explore the possible use of heat pipes to cool transformers, both air-filled and oil-filled, miniature and high-power, and for cooling of electrical busbars. The West German firm Brown Boveri Corporation has developed a system of electronic devices with heat pipes:
1. Thyristor systems of power greater than 1 kW; the thermal resistance R of the heat pipe is 0.035 K/W, and the cooling air velocity is v = 6 m/s.
2. A device for a portable current rectifier system (700 W; thermal resistance 0.055 K/W; cooling air velocity v = 6 m/s).
Heat pipes have proved adaptable to the incorporation of electronic equipment, thereby increasing the cooling effect by factors of 10. Products of the British SRDE laboratory (Signals Research and Development Establishment) include the following: heat pipes in the form of planar electrical insulators, a heat pipe of very small diameter, and various combinations of heat pipes and thermally insulating modules.

Very interesting possibilities have opened up for producing static batteries and thermal energy converters based on heat pipes, thermal diodes, vapor chambers, etc., together with materials that change their aggregate state (fused salts, metals, sulfur with halogens, and so on; operating temperature 500–800 °C). The material of the heat pipes is stainless steel, and the heat-transfer agent is sodium. The thermal energy stored can be up to 10–100 kW·h. High-temperature heat pipes using alkali metals can be employed successfully as electrodes in plasma generators.
In the energy industry there is a trend toward building electric stations using solar energy and hot springs. At present, an electric station with a power of at least 100 kW is under construction in the southern United States; it takes the form of a battery of high-temperature heat pipes, heated by the sun, working into water-vapor generators or thermoelectric converters. Such batteries of heat pipes, linked to heat-storage units, will make it possible to generate electrical energy around the clock. There are also plans to use heat pipes as electric cables and distribution lines.

On October 4, 1974, a sounding rocket (the Black Brant sounding rocket) was launched into space carrying heat pipes made by NASA/Goddard Space Flight Center, ESRO, GFW, Hughes Aircraft Company, and NASA/Ames:
A. ESRO constructed two aluminum heat pipes of length 885 mm and diameter 5 mm. The wick was a single layer of stainless steel mesh with an artery diameter of 0.5 mm. One pipe was filled with ammonia and the other with acetone. The acetone heat pipe transmitted 8.4 W of power, and the ammonia pipe transmitted 21 W. The heat sink was an aluminum block.
B. GFW (Gesellschaft für Weltraumforschung) of the West German Ministry of Technology constructed a flat aluminum heat pipe in the form of a disk of diameter 150 mm and a titanium heat pipe of length 600 mm, charged with methanol, with its end face joined to the disk by an aluminum tube. The flat heat pipe was filled with acetone, and the other end was joined to a heat-storage device (a canister with a molten substance, eicosane, with a fusion temperature of 35 °C). This system transmitted 26 W of power.
C. The Hughes Aircraft Company constructed two flexible heat pipes made of stainless steel (6.4 mm in diameter; 270 mm in length). The working liquid is methanol, and the wick is a metal mesh.
D. NASA/Ames constructed two stainless steel heat pipes of length 910 mm and diameter 12.7 mm. The liquid is methanol, and the inert gas is nitrogen. The wick is a screw thread on the body, and the artery is a wafer of metallic felt. This kind of artery is insensitive to the presence of noncondensable gas (NCG).
E. NASA constructed a cryogenic heat pipe made of aluminum with longitudinal channels, of length 910 mm and diameter 16 mm, charged with methanol.
Thus, in the international experiment of October 4, 1974, the organizations NASA/GSFC (Grumman and TRW), NASA/Ames (Hughes), Hughes (Hughes), ESRO (the IKE Institute in Stuttgart), and GFW (Dornier) took part in the testing of heat pipes in space. Of these, Grumman constructed five different groups of heat pipes, and TRW constructed three. In addition to the sounding rockets, NASA has used a number of satellites for testing heat pipes, to evaluate the effect of long-term weightless conditions on heat-pipe parameters (the spacecraft Skylab, OAO-III, ATS-6, CTS, etc.).

The French National Center for Space Research (CNES), independently of the American and European space centers, developed and operated a program of
space experiments with heat pipes constructed by the Aerospatiale and SABCA companies. In November 1974, the French sounding rocket ERIDAN 214 was launched, carrying a radiator of heat pipes. The aims of the experiment were to verify the operational capability of heat pipes under weightless conditions, to verify that the heat pipes would be ready to operate at the start of a rocket flight, and to select various heat-pipe structures for spacecraft equipment. Three types of heat pipe were investigated:
1. A curved heat pipe made by SABCA, of length 560 mm and diameter 3.2 mm, made of stainless steel, the wick being a stainless steel mesh, with ammonia as the heat-transfer agent. The transmitted power was 4 W. The pipe was flexible.
2. A heat pipe made by the CENG organization (the atomic center in Grenoble), of length 270 mm and diameter 5 mm, made of copper, with a wick made of sintered bronze powder. The heat-transfer agent was water. The transmitted power was 20 W.
3. A SABCA heat pipe, similar to No. 1, but straight. The transmitted power was 5 W.
The heat sink was a box with a variable-phase fusible substance, n-octadecane, with Tf = 28.5 °C. The energy source was an electric battery with U = 27 V. The total weight of the experimental equipment was 2.3 kg.
These investigations point very clearly to positive gains at present, and we can confidently assert that heat pipes will find wide applications in space in the near future. For example, the United States plans to use heat pipes for thermal control and thermal protection of the reusable shuttle and also for the Spacelab space laboratory. For these, the heat-sensitive equipment will be located in boxes or canisters within which the temperature will be held constant by means of heat pipes located in the walls of the enclosure. The references provide a vast variety of heat pipe applications in present industry and their future trends.
10.10 Summary
Heat pipes
General
A heat pipe is a passive energy recovery heat exchanger that has the appearance of a common plate-finned water coil, except that the tubes are not interconnected. Additionally, it is divided into two sections by a sealed partition. Hot air passes through one side (the evaporator) and is cooled, while cooler air passes through the other side (the condenser). While heat pipes are sensible heat transfer exchangers, if the air conditions are such that condensation occurs on the fins, there can be some latent heat transfer and improved efficiency. Heat pipes are tubes that have a capillary wick inside running the length of the tube; they are evacuated, filled with a refrigerant as the working fluid, and permanently sealed. The working fluid is selected to meet the desired temperature conditions and
is usually a Class I refrigerant. Fins are similar to those of conventional coils—corrugated plate, plain plate, or spiral design. Tube and fin spacing are selected for appropriate pressure drop at the design face velocity. HVAC systems typically use copper heat pipes with aluminum fins; other materials are available.
Advantages:
• Passive heat exchange with no moving parts.
• Relatively space efficient.
• The cooling or heating equipment size can be reduced in some cases.
• The moisture removal capacity of existing cooling equipment can be improved.
• No cross contamination between air streams.
Disadvantages:
• Adds to the first cost and to the fan power needed to overcome its resistance.
• Requires that the two air streams be adjacent to each other.
• Requires that the air streams be relatively clean; filtration may be required.
Applications
Heat pipe heat exchanger enhancement can improve system latent capacity. For example, a 1 °F dry-bulb drop in air entering a cooling coil can increase the latent capacity by about 3%. The heat pipe's transfer of heat directly from the entering air to the low-temperature air leaving the cooling coil saves both cooling and reheating energy. It can also be used to precool or preheat incoming outdoor air with exhaust air from the conditioned spaces.
Best applications
• Where lower relative humidity is an advantage for comfort or process reasons, the use of a heat pipe can help. A heat pipe used between the warm air entering the cooling coil and the cool air leaving the coil transfers sensible heat to the cold exiting air, thereby reducing or even eliminating the reheat needs. The heat pipe also precools the air before it reaches the cooling coil, increasing the latent capacity and possibly lowering the system cooling energy use.
• Projects that require a large percentage of outdoor air and have the exhaust air duct in close proximity to the intake can increase system efficiency by transferring heat in the exhaust to either precool or preheat the incoming air.
Possible applications
• Use of a dry heat pipe coupled with a heat pump in humid climate areas.
• Heat pipe heat exchanger enhancement used with a single-path or dual-path system in a supermarket application.
• Existing buildings where codes require increased outdoor air intake, or buildings with "sick building" syndrome where the amount of outdoor air intake must be increased.
• New buildings where the required amount of ventilation air causes excess loads or where the desired equipment does not have sufficient latent capacity.
Applications to avoid
• Where the intake or exhaust air ducts must be rerouted extensively, the benefits are likely not to offset the higher fan energy and first cost.
• Use of heat pipe sprays without careful water treatment. Corrosion, scale, and fouling of the heat pipe where a wetted condition can occur need to be addressed carefully.
Technology types (resource)
Hot air, the heat source, flows over the evaporator side, is cooled, and evaporates the working fluid. Cooler air, the heat sink, flows over the condenser side, is heated, and condenses the working fluid. The vapor pressure difference drives the evaporated vapor to the condenser end, and the condensed liquid is wicked back to the evaporator by capillary action. Performance is affected by the orientation from horizontal: operating the heat pipe on a slope with the hot (evaporator) end below horizontal improves the liquid flow back to the evaporator. Heat pipes can be applied in parallel or in series.
Efficiency
Heat pipes are typically applied with air face velocities in the 450–550 feet per minute range, with 4–8 rows deep and 14 fins per inch, and have an effectiveness of 45%–65%. For example, if entering air at 77 °F is cooled by the heat pipe evaporator to 70 °F and the air off the cooling coil is reheated from 55 °F to 65 °F by the condenser section, the effectiveness is (65 − 55)/(77 − 55) = 45%; a small computational sketch of this calculation follows the manufacturer list below. As the number of rows increases, effectiveness increases, but at a declining rate; for example, doubling the rows of a 48%-effective heat pipe increases the effectiveness to 65%.
Tilt control can be used to:
• Change operation for seasonal changeover.
• Modulate capacity to prevent overheating or overcooling of supply air.
• Decrease effectiveness to prevent frost formation at low outdoor air temperatures.
Tilt control (6° maximum) involves pivoting the exchanger about its base at the center with a temperature-actuated tilt controller at one end. Face and bypass dampers can also be used.
Manufacturers
1. American Heat Pipes, Inc.
6914 E. Fowler Ave. Suite E
Tampa, FL 33617
1-800-727-6511
2. Dectron Inc
4300 Blvd. Poirier
Montreal, PQ H4R 2C5 Canada
(514) 334-9609
[email protected]
3. Des Champs Laboratories Inc
P.O. Box 220
Douglas Way
Natural Bridges Station, VA 24579
(703) 291-1111
4. EcoTech Consultants, Inc.
3466 Holcombe Bridge Road Suite 1000
Norcross, GA 30092
(404) 723-6564
5. Heat Pipe Technology Inc
P.O. Box 999
Alachua, FL 32615-0999
1-800-393-2041
6. Munters Dry Cool
16900 Jordan Rd.
Selma, TX 78154-1272
1-800-229-8557
[email protected]
7. Nautica Dehumidifiers, Inc.
9 East Carver St.
Huntington, NY 11743
(516) 351-8249
[email protected]
8. Octagon Air Systems
1724 Koppers Road
Conley, GA 30288
(404) 609-8881
9. Power-Save International
P.O. Box 880
Cottage Grove, OR 97424
1-800-432-5560
10. Seasons 4 Inc.
4500 Industrial Access Road
Douglasville, GA 30134
(770) 489-0716
11. Temprite Industries
1555 Hawthorne Lane
West Chicago, IL 60185
1-800-552-9300
12. Venmar CES
2525 Wentz Ave.
Saskatoon, SK S7K 2K9 Canada 1-800-667-3717 [email protected]
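Returning to the effectiveness example given under Efficiency above, the calculation can be sketched as a small Python function (the function and argument names are ours; the temperatures are the ones from that example):

```python
def sensible_effectiveness(t_entering, t_off_coil, t_reheated):
    """Heat pipe effectiveness as used in the example above: temperature
    recovered by the condenser section over the maximum available span."""
    return (t_reheated - t_off_coil) / (t_entering - t_off_coil)

# 77 F air entering, 55 F off the cooling coil, reheated to 65 F:
print(f"{sensible_effectiveness(77.0, 55.0, 65.0):.0%}")  # ~45%
```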
References
[1] R.S. Gaugler, Heat transfer device, U.S. Patent 2,350,348, June 6, 1944.
[2] L. Trefethen, On the surface tension pumping of liquids or a possible role of the candlewick in space exploration, G.E. Tech. Info. Ser. No. 615D114, February 1962.
[3] T. Wyatt, Satellite temperature stabilization system. Early development of spacecraft heat pipes for temperature stabilization, U.S. Patent 3,152,774, October 13, 1964 (application filed June 11, 1963).
[4] G.M. Grover, T.P. Cotter, G.F. Erickson, Structures of very high thermal conductivity, J. Appl. Phys. 35 (1964) 1990.
[5] S.W. Chi, Heat Pipe Theory and Practice, McGraw-Hill, New York, 1976.
[6] P.D. Dunn, D.A. Reay, Heat Pipes, 3rd Ed., Pergamon, New York, 1982.
[7] B.D. Marcus, Theory and design of variable conductance heat pipes: control techniques, Research Report No. 2, NASA 13111-6027-R0-00, July 1971.
[8] G.A. Bennett, Conceptual design of a heat pipe radiator, Los Alamos Scientific Laboratory, NM, USA, Technical Report LA-6939-MS, September 1, 1977.
[9] Y.F. Gerasimov, Y.F. Maidanik, G.T. Schegolev, Low-temperature heat pipes with separated channels for vapor and liquid, Eng.-Phys. J. 28 (6) (1975) 957–960 (in Russian).
[10] K. Watanabe, A. Kimura, K. Kawabata, T. Yanagida, M. Yamauchi, Development of a variable-conductance heat-pipe for a sodium-sulfur (NAS) battery, Furukawa Rev. No. 20 (2001).
[11] J.E. Kemme, Heat pipe design considerations, Los Alamos Scientific Laboratory Report LA-4221-MS, August 1, 1969.
[12] K.A. Woloshun, M.A. Merrigan, E.D. Best, HTPIPE: A steady-state heat pipe analysis program, A User's Manual.
[13] G.P. Peterson, An Introduction to Heat Pipes: Modeling, Testing, and Applications, John Wiley & Sons, Inc., 1994, pp. 175–210.
[14] Scott D. Garner, P.E., Thermacore Inc.
[15] P.J. Brennan, E.J. Kroliczek, Heat Pipe Design Handbook, B & K Engineering, Inc., Towson, Maryland, 1979.
[16] MIL-STD-1522A (USAF), Military standard general requirements for safe design and operation of pressurized missile and space systems, May 1984.
[17] J.E. Deverall, J.E. Kemme, High thermal conductance devices utilizing the boiling of lithium and silver, Los Alamos Scientific Laboratory Report LA-3211, October 1964.
[18] G.M. Grover, T.P. Cotter, G.F. Erickson, Structures of very high thermal conductance, J. Appl. Phys. 35 (6) (1964) 1990–1991.
[19] G.M. Grover, J. Bohdansky, C.A. Busse, The use of a new heat removal system in space thermionic power supplies, European Atomic Energy Community, EURATOM Report EUR 2229.e, 1965.
[20] T.P. Cotter, J. Deverall, G.F. Erickson, G.M. Grover, E.S. Keddy, J.E. Kemme, and E.W. Salmi, Status report on theory and experiments on heat pipes at Los Alamos, in Proc. Int. Conf. Thermionic Power Gener., London, September 1965.
[21] W.A. Ranken, J.E. Kemme, Survey of Los Alamos and EURATOM heat pipe investigations, in Proc. IEEE Thermionic Conversion Specialist Conf., San Diego, California, October 1965; Los Alamos Scientific Laboratory Report LA-DC-7555.
[22] J.E. Kemme, Heat pipe capability experiments, Proc. Joint AEC/Sandia Laboratories Report SC-M-66-623-1, October 1966. Expanded version of this paper: Los Alamos Scientific Laboratory Report LA-3585-MS, August 1966, also as LA-DC-7938; revised version in Proc. IEEE Thermionic Conversion Specialist Conf., Houston, Texas, November 1966.
CHAPTER 11
Thermodynamic systems
11.1 Introduction
In this introduction, we will define and describe what a thermodynamic system is, so that we have the fundamental aspects of this subject, which relates purely to energy, heat, work, and so on. Thermodynamic systems can be defined as systems that function by utilizing the effects of heat transfer. A prominent example of a thermo system is refrigeration. The fluid in the system is called the refrigerant. A compressor reduces the volume of the refrigerant, which transitions into a superheated vapor. The vapor is passed through a heat exchanger: air from the environment is fanned over the condenser, causing heat inside the system to transfer to the environment. The removal of heat from the system condenses the refrigerant back into a liquid state. This liquid is further cooled by a long, thin capillary tube that significantly decreases the fluid's pressure. The newly cooled liquid refrigerant is passed through an evaporator that vaporizes the fluid to repeat the cycle. In this notable system, heat is continuously exchanged between a medium and the environment, a process fundamentally grounded in thermodynamics.

Systems like the example above play a significant role in energy generation. On a larger scale, the thermodynamic transitions in systems such as pressurized water reactor (PWR) nuclear reactors play a pivotal role in the generation of energy for millions. For example, the secondary loop, depicted in Fig. 11.1, utilizes several states of water and heat transfer to generate energy. Water in the system is pumped through a nuclear heat source that causes it to transition into a pressurized and superheated vapor. This vapor is passed through a turbine, spinning it and generating kinetic energy to power a generator. Passing the vapor through the turbine causes the vapor to lose some energy, resulting in some condensation. Leaving the turbine, the mixture of water in different states is cooled back to liquid, and the cycle is repeated.

Consequently, it becomes essential to study the laws and properties behind thermo systems in order to manipulate and design them for optimal outcomes. This chapter will detail the concept of continuity and how it applies to the mass, momentum, and energy of a thermo system. It will then go over the three laws of thermodynamics, with a focus on the first law's prominence in understanding heat transfer. Finally, it will present a detailed application of these concepts, with the PWR as a thermo system, and show how its conditions are engineered.
Fig. 11.1 Simplified PWR diagram (pressurizer, reactor vessel, once-through steam generator, turbine, generator, condenser, and cooling tower).
11.2 Continuity
Continuity is the conservation of a property's quantity throughout the system. This concept is vital to understanding how energy can flow into or out of a system. In systems, energy is typically transferred through mass in the form of a fluid, liquid or gas. Continuity allows the mass (and energy) flow and storage rates to be determined. Continuity of a system is generally quantified by mass or volume. Control mass systems have a fixed mass that may not flow in or out of the system; however, the mass may facilitate the flow of heat or work between the system and the environment. A control volume system, on the other hand, has mass flowing into and out of the system while maintaining a fixed volume. A simple example of a control mass system is a boric acid batching tank: as shown in Fig. 11.2, the system boundary is defined as the tank itself, in which no mass enters or exits, making it a control mass system. By contrast, a simple example of a control volume system is fluid flowing through a pipe. The pipe in Fig. 11.3 has a fixed length between two surfaces that make up a definite volume within the system boundaries; however, the mass in the system varies as fluid passes through.
Fig. 11.2 Boric acid batching tank.
Fig. 11.3 Fluid flow in a pipe (fluid flows in through system boundary 1, along the pipe wall, and out through system boundary 2).
Whether a system has a controlled mass or a controlled volume depends on how the boundaries are defined. The same schematic can be mass controlled or volume controlled depending on where the boundaries are placed. For example, the PWR primary loop in Fig. 11.4A, taken as a whole, is a controlled mass loop: the mass circulates between the steam generator and reactor, and no mass enters or exits this loop. However, the steam generator and reactor could each be seen as their own individual systems. With the system boundaries redrawn in Fig. 11.4B, they become volume-controlled systems, as mass flows in and out of each system. We can describe continuity mathematically with the boundary's surface area together with the fluid's density and velocity at a given temperature and pressure. Equation (11.1) describes the system's time-dependent mass flow passing through a boundary of a volume-controlled system; the dot above m indicates a time rate:

ṁ1 = ρ1V1A1 (11.1)
Fig. 11.4 PWR reactor/steam generator primary loop (hot leg coolant at 600 °F, cold leg coolant at 550 °F). (A) Control mass system. (B) Control volume systems.
Example 1. Given steam at 1 psia and 300 °F flowing at 50 ft/s in a 6-inch-diameter pipe, compute the mass flow rate.
Solution: The flow area is
A = πD²/4 = π(6/12 ft)²/4 = 0.196 ft²
At the given temperature and pressure, steam has a specific volume of 452.3 ft³/lbm (from the Tables of Superheated Steam) [1], so
ρ = 1/v = 1/(452.3 ft³/lbm) = 0.0022 lbm/ft³
Therefore,
ṁ = ρVA = (0.0022)(50)(0.196) = 0.0216 lbm/s × 3600 s/h = 77.8 lbm/h
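Example 1 can be verified numerically; a minimal sketch (the steam-table value is hard-coded from the example):

```python
import math

# Example 1: steam at 1 psia and 300 F, 50 ft/s, 6-inch-diameter pipe.
v = 452.3                              # specific volume, ft^3/lbm
rho = 1.0 / v                          # density, lbm/ft^3
A = math.pi * (6.0 / 12.0) ** 2 / 4.0  # flow area, ft^2
V = 50.0                               # velocity, ft/s

m_dot = rho * V * A                    # Eq. (11.1), lbm/s
print(f"{m_dot:.4f} lbm/s = {m_dot * 3600:.1f} lbm/h")
# ~0.0217 lbm/s; the example's 77.8 lbm/h reflects rounding rho to 0.0022
```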
One of the outcomes of a control volume with one inlet and one outlet is that the mass flow rate in equals the mass flow rate out plus any change of mass stored inside the system. This is due to the conservation of mass and can be represented mathematically as

ṁin = ṁout + δmstored/δt (11.2)

Assuming the pipe does not store any fluid, only allowing it to pass, the right-most term becomes zero. This results in the flow of mass through the inlet equaling the flow of mass through the outlet. This relation is particularly helpful in describing velocity changes caused by differences in boundary surface area.
Example 2. Water flows into a diffuser at a rate of ṁin = 100 lbm/s. Assuming the water has a density of 62.4 lbm/ft³, what are the inlet velocity and exit velocity?
Solution: As there is only one inlet and one outlet, by conservation of mass (Eq. 11.2),
ṁout = ṁin = 100 lbm/s
For the 6-inch-diameter inlet,
Ainlet = π(6/12)²/4 = 0.196 ft²
Vinlet = ṁinlet/(ρAinlet) = 100/((62.4)(0.196)) = 8.16 ft/s
For the 1-foot-diameter outlet,
Aoutlet = π(1)²/4 = 0.785 ft²
Voutlet = ṁoutlet/(ρAoutlet) = 100/((62.4)(0.785)) = 2.04 ft/s
Thus, a diffuser is useful in reducing the velocity of a fluid. A nozzle (the opposite of a diffuser) is used to increase the velocity of a fluid.
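A sketch of Example 2 in Python (the 6-inch inlet and 1-foot outlet diameters are the ones implied by the example's area calculations):

```python
import math

# Example 2: 100 lbm/s of water (rho = 62.4 lbm/ft^3) through a diffuser.
rho, m_dot = 62.4, 100.0

def velocity(diameter_ft):
    """Velocity from continuity: V = m_dot / (rho * A)."""
    area = math.pi * diameter_ft ** 2 / 4.0
    return m_dot / (rho * area)

print(f"inlet:  {velocity(6.0 / 12.0):.2f} ft/s")  # ~8.16 ft/s
print(f"outlet: {velocity(1.0):.2f} ft/s")         # ~2.04 ft/s
```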
Momentum of a system can also obey continuity, as momentum is the product of mass and velocity. Energy continuity is often more difficult, as there are several ways energy can leak from the system to the environment. Similar to the mass balance equation, the energy balance equation (11.3) describes the energy continuity of a control mass system:

Ėin = Ėout + δEstored/δt (11.3)

There are five forms of energy flow that permit energy to enter the environment. One way energy can flow is through enthalpy, the product of the mass flow rate and the fluid's specific enthalpy. This is prominent in systems like straight lengths of insulated pipe. In the following equation, ṁ is the mass flow rate and h is the fluid's specific enthalpy:

Ė = ṁh (11.4)
Another way is from a height difference that induces a potential energy flow between the control volumes. This is typically measured from a convenient base height that must remain consistent throughout the calculation. Here g denotes the local gravitational acceleration, gc
denotes the gravitational proportionality constant, J is a conversion factor, and z is the elevation difference, as presented in Eq. (11.5):

ĖPE = ṁ(PE) = ṁ(g/gc)(z/J) (11.5)
Example 3. Compute the energy flow rate due to nitrogen flowing at 2.50 lbm/s in a 6.00-inch-diameter pipe at 200 °F and 250 psia. Assume the nitrogen is a perfect gas.
Solution: The mass flow rate is ṁ = 2.50 lbm/s = 9000 lbm/h. From the perfect gas law,
ρ = P/RT = 250/((10.73/28.01)(460 + 200)) = 0.989 lbm/ft³
The flow area of the pipe is
A = πd²/4 = (π)(0.500)²/4 = 0.196 ft²
Thus,
V = ṁ/(ρA) = 2.50/((0.989)(0.196)) = 12.9 ft/s
and, from Eq. (11.5), the energy flow rate follows.
The third form of energy flow rate is from kinetic energy (otherwise known as energy flow due to fluid velocity), where V is the fluid's velocity:

ĖKE = ṁ(KE) = ṁV²/(2Jgc) (11.6)
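A sketch of Example 3, extended with an illustrative kinetic-energy flow rate from Eq. (11.6); the worked example in the text stops short of this last step, so the final figure is our own:

```python
import math

# Example 3: nitrogen at 2.50 lbm/s, 6-inch pipe, 200 F, 250 psia.
# Perfect gas: rho = P / (R*T), with R = 10.73/28.01 psia*ft^3/(lbm*R).
P, T = 250.0, 460.0 + 200.0
rho = P / ((10.73 / 28.01) * T)     # ~0.989 lbm/ft^3
A = math.pi * 0.500 ** 2 / 4.0      # ~0.196 ft^2
V = 2.50 / (rho * A)                # ~12.9 ft/s
print(f"rho = {rho:.3f} lbm/ft^3, V = {V:.1f} ft/s")

# Illustrative: kinetic-energy flow rate per Eq. (11.6), with
# m_dot = 9000 lbm/h, J = 778 ft-lbf/BTU, gc = 32.174 lbm-ft/(lbf-s^2).
E_KE = 9000.0 * V**2 / (2.0 * 778.0 * 32.174)
print(f"E_KE = {E_KE:.1f} BTU/h")   # ~30 BTU/h
```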
The fourth form is due to heat transfer, as induced by a difference in temperature. Heat tends to flow from areas of high temperature to areas of low temperature, in an attempt to achieve a uniform temperature. Consequently, heat in a high-temperature system will attempt to flow out if the environment is at a lower temperature. Systems with no heat transfer are called adiabatic. The fifth energy flow form is work transfer. Work transfer is commonly observed when a control volume is designed to do work (exert force over some distance) on a mechanical contraption. A common example is the secondary loop in a PWR spinning a turbine to generate electricity, as illustrated by Fig. 11.5. In this system, volume is continuous, but work transfer consistently moves energy out of the system. The losses in energy continuity are worth noting, as even seemingly negligible leaks have an immense effect on efficient electricity production.
Fig. 11.5 Simplified schematic of PWR secondary loop (once-through steam generator, turbine, condenser, and work output Ẇk).

11.3 System thermodynamics
The first law of thermodynamics illustrates the conservation of energy, describing how energy can be transferred between a system and the environment. Energy can be defined as the ability to do work; it is contained in mass through the forces on particles. Energy
can be categorized in three forms: potential, kinetic, and internal. Potential energy is from elevation difference, kinetic energy is from movement, and internal energy is derived from the stochastic motion of particles. Unlike mass, energy may be transferred across the boundaries of a system as work or heat. Work is force exerted over a distance. Heat, on the other hand, is energy that moves due to a temperature difference, from high to low. For example, a system may consist of a gas piston connected to an axle. The gas inside the piston is heated, and the increasing internal energy causes the gas to expand. The expanding gas exerts a force on the piston head over some distance, doing work. The motion of the rising piston head may spin an axle, transferring kinetic energy out of the system. There are two special cases for gas-piston systems where work can be easily calculated: constant-pressure and constant-temperature expansion.
Like the conservation of matter, the first law of thermodynamics produces the relationship that the total energy flowing into a system is equal to the total energy out plus the energy stored. The quantities of energy in and out are only from heat or work in a system where mass is continuous. There is a special case where a system's energy in is simply equal to its energy out: the steady flow-steady state system, in which the mass flow rate and the fluid's state are not functions of time. As a result, the energy stored within the system does not change. There are also uniform flow-uniform state systems, in which the properties of the mass flowing through the system are constant and its state is considered the same throughout. This is described mathematically in Eq. (11.7), where subscripts 1 and 2 refer to two time intervals:

Ein,12 = Eout,12 + (E2 − E1)stored (11.7)
In thermodynamic systems, thermodynamic availability provides a very convenient means of measuring the amount of work a system can produce. It is defined as the amount of energy that can be extracted from a fluid flowing through the system such that the outlet is left in equilibrium with the environment. The system for measuring availability is depicted in Fig. 11.6.
Fig. 11.6 System demonstrating thermodynamic availability: the fluid enters at state 1 with enthalpy h1, work Ẇk and heat Q̇ are extracted, and state 2 is in equilibrium with the environment at T0.
Specific availability is denoted by ψ, as written in the following equation:

ψ = h1 − T0·s1 (11.8)
In Eq. (11.8), h1 is the inlet's specific enthalpy, T0 is the environment's temperature, and s1 is the inlet's specific entropy. The availability flow rate, ψ̇, is simply ψ multiplied by ṁ. As a result, the system's total thermodynamic availability is the availability flow rate multiplied by time. In practice, this quantity allows evaluations of system efficiency by comparing a system's potential to its empirical performance. Thermodynamic availability loss presents further insight into system performance: loss quantifies the system energy unable to be transferred into work due to extraneous energy transfer such as friction. Consequently, optimizing the system to minimize availability loss poses a key engineering problem in thermodynamic system design.
Example 4. A turbine exhausts at a pressure of 1 psia with a quality of 95%. What is the availability of the exhaust? Assume T0 = 100 °F (560 °R).
Solution:
h = hf + x·hfg = 69.73 + (0.95)(1036.1) = 1054.03 BTU/lbm
s = sf + x·sfg = 0.1326 + (0.95)(1.8455) = 1.8858 BTU/(lbm·°R)
ψ = h − T0·s = 1054.03 − (560)(1.8858) = −2.018 BTU/lbm
The minus sign indicates that work is required to bring the fluid back to ambient conditions.
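Example 4 as a short sketch (the saturation properties at 1 psia are hard-coded from the example):

```python
# Example 4: availability of turbine exhaust at 1 psia, 95% quality.
h_f, h_fg = 69.73, 1036.1    # BTU/lbm, saturation at 1 psia
s_f, s_fg = 0.1326, 1.8455   # BTU/(lbm*R)
x, T0 = 0.95, 460.0 + 100.0  # quality; environment temperature, deg R

h = h_f + x * h_fg           # specific enthalpy
s = s_f + x * s_fg           # specific entropy
psi = h - T0 * s             # Eq. (11.8)
print(f"psi = {psi:.2f} BTU/lbm")
# ~ -2.0 BTU/lbm (negative: work must be supplied to reach equilibrium)
```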
11.4 Heat transfer and fluid flow
In studying thermodynamic systems, it is important to understand the processes propelling heat transfer. Conduction is the form of energy transfer by which energy moves from high-temperature regions to low-temperature regions. The feeling of heat is due to this energy in transit, and it is not completely synonymous with energy itself. Steady-state, one-dimensional conditions, in which there is constant energy transfer in one direction, are assumed to simplify calculations. Under these assumptions, conduction can be described by the Fourier equation, where q is the heat transfer rate, dT/dx is the temperature gradient in the direction of energy flow, A is the cross-sectional area, and k is the thermal conductivity. As illustrated by Eq. (11.9), the rate of heat transfer depends heavily on geometry and material. The negative sign convention reflects heat flowing toward areas of lower temperature:

q = −kA (dT/dx) (11.9)
A simple example is the Fourier equation for heat transfer through a slab. Equation (11.9) can be rewritten as Eq. (11.10), where T1 and T2 represent the temperatures on opposite sides of the slab and x is its thickness. Intuition can be further improved by writing Eq. (11.10) as a compound fraction, which allows us to define the numerator as the thermal potential difference and the denominator as the thermal resistance. See Fig. 11.7.

q = kA(T1 − T2)/x = (T1 − T2)/(x/kA) (11.10)
Example 5. Consider a flat steel plate with a thermal conductivity of 25 BTU/(h·ft·°F). Find the rate of heat transfer, q, through this plate for the following conditions (refer to Fig. 11.7): A = 18 ft², x = 1 inch, T1 = 100 °F, and T2 = 95 °F.
Solution: As this is a simple slab, as in Fig. 11.7, Eq. (11.10) may be used:
q = kA(T1 − T2)/x = (25)(18)(100 − 95)/(1/12) = 2.7 × 10⁴ BTU/h
Fig. 11.7 Heat transfer through a slab (q1 enters one face and q2 leaves the other, with q1 = q2; A = area of slab face; x = thickness of slab).
To further improve the analysis, the thermal resistance in the denominator can be adjusted by the addition of a thermal contact resistance term, which compensates for imperfect contact between two slabs. With hc the contact-resistance heat transfer coefficient, this is written as Eq. (11.11):

q = (T1 − T2)/(x/(kA) + 1/(hcA)) (11.11)
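Example 5 and the contact-resistance correction of Eq. (11.11) can be sketched together; the contact coefficient h_c below is a purely hypothetical value for illustration:

```python
# Conduction through the Example 5 plate, with and without an assumed
# contact resistance (Eqs. 11.10 and 11.11).
k, A, x = 25.0, 18.0, 1.0 / 12.0   # BTU/(h*ft*F), ft^2, ft (1-inch plate)
T1, T2 = 100.0, 95.0               # deg F

q = (T1 - T2) / (x / (k * A))      # Eq. (11.10)
print(f"q = {q:.3g} BTU/h")        # 2.7e4 BTU/h

h_c = 1000.0                       # hypothetical contact coefficient, BTU/(h*ft^2*F)
q_c = (T1 - T2) / (x / (k * A) + 1.0 / (h_c * A))   # Eq. (11.11)
print(f"q with contact resistance = {q_c:.3g} BTU/h")  # ~2.1e4 BTU/h
```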
Thermal conductivity, k, is an important property of materials. A high k indicates that the material transfers heat through it easily, making it a conductor. On the other hand, a low k indicates that the material resists heat flowing through it, rendering it an insulator. Conductors and insulators both have vast applications in thermodynamic systems. Though an intrinsic material property, k varies with temperature. As a result, it is useful to calculate an average thermal conductivity as a function of temperature, as written in Eq. (11.12):

km = (1/(T2 − T1)) ∫T1→T2 k(T) dT (11.12)
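Equation (11.12) is easy to evaluate numerically once k(T) is known; a minimal trapezoidal-rule sketch with a purely illustrative linear k(T):

```python
def k_mean(k_of_T, T1, T2, n=1000):
    """Average conductivity per Eq. (11.12) via the trapezoidal rule."""
    dT = (T2 - T1) / n
    total = 0.5 * (k_of_T(T1) + k_of_T(T2))
    total += sum(k_of_T(T1 + i * dT) for i in range(1, n))
    return total * dT / (T2 - T1)

k_linear = lambda T: 20.0 + 0.01 * T   # hypothetical k(T), BTU/(h*ft*F)
print(f"{k_mean(k_linear, 100.0, 500.0):.2f}")  # 23.00, the midpoint value
```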
Furthermore, it is important to understand the process that facilitates heat transfer through fluids: convection, which is composed of heat conduction, energy storage, and fluid movement. There are two types of convection, free and forced. Free convection arises from temperature differences causing fluid density changes, with hot fluid rising and cold fluid descending. Forced convection, on the other hand, has the fluid propelled by a prime mover. This form typically achieves higher heat transfer, as there is more contact between the fluid and solid surfaces. The rate at which a fluid's heat transfers to the surface in contact with it is described by Newton's law of cooling, Eq. (11.13), where q is the heat transfer rate, h is the surface conductance, A is the transfer surface area, Tw is the surface temperature, and Tf is the fluid's temperature:

q = hA(Tw − Tf) (11.13)
Example 6. Find the average heat flux in the reactor core given the following data: h = 8300 BTU/(h·ft²·°F), Tw = 604.3 °F, and Tf = 582 °F.
Solution: Divide Eq. (11.13) by the heat transfer area to obtain the flux:
q″ = q/A = h(Tw − Tf) = 8300(604.3 − 582) = 185,000 BTU/(h·ft²)
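And Example 6 in the same style:

```python
# Example 6: average heat flux from Newton's law of cooling, Eq. (11.13),
# divided through by the transfer area.
h = 8300.0               # surface conductance, BTU/(h*ft^2*F)
T_w, T_f = 604.3, 582.0  # wall and fluid temperatures, deg F

q_flux = h * (T_w - T_f)               # q'' = q/A
print(f"q'' = {q_flux:,.0f} BTU/(h*ft^2)")  # ~185,000
```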
11.5 Extended application
The study of thermo systems is applicable to a multitude of technologies. For example, the thermodynamic properties of water are exploited in systems such as PWRs. These reactors comprise three loops, each utilizing water in its various states. The primary loop uses water as a coolant, moderator, and reflector. The secondary loop uses water to spin turbines. The cooling tower loop transfers waste energy out to the environment. Efficient operation of these three loops requires a careful understanding of the heat and energy transformations previously detailed. In this system, four important states arise: saturated liquid, saturated vapor, subcooled water, and superheated vapor. Fig. 11.8 depicts a simplified illustration of the PWR's secondary loop with the corresponding water conditions at various components of the system. The thermodynamic conditions needed to achieve ideal reactor performance have been rigorously studied and are described in tables and graphs. The American Society of Mechanical Engineers' (ASME) Steam Tables provide important information about water's properties at certain states given specific conditions. The Tables for Properties of Saturated Steam and Saturated Water are organized by temperature or pressure; the Properties of Superheated Steam and Compressed Water give properties by pairs of temperature and pressure. Using these tables, one can find the corresponding volume, enthalpy, and entropy.
Fig. 11.8 PWR secondary loop fluid conditions.
Fig. 11.9 Typical Mollier diagram.
Data from steam tables can be represented in graphs such as the T-s, P-v, and P-T diagrams and the Mollier diagram. The Mollier diagram plots enthalpy against entropy and is particularly useful, as it displays all five properties on the graph as well as the state. See Fig. 11.9.
Reference
[1] B. Zohuri, P. McDaniel, Thermodynamics in Nuclear Power Plant Systems, 2nd Ed., Springer, 2015.
CHAPTER 12
Thermal-hydraulic analysis of nuclear reactors
Nuclear power plants (NPPs) currently generate more than 20% of the central station electricity produced in the United States. The United States currently has 104 operating power-producing reactors, with 9 more planned. France has 58, with 1 more planned. China has 13, with 43 planned. Japan has 54, with 3 more planned. In addition, Russia has 32, with 12 more planned. Production of electricity via nuclear energy has certainly come into its own and is the safest, cleanest, and greenest form of electricity currently introduced on this planet. However, many current thermodynamic texts ignore nuclear energy and use few examples of nuclear power systems. Nuclear energy presents some interesting thermodynamic challenges, and it helps to introduce them at the fundamental level. Research activities are currently underway worldwide to develop Generation IV (GEN IV) nuclear reactor concepts with the objective of improving thermal efficiency and increasing the economic competitiveness of GEN IV nuclear power plants compared to modern thermal power plants. Our goal here is to introduce the thermal aspects of nuclear power reactors as they apply to a variety of issues related to nuclear reactor thermal hydraulics and safety, which deal with energy production and utilization; some general understanding of nuclear power plants is therefore essential. That is true for any textual introduction to this science; yet by considering concrete systems, it is easier to give insight into the fundamental laws of the science and to provide an intuitive feeling for further study.
12.1 Introduction
By far, the most widely built nuclear system is the pressurized water reactor (PWR). There are a number of reasons for this. Steam turbines have for many decades been the dominant means of generating mechanical energy to turn electrical generators. Bear in mind that the temperatures reached in the thermodynamic cycle of a PWR are within the range of fairly common engineering materials. PWRs were the first system built and operated reliably to produce electricity. A typical PWR system is described in Fig. 12.1. The basic PWR consists of five major components (the reactor core, steam generator(s), steam turbine, condenser, and electrical generator) and three water/steam loops. Each loop requires a pump, which is not shown to keep the diagram cleaner. The
Fig. 12.1 Pressurized water reactor schematic.
nuclear energy is converted to thermal energy in the reactor core. This thermal energy is then transported via the first loop to the steam generator, where it is passed to the water in the second loop. The water in the second loop enters as a liquid and is turned to steam. The steam then passes to the turbine, where the thermal energy is converted to mechanical energy to rotate the electrical generator. After the thermal energy has been converted to mechanical energy in the steam turbine, the low-pressure steam passes to the condenser to be cooled by the water in the third loop. The second law of thermodynamics tells us that we cannot simply expand the steam to a low-enough energy state that it can return to the steam generator in its original liquid state. Therefore, we must extract more thermal energy from the low-pressure steam to return it to its liquid state so it can be pumped back into the steam generator. The third loop is called the circulating water system, and it is open to the environment. There are multiple ways of providing this cooling water, including intake and return to a river or an ocean, intake and return to a cooling pond, or intake from a river and exhaust through a cooling tower.
However, we are getting ahead of ourselves. Consider for a minute why nuclear energy is so useful: a great deal of energy is produced by a very little mass.
Example calculation: Calculate the U-235 consumed to produce 1 MW of thermal energy for 1 day. Note that a megawatt is a unit of power, or energy per unit time.
1 MW = 10⁶ W = 10⁶ J/s
1 day = 24 h = 24 × 3600 s
The energy released in the fission of a U-235 atom is ~200 MeV, and
1 eV = 1.6 × 10⁻¹⁹ J
1 MeV = 1.6 × 10⁻¹³ J
200 MeV = 3.2 × 10⁻¹¹ J = 32 pJ
Fissioning 1 atom of U-235 thus produces 3.2 × 10⁻¹¹ J, so producing 10⁶ J requires 10⁶/(3.2 × 10⁻¹¹) = 3.125 × 10¹⁶ atoms. For a duration of 8.64 × 10⁴ s, the total number of atoms consumed will be 3.125 × 10¹⁶ × 8.64 × 10⁴ ≈ 2.7 × 10²¹ atoms. A gram mole of U-235 is 6.022 × 10²³ atoms, so a gram is 6.022 × 10²³/235 = 2.563 × 10²¹ atoms/g. Therefore, 1 megawatt-day of nuclear energy consumes about 1.05 g of U-235.
The fundamental thing to understand is that a PWR converts nuclear energy to electrical energy, and it does this by converting the nuclear energy first to thermal energy
and then converting the thermal energy to mechanical energy, which is finally converted to electrical energy. The science of thermodynamics and as a result thermal hydraulics and fluid mechanics deal with each of these conversion processes. To quantify how each of these processes takes place, we must understand and apply the fundamental laws of thermodynamics, then extend them to thermal-hydraulics aspects of the situation.
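The example calculation above reduces to a few lines of arithmetic; a sketch for checking the numbers:

```python
# Grams of U-235 fissioned per megawatt-day (checks the example above).
EV_TO_J = 1.6e-19
E_FISSION = 200e6 * EV_TO_J   # ~3.2e-11 J per fission
AVOGADRO = 6.022e23
M_U235 = 235.0                # g/mol

energy = 1.0e6 * 24 * 3600    # 1 MW for one day, in joules
atoms = energy / E_FISSION    # ~2.7e21 fissions
grams = atoms / AVOGADRO * M_U235
print(f"{grams:.2f} g of U-235 per MW-day")  # ~1.05 g
```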
12.2 Basic understanding of thermal-hydraulic aspects
What is thermal hydraulics? Thermal hydraulics (T/H) is the study and/or analysis of a fluid that is influenced by the addition of heat. The fluid could be a multicomponent, multiphase fluid that is usually flowing or accelerating in a fixed structure, for example a piping system or a large vessel. The heat can be added in many different ways; for example, heat can be added to the fluid by conduction through a heat exchanger or by radiation heat transfer from extremely hot rods. Because of the potentially complicated nature of thermal-hydraulic engineering and analysis, the methods used to simulate the behavior of multiphase fluid flow are very involved. Nuclear thermal hydraulics is related to applied research on a variety of issues concerning nuclear reactor thermal hydraulics and safety, which deal with energy production and utilization. Some of the research topics may include:
• Single- and two-phase phenomena in heated microchannels (convective heat transfer, boiling, onset of flow instability, flow regimes, single- and two-phase pressure drop)
• Single- and two-phase phenomena in tube bundles (flow visualization, two-phase flow patterns)
• Thermal hydraulics of the accelerator-based production of tritium (APT) system
• Enhancement of boiling heat transfer
• Interphase transfer processes in two-phase flow
• Transport of radioactive trace species and aerosols in bubbles
• Condensation in two-phase flow systems with noncondensables
• Modeling of nonequilibrium two-phase mist flow
• Hydrodynamics of countercurrent two-phase flow
• Hydrodynamics of three-phase flow systems (i.e., gas, liquid, solid particles)
• Numerical modeling of condensation with noncondensables
• Modeling of condensation in thermal-hydraulics system codes
• Dynamic testing and simulation of digital feedwater control systems in BWRs
• Flow visualization and particle image velocimetry for nonequilibrium two-phase flow
• Development of inline moisture fraction measurement instrumentation
• Mechanistic modeling of steam explosions
• Thermal hydraulics of fuel cells
To cover these areas is certainly an immense undertaking, and one should have an understanding of thermodynamic science. Thermodynamics is the science that deals with energy production, storage, transfer, and conversion, while T/H, also called thermohydraulics, is the study of hydraulic flow in thermal systems. A common example is steam generation
in power plants and the associated energy transfer to mechanical motion, along with the changes of state of the water while undergoing this process. It is a very broad subject that affects most fields of science, including biology and microelectronics. The primary forms of energy considered in this text will be nuclear, thermal, chemical, mechanical, and electrical. Each of these can be converted into a different form with widely varying efficiencies. Predominantly, thermodynamics is most interested in the conversion of energy from one form to another via thermal means. However, before addressing the details of thermal energy conversion, consider a more familiar example. Newtonian mechanics defines work as force acting through a distance on an object. Performing work is a way of generating mechanical energy. Work itself is not a form of energy but a way of transferring energy to a mass. Therefore, when one mass gains energy, another mass, or field, must lose that energy.
A thermal-hydraulic evaluation generally consists of three parts.
1. A review of the problem and/or problem objectives should be performed so that decisions on resources, such as computer tools, can be made successfully. This, of course, is the most crucial part of the evaluation process. The engineer should look at all aspects of the problem and/or mission to determine the proper tools and techniques that should be used to achieve accurate results and minimize cost. It is important that the correct thermal-hydraulic computer programs are selected. For example, if the fluid in the system is being evaluated in a single phase at high pressure, the computer program of choice may be a finite-element program. If a process such as a steam generator is being evaluated, the more appropriate choice may be a finite-difference program, that is, RELAP5. Some of the thermal-hydraulics computer programs are briefly presented in the thermal-hydraulics computer program section of this chapter. Many thermal-hydraulics computer programs are available, and it is imperative that the appropriate programs are selected from the start. It is also important that these programs have had the proper quality assurance performed. This includes the validation and verification of the program to simulate the process, and the validation and verification of the user's ability to use the program to simulate the process. All these areas are important in the initial phases of the evaluation to ensure that cost is held to a minimum. In all cases, scoping analyses should be completed to determine initial estimates of analytical response and cost. In some cases, these analyses reveal that a more detailed analysis is not required, and the project cost may be reduced significantly.
2. Once the calculations are finished, they must be reviewed for accuracy and practicality. This review is usually accomplished by the quality assurance (QA) reviewer following the procedures provided in the QA program. The quality of the practicality review is based on the experience level of the organization that completed the calculation and on the experience level of the reviewing organization. Questions that should be answered, even for large established companies, are as follows:
a. Is there an adequate experience level of the performer and reviewer to provide a good, safe, and reliable product?
b. Were the evaluations performed according to the necessary regulatory and company standards?
3. The final product must be delivered in a form that is transferable, readable, and defendable. For any given project, the organization in charge should ensure proper QA for the software and provide experienced individuals so that the product can be presented successfully before oversight and/or regulatory authorities. Additionally, the final recommended solution must be practical enough to be implemented successfully. In most cases, it will be necessary for the analyst involved to interface with the system and construction engineers so the final product can be installed successfully.
12.3 Units
In this section, we will discuss the Système International (SI) and English (E) systems of units.
12.3.1 Fundamental units
Before going further, it is a very good idea to discuss units for physical quantities and the conversion of units from one system to another. Unfortunately, the field of thermodynamics is beset with two popular systems of units. One is the SI system, consisting of the kilogram, meter, and second. The other is the English (E) system, consisting of the pound-mass, foot, and second. Starting with the SI system, the unit of force is the Newton, the unit of work or energy is the Joule, and the unit of pressure is the Pascal. We have the following:
1 Newton = 1 kilogram-meter/second²
1 Joule = 1 Newton-meter
1 Pascal = 1 Newton/meter²
Now the acceleration of gravity at sea level on the earth is 9.8066 m/s² [1], so 100 kg of mass will weigh 980.66 Newtons. Also, when we want to avoid spelling out very large or small quantities, we will usually use the standard abbreviations for powers of 10 in units of 1000. We have the following:
kilo = 10³
mega = 10⁶
giga = 10⁹
deci = 10⁻¹
centi = 10⁻²
milli = 10⁻³
micro = 10⁻⁶
nano = 10⁻⁹
For the English system, we have the following:
1 lbm = 1 lbf (at sea level)
1 ft-lbf = 1 lbf × 1 ft
1 British thermal unit (Btu) = 778 ft-lbf
1 psi = 1 lbf/in.²
Note that the fact that 1 lbf = 1 lbm at sea level on the earth means that a mass of 100 lbm will weigh 100 lbf at sea level on the earth. The acceleration of gravity at sea level on the earth is 32.174 ft/s²; thus, 1 lbf = 32.174 lbm·ft/s², and the proportionality constant is gc = 32.174 lbm·ft/(lbf·s²). If we move to another planet where the acceleration of gravity is different, the statement that 1 lbm = 1 lbf does not hold. Consider comparative weights on Mars. The acceleration of gravity on Mars is 38.5% of the acceleration of gravity on Earth. Therefore, in the SI system, we have
W = 0.385 × 9.8066 m/s² × 100 kg = 377.6 Newtons
In the English system, we have
W = 0.385 × 100 lbm = 38.5 lbf
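As a quick cross-check of this bookkeeping, here is a minimal Python sketch (ours, not from the text; all names are illustrative) that uses the conversion constant gc to keep the lbm/lbf distinction straight:

```python
# A minimal sketch (not from the text) of the SI vs. English weight calculation.
G_EARTH_SI = 9.8066       # m/s^2, sea-level acceleration of gravity
G_EARTH_E  = 32.174       # ft/s^2
GC         = 32.174       # lbm-ft/(lbf-s^2), conversion constant for the English system
MARS_FACTOR = 0.385       # Mars gravity as a fraction of Earth's

def weight_si(mass_kg: float, g_factor: float = 1.0) -> float:
    """Weight in Newtons for a mass in kilograms."""
    return mass_kg * G_EARTH_SI * g_factor

def weight_english(mass_lbm: float, g_factor: float = 1.0) -> float:
    """Weight in lbf for a mass in lbm, using the gc conversion constant."""
    return mass_lbm * (G_EARTH_E * g_factor) / GC

print(weight_si(100, MARS_FACTOR))       # ~377.6 N
print(weight_english(100, MARS_FACTOR))  # 38.5 lbf
```

On Earth (g_factor = 1), the English-system function returns a weight numerically equal to the mass, which is exactly the 1 lbm = 1 lbf statement above.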
12.3.2 Thermal energy units
The British thermal unit (Btu) is defined as the amount of heat that must be absorbed by a 1 lb-mass of water to raise its temperature by 1 °F. The calorie is the metric unit defined similarly: it is the amount of heat that must be absorbed by 1 g of water to raise its temperature by 1 °C. This raises the question of how a calorie compares with a Joule, since both appear to be measures of energy in the SI system. James Prescott Joule spent a major part of his life proving that thermal energy was simply another form of energy, like mechanical kinetic or potential energy. Eventually, his hypothesis was accepted, and the conversion factor between the calorie and the Joule has been defined by
1 calorie = 4.1868 Joules
The constant 4.1868 is called the mechanical equivalent of heat.
12.3.3 Units conversion
As long as one remains in either the SI system or the English system, calculations and designs are simple. However, that is not always possible, as different organizations and individuals usually think and work in their favorite system. In order to communicate with an audience that uses both SI and English systems, it is important to be able to convert back and forth between the two. The basic conversion factors are as follows:
1 kg = 2.20462 lbm
1 lbm = 0.45359 kg
1 m = 3.2808 ft
1 ft = 0.3048 m
1 J = 0.00094805 Btu
1 Btu = 1055 J
1 atm = 14.696 psi
1 atm = 101325 Pa 1 psi = 6894.7 Pa 1 bar = 100000.0 Pa 1 bar = 14.504 psi The bar unit is simply defined by rounding off sea level atmospheric pressure to the nearest 100 kPa.
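A small illustrative helper (our names, not any standard library) built from the factors above; it simply multiplies by the tabulated factor:

```python
# An illustrative conversion table built from the factors listed above;
# the dictionary and function names are ours, not from any standard library.
TO_SI = {
    "lbm_to_kg": 0.45359,
    "ft_to_m": 0.3048,
    "Btu_to_J": 1055.0,
    "psi_to_Pa": 6894.7,
    "atm_to_Pa": 101325.0,
    "bar_to_Pa": 100000.0,
}

def convert(value: float, factor_name: str) -> float:
    """Convert a value to SI using one of the tabulated factors."""
    return value * TO_SI[factor_name]

print(convert(14.696, "psi_to_Pa"))  # ~101325 Pa, i.e., one standard atmosphere
```

Keeping the factors in one table helps avoid transcription errors when both systems appear in the same calculation.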
12.4 System properties
In order to characterize a system, we will have to identify its properties. Initially there are three main properties that we will be concerned with: density, pressure, and temperature, all of which are intensive variables. We will use intensive properties to characterize the equilibrium states of a system. Systems will be composed of pure substances and mixtures of pure substances. A pure substance is a material that consists of only one type of atom or one type of molecule. A pure substance can exist in multiple phases. Normally the phases of concern will be gas, liquid, and solid, although for many pure substances there can be several solid phases. Water is an example of a pure substance that can readily be observed in any of its three phases. A solid phase is typically characterized as having a fixed volume and fixed shape. A solid is rigid and incompressible. A liquid has a fixed volume but no fixed shape. It deforms to fit the shape of the container it is in. It is not rigid but is still relatively incompressible. A gas has no fixed shape and no fixed volume. It expands to fit the container it is in. To characterize a system composed of one or more pure components and one or more phases, we will need to specify the correct number of intensive variables required to define a state. The Gibbs phase rule, named after J. Willard Gibbs, who first derived it, gives the correct number of intensive variables required to completely define an equilibrium state in a mixture of pure substances. It is
V = C − P + 2 (12.1)
V = the number of variables required to define an equilibrium state
C = the number of pure components (substances) present
P = the number of phases present
Thus, for pure steam at sea level and above 100 °C, we have one component and one phase, so the number of variables required to specify an equilibrium state is two, typically temperature and pressure. However, temperature and density would also work. If we have a mixture of steam and liquid water in the system, we have one component and two phases, so only one variable is required to specify the state, either pressure or
temperature would work. If we have a mixture like air, which is composed of oxygen, nitrogen, and argon in essentially fixed proportions, the fixed composition removes two of the four variables given by the rule (three components, one gas phase), and we are back to requiring two variables. As we progress, we will introduce additional intensive variables that can be used to characterize the equilibrium states of a system in addition to density, pressure, and temperature.
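The rule is easy to mechanize; a one-line sketch (variable names ours) reproduces the worked examples above:

```python
# A one-line check of the Gibbs phase rule, V = C - P + 2,
# applied to the worked examples above.
def gibbs_variables(components: int, phases: int) -> int:
    """Number of intensive variables needed to fix an equilibrium state."""
    return components - phases + 2

print(gibbs_variables(1, 1))  # pure steam: 2 (e.g., temperature and pressure)
print(gibbs_variables(1, 2))  # steam + liquid water: 1
```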
12.4.1 Density
Density is defined as the mass per unit volume. The standard SI unit is kilograms per cubic meter (kg/m³). The standard English unit is pounds-mass per cubic foot (lbm/ft³). If the mass per unit volume is not constant in a system, it can be defined at a point by a suitable limiting process that converges for engineering purposes long before we get to the atomistic level. The inverse of density is specific volume. Specific volume is an intensive variable, whereas volume is an extensive variable. The standard unit for specific volume in the SI system is cubic meters per kilogram (m³/kg). The standard unit in the English system is cubic feet per pound-mass (ft³/lbm).
12.4.2 Pressure
Pressure is defined as force per unit area. The standard unit for pressure in the SI system is the Newton per square meter, or Pascal (Pa). This unit is fairly small for most engineering problems, so pressures are more commonly expressed in kilo-Pascals (kPa) or mega-Pascals (MPa). There is no true standard unit in the English system; the most common unit is pounds-force per square inch (psi). Pressure, as an intensive variable, is constant in a closed system at equilibrium. It is really only relevant in liquid or gaseous systems. The force per unit area acts equally in all directions and on all surfaces for these phases. It acts normal to all surfaces that contain or exclude the fluid (the term fluid includes both gases and liquids). The same pressure is transmitted throughout the entire volume of liquid or gas at equilibrium (Pascal's law). This allows the amplification of force by a hydraulic piston. Consider the system in Fig. 12.2. The force on the piston at B is greater than the force on the piston at A because the pressure on both is the same and the area of piston B is much larger.
Fig. 12.2 A hydraulic amplifier.
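A minimal sketch of the force amplification in Fig. 12.2, with made-up piston diameters; the force ratio scales as the area ratio:

```python
# Pascal's-law force amplification for a hydraulic piston pair;
# the piston diameters below are illustrative, not from the figure.
import math

def piston_force(pressure_pa: float, diameter_m: float) -> float:
    """Force on a circular piston exposed to a uniform pressure."""
    area = math.pi * (diameter_m / 2) ** 2
    return pressure_pa * area

p = 2.0e5                     # shared system pressure, Pa
f_a = piston_force(p, 0.02)   # small piston A, 2 cm diameter
f_b = piston_force(p, 0.20)   # large piston B, 20 cm diameter
print(f_b / f_a)              # 100x amplification: scales with the area ratio
```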
Fig. 12.3 Pressure in a liquid column.
In a gravity field, the pressure in a gas or liquid increases with the height of a column of the fluid. For instance, in a tube containing a liquid held vertically, the weight of all of the liquid above a point in the tube is pressing down on the liquid at that point. Consider Fig. 12.3; then dp = ρg dh, and for a column of height H,

p(0) = p(H) + ∫₀ᴴ ρg dh (12.2)
Thus, the pressure at the bottom of the container is equal to the pressure on the top of the fluid in the container plus the integral of the weight of the fluid per unit area in the container. This raises an interesting concept. Often it will be important to distinguish between absolute pressure and gage pressure. The preceding equation calculates the absolute pressure. The gage pressure is simply the pressure exerted by the weight of the column without the external pressure on the top surface of the liquid. It is certainly possible to have a negative gage pressure but not possible to have a negative absolute pressure. A vacuum pressure occurs when the absolute pressure in a system is less than the pressure in the environment surrounding the system. A very common way of measuring pressure is an instrument called a manometer. A manometer works by measuring the difference in height of a fluid in contact with two different pressures. A manometer can measure absolute pressure by filling a closed-end tube with the liquid and then inverting it into a reservoir of liquid that is open to the pressure that is to be measured. Manometers can also measure a vacuum gage pressure. Consider Fig. 12.4 as shown here.
Fig. 12.4 Pressure measurement with manometers.
The tall tubes on the right in each system are open to the atmosphere. System A is operating at a small negative pressure, or vacuum, relative to the atmosphere. System B is operating at a positive pressure relative to the atmosphere. The magnitude of the pressure in each case can be calculated by measuring the height difference between the fluids in the two sides of the U-tube and calculating its weight per unit area. This is the difference in the pressures inside the systems A or B and the atmospheric pressure pushing down on the open columns on the right.
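For a concrete sense of scale, the following sketch (illustrative numbers, water assumed as the manometer fluid) evaluates the column-weight relation p = ρgh for a U-tube:

```python
# Gauge pressure from a U-tube manometer column height difference;
# water as the fluid and the 0.25 m reading are illustrative assumptions.
RHO_WATER = 1000.0  # kg/m^3
G = 9.8066          # m/s^2

def gauge_pressure(height_diff_m: float, rho: float = RHO_WATER) -> float:
    """Gauge pressure (Pa) from the column height difference, p = rho*g*h."""
    return rho * G * height_diff_m

# If the fluid in the open leg stands 0.25 m higher, the system is
# 0.25 m of water above atmospheric pressure.
print(gauge_pressure(0.25))  # ~2452 Pa gauge
```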
12.4.3 Temperature
The other intensive variable to be considered at this point is the temperature. Nearly everyone is familiar with temperature as a measure of the coldness or hotness of a substance. As we continue our study of thermodynamics, we will greatly refine our concept of temperature, but for now, it is useful to discuss how a temperature scale is constructed. Traditionally, the Fahrenheit scale was established by defining the freezing point of water at sea level pressure to be 32 °F and the boiling point of water to be 212 °F under the same conditions. A thermometer containing a fluid that expands readily as a function of temperature could be placed in contact with a system that contained ice and water-vapor-saturated air. The height of the fluid in the thermometer would be recorded as the 32 °F height. Then the same thermometer would be placed in a water container that was boiling, and the height of the fluid in the thermometer marked as the 212 °F point. The difference in height between the two points would then be marked off in 180 divisions, with each division representing 1 °F. The Celsius scale was defined in the same way by setting the freezing point of water at 0 °C and the boiling point at 100 °C. Water was chosen as the reference material because it was always available in most laboratories around the world. When it became apparent that absolute temperatures were possibly more important than simply temperatures in the normal range of human experience, absolute temperature scales were defined. The freezing point of water was defined as 273.15 Kelvins and the boiling point as 373.15 Kelvins to match up with the Celsius scale. Note that the unit on the absolute scale is the Kelvin, not degrees Kelvin. It was named in honor of Lord Kelvin, who had a great deal to do with the development of temperature measurement and thermodynamics. The freezing point of water was further defined as the equilibrium of pure ice and air-saturated water. However, it was difficult to attain this point because as ice melts it forms a layer of pure water around itself, which prevents direct contact of pure ice and air-saturated water. Therefore, in 1954, the two-point method was abandoned, and the triple point of water was chosen as a single standard. The triple point of water is 273.16 Kelvins, 0.01 Kelvin above the ice point for water at sea level pressure. A single point can be used to define the temperature scale if temperatures are measured with a constant-volume, ideal-gas thermometer. Basically, the ideal gas thermometer can measure the pressure exerted by a constant volume of gas
in contact with the system to be measured. It can also measure the pressure exerted by the gas when in contact with a system at the triple point of water. The ratio of the two pressures gives the ratio of the measured absolute temperature to the absolute temperature of the triple point of water. However, additional secondary standards are defined to simplify calibration over a broad range of temperatures. The International Practical Temperature Scale is defined by:
Triple point of equilibrium hydrogen: 13.81 K
Boiling point of hydrogen at 33.33 kPa: 17.042 K
Boiling point of hydrogen at 1 atm: 20.28 K
Boiling point of neon: 27.102 K
Triple point of oxygen: 54.361 K
Boiling point of oxygen: 90.188 K
Triple point of water: 273.16 K
Boiling point of water: 373.15 K
Freezing point of zinc: 692.73 K
Freezing point of silver: 1235.08 K
Freezing point of gold: 1337.58 K
Once the absolute temperature scale in Kelvins was defined, it became part of the SI system. An absolute scale matching the Fahrenheit scale between the freezing point of water and its boiling point has been defined for the English system. Since there are 180 degrees between the freezing and boiling points on the Fahrenheit scale and 100 degrees over the same range on the Kelvin scale, the absolute scale for the English system, where the unit of measurement is called the degree Rankine, is simply 1.8 times the number of Kelvins. So, the freezing point of water on the Rankine scale is 491.67 °R, and the boiling point is 671.67 °R. Absolute zero on the Rankine scale corresponds to −459.67 °F. To convert back and forth, the following formulas apply:
T(°R) = T(°F) + 459.67 (12.3)

T(K) = T(°C) + 273.15 (12.4)
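These conversions are easily captured in a few helper functions (ours, illustrative):

```python
# Temperature-scale conversions implementing Eqs. (12.3) and (12.4)
# plus the Rankine/Kelvin ratio discussed above.
def f_to_r(t_f: float) -> float:
    return t_f + 459.67

def c_to_k(t_c: float) -> float:
    return t_c + 273.15

def k_to_r(t_k: float) -> float:
    return 1.8 * t_k

print(f_to_r(32.0))    # 491.67 R, freezing point of water
print(c_to_k(100.0))   # 373.15 K, boiling point of water
print(k_to_r(273.15))  # 491.67 R, consistent across both routes
```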
12.5 Properties of the atmosphere
Before going further, it will be useful to have a model for the atmosphere that can be used for calculations. It is important to realize that the atmosphere at sea level supports a column of air that extends upward of 50 miles. Given the equation derived earlier for the pressure in a column of fluid, and starting at sea level, we have
dp = −ρg dh = −(g/RT) p dh (12.5a)

where h is measured upward from sea level and the ideal gas law, ρ = p/(RT), expresses the density in terms of the pressure.
Or integrating the last term of Eq. (12.5a), we obtain
p(h) = p(0) e^(−gh/RT) (12.5b)
To perform the integration, the temperature has been assumed constant. This is not quite true, as the standard lapse rate for the troposphere up to about 40,000 feet is approximately 2 °C per 1000 feet, or 3.6 °F per 1000 feet. This means that the air is denser than the exponential model predicts. However, the model is approximately correct for the troposphere, particularly if only a limited range of elevations is considered and the average temperature is used. The initial values at sea level for the standard atmosphere are:
Pressure: 14.696 psi (101.325 kPa)
Temperature: 59 °F (519 °R), 15 °C (288 K)
Density: 0.076474 lbm/ft³ (1.225 kg/m³)
Composition (mole fraction, %): nitrogen 78.08, oxygen 20.95, argon 0.93, carbon dioxide 0.03, Ne, He, CH₄, et al. 0.01
The relative composition is essentially constant up to the top of the troposphere.
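A short sketch of the isothermal (exponential) model of Eq. (12.5b), using the standard sea-level values above; the specific gas constant for air is the one added assumption:

```python
# Constant-temperature (exponential) atmosphere model of Eq. (12.5b),
# using the standard sea-level values tabulated above.
import math

P0 = 101325.0   # Pa, sea-level pressure
T0 = 288.0      # K, sea-level temperature
R_AIR = 287.0   # J/(kg*K), specific gas constant for air (assumed value)
G = 9.8066      # m/s^2

def pressure_at(h_m: float) -> float:
    """Isothermal-atmosphere pressure at altitude h (meters)."""
    return P0 * math.exp(-G * h_m / (R_AIR * T0))

print(pressure_at(0.0))      # 101325 Pa at sea level
print(pressure_at(3000.0))   # ~71 kPa under the constant-temperature assumption
```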
12.6 The structure of momentum, heat, and mass transport
In their text, Bird et al. present the interrelationships among the transport of momentum, energy, and mass in a useful table, reproduced here as Table 12.1. They structure their book along the columns of this table, covering first momentum, then energy, and finally mass transport, while emphasizing the natural interrelationships among these transport processes.
12.7 Common dimensionless parameters
In the transfer and conversion of thermal energy, we will be interested in separating the entire universe into a system and its environment. We will mainly be interested in the energy transfers and conversions that go on within the system, but in many cases, we will need to consider its interactions with the rest of the world, or its environment. Systems that consist of a fixed amount of mass contained within fixed boundaries are called closed systems. Systems that exchange mass with the environment will be called open systems. Both open and closed systems allow energy to flow across their borders, but the flow of mass determines whether they are open or closed systems. Open systems will
Table 12.1 The interrelationship between the transport of momentum, energy, and mass.
Level of description | Momentum | Energy | Mass
Molecular transport | Viscosity | Thermal conductivity (Fourier equation) | Mass diffusivity
One-dimensional laminar transport | Shell momentum balance | Shell energy balance | Shell mass balance
Three-dimensional transport in a continuum | Momentum equation | Energy equation | Species equation
Laminar flow | Unsteady viscous flow; 2D viscous flow; boundary-layer momentum transport | 2D conduction in flow; boundary-layer energy transport | 2D diffusion in flow; boundary-layer mass transport
Turbulent flow | Time averaging | Time averaging | Time averaging
Transport between phases | Interphase momentum transport | Interphase energy transport | Interphase mass transport
Radiation | — | Energy transport | —
Source: Adapted from R.B. Bird, W.E. Stewart, and E.N. Lightfoot, Transport Phenomena, John Wiley and Sons, New York, 1960 [2].
also carry energy across their borders with the mass as it moves. Consider the simple compressed gas in the piston below as a closed system.
12.8 Computer codes
The thermal-hydraulic analysis of nuclear reactors is largely performed by what are known as "system codes." These codes predict the flows in the complex network of pipes, pumps, vessels, and heat exchangers that together form the thermal-hydraulic systems of a nuclear reactor. Codes in this category include the US codes RELAP, TRAC, and TRACE and the European codes CATHARE and ASTEC. They embody necessarily highly simplified models that, in essence, solve one-dimensional forms of the conservation equations for mass, momentum, and energy. They necessarily rely heavily on empirical correlations for such things as frictional pressure drops. This use of empirical correlations extends to their treatment of two-phase flows, where quantities such as interphase mass, momentum, and heat transfer are again, of necessity, represented using empirical correlations. These codes have been used for many decades and are now very well established, and given this long process of refinement, they are able to produce remarkably accurate predictions of plant behavior under both steady and transient conditions. The most widely used of these codes, and the worldwide workhorse of nuclear reactor thermal analysis, is the RELAP suite, originating with the US Nuclear Regulatory Commission (NRC).
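As a toy illustration of the kind of empirical closure such a system code applies, and emphatically not RELAP itself, the sketch below estimates a single-phase frictional pressure drop with the Blasius friction-factor correlation:

```python
# A toy illustration (not RELAP) of an empirical closure a 1-D system code
# applies: a Darcy-Weisbach pressure drop with the Blasius friction-factor
# correlation for smooth turbulent pipe flow.
def pressure_drop(rho, mu, velocity, diameter, length):
    """Frictional pressure drop (Pa) over a straight pipe segment."""
    re = rho * velocity * diameter / mu          # Reynolds number
    f = 0.316 * re ** -0.25                      # Blasius correlation, 4e3 < Re < 1e5
    return f * (length / diameter) * 0.5 * rho * velocity ** 2

# Water-like coolant in a 5 m run of 20 mm pipe at 3 m/s (illustrative numbers)
print(pressure_drop(rho=1000.0, mu=1.0e-3, velocity=3.0, diameter=0.02, length=5.0))
# ~23 kPa; a real code strings thousands of such closures across a plant network
```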
However, such codes are fundamentally limited in that they are at heart only one dimensional. If a part of the plant can reasonably be modeled as one-dimensional flow in a pipe, these codes are excellent. However, there are plainly many important phenomena and locations where this one-dimensionality is not a good approximation. An obvious example is the flow within the bulky, three-dimensional reactor vessel itself. There have been attempts to extend these system codes to handle multidimensional effects. These have had some success, but there is naturally a trade-off between the fidelity of the representation and the computational complexity. It is oversimplified, but one might characterize a "3D system code" as an array of one-dimensional parallel pipes, allowed to interact "sideways" with each other via some "cross-flow" coupling. The models so produced can be better than the original one-dimensional ones but do not represent complex flows well. The NRC uses computer codes to model and evaluate fuel behavior, reactor kinetics, thermal-hydraulic conditions, severe accident progression, time-dependent dose for design-basis accidents, emergency preparedness and response, health effects, and radionuclide transport during various operating and postulated accident conditions. Results from applying the codes support decision-making for risk-informed activities, review of licensees' codes and performance of audit calculations, and resolution of other technical issues. Code development is directed toward improving the realism and reliability of code results and making the codes easier to use. For more information, see the following code categories [3]:
• Probabilistic risk assessment codes
• Fuel behavior codes
• Reactor kinetics codes
• Thermal-hydraulics codes
• Severe accident codes
• Design-basis accident (DBA) codes
• Emergency preparedness and response (EPR) codes
• Health effects/dose calculation codes
• Radionuclide transport codes (for license termination/decommissioning)
12.8.1 Probabilistic risk assessment codes • SAPHIRE: Systems Analysis Programs for Hands-on Integrated Reliability (SAPHIRE) is used for performing probabilistic risk assessments.
12.8.2 Fuel behavior codes Fuel behavior codes are used to evaluate fuel behavior under various reactor operating conditions: • FRAPCON-3: It is a computer code used for steady state and mild transient analysis of the behavior of a single fuel rod under near-normal reactor operating conditions.
• FRAPTRAN: It is a computer code used for transient and design-basis accident analysis of the behavior of a single fuel rod under off-normal reactor operation conditions.
12.8.3 Reactor kinetics codes
Reactor kinetics codes are used to obtain reactor transient neutron flux distributions:
• PARCS: The Purdue Advanced Reactor Core Simulator (PARCS) is a computer code that solves the time-dependent two-group neutron diffusion equation in three-dimensional Cartesian geometry using nodal methods to obtain the transient neutron flux distribution. The code may be used in the analysis of reactivity-initiated accidents in light-water reactors where spatial effects may be important. It may be run in stand-alone mode or coupled to other NRC thermal-hydraulic codes such as RELAP5.
12.8.4 Thermal-hydraulics codes
Advanced computing plays a critical role in the design, licensing, and operation of nuclear power plants. The modern nuclear reactor system operates at a level of sophistication whereby human reasoning and simple theoretical models are simply not capable of bringing to light a full understanding of a system's response to some proposed perturbation, and yet there is an inherent need to acquire such understanding. Over the past 30 years or so, there has been a concerted effort on the part of the power utilities, the NRC, and foreign organizations to develop advanced computational tools for simulating reactor system thermal-hydraulic behavior during real and hypothetical transient scenarios. In particular, T/H codes are used to analyze loss-of-coolant accidents (LOCAs) and system transients in light-water nuclear reactors. The lessons learned from simulations carried out with these tools help form the basis for decisions made concerning plant design, operation, and safety. The NRC and other countries in the international nuclear community have agreed to exchange technical information on thermal-hydraulic safety issues related to reactor and plant systems. Under the terms of their agreements, the NRC provides these member countries the latest versions of its thermal-hydraulic systems analysis computer codes to help evaluate the safety of planned or operating plants in each member's country. To help ensure these analysis tools are of the highest quality and can be used with confidence, the international partners perform and document assessments of the codes for a wide range of applications, including identification of code improvements and error corrections. The thermal-hydraulics codes developed by the NRC include the following:
• TRACE: The TRAC/RELAP Advanced Computational Engine. A modernized thermal-hydraulics code designed to consolidate and extend the capabilities of NRC's three legacy safety codes: TRAC-P, TRAC-B, and RELAP. It is able to
analyze large- and small-break LOCAs and system transients in both pressurized- and boiling-water reactors (PWRs and BWRs). The capability exists to model thermal-hydraulic phenomena in both one-dimensional (1-D) and three-dimensional (3-D) space. This is the NRC's flagship thermal-hydraulics analysis tool.
• SNAP: The Symbolic Nuclear Analysis Package is a graphical user interface with preprocessor and postprocessor capabilities, which assists users in developing TRACE and RELAP5 input decks and running the codes.
• RELAP5: The Reactor Excursion and Leak Analysis Program is a tool for analyzing small-break LOCAs and system transients in PWRs or BWRs. It has the capability to model thermal-hydraulic phenomena in 1-D volumes. While this code still enjoys widespread use in the nuclear community, active maintenance will be phased out in the next few years as the usage of TRACE grows.
• Legacy tools that are no longer actively supported include the following thermal-hydraulics codes:
• TRAC-P: large-break LOCA and system transient analysis tool for PWRs; capability to model thermal-hydraulic phenomena in 1-D or 3-D components
• TRAC-B: large- and small-break LOCA and system transient analysis tool for BWRs; capability to model thermal-hydraulic phenomena in 1-D or 3-D components
• CONTAIN: containment transient analysis tool for PWRs or BWRs; capability to model thermal-hydraulic phenomena (within a lumped-parameter framework) for existing containment designs
12.8.5 Severe accident codes
Severe accident codes are used to model the progression of accidents in light-water reactor nuclear power plants:
• MELCOR: Integral Severe Accident Analysis Code: fast-running, parametric models.
• MACCS2: Accident Consequence Analysis Code: the computer code used to calculate the dispersion of radioactive material to the environment and the population. The MACCS2 code uses a dose-response model to determine the health consequences of a severe accident in terms of early fatalities (how many people in a population would die in the weeks or months following exposure) and latent cancer risk (how many people in a population would contract a fatal cancer as a result of exposure). MACCS2 originated as an acronym for the MELCOR Accident Consequence Code System but is now commonly known simply as the MACCS2 Accident Consequence Analysis Code.
• SCDAP/RELAP5: Integral Severe Accident Analysis Code: uses detailed mechanistic models.
• CONTAIN: Integral Containment Analysis Code: uses detailed mechanistic models. (CONTAIN severe accident model development was terminated in the mid-1990s.)
The MELCOR code has similar containment capabilities (but less detailed in some areas) and should generally be used instead of CONTAIN. • IFCI: Integral Fuel–Coolant Interactions Code. • VICTORIA: Radionuclide Transport and Decommissioning Codes: Radionuclide transport and decommissioning codes provide dose analyses in support of license termination and decommissioning.
12.8.6 Design-basis accident (DBA) codes
DBA codes are used to determine the time-dependent dose at a specified location for a given accident scenario:
• RADTRAD: A simplified model for RADionuclide Transport and Removal And Dose estimation. The RADTRAD code uses a combination of tables and numerical models of source-term reduction phenomena to determine the time-dependent dose at specified locations for a given accident scenario. The RADTRAD code can be used to assess occupational radiation exposures, typically in the control room; to estimate site boundary doses; and to estimate dose attenuation due to modification of a facility or accident sequence. RADTRAD 3.03 is available from the Radiation Safety Information Computational Center (RSICC).
12.8.7 Emergency preparedness and response (EPR) codes
EPR codes compute power reactor source terms, airborne transport of activity, and the resulting doses to allow easy comparison to EPA protective action guidelines:
• RASCAL: Radiological Assessment Systems for Consequence AnaLysis. The RASCAL code evaluates releases from nuclear power plants, spent fuel storage pools and casks, fuel cycle facilities, and radioactive material handling facilities, and is designed for use by the NRC in the independent assessment of dose projections during the response to radiological emergencies. The latest version is RASCAL 4.3; there is no cost associated with receipt of this code.
12.8.8 Dose and risk calculation software
Health effects/dose calculation codes are used to model and assess the health implications of radioactive exposure and contamination.
• VARSKIN: The NRC sponsored the development of the VARSKIN code in the 1980s to assist licensees in demonstrating compliance with Paragraph (c) of Title 10, Section 20.1201, of the Code of Federal Regulations (10 CFR 20.1201), "Occupational Dose Limits for Adults." Specifically, 10 CFR 20.1201(c) requires licensees to have an approved radiation protection program that includes established protocols for calculating and documenting the dose attributable to radioactive contamination of the skin. Since that time, the code has been significantly enhanced to simplify data entry and increase efficiency. VARSKIN 3 is available from the Radiation
Safety Information Computational Center (RSICC). For additional information, see NUREG/CR-6918, "VARSKIN 3: A Computer Code for Assessing Skin Dose from Skin Contamination." Since the release of VARSKIN 3 in 2004, the NRC staff has compared its dose calculations, for various energies and at various skin depths, with doses calculated by the Monte Carlo N-Particle Transport Code System (MCNP) developed by Los Alamos National Laboratory (LANL). That comparison indicated that VARSKIN 3 overestimates the dose with increasing photon energy. For that reason, the NRC is sponsoring a further enhancement to replace the existing photon dose algorithm, develop a quality assurance program for the beta dose model, and correct technical issues reported by users. To facilitate that enhancement, the NRC encourages users to report any problems or errors associated with the VARSKIN code.
12.8.9 Radionuclide transport codes
Radionuclide transport and decommissioning codes provide dose analyses in support of license termination and decommissioning:
• DandD: A code for screening analyses for license termination and decommissioning. The DandD software automates the definition and development of the scenarios, exposure pathways, models, mathematical formulations, assumptions, and justifications of parameter selections documented in Volumes 1 and 3 of NUREG/CR-5512.
• Probabilistic RESRAD 6.0 and RESRAD-BUILD 3.0 codes: The existing deterministic RESRAD 6.0 and RESRAD-BUILD 3.0 codes for site-specific modeling applications were adapted by Argonne National Laboratory (ANL) for NRC regulatory applications for probabilistic dose analysis to demonstrate compliance with the NRC's license termination rule (10 CFR Part 20, Subpart E) according to the guidance developed for the Standard Review Plan (SRP) for Decommissioning. (The deterministic RESRAD and RESRAD-BUILD codes are part of the family of codes developed by the US Department of Energy. The RESRAD code applies to the cleanup of sites, and the RESRAD-BUILD code applies to the cleanup of buildings and structures.)
The most capable tool available for modeling the multidimensional effects discussed earlier is computational fluid dynamics (CFD). Modern CFD is able to produce high-quality predictions of flows in complex geometries, but only with the use of large computing resources. It would be utterly impractical to build a CFD model of, for example, the entire primary circuit of a PWR. However, much of the primary circuit may be able to be modeled with adequate fidelity using a cheap one-dimensional system code, and it may only be in a limited part of the circuit that full three-dimensional effects are important. The natural response to this is to develop methods where simple one-dimensional models are applied where they are appropriate but are then coupled to full three-dimensional treatments of those parts of the system that require them.
In summary, readers interested in a more granular level of information should refer to the book by Zohuri [4].
References [1] J.R. Elliott, C.T. Lira, Introductory Chemical Engineering Thermodynamics, Prentice Hall, Upper Saddle River, NJ, 1999. [2] R.B. Bird, W.E. Stewart, E.N. Lightfoot, Transport Phenomena, John Wiley and Sons, New York, 1960. [3] http://www.nrc.gov/about-nrc/regulatory/research/safetycodes.html. [4] B. Zohuri, Thermal-Hydraulic Analysis of Nuclear Reactors, second ed., Springer Publishing Company, New York, NY, May 25, 2017.
CHAPTER 13
Energy storage driving renewable energy
Electricity markets are changing rapidly because of (1) the addition of wind and solar and (2) the goal of a low-carbon electricity grid. These changes result in times of both high electricity prices and very low or negative electricity prices. California has seen its first month in which, more than 20% of the time (mid-day), the wholesale price of electricity was zero or negative. This creates large incentives for coupling heat storage to advanced reactors to enable variable electricity and industrial-heat output (maximizing revenue) while the reactor operates at base load (minimizing cost). Recent studies have examined coupling various types of heat storage to Rankine and Brayton power cycles. However, there has been little examination of heat-storage options between (1) the reactor and (2) the power-conversion system or industrial customer. Heat-storage systems can be incorporated into sodium-, helium-, and salt-cooled reactors. Salt-cooled reactors include the fluoride-salt-cooled high-temperature reactor (FHR), with its solid fuel and clean coolant, and the molten salt reactor (MSR), with its fuel dissolved in the salt. For sodium and salt reactors, it is assumed that a heat-storage system would be in the secondary loop between the reactor and the power cycle. For helium-cooled reactors, heat storage can be in the primary or secondary loop.
13.1 Introduction
This chapter gives an elementary account of hybrid renewable energy systems (HRES). Such systems respond to today's demand for new sources of on-peak electricity and for storage of off-peak energy that can be dispatched when demand rises. HRES are becoming popular as stand-alone power systems for providing electricity in remote areas due to advances in renewable energy technologies and the subsequent rise in prices of petroleum products. A hybrid energy system (HES), or hybrid power system, usually consists of two or more renewable energy sources used together to provide increased system efficiency as well as greater balance in energy supply [1]. Renewable energy is energy collected from renewable resources, which are naturally replenished on a human timescale, such as sunlight, wind, rain, tides, waves, and geothermal heat. Renewable energy often provides energy in four important areas: electricity generation, air and water heating/cooling, transportation, and rural (off-grid) energy services [26].
This chapter also takes a first look at the rationale and the heat-storage options for deploying gigawatt-hour heat-storage systems with GEN IV reactors. Economics and safety are the primary selection criteria. The leading heat-storage candidate for sodium-cooled systems (a low-pressure secondary system with a small temperature drop across the reactor core) is steel in large tanks, with the sodium flowing through channels to move heat into and out of storage. The design minimizes the sodium volume in the storage and, thus, the risks and costs associated with sodium. For helium systems (high pressure with a large temperature drop across the core), the leading heat-storage options are:
1. varying the temperature of the reactor core,
2. steel or alumina firebrick in a secondary pressure vessel, and
3. nitrate salt or hot rock/firebrick at atmospheric pressure.
For salt systems, with low pressure, high temperatures, and a small temperature drop across the reactor core, the leading heat-storage systems are secondary salts. In each case, options are identified, along with the questions to be addressed. In some cases, there is a strong coupling between the heat-storage technology and the power cycle. The leading sodium heat-storage technology may imply changes in the power cycle. High-temperature salt systems couple efficiently to Brayton power cycles, which may create large incentives for the heat storage to remain within the power cycle rather than in any intermediate heat-transfer loop.
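A back-of-envelope sizing sketch (our numbers, not from the studies cited) shows why gigawatt-hour sensible-heat storage in steel implies very large masses:

```python
# Sensible-heat storage sizing: how much steel stores one gigawatt-hour?
# The specific heat and temperature swing below are illustrative assumptions.
CP_STEEL = 500.0  # J/(kg*K), approximate specific heat of carbon steel

def storage_mass_tonnes(energy_mwh: float, delta_t_k: float,
                        cp: float = CP_STEEL) -> float:
    """Mass (tonnes) needed to store energy_mwh over a swing of delta_t_k."""
    joules = energy_mwh * 3.6e9          # 1 MWh = 3.6e9 J
    return joules / (cp * delta_t_k) / 1000.0

# 1000 MWh (1 GWh) across a 100 K swing -> roughly 72,000 tonnes of steel,
# illustrating why tank volume and coolant inventory dominate the design.
print(storage_mass_tonnes(1000.0, 100.0))
```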
13.2 Hybrid energy system introductory
HESs combine two or more forms of energy generation, storage, or end-use technologies, and they can deliver a wide range of benefits compared with single-source systems. Variety in our day-to-day life could be considered the spice of life; therefore, why limit ourselves to just one energy source or storage option? HESs are an ideal solution, since they can offer substantial improvements in performance and cost reduction and can be tailored to varying end-user requirements.
The energy storage system (ESS) in a conventional stand-alone renewable energy power system (REPS) usually has a short lifespan, mainly due to the irregular output of renewable energy sources. In certain systems, the ESS is oversized to reduce the stress level and to meet intermittent peak power demand. A hybrid energy storage system (HESS) is a better solution in terms of durability, practicality, and cost effectiveness for the overall system implementation. The structure and the common issues of stand-alone REPS with ESS are discussed in this chapter, along with different structures of stand-alone REPS with HESS, such as passive, semiactive, and active HESS. As there is a variety of energy storage technologies available in the market, decision matrixes are introduced to evaluate the technical and economic characteristics of the energy storage technologies based on the requirements of stand-alone REPS. A detailed
review of state-of-the-art control strategies, both classical and intelligent, for REPS with HESS is also given, and future trends for REPS with HESS combinations and control strategies are discussed. Configurations could include renewable or nonrenewable energy sources, electrical and chemical energy storage, and fuel cells, often connected via a smart grid. They have the potential to dramatically reduce cost and emissions from energy generation and distribution for households but can be held back by the limitations of individual power generation or storage technologies; these may include cost, inconsistent supply (like interrupted solar on a cloudy day), etc. This means there is substantial demand for hybrid energy solutions that lower cost and improve efficiency while still meeting performance requirements. Fig. 13.1 presents an example of an HES, as depicted by CSIRO, Australia's national science agency, on its CSIROscope blog. CSIRO researchers note that there is now increased availability of renewable and modular power generation and storage technologies such as batteries, fuel cells, and household solar. "These technologies are becoming cost competitive, but the key to greater use is to combine them in connected hybrid systems," says Dr. Badwal, a researcher at the organization. He goes a step further by stating, "By doing this, we can offer substantial improvements in performance and cost."
Fig. 13.1 Example of hybrid energy system. (Courtesy of CSIROscope Corporation).
Consequently, early players in this game will stay ahead in their business and research by keeping their heads together with industry partners, and the collaborative space could be used to share the benefits of emerging HESs with industry and government to maximize the value of local energy sources. With this foundation under consideration, the first questions that come to mind are what a hybrid system is, what the word hybrid stands for, and what we really mean by looking at an HES as a new source of renewable energy for use during on-peak demand for electricity. Going toward the next century, the demand for electricity is on the rise, and the on-peak hours of that demand impose a challenging duty on the grid; thus, alternative sources of energy need to be found to meet such supply-and-demand constraints. Hence, looking for a new source of renewable energy is more and more appealing. The word hybrid refers to phenomena that combine two different elements; examples include:
1. Modern science has seen dramatic advances in hybrid technology, giving birth to hybrid cars.
2. Incorporating information and communications technology systems that automate smart houses and eco homes.
Similarly, HESs have been designed to generate electricity from different sources, such as solar panels and wind turbines, and now tap into sources such as hydrogen, which can be stored and held in reserve as a class of renewable energy. Making its production efficient and cost effective is therefore within the scope of every researcher and scientist at the university, industry, and national laboratory level working in this field. However, one of the biggest downfalls of renewable energy is that the energy supply is not constant; sources like solar and wind power fluctuate in intensity due to the weather and seasonal changes. Therefore, a reliable backup system is necessary for renewable energy-generating stations that are not connected to a national power grid, so that they can produce energy during off-peak periods and store it for use during on-peak periods. That is the driving factor behind the idea of producing hydrogen via nuclear power, which is indeed a solution to reduce carbon emissions. The price to pay includes the cost of nuclear waste storage and other related issues such as proliferation and the security of the fissionable, weapons-grade waste remaining at the end of the fuel burnup, as well as the consequences of man-made events (e.g., Three Mile Island and Chernobyl, accidents due to operator error) or natural disasters (e.g., Fukushima Daiichi in Japan). Nevertheless, something can probably be done to avoid at least part of this pollution and reduce public fear of a nuclear disaster by improving the safety of these power plants going forward with GEN IV designs, while we tap into the waste thermal energy generated by these reactors and put it to use in new sources of renewable energy, such as hydrogen production plants (HPPs) coupled to these very-high-temperature reactors (VHTRs).
As we said, finding a reliable backup system for renewable energy is an inevitable requirement, and systems that consist of a variety of power control methods and storage equipment, including battery banks and diesel generators among others, do not have a life cycle reliable or long enough to meet the demand on the grid during on-peak periods, or even at small scale for residential use or in remote areas. Power systems that are connected to the national grid do not have this problem because, in most cases, many different sources of power contribute to the national electricity supply. The question of reducing future energy demand, or of meeting it, is then on the table, and solutions need to be found; thus, hybrid technology for the production of electrical energy is very appealing, and research to make these systems cost effective and efficient has gathered huge momentum. It is undoubtedly true that big centralized power stations are still needed to generate enough power for big industrial sites. However, if we managed to dramatically reduce the amount of energy that the entire residential and small commercial building stock withdraws every year from the national energy grid, we would probably need fewer nuclear power plants (NPPs). That is arguably the viewpoint of antinuclear folks, but it remains to be seen and should not be a showstopper for solutions such as hybrid systems; research to make them more productive and efficient must continue. HESs often consist of a combination of fossil fuels and renewable energy sources used in conjunction with energy storage equipment (batteries) or hydrogen storage tanks. This is often done either to reduce the cost of generating electricity from fossil fuels or to provide backup for a renewable energy system, ensuring continuity of power supply when the renewable energy source fluctuates. There are several types of HESs, such as wind-solar, solar-diesel, wind-hydro, and wind-diesel, which are among those presently in production plants. The design of a system, or the choice of energy sources, depends on several considerations. The factors affecting the choice of hybrid power technology also tell us why people use hybrids and what some of the advantages are. The main factors are cost and the resources available. One localized advantage worth mentioning is stand-alone, off-grid operation in a self-sustaining mode: for example, a solar-powered barn in a remote or isolated area, where farmers can take advantage of independence from the grid's electricity feed. Solar energy can be produced on or off the grid. On-grid means a house remains connected to the state electricity grid. Off-grid means no connection to the electricity grid, so the house, business, or whatever is being powered relies solely on solar or solar-hybrid power. The ability to produce electricity off the grid is a major advantage of solar energy for people who live in isolated and rural areas. Power prices and the cost of installing power lines are often exorbitantly high in these places, and many have frequent power
Fig. 13.2 A solar barn in a remote area.
cuts. Fig. 13.2 is an illustration of a solar barn that can go off-grid, and solar power is a huge advantage for people in isolated locations, while Fig. 13.3 is an illustration of a solar farm as part of the electrical grid for providing the electricity power. The cost of hybrid power technology greatly affects the choices people make, particularly in developing countries. This also depends on the aim of the project. People who are planning to set up a hybrid energy project for their own use often focus on lowering the total investment and operational costs, while those planning to generate electricity for sale focus on the long-term project revenue. As such, systems that incorporate hydrogen storage and fuel cells are not very common with small-scale projects. The viability of one HES over another is usually pegged on the cost of generating each kilowatt [2,3].
Fig. 13.3 An illustration of a solar farm.
The availability of natural resources plays an enormous part in selecting the components of an HES; the right power, generation location, and method must be chosen [4]. Often, a hybrid system is opted for because the existing power resource is not enough to generate the amount of power needed, which is often the case when using micro-hydro plants. In some developing countries, such as parts of Ethiopia, a wind-solar hybrid power system, consisting of wind turbines and solar photovoltaic (PV) panels, was found to be most viable. This was because the wind resource alone was not sufficient to meet the electric load. Solar PV panels are used primarily for grid-connected electricity to operate residential appliances, commercial equipment, lighting, and air conditioning for all types of buildings. Through stand-alone systems and the use of batteries, they are also well suited to remote regions where there is no electricity source. Solar PV panels can be ground mounted, installed on building rooftops, or designed into building materials at the point of manufacture. Solar PV cells were very expensive, so it was not feasible for the project developers to use solar power alone [5].
Fig. 13.4 A house with a solar system.
In the next page, you may observe some of the HES sources, where some industry conducting research around that includes the enhancement of these systems by improving them technologically to present better return on investment (ROI) and total cost of ownership for energy owners of these resources to meet supply and demand for the electricity. • Coal mining and energy production Improving mine safety and developing smarter extraction and carbon capture techniques, which help lower mission. • Electricity grid and modeling Improve energy efficiency through intelligence models, systems, and management. • Energy storage and battery technologies Cutting-edge energy storage technologies that utilize heat, ceramics, and batteries. • Solar energy Making solar a reliable, stable power source for future energy—including solar thermal and PVs. • Oil and gas Understanding and unlocking resources of such energy both onshore and offshore gas and oil and enabling safe, efficient, and sustainable development of these wealth of resources. • Nuclear energy The new development based on research on new generation of NPP known as GEN IV has built up a new momentum to increase the thermal efficiencies of these power plants higher than their previous generation of GEN III, while they are more cost effective to be manufactured [6]. • Cryogenic for renewable energy The cryogenic energy facility stores power from renewables or off-peak generation by chilling air into liquid form. When the liquid air warms up, it expands and can drive a turbine to make electricity. The company behind the scheme, Highview Power Storage, believes that the technology has a great potential to be scaled up for long-term use with green energy sources. • Low emissions technologies New technologies that facilitate the development of low emissions energy sources and improve emissions from existing sources. Hybrid systems are most suitable for small grids and isolated or stand-alone and self-reliable systems as hybrid power generation is, by definition, a solution for getting around problems where one energy source is not sufficient. The popularity of HESs has grown so much that it is now a niche industry in itself—with custom systems being engineered for specific functions. For instance, Enercon, a German wind power company, has come up with a unique factory-designed hybrid power technology, including the world’s first hybrid winddiesel-powered ship, the E-Ship 1 [7].
Fig. 13.5 Enercon E-Ship I.
The German wind turbine manufacturer Enercon launched and christened its new rotor ship, the E-Ship 1, on August 2, 2008. The vessel has now been in service for 5 years, transporting wind turbines and other equipment to locations around the world, and is shown in Fig. 13.5.
13.2.1 Hybrid system as source of renewable energy
As mentioned in the previous section, an HES is a combination of energy sources with different characteristics and an energy storage medium. When it comes to stand-alone applications (Fig. 13.6), depending on an HES is a challenging process for a number of reasons, such as determining the best combination that reduces the initial capital investment, maintaining power supply reliability, and reducing the maintenance of system components [8]. A combination of energy sources having different characteristics reduces the impact of the time-varying energy potential of renewable energy sources. Simply put, solar PV energy
Fig. 13.6 Schematic diagram of a stand-alone hybrid energy system.
is available in the daytime, but at night you need to find other alternatives or store some of the solar PV energy generated during the day. Wind energy has similar qualities, but generally with much more chaotic variation. The time-varying nature of renewable energy potential makes it essential to incorporate energy storage and dispatchable energy sources. Energy systems play a major role in day-to-day life, whether in the refrigerator, the air conditioner, or the generator used for electricity. Although we all use power and energy, very few of us are concerned with energy conservation. Even though we always try to frame it in financial terms, there is something more to it, especially when considering social responsibility. Fossil fuel resources are depleting at a rapid pace, and at the same time, we face many problems created by the emissions from fossil fuel combustion. Therefore, we are in a period when special attention should be given to the conservation of energy. Optimal design of energy systems becomes vital in such circumstances; it is always a challenging process in which a number of technoeconomic and environmental aspects need to be considered. Most of the time, modeling such energy systems is a difficult task, given the number of design parameters to be considered. This makes the optimization work hard, and it is essential to move away from classical methods. Current commercial, utility-scale HESs include:
• Geothermal + solar PV
• Biomass + solar CSP
• Solar PV + fuel cells
• Wind + solar PV
• Biodiesel + wind
• Gas + solar CSP
• Coal + solar CSP
More information on any of these commercial plants can be found in [9].
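As a toy illustration of why storage bridges the day/night mismatch just described, the following sketch (entirely made-up profiles and capacities) steps a battery's state of charge through a day:

```python
# A toy state-of-charge loop showing how storage absorbs mid-day PV surplus
# and covers the evening deficit; all profiles and sizes are illustrative.
solar_kw  = [0, 0, 2, 5, 6, 5, 2, 0]   # hypothetical PV output over 3-h blocks
demand_kw = [2, 2, 2, 3, 3, 4, 5, 4]   # hypothetical household demand

soc_kwh, capacity_kwh, hours_per_block = 5.0, 20.0, 3.0
for pv, load in zip(solar_kw, demand_kw):
    net_kwh = (pv - load) * hours_per_block        # surplus charges, deficit discharges
    soc_kwh = min(capacity_kwh, max(0.0, soc_kwh + net_kwh))
    print(f"PV {pv} kW, load {load} kW -> state of charge {soc_kwh:.1f} kWh")
```

Real HESS control strategies add efficiency losses, charge-rate limits, and forecasting, but the basic charge/discharge bookkeeping is the same.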
13.3 Energy storage systems
The benefits of energy storage are significant and have long been recognized as necessary for the coordinated and reliable operation of utility grids. Energy storage is especially important to the integration of distributed renewable generation technologies. Storage protects against errors in forecasting, removes barriers to connecting renewable energy resources to a variety of grids, shifts on-peak demand by storing off-peak energy, provides frequency regulation, and can delay expensive grid upgrades or downtime due to sudden demand or the trip-off of any source attached to the nationwide grid system. See Chapter 17 and the reference by Zohuri and McDaniel [10].
Fig. 13.7 Electrical grid distribution in the US Department of Energy graphics. (Courtesy of Department of Energy).
It is important to know that there is no "national power grid" in the United States. In fact, the continental United States is divided into three main power grids (Fig. 13.7):
1. The Eastern Interconnected System, or the Eastern Interconnect.
2. The Western Interconnected System, or the Western Interconnect.
3. The Texas Interconnected System, or the Texas Interconnect.
Current commercial, utility-scale energy storage technologies include:
• Pumped hydropower storage
• Compressed air energy storage (CAES)
• Adiabatic compressed air energy storage for electricity (ADELE)
• Molten salt energy storage
• Batteries
• Flywheels
Note that "adiabatic" here means additional use of the compression heat to increase efficiency. The technology of choice today is the pumped-storage power plant. During any excess power supply, water is electrically pumped into a reservoir on a hill so that it can be discharged when power demand is high to drive a turbine in the valley downstream. Germany has pumped-storage power plants producing a total of about 7000 MW, with efficiencies claimed to be between 75% and 86%. The expansion
Fig. 13.8 Herdecke pumped-storage power plant. (Courtesy of RWE of German Power Company).
potential is severely limited, especially in northern Germany, where the balancing need is greatest. Fig. 13.8 shows the pumped-storage plant at Herdecke, Germany. The CAES concept is similar in principle to pumped storage: during phases of excess availability, electrically driven compressors compress air in a cavern to some 70 bars. To discharge the stored energy, the air is conducted through an air turbine, which drives a generator. Just as in pumped storage, its power can be released very quickly. One merit over pumped storage, however, is that the visible impact on the landscape is low. What is more, the facilities can be built near the centers of wind power production, especially in central and northern Germany (see Fig. 13.9).
Fig. 13.9 Turbine hall of the Vianden pumped-storage power plant. (Courtesy of RWE, the German power company).
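To make the pumped-storage numbers concrete, the sketch below estimates the recoverable energy of an upper reservoir from the basic relation E = ρgVhη. The reservoir volume, head, and component efficiencies are illustrative assumptions, not data for the Herdecke or Vianden plants; note that plausible pump and turbine efficiencies multiply to a round-trip value inside the 75% to 86% range quoted above.

```python
# Back-of-envelope sketch of pumped-storage energy capacity.
# All numbers below are illustrative assumptions, not published data
# for any specific plant.

RHO = 1000.0   # water density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def stored_energy_mwh(volume_m3, head_m, turbine_eff=0.9):
    """Recoverable electrical energy of an upper reservoir:
    E = rho * g * V * h * eta, converted from joules to MWh."""
    joules = RHO * G * volume_m3 * head_m * turbine_eff
    return joules / 3.6e9  # 1 MWh = 3.6e9 J

def round_trip_efficiency(pump_eff=0.88, turbine_eff=0.88):
    """Pump and turbine losses multiply; 0.88 * 0.88 falls inside
    the 75%-86% range quoted for the German fleet."""
    return pump_eff * turbine_eff

if __name__ == "__main__":
    # Assumed reservoir: 1.5 million m^3 with a 160 m head.
    print(f"Energy: {stored_energy_mwh(1.5e6, 160):.0f} MWh")   # ~589 MWh
    print(f"Round trip: {round_trip_efficiency():.0%}")          # ~77%
```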
Fig. 13.10 Illustration of the ADELE facility. (Courtesy of RWE, the German power company).
13.4 Compressed air energy storage description
CAES is the term given to the technique of storing energy as the potential energy of a compressed gas. Usually it refers to air pumped into large storage tanks or naturally occurring underground formations. While the technique has historically been used to provide the grid with a variety of ancillary services, it has gained attention recently as a means of addressing the intermittency problems associated with wind turbine electrical generators. See Fig. 13.10, an artistic schematic of the CAES approach. When energy is available, it is used to run air compressors, which pump air into the storage cavern. When electricity is needed, the air is expanded through conventional gas turbine expanders. Note that some additional energy (typically natural gas) is used during the expansion process to ensure that maximum energy is obtained from the compressed air (albeit as much as 67% less gas than would be used for an equivalent amount of electricity from gas turbine generators without CAES). Today, there exist two CAES concepts:
1. Conventional (diabatic) compressed air energy storage
2. Advanced adiabatic compressed air energy storage (AA-CAES)
CAES plants store energy in the form of compressed air. Only two plants of this type exist worldwide: the first was built over 30 years ago in Huntorf, Germany, with a power output of 320 MW and a storage capacity of 580 MWh; the second, located in McIntosh, Alabama, USA, began operation in 1991 with a 110 MW output and 2860 MWh of storage capacity. Both are still in operation.
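As a quick consistency check on the figures just quoted, the short sketch below divides each plant's storage capacity by its rated power to get the full-power discharge duration. The plant numbers are taken from the text; no plant modeling is implied.

```python
# Quick consistency check on the two CAES plants' published figures.

plants = {
    "Huntorf":  {"power_mw": 320, "storage_mwh": 580},
    "McIntosh": {"power_mw": 110, "storage_mwh": 2860},
}

for name, p in plants.items():
    hours = p["storage_mwh"] / p["power_mw"]
    print(f"{name}: full-power discharge for about {hours:.1f} h")

# Huntorf: ~1.8 h at the 320 MW rating quoted above (published power
# and storage figures for Huntorf vary between sources);
# McIntosh: 2860 / 110 = 26 h, matching the text.
```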
13.4.1 Compressed air energy storage
One plant has been operating in Huntorf (Lower Saxony) since 1978, and another in McIntosh (Alabama, USA) since 1991. The efficiency of the 320-MW plant in Huntorf is about 42%; that of McIntosh is around 54%. This means that they are more than 20 percentage points below the efficiency of pumped-storage plants [10].
1. Huntorf Plant
The world's first compressed air storage power station, the Huntorf Plant, has been operational since 1978. The 290 MW plant, located near Bremen, Germany, is used to provide peak shaving, spinning reserve, and Volt-Ampere Reactive (VAR) support. A total volume of 11 million cubic feet is stored at pressures up to 1000 psi in two underground salt caverns, situated 2100 to 2600 ft below the surface. The plant requires 12 hours of off-peak power to fully recharge and is then capable of delivering full output (290 MW) for up to 4 hours. This system operates a conventional cycle and combusts natural gas prior to expansion [25].
2. McIntosh
Alabama's Electric Cooperative has been running the world's second CAES facility, the 110 MW McIntosh project, since 1991. This commercial venture is used to store off-peak power, generate peak power, and provide spinning reserve. Nineteen million cubic feet of air is stored at pressures up to 1080 psi in a salt cavern up to 2500 ft deep, and the plant can provide full power output for 26 hours. This system recovers waste heat, which reduces fuel consumption by ∼25% compared to the Huntorf Plant [25].
More companies are investing in the CAES approach; two are listed as follows:
3. Iowa Stored Energy Park
Announced in January 2007, the Iowa Stored Energy Park (ISEP) is a partnership between the Iowa Association of Municipal Utilities and the Department of Energy. They plan to integrate a 75 to 150 MW wind farm with underground CAES, 3000 ft below the surface. The ISEP is currently in the design phase, with generation anticipated to start in 2011.
4. General Compression
A start-up company in the Boston area has teamed up with a compressor company (Mechanology) to produce the world's first wind turbine-air compressor. These new wind turbines will have a capacity of approximately 1.5 MW, but instead of generating electricity, each wind turbine will pump air into a CAES facility. This approach has the potential to save money and improve overall efficiency by eliminating the intermediate and unnecessary electrical generation between the turbine and the air compressor.
Conceptually, the basic idea is to use an electric compressor to compress air to a pressure of about 60 bars and store it in giant underground spaces such as old salt caverns,
aquifers, or pore storage sites, and to power a turbine to generate electricity again when demanded. These cavern storages are sealed airtight, as proved by the two existing plants, and have also been used to store natural gas for years. There are several advantages associated with CAES; the primary benefits of implementing a CAES system are the ancillary services provided to the grid. Applications include peak shaving, spinning reserve, VAR support, and arbitrage [25]. By utilizing CAES, the energy from a variety of sources (including wind, solar, and the grid itself) can be temporarily stored to be recovered at a later time, presumably when it is more needed and, perhaps, more valuable. The advantages of CAES are particularly compelling when coupled with an intermittent source such as wind energy. The proposed wind park in Iowa will result in a wind farm which could conceivably be used by utilities to supplement base loads or to meet hourly load variations and peaks. Although CAES systems that use underground storage are inherently site specific, it is estimated that more than 80% of US territory, including most of Idaho, has geology suitable for such underground storage. CAES utilizes proven technology that can be optimized for specific site conditions and competitively delivered by various suppliers. However, the concept has two major problems when it comes to pressurizing air. First, compressing the air generates a very significant amount of heat, with subsequent power loss if that heat goes unused. In addition, the air will freeze the power turbine when decompressed. Therefore, both the existing plants in Huntorf and McIntosh use a hybrid concept with gas combustion, as gas turbine power stations require compressed air to work efficiently anyway. Instead of using the combustion of the gas to compress the air as in a conventional gas turbine [22], the stored air in the caverns can be used, meaning that, technically, these CAES plants both store and produce electricity. As is the case with any energy conversion, certain losses are inevitable. Less energy eventually makes it to the grid if it passes through the CAES system than in a similar system without storage. Some of these losses are mitigated in the approach used by General Compression (using the wind turbine to compress the air directly). In any event, the requirement for additional heating in the expansion process is the most significant disadvantage. By some estimates, 1 kWh worth of natural gas will be needed for every 3 kWh generated from a CAES system. This is particularly problematic if fossil fuels are used for the heat addition. As natural gas prices increase, the economics of CAES, marginal at present, could fail. Again, using the wind energy example, one might view a wind farm using CAES as a gas turbine plant with a threefold increase in yield over a conventional gas turbine generator. While this is an impressive improvement, it takes some of the “renewable” luster off the wind farm. It is not clear how policies like the production tax credit or renewable portfolio standards will treat this technology. What disadvantages are worth considering with this kind of storage? What lowers the efficiency?
We can seek the answer as follows:
1. First, the air that heats up during compression must be cooled down to ambient temperature before it can be stored in the cavern.
2. Second, the cold air must be reheated for discharge of the storage facility, since it cools strongly when expanding through a turbine for power generation. Today's plants use natural gas for this. Valuable efficiency percentages are lost.
Rheinisch-Westfälisches Elektrizitätswerk (RWE) Power, the biggest German power producer and a leading player in the extraction of energy raw materials, has teamed up with General Electric (GE) to work on an adiabatic CAES facility for electricity supply known as ADELE. The concept and the principal process steps behind ADELE are as follows, and Fig. 13.10 shows a conceptual layout of such a facility. When the air is compressed, the heat is not released into the surroundings: most of it is captured in a heat-storage facility. During discharge, the heat-storage device re-releases its energy into the compressed air, so that no gas cocombustion to heat the compressed air is needed. The objective is to make efficiencies of around 70% possible. What is more, the input of fossil fuels is avoided. Hence, this technology permits the CO2-neutral provision of peak-load electricity from renewable energy. That this technology is doable has been shown by the European Union (EU) project.
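The scale of the compression-heat problem that ADELE addresses can be seen from the ideal-gas isentropic relation T2 = T1(p2/p1)^((γ−1)/γ). The sketch below applies it to a single ideal compression stage to 70 bars; real machines compress in stages and with losses, so actual exit temperatures (the roughly 600°C cited in the next subsection) sit below this one-stage bound. The inlet temperature is an assumed ambient value.

```python
# Why adiabatic CAES must manage compression heat: an ideal-gas,
# single-stage isentropic estimate of the compressor exit temperature.

GAMMA = 1.4  # ratio of specific heats for air

def isentropic_exit_temp_c(t_in_c, pressure_ratio, gamma=GAMMA):
    """T2 = T1 * (p2/p1)^((gamma-1)/gamma), computed in kelvin."""
    t1_k = t_in_c + 273.15
    t2_k = t1_k * pressure_ratio ** ((gamma - 1.0) / gamma)
    return t2_k - 273.15

if __name__ == "__main__":
    # Assumed 20 degC ambient air compressed to 70 bar in one ideal stage:
    print(f"{isentropic_exit_temp_c(20.0, 70.0):.0f} degC")
    # Prints roughly 714 degC: heat that a diabatic plant throws away
    # and that ADELE instead captures in firebrick-type heat storage.
```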
13.4.2 Advanced adiabatic compressed air energy storage
Currently in the development phase is the first ever AA-CAES plant, called ADELE [23], in Germany under the direction of RWE AG and in cooperation with GE, Züblin AG, and the German Aerospace Center (DLR) [24]. The AA-CAES concept was studied by GE and presented in 2008. The aim of the new joint project mounted by the German Aerospace Center (DLR), Ed. Züblin AG, Erdgasspeicher Kalle GmbH, GE Global Research, Ooms-Ittner-Hof GmbH, and RWE Power AG—the project being officially sealed in January 2010—is to develop an adiabatic CAES power station up to bidding maturity for a first demonstration plant. The federal ministry for economics has held out the prospect of funding for the ADELE project. The notable difference from existing CAES plants is that the heat produced by the compression process, which reaches up to 600°C (873 K), was previously dissipated into the environment. It is now transferred by heat exchangers and stored in heat-storage sites. During discharge, the heat storage releases its energy into the compressed air so that no gas cocombustion is needed to heat the compressed air and prevent the turbines from freezing, making it a true energy storage with a theoretical efficiency of approximately 70% that is largely carbon dioxide (CO2) neutral. Fig. 13.11 is an illustration of two commercial units available from GE.
Fig. 13.11 General Electric commercially available CAES units.
In conclusion, if implemented in Idaho, CAES can be used to delay or offset upgrades to the electric transmission grid that would otherwise be necessary. Additionally, it can be used to offset the adverse effects of intermittent renewable energy sources such as wind and solar. The energy community, particularly wind developers and grid operators with significant wind capacity, is watching the Iowa project closely. The economics of the concept appear to work out, and significant research and development efforts could address and mitigate some of the disadvantages. Until the improvements discussed above become commercially available, a biomass source of combustor gas for the expander would bring the approach to carbon-neutral status. In light of looming carbon regulations and rising natural gas costs, that would alleviate most of the economic uncertainty of CAES. As part of the business case argument, we can state that:
• The Electric Power Research Institute (EPRI) calls CAES the only energy storage option, apart from pumped hydro, that is available now and can store large amounts of energy and release it over long periods of time—both of which are necessary if you are looking at energy storage for the electrical grid. A 150 MW salt-based project is under development in upstate New York.
• Economics of large CAES (100–300 MW underground storage):
1. Capital: $590 to $730 per kW
2. Variable: $1 to $2 per kWh
3. Hours: 10
4. Total cost: $600 to $750 per kW, computed as $/kW + (hours × $/kWh); a worked example of this arithmetic appears after the pros and cons list below.
Fig. 13.12 Chart of regions with geology favorable for CAES and class 4+ winds.
Fig. 13.12 presents a chart of regions with geology favorable for CAES, with class 4+ winds superimposed to indicate promising CAES plant locations. Source: “Compressed Air Energy Storage: Theory, Resources, and Applications for Wind Power,” Samir Succar and Robert H. Williams, Princeton University (published April 2008). Pros and cons of CAES are listed below:
Benefits
• Efficiency—CAES plants consume about 35% of the amount of premium fuel utilized by a conventional combustion turbine (CT) and thus produce about 35% of the pollutants per kWh generated from a CT.
• Availability—CAES is the only technology available today, other than pumped hydro, which can store large amounts of energy and release it over long periods of time. According to a recent study by EPRI, 80% of US land has geology suitable for underground storage. Pumped hydro is still the most common option for large-scale energy storage, but few new sites are available, and they are linked to weather.
• Potential large scale—Like pumped hydro, there are no technical limits to the implementation of large projects.
• Energy price variation—Playing the spread between on-peak and off-peak prices. The differential between the two prices is the time value of energy storage. This is basically “buy low, sell high.” But according to Smith at B&V, this does not necessarily get you there, where “there” is the ability for a CAES project to generate revenue as a stand-alone project.
• Capacity
• Ancillary services such as spinning reserve, regulation up, regulation down, black start, and VAR support.
• Integrating renewable energy sources.
Risks and issues
• Limited geologic formations—Unfortunately, the geologic formations necessary for compressed air storage are relatively rare, meaning that it likely will never be a major contributor to the national energy system. At large scale it is open to siting constraints similar to those of pumped hydro.
• Safety—Mainly concerns over the catastrophic rupture of the tank. Highly conservative safety codes make this a rare occurrence, at the trade-off of higher weight. Codes may limit the legal working pressure to less than 40% of the rupture pressure for steel bottles, and less than 20% for fiber-wound bottles. Design rules are according to the ISO 11439 standard. High-pressure bottles are fairly strong, so they generally do not rupture in crashes.
• Cost—Also subject to financing difficulties due to the nature of underground construction.
• Proof of concept—The effectiveness and economy of CAES has not yet been fully proved, especially for adiabatic storage.
• Reheat requirement—Upon removal from storage, the air must be reheated prior to expansion in the turbine to power a generator. The technology is not truly “clean” because it consumes about 35% of the amount of premium fuel consumed by a conventional CT and thus produces about 35% of the pollutants on a per-kWh basis when compared to it.
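A worked version of the total-cost arithmetic promised above; the capital, variable, and hours figures are exactly those listed for large underground CAES.

```python
# Total-cost arithmetic for large CAES (100-300 MW underground storage):
# total ($/kW) = capital ($/kW) + hours * variable ($/kWh).

def total_cost_per_kw(capital_per_kw, variable_per_kwh, hours):
    return capital_per_kw + hours * variable_per_kwh

low = total_cost_per_kw(590, 1, 10)    # -> 600 $/kW
high = total_cost_per_kw(730, 2, 10)   # -> 750 $/kW
print(f"Total cost: ${low} to ${high} per kW")
```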
13.5 Variable electricity with base-load reactor operation
Another way of storing energy to meet variable electricity demand with base-load reactor operation is suggested by Forsberg [12] of MIT, based on recent technology and research by Forsberg et al. [13] on the nuclear air-Brayton combined cycle (NACC), an ongoing collaboration. The goal of their collaboration is not only to deal with a low-carbon world using energy sources such as nuclear, but also to look at other renewable energy sources such as wind, solar, and hydrogen produced by the VHTR of the next generation nuclear plant coupled with HPP in a coexisting circumstance. The defining characteristics of these technologies are:
1. High capital and low operating costs, requiring full-capacity operation for economic energy production.
2. Output that does not match society's variable energy needs.
This challenge suggests a need to develop new nuclear technologies to meet the variable energy needs of a low-carbon world while improving economics.
Fig. 13.13 Capability of modular FHR with NACC and FIRES with base-load FHR operation. See references by Forsberg et al. [12-13].
Hence, to meet the above challenge, we have been developing an FHR with a NACC [7-8] and Firebrick Resistance-Heated Energy Storage (FIRES). The goals are to:
I. Improve NPP economics by 50% to 100% relative to a base-load NPP.
II. Develop the enabling technology for a zero-carbon nuclear-renewables electricity grid by providing dispatchable power.
III. Eliminate major fuel failures and hence eliminate the potential for major offsite radionuclide releases in a beyond-design-basis accident.
Fig. 13.13 shows the capabilities of a modular FHR when coupled to the electricity grid. The FHR produces base-load electricity, with peak electricity produced by a topping cycle using auxiliary natural gas or stored heat—or, further into the future, hydrogen. The FIRES heat-storage capability enables the FHR to replace energy storage technologies such as batteries and pumped storage—a storage requirement for a grid with significant nondispatchable solar- or wind-generating systems. The FHR is a new class of reactor (Fig. 13.14) with characteristics different from the light water reactor (LWR).
Fig. 13.14 Comparison of the LWR and FHR [7].
The fuel is the graphite-matrix coated-particle fuel used by high-temperature gas-cooled reactors (HTGRs), resulting in similar reactor core and fuel cycle designs—except that the power density is greater, because liquids are better coolants than gases. The coolant is a clean fluoride salt mixture. The coolant salts were originally developed for the MSR, where the fuel is dissolved in the coolant. Current coolant-boundary material limitations imply maximum coolant temperatures of about 700°C. New materials are being developed that may allow exit coolant temperatures of 800°C or more. The power cycle is like that used in natural-gas-fired plants. The fluoride salt coolants were originally developed for the US Aircraft Nuclear Propulsion program in the late 1950s, whose goal was to develop a nuclear-powered jet bomber. These fluoride salts have low nuclear cross sections, with melting points of 350°C to 500°C and boiling points of more than 1200°C—properties suited to efficient transfer of heat from a reactor to a jet engine. Since then there have been two developments. The first was high-temperature graphite-matrix coated-particle fuels for HTGRs that are compatible with liquid salt coolants. The second has been a half-century of improvements in utility gas turbines that now make it feasible to couple a nuclear reactor (the FHR) to a NACC. The FHR is coupled to a NACC with FIRES (Fig. 13.15). In the power cycle, external air is filtered, compressed, heated by hot salt from the FHR while passing through a coiled-tube air heater (CTAH), sent through a turbine producing electricity, reheated in a second CTAH to the same gas temperature, and sent through a second turbine producing added electricity. Warm low-pressure air from the gas turbine exhaust drives a heat recovery steam generator (HRSG), which provides steam either to an industrial steam distribution system for process heat sales or to a Rankine cycle for additional electricity production. The air from the HRSG is exhausted up the stack to the atmosphere. Added electricity can be produced by injecting fuel (natural gas, hydrogen, etc.) or adding stored heat after the nuclear heating in the second CTAH. These boost temperatures in the compressed gas stream going to the second turbine and to the HRSG [13].
Fig. 13.15 Nuclear air-Brayton combined cycle (NACC) with Firebrick Resistance-Heated Energy Storage (FIRES) [13].
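To illustrate how the pieces of the NACC cycle just described fit together, the following is a minimal ideal-gas sketch: compression, heating in a CTAH, expansion, reheat in a second CTAH to the same turbine inlet temperature, a second expansion, and a crude Rankine bottoming stage on the exhaust. All parameter values (temperatures, pressure ratio, component efficiencies) are illustrative assumptions, not the reference design's numbers; the point is only that a low-pressure-ratio reheat Brayton cycle plus a bottoming cycle lands in the same ballpark as the 40.3% quoted below for the intercooled configuration.

```python
# Minimal ideal-gas sketch of a NACC-style reheat Brayton cycle with a
# crude bottoming stage.  All parameters are illustrative assumptions.

CP = 1005.0    # specific heat of air, J/(kg K)
K = 0.4 / 1.4  # (gamma - 1)/gamma for air

def nacc_sketch(t_amb=300.0, t_tit=950.0, r=2.0,
                eta_c=0.90, eta_t=0.92,
                eta_rankine=0.33, t_stack=400.0):
    # Compressor with isentropic efficiency eta_c
    t2 = t_amb * (1.0 + (r**K - 1.0) / eta_c)
    w_comp = CP * (t2 - t_amb)

    # Two turbines, overall ratio r split evenly (sqrt(r) each),
    # both fed at t_tit thanks to the reheat CTAH
    dt_turb = t_tit * (1.0 - r**(-K / 2.0)) * eta_t
    w_turb = 2.0 * CP * dt_turb

    # Heat added in the two CTAHs (initial heating plus reheat)
    q_in = CP * (t_tit - t2) + CP * dt_turb

    # Crude HRSG + Rankine bottoming on the second turbine's exhaust
    t_exhaust = t_tit - dt_turb
    w_bottom = eta_rankine * CP * (t_exhaust - t_stack)

    return (w_turb - w_comp + w_bottom) / q_in

print(f"Combined-cycle efficiency ~ {nacc_sketch():.1%}")
# Prints a value in the high 30s, the same ballpark as the 40.3%
# quoted for the intercooled configuration of Fig. 13.16.
```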
Since a NACC system looks quite good for a salt-cooled reactor, it is worth considering what it might do for a sodium-cooled reactor. With some modifications, it appears that it could be competitive with systems that have been built. A computer model was built based on standard techniques for analyzing Brayton and Rankine systems. System performance was optimized by varying the turbine outlet temperatures for a fixed turbine inlet temperature. A second parameter that can usually be varied to obtain optimum performance is the peak pressure in the steam cycle; for most of the cases considered here, this was held constant at 12.4 MPa (1800 psi) [8]. A fairly detailed design was attempted for the heat exchangers involved in the system, as they tend to dominate system size; more details are provided in Chapters 5 to 7 of this book. The techniques and data were extracted from the text by Kays and London [14]. The stored heat option involves using electricity to heat firebrick inside a prestressed concrete pressure vessel to high temperatures at times of low electricity prices, that is, below the price of natural gas. When peak power is needed, compressed air after nuclear heating and before entering the second turbine would be routed through the firebrick, heated to higher temperatures, and sent to the second turbine. The efficiency of converting electricity to heat is 100%. The efficiency of converting auxiliary heat (natural gas or stored heat) to electricity in our current design is 66%. This implies a round-trip efficiency of electricity to heat to electricity of ∼66%. Improvements in gas turbines in the next decade are expected to raise that efficiency to 70%. FIRES would only be added to NACC in electricity grids where there are significant quantities of electricity at prices less than the price of natural gas. As discussed later, these conditions are expected in any power grid with significant installed wind or solar capacity. As we said, much of the FIRES heat-storage technology is being developed by General Electric® and its partners for the adiabatic CAES system called ADELE. The first prototype storage system is expected to be operational by 2018, with 90 MWe peak power and 360 MWh of storage. When the price of electricity is low, the air is (1) adiabatically compressed to 70 bars with an exit temperature of 600°C, (2) cooled to 40°C by flowing the hot compressed air through firebrick in a prestressed concrete pressure vessel, and (3) stored as cool compressed air in underground salt caverns. At times of high electricity prices, the compressed air from the underground cavern goes through the firebrick, is reheated, and is sent through a turbine to produce electricity, with the air exhausted to the atmosphere. The expected round-trip storage efficiency is 70%. The ADELE project is integrating firebrick heat storage into a gas turbine system. For NACC using FIRES there are differences: (1) the peak pressure would be about a third of that in the ADELE project, (2) the firebrick is heated to higher temperatures, and (3) electricity is used to heat the firebrick to higher temperatures at times of low electricity prices. The technology for heat-storage integration into NACC is partly under development.
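The round-trip arithmetic behind FIRES is simple enough to state directly: electricity is converted to heat at essentially 100% efficiency, and auxiliary heat is converted back to electricity at the topping-cycle efficiency given above.

```python
# FIRES round-trip efficiency, using the figures stated above.

def round_trip(eta_elec_to_heat, eta_heat_to_elec):
    # Losses multiply: electricity -> heat -> electricity
    return eta_elec_to_heat * eta_heat_to_elec

print(f"Current design: {round_trip(1.00, 0.66):.0%}")              # ~66%
print(f"With improved gas turbines: {round_trip(1.00, 0.70):.0%}")  # ~70%
```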
Fig. 13.16 System layout with recuperator and intercooler. C, compressor; GT, gas turbine; ST, steam turbine; PHX, primary heat exchanger; IC, intercooler; P, pump [10].
To show that a NACC system is very efficient and cost effective as an innovative approach to storing energy via the FIRES process, the following reasoning is presented here; for further information the reader should explore the textbook by Zohuri [6]. Since the high-pressure water in the bottoming cycle must be heated, and since heating of the air in the air compressor increases the work required, it is possible to split the compressor and add an intercooler that heats the high-pressure water in the bottoming cycle while cooling the output from the first part of the compressor. If this is done, the efficiency goes to 40.3% and the overall compressor pressure ratio goes to 2.0. A system diagram is provided in Fig. 13.16 [6]. The efficiency of NACC power systems continues to increase with increased turbine inlet temperatures. For the foreseeable future there does not appear to be a limitation to using off-the-shelf materials, as it is not likely that a reactor-heated system will exceed a 1300 K turbine inlet temperature. A comparison of the cycle efficiencies for several cycles that have been proposed for the next generation nuclear plant [6] is presented in Fig. 13.17. The calculations for the NACC systems are based on the system described in Fig. 13.16, with a peak steam pressure of 12.4 MPa. NACC systems can be applied to most of the proposed next generation systems. Their strongest competitor in terms of cycle efficiency is the supercritical CO2 system. NACC systems will match or better the efficiency of these systems at or above 700°C.
Fig. 13.17 Cycle efficiencies for various advanced cycles [8].
But NACC systems have the competitive advantage of a large customer base for system hardware, a significantly reduced circulating water requirement for rejecting waste heat, and much greater efforts to improve the technology relative to other power cycles [13]. On January 21, 2010, the California Public Utilities Commission (CPUC) approved Pacific Gas and Electric's (PG&E's) request for matching funds of $25 million for the project. The CPUC found that the CAES demonstration project will provide PG&E with a better understanding of a promising energy storage technology, which has the potential to lower costs for customers and reduce greenhouse gas (GHG) emissions through greater integration of renewable energy sources. The California Energy Commission has also shown support for the project with conditional approval of a $1 million grant. The commercial-scale project has a nominal output capacity of 300 MW—like a mid-sized power plant—for up to 10 hours. It is estimated that a commercial plant could come on-line in the 2020–2021 timeframe. The time frame of this project is laid out here, and Fig. 13.18 is a conceptual illustration of such a commercial facility. PG&E is exploring this project in three primary phases:
Fig. 13.18 Conceptual illustration of the PG&E approach.
Phase 1: Reservoir feasibility, including site control, reservoir performance, economic viability, and environmental impacts.
Phase 2: Commercial plant engineering, procurement and construction, and commissioning.
Phase 3: Operations monitoring and technology transfer.
However, construction of a prototype brings new obstacles and challenges. Some of these challenges are the engineering of heat-storage sites capable of holding the energy over longer periods without significant losses; compressors that can handle both the high pressures and the high temperatures; and turbines with the ability to maintain a constant output under changing conditions (changing temperatures, decreasing air pressure). However, with the current state of the art it is very doable. Before we finish this section, a few terms and definitions used in this technology should be noted:
A. Adiabatic storage—The heat that appears during compression is also stored and then returned to the air when the air is expanded. This is a subject of ongoing study, but no utility-scale plants of this type have been built. The theoretical efficiency of adiabatic energy storage approaches 100% for large and/or rapidly cycled devices, but in practice round-trip efficiency is expected to be 70%. Heat can be stored in a solid such as concrete or stone, or more likely in a fluid such as hot oil (up to 300°C) or a molten salt (600°C).
B. Diabatic storage—The extra heat is removed from the air with intercoolers following compression (thus approaching isothermal compression) and is dissipated into the atmosphere as waste. Upon removal from storage, the air must be reheated prior to expansion in the turbine to power a generator. The heat discarded in the intercoolers degrades efficiency, but the system is simpler than the adiabatic one, and thus far it is the only system which has been implemented commercially. The McIntosh CAES plant requires 0.69 kWh of electricity and 1.17 kWh of gas for each 1.0 kWh of electrical output (a non-CAES natural gas plant can be up to 60% efficient and therefore uses 1.67 kWh of gas per kWh generated); a worked example of this energy accounting appears after these definitions.
C. Dispatchable generation—Sources of electricity that can be dispatched at the request of power grid operators; that is, they can be turned on or off upon demand. This should be contrasted with certain types of base-load generation capacity, such as nuclear power, which may have limited capability to maneuver or adjust their power output. CAES can help make intermittent power sources such as wind power dispatchable. The time periods in which dispatchable generation plant may be turned on or off vary and may be considered in time frames of minutes or hours.
D. Intercooler (original UK term, sometimes aftercooler in US practice), or charge air cooler—An air-to-air or air-to-liquid heat exchange device which removes the heat of compression (i.e., the temperature rise) that occurs in any gas when its pressure is raised or its unit mass per unit volume (density) is increased. Compressing air heats it and expanding air cools it. Therefore, practical air engines require heat exchangers in order to avoid excessively high or low temperatures, and even so they do not reach ideal constant-temperature conditions.
E. Isothermal compression and expansion—Approaches which attempt to maintain operating temperature by constant heat exchange to the environment. They are only practical for rather low power levels, unless very effective heat exchangers can be incorporated. The theoretical efficiency of isothermal energy storage approaches 100% for small and/or slowly cycled devices and/or perfect heat transfer to the environment.
F. Turboexpander (also referred to as an expansion turbine)—A centrifugal or axial flow turbine through which a high-pressure gas is expanded to produce work that is often used to drive a compressor. Because work is extracted from the expanding high-pressure gas, the expansion is an isentropic process (i.e., a constant-entropy process), and the low-pressure exhaust gas from the turbine is at a very low temperature, sometimes as low as −90°C or less.
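The following sketch works through the McIntosh energy accounting given in item B. The input and output figures are those stated above; the overall ratio it prints, about 54%, matches the McIntosh efficiency quoted in Section 13.4.1.

```python
# Diabatic CAES energy accounting for McIntosh, per kWh delivered.

elec_in = 0.69   # kWh of off-peak electricity consumed
gas_in = 1.17    # kWh of natural gas consumed
elec_out = 1.0   # kWh of electricity delivered

print(f"Overall energy ratio: {elec_out / (elec_in + gas_in):.0%}")  # ~54%

# Reference case from the text: a non-CAES gas plant at up to 60%
# efficiency burns 1/0.60 = 1.67 kWh of gas per kWh generated.
print(f"Gas-only plant: {1 / 0.60:.2f} kWh of gas per kWh out")
```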
13.6 Why we need nuclear power
Some scientists and engineers in the energy field are calling for 100% renewable energy. That is totally the wrong approach. However, the new generation of NPPs, known as GEN IV, is taking a different design approach to be more efficient and cost effective from an ownership perspective. It has been suggested
Fig. 13.19 A nuclear power plant south of Detroit, Michigan.
by this author [6] and others, as an innovative approach, to consider a combined cycle as a way of improving the thermal efficiency of these reactors in a small modular reactor (SMR) design configuration to a level close to, if not higher than, the 60% efficiency at which today's fossil- and gas-fired power plants produce electricity. GEN IV NPPs are designed with smaller real estate and footprint with respect to their predecessors of past generations, such as the GEN III types. See Fig. 13.19, which illustrates a typical Generation III (GEN III) NPP south of Detroit, Michigan. As part of the Climate Desk collaboration, a story on renewable energy was published by Julian Spector on July 20, 2015, on the CityLab.com site under the title “The Environmentalist Case Against 100% Renewable Energy Plans” [15]. In his article, he claimed that “It might be technically feasible, but that doesn't mean it's the best plan to pursue,” and he continued:
Renewable energy has had a busy year. California and New York have adopted ambitious plans calling for 50% renewable energy by 2030. A group of Stanford and Berkeley scientists has put forth an even bolder vision—encouraging all 50 states to run on wind, water, and solar by 2050, without any nuclear energy or biofuels in the picture. New York City Mayor Bill de Blasio has announced his intention to go fully renewable with the city government's power, too. A world without any fossil fuel energy would be a much cleaner place for both people and the environment. Right now, renewable energy accounts for just 13% of all US electricity. A significant increase in that share would lead to a major reduction in air pollution and its attendant diseases, not to mention the costs of climate change-induced flooding or wildfires. The lives, time, and property saved could be put to work tackling other social problems. But it is not entirely clear that a US energy grid based on 100% renewables is the best way to achieve a zero-carbon future. On the contrary, there is a strong environmentalist case for approaching that goal with caution. Limiting a zero-carbon future to wind, water, and solar means greater costs of storing this energy, discarding other
existing zero-carbon sources like nuclear, and generally blanketing the earth with panels and turbines as a means to save it.
13.6.1 The merits of total transformation
For their renewable energy roadmap study, Stanford professor Mark Jacobson and his team used US Energy Information Administration data to project “business as usual” energy consumption in 2050. They then compiled state-by-state energy portfolios needed to meet that projected demand through expanded wind, water, and solar energy generation. The endpoint is a future in which every driver in America rides an electric car, every stove in every house and restaurant cooks with electricity instead of gas, and every plane flies on cryogenic hydrogen (that is what rockets use, and the Soviet Union built an experimental airplane that flew on it, too) [16]. The authors point out that electric energy is more efficient than fuel combustion for heating and motors. When that efficiency is scaled up to an entirely electrified society by 2050, they project a 39.3% reduction in America's electricity load compared to business as usual (see Fig. 13.20). Mark Jacobson's plan to convert US energy to 100% renewables by 2050 involves gradually reducing reliance on fossil fuels and nuclear energy and increasing the supply of wind, solar, and water energy. Heavy reliance on wind (50% of supply) and solar (45.25% of supply) poses the challenge, though, of meeting peak consumer energy demand when the sun is not shining or the winds die down.
Fig. 13.20 A plan from Stanford’s Mark Jacobson et al. (Courtesy of Energy & Environmental Science).
Jacobson and company propose to do this without any new battery technology, by assembling a host of creative energy storage devices, such as piping surplus energy as heat into the ground and pulling it up later for use, or using cheap off-peak electricity to make ice which then goes to work cooling buildings during high-demand periods. The roadmap says that by 2050 it can “match load without loss every 30 [seconds] for six years”; the authors have an auxiliary study to support this claim, although it has not been released yet. This all might sound overwhelmingly expensive, but the researchers counter that with macrolevel accounting of the societal costs avoided by a fully renewable grid. They estimate the price tag for lives lost to fossil fuel-induced air pollution; eliminating that pollution, they find, would save up to $600 billion per year in 2050. They also estimate savings to the United States from avoiding climate change-related damage, such as droughts, wildfires, floods, and severe weather. The shift to renewables will eliminate around 3.9 million jobs associated with the old energy industry but will result in a net gain of around two million 40-year jobs. Jacobson sees this transition as a way to recognize the negative externalities the country has already been paying for. “The people who are running these coal mines have not paid for the health and climate costs they have been causing,” he says. “They have been freeloading on society for a long time.”
13.6.2 The downsides of monoculture
The goal of 100% renewables plans is to achieve a host of social benefits by cutting carbon emissions out of energy production. Rejecting some zero-carbon energy sources, such as nuclear, from the outset makes the problem harder, says MIT doctoral candidate Jesse Jenkins, who researches the electric power sector's transition to a zero-carbon system. “Why would we want to constrain ourselves to a narrow set of options to confront climate change and air pollution and other energy sector challenges when those challenges are already quite difficult?” he says. An entirely renewable portfolio creates its own special obstacles. For instance, Jenkins notes, the marginal value of renewables decreases as they penetrate the market. The free energy inputs of wind and solar initially displace the more expensive energy inputs, like natural gas. But assuming renewables successfully displace all coal and natural gas, the plan would then require building more wind and solar to displace nuclear, which provided 19% of US electricity production in 2014. That requires spending more money to achieve the same goal of a clean grid. Fig. 13.21 is an illustration of a solar farm with its array of panels collecting the sun's energy. The other problem for planning an all-renewable grid is variability: solar produces when there is enough sun, and wind produces when there is enough wind. Luckily the sun tends to shine during the day, when there is higher demand for energy. But ensuring power when the renewables are not producing much requires energy storage.
Fig. 13.21 Solar panels soak in rays at a Southern California Edison Electricity Station in Carson, California. (Courtesy of REUTERS/Lucy Nicholson).
That storage could be done through batteries, or through the heat, ice, and other methods Jacobson mentions. “What people really miss about storage is it's not just a daily storage problem,” says Armond Cohen, executive director of the Clean Air Task Force, a group that researches low-carbon energy technologies. “Wind and solar availability around the world, from week to week and month to month, can vary up to a factor of five or six.” Storage must account for times when the wind cuts out for weeks due to seasonal weather variation. It is easy enough to make ice one night to cool your building the next day, Cohen notes, but to save energy for 3 weeks of low wind you would need to store up enough ice to cool the building for that whole time. Accounting for sufficient storage, then, increases the costs and scope of the energy transition. Jacobson calls for 605,400 MW of new storage capacity. US grid storage as of August 2013 totaled 24,600 MW, meaning a nearly 25-fold increase would be required to meet the roadmap (a quick check of this arithmetic follows). That is not impossible, but it is an effort that would not be necessary with continuous energy sources.
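The quick check promised above, using the two storage figures as quoted:

```python
# Scale of the storage build-out in the Jacobson roadmap.

required_mw = 605_400   # new storage capacity called for by 2050
existing_mw = 24_600    # US grid storage as of August 2013

print(f"Required increase: ~{required_mw / existing_mw:.0f}-fold")  # ~25
```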
13.6.3 The other zero-carbon energy: nuclear
Plans calling for 100% renewable energy eliminate nuclear energy from the mix. The new state roadmap casts out nuclear without much discussion, but Jacobson tells CityLab this is because when you factor in the mining and refining of uranium, nuclear energy emits more carbon than wind power. He also cites the difficulty and expense of creating new nuclear plants, and other risks like proliferation and meltdown.
Fig. 13.22 Three Mile Island nuclear power plant at night in 2011. (Courtesy of REUTERS/Jonathan Ernst).
The decision to entirely abandon nuclear was particularly galling to Michael Shellenberger, president and cofounder of the Breakthrough Institute, which researches ways that modernism and technology can improve human prosperity and safeguard the environment, and an author of An Ecomodernist Manifesto [17]. He argues that nuclear's efficiency, small land-use footprint, and limited resultant pollution make it a vital part of any low-carbon future. This is debated among nuclear scientists and engineers. Even some environmentalists believe any plan for cleaner energy in the United States should involve nuclear. Fig. 13.22 is a picture of the Three Mile Island NPP at night. “If you care about the environment, you want food and energy production to become more efficient and centralized,” he says. “You want to put fewer inputs in and get more outputs out and get less waste.” As primary energy sources advanced from firewood to coal to natural gas to nuclear, Shellenberger says, humans have managed to get consecutively more energy out compared to what they put in. “Neither solar nor wind are substitutes for coal or natural gas or oil,” he says. “The new product has to be equal to or superior to the predecessor, and solar and wind are totally different from those fuels and inferior in that they're intermittent.” Stanford economics professor Frank Wolak, director of the Program on Energy and Sustainable Development (PESD) [18], agrees that nuclear should play a role in a zero-carbon grid. He notes that American nuclear generators are safer than ever and have an extremely high capacity factor, meaning they produce almost all of their potential energy. American nuclear set a record-high capacity factor of 91.8% for 2014. Wind and solar have capacity factors less than half as large. Note that the PESD is an international, interdisciplinary program that draws on the fields of economics, political science, law, and management to investigate how real energy markets work. This means understanding not only what is technologically possible and economically efficient but also how actual political and regulatory processes lead to outcomes that are costlier and less effective than they could be.
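Capacity factor is simply the energy actually produced divided by the energy that continuous full-power operation would produce. The sketch below applies the record 91.8% fleet figure quoted above; the 1000 MW plant size is an assumption chosen only for scale.

```python
# Capacity factor = actual energy / (nameplate capacity * hours).

HOURS_PER_YEAR = 8760

def annual_output_gwh(capacity_mw, capacity_factor):
    return capacity_mw * HOURS_PER_YEAR * capacity_factor / 1000.0

# An assumed 1000 MW plant at the 2014 record US nuclear fleet figure:
print(f"{annual_output_gwh(1000, 0.918):,.0f} GWh/year")  # ~8,042

# At a wind or solar capacity factor of less than half that (<0.459),
# the same nameplate capacity delivers less than half the energy.
```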
Fig. 13.23 Illustration of monthly capacity factors for selected fuels and technologies. (Courtesy of US Energy Information Administration).
Nuclear energy has a much higher capacity factor than renewable energy does—meaning it produces far more of its potential energy; see Fig. 13.23. “Nuclear energy is an extremely reliable source of zero-carbon energy,” Wolak writes via email. “It makes very little economic sense to phase it out, particularly given how successful the US nuclear industry has been over the past 30 years.” The irony of environmentalists cutting out nuclear in favor of primarily wind and solar is that these sources require much more transformation of the landscape to produce the same amount of energy. That footprint draws opposition from other environmental groups and from people who just do not want to live near wind turbines. The Jacobson plan, for instance, envisions 156,200 new 5-MW offshore wind turbines. Cohen from the Clean Air Task Force compares that to the Cape Wind project, which would have installed 468 MW of wind turbines off Cape Cod. That project collapsed following legal and political opposition from millionaire landowners, but also from local townspeople and fishermen. Jacobson's proposal amounts to building nearly 1700 times the offshore capacity of Cape Wind (see Fig. 13.24).
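The Cape Wind comparison can be checked directly from the numbers in the paragraph above:

```python
# Offshore wind comparison from the figures quoted above.

turbines = 156_200     # 5-MW offshore turbines in the Jacobson plan
turbine_mw = 5
cape_wind_mw = 468     # capacity of the failed Cape Wind project

plan_mw = turbines * turbine_mw
print(f"Planned offshore capacity: {plan_mw:,} MW")             # 781,000 MW
print(f"Cape Wind equivalents: ~{plan_mw / cape_wind_mw:.0f}")  # ~1,669
# That is, nearly 1700 times the capacity of Cape Wind, as stated above.
```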
13.6.4 A diverse portfolio
The Jacobson roadmap shows that a 100% renewable grid is feasible from an engineering standpoint. The politics of implementing such a plan are much trickier, though. The study itself offers only broad recommendations for easing the transition (e.g., “Incentivize more use of efficient lighting in buildings and on city streets.”) and points to the ramp-up of aircraft production during World War II as evidence of America's manufacturing ability under pressure. Whether or not US urgency about the environment will ever reach wartime heights is another question.
Fig. 13.24 A proposed rendering of the failed Cape Wind project, which would have installed 468 MW of wind turbines off Cape Cod.
Jacobson says it will be up to policymakers to fill in the details, and notes that the recent renewable visions outlined by California and New York show it can be done. “We're trying to provide an endpoint and each state is going to have to figure out how to get to that endpoint,” he says. Technical and political feasibility aside, it is also unclear why a fully renewable grid would be more desirable than any other combination of zero-carbon energy sources. “[The 100% renewable roadmap] is not an optimization study,” says Jenkins. “It's not saying this is the best pathway forward in terms of any metric, particularly in terms of cost. They say, ‘How much can we push renewables and only renewables? And what will be necessary to try to decarbonize with that pathway alone?’” In other words, if the goal is to cut out carbon emissions, there are other ways to do it. Jenkins is working on models that optimize the electric grid with constraints for cost, technical feasibility, and low CO2 emissions. For an area with Texas-like wind and solar resources and energy demand, around 67% would come from nuclear (plus hydropower or fossil fuels with carbon capture and storage, if available). Wind and solar make up about 19%, and the remaining 13% would be gas utilities that fire up quickly to meet peak demand when the other sources cannot. Those numbers change for different places, and in a scenario with better storage capacity, renewables can take on more of the load from nuclear. “All these pieces work together,” Jenkins says. “If you try to push any one of these pieces too far it ends up being more costly and difficult to manage than the optimal system.” As solar and wind technology improves and gets cheaper, other paths to cleaner power are evolving, too. New molten salt nuclear reactors, still in development, promise
less uranium-intensive power generation that does not need water for cooling. They would play a significant role in bringing costs down for nuclear plants. Technology to retrofit fossil fuel plants for carbon capture and storage is still scaling up and lowering costs, Jenkins says. That would make it possible to clean up coal and gas plants that still have decades of operation left in their lifetimes, rather than shutting them down and building new capacity in their place. There is a good environmental argument for replacing dirty fossil fuel systems with renewables, but the reasons for replacing zero-carbon systems with other zero-carbon systems are less clear. Recognizing cost constraints while planning for a zero-carbon grid would force us to do more with less, which is actually a pretty good approach to sustainability.
13.7 Security of energy supply
Coordinating energy security in supply activities is of utmost importance for the United States of America (USA) and for the EU and its member states, in economic, technical, and political terms. Secure energy supply is a cornerstone of the “magic triangle” of energy policy, the other two being competition and sustainability. And in times of rising geopolitical conflicts, supply security has also increased in importance in the external relations of the USA and the EU. The Coordinating Energy Security in Supply Activities (CESSA) project was originally funded by DG Research in the EU within the Sixth Framework Programme and was also supported by DG TREN through information and access to decision makers. CESSA was coordinated by the Université Paris-Sud and the École des Mines de Paris/ParisTech, with work packages attributed to the University of Cambridge, the Universidad Pontificia Comillas in Madrid, and the German Institute for Economic Research (DIW Berlin) in cooperation with the Chair of Energy Economics and Public Sector Management at the University of Technology (TU) Dresden. The Florence School of Regulation provided input to the project coordination and the conclusions. In addition, scholars from Stanford University and the Massachusetts Institute of Technology, among others, contributed to the work. The salience of electricity security differs greatly across the member states of the EU. In western member states, history has provided robust and flexible electricity systems, and market liberalization is generally well advanced. These countries enjoy a diverse range of energy sources, and much investment is underway to expand this range of supply options. Investment in nuclear energy represents one such option; however, several EU countries, including Ireland and Austria, remain resolutely opposed to nuclear power. The growth of the EU from 12 member states to 27 has reduced the proportion of member countries with an antinuclear stance. In the EU, nuclear power policy is shaped by two regulatory pressures: the regulation of electricity markets and the safety regulation of a hazardous and politically contentious
technology. While the benefits of a single European electricity market are widely recognized, progress on the question of pan-European safety regulation is much less developed. International project collaboration is emerging, particularly in Eastern EU member states. CESSA would support moves toward the regionalization and the eventual Europeanization of safety regulation. There is also some movement in the United States toward the nuclear power industry, looking to the new generation of these plants, namely GEN IV, and in particular the smaller-footprint designs known as SMRs; companies such as Westinghouse (W), GE, Babcock & Wilcox (B&W), and in particular the newly established company NuScale are in the lead on these types of reactors. Economics is central to the future of nuclear power. We stress that NPPs can be developed in a liberalized electricity market with no direct subsidy. This possibility is favored by stable long-term carbon prices, sustained high oil and gas prices, and regulatory approval for grid reinforcement by monopoly transmission companies similar to that put in place to assist new renewables projects. During the CESSA project the relative economic attractiveness of nuclear energy investment improved significantly, such that economic risks now appear less daunting, although important issues of economic risk do remain, notably arising from the recent rapid escalation in construction costs and remaining uncertainties about the time before commissioning. The need to expand the supply of domestically produced energy is significant. America's transportation sector relies almost exclusively on refined petroleum products. Approximately 52% of the petroleum consumed for transportation in the United States is imported [19], and that percentage is expected to rise steadily for the foreseeable future (Fig. 13.25). On a global scale, petroleum supplies will be in higher demand as highly populated, developing countries expand their economies and become more energy intensive. Hydrogen-powered fuel cell vehicles would virtually eliminate imports of foreign oil, because the hydrogen fuel can be produced almost entirely from the diverse domestic energy sources of renewable resources, fossil fuels, and nuclear power. Hydrogen's role as a major energy carrier would also provide the United States with a more efficient and diversified energy infrastructure that includes a variety of options for fueling central and distributed electric power generation systems. America's reliance on imported oil is the key challenge to our energy security. While oil is used in all sectors and for a wide variety of uses, the large majority is used for transportation—and a majority of that is used in light-duty passenger vehicles (cars and light trucks) [20].
13.8 Environmental quality
The combustion of fossil fuels accounts for the majority of anthropogenic GHG emissions (chiefly carbon dioxide, CO2) released into the atmosphere. The largest sources of CO2 emissions are the electric utility and transportation sectors. Should strong
constraints on carbon emissions be required, hydrogen will play an important role in a low-carbon global economy. Distributed hydrogen production from natural gas and central hydrogen production from natural gas (with the potential for capture and sequestration of carbon) and coal (with the capture and sequestration of carbon) can provide the means for domestic fossil fuels to remain viable energy resources. In addition, fuel cells operating on hydrogen produced from renewable resources or nuclear energy result in near-zero-carbon emissions. Air quality is a major national concern. It has been estimated that about 50% of Americans live in areas where levels of one or more air pollutants are high enough to affect public health and/or the environment [21]. Personal vehicles and electric power plants are significant contributors to the nation's air quality problems. Most states are now developing strategies for achieving national ambient air quality standards. Despite great progress in air quality improvement, approximately 150 million people nationwide lived in counties with pollution levels above the National Ambient Air Quality Standards in 2007 (see Fig. 13.26). See references a and b below.
a. US Environmental Protection Agency, “Air Trends: Basic Information,” (n.d.), retrieved November 18, 2008, from http://www.epa.gov/airtrends/sixpoll.html
b. US Census Bureau, 2007 Population Estimate, retrieved November 18, 2008, from http://www.census.gov
However, bear in mind that heat accounts for over half of global energy consumption and is a significant contributor to CO2 emissions. Renewables play a key role in decarbonizing and providing cleaner heat but currently account for less than 10%
[Chart: US petroleum consumption and production, in million barrels per day, from 1985 with projections to 2035, broken out by sector (cars, light trucks, buses and freight trucks, off-highway and military, marine, rail, air, and non-transportation: industrial, commercial, residential, and power), shown against US production of petroleum, biofuels, and coal liquids.]
Fig. 13.25 America's widening “oil gap.” (Courtesy of Department of Energy).
Fig. 13.26 Number of people living in counties with air quality concentrations above the level of the National Ambient Air Quality Standards in 2007. (Courtesy of Department of Energy).
of heat supply. A range of barriers needs to be overcome to increase renewable heat deployment, yet renewable heat has received much less policy attention than renewable electricity. Thus, implementing reasonable and effective policies for renewable energy in a time of transition plays an important role, as discussed in the next section of this chapter.
13.9 Nuclear power plant as renewable source of energy
Policy Exchange's Energy and Environment Unit conducts innovative and independent policy research into a wide range of environmental, infrastructure, and regulatory challenges. Our objectives are to influence policy making and to shape debate. We produce publications, organize events, and use the media to promote our findings and policy proposals. A key focus of our work is to identify ways to tackle environmental challenges effectively, while minimizing adverse impacts on living standards. We promote well-designed regulation to exploit the power of markets to achieve environmental outcomes innovatively and cost effectively. The discovery of nuclear fission in 1939, which led to the Manhattan Project, was an event that opened the prospect of an entirely new source of power utilizing the internal energy of the atom. Nuclear energy is an incredibly efficient method of producing and generating electricity that also enables low-carbon emissions. Currently, many NPPs use Generation III reactors to produce power. In the United States today there are 99 operating nuclear power reactors with a total installed capacity of about 100 GWe.
Fig. 13.27 Electric generation by energy source.
generating resources. However, the high reliability and base-load status of these plants result in a contribution of about 20% of actual US electricity generation; the sketch below makes this capacity-factor arithmetic explicit. More importantly, these 99 nuclear power reactors are the source of 63.3% of US clean-air electricity. Other clean sources include hydro at 21.2%, wind at 13%, geothermal at 1.3%, and solar at 0.7%. See Fig. 13.27 for US electric generation by energy source. Nuclear power is a controversial topic in environmental physics, with multiple pros and cons. It could be the future for our planet, or it could repeat the fossil-fuel story, lasting a couple of hundred years before its fuel becomes scarce and exceedingly expensive.
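The arithmetic behind a fleet holding roughly 10% of installed capacity yet delivering roughly 20% of generation is simply the capacity factor: nuclear plants run near full power almost continuously, while the rest of the fleet does not. The following is a minimal sketch in Python; the capacity-factor values are illustrative assumptions, not figures from this chapter:

```python
# Minimal sketch: why ~10% of installed capacity can supply ~20% of generation.
# Capacity factors below are illustrative assumptions, not figures from the text.
HOURS_PER_YEAR = 8760

nuclear_capacity_gw = 100   # ~99 reactors, ~100 GWe installed
total_capacity_gw = 1000    # assumed total US installed capacity
nuclear_cf = 0.90           # nuclear runs near full power (base load)
fleet_avg_cf = 0.45         # assumed average across all other sources

nuclear_twh = nuclear_capacity_gw * nuclear_cf * HOURS_PER_YEAR / 1000
other_twh = (total_capacity_gw - nuclear_capacity_gw) * fleet_avg_cf * HOURS_PER_YEAR / 1000

share_capacity = nuclear_capacity_gw / total_capacity_gw
share_generation = nuclear_twh / (nuclear_twh + other_twh)

print(f"Capacity share:   {share_capacity:.0%}")    # ~10%
print(f"Generation share: {share_generation:.0%}")  # ~18-20%
```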
Unfortunately, nuclear power is not strictly renewable, as it uses radioactive "heavy" metals such as uranium as its fuel. Uranium is not very abundant, at about 2 to 4 parts per million in the earth's crust, though still roughly forty times more abundant than silver. Also, for extraction to be economically viable, uranium must be mined in large quantities, which can only be done in countries such as Canada and Australia; many countries with small uranium deposits cannot extract it profitably from a return-on-investment point of view. The contribution of nuclear to clean, reliable electricity is threatened by the approaching, inevitable retirement of existing reactors. Nuclear Regulatory Commission (NRC) licenses expire, and reactor retirements begin in 2029; all currently operating plants will retire by 2050. Despite the advantages of high reliability, competitive generating costs, and low environmental impact, the prospects for new builds of GWe-scale conventional plants in the United States are not very promising, especially for utilities in unregulated markets. This hesitance is due to inherently uncertain licensing and construction costs and durations, and perceived health and environmental risks. However, the new advanced generation of SMRs has key characteristics that answer many of the concerns raised by environmental communities, as utilities and their owners consider new NPPs for their future generation portfolios to supply enough electricity to meet demand driven by population increase, a demand projected to grow by about 19% globally. Worldwide, 435 nuclear power reactors are in operation, totaling 367 GW(e) of generation capacity; 103 of them are in the United States, operating with Generation II and III technology and employing LWR designs, which use ordinary water as both moderator and coolant. The next wave of nuclear plants takes the GEN II concepts to the next level by improving both safety and efficiency, since the current generation of power plants is at the end of its life cycle. Utilities plan to build GEN III plants by the end of the decade, and conceptual studies of new designs, known as GEN IV and falling under the new roadmap of SMRs, are now on the horizon, with better safety and efficiency in mind as part of the research and development of this roadmap. In a Generation II reactor of the Pressurized Water Reactor (PWR) type, water circulates through the core (see Chapter One of this book), where it is heated by the fuel's chain reaction. The hot water is then piped to a steam generator, and the steam spins a turbine that produces electricity. The Generation III evolutionary PWR improves upon the Generation II PWR design primarily by enhancing safety features, both from the viewpoint of Probabilistic Risk Assessment (PRA) and from other mechanical and operational perspectives. Two separate 51-inch-thick concrete walls, the inner one lined with metal, are each strong enough to withstand the impact of a heavy object such as a commercial airplane, a requirement adopted in the aftermath of the September 11, 2001 attacks and the destruction of the twin towers in New York.
Fig. 13.28 Pebble bed reactor (PBR) schematic: (a) pebble-type HTGR fuel; (b) sketch of a pebble-bed reactor.
The reactor vessel sits on a 20-ft slab of concrete with a leak-tight "core catcher," where the molten core would collect and cool in the event of a meltdown. There are also four safeguard buildings with independent pressurizers and steam generators, each capable of providing emergency cooling of the reactor core. One of the new roadmaps for GEN III construction considers pebble-type fuel, a smooth graphite sphere about the size of a tennis ball, as shown in Fig. 13.28. Given that it could take years to assess the pros and cons of all six GEN IV designs mentioned in Chapter One of this book, and with electricity demand rising worldwide due to population growth, Congress, at least in the United States, might not wait that long. In addition to replacing the aging fleet of Generation II reactors that are coming to the end of their life cycles, the government wants to make progress on another front: the production of hydrogen as part of hybrid energy, to fuel the dream of exhaust-free cars running independent of gasoline and of dependency on foreign oil [26]. As a result, for the front-running pebble bed reactor design, an initial $1.25 billion demonstration plant is in progress in Idaho. It is a helium-cooled, graphite-moderated reactor, also known as a VHTR, whose extremely high outlet temperature of around 1650°F to 1830°F would be ideal both for thermal efficiency and for efficiently producing hydrogen [26-28]. One of the key characteristics of SMRs is their efficiency in producing thermal output for generating electricity for day-to-day consumption, while also serving as a new source of reliable energy when coupled with a plant
Fig. 13.29 Emission avoided by the US nuclear industry. (Source: Emissions avoided by nuclear power are calculated using regional fossil fuel emissions rates from the Environmental Protection Agency and plant generation data from the Energy Information Administration Updated: 4/14).
producing hydrogen as fuel for a new generation of cars, preventing the additional carbon dioxide emitted by the engines of today's cars [28]. Fig. 13.29 illustrates the emissions avoided by such an approach, utilizing one of the six final GEN IV designs, the VHTR. Decarbonizing our existing electricity system with 100% renewable energy would be possible, but unnecessarily expensive and perhaps unsustainable. The intermittent nature of solar and wind would mean that large amounts of underutilized backup capacity and storage would be required at great expense to the consumer/taxpayer. Biomass could be used to provide some backup power supply, but this is unlikely to be a sustainable solution for more than a small part of our electricity system [2]. Decarbonizing our whole energy system using renewable sources would test the limits of the possible. Electricity comprises just one-fifth of final energy demand in the United Kingdom, for example, so creating a 100% renewable energy economy would be an order of magnitude more difficult than the already challenging task of powering our existing electricity grid with 100% renewable sources. In the United States, under the Department of Energy's watch, the nuclear industry is pushing the same agenda as the United Kingdom when it comes to the new advanced generation of NPPs. There is much hope for Generation IV reactors in terms of sustainability, safety, and price. While Generation III reactors will likely remain popular as the technology of choice for new reactors currently being built, Generation IV will be an opportunity to build more sustainable nuclear reactors for the longer-term future (meaning the next 20 to 50 years) [29]. The technology is certainly en route for this timeline, but
there are other political, strategic, and economic hurdles that Generation IV prototypes will need to overcome in order to become part of the nuclear power infrastructure [29]. One of the biggest hurdles is the fact that there has been a decrease in funding for Generation IV reactors, especially given the popularity of Generation III reactors [29]. Though Generation IV shows quite a bit of promise, especially in developing applications outside of current NPPs, it is hard to move the current technology past the status quo [30]. To find a solution to this problem, many large international organizations, such as the Generation IV International Forum, have been working to find synergies with other systems to promote the research and development (R&D) of Generation IV reactors [29]. Major areas of R&D for Generation IV reactors have been sustainability and safety, two key measures against which nuclear reactor development is judged [30]. As we have discussed in previous chapters of this book, GEN IV NPPs offer an excellent safety margin and impose better safety criteria. 2014 marked a period of important progress in terms of safety design guidelines for Generation IV reactors. A hierarchy of safety standards has been established, starting with Safety Fundamentals, moving on to Safety Design Criteria and Guidelines, and ending with Technical Codes & Standards [3]. The safety criteria are also being explored for the variety of different systems that have been prototyped, including the VHTR (Fig. 13.30), sodium fast reactor, supercritical water reactor (SCWR), gas fast reactor, lead fast reactor, and MSR [31,32]. These six
Fig. 13.30 Very-high-temperature reactor of Generation IV in conjunction with a hydrogen production plant facility (graphite reactor core and reflector, helium coolant loop, heat exchanger, and hydrogen production plant).
are the main designs being explored, with modifications made after the Fukushima accident [33]. The six designs fall under two general categories: thermal reactors and fast reactors [32]. The main safety designs being explored would allow reactors to avoid pressurized operations and to shut down automatically in case of emergency [30]. Many designs also seek to avoid using water to cool the reactor, which would reduce the risks in situations where water is lost through leaks or heat [30]. The main advantages Generation IV seeks to provide are reducing the time the waste remains radioactive (by an order of magnitude, from millennia to centuries), improving the energy yield of the nuclear fuel, increasing the variety of fuels that can power the reactor, and allowing reactors to use already-present nuclear waste in their operations [33]. Many of these goals fall under the umbrella of sustainability, enabling nuclear reactors to become more sustainable and environmentally friendly [30]. In conclusion, Generation IV reactors have some very large goals ahead, but they are the ones we should all be aiming for. Nuclear power has the potential to change the way energy is accessed on this planet and could provide an alternative that is efficient and sustainable.
13.10 The future of nuclear power
SMRs are the latest "new" technology that nuclear advocates present as the game changer that will overcome the economic and safety failures associated with previous generations of NPPs. The debate over SMRs has been particularly intense around pioneer manufacturers such as NuScale, because of the rapid failure of large "nuclear renaissance" reactors in market economies, the urgent need to address climate change, and the dramatic success of alternative, decentralized resources in lowering costs and increasing deployment. Assessing the prospects for SMR technology from three perspectives is therefore in order:
1. The implications of the history of cost escalation in nuclear reactor construction for the learning, economies of scale, and other processes that SMR advocates claim will lower the cost of construction, and the trend toward a production market driven by energy demand by virtue of modularization.
2. The challenges SMR technology faces in terms of high costs resulting from lost economies of scale, the long lead time needed to develop a new design, the size of the task of creating assembly lines for modular reactors, and intense concerns about safety.
3. The cost and other characteristics—e.g., scalability, speed to market, flexibility, etc.—of available alternatives compared with SMR technology, and last but not least, operational licensing.
One important element of surprise in the "nuclear renaissance" that needs to be observed is the recent decision of major vendors such as Westinghouse and B&W,
at least in the United States, to dramatically reduce their SMR development efforts, reflecting the severe disadvantages that SMR technology faces in the next several decades. On the other hand, companies such as NuScale are expanding their SMR technology and going forward at full speed with the design and implementation of advanced SMRs for production, on a path to growing from a new company into a major player in the nuclear energy industry [8]. The analysis by Cooper [34] identified the four factors creating the conditions for a "nuclear renaissance" and the dozen characteristics suggesting small modular technology would play a large part in that renaissance, and found that they have turned sharply against nuclear power and its future existence. In fact, he argues that, looking at the long history of commercial nuclear power, this outcome is not merely repetitive; it is endemic and possibly inevitable. Thus, the failure of nuclear economics (see the next section of this chapter) is not just bad luck: nuclear energy is inherently uneconomic because it relies on a catastrophically dangerous resource that is vulnerable to human frailties and the vicissitudes of "Mother Nature." Cooper also argues that the severe threats to public safety posed by nuclear power, and the evolving demands of safety, result in an extremely complex technology that requires long lead times and large sunk capital costs. The technology suffers a constant stream of cost escalation and does not exhibit the cost-reduction processes observed in other industries, where mass production for mass consumption, together with competition from other manufacturers selling in the same space, is the main driver of falling costs. This roller-coaster fluctuation can be observed in the following two charts, Figs. 13.31 and 13.32, where Fig. 13.32 compares completed and canceled nuclear energy capacity with fossil-fired capacity. In this figure we note the data for the 1970–1996 time frame, which bears on the question of whether the Three Mile Island accident in 1979 was the main cause of US nuclear power's woes. Therefore, any nation that claims to have the wherewithal (technical expertise and economic resources) to build a "safe" nuclear reactor will have the wherewithal to meet its needs for electricity with alternatives that are less costly and less risky. Thus, at present and for the foreseeable future, it is a virtual certainty that nuclear power is not going to be the least-cost option or close to it, even in a low-carbon utility sector [35]. Considering this dismal picture of the prospects for nuclear technology, large and small, as illustrated by Figs. 13.33 and 13.34–13.37, it is not surprising that SMR technology has stumbled getting out of the starting gate, with a dramatic reduction in interest from two of the leading developers, B&W and Westinghouse, in the United States for the time being, unless the future need for nuclear power changes drastically and forces the major developers to turn their attention back to it on economic grounds.
Fig. 13.31 Illustration of cost trends for nuclear, wind, and solar. (Courtesy of US Department of Energy, August 2013).
B&W, one of the major firms that had received a federal SMR subsidy, stepped back from the development of SMR technology because of the failure "to secure significant additional investors or customer engineering, procurement, and construction contracts to provide the financial support necessary to develop and deploy reactors." This move caught many advocates of this energy source by surprise, and B&W's SMR spending was reduced from about $80 million per year to around $60 million per year [36,37].
Fig. 13.32 Illustration of completed and canceled nuclear capacity compared to fossil-fired capacity. (Courtesy of Bulletin of the Atomic Scientists, 2013).
By the same token, Westinghouse, also a major US developer in the SMR space and the leading vendor supplying designs for large-scale nuclear power, currently handling projects under construction with its AP-1000 NPP in the United States, announced a move similar to B&W's, stepping back from the development of small modular nuclear technology. The reason for the decision: Westinghouse could find no customers. Instead of pushing ahead to build SMRs, Westinghouse said it would focus on decommissioning existing reactors [12,13].
13.11 Small modular reactor driven renewable and sustainable energy
To address this subject, we ask ourselves: What is the most efficient source of energy? The answer rests on the fact that the true cost of electricity is difficult to pin down, because a number of inputs comprise it: the cost of the fuel itself, the cost of production, and the cost of dealing with the damage that fuel does to the environment. Energy Points, a company that does energy analysis for business, factors in these myriad values in terms of what percentage of the energy input—fossil fuel energy, plus energy for production and energy for environmental mitigation—will become usable electricity. The chart in Fig. 13.33 shows that fossil fuels yield, on a national average, only a portion of their original energy when converted into electricity. That is because
Fig. 13.33 Cost of electricity per 1 MWh.
they are fossil fuels that require other fossil fuels to make the conversion into electricity; their emissions, such as carbon dioxide, also require a lot of energy to be mitigated. Renewables, however, have energy sources that are not fossil fuels, and their only other energy inputs are production and mitigating the waste from that production. That actually results in more energy produced than the fossil-fuel energy put in. Wind, the most efficient fuel for electricity, creates 1164% of its original energy inputs when converted into electricity; on the other end of the efficiency spectrum, coal retains just 29% of its original energy. These are national averages, meaning that, for example, solar might be more efficient in a place such as Arizona, with ample infrastructure and direct sunlight, than it is across the whole nation. A solar farm may thus be well suited to such an environment; and where a shortage of fresh water rules out the cooling needs of fossil, gas, and nuclear plants, a solar farm may be the only practical, and quite possibly cost-efficient, renewable choice for generating electricity. However, in any given area, electricity might come from a number of different sources, including oil, coal, gas, wind, hydropower, and solar, each with its own set of costs, both internal and external. From Energy Points: Energy Points' methodology measures environmental externalities and calculates the energy it takes to mitigate them. For example, it quantifies the GHG emissions that result from turning coal and natural gas into electricity and then calculates the energy it would take to mitigate those emissions through carbon capture and sequestration. Water scarcity and contamination are quantified as the energy that is required to durably supply water to that area. And in the case of solar or wind energy, Energy Points incorporates the life-cycle impact of manufacturing and shipping the panels. This metric is a more rounded calculation than merely cost or carbon footprint. For example, hydroelectricity has the lowest carbon footprint at 4 gCO2/kWh, but when Energy Points factors in the full life cycle of the different fuels, wind is the most efficient. Additionally, natural gas is the cheapest fuel for producing electricity, according to levelized cost data from the Environmental Protection Agency, which measured the total cost of building and operating a generating plant over an assumed financial life and duty cycle. Though it is cheap, it is not very efficient once its production and emissions are factored in [38].
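To make the metric concrete: the ratio being described is electricity delivered divided by all non-renewable energy spent along the way (fuel, production, and mitigation). The sketch below uses illustrative input values, not Energy Points' proprietary data, to show how wind can exceed 100% while coal stays far below it:

```python
# Minimal sketch of the "energy return" metric described above.
# All numbers are illustrative assumptions, not Energy Points data.
def energy_return(electricity_out, fuel_in, production_in, mitigation_in):
    """Electricity delivered per unit of non-renewable energy spent, as a %."""
    return 100 * electricity_out / (fuel_in + production_in + mitigation_in)

# Coal: the fuel itself is a non-renewable input, and emissions are costly to mitigate.
coal = energy_return(electricity_out=35, fuel_in=100, production_in=5, mitigation_in=15)

# Wind: the "fuel" is free, so only turbine manufacturing/shipping and waste count.
wind = energy_return(electricity_out=100, fuel_in=0, production_in=7, mitigation_in=1.6)

print(f"coal: {coal:.0f}%")   # well under 100%
print(f"wind: {wind:.0f}%")   # well over 100%: output exceeds non-renewable inputs
```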
13.12 Small modular reactor driven hydrogen for renewable energy source
Research is going forward to produce hydrogen based on nuclear energy. Hydrogen production processes necessitate high temperatures that can be reached in fourth-generation nuclear reactors (i.e., SMRs). Technological studies are now underway in
order to define and qualify components that in the future will enable us to retrieve and transfer the heat produced by these reactors. Hydrogen combustion turbine (CT) power could be one of the solutions to our future energy needs, particularly for on-peak electricity demand; but until recently the problem with hydrogen power was producing it for use as an energy source. Although hydrogen is the most common element in the known universe, actually capturing it for energy use is a process that itself usually requires some form of fuel or energy [27].
13.13 Why we still need nuclear power
"Nuclear power's track record of providing clean and reliable electricity compares favorably with other energy sources. Low natural gas prices, mostly the result of newly accessible shale gas, have brightened the prospects that efficient gas-burning power plants could cut emissions of carbon dioxide and other pollutants relatively quickly by displacing old, inefficient coal plants, but the historical volatility of natural gas prices has made utility companies wary of putting all their eggs in that basket. Besides, in the long run, burning natural gas would still release too much carbon dioxide. Wind and solar power are becoming increasingly widespread, but their intermittent and variable supply make them poorly suited for large-scale use in the absence of an affordable way to store electricity. Hydropower, meanwhile, has very limited prospects for expansion in the United States because of environmental concerns and the small number of potential sites."
"The United States must take a number of decisions to maintain and advance the option of nuclear energy. The NRC's initial reaction to the safety lessons of Fukushima must be translated into action; the public needs to be convinced that nuclear power is safe. Washington should stick to its plan of offering limited assistance for building several new nuclear reactors in this decade, sharing the lessons learned across the industry. It should step up its support for new technology, such as SMRs and advanced computer-modeling tools. And when it comes to waste management, the government needs to overhaul the current system and get serious about long-term storage. Local concerns about nuclear waste facilities are not going to magically disappear; they need to be addressed with a more adaptive, collaborative, and transparent waste program" [39].
These are not easy steps, and none of them will happen overnight. But each is needed to reduce uncertainty for the public, the energy companies, and investors. A more productive approach to developing nuclear power—and confronting the mounting risks of climate change—is long overdue. Further delay will only raise the stakes.
13.14 Is nuclear energy a renewable source of energy?
Assuming for the time being that we take the fission reaction as the foundation for present (GEN III) and future (GEN IV) nuclear power reactors, then as a source of nuclear energy it can, to some degree, be argued to be a clean source of energy [40].
Although nuclear energy is considered clean energy, its inclusion in the renewable energy list is a subject of major debate. To understand the debate, we need to understand the definitions of renewable energy and nuclear energy first. However, unless future fission-reactor technology manages to bring the price of fission-generated electricity per kilowatt-hour down to the level of gas- or fossil-fired generation, there is little chance of pushing these reactors beyond GEN III. Efforts toward reducing the price of electricity from nuclear fission power plants, especially using innovative GEN IV designs with high-temperature baselines in conjunction with thermodynamic cycles such as Brayton and Rankine, are under way at national laboratories such as Idaho National Laboratory and at universities such as MIT, UC Berkeley, and the University of New Mexico, as well as by this author [6,28]. Renewable energy is defined as an energy source/fuel type that can regenerate and replenish itself indefinitely. The five renewable sources used most often are biomass, wind, solar, hydro, and geothermal. Nuclear energy, on the other hand, results from the heat generated through the fission of atoms. All power plants convert heat into electricity using steam. At NPPs, the heat to make the steam is created when atoms split apart, a process called fission. Fission releases energy in the form of heat and neutrons; the released neutrons then go on to split other atoms, repeating the process and generating more heat. In most cases the fuel used for nuclear fission is uranium. One question we can raise here, to further understand whether or not we need present nuclear technology as a source of energy, is: What is the difference between clean energy and renewable energy? Put another way, why is nuclear power in the doghouse when it comes to revamping the nation's energy mix? The issue came to the forefront during the debate over the Waxman–Markey energy and climate bill and its provisions for a national renewable energy mandate. To put it simply, Republicans tried and failed several times to pass amendments that would christen nuclear power as a "low-emissions" power source eligible for all the same government incentives and mandates as wind power and solar power. Many environmental groups are fundamentally opposed to the notion that nuclear power is a renewable form of energy, on the grounds that it produces harmful waste byproducts and relies on extractive industries to procure fuel like uranium. Even so, the nuclear industry and pronuclear officials from countries including France have been trying to brand the technology as renewable, on the grounds that it produces little or no GHGs. Branding nuclear as renewable could also enable nuclear operators to benefit from some of the same subsidies and friendly policies offered to clean energies like wind, solar, and biomass.
So far, however, efforts to categorize nuclear as a renewable source of power have made little headway. The latest setback came around August 2009, when the head of the International Renewable Energy Agency (IRENA), an intergovernmental group that advises about 140 member countries on making the transition to clean energy, dismissed the notion of including nuclear power among its favored technologies. "IRENA will not support nuclear energy programs because it is a long, complicated process, it produces waste and is relatively risky," Hélène Pelosse, its interim director general, said at the time. Energy sources like solar power, Ms. Pelosse said in 2009, are better alternatives, and less expensive ones, "especially with countries blessed with so much sun for solar plants."
13.14.1 Argument for nuclear as renewable source of energy
Most supporters of nuclear energy point to the low-carbon-emission aspect of nuclear energy as the major characteristic qualifying it as renewable energy. According to nuclear power proponents, if the goal of building a renewable energy infrastructure is to lower carbon emissions, then there is no reason not to include nuclear energy in that list [39]. But one of the most interesting arguments for including nuclear energy in the renewable energy portfolio came from Bernard L. Cohen, former professor at the University of Pittsburgh. Professor Cohen defined the term indefinite (the time span required for an energy source to be sustainable enough to be called renewable) in numbers by using the expected lifetime of the relationship between the sun (the source of solar energy) and the earth. According to Professor Cohen, if the uranium deposit could be proved to last as long as the relationship between the Earth and Sun is supposed to last (5 billion years), then nuclear energy should be included in the renewable energy portfolio [41]. In his paper, Professor Cohen claims that using breeder reactors (nuclear reactors able to generate more fissile material than they consume) it is possible to fuel the earth with nuclear energy indefinitely. Although the amount of uranium deposits available could supply nuclear energy for only about 1000 years, Professor Cohen believes that the actual amount of uranium available is far more than what is considered extractable right now. In his arguments he includes uranium that could be extracted at a higher cost, uranium from seawater, and also uranium from the earth's crust eroded by river water. All of those possible uranium resources, if used in a breeder reactor, would be enough to fuel the earth for another 5 billion years and hence render nuclear energy renewable.
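Cohen's argument is essentially arithmetic: years of supply equal the recoverable resource divided by annual consumption, and breeders raise fuel utilization by roughly two orders of magnitude because they burn U-238 rather than only the ~0.7% of natural uranium that is U-235. A minimal sketch, using round illustrative figures rather than Cohen's published numbers:

```python
# Minimal sketch of the supply arithmetic behind Cohen's argument.
# All quantities are round, illustrative assumptions, not Cohen's figures.
def years_of_supply(resource_tonnes, use_tonnes_per_year, utilization_gain=1.0):
    """Years a uranium resource lasts; breeders raise utilization ~100x."""
    return resource_tonnes * utilization_gain / use_tonnes_per_year

conventional = 8e6    # tonnes, assumed conventional reserves
seawater = 4.5e9      # tonnes, commonly cited uranium content of the oceans
annual_use = 65e3     # tonnes/year, assumed current world consumption

print(f"Once-through, conventional: {years_of_supply(conventional, annual_use):,.0f} yr")
print(f"Breeders, conventional:     {years_of_supply(conventional, annual_use, 100):,.0f} yr")
print(f"Breeders, seawater:         {years_of_supply(seawater, annual_use, 100):,.0f} yr")
```

Cohen's further step is that river erosion continuously replenishes seawater uranium, which is what stretches the horizon toward the roughly 5-billion-year remaining lifetime of the sun.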
13.14.2 Argument against nuclear as renewable source of energy
One of the biggest arguments against including nuclear energy in the list of renewables is the fact that uranium deposits on earth are finite, unlike solar and wind. To be counted
as renewable, the energy source (fuel) should be sustainable for an indefinite period of time, according to the definition of renewable energy. Another major argument proposed by opponents of counting nuclear energy as renewable is the harmful waste from nuclear power reactors. Nuclear waste is considered a radioactive pollutant that goes against the notion of a renewable energy source; Yucca Mountain is one of the examples used quite often to make this point. Most opponents in the United States also point out that while most renewable energy sources could render the United States energy independent, uranium would keep the country energy dependent, as the United States would still have to import it [40].
13.15 Safety
In the aftermath of the major accidents at Three Mile Island in 1979 and Chernobyl in 1986, and then the recent failure of Japan's devastated Fukushima NPP in March 2011, nuclear power largely fell out of favor, and some countries applied the brakes to their nuclear programs. Concerns about climate change and air pollution, as well as growing demand for electricity, led many governments to reconsider their aversion to nuclear power, which emits little carbon dioxide and had built up an impressive safety and reliability record. Some countries reversed their phaseouts of nuclear power, some extended the lifetimes of existing reactors, and many developed plans for new ones. Despite all these concerns and issues with respect to nuclear energy, we still face the question of why we need nuclear power as a clean source of energy, particularly in arguments about renewable sources of energy [12]. Today, roughly 60 nuclear plants are under construction worldwide, which will add about 60,000 MW of generating capacity, equivalent to a sixth of the world's current nuclear power capacity; however, much of this momentum was lost after the March 2011 Fukushima episode in Japan. As discussed in Section 13.13, nuclear power's track record of providing clean and reliable electricity compares favorably with other energy sources, while natural gas, wind and solar, and hydropower each face their own limitations of price volatility, intermittency, and siting [42].
As part of the safety of any NPP, one should consider reactor stability in both design and operation. Understanding the time-dependent behaviors of nuclear reactors and the methods of their control is essential to the operation and safety of NPPs. This chapter provides researchers and engineers in nuclear engineering very general yet comprehensive information on the fundamental theory of nuclear reactor kinetics and control and the state-of-the-art practice in actual plants, as well as how to bridge the two. The dynamics and stability of engineering equipment affect its economical, safe, and reliable operation. In this chapter, we discuss the existing knowledge that informs today's practice in the design of reactor power plants and their stability, as well as the techniques available to designers; a minimal numerical sketch of point kinetics appears at the end of this section. Stable power processes are never guaranteed: an assortment of unstable behaviors wrecks power apparatus, including mechanical vibration, malfunctioning control apparatus, unstable fluid flow, unstable boiling of liquids, or combinations thereof. Failures and weaknesses of safety management systems are the underlying causes of most accidents [43]. The safety and capital cost challenges involved with traditional NPPs may be considerable, but a new class of reactors in the development stage holds promise for addressing them. These reactors, called SMRs, produce anywhere from 10 to 300 MW, rather than the 1000 MW produced by a typical reactor. An entire reactor, or at least most of it, can be built in a factory and shipped to a site for assembly, where several reactors can be installed together to compose a larger nuclear power station. SMRs have attractive safety features, too. Their design often incorporates natural cooling features that can continue to function in the absence of external power, and the underground placement of the reactors and the spent-fuel storage pools is more secure. Since SMRs are smaller than conventional nuclear plants, the construction costs for individual projects are more manageable, and thus the financing terms may be more favorable. And because they are factory-assembled, the on-site construction time is shorter. The utility company can build up its nuclear power capacity step by step, adding additional reactors as needed, which means that it can generate revenue from electricity sales sooner. This helps not only the plant owner but also customers, who are increasingly being asked to pay higher rates today to fund tomorrow's plants [44]. With the US federal budget under tremendous pressure, it is hard to imagine taxpayers funding demonstrations of a new nuclear technology. But if the United States takes a hiatus from creating new clean-energy options—be it SMRs, renewable energy, advanced batteries, or carbon capture and sequestration—Americans will look back in 10 years with regret. There will be fewer economically viable options for meeting the United States' energy and environmental needs, and the country will be less competitive in the global technology market.
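Since the section leans on reactor kinetics without writing it out, the following is a minimal sketch of one-delayed-group point kinetics, the simplest standard model of a reactor's time-dependent power. The parameter values are typical textbook assumptions for a thermal reactor, not values from this chapter:

```python
# Minimal sketch: one-delayed-group point kinetics,
#   dn/dt = ((rho - beta)/Lambda) * n + lam * C
#   dC/dt = (beta/Lambda) * n - lam * C
# Parameter values are typical textbook assumptions for a thermal reactor.
beta = 0.0065    # delayed neutron fraction
Lambda = 1e-4    # neutron generation time (s)
lam = 0.08       # delayed-precursor decay constant (1/s)
rho = 0.001      # small positive reactivity step, well below prompt critical

n, C = 1.0, beta / (Lambda * lam)   # steady state (power normalized to 1)
dt, t = 1e-4, 0.0
while t < 10.0:                      # integrate 10 s with simple explicit Euler
    dn = ((rho - beta) / Lambda) * n + lam * C
    dC = (beta / Lambda) * n - lam * C
    n, C, t = n + dn * dt, C + dC * dt, t + dt

print(f"Relative power after 10 s: {n:.2f}")  # slow, controllable rise: rho < beta
```

The delayed neutrons are what make the rise slow enough to control; with rho approaching beta (prompt critical), the same equations predict a runaway on the millisecond scale of the generation time.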
13.16 Renewable energy policies
Renewable energy has grown rapidly in recent years, especially in the electricity sector, where renewables now account for the largest power capacity additions globally. However, renewables still account for only just above 10% of final energy consumption, and the energy sector remains dominated by fossil fuels. Renewables need to increase further and faster to bring about an energy transition that achieves climate targets, ensures energy access for all, reduces air pollution, and improves energy security. Policy Exchange's Energy and Environment Unit conducts innovative and independent policy research into a wide range of environmental, infrastructure, and regulatory challenges, promoting well-designed regulation that exploits the power of markets to achieve environmental outcomes for renewable energy innovatively and cost-effectively. Every energy conversion in our universe entails built-in inefficiencies—converting heat to propulsion, carbohydrates to motion, photons to electrons, electrons to data, and so forth. All entail a certain energy cost, or waste, that can be reduced but never eliminated. But, in no small irony, history shows—as economists have often noted—that improvements in efficiency lead to increased, not decreased, energy consumption. If, at the dawn of the modern era, affordable steam engines had remained as inefficient as those first invented, they would never have proliferated, nor would the attendant economic gains and the associated rise in coal demand have happened. We see the same thing with modern combustion engines. Today's aircraft, for example, are three times as energy-efficient as the first commercial passenger jets in the 1950s. That did not reduce fuel use but propelled air traffic to soar and, with it, a fourfold rise in jet fuel burned [45]. The purpose of improving efficiency in the real world, as opposed to the policy world, is to reduce the cost of enjoying the benefits from an energy-consuming engine or machine. So long as people and businesses want more of the benefits, declining cost leads to increased demand that, on average, outstrips any "savings" from the efficiency gains; a simple numerical sketch of this rebound effect follows below. Fig. 13.34 shows how this efficiency effect has played out for computing and air travel [46]. Of course, the growth in demand for a specific product or service can subside in a (wealthy) society when limits are hit: the amount of food a person can eat, the miles per day an individual is willing to drive, the number of refrigerators or lightbulbs per household, etc. But a world of eight billion people is a long way from reaching any such limits [47]. The macro picture of the relationship between efficiency and world energy demand is clear (see Fig. 13.35). Technology has continually improved society's energy efficiency. But far from ending global energy growth, efficiency has enabled it. The improvements in cost and efficiency brought about through digital technologies will accelerate, not end, that trend [47].
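The rebound logic reduces to two lines of arithmetic: efficiency cuts the cost per unit of service, and if demand for the service is sufficiently price-elastic, total energy use rises. A minimal sketch, with an assumed, hypothetical elasticity:

```python
# Minimal sketch of the efficiency rebound (Jevons) effect.
# The price elasticity of demand below is a hypothetical assumption.
efficiency_gain = 2.0   # machine delivers 2x the service per unit of energy
elasticity = 1.5        # assumed demand elasticity w.r.t. cost of the service

# Cost per unit of service falls in proportion to the efficiency gain...
cost_ratio = 1 / efficiency_gain
# ...so demand for the service grows as cost falls (constant-elasticity demand).
service_ratio = cost_ratio ** (-elasticity)    # = 2**1.5, about 2.83x the service
energy_ratio = service_ratio / efficiency_gain # energy = service / efficiency

print(f"Service demand: x{service_ratio:.2f}")
print(f"Energy use:     x{energy_ratio:.2f}")  # >1: efficiency *raised* energy use
```

With elasticity below 1 the same arithmetic yields an energy saving, which is why the rebound shows up in fast-growing services such as computing and aviation rather than in saturated ones.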
Fig. 13.34 Increasing energy efficiency increases demand: (left) global data traffic (exabytes/month) versus computing efficiency (watt-hours/30 trillion calculations), 1995–2020; (right) global air traffic (trillion passenger miles per month) versus flying efficiency (barrels oil/1,000 passenger miles), 1995–2020. (Sources: Cisco, "Visual Networking Index: Forecast and Trends, 2017–2022 White Paper," Feb. 27, 2019; Jonathan Koomey et al., "Implications of Historical Trends in the Electrical Efficiency of Computing," IEEE Annals of the History of Computing 33, no. 3 (March 2011): 46-54; Timothy Prickett Morgan, "Alchemy Can't Save Moore's Law," The Next Platform, June 24, 2016; Joosung Lee and Jeonhgoon Mo, "Analysis of Technological Innovation and Environmental Performance Improvement in Aviation Sector," International Journal of Environmental Research and Public Health 8, no. 9 (July–September 2011): 3777–95; IATA (International Air Transport Association), "Air Passenger Market Analysis," December 2018).
Fig. 13.35 As global efficiency improves, energy demand rises: change since 2000 (year 2000 = 100) in world energy demand, GDP per capita, and energy per unit GDP, 2000–2040. (Source: ExxonMobil, "2018 Outlook for Energy: A View to 2040"; PWC Global, "The World in 2050," 2019).
13.17 Electricity markets
In discussing this subject, we need to ask what has changed. We can start with the fact that mankind has had the same energy policy for 300,000 years: meet variable energy demands by throwing a little more carbon on the fire. While the technology has changed from the cooking fire to the gas turbine, the economics have not. The costs of the cooking fire (stone or brick) and of the gas turbine are low; most of the labor and capital resources go into gathering the fuel (wood, natural gas, etc.) and bringing it to the fire. These are low-capital-cost and high-operating-cost technologies. As a consequence, it is economical to produce variable energy to match variable energy needs by operating the fire at part load. In a low-carbon world the energy sources are nuclear, wind, and solar. These technologies have high capital costs and low operating costs. If these energy production facilities are operated at half capacity, the bus-bar cost of electricity approximately doubles. Because energy is about 8% of global economic output, increases in energy costs have large impacts on the United States and on global standards of living. Equally important, the uneven distribution of renewables has serious geopolitical implications. The differences between fossil energy technologies (low capital cost, high operating cost) and low-carbon technologies (high capital cost, low operating cost) have major impacts on electricity prices, as seen in deregulated electricity markets. In these markets electricity generators bid a day ahead to provide electricity to the grid. The grid operator accepts the lowest bids to meet electricity demand, and all of the winning bids are paid the electricity price ($/MWh) of the highest-priced winning bid required to meet the electricity demand for that hour; a sketch of this clearing mechanism follows below. Nuclear, wind, and solar bid their marginal operating costs, which are near zero. Fossil plants bid marginal costs that are close to the cost of the fossil fuels they burn. In a market with nuclear and fossil plants, the fossil plants set the hourly price of electricity. If one adds large quantities of solar or wind, their low operating costs set market prices at times of high wind or solar production. Fig. 13.36 shows the impact of solar additions between 2012 and 2017 on California electricity prices on a spring day with high solar input and low electricity demand. Electricity prices collapse at times of high solar production. In this specific example, the prices have gone negative because of government subsidies that allow the solar producer to pay the grid to take electricity in order to collect production tax credits. The price increases as the sun goes down, because solar electricity production falls while peak demand occurs in the early evening. Recent electricity market analysis indicates that California has invested so heavily in solar power in the past few years that it has a surplus of electricity, so much so that other states, such as Arizona, are sometimes paid to take it. On 14 days during March 2017, Arizona utilities got a gift from California: free solar power.
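To see why large zero-marginal-cost fleets collapse the clearing price, consider a toy merit-order auction. The bids and demand below are hypothetical illustrations, not market data:

```python
# Toy day-ahead merit-order auction: all accepted bids are paid the price of
# the most expensive accepted bid. Bids and demand are hypothetical.
def clearing_price(bids, demand_mw):
    """bids: list of (capacity_MW, marginal_cost_$per_MWh) tuples."""
    supplied = 0.0
    for capacity, cost in sorted(bids, key=lambda b: b[1]):  # cheapest first
        supplied += capacity
        if supplied >= demand_mw:
            return cost  # the last (marginal) accepted bid sets the price
    raise ValueError("demand exceeds total offered capacity")

demand = 1000  # MW, a midday hour
low_solar = [(300, 0.0), (200, 2.0), (600, 35.0), (400, 55.0)]   # fossil on the margin
high_solar = [(900, 0.0), (200, 2.0), (600, 35.0), (400, 55.0)]  # solar floods the stack

print(clearing_price(low_solar, demand))   # 35.0 $/MWh: gas sets the price
print(clearing_price(high_solar, demand))  # 2.0 $/MWh: price collapses at high solar
```

With a subsidy that pays per MWh generated, a producer can even profitably bid below zero, which is how the negative prices in Fig. 13.36 arise.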
Fig. 13.36 Impact of added solar on California electricity prices for second Sunday in April 2012 and 2017.
In fact, it was even better than free solar power: California produced so much solar power on those days that it paid Arizona to take the excess electricity its residents were not using, to avoid overloading its own power lines. The same happened on eight days in January and nine in February. All told, those transactions helped save Arizona electricity customers millions of dollars this year, though grid operators declined to say exactly how much. California has also paid other states to take power. The number of days that California dumped its unused solar electricity would have been even higher if the state had not ordered some solar plants to reduce production—even as natural gas power plants, which contribute to GHG emissions, continued generating electricity. Solar and wind power production was curtailed by a relatively small amount—about 3% in the first quarter of 2017—but that is more than double the figure for the same period a year earlier, and the surge in solar power could push the number even higher in the future. Why does California, a champion of renewable energy, not use all the solar power it can generate? The answer, in part, is that the state has achieved dramatic success in increasing renewable energy production in recent years. But it also reflects sharp conflicts among major energy players in the state over the best way to weave these new electricity sources into a system still dominated by fossil-fuel-generated power. Today, Arizona's largest utility, Arizona Public Service, is one of the biggest beneficiaries of California's largesse, because it is next door and the power can easily be sent there on transmission lines. On days when Arizona is paid to take California's excess solar power, Arizona Public Service says it cuts its own solar generation rather than fossil fuel power; so California's excess solar is not reducing GHGs when that happens.
Fig. 13.37 Artistic illustration of combined solar and wind energy power (labels: California ratepayers; Arizona).
That is a good deal for Arizona, which uses what it is paid by California to reduce its own customers' electricity bills. Utility buyers typically pay an average of $14 to $45 per MWh for electricity when there is no surplus from high solar power production (see Fig. 13.37). Behind the rapid expansion of solar power lies its plummeting price, which makes it highly competitive with other electricity sources. In part that stems from subsidies, but much of the decline comes from the sharp drop in the cost of making solar panels and their increased efficiency in converting sunlight into electricity. In 2010, power plants in the state generated just over 15% of their electricity from renewable sources, but that was mostly wind and geothermal power, with only a scant 0.5% from solar. Now the overall amount has grown to 27%, with solar power accounting for 10%, or most of the increase. The solar figure does not include the hundreds of thousands of rooftop solar systems that produce an additional four percentage points, a share that is ever growing. A key question in the debate is when California will be able to rely on renewable power for most or all of its needs and safely phase out fossil fuel plants, which regulators are studying. The answer depends in large part on how fast battery storage improves, so that it is cheaper and can store power closer to customers for use when the sun is not shining. Solar proponents say the technology is advancing rapidly, making reliance on renewables possible far sooner than previously predicted, perhaps two decades or even less from now—which would mean little need for new power plants with a life span of 30 to 40 years.
Fig. 13.38 Impact of wind on daily western Iowa electricity prices, April 6–22, 2014.
The average cost of solar power for residential, commercial, and utility-scale projects declined 73% between 2010 and 2016. Solar electricity now costs 5 to 6 cents per kWh—the amount of energy needed to light a 100-W bulb for 10 hours—to produce, about the same as electricity produced by a natural gas plant and half the cost of a nuclear facility, according to the US Energy Information Administration. The same effect occurs with wind, as shown for Iowa in Fig. 13.38. Wind has a multiday cycle on the Great Plains, and thus the daily prices of electricity vary. As a matter of fact, in addition to federal programs, the state of Iowa encourages development of renewable electricity sources through a 1 cent per kWh tax credit. Also, generation equipment and facilities receive property tax breaks, and generation equipment is exempt from sales tax [48]. In 2010 and 2009, Iowa led the United States in the percentage of electrical power generated by wind, at 15.4% and 14.2%, respectively [49]. This was up from 7.7% in 2008, as there was a large increase in installed capacity that year [50]. Some of the wind-generated electricity is sold to utility companies in nearby states, such as Wisconsin and Illinois. See Fig. 13.39, an illustration of wind power capacity by state. Wind farms are most prevalent in the northern and western portions of Iowa, where wind maps show the winds to be stronger on average, making those areas better suited for the development of wind energy. Average wind speeds are not consistent from month to month: wind maps show speeds are on average strongest from November through April, peaking in March, while August has the weakest average wind speeds. On a daily cycle, there is a slight rise in average wind speeds in the afternoon, from 1 to
Fig. 13.39 Density of installed generation capacity. (Courtesy of AWEA 2013 4th Quarter Market Report; state areas from US Census Bureau).
6 p.m. Estimates by the National Renewable Energy Laboratory indicate Iowa has potentially 570,700 MW of wind power using large turbines mounted on 80-m towers [5]. Iowa ranks seventh in the country in terms of wind energy generation potential, owing to the strong average wind speeds in the midsection of the United States. The Iowa Environmental Mesonet distributes current weather and wind conditions from approximately 450 monitoring stations across Iowa, providing data for modeling and predicting wind power. Fig. 13.40 is a pie chart of Iowa electricity generation sources by plant type.
Fig. 13.40 Iowa electricity generation sources. (Courtesy of US Energy Information Administration).
All high-capital-cost, low-operating-cost technologies will collapse the price of electricity at certain times if deployed on a sufficiently large scale: the value of the product goes down with increased deployment. This price collapse occurs as solar provides ∼15% of total electricity demand, wind provides ∼30%, or nuclear provides ∼70%, when fossil fuels provide the remainder of the electricity. The low solar fraction reflects its concentrated output in the middle of the day, whereas the high nuclear fraction reflects the base-load component of electricity demand. Price collapse economically limits the deployment of all low-carbon technologies, with the deployment of any one low-carbon technology making the others less economic—overlapping price collapse. This market effect has two impacts. First, the deployment of these technologies favors deployment of low-capital-cost, high-operating-cost fossil plants to provide electricity at other times when prices are higher. Second, this change in the market creates the economic incentive to deploy ESSs to consume low-price energy (raising its price) and provide energy at times of higher demand; a sketch of this arbitrage appears after the list below. The storage times in a market with large quantities of solar generation (daily cycle) differ from those in a market dominated by wind (multiday cycle). The variation of electricity demand also differs across the country, with large differences due to different climates, so one should not expect a "single" economically optimum storage solution; the optimal storage solution will vary with location. There are three electricity markets in which energy storage has the potential to increase revenue for the owner of an existing or new plant, each with different characteristics:
1. Energy markets: Energy markets pay per unit of electricity delivered to the grid.
2. Capacity markets: There are two strategies to assure sufficient generating capacity to meet demand:
i. To have no capacity market and allow energy prices to climb to very high levels ($100s/MWh or more) at times of scarcity.
ii. To have the grid contract for assured electricity supply even during cloudy days (for solar) or periods without wind (for wind).
3. Auxiliary services market: This refers to other electricity grid services such as frequency control, black start (starting after a power outage), and reserves for rapid response to grid emergencies such as the failure of another electrical generator.
The interested reader can find more details on these markets in ref. [48].
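In the energy market, the storage incentive described above reduces to simple arbitrage: charge when the solar- or wind-driven price collapses, discharge at the evening peak, and net out the round-trip losses. A minimal sketch, with hypothetical prices, capacity, and efficiency:

```python
# Minimal sketch of energy-market storage arbitrage. Prices, capacity, and
# round-trip efficiency are hypothetical assumptions for illustration.
capacity_mwh = 100       # usable storage capacity
round_trip_eff = 0.85    # fraction of charged energy recovered on discharge
price_charge = 2.0       # $/MWh, midday price collapsed by solar (cf. Fig. 13.36)
price_discharge = 45.0   # $/MWh, early-evening peak price

cost = capacity_mwh * price_charge                         # buy low at midday
revenue = capacity_mwh * round_trip_eff * price_discharge  # sell high at the peak
profit = revenue - cost

# Profit is positive only if the spread between peak and collapsed prices
# exceeds the round-trip losses.
print(f"Daily arbitrage profit: ${profit:,.0f}")
```

The same arithmetic explains why storage horizons differ by market: a solar-dominated grid offers a daily spread, while a wind-dominated one requires storage that can span multiday price cycles.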
References [1] https://en.wikipedia.org/wiki/Hybrid_renewable_energy_system. [2] U.S. Department of Energy – Fuel Cells Technologies Program. https://www1.eere.energy.gov/ hydrogenandfuelcells/pdfs/doe_h2_fuelcell_factsheet.pdf.(Accessed 20 September 2019)
[3] M. Khan, M. Iqbal, Pre-feasibility study of stand-alone hybrid energy systems for applications in Newfoundland—Memorial University of Newfoundland St. John’s Canada, Renew. Energy 30 (2004) 835–854. [4] http://exploringgreentechnology.com/solar-energy/hybrid-energy-systems. [5] Feasibility for a standalone solar-wind-based hybrid energy system for application in Ethiopia by Getachew Bekele and Bjorn Palm Department of Energy, KTH, Stockholm, Sweden, Appl. Energy 87 (2010) 487–495. [6] B. Zohuri, Combined Cycle Driven Efficiency for Next Generation Nuclear Power Plants: An Innovative Design Approach First Edition, Springer, Cham, 2015. [7] Got Powered – Enercon E-Ship 1: Wind-powered ship. http://gotpowered.com/2011/enercon-eship-1-wind-powered-ship/(Accessed 20 September 2019). [8] http://hybrid-renewable.blogspot.com/2011/03/importance-of-hybrid-energy-systems.html. [9] http://www.cleanenergyactionproject.com/CleanEnergyActionProject/Hybrid_Renewable_ Energy_Systems_Case_Studies.html. [10] B. Zohuri, P. McDaniel, Thermodynamics in Nuclear Power Plant, Springer, Cham, 2015. [11] http://www.rwe.com/web/cms/mediablob/en/391748/data/364260/1/rwe-power-ag/innovations/Brochure-ADELE.pdf. [12] EPRI-DOE Handbook of Energy Storage for Transmission and Distribution Applications, 2003. [13] http://www.youtube.com/watch?v=grPzZ39ZyUI. [14] ADELE stands for the German acronym for adiabatic compressed air energy storage for electricity supply. [15] http://www.rwe.com/web/cms/de/365478/rwe/innovationen/stromerzeugung/energiespeicherung/druckluftspeicher/projekt-adele/. [16] C. Forsberg, Variable Electricity with Base-load Reactor Operations Fluoride-Salt-Cooled HighTemperature Reactor (FHR) with Nuclear Air-Brayton Combined Cycle (NACC) and Firebrick Resistance-Heated Energy Storage, MIT, 2014. [17] C. Forsberg, P. McDaniel, B. Zohuri, Variable electricity and steam from salt, helium, and sodium cooled base-load reactors with gas turbines and heat storage, in: Proceedings of ICAPP 2015, 03–06 May 2015 – Nice (France) Paper 15115. [18] W.M. Kays, A.L. London, Compact Heat Exchangers, McGraw Hill, New York, 1964. [19] https://www.citylab.com/environment/2015/07/the-environmentalist-case-against-100renewableenergy-plans/398906/. [20] B. Zohuri, Physics of Cryogenic an Ultra-Low Temperature Phenomena, Elsevier, 2017. [21] http://www.ecomodernism.org/. [22] http://pesd.fsi.stanford.edu/. [23] Sources: Oak Ridge National Laboratory, Transportation Energy Data Book: Edition 29, ORNL6985, July 2010, http://info.ornl.gov/sites/publications/files/Pub24318.pdf; Energy Information Administration, Petroleum Supply Annual 2009, July 2010, http://205.254.135.24/petroleum/supply/annual/volume1/archive/2009/pdf/volume1_all.pdf. [24] Sources: Oak Ridge National Laboratory, Transportation Energy Data Book: Edition 29, ORNL6985, July 2010, http://info.ornl.gov/sites/publications/files/Pub24318.pdf; Energy Information Administration, Annual Energy Outlook, Apr 2010, www.eia.doe.gov/oiaf/aeo/pdf/0383(2010).pdf. [25] DOE Hydrogen Program Record 8013, available at: http://www.hydrogen.energy.gov/pdfs/8013_ air_quality_population.pdf(Accessed 20 September 2019). [26] B. Zohuri, Hybrid Energy Systems: Driving Reliable Renewable Sources of Energy Storage, first ed., Springer, New York, NY, 2017. [27] B. Zohuri, Nuclear Energy for Hydrogen Generation Through Intermediate Heat Exchangers: A Renewable Source of Energy, Springer, 2016. [28] B. Zohuri, P. 
McDaniel, Combined Cycle Driven Efficiency for Next Generation Nuclear Power Plants: An Innovative Design Approach, 2nd ed., Springer, 2018. [29] “Annual Report 2014,” Gen IV International Forum, Nuclear Energy Agency, 2014. [30] “A Technology Roadmap for Generation IV Nuclear Energy Systems,” U.S. Department of Energy, GIF-002-00, 2002. [31] B. Kallman, The Very High Temperature Reactor, Stanford University, Physics 241 (2013). [32] G. Roberts, Nuclear Reactor Basics and Designs for the Future, Physics 241, 2013 .
507
508
Introduction to energy essentials
[33] G. Locatelli, M. Mancini, N. Todeschini, Generation IV nuclear reactors: current status and future prospects, Energy Policy 61 (2013) 1503. [34] M. Cooper, Small modular reactors and the future of nuclear power in the United States, Energy Res. Social Sci. 3 (2014) 161–177 at www.elsevier.com/locate/erss. [35] M. Cooper Public risk, private profit, ratepayer cost, utility imprudence: advanced cost recovery for reactor construction creates another nuclear fiasco, not a renaissance; 2013. [36] J. Downey, Westinghouse slows small reactor development, Charlotte Bus J. (2014). [37] Electric Energy Online. B&W announces restructuring of small modular reactor program. 2014. [38] B. Zohuri, Small Modular Reactors as Renewable Energy Source, first ed., Springer, 2019. [39] K. Johnson, Is nuclear power renewable energy, Wall Street J. (2009). [40] B. Zohuri, Why we need nuclear power plants, Adv. Mater. Sci. Eng. 2 (1) (2018) 1–5. [41] B.L. Cohen, Breeder reactors: a renewable energy source, Am. J. Phys. 51 (1983) 75. [42] E. Moniz, http://energy.mit.edu/news/why-we-still-need-nuclear-power/(Accessed 20 September 2019). [43] B. Zohuri, Neutronic Analysis for Nuclear Reactor Systems, Springer, 2016. [44] B. Zohuri, Compact Heat Exchangers: Selection, Application, Design and Evaluation, Springer, 2016. [45] International Council on Clean Transportation, Fuel Efficiency Trends for New Commercial Jet Aircraft: 1960 to 2014, 2015. [46] M.P. Mills, Energy and the Information Infrastructure Part 3: The Digital ‘Engines of Innovation’ & Jevons’ Delicious Paradox, Real Clear Energy, 2018 . [47] M.P. Mills, The “New Energy Economy”: An Exercise in Magical Thinking, Report | March 2019, MI. [48] Center for Advanced Nuclear Energy Systems, MIT-ANP-TR-170, Massachusetts Institute of Technology, Cambridge, MA, 2017. [49] Electric Power Monthly with Data for June 2017, U.S. Energy Information Administration, 2017. [50] https://www.siemens.com/global/en/home/company/topic-areas/sustainable-energy.html#Powergenerations.
CHAPTER 14
Cyber resilience and future of electric power system

14.1 Introduction

“What makes cyber threats so dangerous is that they often go unnoticed for a while, until the real damage is done, from stolen data over power outages to destruction of physical assets and great financial loss. Over the coming years we expect cyber risks to increase further and change the way we think about integrated infrastructure and supply chain management.”

The computer network systems of nuclear power plants (NPPs) can be divided into two separate types:

Internet: The Internet is a global system of interconnected computer networks that use the standard internet protocol (IP) suite to serve billions of users worldwide. The public homepage of an NPP must be connected to the Internet so that people can access it for general information about the plant. Some other information systems are also publicly open, for example to take applications from job-seekers or contractors for new work or from a supply-chain perspective. Internet-directed threats to NPPs are mostly mitigated by network architecture and data diodes; threats from the supply chain, portable media, and insiders are more of a concern.

Intranet: An intranet is a private computer network that uses IP technologies to securely share any part of an organization’s information or network operating system (OS) within that organization. There are in fact two types of intranet:
• A private network that is connected to the Internet but protected by information security systems such as a firewall or an intrusion protection system (IPS).
• A private network that is physically isolated from any outside network.

It is important to classify the functions, systems, and equipment of NPPs into safety classes. The purpose of the classification is to guarantee that each object in the NPP receives the attention required by its importance to safety.

With demand for energy as an essential element of our day-to-day operations, energy in the form of electricity has a tremendous impact on our lives. The security of this source of energy is very important to our economy and society. The owners of energy generation and distribution facilities are very conscious of their vulnerabilities due to machine-to-machine (M2M) integration and cyber-physical interrelationships.
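For illustration, the sketch below models the zone split just described: one Internet-facing zone, one protected intranet, and one physically isolated intranet. The system names, the Zone enumeration, and the placement policy are our own assumptions for the example, not taken from any plant design or standard; a real NPP enforces the isolated zone physically, not in software.

```python
from enum import Enum

class Zone(Enum):
    INTERNET = "internet"              # public homepage, job/contractor portals
    PROTECTED_INTRANET = "protected"   # behind firewall / IPS, still reachable
    ISOLATED_INTRANET = "isolated"     # physically separated from outside networks

# Hypothetical placement of plant systems into zones (illustrative only).
SYSTEMS = {
    "public_homepage":        Zone.INTERNET,
    "supplier_portal":        Zone.INTERNET,
    "plant_health_monitor":   Zone.PROTECTED_INTRANET,
    "reactor_protection_sys": Zone.ISOLATED_INTRANET,
    "plant_control_sys":      Zone.ISOLATED_INTRANET,
}

SAFETY_CRITICAL = {"reactor_protection_sys", "plant_control_sys"}

def misplaced_safety_systems(systems: dict) -> list:
    """Return any safety-critical system that is not on the isolated intranet."""
    return [name for name in SAFETY_CRITICAL
            if systems.get(name) is not Zone.ISOLATED_INTRANET]

if __name__ == "__main__":
    bad = misplaced_safety_systems(SYSTEMS)
    print("All safety systems isolated" if not bad else f"Review placement: {bad}")
```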
The interconnection of networks in NPPs can be composed of seven components:
1. The emergency response facility (ERF) system
2. The engineering safety feature (ESF) system
3. The plant control systems (PCS)
4. The physical security protection (PSP) system
5. The reactor protection system (RPS)
6. The radwaste treatment system (RTS)
7. The turbine control system (TCS).
Among the control networks, we concentrate on the ERF system and the PSP system, which are the only routes that provide information to the outside; thus cybersecurity as well as physical security are critical issues in analyzing control systems in NPPs.
NPPs use multiple instrumentation and control (I&C) systems that may be interconnected in various ways, including I&C for reactor safety (the RPS), reactor control, plant control, and plant health monitoring. The design of I&C for reactor protection depends strongly on the physical design of reactor safety systems. Existing nuclear plants generally use “active” safety systems with multiple, active pumps, valves, and electrical power supplies capable of performing heat removal under normal shutdown and accident conditions. New “passive” designs for advanced nuclear reactors can perform these shutdown heat-removal functions without external sources of power or control and are activated by disconnecting those external sources. However, they still require I&C to sense conditions.
The possible threats to the control networks in NPPs can be identified as follows:
• NPP I&C systems generally use closed data and communication networks or air gaps, so that access to the systems through the Internet is difficult.
• However, recent cases of advanced persistent threats and modern malware attacks demonstrate that NPP I&C systems may also be infected by malware enabling cyberattacks through portable devices such as notebooks, personal digital assistants (PDAs), and USB thumb drives.
• It is therefore very important to identify all the connection points between humans with external electronic devices and the I&C systems, and to analyze potential security breaches that can be exploited by cyber threats. These connection points are usually related to plant maintenance and test tasks.
This chapter focuses primarily on NPP I&C for reactor safety functions. It reviews current best practices for digital control of existing plants that use active safety systems. The key question that emerges is how cybersecurity best practices for existing nuclear reactors with active safety systems are relevant to advanced passive nuclear reactor control systems.
Cybersecurity plays a big role in the integrity and protection of such networks and assets. Artificial intelligence (AI) integrated with cyber-physical systems (CPS) can build a resilience system (RS) that will protect these networks and assets.
It will enable us to identify any malicious attack in the form of malware before the attack begins. However, malware attacks are evolving, so security systems must evolve to cope with them. Security systems do not need more tools; they need more rules, because fighting new threats with more tools just adds complexity, along with the extra degrees of freedom that new tools always bring on board. It is time to rethink our approach to cybersecurity.
14.2 Cybersecurity
The working definition of cybersecurity here comes from “Cyber Threats a Top Priority” at the 2017 Munich Security Conference, attended by more than 500 decision makers from across the globe, including over 25 heads of state and government, 80 foreign and defense ministers, and high-ranking representatives of international organizations, parliaments, armed forces, civil society, and business, who gathered in Munich to discuss major international security challenges. The energy sector is of particular concern, as an attack on an OS could cause infrastructure to shut down, triggering economic or financial disruptions or even loss of life and massive environmental damage. The potential for physical damage makes this industry a prime target for cyber-criminals, state-sanctioned cyber-attacks, terrorists, hacktivists, and others looking to make a statement. With modern computer threats in the form of malware, we need modern measures to either stop these threats to our grid network or prevent them from mounting further attacks.
As the average age of the U.S. light water reactor fleet exceeds 36 years [9], replacing legacy I&C is a primary effort of the industry [10,11]. To understand the issues faced in replacing this I&C, it is first useful to define the key I&C systems used in nuclear power plants:
• RPS: This system monitors reactor parameters that are important to safety and has the capability to shut down (SCRAM) the reactor and activate redundant equipment and power supplies to provide cooling and prevent damage to fuel if these parameters depart from allowable values (a minimal trip-logic sketch follows this list).
• Reactor control system: This part of the plant unit control system provides control of the reactor to enable it to transition from shutdown conditions to power operation, including the positioning of the control rods that control the reactivity of the reactor core.
• Plant control: This part of the plant unit control system controls “balance of plant” equipment, including power conversion, required to operate the reactor during normal operation and produce power.
• Plant health monitoring: This system monitors plant state parameters to detect off-normal performance and materials degradation to inform operations, maintenance, and replacement decisions.
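As promised above, here is a minimal sketch of RPS-style trip logic. Two-out-of-three coincidence voting across redundant instrument channels is a common protection-system pattern, but the parameter, limits, and readings below are invented purely for illustration and are not taken from any actual plant specification.

```python
def channel_trip(value: float, low: float, high: float) -> bool:
    """A single instrument channel votes to trip if its parameter leaves the allowable band."""
    return not (low <= value <= high)

def rps_scram(channel_values, low, high, votes_needed=2):
    """Trip (SCRAM) when at least `votes_needed` of the redundant channels vote to trip.
    Coincidence voting tolerates a single failed or spurious channel without a trip."""
    votes = sum(channel_trip(v, low, high) for v in channel_values)
    return votes >= votes_needed

# Three redundant pressure channels; values and limits are illustrative only.
print(rps_scram([15.2, 15.3, 17.9], low=14.0, high=16.0))  # False: 1 of 3 votes, no trip
print(rps_scram([17.5, 17.8, 15.1], low=14.0, high=16.0))  # True: 2 of 3 vote, SCRAM
```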
While the original I&C and human-machine interfaces in the nuclear power sector employed analog technologies, several issues have required the digital upgrade and/or replacement of some of these systems. Many of the original analog technologies are long obsolete and replacement equipment is no longer manufactured. Communications standards for technology have changed, rendering interfacing with some legacy systems difficult or impossible. Furthermore, the skill sets required to maintain these systems are diminishing through attrition or retirement of personnel. These problems are compounded by the need for changes to safety-related I&C systems to be approved by the Nuclear Regulatory Commission (NRC) and shown to meet regulatory requirements [1].
There are existing safety and security concerns with analog systems that may be overcome with a move toward digitalization, such as a lack of real-time updates for all relevant plant personnel during normal operation and transients. Moreover, the Browns Ferry fire on March 22, 1975, which disabled cabling entering the control room, showed that plant operators may be able to manually operate equipment needed to achieve safe shutdown; it also led to plant back-fits to add redundant, remote shutdown panels in all nuclear plants.
As for best practices and strategies, cybersecurity and controls research has produced thorough studies of mitigation strategies for NPP I&C issues such as common-cause failure, monoculture, and other common pitfalls resulting from heavy reliance on new technologies. A variety of organizations have developed guidance and best practices for NPP digital I&C, such as the Electric Power Research Institute, the Institute of Nuclear Power Operations, the Nuclear Energy Institute, and the Nuclear Information Technology Strategic Leadership group [10,11].
One practice of high importance, and pervasive in the nuclear industry, is the division of NPP systems and networks into a level structure. Fig. 14.1 is an example of a control room architecture that illustrates this level structure [2]; a small data-flow policy sketch follows the figure. Here we see the component (device) control level, the system (group) control level, and the plant control level. Some of the systems depicted conduct fault detection so action can be taken to mitigate the consequences of system faults. The protection system is the primary fault-detection system for safety I&C, containing self-diagnostic functions, alerting operators to unusual conditions or internal failures, and generating control signals to shut down (trip) the reactor and initiate heat removal if operating parameters depart from acceptable limits. The signal conditioning system can take input signals, including status and health monitors for the actuators it controls, to inform prioritization of which signals to trust. For the operations I&C, the limitation system detects deviations from desired operational values and acts to reduce reactor trips and protection-system actions. Clear cybersecurity and diversity implications exist because, for example, the RPS uses signals from sensors that are also used for plant unit control functions.
Fig. 14.1 Example modern nuclear reactor instrumentation and controls architecture.
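To make the level structure concrete, the following minimal sketch encodes one plausible traffic policy over the three levels named above: monitoring data may only flow upward toward the plant level, and control commands may only flow one level downward. The policy itself is an assumption made for illustration, not a rule taken from the cited architecture.

```python
LEVELS = {"component_control": 0, "system_control": 1, "plant_control": 2}

def flow_allowed(src: str, dst: str, kind: str) -> bool:
    """Illustrative policy: 'data' may only move upward (toward plant control);
    'command' may only move downward, and only to the adjacent level."""
    s, d = LEVELS[src], LEVELS[dst]
    if kind == "data":
        return d > s
    if kind == "command":
        return s - d == 1
    return False

print(flow_allowed("component_control", "plant_control", "data"))     # True
print(flow_allowed("plant_control", "component_control", "command"))  # False: skips a level
print(flow_allowed("system_control", "component_control", "command")) # True
```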
Some modern network attacks begin with a piece of malware gaining a foothold on a corporate network by deceiving an employee into downloading an attachment. The malware can tunnel a remote connection to a command-and-control server, and the attacker uses this remote connection to compromise select additional machines through layers of firewalls. Once deep enough into their targeted network, these attackers ultimately launch their end-game attack: stealing information, shutting down entire plants, or even damaging equipment. Other attack vectors at NPPs include the supply chain, portable media and test devices, insiders, and wireless connectivity. With the right safety instrumented system in place and integrated into the CPS, we increase the safety and profitability of NPPs, which is critical to the successful operation of a standalone NPP. Modern sophisticated attacks routinely defeat all software protections, including firewalls, encryption, intrusion detection systems (IDSs), antivirus systems, security update programs, and strong password management systems.
14.3 CPS driving energy sector
As noted above, the energy sector is a prime target for cyber-criminals, state-sanctioned attackers, terrorists, hacktivists, and others looking to make a statement, because an attack on an OS could shut down infrastructure, triggering economic or financial disruption or even loss of life and massive environmental damage.
• Smart grid: overlay the electrical grid with sensors (phasor measurement units, PMUs) and control systems (SCADA) to enable reliable and secure network monitoring, load balancing, energy efficiency via smart meters, and integration of new energy sources.
Fig. 14.2 Smart grid configurations.
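As a toy example of the monitoring that PMUs enable, the sketch below flags synchrophasor frequency samples that drift outside a tolerance band around the nominal 60 Hz. The tolerance and sample values are illustrative assumptions, not operating limits from any grid operator.

```python
NOMINAL_HZ = 60.0   # nominal US grid frequency

def pmu_alarm(frequency_samples, tolerance=0.05):
    """Return (index, value) for readings outside nominal +/- tolerance (Hz)."""
    return [(i, f) for i, f in enumerate(frequency_samples)
            if abs(f - NOMINAL_HZ) > tolerance]

print(pmu_alarm([60.01, 59.99, 59.90, 60.02]))  # [(2, 59.9)] needs operator attention
```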
CPSs are increasingly being adopted in a wide range of industries, such as the smart power grids illustrated in Fig. 14.2. Even though the rapid proliferation of CPSs brings huge benefits to our society, it also provides potential attackers with many new opportunities to affect the physical world, such as by disrupting the services controlled by CPSs. Stuxnet is an example of such an attack, designed to interrupt the Iranian nuclear program. In this chapter, we show how the vulnerabilities exploited by Stuxnet could have been addressed at the design level. We utilize a system-theoretic approach, based on prior research on system safety, that takes both physical and cyber components into account to analyze the threats exploited by Stuxnet. We conclude that such an approach is capable of identifying cyber threats toward CPSs at the design level and provides practical recommendations that CPS designers can utilize to design a more secure CPS.
The recent proliferation of embedded cyber components in modern physical systems [15] has generated a variety of new security risks that threaten not only cyberspace but our physical environment as well. Whereas earlier security threats resided primarily in cyberspace, the increasing marriage of digital technology with mechanical systems in CPSs suggests the need for more advanced, generalized CPS security measures. With the exponential increase [14] in CPSs, many necessary security measures are forsaken as new areas are explored. For example, many modern vehicles support OnStar services, and malicious signals can be sent via the OnStar telecommunications network to remotely control a vehicle while it is being driven, as demonstrated by Koscher et al. (2010) [3,14].
A typical modern NPP I&C system consists of control components, such as distributed control systems (DCSs) or programmable logic controllers (PLCs), that interact directly with physical equipment, together with industrial PCs or engineering workstations that are used to regulate the control components and related tasks (see Fig. 14.3).
Fig. 14.3 The typical systems and network in an NPP. (Communications in the I&C network can be Ethernet, an industrial fieldbus network, or a hardwired network.)
Note that a high-level overview of the main I&C functions is depicted in Fig. 14.4, as an aid to understanding nuclear I&C assets.
Fig. 14.4 High-level overview of the main I&C functions.
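The enterprise/I&C separation shown in Fig. 14.3 is commonly enforced by firewall rule tables with a default-deny posture. The sketch below is a minimal, hypothetical rule evaluator; the zone names, ports, and rules are our own assumptions for illustration and do not describe any particular plant.

```python
# First-match rule table; anything not matched falls through to default deny.
RULES = [
    # (src_zone, dst_zone, dst_port, action)
    ("enterprise", "dmz",  443, "allow"),   # workstations may reach a historian mirror in the DMZ
    ("dmz",        "ics", None, "deny"),    # nothing initiates connections into the I&C network
    ("ics",        "dmz", 8443, "allow"),   # I&C pushes process data out to the DMZ historian
]

def decide(src_zone: str, dst_zone: str, dst_port: int) -> str:
    """Evaluate first-match rules; unlisted traffic never crosses zones."""
    for rs, rd, rp, action in RULES:
        if rs == src_zone and rd == dst_zone and (rp is None or rp == dst_port):
            return action
    return "deny"

print(decide("enterprise", "ics", 502))   # deny: no direct enterprise-to-I&C path
print(decide("ics", "dmz", 8443))         # allow: outbound data publication only
```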
Fig. 14.5 The interconnection of control networks in NPPs.
As stated earlier, the interconnection of networks in NPPs comprises seven components: the ERF, ESF, PCS, PSP, RPS, and RTS systems and the TCS. These seven networks are depicted in Fig. 14.5. Among the control networks, we again concentrate on the ERF and PSP systems, the only routes that provide information to the outside, which makes cybersecurity as well as physical security critical issues in analyzing control systems in NPPs.
The North American Electric Reliability Council (NERC) has listed the top 10 vulnerabilities of control systems, with recommended mitigation strategies, as follows:
1. Inadequate policies, procedures, and culture governing control system security
2. Inadequately designed control system networks that lack sufficient defense-in-depth (DiD) mechanisms
3. Remote access to the control system without appropriate access control
4. System administration mechanisms and software used in control systems that are not adequately scrutinized or maintained
5. Use of inadequately secured wireless communication for control
6. Use of a nondedicated communications channel for command and control and/or inappropriate use of control system network bandwidth for noncontrol purposes
7. Insufficient application of tools to detect and report anomalous or inappropriate activity
8. Unauthorized or inappropriate applications or devices on control system networks
9. Control system command and control data that are not authenticated
10. Inadequately managed, designed, or implemented critical support infrastructure.
These include both managerial and technical vulnerabilities. Among them, items 1, 2, 7, and 9 may exist in NPP I&C systems, while the other items are less relevant; we suggest investigating all of the above points and proposing enhancements to improve them.
Before considering these security issues, we must provide our definition of a cyber-physical system. A CPS, also referred to as an embedded system, is a conglomerate system composed of both computational and physical elements. Most CPSs interact with their environment through sensors and actuators, forming a feedback loop that alters the physical world. Sensors input information about the environment; the CPS performs computations on this information and uses the results to control its actuators. These actuators in turn alter the environment, which changes the information (a minimal sketch of this sense-compute-actuate loop appears at the end of this passage).
When it comes to stolen data, and data at the volume of Big Data, integration of AI becomes, as we have stated, a necessity for preventing hacking and dealing with cybersecurity-related issues. And, as we have seen, when AI is involved its two subsets, machine learning (ML) and deep learning (DL), are involved as well; their combination provides an RS [1] within the cybersecurity element of the energy sector to prevent unwanted cyberattacks. An RS built with DL performs data mining and data analytics on historical data and incoming new data to discriminate a new threat by comparing these data in real time or near-real time, passing this information up to ML, per the Internet of Things (IoT) via M2M, and finally to AI to inform its human operator, a machine-to-human (M2H) process. All these system integrations can be achieved with innovative technologies utilizing techniques such as feed-forward neural networks and the backpropagation algorithm, a family of methods used to efficiently train artificial neural networks as part of the overall integrated system. See the book by Zohuri and Moghaddam [4] for more information on feed-forward neural networks and backpropagation.
As we also stated previously, one of the benefits of applying the IoT in industrial settings lies in the energy sector, where electricity providers deliver reliable, fairly priced services and products to their end users, the consumers. While the energy sector has been evolving in terms of generation and distribution, the IoT has the potential to be the most transformational, if challenges related to reliability, integration, system complexity, and security can be overcome. While reliable connectivity is an ongoing problem, many companies are struggling to integrate IoT technology with existing platforms, which tend to be overly complex, and may need to rethink their approach to data security to deploy IoT projects safely and securely.
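Here is the promised sketch of the sense-compute-actuate loop, with a design-level plausibility check of the kind that could flag Stuxnet-style falsified inputs. All limits, rates, and the simple proportional correction below are invented for illustration only.

```python
def plausible(reading: float, last: float, max_rate: float, lo: float, hi: float) -> bool:
    """Design-level defense: reject readings outside physical limits or changing
    faster than the process physically can (spoofed or replayed sensors often fail this)."""
    return lo <= reading <= hi and abs(reading - last) <= max_rate

def control_step(reading: float, last: float, setpoint: float) -> float:
    """One pass of the sense-compute-actuate loop; returns an actuator command."""
    if not plausible(reading, last, max_rate=5.0, lo=0.0, hi=120.0):
        return 0.0                                         # fail safe: drive actuator to safe state
    return min(max(setpoint - reading, -10.0), 10.0)       # simple proportional correction

last = 60.0
for measured in [61.0, 62.5, 118.0]:   # the last sample jumps implausibly fast
    print(control_step(measured, last, setpoint=65.0))     # 4.0, 2.5, then 0.0 (fail safe)
    last = measured
```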
Usage of the IoT within the energy sector enables organizations, including oil and gas companies and utilities, to capture and analyze increasing volumes and varieties of data streams flowing from numerous systems and connected devices, as well as to shift analytics from traditional data centers toward devices at the edge.
Implementation of IoT solutions along with AI integrates streaming data with analytics and visualization, so that as the owner of an electricity-producing company you can:
• Get the most value from your smart grid investments. Stop intentionally dropping valuable data because of bandwidth constraints. With an IoT solution in place, you can use more new data sources without clogging operational systems, by filtering and analyzing IoT data in motion, whether from a data center, an edge device, or the cloud (see the edge-filtering sketch after these lists).
• Optimize electric vehicle (EV) and distributed energy resource (DER) integration. Forecast the specific needs of EVs and the availability of DERs to meet demand, ensure grid stability, and control costs.
• Extend your analytics infrastructure. Take algorithms to the data, reduce data movement, and automate processes across your IoT infrastructure to reap incremental and long-term business gains.
• Develop new business opportunities. The right IoT platform enables innovation in both customer and grid applications, so you can get creative as you unlock new potential in DERs, advanced energy forecasting, and smart city applications.
With current demand, and with proper innovative IoT technology augmented by proper AI in place, electricity producers and organizations can deliver cutting-edge IoT/AI solutions to their consumers and customers in the way that works best for their business, which includes the following:
• Advanced predictive modeling. Make better predictions of energy demand with more accurate forecasting models based on more data from more sources, including smart meters and weather stations. Automatically track model accuracy, and easily update models to reflect changes.
• Smart meter analytics. Optimize smart meter deployment and manage timely customer communications to get the most value from your investments in smart meters and advanced metering infrastructure.
• Comprehensive asset data. Integrate structured and unstructured data from all sources to get an enterprise view of asset performance and drive improved grid reliability.
• Advanced early-warning analytics. Identify potential issues early, even before they occur, so you can proactively take corrective action to improve outcomes.
• Automated monitoring and predictive alerts. Reduce downtime, avoid major defects, address potential performance issues before they escalate, and use built-in workflows and case-management capabilities for faster problem resolution.
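The following minimal sketch illustrates the edge-filtering idea from the first list above: forward only exceptional readings and summarize the routine ones locally, so the backhaul link carries exceptions rather than every raw sample. The expected value, tolerance band, and data are assumptions made for the example.

```python
def edge_filter(samples, expected: float, band: float):
    """Forward only readings outside the expected band; count the suppressed rest."""
    forwarded, suppressed = [], 0
    for t, kw in samples:
        if abs(kw - expected) > band:
            forwarded.append((t, kw))
        else:
            suppressed += 1
    return forwarded, suppressed

stream = [(0, 1.9), (1, 2.1), (2, 2.0), (3, 6.7), (4, 2.2)]  # kW readings per interval
alerts, dropped = edge_filter(stream, expected=2.0, band=1.0)
print(alerts)    # [(3, 6.7)]  only the anomaly crosses the network
print(dropped)   # 4 routine samples summarized locally
```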
With all the above capabilities, utilizing the IoT and AI as a combined innovative technology of device-to-machine (D2M) and M2M harnesses sensor data to boost uptime, performance, and productivity while lowering maintenance costs and reducing the risk of revenue loss.
14.4 Securing utilities against cyberattacks
As the energy industry becomes more connected via the IoT, attackers are taking advantage of the vulnerabilities created by the gap between IT security and operations. Cylance provides technology and services to close that gap and make it harder for hackers to penetrate systems. Cybersecurity is a serious challenge for the energy sector, impacting national security, public safety, and the economy. Attacks are increasingly sophisticated as IT infrastructure becomes ever more complex. Protecting the utilities that power our lives has never been more important.
When we are dealing with multibillion-gigabyte volumes of data at the level of Big Data (BD) and cloud computation, securing the journey to the hybrid cloud is a necessity, and the new rules arm you with solutions for whatever objective your Use Case (UC) demands.
We have stated that traditional Information Technology (IT)-style security advice fails to address the threats facing the control systems of modern power plants. In fact, because operational technology (OT) networks in power plants operate large, complex, and dangerous physical processes and equipment, a preventative and disciplined approach is needed to protect the network perimeter of power generation sites against any threat or cyberattack that could harm operating hardware controlled from the control room, if the controlling software is within Internet reach from outside. Damaged turbines and transformers cannot be “restored from backups,” and intrusion detection, response, and remediation can interrupt the reliable continuity of services. Thus, we need to consider a modern and innovative technique and reference architecture for DiD network protection of OT networks at the power plant level, where electricity is produced and delivered to the grid, by eliminating external cyber risks and enabling disciplined control of the protected, reliability-critical network, including the following four UCs:
• Safe IT/OT integration.
• Turbine vendor monitoring.
• Protecting relay and safety networks.
• Control center communications.
To satisfy the above four UCs, we need a Business RS (BRS) [1] driven by AI integrated with ML and DL in place.
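A unidirectional gateway is the kind of perimeter control implied by these UCs. The sketch below imitates the one-way property in software by simply providing no API for inbound traffic; note that real data diodes enforce unidirectionality in hardware, which no software model fully captures, so this is only a conceptual illustration.

```python
import queue

class UnidirectionalLink:
    """Software stand-in for a hardware data diode: the plant side can only put,
    the business side can only get, and no method exists for traffic in reverse."""
    def __init__(self):
        self._q = queue.Queue()

    def plant_send(self, record: dict) -> None:
        self._q.put(dict(record))        # copy data outward; nothing is ever sent back

    def business_receive(self):
        try:
            return self._q.get_nowait()
        except queue.Empty:
            return None

link = UnidirectionalLink()
link.plant_send({"tag": "turbine_rpm", "value": 3600})
print(link.business_receive())   # {'tag': 'turbine_rpm', 'value': 3600}
# Deliberately no business_send()/plant_receive(): commands cannot flow inbound.
```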
As time goes by and our knowledge of computing and computers improves by the day, hackers are becoming smarter than ever. Cyber threats become more sophisticated over time, and so our defenses must continue to evolve as well. Traditional IT-style defensive architectures depend unduly on firewalls and IDSs. Firewalls allow attacks to pass from untrusted to trusted networks. IDSs detect some attacks and miss others, and in most cases remediating intrusions takes so long that IDSs are not effective at preventing attackers from achieving their cyber-espionage, cyber-sabotage, or equipment-damaging goals. This is unacceptable when we are tasked with securing power generation systems; thus we need a unidirectional reference architecture in place, driven by an RS integrated with AI as a modern tool.
The electric grid, a nationwide network with its three interconnections in the United States of America, has been identified by the Department of Homeland Security (DHS) as a strategic target for nation-state, terrorist, hacktivist, and other types of attack; power plants remain essential elements of the electric grid, and the network needs to be fully protected. Sophisticated attacks on modern industrial control system (ICS) networks risk equipment damage, injuries to operators and personnel at the power plant, and environmental damage as well. In particular, the digital devices and computers in the control room, as illustrated in Fig. 14.6, contain much circuitry built from integrated circuits (ICs); operating at the heart of these computers and devices, the ICs are vulnerable to cyberattack and need to be protected as well. Bear in mind that an unprotected modern ICS network is like a sitting duck, a “risk at rest.”
Note that an IC, or monolithic IC (also referred to as a chip or a microchip, as illustrated in Fig. 14.7), is a set of electronic circuits on one small flat piece (or “chip”) of semiconductor material, normally silicon. The integration of large numbers of tiny metal-oxide-silicon transistors into a small chip results in circuits that are orders of magnitude smaller, faster, and less expensive than those constructed of discrete electronic components.
Note that EPROM stands for erasable programmable read-only memory, a type of programmable read-only memory (PROM) chip that retains its data when its power supply is switched off.
Fig. 14.6 Energy, water, and transport industrial control system (ICS).
Fig. 14.7 Integrated circuit from an EPROM memory microchip. (Source. Wikipedia.org).
Computer memory that can retrieve stored data after its power supply has been turned off and back on is called nonvolatile. An EPROM is an array of floating-gate transistors individually programmed by an electronic device that supplies higher voltages than those normally used in digital circuits. Once programmed, an EPROM can be erased by exposing it to a strong ultraviolet light source (such as a mercury-vapor lamp). EPROMs are easily recognizable by the transparent fused-quartz window in the top of the package, through which the silicon chip is visible, and which permits exposure to ultraviolet light during erasing. Fig. 14.7 shows an IC from an EPROM memory microchip, with the memory blocks, the supporting circuitry, and the fine silver wires that connect the IC die to the legs of the packaging. The IC's mass-production capability, reliability, and building-block approach to circuit design have ensured the rapid adoption of standardized ICs in place of designs using discrete transistors. ICs are now used in virtually all electronic equipment and have revolutionized the world of electronics. Computers, mobile phones, and other digital home appliances are now inextricable parts of the structure of modern societies, made possible by the small size and low cost of ICs.
Due to population growth worldwide and nationwide, demand for electricity is growing much faster than we anticipate. We have upgraded today's networks for both productivity and convenience, but have we upgraded our security mindset to deal with all these cyberattacks via the Internet and IoT?
ICS is a general term that encompasses several types of control systems and associated instrumentation used for industrial process control. Such systems can range from a few modular panel-mounted controllers to large interconnected and interactive DCSs with many thousands of field connections. Fig. 14.8 shows a modern ICS with panel-mounted controllers with integral displays; the process value and the set value or set point (SP) are on the same scale for easy comparison, and the controller output is shown as the manipulated variable (MV) with a range of 0–100%.
Fig. 14.8 An illustration of Modern ICS. (Source. Wikipedia.org).
All systems receive data from remote sensors measuring process variables (PVs), compare these with desired SPs, and derive command functions that are used to control a process through final control elements such as control valves (CVs). The larger systems are usually implemented by supervisory control and data acquisition (SCADA) systems or DCSs, and PLCs; however, SCADA and PLC systems are scalable down to small systems with few control loops [7]. Such systems are extensively used in industries such as chemical processing, pulp and paper manufacture, power generation, oil and gas processing, and telecommunications.
Fig. 14.9 is a typical illustration of a modern DCS control room, where plant information and controls are displayed on computer graphics screens. The operators are seated and can view and control any part of the process from their screens, while retaining a plant overview.
The introduction of distributed control allowed flexible interconnection and reconfiguration of plant controls, such as cascaded loops and interlocks, and interfacing with other production computer systems. It enabled sophisticated alarm handling, introduced automatic event logging, removed the need for physical records such as chart recorders, allowed the control racks to be networked and thereby located locally to the plant to reduce cabling runs, and provided high-level overviews of plant status and production levels.
Fig. 14.9 A modern DCS control room illustration. (Source. Wikipedia.org).
For large control systems, the general commercial name DCS was coined to refer to proprietary modular systems from many manufacturers that integrated high-speed networking and a full suite of displays and control racks. While the DCS was tailored to meet the needs of large continuous industrial processes, in industries where combinatorial and sequential logic was the primary requirement, the PLC evolved out of a need to replace racks of relays and timers used for event-driven control. The old controls were difficult to reconfigure and debug, and PLC control enabled networking of signals to a central control area with electronic displays. PLCs were first developed for the automotive industry on vehicle production lines, where sequential logic was becoming very complex [4]. They were soon adopted in a large number of other event-driven applications as varied as printing presses and water treatment plants.
SCADA's history is rooted in distribution applications, such as power, natural gas, and water pipelines, where there is a need to gather remote data through potentially unreliable or intermittent low-bandwidth and high-latency links. SCADA [5] systems use open-loop control with sites that are widely separated geographically. A SCADA system uses remote terminal units (RTUs) to send supervisory data back to a control center. Most RTU systems have always had some capacity to handle local control while the master station is unavailable, and over the years RTU systems have grown more and more capable of handling local control.
Now that we have covered these basic elements, the question we need to ask ourselves is: how long can an unsecured ICS network withstand a modern cyberattack if it is connected to the Internet? Aside from generating some alarmed questions about whether such a network exists, this question gave us pause about how we approach network security. Year over year, we stand witness to an ever-increasing number of cyberattacks, breaches, and vulnerabilities, and we continually face challenging conversations with IT and operations teams on how to implement industrial cybersecurity best practices. But rarely do we talk about the fundamental reasons for implementing these practices. This gave rise to the idea of framing the conversation from a perspective we do not usually see: the inherent risk of connecting to the modern Internet.
Imagine you are traveling down the interstate, moving toward your destination at 70 mph. You might find yourself surrounded by crush zones engineered to absorb energy, airbags designed to reduce impact, and traction control programmed to prevent crashes. All of these systems exist to minimize the inherent risk of traveling at highway speeds. Our technology has advanced to the point where it is routine to put ourselves in inherently risky situations where only inaction is required for a loss.
The modern Internet has become the information interstate of our society and, by extension, our productivity. For many of our routine tasks, no longer are the industrial and commercial workflows separated by the platforms and ways in which we accomplish them.
And as the speed of information has increased, driven by increasing volumes of data at the BD level, so has the inherent risk associated with utilizing these superhighways. What used to be taken as common practice among industrial sites has now become the equivalent of driving an original 1908 Ford Model T down the interstate. The associated risk is not a question of if there will be an accident, but of when it may take place.
The Internet as we know it is an inherently risky place. Systems are automatically scanned for unpatched vulnerabilities, botnets crawl the web poking and prying at every port, and state actors create exploits that opportunistically attack any target. Without modern network security, enhanced and hardened to counter modern cyberattacks, we put ourselves, our people, and our processes at risk. Many automation engineers will be familiar with concepts such as firewalls, network demilitarized zones (DMZs), and security through the Purdue Model [6]. It has become our job to recognize that these safety features are no longer high-end security practices; they have become what should be expected as the standard safety features of the modern ICS network.
To finish off this section on securing utilities against cyberattacks, we look at a typical ICS configuration for large, centralized processes with a typical DCS, as illustrated in Fig. 14.10. In addition, a typical ICS configuration for a geographically distributed process at remote sites, typical of SCADA, is depicted in Fig. 14.11. In support of the ICS configurations in Figs. 14.10 and 14.11, the following components round out a complete ICS loop, as illustrated in Fig. 14.12:
• HMI – human-machine interface
• ENG – engineering workstation
• HIST – process data archive
Fig. 14.10 Typical ICS configuration for a large, centralized process [5].
Fig. 14.11 Typical ICS configuration for a geographically distributed process at remote sites [5].
Together, these components configure an entire loop, as illustrated in Fig. 14.12. Notably, cybersecurity against modern cyberattacks on an ICS has its own unique challenges, summarized as:
• The operating environment of the ICS
• The physical process and real-time control
• User and organizational motivations and considerations
• Unique network configurations and protocols.
To expand on the first two points: in the ICS operating environment, the ICS and its control software manage a physical process and thus operate in real time within their environments. Data can become useless or stale within a fraction of a second to a few seconds, with a consequent loss of process efficiency that can in turn damage or shut down the ICS.
Fig. 14.12 An industrial control system (ICS) Illustration [5].
As a result, ICS reliability is crucial; the system must continue to operate even during a cyberattack, particularly a modern one. ICSs must function in environments that are electrically noisy, dirty, at temperature extremes, and so on, and process inefficiencies and shutdowns are often very expensive, costing tens of thousands of dollars an hour or much more for large processes. Furthermore, in critical infrastructure, the loss of the ICS and its digital ICs, as explained before, can have significant detrimental impacts on the health and functionality of society as a whole. Moreover, bear in mind that any time we are dealing with mass volumes of data that must be processed in real time, augmentation with the artificial-intelligence capabilities of ML and DL becomes a mandatory element of a modern cybersecurity countermeasure against modern cyberattacks, given the bidirectional data flowing from the Internet and from M2M driven by the IoT.
Historically, ICSs were physically isolated, or air-gapped, from the outside world. Now systems are linked into the corporate wide area network (WAN) and the Internet to allow process monitoring and maintenance by off-site groups. Because ICSs were traditionally maintained and controlled in the old-fashioned way, control engineers, technicians, and operators typically are not skilled in cybersecurity. Conversely, IT professionals are not skilled in the process control of OT. The two groups sometimes have different, and conflicting, goals and management in the corporate structure, and it is difficult to make a business case for cybersecurity, although this is improving with the augmentation of AI, ML, and DL components. The perception is that no attacks mean no threat problems in the pipeline. However, the focus is on keeping the process running, not cybersecurity, which can also add a point of failure, disrupt the process, and make maintenance more difficult [6].
14.5 Modern threats driving modern cyberattacks
As we stated before, the electric grid has been identified as a strategic target for nation-state, terrorist, hacktivist, and other types of attack, and power plants remain essential elements of the electric grid. Sophisticated attacks on ICSs risk equipment damage, injuries to personnel, and environmental damage. Modern network attacks begin with a piece of malware gaining a foothold on a corporate network by deceiving an employee into downloading an attachment. The malware typically tunnels a remote connection to a command-and-control server, and the attacker uses this remote connection to compromise select additional machines through layers of firewalls. Once deep enough into their targeted network, these attackers ultimately launch their end-game attack: stealing information, shutting down entire plants, or even damaging equipment. Modern sophisticated attacks routinely defeat all software protections, including firewalls, encryption, IDSs, antivirus systems, security update programs, and strong password management systems.
14.6 ICS security guideline
The National Institute of Standards and Technology (NIST), under the United States Department of Commerce, has established a guideline for modern cybersecurity as a countermeasure against modern threats and cyberattacks. This guideline also takes into consideration SCADA systems, DCSs, and other control system configurations such as PLCs, and recommends the best possible reference architecture; with the technical progress in AI in recent years, we can augment it as a new component of the overall ICS. The guideline was established by NIST as Special Publication 800-82, Natl. Inst. Stand. Technol. Spec. Publ. 800-82, 155 pages (June 2011), superseded by the version at http://dx.doi.org/10.6028/NIST.SP.800-82r1. The task of publishing such reports falls to the Information Technology Laboratory (ITL) at NIST, which promotes the U.S. economy and public welfare by providing technical leadership for the nation's measurement and standards infrastructure. ITL develops tests, test methods, reference data, proof-of-concept implementations, and technical analyses to advance the development and productive use of IT. ITL's responsibilities include the development of technical, physical, administrative, and management standards and guidelines for the cost-effective security and privacy of sensitive unclassified information in Federal computer systems. The Special Publication 800 series reports on ITL's research, guidance, and outreach efforts in computer security and its collaborative activities with industry, government, and academic organizations. In this section we extract some portions of this report verbatim for the purpose of educating ourselves about ICS and the most common components that form the ICS infrastructure as a whole [5].
This document provides guidance for establishing secure ICSs. These ICSs, which include SCADA systems, DCSs, and other control system configurations such as skid-mounted PLCs, are often found in the industrial control sectors. ICSs are typically used in industries such as electric, water and wastewater, oil and natural gas, transportation, chemical, pharmaceutical, pulp and paper, food and beverage, and discrete manufacturing (e.g., automotive, aerospace, and durable goods). SCADA systems are generally used to control dispersed assets using centralized data acquisition and supervisory control. DCSs are generally used to control production systems within a local area, such as a factory, using supervisory and regulatory control. PLCs are generally used for discrete control for specific applications and generally provide regulatory control. These control systems are vital to the operation of the U.S. critical infrastructures, which are often highly interconnected and mutually dependent systems. It is important to note that approximately 90% of the nation's critical infrastructures are privately owned and operated. Federal agencies also operate many of the ICSs mentioned above; other examples include air traffic control and materials handling (e.g., Postal Service mail handling). This chapter provides an overview of these ICSs and typical system topologies, identifies typical threats and vulnerabilities to these systems, and provides recommended security countermeasures to mitigate the associated risks.
Initially, ICSs had little resemblance to traditional IT systems, in that ICSs were isolated systems running proprietary control protocols using specialized hardware and software. Widely available, low-cost IP devices are now replacing proprietary solutions, which increases the possibility of cybersecurity vulnerabilities and incidents. As ICSs adopt IT solutions to promote corporate business-system connectivity, and as remote access capabilities are designed and implemented using industry-standard computers, OSs, and network protocols, they are starting to resemble IT systems. This integration supports new IT capabilities, but it provides significantly less isolation for ICSs from the outside world than predecessor systems, creating a greater need to secure these systems. While security solutions have been designed to deal with these security issues in typical IT systems, special precautions must be taken when introducing these same solutions to ICS environments. In some cases, new security solutions are needed that are tailored to the ICS environment.
Originally, ICS implementations were susceptible primarily to local threats because many of their components were in physically secured areas and were not connected to IT networks or systems. However, the trend toward integrating ICSs with IT networks provides significantly less isolation from the outside world than predecessor systems, creating a greater need to secure these systems from remote, external threats. Also, the increasing use of wireless networking places ICS implementations at greater risk from adversaries who are in relatively close physical proximity but do not have direct physical access to the equipment. Threats to control systems can come from numerous sources, including hostile governments, terrorist groups, disgruntled employees, malicious intruders, complexities, accidents, and natural disasters, as well as malicious or accidental actions by insiders. ICS security objectives typically follow the priority of availability, integrity, and confidentiality, in that order.
Possible incidents an ICS may face include the following:
• Blocked or delayed flow of information through ICS networks, which could disrupt ICS operation.
• Unauthorized changes to instructions, commands, or alarm thresholds, which could damage, disable, or shut down equipment, create environmental impacts, and/or endanger human life.
• Inaccurate information sent to system operators, either to disguise unauthorized changes or to cause the operators to initiate inappropriate actions, which could have various negative effects.
• ICS software or configuration settings modified, or ICS software infected with malware, which could have various negative effects.
• Interference with the operation of safety systems, which could endanger human life.
Major security objectives for an ICS implementation should include the following:
• Restricting logical access to the ICS network and network activity. This includes using a DMZ network architecture with firewalls to prevent network traffic from passing directly between the corporate and ICS networks, and having separate authentication mechanisms and credentials for users of the corporate and ICS networks. The ICS should also use a network topology that has multiple layers, with the most critical communications occurring in the most secure and reliable layer.
• Restricting physical access to the ICS network and devices. Unauthorized physical access to components could cause serious disruption of the ICS's functionality. A combination of physical access controls should be used, such as locks, card readers, and/or guards.
• Protecting individual ICS components from exploitation. This includes deploying security patches in as expeditious a manner as possible, after testing them under field conditions; disabling all unused ports and services; restricting ICS user privileges to only those required for each person's role; tracking and monitoring audit trails; and using security controls such as antivirus software and file-integrity-checking software where technically feasible to prevent, deter, detect, and mitigate malware.
• Maintaining functionality during adverse conditions. This involves designing the ICS so that each critical component has a redundant counterpart. Additionally, if a component fails, it should fail in a manner that does not generate unnecessary traffic on the ICS or other networks and does not cause another problem elsewhere, such as a cascading event.
• Restoring the system after an incident. Incidents are inevitable, and an incident response plan is essential. A major characteristic of a good security program is how quickly the system can be recovered after an incident has occurred.
To properly address security in an ICS, it is essential for a cross-functional cybersecurity team to share their varied domain knowledge and experience to evaluate and mitigate risk to the ICS. The cybersecurity team should consist of, at a minimum, a member of the organization's IT staff, a control engineer, a control system operator, a network and system security expert, a member of the management staff, and a member of the physical security department. For continuity and completeness, the cybersecurity team should consult with the control system vendor and/or system integrator as well. The cybersecurity team should report directly to site management (e.g., the facility superintendent) or the company's Chief Information Officer (CIO)/Chief Security Officer (CSO), who in turn accepts complete responsibility and accountability for the cybersecurity of the ICS. An effective cybersecurity program for an ICS should apply a strategy known as “DiD,” layering security mechanisms such that the impact of a failure in any one mechanism is minimized. Given what we have said about defending an ICS within the energy sector against cyberattack, for a typical ICS this means a DiD strategy that includes the following:
• Developing security policies, procedures, and training and educational materials that apply specifically to the ICS.
• Considering ICS security policies and procedures based on the Homeland Security Advisory System threat level, deploying increasingly heightened security postures as the threat level increases.
• Addressing security throughout the lifecycle of the ICS, from architecture design to procurement to installation to maintenance to decommissioning.
• Implementing a network topology for the ICS that has multiple layers, with the most critical communications occurring in the most secure and reliable layer.
• Providing logical separation between the corporate and ICS networks (e.g., stateful inspection firewall(s) between the networks).
• Employing a DMZ network architecture (i.e., preventing direct traffic between the corporate and ICS networks).
• Ensuring that critical components are redundant and are on redundant networks.
• Designing critical systems for graceful degradation (fault tolerance) to prevent catastrophic cascading events.
• Disabling unused ports and services on ICS devices, after testing to assure this will not impact ICS operation.
• Restricting physical access to the ICS network and devices.
• Restricting ICS user privileges to only those required to perform each person's job (i.e., establishing role-based access control and configuring each role based on the principle of least privilege); a minimal role-check sketch follows this list.
• Considering the use of separate authentication mechanisms and credentials for users of the ICS network and the corporate network (i.e., ICS network accounts do not use corporate network user accounts).
• Using modern technology, such as smart cards, for Personal Identity Verification (PIV).
• Implementing security controls such as intrusion detection software, antivirus software, and file-integrity-checking software, where technically feasible, to prevent, deter, detect, and mitigate the introduction, exposure, and propagation of malicious software to, within, and from the ICS.
• Applying security techniques such as encryption and/or cryptographic hashes to ICS data storage and communications where determined appropriate.
• Expeditiously deploying security patches after testing all patches under field conditions on a test system if possible, before installation on the ICS.
• Tracking and monitoring audit trails on critical areas of the ICS.
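As noted in the least-privilege bullet above, role-based access control limits what a compromised account can do. The roles and actions below are hypothetical, chosen only to illustrate deny-by-default checking; they are not drawn from the NIST guideline.

```python
# Hypothetical role definitions following least privilege: each role carries
# only the actions required for that job, nothing more.
ROLES = {
    "operator":         {"view_hmi", "acknowledge_alarm"},
    "control_engineer": {"view_hmi", "edit_setpoint", "download_plc_logic"},
    "it_admin":         {"patch_workstation", "manage_accounts"},
}

def authorized(role: str, action: str) -> bool:
    """Deny by default: an action is allowed only if the role explicitly grants it."""
    return action in ROLES.get(role, set())

print(authorized("operator", "edit_setpoint"))          # False
print(authorized("control_engineer", "edit_setpoint"))  # True
print(authorized("it_admin", "download_plc_logic"))     # False: IT role has no OT rights
```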
14.6.1 Overview of ICS
ICS is a general term that encompasses several types of control systems, including SCADA systems, DCSs, and other control system configurations such as skid-mounted PLCs, often found in the industrial sectors and critical infrastructures. ICSs are typically used in industries such as electrical, water and wastewater, oil and natural gas, chemical, transportation, pharmaceutical, pulp and paper, food and beverage, and discrete manufacturing (e.g., automotive, aerospace, and durable goods). These control systems are critical to the operation of the U.S. critical infrastructures, which are often highly interconnected and mutually dependent systems. It is important to note that approximately 90% of the nation's critical infrastructures are privately owned and operated. Federal agencies also operate many of the industrial processes mentioned above; other examples include air traffic control and materials handling (e.g., Postal Service mail handling). This section provides an overview of SCADA, DCS, and PLC systems, including typical architectures and components. Several diagrams presented by NIST are used here to depict the network connections and components typically found in each system, to facilitate understanding of these systems. Keep in mind that actual implementations of ICS may be hybrids that blur the line between DCS and SCADA systems by incorporating attributes of both. Please note that the diagrams in this section do not represent a secured ICS; architecture security and security controls are discussed in Sections 5 and 6 of the NIST guide, respectively [6].
14.6.2 Overview of SCADA, DCS, and PLCs
SCADA systems are highly distributed systems used to control geographically dispersed assets, often scattered over thousands of square kilometers, where centralized data acquisition and control are critical to system operation. They are used in distribution systems such as water distribution and wastewater collection systems, oil and natural gas pipelines, electrical power grids, and railway transportation systems. A SCADA control center performs centralized monitoring and control for field sites over long-distance communications networks, including monitoring alarms and processing status data. Based on information received from remote stations, automated or operator-driven supervisory commands can be pushed to remote station control devices, which are often referred to as field devices. Field devices control local operations such as opening and closing valves and breakers, collecting data from sensor systems, and monitoring the local environment for alarm conditions.
DCSs are used to control industrial processes such as electric power generation, oil refineries, water and wastewater treatment, and chemical, food, and automotive production. DCSs are integrated as a control architecture containing a supervisory level of control overseeing multiple, integrated subsystems that are responsible for controlling the details of a localized process. Product and process control are usually achieved by deploying feedback or feedforward control loops whereby key product and/or process conditions are automatically maintained around a desired set point (SP). To accomplish the desired product and/or process tolerance around a specified SP, specific PLCs are employed in the field, and the proportional, integral, and/or derivative settings on the PLC are tuned to provide the desired tolerance as well as the rate of self-correction during process upsets. DCSs are used extensively in process-based industries.
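As a brief illustration of the tuning just described, the following is a minimal sketch of a discrete PID loop in Python. The gains, time step, and toy plant response are invented for illustration; real loops run on the PLC itself and are tuned against the actual process.

```python
# Minimal sketch of one discrete PID update plus a toy simulation.
# Gains (kp, ki, kd), dt, and the plant model are illustrative only.

def pid_step(sp, pv, state, kp=2.0, ki=0.5, kd=0.1, dt=0.1):
    """Compute the manipulated variable from set point (sp) and process variable (pv)."""
    error = sp - pv
    integral = state["integral"] + error * dt
    derivative = (error - state["last_error"]) / dt
    mv = kp * error + ki * integral + kd * derivative
    return mv, {"integral": integral, "last_error": error}

pv, state = 0.0, {"integral": 0.0, "last_error": 0.0}
for _ in range(100):
    mv, state = pid_step(sp=10.0, pv=pv, state=state)
    pv += 0.01 * (mv - pv)   # crude first-order plant response
print(round(pv, 2))          # the process variable approaches the set point of 10.0
```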
PLCs are computer-based solid-state devices that control industrial equipment and processes. While PLCs are control system components used throughout SCADA and DCS systems, they are often the primary components in smaller control system configurations used to provide operational control of discrete processes such as automobile assembly lines and power plant soot blower controls. PLCs are used extensively in almost all industrial processes.
The process-based manufacturing industries typically utilize two main processes [8]:
• Continuous manufacturing processes. These processes run continuously, often with transitions to make different grades of a product. Typical continuous manufacturing processes include fuel or steam flow in a power plant, petroleum in a refinery, and distillation in a chemical plant.
• Batch manufacturing processes. These processes have distinct processing steps, conducted on a quantity of material. There is a distinct start and end step to a batch process, with the possibility of brief steady-state operations during intermediate steps. Typical batch manufacturing processes include food manufacturing.
The discrete-based manufacturing industries typically conduct a series of steps on a single device to create the end product. Electronic and mechanical parts assembly and parts machining are typical examples of this type of industry. Both process-based and discrete-based industries utilize the same types of control systems, sensors, and networks. Some facilities are a hybrid of discrete and process-based manufacturing.
While control systems used in distribution and manufacturing industries are very similar in operation, they differ in some aspects. One of the primary differences is that DCS- or PLC-controlled subsystems are usually located within a more confined, factory- or plant-centric area, compared to geographically dispersed SCADA field sites. DCS and PLC communications are usually performed using local area network (LAN) technologies that are typically more reliable and higher speed than the long-distance communication systems used by SCADA systems. In fact, SCADA systems are specifically designed to handle long-distance communication challenges, such as delays and data loss, posed by the various communication media used. DCS and PLC systems usually employ greater degrees of closed-loop control than SCADA systems because the control of industrial processes is typically more complicated than the supervisory control of distribution processes. These differences can be considered subtle for the scope of this document, which focuses on the integration of IT security into these systems.
14.6.3 ICS operation
The basic operation of an ICS is shown in Fig. 14.13A [9].
Fig. 14.13 (A) A typical industrial control system operation [9]. (B) SCADA system general layout [5].
Key components include the following:
• Control loop. A control loop consists of sensors for measurement; controller hardware such as PLCs; actuators such as control valves (CVs), breakers, switches, and motors; and the communication of variables. Controlled variables are transmitted to the controller from the sensors. The controller interprets the signals and generates corresponding manipulated variables (MVs), based on SPs, which it transmits to the actuators. Process changes from disturbances result in new sensor signals, identifying the state of the process, to again be transmitted to the controller.
• HMI. Operators and engineers use HMIs to monitor and configure SPs, control algorithms, and adjust and establish parameters in the controller. The HMI also displays process status information and historical information.
• Remote diagnostics and maintenance utilities. Diagnostics and maintenance utilities are used to prevent, identify, and recover from abnormal operation or failures.
A typical ICS contains a proliferation of control loops, HMIs, and remote diagnostics and maintenance tools built using an array of network protocols on layered network architectures. Sometimes these control loops are nested and/or cascading, whereby the SP for one loop is based on the process variable (PV) determined by another loop. Supervisory-level loops and lower-level loops operate continuously over the duration of a process, with cycle times ranging on the order of milliseconds to minutes.
14.6.4 Key ICS components
To support subsequent discussions, this section defines key ICS components used in control and networking. Some of these components can be described generically for use in SCADA systems, DCSs, and PLCs, while others are unique to one type of system; a full glossary of ICS terms is provided in the NIST guide [6]. Additionally, Figs. 14.13B and 14.14 show SCADA implementation examples, Fig. 14.18 shows a DCS implementation example, and Fig. 14.19 shows a PLC system implementation example that incorporates these components.
Fig. 14.14 Basic SCADA communication topologies [5].
Fig. 14.15 Large SCADA communication topology.
14.6.5 Control components
The following is a list of the major control components of an ICS:
• Control server. The control server hosts the DCS or PLC supervisory control software that communicates with lower-level control devices. The control server accesses subordinate control modules over an ICS network.
• SCADA server or master terminal unit (MTU). The SCADA server is the device that acts as the master in a SCADA system. RTUs and PLC devices (as described below) located at remote field sites usually act as slaves.
• RTU. The RTU, also called a remote telemetry unit, is a special-purpose data acquisition and control unit designed to support SCADA remote stations. RTUs are field devices often equipped with wireless radio interfaces to support remote situations where wire-based communications are unavailable. Sometimes PLCs are implemented as field devices to serve as RTUs; in this case, the PLC is often referred to as an RTU.
• PLC. The PLC is a small industrial computer originally designed to perform the logic functions executed by electrical hardware (relays, switches, and mechanical timers/counters). PLCs have evolved into controllers with the capability of controlling
complex processes, and they are used substantially in SCADA systems and DCSs. Other controllers used at the field level are process controllers and RTUs; they provide the same control as PLCs but are designed for specific control applications. In SCADA environments, PLCs are often used as field devices because they are more economical, versatile, flexible, and configurable than special-purpose RTUs.
• Intelligent electronic devices (IEDs). An IED is a "smart" sensor/actuator containing the intelligence required to acquire data, communicate to other devices, and perform local processing and control. An IED could combine an analog input sensor, analog output, low-level control capabilities, a communication system, and program memory in one device. The use of IEDs in SCADA and DCS systems allows for automatic control at the local level.
• HMI. The HMI is software and hardware that allows human operators to monitor the state of a process under control, modify control settings to change the control objective, and manually override automatic control operations in the event of an emergency. The HMI also allows a control engineer or operator to configure SPs or control algorithms and parameters in the controller. The HMI also displays process status information, historical information, reports, and other information to operators, administrators, managers, business partners, and other authorized users. The location, platform, and interface may vary a great deal. For example, an HMI could be a dedicated platform in the control center, a laptop on a wireless LAN, or a browser on any system connected to the Internet.
• Data historian. The data historian is a centralized database for logging all process information within an ICS. Information stored in this database can be accessed to support various analyses, from statistical process control to enterprise-level planning (a minimal logging sketch follows this list).
• Input/output (IO) server. The IO server is a control component responsible for collecting, buffering, and providing access to process information from control subcomponents such as PLCs, RTUs, and IEDs. An IO server can reside on the control server or on a separate computer platform. IO servers are also used for interfacing third-party control components, such as an HMI and a control server.
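As a brief illustration of the data historian item above, the following minimal Python sketch logs timestamped process values to a database and reads them back for analysis; the table layout and tag names are invented for illustration only.

```python
# Minimal sketch of a data historian: a timestamped process-value log.
import sqlite3, time

db = sqlite3.connect(":memory:")  # a real historian persists to disk
db.execute("CREATE TABLE history (ts REAL, tag TEXT, value REAL)")

def log_value(tag: str, value: float) -> None:
    db.execute("INSERT INTO history VALUES (?, ?, ?)", (time.time(), tag, value))

log_value("pump1.flow", 42.7)
log_value("pump1.pressure", 3.1)

# Enterprise-level analysis reads the same store, e.g., an average per tag.
for tag, avg in db.execute("SELECT tag, AVG(value) FROM history GROUP BY tag"):
    print(tag, round(avg, 2))
```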
14.6.6 Network components
There are different network characteristics for each layer within a control system hierarchy. Network topologies across different ICS implementations vary, with modern systems using Internet-based IT and enterprise integration strategies. Control networks have merged with corporate networks to allow control engineers to monitor and control systems from outside of the control system network. The connection may also allow enterprise-level decision-makers to obtain access to process data. The following is a list of the major components of an ICS network, regardless of the network topologies in use:
• Fieldbus network. The fieldbus network links sensors and other devices to a PLC or other controller. Use of fieldbus technologies eliminates the need for point-to-point wiring between the controller and each device. The devices communicate with the fieldbus controller using a variety of protocols. The messages sent between the sensors and the controller uniquely identify each of the sensors.
• Control network. The control network connects the supervisory control level to lower-level control modules.
• Communications routers. A router is a communications device that transfers messages between two networks. Common uses for routers include connecting a LAN to a WAN and connecting MTUs and RTUs to a long-distance network medium for SCADA communication.
• Firewall. A firewall protects devices on a network by monitoring and controlling communication packets using predefined filtering policies. Firewalls are also useful in managing ICS network segregation strategies.
• Modems. A modem is a device used to convert between serial digital data and a signal suitable for transmission over a telephone line, to allow devices to communicate. Modems are often used in SCADA systems to enable long-distance serial communications between MTUs and remote field devices. They are also used in SCADA systems, DCSs, and PLCs for gaining remote access for operational and maintenance functions, such as entering commands or modifying parameters, and for diagnostic purposes.
• Remote access points. Remote access points are distinct devices, areas, and locations of a control network for remotely configuring control systems and accessing process data. Examples include using a PDA to access data over a LAN through a wireless access point and using a laptop and modem connection to remotely access an ICS system.
14.7 SCADA systems
SCADA systems are used to control dispersed assets where centralized data acquisition is as important as control [10,11]. These systems are used in distribution systems such as water distribution and wastewater collection systems, oil and natural gas pipelines, electrical utility transmission and distribution systems, and rail and other public transportation systems. SCADA systems integrate data acquisition systems with data transmission systems and HMI software to provide a centralized monitoring and control system for numerous process inputs and outputs. SCADA systems are designed to collect field information, transfer it to a central computer facility, and display the information to the operator graphically or textually, thereby allowing the operator to monitor or control an entire system from a central location in real time. Based on the sophistication and setup of the individual system, control of any individual system, operation, or task can be automatic, or it can be performed by operator commands. SCADA systems consist of both hardware and software. Typical hardware includes an MTU placed at a control center, communications equipment (e.g., radio, telephone line, cable, or satellite), and
one or more geographically distributed field sites consisting of either an RTU or a PLC, which controls actuators and/or monitors sensors. The MTU stores and processes the information from RTU inputs and outputs, while the RTU or PLC controls the local process. The communications hardware allows the transfer of information and data back and forth between the MTU and the RTUs or PLCs. The software is programmed to tell the system what and when to monitor, what parameter ranges are acceptable, and what response to initiate when parameters change outside acceptable values. An IED, such as a protective relay, may communicate directly to the SCADA server, or a local RTU may poll the IEDs to collect the data and pass it to the SCADA server. IEDs provide a direct interface to control and monitor equipment and sensors. IEDs may be directly polled and controlled by the SCADA server and in most cases have local programming that allows the IED to act without direct instructions from the SCADA control center. SCADA systems are usually designed to be fault-tolerant systems with significant redundancy built into the system architecture.
Fig. 14.13B shows the components and general configuration of a SCADA system. The control center houses a SCADA server (MTU) and the communications routers. Other control center components include the HMI, engineering workstations, and the data historian, which are all connected by a LAN. The control center collects and logs information gathered by the field sites, displays information to the HMI, and may generate actions based upon detected events. The control center is also responsible for centralized alarming, trend analyses, and reporting. The field site performs local control of actuators and monitors sensors. Field sites are often equipped with a remote access capability to allow field operators to perform remote diagnostics and repairs, usually over a separate dial-up modem or WAN connection. Standard and proprietary communication protocols running over serial communications are used to transport information between the control center and field sites using telemetry techniques such as telephone line, cable, fiber, and radio frequency such as broadcast, microwave, and satellite.
MTU-RTU communication architectures vary among implementations. The various architectures used, including point-to-point, series, series-star, and multidrop [12], are shown in Fig. 14.14. Point-to-point is functionally the simplest type; however, it is expensive because of the individual channels needed for each connection. In a series configuration, the number of channels used is reduced; however, channel sharing has an impact on the efficiency and complexity of SCADA operations. Similarly, the series-star and multidrop configurations' use of one channel per device results in decreased efficiency and increased system complexity. The four basic architectures shown in Fig. 14.14 can be further augmented using dedicated communication devices to manage communication exchange as well as message switching and buffering. Large SCADA systems, containing hundreds of RTUs, often employ sub-MTUs to alleviate the burden on the primary MTU. This type of topology is shown in Fig. 14.15.
Fig. 14.16 SCADA system implementation example (distribution monitoring and control).
Fig. 14.16 shows an example of a SCADA system implementation. This particular SCADA system consists of a primary control center and three field sites. A backup control center provides redundancy in the event of a primary control center malfunction. Point-to-point connections are used for all control center to field site communications, with two connections using radio telemetry. The third field site is local to the control center and uses the WAN for communications. A regional control center resides above the primary control center for a higher level of supervisory control. The corporate network has access to all control centers through the WAN, and field sites can be accessed remotely for troubleshooting and maintenance operations. The primary control center polls field devices for data at defined intervals (e.g., 5 s, 60 s) and can send new SPs to a field device as required. In addition to polling and issuing high-level commands, the SCADA server also watches for priority interrupts coming from field site alarm systems.
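The polling behavior just described can be sketched as a simple loop. In the following Python sketch, read_field_device() and check_alarm_queue() are hypothetical stand-ins for the telemetry layer (radio, dial-up, or WAN); they are not part of any real SCADA product's API.

```python
# Skeleton of an MTU-style polling loop with priority alarm handling.
import time

POLL_INTERVAL_S = 5  # e.g., 5 s or 60 s, as in the example above

def read_field_device(site: str) -> dict:
    return {"level": 1.2, "pressure": 3.4, "flow": 5.6}  # placeholder data

def check_alarm_queue() -> list:
    return []  # priority interrupts pushed by field-site alarm systems

def poll_forever(sites):
    while True:
        for alarm in check_alarm_queue():    # alarms preempt routine polling
            print("ALARM:", alarm)
        for site in sites:
            data = read_field_device(site)
            print(site, data)                # hand off to the HMI/historian instead
        time.sleep(POLL_INTERVAL_S)

# poll_forever(["remote-station-1", "remote-station-2"])  # runs indefinitely
```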
Fig. 14.17 shows an example implementation for rail monitoring and control. This example includes a rail control center that houses the SCADA system and three sections of a rail system. The SCADA system polls the rail sections for information such as the status of the trains, signal systems, traction electrification systems, and ticket vending machines. This information is also fed to operator consoles at the HMI station within the rail control center. The SCADA system also monitors operator inputs at the rail control center and disperses high-level operator commands to the rail section components. In addition, the SCADA system monitors conditions at the individual rail sections and issues commands based on these conditions (e.g., shut down a train to prevent it from entering an area that has been determined to be flooded or occupied by another train, based on condition monitoring).
Fig. 14.17 SCADA system implementation example (rail monitoring and control).
14.8 Distributed control systems
DCSs are used to control production systems within the same geographic location for industries such as oil refineries, water and wastewater treatment, electric power generation plants, chemical manufacturing plants, and pharmaceutical processing facilities. These systems are usually process control or discrete part control systems. A DCS uses a centralized supervisory control loop to mediate a group of localized controllers that share the overall tasks of carrying out an entire production process [13].
Fig. 14.18 Distributed control systems (DCS) implementation example.
By modularizing the production system, a DCS reduces the impact of a single fault on the overall system. In many modern systems, the DCS is interfaced with the corporate network to give business operations a view of production.
An example implementation showing the components and general configuration of a DCS is depicted in Fig. 14.18. This DCS encompasses an entire facility, from the bottom-level production processes up to the corporate or enterprise layer. In this example, a supervisory controller (control server) communicates to its subordinates via a control network. The supervisor sends SPs to, and requests data from, the distributed field controllers. The distributed controllers control their process actuators based on control server commands and sensor feedback from process sensors.
Fig. 14.18 gives examples of low-level controllers found on a DCS system. The field control devices shown include a PLC, a process controller, a single-loop controller, and a machine controller. The single-loop controller interfaces sensors and actuators using point-to-point wiring, while the other three field devices incorporate fieldbus networks to interface with process sensors and actuators. Fieldbus networks eliminate the need for point-to-point wiring between a controller and individual field sensors and actuators.
Additionally, a fieldbus allows greater functionality beyond control, including field device diagnostics, and can accomplish control algorithms within the fieldbus, thereby avoiding signal routing back to the PLC for every control operation. Standard industrial communication protocols designed by industry groups such as Modbus and Fieldbus [14] are often used on control networks and fieldbus networks. In addition to the supervisory-level and field-level control loops, intermediate levels of control may also exist. For example, in the case of a DCS controlling a discrete part manufacturing facility, there could be an intermediate-level supervisor for each cell within the plant. This supervisor would encompass a manufacturing cell containing a machine controller that processes a part and a robot controller that handles raw stock and final products. There could be several of these cells that manage field-level controllers under the main DCS supervisory control loop.
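As a brief illustration of how compact these industrial protocols are, the following Python sketch builds a Modbus/TCP "read holding registers" request frame by hand; the transaction ID, unit ID, and register values are arbitrary example values.

```python
# Sketch: hand-building a Modbus/TCP read-holding-registers request.
import struct

def modbus_read_request(transaction_id: int, unit_id: int,
                        start_register: int, count: int) -> bytes:
    # MBAP header: transaction ID, protocol ID (0), remaining length, unit ID;
    # then the PDU: function code 0x03 (read holding registers), address, count.
    return struct.pack(">HHHBBHH",
                       transaction_id, 0, 6, unit_id,
                       0x03, start_register, count)

frame = modbus_read_request(transaction_id=1, unit_id=1,
                            start_register=0, count=2)
print(frame.hex())  # 000100000006010300000002
```

Note the complete absence of authentication or encryption fields in the frame, which is one reason the network segregation measures discussed in this chapter matter.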
14.9 Programmable logic controllers
PLCs are used in both SCADA and DCS systems as the control components of an overall hierarchical system to provide local management of processes through feedback control, as described in the sections above. In the case of SCADA systems, they provide the same functionality as RTUs. When used in DCSs, PLCs are implemented as local controllers within a supervisory control scheme. PLCs are also implemented as the primary components in smaller control system configurations. PLCs have a user-programmable memory for storing instructions for the purpose of implementing specific functions such as I/O control, logic, timing, counting, three-mode proportional-integral-derivative (PID) control, communication, arithmetic, and data and file processing. Fig. 14.19 shows control of a manufacturing process being performed by a PLC over a fieldbus network. The PLC is accessible via a programming interface located on an engineering workstation, and data are stored in a data historian, all connected on a LAN. Readers interested in further information should refer to NIST Special Publication 800-82, "Guide to Industrial Control Systems (ICS) Security" [6].
14.10 AI driving modern protections against modern threats
The most important element in preventing an attack, within a new reference architecture that includes integration of AI for power plants, is protecting the industrial network perimeter (INP) from less trusted, less critical external networks. Waterfall Unidirectional Security Gateways (USGs) enable safe IT and OT integration, as well as continuous real-time monitoring of industrial operations by enterprise applications and central Security Operations Centers (SOCs), without introducing the vulnerabilities to attack that always accompany firewalled connections.
Fig. 14.19 Programmable logic controllers (PLC) control system implementation example.
Replacing at least one layer of firewalls with unidirectional technology in industrial network environments results in absolute protection of control systems and operations networks from attacks originating on external networks. USGs (see Fig. 14.20) enable vendor monitoring, industrial cloud services, and visibility into operations for modern enterprises and their customers.
Fig. 14.20 Unidirectional security gateway configuration.
Unidirectional Gateways replicate industrial servers, emulate devices, and translate industrial data to cloud formats for external enterprise networks. As a result, Unidirectional Gateway technology is a plug-and-play replacement for firewalls, without the vulnerabilities and maintenance issues that always accompany firewall deployments. Unidirectional Gateways are combinations of hardware and software. The gateway hardware physically transmits information in only one direction, most commonly from the industrial network to an external IT network or the Internet. External users and applications interact with the replica servers in real time as if those servers were the original industrial systems, making the gateways seamless replacements for firewalls. Note that unidirectional security gateway products are the foundation for a secure industrial network (SIN). The gateways in all their forms never forward messages, and they provide hardware-based protections for generation networks. In a unidirectional reference architecture for power generation, secure IT/OT integration is conducted only unidirectionally through the gateways, not through firewalls, and dangerous remote access paths are completely eliminated. Note that generation utilities may still carry out segmentation of their OT networks using firewalls, provided these firewalls are used between subnetworks at the same level of trust and criticality. As long as interconnections between Internet-exposed and industrial control networks are protected with Unidirectional Gateways in a DiD, layered network architecture, the path of infection from Internet-exposed networks is broken.
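As a software analogy of this replicate-and-forward idea, the following minimal Python sketch pushes process data outward over UDP with no application-level return channel. Real unidirectional gateways enforce one-way flow in hardware; the address, port, and tag names here are invented for illustration.

```python
# One-way, fire-and-forget replication of process data (software analogy only;
# a true unidirectional gateway enforces the one-way property physically).
import json, socket

REPLICA_ADDR = ("127.0.0.1", 9009)  # stand-in for the external replica server

def push_to_replica(readings: dict) -> None:
    """Send a snapshot to the replica; nothing is ever read back."""
    payload = json.dumps(readings).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, REPLICA_ADDR)  # no recv() anywhere in this path

push_to_replica({"turbine1.rpm": 3600, "turbine1.temp_c": 540.0})
```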
14.11 AI and cybersecurity
AI has, in recent years, developed rapidly, serving as the basis for numerous mainstream applications. From digital assistants to healthcare and from manufacturing to education, AI is widely considered a powerhouse that has yet to unleash its full potential. But in the face of rising cybercrime rates, one question seems especially pertinent: is AI a solution for cybersecurity, or just another threat?
Over the past few years, cybersecurity has emerged as an important concern for businesses across a wide range of industries, as more and more companies need to have a strong online presence. At the core of cybercrime trends is data: broadly considered the new currency of an increasingly digital world, data is one of the most important assets for all types of organizations, and safeguarding it is a top priority. In their efforts to keep hackers at bay, cybersecurity experts have developed sophisticated data protection techniques, such as data pseudonymization and data encryption. Data pseudonymization is a security process that sees critical data replaced with fictitious information that looks realistic. It is widely used by companies that wish to maintain the referential integrity and statistical accuracy of sensitive data to minimize disruption of their operations. Data encryption is another popular technique that makes data impossible to understand for anyone who does not have access to the encryption key, thereby protecting it from intruders.
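As a brief illustration of these two techniques, the following minimal Python sketch shows encryption using the third-party cryptography package (pip install cryptography) and a toy pseudonymization mapping; the sample identifiers and pseudonym format are invented for illustration.

```python
# Encryption: unreadable without the key.
from cryptography.fernet import Fernet

key = Fernet.generate_key()
f = Fernet(key)
token = f.encrypt(b"meter 42: 17.3 kWh")
assert f.decrypt(token) == b"meter 42: 17.3 kWh"

# Pseudonymization: replace an identifier with a realistic-looking stand-in,
# keeping a mapping table so referential integrity is preserved.
pseudonyms = {}

def pseudonymize(customer_id: str) -> str:
    return pseudonyms.setdefault(customer_id, f"CUST-{len(pseudonyms) + 1:05d}")

# The same input always maps to the same pseudonym.
assert pseudonymize("alice@example.com") == pseudonymize("alice@example.com")
```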
As we know today, AI has emerged as a prominent technology in cybersecurity applications. This IDC Vendor Spotlight examines the advances in and recent uses of AI in cybersecurity, as well as the role of "Lastline" in this important market. Putting a business resilience system (BRS) [1] in place will mitigate business risk (BR) and drive business value with behavior-based AI security accordingly. Among large enterprises and managed security services providers, the typical ratio of level 1 to level 2 SOC analysts is 3:1. However, the goal is to change that ratio to 1.5:1 in the coming years by making level 1 SOC analysts more efficient and augmenting the abilities of the more experienced level 1 analysts to allow them to perform at level 2.
As we have learned through previous chapters, AI is a cutting-edge technology with emerging applications in the cybersecurity industry. AI security solutions are of particular interest for enterprises facing a landscape of increasingly sophisticated, determined attackers and a shortage of security talent. However, the term "AI security" encompasses a broad range of solutions and capabilities, leading to a generally high level of market confusion. At one end of the scale, the hype surrounding AI promises autonomous "set and forget" security. Enterprises with more grounded expectations insist that AI holds genuine potential to deliver enhanced security efficacy and a measurable improvement in business value. What is certain is that AI technology is set to play a major role in the security architecture of future enterprise networks, which face never-ending threats, tactics, and tools. Therefore, a deeper evaluation of AI-powered security capabilities is required to understand what the technology can and should offer in practice, as opposed to what it could be in theory.
The cyberthreat landscape is diverse and complex, with one common characteristic: a high level of constant adaptation and innovation. Advanced threat actors have demonstrated a willingness to target private institutions just as readily as critical public infrastructure. Other advanced threats may choose to target a specific industry, such as the rash of Remote Access Trojans (RATs) targeting the banking industry in 2018. Additionally, hacker marketplaces sell exploit kits that grant any criminal organization the advanced tools once held only by high-level hacking groups.
An innovative BRS [1], with AI integrated on top of ML and DL, makes it possible to collect the right information from the right data (structured and unstructured) arriving from all directions via today's IoT; this enhances our knowledge and thus our power to make the right, informed decisions for our business at minimum risk, if not zero risk [12].
Business decision makers are keenly aware of the threat severity and extreme risk facing their organizations but are challenged by a traditional security model that has proven to be reactionary and inefficient. Security teams are constantly under pressure to address the latest emergency while building a broader and more coherent security
architecture. As a result, modern enterprise security architectures resemble patchworks of security point products, with too few integration points and fewer people to manage the solutions. Too many alarms are going off, but IT organizations lack adequate time or hands to address every issue, leaving gaps for attackers to target.
Generally speaking, AI is the science and engineering of making intelligent machines (especially intelligent computer programs), a software technology with implications across a wide variety of applications, industries, and use cases (UCs). In the cybersecurity industry, AI has emerged as a new method for detecting threats. However, the term "AI" is used seemingly without restraint and is quickly becoming meaningless, often dismissed by customers as marketing "fluff." Prospective buyers require further insight into the inner workings of the technology to make informed decisions.
AI technology, along with its component ML and DL, holds tremendous potential for improving the efficacy and business value of network security solutions. AI security solutions that leverage a deep understanding of malicious behaviors can deliver superior detection accuracy and, by extension, may improve security analysts' efficiency and effectiveness. In response to sophisticated, elusive cyberthreats, the cybersecurity industry has shifted away from static signature-based methods toward new technologies including behavioral analysis and predictive ML techniques. The blend of these latter two approaches, represented by behavior-based AI security solutions, provides a balance of predictive and deterministic methods that can yield superior detection accuracy.
Importantly, AI security solutions must be founded on a deep understanding of the specific behaviors and activities that indicate a true threat. For example, AI security solutions that rely solely on anomaly detection in network traffic may generate false positives, as the vast majority of anomalies are simply unusual but benign activity by a legitimate user, as opposed to malicious anomalies. AI security solutions that are powered by behavioral analysis will improve network security practices by helping reduce false positives and redirecting attention to anomalous activities indicative of important security events. Additionally, behavior-based AI dovetails with network traffic analysis solutions to correlate data about network traffic and other sources, such as endpoints or cloud environments. Blending behavior-based AI security techniques with network traffic analysis can help find threats across various network entry points, as well as more sophisticated threats that have already bypassed perimeter defenses and are moving laterally (a minimal sketch of this kind of anomaly detection follows).
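The following Python example trains scikit-learn's IsolationForest on synthetic traffic features; the features and data are invented for illustration, and, as the text argues, flagged anomalies still need behavioral context because most are benign-but-unusual rather than malicious.

```python
# Minimal anomaly-detection sketch over synthetic network-traffic features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Columns: bytes sent, connection duration (synthetic "normal" traffic).
normal = rng.normal(loc=[500.0, 2.0], scale=[100.0, 0.5], size=(1000, 2))
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A burst unlike the baseline is flagged -1 (anomaly); typical traffic is +1.
print(model.predict([[50_000.0, 30.0], [510.0, 2.1]]))
```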
Threat actors will adapt and force the maturation of AI technologies. More worrisome still is the eventual possibility of adversarial AI techniques. Threat actors are always evolving their tactics, and it is only a matter of time until they begin to adopt adversarial tactics designed to confuse AI, especially as security tools become more reliant on AI technologies. AI depends on data, specifically broad and high-quality data. Lacking the ability to address the "breadth" of data that security vendors can leverage, sophisticated threat actors may choose instead to undermine the quality of the data on which AI security tools train. This strategy may rely on multiple tactics, such as threat actors attempting to dilute AI training data with significant amounts of noise. From a security perspective, this tactic may involve a large number of nonmalicious, nonsecurity-related events that are designed to generate alerts from AI-based security tools. Once the AI finds a new, less sensitive baseline of normal, attackers can begin to escalate their activities. Or attackers may attempt to train the AI to recognize bad behaviors as acceptable. Repeating anomalous behaviors makes these behaviors, by definition, "not anomalous." This approach requires a high level of technological skill on the part of threat actors and assumes some lack of other security controls. However, these examples simply illustrate the importance of security threat intelligence and behavior-based context in addition to AI-based security technologies.
In conclusion, network security is an ongoing process of pinpointing the signs of an attack through a constant and tremendous volume of "noise." In theory, AI may provide the means to help identify and correlate more signals in a more efficient and effective manner. In practice, AI must first be able to account for a dynamic and unpredictable world. If poorly executed, AI security solutions may end up as simply another noisy and false alarm-prone security tool. But if AI is built around a solid foundation of security-specific behavioral insight, it can drive businesses to better security outcomes in terms of both BR and business value.
References
[1] L. Chi, B. Zhang, Managing I&C Obsolescence for Nuclear Power Plant Life Extension, IAEA INIS, 2012.
[2] Oak Ridge National Laboratory, Preferred Licensing Services, Logenecker & Associates, Advanced Reactor Licensing: Experience with Digital I&C Technology in Evolutionary Plants, Office of Nuclear Regulatory Research, U.S. Nuclear Regulatory Commission, Washington, DC, April 2004.
[3] K. Koscher, et al., Experimental Security Analysis of a Modern Automobile, in: Proc. IEEE Symp. on Security and Privacy, California, 2010.
[4] M.A. Laughton, D.J. Warne (Eds.), Electrical Engineer's Reference Book, 16th ed., Chapter 16: Programmable Controller, Newnes, 2003.
[5] R. Tang, Cyberthreats, Vulnerabilities and Attacks on SCADA Networks (PDF), berkeley.edu.
[6] https://engineering.purdue.edu/VAAMI/ICS-modules.pdf.
[7] https://csrc.nist.gov/publications/detail/sp/800-82/archive/2011-06-09.
[8] R. Frazer, Process Measurement and Control: Introduction to Sensors, Communication, Adjustment, and Control, Prentice-Hall, Inc., 2001.
[9] J. Falco, et al., IT Security for Industrial Control Systems, NISTIR 6859, NIST, 2003. http://www.isd.mel.nist.gov/documents/falco/ITSecurityProcess.pdf.
[10] D. Bailey, E. Wright, Practical SCADA for Industry, IDC Technologies, 2003.
[11] S. Boyer, SCADA: Supervisory Control and Data Acquisition, 2nd ed., ISA, 1999.
[12] K. Erickson, J. Hedrick, Plant Wide Process Control, Wiley & Sons, 1999.
[13] J. Berge, Fieldbuses for Process Control: Engineering, Operation, and Maintenance, ISA, 2002.
[14] R. Poovendran, Cyber-Physical Systems: Close Encounters Between Two Parallel Worlds [Point of View], Proc. IEEE 98 (8) (2010) 1363–1366.
[15] R.R. Rajkumar, et al., Cyber-Physical Systems: The Next Computing Revolution, in: Proc. 47th Design Automation Conf., Anaheim, California, 2010.
APPENDIX A
Plan-do-check-act (PDCA) cycle
PDCA (plan–do–check–act or plan–do–check–adjust) is an iterative four-step management method used in business for the control and continual improvement of processes and products. It sounds like a very easy procedure: "Just plan your work and work your plan." So why is plan-do-check-act so difficult in practice? Using the PDCA method is like climbing a hill: it starts out easy but gets harder the higher up you go. This appendix provides more detailed information on PDCA.
A.1 Introduction
PDCA is an iterative four-step management method used in business for the control and continual improvement of processes and products [1]. It is also known as the Deming circle/cycle/wheel, the Shewhart cycle, the control circle/cycle, or plan–do–study–act (PDSA). Another version of this PDCA cycle is OPDCA [2]. The added "O" stands for observation, or, as some versions say, "Observe the current condition." This emphasis on observation and the current condition has currency in the literature on lean manufacturing and the Toyota Production System [3]. The PDCA cycle, with Ishikawa's changes, can be traced back to S. Mizuno of the Tokyo Institute of Technology in 1959; see Fig. A.1. Note that the PDCA cycle, also called the PDSA cycle, is known as the Deming cycle and the Shewhart cycle as well; see Fig. A.2. It is known as PDCA for short and refers to the process approach of management, or the learning loop of discovery. It is one of the cornerstones of the quality world and of the ISO 9001 standard. If executed correctly, it can help you get control over a seemingly chaotic world. Yes, it is hard to do PDCA right; but then, what worth doing well is easy? Each element of PDCA is described in the sections that follow.
A.2 Plan
The planning phase involves assessing a current process, or a new process, and figuring out how it can be improved upon. Knowing what types of outputs are desired helps in developing a plan to fix or improve the process. It is often easier to plan smaller changes during this phase so that they can be easily monitored and the outputs are more predictable.
Fig. A.1 The PDCA cycle illustration.
Document Your “Plan.” The plan is really the easiest part of PDCA. Start with goals that are measurable. Document your plan using objectives, policies, procedures, and forms. Assign individual responsibilities, and you are done. When you are building an ISO 9001 Quality Management System, this is not hard at all. But it gets harder, and pretty quickly.
A.3 Do
The do phase allows the plan from the previous step to be enacted. Small changes are usually tested, and data are gathered to see how effective the change is.
Using Your Plan Is "Do." What does the "Do" in PDCA mean? This refers to using the policies, procedures, and forms to realize your objectives. This means collecting data and populating your forms. You have to use your procedures and follow your policies. While this may sound easy at first, keeping it up is the hard part: you may start with good intentions (that is the "plan" part), but as they say, "Good intentions do not pay the bills."
Fig. A.2 Plan-do-check-act cycle.
Your company has to follow through on its plan, and follow-through (commitment) starts at the top. Furthermore, management commitment is not an isolated event; it is part of the company philosophy. In case you missed that, let me say it again: "doing" takes management commitment. That is a large part of what makes PDCA so hard. As management, you get so involved in running the day-to-day aspects of the business that you forget that you started with good intentions (the plan). It is not that the plan was ill-conceived; it is that there is more to it than a piece of paper. Plans need continual reevaluation: you need to constantly "check" your progress and adjust the plan accordingly. What is so hard about checking the plan?
A.4 Check
During the check phase, the data and results gathered from the do phase are evaluated. Data are compared to the expected outcomes to see any similarities and differences. The testing process is also evaluated to see if there were any changes from the original test created during the planning phase. Placing the data in a chart can make it easier to see any trends when the PDCA cycle is conducted multiple times. This helps to see which changes work better than others and whether said changes can be improved as well. See Fig. A.3.
"Check" Your Plan. In the "check" step of PDCA, you have to convert data into information. Charting data can make this much easier, but even so, a chart is just a visualization of data. A chart is not information without a target. In addition, you need enough data points to show trends. How many make a trend? Two points is a line; you need at least three points for a trend. But how much confidence will you have in a three-point trend? Not much.
Fig. A.3 Continuous quality improvement with PDCA illustration.
So, 10 points would build confidence in your trend: instead of the last 3 months, look at the last 13 weeks (as in the sketch below). Creating information from data requires what Deming called "profound knowledge" about your system. It will help if you understand a little about statistics, which will make it easier to separate the individual data points that represent the "vital few," or significant ones, from the "trivial many." Creating information out of data is not easy; often, it requires that you continually dissect the data and look at it from many different points of view.
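As a toy illustration of that advice, the following Python sketch fits a trend to 13 weekly values and compares the latest reading with a target; the metric and all of the numbers are made up.

```python
# Fit a simple trend to 13 weekly data points and compare against a target.
import numpy as np

weeks = np.arange(13)
on_time_delivery = np.array([91, 92, 90, 93, 92, 94, 93,
                             95, 94, 95, 96, 95, 97], dtype=float)
target = 95.0

slope, intercept = np.polyfit(weeks, on_time_delivery, 1)  # least-squares line
print(f"trend: {slope:+.2f} points/week; "
      f"latest vs. target: {on_time_delivery[-1] - target:+.1f}")
```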
A.5 Act
The adjust phase is the alternative version of the act phase. Once PDCA has been run multiple times, the process generally has enough information for it to be considered a new standard; this is usually completed in the act phase. The adjust phase allows the process to continue to be monitored after the changes have been implemented, and to be fixed accordingly. Doing this lets the PDCA cycle truly be about continuous improvement, instead of changing a process and letting it become inefficient again.
"Act" On Your Results. So, let us say you started with a good plan, you were able to collect some meaningful data, and you turned it into useful information. If you have accomplished this, the "Act" phase should be easy, right? Possibly, if you have a stable environment. Today's business world is an increasingly unstable environment: old and new forces are continually changing the dynamic. There is local and global competition, widespread and affordable technology, weather and climate, cultures, beliefs: a host of forces acting on your business. Deciding what to do to compensate for or leverage external forces has always been difficult; it is just becoming more so. But if you do a good job at the first three phases, the "Act" phase of PDCA becomes a lot easier. You just need to make better information out of your data. An example of PDCA in action is depicted in Fig. A.4.
Fig. A.4 Plan-do-check-act process in action.
As indicated in Fig. A.4, the Olympics happen every few years. Some Olympic records, and a few world records, are broken over the course of the event. You watch these athletes perform and you marvel at their power, their endurance, their finesse. How do they do it? What makes them so special? Are they that different from you and me? Are they superhuman? No, not really. They are just like you and me; well, maybe not now. But we all start out on equal footing. The big difference? With a few exceptions, the athletes got their start fairly early in life. And almost from the day they laced up a pair of skates or strapped on skis, they had an ambitious, long-range goal: to be a pro, maybe even the next Wayne Gretzky or Hermann Maier. Family and friends encouraged and helped them. Their parents, and then their coaches, made up their plan. They knew that to get the big goal, these future stars had to accomplish a lot of smaller goals, and they had to do it in step-wise fashion. The plan included competition, proper nutrition, and physical and mental training. Their coaches checked their performance in training and competitions. They analyzed the athlete's performance, noted where they were achieving those small goals and where they were not, and revised the plan accordingly. Then, they executed the revised plan to improve performance. They repeated this step-wise plan over and over until they reached their big goal, whether that was turning pro, making the Olympic team, making it to the medal round, or standing on the podium at the medal ceremony. Think about that. They made a plan, executed it, checked their progress, and improved incrementally. What does that remind you of? If you thought "Deming cycle" [4], you are right. PDCA, just like your organization should be doing (if it is not already). Your organization is just like that Olympic athlete. Improvement does not happen overnight. It happens in stages, over time, following a plan.
A.6 About
PDCA was made popular by W. Edwards Deming [4], who is considered by many to be the father of modern quality control; however, he always referred to it as the "Shewhart cycle." Later in Deming's career, he modified PDCA to PDSA because he felt that "check" emphasized inspection over analysis [5]. The PDSA cycle was used to create the model of the know-how transfer process [6], and other models [7]. The concept of PDCA is based on the scientific method, as developed from the work of Francis Bacon [12]. The scientific method can be written as "hypothesis–experiment–evaluation" or as "plan–do–check." Walter A. Shewhart described manufacture under "control," that is, under statistical control, as a three-step process of specification, production, and inspection [9]. He also specifically related this to the scientific method of hypothesis, experiment, and evaluation.
Fig. A.5 Multiple PDCA iteration process illustration.
Shewhart said that the statistician "must help to change the demand [for goods] by showing […] how to close up the tolerance range and to improve the quality of goods" [8]. Clearly, Shewhart intended the analyst to take action based on the conclusions of the evaluation. According to Deming, during his lectures in Japan in the early 1950s, the Japanese participants shortened the steps to the now traditional plan, do, check, act [4]. Deming preferred plan, do, study, act because "study" has connotations in English closer to Shewhart's intent than "check" [9]. A fundamental principle of the scientific method and PDCA is iteration: once a hypothesis is confirmed (or negated), executing the cycle again will extend the knowledge further. See Fig. A.5, where multiple iterations of the PDCA cycle are repeated until the problem is solved. Repeating the PDCA cycle can bring its users closer to the goal, usually a perfect operation and output [9]. Another fundamental function of PDCA is the "hygienic" separation of each phase, for if the phases are not properly separated, measurements of effects due to various simultaneous actions (causes) risk becoming confounded [10]. PDCA (and other forms of scientific problem solving) is also known as a system for developing critical thinking. At Toyota this is also known as "building people before building cars" [11]. Toyota and other lean manufacturing companies propose that an engaged, problem-solving workforce using PDCA in a culture of critical thinking is better able to innovate and stay ahead of the competition through rigorous problem solving and the subsequent innovations [11]. Deming continually emphasized iterating toward an improved system, hence PDCA should be repeatedly implemented in spirals of increasing knowledge of the system that converge on the ultimate goal, each cycle closer than the previous. One can envision an open coil spring, with each loop being one cycle of the scientific method, and each complete cycle indicating an increase in our knowledge of the system under study. This approach is based on the belief that our knowledge and skills are limited but improving. Especially at the start of a project, key information may not be known; PDCA, as a scientific method, provides feedback to justify guesses (hypotheses) and increase knowledge. Rather than enter "analysis paralysis" to get it perfect the first time, it is better to be approximately right than exactly wrong. With improved knowledge, one may choose to refine or alter the goal (ideal state). The aim of the PDCA cycle is to bring its users closer to whatever goal they choose [3].
When PDCA is used for complex projects or products with a certain controversy, checking with external stakeholders should happen before the do stage, as changes to projects and products that are already in detailed design can be costly; this is also seen as PDCA. Rate of change, that is, rate of improvement, is a key competitive factor in today's world. PDCA allows for major "jumps" in performance ("breakthroughs," often desired in a Western approach), as well as kaizen (frequent small improvements). In the United States, a PDCA approach is usually associated with a sizable project involving numerous people's time, and thus managers want to see large "breakthrough" improvements to justify the effort expended. However, the scientific method and PDCA apply to all sorts of projects and improvement activities [3].
A.7 When to use PDCA
PDCA is appropriate in situations such as the following:
• As a model for continuous improvement.
• When starting a new improvement project.
• When developing a new or improved design of a process, product, or service.
• When defining a repetitive work process.
• When planning data collection and analysis to verify and prioritize problems or root causes.
• When implementing any change.
A.8 PDCA procedure
The following are the procedure steps for PDCA:
1. Plan. Recognize an opportunity and plan a change.
2. Do. Test the change. Carry out a small-scale study.
3. Check. Review the test, analyze the results, and identify what you have learned.
4. Act. Take action based on what you learned in the study step: if the change did not work, go through the cycle again with a different plan. If you were successful, incorporate what you learned from the test into wider changes. Use what you learned to plan new improvements, beginning the cycle again. A minimal sketch of this loop follows.
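The following minimal Python sketch expresses this procedure as an iterative loop; the metric, target, and improvement rule are placeholders chosen purely for illustration.

```python
# PDCA as an iterative loop: plan a small change, test it, check the
# result against the baseline, and act by adopting it or re-planning.

def pdca(target: float, metric: float, max_cycles: int = 10) -> float:
    for cycle in range(1, max_cycles + 1):
        planned_change = 0.5 * (target - metric)  # Plan: small, measurable step
        trial = metric + planned_change           # Do: small-scale test
        improved = abs(target - trial) < abs(target - metric)  # Check
        if improved:                              # Act: adopt, or re-plan next cycle
            metric = trial
        print(f"cycle {cycle}: metric = {metric:.2f}")
        if abs(target - metric) < 0.1:
            break
    return metric

pdca(target=95.0, metric=80.0)  # converges toward the goal in small steps
```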
A.9 Using the process approach to create a well-designed process
Consider this: What are your goals for the short and long term? Do you have a plan to get there? Are you satisfied with your performance? More importantly, are your customers?
How do you get better? What will it take to make your firm stand out from the rest—to get to the Games, to the medal round, and maybe even the gold, silver, or bronze? Are you monitoring and analyzing your performance to improve? Are you looking for overnight success, or are you looking for incremental improvement over time? Do you adjust your plan when you do not meet your goals?

So, there you have it: Plan-Do-Check-Act, or PDCA for short. It is one of the cornerstones of the quality world and of the ISO 9001 standard. If executed correctly, it can help you get control over a seemingly chaotic world. Yes, it is hard to do PDCA right, but what is worth doing well that is not difficult, too? If it were that easy, everyone would be doing it, don't you think?

Another example of PDCA is given here: The Pearl River, NY School District, a 2001 recipient of the Malcolm Baldrige National Quality Award, uses the PDCA cycle as a model for defining most of its work processes, from the boardroom to the classroom. PDCA is the basic structure for the district's overall strategic planning, needs analysis, curriculum design and delivery, staff goal-setting and evaluation, provision of student services and support services, and classroom instruction. Fig. A.6 shows their "A+ Approach to Classroom Success." This is a continuous cycle of designing curriculum and delivering classroom instruction. Improvement is not a separate activity: it is built into the work process.

Plan: The A+ approach begins with a "plan" step called "analyze." In this step, students' needs are analyzed by examining a range of data available in Pearl River's electronic data "warehouse," from grades to performance on standardized tests.
Fig. A.6 Plan-do-check-act example.
Fig. A.7 Pearl River analysis process.
Data can be analyzed for individual students or stratified by grade, gender, or any other subgroup. Because PDCA does not specify how to analyze data, a separate data analysis process (Fig. A.7) is used here as well as in other processes throughout the organization.

Do: The A+ approach continues with two "do" steps:
1. "Align" asks what national and state standards require and how they will be assessed. Teaching staff also plans curriculum by looking at what is taught at earlier and later grade levels and in other disciplines to assure a clear continuity of instruction throughout the student's schooling. Teachers develop individual goals to improve their instruction where the "analyze" step showed any gaps.
2. The second "do" step is, in this example, called "act." This is where instruction is actually provided, following the curriculum and teaching goals. Within set parameters, teachers vary the delivery of instruction based on each student's learning rates and styles, using varied teaching methods.

Check: The "check" step is called "assess" in this example. Formal and informal assessments take place continually, from daily teacher "dipstick" assessments to every-six-weeks progress reports to annual standardized tests. Teachers also can access comparative data on the electronic database to identify trends. High-need students are monitored by a special child study team. Throughout the school year, if assessments show students are not learning as expected, mid-course corrections are made, such as reinstruction, changing teaching methods, and more direct teacher mentoring. Assessment data become input for the next step in the cycle.
Act: In this example, the "act" step is called "standardize." When goals are met, the curriculum design and teaching methods are considered standardized. Teachers share best practices in formal and informal settings. Results from this cycle become input for the "analyze" phase of the next A+ cycle.
References
[1] N.R. Tague, The Quality Toolbox, second ed., ASQ Quality Press, pp. 390–392. ISBN 0873896394. OCLC 57251077. Retrieved 2017-10-21.
[2] Foresight University, The Foresight Guide, Shewhart's Learning and Deming's Quality Cycle. See Ref. [1].
[3] M. Rother, Toyota Kata: Managing People for Improvement, Adaptiveness, and Superior Results, McGraw-Hill, New York, 2010. ISBN 0071635238. OCLC 318409119.
[4] W.E. Deming, Out of the Crisis, Massachusetts Institute of Technology, Center for Advanced Engineering Study, Cambridge, Massachusetts, 1986, p. 88. ISBN 0911379010. OCLC 13126265.
[5] R. Aguayo, Dr. Deming: The American Who Taught the Japanese About Quality, A Lyle Stuart Book, Carol Pub. Group, Secaucus, New Jersey, 1990, p. 76. ISBN 0818405198. OCLC 22347078. Also published by Simon & Schuster, 1991.
[6] M. Dubickis, E. Gaile-Sarkane, Transfer of know-how based on learning outcomes for development of open innovation, J. Open Innov. Technol. Market Complexity 3 (1) (December 2017) 4, doi:10.1186/s40852-017-0053-4.
[7] H. Dubberly, How do you design?: a compendium of models, dubberly.com (2008) [2004]. Retrieved 21 October 2017.
[8] W.A. Shewhart, Statistical Method from the Viewpoint of Quality Control, Dover, New York, 1986 (1939). ISBN 0486652327. OCLC 13822053. Reprint; originally published: Graduate School of the Department of Agriculture, Washington, DC, 1939.
[9] R. Moen, C. Norman, Evolution of the PDCA cycle (PDF), westga.edu. Paper delivered to the Asian Network for Quality Conference in Tokyo on September 17, 2009. Retrieved 1 October 2011.
[10] J. Berengueres, The Toyota Production System Re-contextualized, José Berengueres, Tokyo, 2007, p. 74. ISBN 1847534775. OCLC 906982187.
[11] J.K. Liker, The Toyota Way: 14 Management Principles from the World's Greatest Manufacturer, McGraw-Hill, New York, 2004. ISBN 0071392319.
[12] F. Bacon, Novum Organum, England, 1620, https://en.wikipedia.org/wiki/Novum_Organum, accessed 01/10/2020.
Appendix B
Cumulative sum control chart (CUSUM)
This appendix presents the cumulative sum (CUSUM) control chart and how it is generated. The format of the control chart is fully customizable. The data for the subgroups can be in a single column or in multiple columns. This procedure permits the defining of stages. The target value and sigma may be estimated from the data (or a subset of the data), or a target value and sigma may be entered directly. The CUSUM chart may be used for subgroup data or for single observations at each time point. A fast initial response (head start) may be employed by the CUSUM chart if desired.
B.1 Introduction
In statistical quality control, the CUSUM is a sequential analysis technique developed by E. S. Page of the University of Cambridge. It is typically used for monitoring change detection [1]. CUSUM was announced in Biometrika in 1954, a few years after the publication of Wald's SPRT algorithm [2]. Page referred to a "quality number" θ, by which he meant a parameter of the probability distribution—for example, the mean. He devised CUSUM as a method to determine changes in it and proposed a criterion for deciding when to take corrective action. When the CUSUM method is applied to changes in mean, it can be used for step detection of a time series. A few years later, George Alfred Barnard developed a visualization method, the V-mask chart, to detect both increases and decreases in θ [3].

The CUSUM chart is used to monitor the mean of a process based on samples taken from the process at given times (hours, shifts, days, weeks, months, etc.). The measurements of the samples at a given time constitute a subgroup. Rather than examining the mean of each subgroup independently, the CUSUM chart shows the accumulation of information from current and previous samples. For this reason, the CUSUM chart is generally better than the X-bar chart for detecting small shifts in the mean of a process. The CUSUM chart relies on the specification of a target value and a known or reliable estimate of the standard deviation. For this reason, the CUSUM chart is better used after process control has been established (see Fig. B.1). The CUSUM chart typically signals an out-of-control process by an upward or downward drift of the cumulative sum until it crosses the boundary. An assignable cause is suspected whenever the CUSUM chart indicates an out-of-control process. A typical presentation of the CUSUM chart driving parameters is shown in Table B.1.
Fig. B.1 Cumulative sum chart illustration.

Table B.1 CUSUM chart variables.
Originally proposed by: E. S. Page
Process observations:
  Rational subgroup size: n = 1
  Measurement type: Cumulative sum of a quality characteristic
  Quality characteristic type: Variable data
  Underlying distribution: Normal distribution
Performance:
  Size of shift to detect: ≤1.5σ
Process variation chart: Not applicable
Process mean chart:
  Center line: The target value, T, of the quality characteristic
  Upper control limit: $C_i^{+} = \max\left[0,\, x_i - (T + K) + C_{i-1}^{+}\right]$
  Lower control limit: $C_i^{-} = \max\left[0,\, (T - K) - x_i + C_{i-1}^{-}\right]$
  Plotted statistic: $C_i = \sum_{j=1}^{i}\left(\bar{x}_j - T\right)$
B.2 Method
As its name implies, CUSUM involves the calculation of a cumulative sum (which is what makes it "sequential"). Samples from a process $x_n$ are assigned weights $\omega_n$ and summed as follows:

$$S_0 = 0 \tag{B.1}$$

$$S_{n+1} = \max(0,\; S_n + x_n - \omega_n) \tag{B.2}$$
When the value of S exceeds a certain threshold value, a change in value has been found. The aforementioned formula only detects changes in the positive direction. When negative changes need to be found as well, the min operation should be used instead of the max operation, and this time a change has been found when the value of S is below the (negative) value of the threshold value.

Page did not explicitly say that ω represents the likelihood function, but this is common usage. Note that this differs from the SPRT by always using zero as the lower "holding barrier" rather than a separate lower "holding barrier" [1]. Also, CUSUM does not require the use of the likelihood function. As a means of assessing CUSUM's performance, Page defined the average run length (ARL) metric—"the expected number of articles sampled before action is taken." He further wrote [2]: When the quality of the output is satisfactory, the ARL is a measure of the expense incurred by the scheme when it gives false alarms, that is, type I errors [4]. On the other hand, for constant poor quality the ARL measures the delay and, thus, the amount of scrap produced before the rectifying action is taken, that is, type II errors.

Note: A type I error occurs when the null hypothesis (H0) is true but is rejected. It is asserting something that is absent, a false hit. A type I error may be likened to a so-called "false positive" (a result that indicates that a given condition is present when it actually is not present). In terms of folk tales, an investigator may see the wolf when there is none (raising a false alarm), where the null hypothesis, H0, is "no wolf." The type I error rate or significance level is the probability of rejecting the null hypothesis given that it is true. It is denoted by the Greek letter α (alpha) and is also called the alpha level. Often, the significance level is set to 0.05 (5%), implying that it is acceptable to have a 5% probability of incorrectly rejecting the null hypothesis.

Note: A type II error is an error that occurs when the null hypothesis is false but erroneously fails to be rejected. It is failing to assert what is present, a miss. A type II error may be compared with a so-called "false negative" (where an actual "hit" was disregarded by the test and seen as a "miss") in a test checking for a single condition with a definitive result of true or false. A type II error is committed when we fail to believe a true alternative hypothesis.
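The recursion of Eqs. (B.1)–(B.2) is easy to implement directly. The following Python sketch is illustrative only; the constant allowance w standing in for the weights and the decision threshold h are assumed example values, not ones prescribed by this appendix.

```python
# One-sided (upper) CUSUM per Eqs. (B.1)-(B.2): S_0 = 0, S_{n+1} = max(0, S_n + x_n - w).
# The allowance w and threshold h below are illustrative assumptions.

def cusum_upper(samples, w=0.5, h=4.0):
    """Return the CUSUM path and the index of the first sample that crosses h (or None)."""
    s, path, alarm = 0.0, [], None
    for i, x in enumerate(samples):
        s = max(0.0, s + x - w)        # hold at zero: the lower "holding barrier"
        path.append(s)
        if alarm is None and s > h:    # a change is declared once S exceeds h
            alarm = i
    return path, alarm

if __name__ == "__main__":
    # In-control samples followed by a small upward shift in the mean.
    data = [0.1, -0.2, 0.3, 0.0, -0.1] + [1.0] * 10
    path, alarm = cusum_upper(data)
    print("first alarm at sample index:", alarm)
```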
In terms of folk tales, an investigator may fail to see the wolf when it is present (failing to raise an alarm). Again, H0 is "no wolf." The rate of the type II error is denoted by the Greek letter β (beta) and is related to the power of a test (which equals 1 − β).

Type I and type II examples:

Example 1. Hypothesis: "Adding water to toothpaste protects against cavities." Null hypothesis (H0): "Adding water does not make toothpaste more effective in fighting cavities." This null hypothesis is tested against experimental data with a view to nullifying it with evidence to the contrary. A type I error occurs when detecting an effect (adding water to toothpaste protects against cavities) that is not present. The null hypothesis is true (i.e., it is true that adding water to toothpaste does not make it more effective in protecting against cavities), but this null hypothesis is rejected based on bad experimental data or an extreme outcome of chance alone.

Example 2. Hypothesis: "Adding fluoride to toothpaste protects against cavities." Null hypothesis (H0): "Adding fluoride to toothpaste has no effect on cavities." This null hypothesis is tested against experimental data with a view to nullifying it with evidence to the contrary. A type II error occurs when failing to detect an effect (adding fluoride to toothpaste protects against cavities) that is present. The null hypothesis is false (i.e., adding fluoride is actually effective against cavities), but the experimental data is such that the null hypothesis cannot be rejected.

Example 3. Hypothesis: "The evidence produced before the court proves that this man is guilty." Null hypothesis (H0): "This man is innocent." A type I error occurs when convicting an innocent person (a miscarriage of justice). A type II error occurs when letting a guilty person go free (an error of impunity). A positive correct outcome occurs when convicting a guilty person. A negative correct outcome occurs when letting an innocent person go free.

Example 4. Hypothesis: "A patient's symptoms improve after treatment A more rapidly than after a placebo treatment." Null hypothesis (H0): "A patient's symptoms after treatment A are indistinguishable from those after a placebo." A type I error would falsely indicate that treatment A is more effective than the placebo, whereas a type II error would be a failure to demonstrate that treatment A is more effective than the placebo even though it actually is.
CUSUM example: Consider 20 observations of a process with a mean value of X equal to 0 and a standard deviation of 0.5. The standardized value Z never exceeds 3, so other control charts would not detect a failure, while the CUSUM signals at observation 17, where the value of SH exceeds 4. Fig. B.2 illustrates a related variant, the cumulative observed-minus-expected plot.
Fig. B.2 Cumulative observed minus expected plots.
B.2.1 Other control charts for the mean of a process
The X-bar chart is the most common control chart for monitoring the process mean. The X-bar chart is usually used in phase I monitoring, when process control is being established. The X-bar chart is useful for detecting large changes in the process mean. The CUSUM chart is based on an established target mean and a reliable value for sigma. The CUSUM chart is useful for quickly detecting small shifts in the process mean.

An alternative to the CUSUM chart is the exponentially weighted moving average (EWMA) chart. The EWMA chart has similar properties to the CUSUM chart and is also useful for detecting smaller shifts in the process mean. When only a single response is available at each time point, the individuals and moving range (I-MR) control charts can be used for early-phase monitoring of the mean and variation. CUSUM and EWMA charts may also be used for single responses and are useful when small changes in the mean need to be detected.
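For comparison with the CUSUM recursion, a minimal EWMA chart is sketched below in Python. The smoothing constant λ = 0.2 and width L = 3 are common textbook defaults assumed here for illustration; the limits use the standard exact-variance form that widens with i.

```python
import math

def ewma_chart(samples, mu0, sigma, lam=0.2, L=3.0):
    """Return (z_i, LCL_i, UCL_i) triples: z_i = lam*x_i + (1 - lam)*z_{i-1}."""
    z, rows = mu0, []
    for i, x in enumerate(samples, start=1):
        z = lam * x + (1.0 - lam) * z
        half_width = L * sigma * math.sqrt(
            lam / (2.0 - lam) * (1.0 - (1.0 - lam) ** (2 * i))
        )
        rows.append((z, mu0 - half_width, mu0 + half_width))
    return rows
```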
B.3 Control chart formula
Suppose we have k subgroups, each of size n. Let $x_{ij}$ represent the measurement in the jth sample of the ith subgroup. The ith subgroup mean is calculated using

$$\bar{x}_i = \frac{\sum_{j=1}^{n} x_{ij}}{n} \tag{B.3}$$
B.4 Estimating the target value
In the CUSUM procedure, the target value may be input directly, or it may be estimated from a series of subgroups. If it is estimated from the subgroups, the formula for the grand average is

$$\bar{\bar{x}} = \frac{\sum_{i=1}^{k}\sum_{j=1}^{n_i} x_{ij}}{\sum_{i=1}^{k} n_i} \tag{B.4}$$

If the subgroups are of equal size, the aforementioned equation for the grand mean reduces to

$$\bar{\bar{x}} = \frac{\sum_{i=1}^{k} \bar{x}_i}{k} = \frac{\bar{x}_1 + \bar{x}_2 + \cdots + \bar{x}_k}{k} \tag{B.5}$$
B.5 Estimating sigma—sample range
Either the range or the standard deviation of the subgroups may be used to estimate sigma, or a known (standard) sigma value may be entered directly. If the standard deviation (sigma) is to be estimated from the ranges, it is estimated as

$$\hat{\sigma} = \frac{\bar{R}}{d_2} \tag{B.6}$$

where

$$\bar{R} = \frac{\sum_{i=1}^{k} R_i}{k} \tag{B.7}$$

$$d_2 = \frac{E(R)}{\sigma} = \frac{\mu_R}{\sigma}$$
The calculation of E(R) requires knowledge of the underlying distribution of the $x_{ij}$. Making the assumption that the $x_{ij}$ follow the normal distribution with constant mean and variance, the values for $d_2$ are derived through the use of numerical integration. It is important to note that the normality assumption is used and that the accuracy of this estimate requires that this assumption be valid. When n is one, we cannot calculate $R_i$, since it requires at least two measurements. The procedure in this case is to use the ranges of successive pairs of observations. Hence, the range of the first and second observations is computed, the range of the second and third is computed, and so on. The average of these approximate ranges is used to estimate σ.
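As a sketch of Eqs. (B.6)–(B.7), the Python fragment below estimates sigma from subgroup ranges. The d2 constants for small subgroup sizes are the standard tabulated control chart values, and the n = 1 moving-range case uses d2 for n = 2, as described above.

```python
# d2 control chart constants for subgroup sizes 2-5 (standard tabulated values).
D2 = {2: 1.128, 3: 1.693, 4: 2.059, 5: 2.326}

def sigma_from_ranges(subgroups):
    """sigma-hat = R-bar / d2 for a list of equal-size subgroups (Eqs. B.6-B.7)."""
    n = len(subgroups[0])
    r_bar = sum(max(g) - min(g) for g in subgroups) / len(subgroups)
    return r_bar / D2[n]

def sigma_from_moving_ranges(xs):
    """n = 1 case: average the ranges of successive pairs and divide by d2 for n = 2."""
    mr_bar = sum(abs(b - a) for a, b in zip(xs, xs[1:])) / (len(xs) - 1)
    return mr_bar / D2[2]
```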
B.6 Estimating sigma—mean of standard deviations
The true standard deviation (sigma) may be input directly, or it may be estimated from the standard deviations by

$$\hat{\sigma} = \frac{\bar{s}}{c_4} \tag{B.8}$$

where

$$\bar{s} = \frac{\sum_{i=1}^{k} s_i}{k} \tag{B.9}$$

$$c_4 = \frac{E(s)}{\sigma} = \frac{\mu_s}{\sigma}$$
The calculation of E(s) requires knowledge of the underlying distribution of the $x_{ij}$. Making the assumption that the $x_{ij}$ follow the normal distribution with constant mean and variance, the values for $c_4$ are obtained from

$$c_4 = \sqrt{\frac{2}{n-1}}\;\frac{\Gamma\!\left(\frac{n}{2}\right)}{\Gamma\!\left(\frac{n-1}{2}\right)} \tag{B.10}$$
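Eq. (B.10) can be evaluated directly with the gamma function; the fragment below is a minimal Python sketch.

```python
import math

def c4(n):
    """c4 constant of Eq. (B.10): sqrt(2/(n-1)) * Gamma(n/2) / Gamma((n-1)/2)."""
    return math.sqrt(2.0 / (n - 1)) * math.gamma(n / 2.0) / math.gamma((n - 1) / 2.0)

# c4(5) evaluates to about 0.9400, matching the usual tabulated value.
```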
B.7 Estimating sigma—weighted approach
When the sample size varies across subgroups, a weighted approach is recommended for estimating sigma:

$$\hat{\sigma} = \bar{s} = \left[\frac{\sum_{i=1}^{k}(n_i - 1)\,s_i^2}{\sum_{i=1}^{k} n_i - k}\right]^{1/2} \tag{B.11}$$
B.8 CUSUM charts
Following the CUSUM procedure presented here, the steps for creating a CUSUM chart may be summarized as follows:
1. Calculate the $z_i$ using the formula

$$z_i = \frac{\bar{x}_i - \bar{\bar{x}}}{\hat{\sigma}_{\bar{x}}} \tag{B.12}$$

2. Calculate the lower and upper cumulative sums as follows:

$$S_{L_i} = -\max\left[0,\,(-z_i - k) + S_{L_{i-1}}\right], \qquad S_{H_i} = \max\left[0,\,(z_i - k) + S_{H_{i-1}}\right] \tag{B.13}$$

3. Plot $S_{H_i}$ and $S_{L_i}$ on a control chart. The control limits are chosen as plus or minus h. The usual choice for k is 0.5 (for detecting one-sigma shifts in the mean), and h is typically set to 5.
4. When an out-of-control situation is detected, the corresponding sum may be left as it is or reset to an appropriate starting value. In the NCSS software (from NCSS.com) [5], the restarting value may be set to zero or to the fast initial restart (FIR) value of h/2.
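A minimal Python sketch of this procedure is given below. It tracks the magnitudes of both sums, plots SL as a negative value, and applies the optional FIR head start of h/2; the subgroup means, target, and sigma it expects are whatever Eqs. (B.4)–(B.11) produced. It is illustrative only, not the NCSS implementation.

```python
def tabular_cusum(xbars, target, sigma_xbar, k=0.5, h=5.0, fir=False):
    """Two-sided standardized CUSUM per Eqs. (B.12)-(B.13); returns (SH, SL, alarms)."""
    start = h / 2.0 if fir else 0.0       # fast initial restart (FIR) head start
    sh = sl = start
    SH, SL, alarms = [], [], []
    for i, xb in enumerate(xbars):
        z = (xb - target) / sigma_xbar    # Eq. (B.12)
        sh = max(0.0, sh + z - k)         # upper sum
        sl = max(0.0, sl - z - k)         # lower sum, tracked as a positive magnitude
        SH.append(sh)
        SL.append(-sl)                    # plotted below the center line
        if sh > h or sl > h:
            alarms.append(i)
            sh = sl = start               # reset (or FIR restart) after a signal
    return SH, SL, alarms
```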
Fig. B.3 CUSUM chart sample plot.
The plot in Fig. B.3 is a sample CUSUM chart; for more details, the reader should refer to Ref. [5] at the end of this appendix.
References
[1] O.A. Grigg, V.T. Farewell, D.J. Spiegelhalter, et al., The use of risk-adjusted CUSUM and RSPRT charts for monitoring in medical contexts, Statistical Methods in Medical Research 12 (2) (2003) 147–170, doi:10.1177/096228020301200205. PMID 12665208.
[2] E.S. Page, Continuous inspection schemes, Biometrika 41 (1/2) (June 1954) 100–115, doi:10.1093/biomet/41.1-2.100. JSTOR 2333009.
[3] G.A. Barnard, Control charts and stochastic processes, Journal of the Royal Statistical Society 21 (2) (1959) 239–271. JSTOR 2983801.
[4] J. Neyman, Sufficient statistics and uniformly most powerful tests of statistical hypotheses, Statistical Research Memoirs I (1936) 113–137.
[5] https://ncss-wpengine.netdna-ssl.com/wp-content/themes/ncss/pdf/Procedures/NCSS/CUSUM_Charts.pdf.
APPENDIX C
Basics of heat transfer
On a microscopic scale, thermal energy is related to the kinetic energy of molecules. The greater a material's temperature, the greater the thermal agitation of its constituent molecules (manifested both in linear motion and in vibrational modes). It is natural for regions containing greater molecular kinetic energy to pass this energy to regions with less kinetic energy.
C.1 Introduction
Heat energy transfers between a solid and a fluid when there is a temperature difference between the fluid and the solid. This is known as "convection heat transfer." Generally, convection heat transfer cannot be ignored when there is significant fluid motion around the solid. An external field such as fluid buoyancy, driven by the temperature of the solid, can induce a fluid motion. This is known as "natural convection," and it is a strong function of the temperature difference between the solid and the fluid. Blowing air over the solid by using external devices such as fans and pumps can also generate a fluid motion. This is known as "forced convection."

Fluid mechanics plays a major role in determining convection heat transfer. For each kind of convection heat transfer, the fluid flow can be either laminar or turbulent. Laminar flow generally occurs at relatively low velocities in a smooth laminar boundary layer over smooth, small objects, while turbulent flow forms when the boundary layer is shedding or breaking due to higher velocities or rough geometries. As is common in fluid mechanics analysis, a number of dimensionless parameters are employed to describe convective heat transfer. A summary of these variables is included in the tables later.

In the simplest of terms, the discipline of heat transfer is concerned with only two things:
1. Temperature
2. Heat flow

Temperature represents the amount of thermal energy available, whereas heat flow represents the movement of thermal energy from place to place. Several material properties serve to modulate the heat transferred between two regions at differing temperatures. Examples include thermal conductivities, specific heats, material densities, fluid velocities, fluid viscosities, surface emissivities, and more. Taken together, these properties serve to make the solution of many heat transfer problems an involved process.
C.2 Heat transfer mechanisms
Heat transfer mechanisms can be grouped into three broad categories:
1. Conduction. Regions with greater molecular kinetic energy will pass their thermal energy to regions with less molecular energy through direct molecular collisions, a process known as conduction. In metals, a significant portion of the transported thermal energy is also carried by conduction-band electrons.
2. Convection. When heat conducts into a static fluid, it leads to a local volumetric expansion. As a result of gravity-induced pressure gradients, the expanded fluid parcel becomes buoyant and displaces, thereby transporting heat by fluid motion (i.e., convection) in addition to conduction. Such heat-induced fluid motion in initially static fluids is known as "free convection."
3. Radiation. All materials radiate thermal energy in amounts determined by their temperature, where the energy is carried by photons of light in the infrared and visible portions of the electromagnetic spectrum. When temperatures are uniform, the radiative flux between objects is in equilibrium, and no net thermal energy is exchanged. The balance is upset when temperatures are not uniform, and thermal energy is transported from surfaces of higher to surfaces of lower temperature.
C.3 Fourier law of heat conduction
When there exists a temperature gradient within a body, heat energy will flow from the region of high temperature to the region of low temperature. This phenomenon is known as conduction heat transfer and is described by Fourier's law (named after the French physicist Joseph Fourier):

$$q = -k\nabla T \tag{C.1}$$

This equation determines the heat flux vector q for a given temperature profile T and thermal conductivity k. The minus sign ensures that heat flows down the temperature gradient.
C.4 Heat equation (temperature determination)
The temperature profile within a body depends upon the rate of its internally generated heat, its capacity to store some of this heat, and its rate of thermal conduction to its boundaries (where the heat is transferred to the surrounding environment). Mathematically, this is stated by the heat equation as

$$\nabla^2 T - \frac{1}{\alpha}\frac{\partial T}{\partial t} = -\frac{1}{k}\,q_{\mathrm{gen}} \tag{C.2}$$
along with its boundary conditions, equations that prescribe either the temperature T on, or the heat flux q through, all of the body boundaries Ω:

$$T(\Omega_a) = T_{\mathrm{prescribed}}, \qquad q(\Omega_b) = q_{\mathrm{prescribed}}, \qquad \Omega_a \cup \Omega_b = \Omega \tag{C.3}$$
In the heat equation, the power generated per unit volume is expressed by $q_{\mathrm{gen}}$. The thermal diffusivity α is related to the thermal conductivity k, the specific heat c, and the density ρ by

$$\alpha = \frac{k}{\rho c} \tag{C.4}$$
For steady-state problems, the heat equation, Eq. (C.2), simplifies to

$$\nabla^2 T = -\frac{1}{k}\,q_{\mathrm{gen}} \tag{C.5}$$
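To make the heat equation concrete, the following Python sketch integrates its one-dimensional form with an explicit finite-difference scheme. All material properties, grid values, and boundary temperatures are assumptions chosen for the example; the time step is kept below the explicit stability limit Δt ≤ Δx²/(2α).

```python
# Explicit finite-difference sketch of the 1D heat equation (Eq. C.2):
#   dT/dt = alpha * d2T/dx2 + (alpha / k) * q_gen
# All values below are illustrative assumptions, not data from this text.

alpha = 1.0e-5      # thermal diffusivity, m^2/s
k = 50.0            # thermal conductivity, W/m-K
q_gen = 0.0         # volumetric heat generation, W/m^3
L, nx = 0.1, 51     # rod length (m) and node count
dx = L / (nx - 1)
dt = 0.4 * dx * dx / alpha      # below the stability limit dx^2 / (2 * alpha)

T = [20.0] * nx                 # initial temperature field, deg C
T[0], T[-1] = 100.0, 20.0       # prescribed boundary temperatures (Eq. C.3)

for _ in range(2000):           # march forward in time
    Tn = T[:]
    for i in range(1, nx - 1):
        d2T = (Tn[i + 1] - 2.0 * Tn[i] + Tn[i - 1]) / (dx * dx)
        T[i] = Tn[i] + dt * (alpha * d2T + alpha * q_gen / k)

print("midpoint temperature after 2000 steps:", round(T[nx // 2], 2))
```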
C.5 The heat equation derivation
The heat equation follows from the conservation of energy for a small element within the body:

Heat conducted in + Heat generated within = Heat conducted out + Change in energy stored within

We can combine the heats conducted in and out into one "net heat conducted out" term to give:

Net heat conducted out = Heat generated within − Change in energy stored within
Mathematically, this equation is expressed as

$$\nabla \cdot q = q_{\mathrm{gen}} - \frac{de}{dt} \tag{C.6}$$

The change in internal energy e is related to the body's ability to store heat by raising its temperature, given by

$$\frac{de}{dt} = \rho c \frac{dT}{dt} \tag{C.7}$$

One can substitute for q using Fourier's law of heat conduction from above to arrive at the heat equation:

$$\nabla \cdot \left(-k\nabla T\right) = q_{\mathrm{gen}} - \rho c \frac{\partial T}{\partial t} \quad\Longrightarrow\quad \nabla^2 T - \frac{1}{\alpha}\frac{\partial T}{\partial t} = -\frac{1}{k}\,q_{\mathrm{gen}} \tag{C.8}$$
C.6 Thermal hydraulics dimensionless numbers
There are some well-known dimensionless numbers, encountered in thermal hydraulics and fluid mechanics along with heat transfer analysis, that are briefly summarized here.
C.6.1 General convection (forced and free)
Prandtl number: $\mathrm{Pr} = \dfrac{\nu}{\alpha} = \dfrac{c_P \mu}{k}$ — ratio of the fluid velocity boundary layer thickness to the fluid temperature boundary layer thickness.
Nusselt number: $\mathrm{Nu} = \dfrac{hL}{k}$ — ratio of heat transferred from the surface to heat conducted away by the fluid.
C.6.2 Forced convection only
Reynolds number: $\mathrm{Re}_L = \dfrac{u_\infty L}{\nu} = \dfrac{\rho u_\infty L}{\mu}$ — ratio of fluid inertia stress to viscous stress (for flow over flat plates).
Reynolds number: $\mathrm{Re}_D = \dfrac{u_\infty D}{\nu} = \dfrac{\rho u_\infty D}{\mu}$ — Reynolds number for pipe flow.
Stanton number: $\mathrm{St} = \dfrac{\mathrm{Nu}}{\mathrm{Re}\cdot\mathrm{Pr}} = \dfrac{h}{\rho c_P u_\infty}$.
C.6.3 Free convection only
Grashof number: $\mathrm{Gr} = \dfrac{g\beta\Delta T L^3}{\nu^2}$ — ratio of fluid buoyancy stress to viscous stress.
Rayleigh number: $\mathrm{Ra} = \mathrm{Gr}\cdot\mathrm{Pr}$.
C.6.4 Newton's law of cooling
The essential ingredients of forced convection heat transfer analysis are given by Newton's law of cooling:

$$Q = hA(T_w - T_\infty) = hA\cdot\Delta T \tag{C.9}$$
The rate of heat Q transferred to the surrounding fluid is proportional to the object’s exposed area A, and the difference between the object temperature Tw and the fluid free-stream temperature T∞. The constant of proportionality h is termed the convection heat-transfer coefficient. Other terms describing h include film coefficient and film conductance.
C.7 Definition of symbols
Standard symbols are used throughout heat transfer analysis; they are listed here as follows:

1. Independent parameters for the fluid (quantity, symbol, units):
  Bulk temperature, T∞ (K)
  Kinematic viscosity, ν (m²/s)
  Coefficient of thermal expansion, β (1/K)
  Dynamic viscosity, μ (kg/m-s)
  Density, ρ (kg/m³)
  Thermal diffusivity, α (m²/s)
  Specific heat, cp (J/kg-K)
  Thermal conductivity, k (W/m-K)

2. Independent parameters for the object (quantity, symbol, units):
  Surface reference length, L (m)
  Surface diameter (for pipes), D (m)
  Surface (wall) temperature, Tw (K)

3. Dependent parameters (quantity, symbol, units):
  Surface-to-fluid temperature difference, ΔT (K)
  Heat transfer coefficient, h (W/m²-K)
C.8 Radiation heat transfer introduction
Radiation heat transfer is concerned with the exchange of thermal radiation energy between two or more bodies. Thermal radiation is defined as electromagnetic radiation in the wavelength range of 0.1 to 100 microns (which encompasses the visible light regime) and arises as a result of a temperature difference between two bodies. No medium need exist between the two bodies for heat transfer to take place, as is needed by conduction and convection. Rather, the intermediaries are photons, which travel at the speed of light.
The heat transferred into or out of an object by thermal radiation is a function of several components. These include its surface reflectivity, emissivity, surface area, temperature, and geometric orientation with respect to other thermally participating objects. In turn, an object's surface reflectivity and emissivity are functions of its surface conditions (roughness, finish, etc.) and composition.
C.9 Absorption and emissivity Radiation heat transfer must account for both incoming and outgoing thermal radiation. Incoming radiation can be either absorbed, reflected, or transmitted. This decomposition can be expressed by the relative fractions.
1 = ε reflected + ε absorbed + ε transmitted
Since most solid bodies are opaque to thermal radiation, we can ignore the transmission component and write
1 = ε reflected + ε absorbed
To account for a body’s outgoing radiation (or its emissive power, defined as the heat flux per unit time), one makes a comparison to a perfect body that emits as much thermal radiation as possible. Such an object is known as a blackbody, and the ratio of the actual emissive power E to the emissive power of a blackbody is defined as the surface emissivity ε
$$\varepsilon = \frac{E}{E_{\mathrm{blackbody}}} \tag{C.10}$$
By stating that a body’s surface emissivity is equal to its absorption fraction, Kirchhoff’s identity binds incoming and outgoing radiation into a useful dependent relationship
ε = ε absorbed
The heat emitted by a blackbody (per unit time) at an absolute temperature of T is given by the Stefan–Boltzmann law of thermal radiation.
$$Q = A\sigma T^4 = A\,E_{\mathrm{blackbody}} \tag{C.11}$$
where Q has units of watts, A is the total radiating area of the blackbody, and σ is the Stefan–Boltzmann constant. A small blackbody at absolute temperature T enclosed by a much larger blackbody at absolute temperature Te will transfer a net heat flow of
$$Q = A\sigma\left(T^4 - T_e^4\right) \tag{C.12}$$
Why is this a "net" heat flow? The small blackbody still emits a total heat flow given by the Stefan–Boltzmann law. However, the small blackbody also receives and absorbs all the thermal energy emitted by the large enclosing blackbody, which is a function of its temperature Te. The difference between these two heat flows is the net heat flow lost by the small blackbody.
C.10 Gray body radiation heat transfer
Bodies that emit less thermal radiation than a blackbody have surface emissivities ε less than 1. If the surface emissivity is independent of wavelength, then the body is called a "gray" body, in that no particular wavelength (or color) is favored. The net heat transfer from a small gray body at absolute temperature T with surface emissivity ε to a much larger enclosing gray (or black) body at absolute temperature Te is given by
$$Q = \varepsilon A\sigma\left(T^4 - T_e^4\right) \tag{C.13}$$
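Eq. (C.13) is a one-line calculation; the Python fragment below is a sketch with assumed example values (a 0.5 m², ε = 0.8 plate at 400 K in 300 K surroundings).

```python
SIGMA = 5.669e-8   # Stefan-Boltzmann constant used in this appendix, W/m^2-K^4

def gray_body_net_heat(eps, area, T, T_env):
    """Net radiative heat flow of Eq. (C.13): Q = eps * A * sigma * (T^4 - Te^4)."""
    return eps * area * SIGMA * (T**4 - T_env**4)

# Example with assumed values: roughly 397 W lost to the surroundings.
print(round(gray_body_net_heat(eps=0.8, area=0.5, T=400.0, T_env=300.0), 1))
```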
C.11 Radiation view factors
The previous equations for blackbodies and gray bodies assumed that the small body could see only the large enclosing body and nothing else. Hence, all radiation leaving the small body would reach the large body. For the case where two objects can see more than just each other, one must introduce a view factor F, and the heat transfer calculations become significantly more involved. The view factor F12 is used to parameterize the fraction of thermal power leaving object 1 and reaching object 2. Specifically, this quantity is equal to
$$\dot{Q}_{1\to 2} = A_1 F_{12}\,\varepsilon_1 \sigma T_1^4 \tag{C.14}$$
Likewise, the fraction of thermal power leaving object 2 and reaching object 1 is given by
$$\dot{Q}_{2\to 1} = A_2 F_{21}\,\varepsilon_2 \sigma T_2^4 \tag{C.15}$$
The case of two blackbodies in thermal equilibrium can be used to derive the following reciprocity relationship for view factors.
$$A_1 F_{12} = A_2 F_{21} \tag{C.16}$$
Thus, once one knows F12, F21 can be calculated immediately. Radiation view factors can be analytically derived for simple geometries and are tabulated in several references on heat transfer (e.g., Holman, 1986). They range from zero (e.g., two small bodies spaced very far apart) to 1 (e.g., one body is enclosed by the other).
C.12 Heat transfer between two finite gray bodies
The heat flow transferred from object 1 to object 2, where the two objects see only a fraction of each other and nothing else, is given by

$$Q = \left[\frac{1-\varepsilon_1}{\varepsilon_1} + \frac{1}{F_{12}} + \frac{1-\varepsilon_2}{\varepsilon_2}\frac{A_1}{A_2}\right]^{-1} A_1 \sigma\left(T_1^4 - T_2^4\right) \tag{C.17}$$
This equation demonstrates the usage of F12, but it represents a nonphysical case, since it would be impossible to position two finite objects such that they can see only a portion of each other and "nothing" else. On the contrary, the complementary view factor (1 − F12) cannot be neglected, as radiation energy sent in those directions must be accounted for in the thermal bottom line. A more realistic problem would consider the same two objects surrounded by a third surface that can absorb and re-emit thermal radiation yet is nonconducting. In this manner, all thermal energy that is absorbed by this third surface will be re-emitted; no energy can be removed from the system through this surface. The equation describing the heat flow from object 1 to object 2 for this arrangement is

$$Q = \left[\frac{1-\varepsilon_1}{\varepsilon_1} + \frac{A_1 + A_2 - 2A_1 F_{12}}{A_2 - A_1 F_{12}^{\,2}} + \frac{1-\varepsilon_2}{\varepsilon_2}\frac{A_1}{A_2}\right]^{-1} A_1 \sigma\left(T_1^4 - T_2^4\right) \tag{C.18}$$
C.13 Some definitions and symbols in radiation
Blackbody: A body with a surface emissivity of 1. Such a body will emit all of the thermal radiation it can (as described by theory) and will absorb 100% of the thermal radiation striking it. Most physical objects have surface emissivities less than 1 and hence do not have blackbody surface properties.
Density, ρ: The amount of mass per unit volume. In heat transfer problems, the density works with the specific heat to determine how much energy a body can store per unit increase in temperature. Its units are kg/m³.
Emissive power: The heat per unit time (and per unit area) emitted by an object. For a blackbody, this is given by the Stefan–Boltzmann relation σT⁴.
Gray body: A body that emits only a fraction of the thermal energy emitted by an equivalent blackbody. By definition, a gray body has a surface emissivity less than 1 and a surface reflectivity greater than zero.
Heat flux, q: The rate of heat flowing past a reference datum. Its units are W/m².
Internal energy, e: A measure of the internal energy stored within a material per unit volume. For most heat transfer problems, this energy consists just of thermal energy. The amount of thermal energy stored in a body is manifested by its temperature.
Radiation view factor, F12: The fraction of thermal energy leaving the surface of object 1 and reaching the surface of object 2, determined entirely from geometrical considerations. Stated in other words, F12 is the fraction of object 2 visible from the surface of object 1 and ranges from zero to 1. This quantity is also known as the radiation shape factor. It is dimensionless.
Rate of heat generation, qgen: A function of position that describes the rate of heat generation within a body. Typically, this new heat must be conducted to the body boundaries and removed via convection and/or radiation heat transfer. Its units are W/m³.
Specific heat, c: A material property that indicates the amount of energy a body stores for each degree increase in temperature, on a per unit mass basis. Its units are J/kg-K.
Stefan–Boltzmann constant, σ: Constant of proportionality used in radiation heat transfer, whose value is 5.669 × 10⁻⁸ W/m²-K⁴. For a blackbody, the heat flux emitted is given by the product of σ and the absolute temperature to the fourth power.
Surface emissivity, ε: The relative emissive power of a body compared to that of an ideal blackbody; in other words, the fraction of thermal radiation emitted compared to the amount emitted if the body were a blackbody. By definition, a blackbody has a surface emissivity of 1. The emissivity is also equal to the absorption coefficient, or the fraction of any thermal energy incident on a body that is absorbed.
Thermal conductivity, k: A material property that describes the rate at which heat flows within a body for a given temperature difference. Its units are W/m-K.
Thermal diffusivity, α: A material property that describes the rate at which heat diffuses through a body. It is a function of the body's thermal conductivity and its specific heat. A high thermal conductivity will increase the body's thermal diffusivity, as heat will be able to conduct across the body quickly. Conversely, a high specific heat will lower the body's thermal diffusivity, since heat is preferentially stored as internal energy within the body instead of being conducted through it. Its units are m²/s.
C.14 Forced laminar flow over an isothermal plate
Air (or any other fluid) forced over a hot plate will remove heat from the plate according to the rules of forced convection. If the air's velocity is slow enough and the plate length short enough, we can expect the flow in the boundary layer near the plate to be laminar. Under such a laminar flow assumption, the heat rate removed from the plate can be calculated. We also assume that the plate is maintained at a constant temperature (i.e., isothermal).
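As a sketch of such a calculation, the Python fragment below uses the classical average-Nusselt correlation for laminar flow over an isothermal flat plate, Nu = 0.664 Re^(1/2) Pr^(1/3) (valid for Re below roughly 5 × 10⁵), together with Newton's law of cooling, Eq. (C.9). The fluid properties and geometry are assumed example values for air near room temperature.

```python
# Laminar forced convection over an isothermal flat plate (illustrative values).
u_inf = 2.0                  # free-stream velocity, m/s
L, width = 0.1, 0.1          # plate length and width, m
T_w, T_inf = 80.0, 20.0      # wall and free-stream temperatures, deg C

nu = 1.6e-5                  # kinematic viscosity of air, m^2/s (approximate)
k_f = 0.026                  # thermal conductivity of air, W/m-K (approximate)
Pr = 0.71                    # Prandtl number of air

Re = u_inf * L / nu
assert Re < 5.0e5, "boundary layer would no longer be laminar over the whole plate"

Nu = 0.664 * Re**0.5 * Pr ** (1.0 / 3.0)   # average Nusselt number
h = Nu * k_f / L                           # average convection coefficient, W/m^2-K
Q = h * (L * width) * (T_w - T_inf)        # Newton's law of cooling, Eq. (C.9)
print(f"Re = {Re:.0f}, h = {h:.1f} W/m^2-K, Q = {Q:.1f} W")
```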
APPENDIX D
Permafrost phenomena
Permafrost is ground that remains continuously frozen for two or more years, located on land or under the ocean. Permafrost does not have to be the first layer on the ground; it can lie from an inch to more than a mile deep below the Earth's surface. The most common permafrost locations are in the Northern Hemisphere: 85% of Alaska, Greenland, Canada, and Siberia sits on top of a layer of permafrost, which amounts to almost a quarter of the Northern Hemisphere. Permafrost can also be found in the Southern Hemisphere, though only on mountaintops. Permafrost frequently occurs together with ground ice, but it can also be present in nonporous bedrock. Permafrost is formed from ice holding different combinations of soil, sand, and rock together (www.wikipedia.com).
D.1 Introduction
According to the National Aeronautics and Space Administration (NASA) and its ClimateKids site, the short answer to what permafrost is, is as follows [1]: Permafrost is any ground that remains completely frozen (32 °F [0 °C] or colder) for at least two years straight. These permanently frozen grounds are most common in regions with high mountains and in Earth's higher latitudes, near the North and South Poles. Permafrost covers large regions of the Earth. Almost a quarter of the land area in the Northern Hemisphere has permafrost underneath. Although the ground is frozen, permafrost regions are not always covered in snow. As Fig. D.1 shows, much of the Alaskan tundra is permafrost. The polygon shapes in the snow are a sign that this permafrost is thawing.
D.2 What is permafrost made of?
As illustrated in Fig. D.2, permafrost is made of a combination of soil, rocks, and sand that are held together by ice. The soil and ice in permafrost stay frozen all year long. Near the surface, permafrost soils also contain large quantities of organic carbon—a material left over from dead plants that could not decompose, or rot away, due to the cold. Lower permafrost layers contain soils made mostly of minerals.

A layer of soil on top of the permafrost does not stay frozen all year. This layer, called the active layer, thaws during the warm summer months and freezes again in the fall.
Fig. D.1 Alaskan tundra illustration. (Source: NASA/JPL-Caltech/Charles Miller).
In colder regions, the ground rarely thaws, even in the summer. There, the active layer is very thin—only 4 to 6 in. (10 to 15 cm). In warmer permafrost regions, the active layer can be several meters thick.
D.3 How does climate change affect permafrost?
As Earth's climate warms, the permafrost is thawing. That means the ice inside the permafrost melts, leaving behind water and soil. Thawing permafrost can have dramatic impacts on our planet and the things living on it. For example:
• Many northern villages are built on permafrost. When permafrost is frozen, it is harder than concrete. However, thawing permafrost can destroy houses, roads, and other infrastructure.
Fig. D.2 The layers of permafrost. Source: Benjamin Jones, USGS. Public domain (modified).
Fig. D.3 A block of thawing permafrost that fell into the ocean on Alaska’s Arctic coast. (Source: U.S. Geological Survey).
• When permafrost is frozen, the plant material in the soil—called organic carbon—cannot decompose or rot away. As permafrost thaws, microbes begin decomposing this material. This process releases greenhouse gases like carbon dioxide and methane into the atmosphere.
• When permafrost thaws, so do ancient bacteria and viruses in the ice and soil. These newly unfrozen microbes could make humans and animals very sick. Scientists have discovered microbes more than 400,000 years old in thawed permafrost.

Because of these dangers, scientists are closely monitoring Earth's permafrost (Fig. D.3 shows a block of thawing permafrost on Alaska's Arctic coast). Scientists use satellite observations from space to look at large regions of permafrost that would be difficult to study from the ground. NASA's Soil Moisture Active Passive (SMAP) mission orbits Earth collecting information about moisture in the soil. It measures the amount of water in the top 2 in. (5 cm) of soil everywhere on Earth's surface. It can also tell whether the water within the soil is frozen or thawed. SMAP's measurements will help scientists understand where and how quickly the permafrost is thawing [1].
D.4 Study and classification of permafrost
In contrast to the relative dearth of reports on frozen ground in North America prior to World War II, a vast literature on the engineering aspects of permafrost was available in Russian [2]. Beginning in 1942, Siemon William Muller [3] delved into the relevant Russian literature held by the Library of Congress and the US Geological Survey Library so that he could furnish the government with an engineering field guide and a technical report about permafrost by 1943, in which he coined the term as a contraction of "permanently frozen ground" [4]. Although originally classified (as US Army, Office of the Chief of Engineers, Strategic Engineering Study, no. 62, 1943) [4-8], in 1947 a revised report was released publicly, which is regarded as the first North American treatise on the subject [2].
Fig. D.4 Slope failure of permafrost soil, revealing ice lenses.
Fig. D.4 illustrates slope failure of permafrost soil, revealing ice lenses. Note that ice lenses, as illustrated in Fig. D.5, are bodies of ice formed when moisture diffused within soil or rock accumulates in a localized zone. The ice initially accumulates within small collocated pores or preexisting cracks and, as long as the conditions remain favorable, continues to collect in the ice layer or ice lens, wedging the soil or rock apart. Ice lenses grow parallel to the surface and from several centimeters to several decimeters (inches to feet) deep in the soil or rock. Studies from 1990 to the present have demonstrated that rock fracture by ice segregation (i.e., the fracture of intact rock by ice lenses that grow by drawing water from their surroundings during periods of sustained subfreezing temperatures) is a more effective weathering process than the freeze-thaw process, which older texts proposed.

Ice lenses play a key role in frost-induced heaving of soils and fracture of bedrock, which are fundamental to weathering in cold regions. Frost heaving creates debris and dramatically shapes landscapes into complex patterns. Although rock fracture in periglacial regions (alpine, subpolar, and polar) has often been attributed to the freezing and
Fig. D.5 Pingo formed in Arctic tundra as a result of periodically spaced ice lens formation. (Source: www.wikipedia.com).
Fig. D.6 The Tibetan Plateau Illustration. (Source: www.wikipedia.com).
volumetric expansion of water trapped within pores and cracks, the majority of frost heaving and bedrock fracture results instead from ice segregation and lens growth in the near-surface frozen regions. Ice segregation results in rock fracture and frost heave.
D.5 Permafrost extent
Permafrost is soil, rock, or sediment that is frozen for more than two consecutive years. In areas not overlain by ice, it exists beneath a layer of soil, rock, or sediment, which freezes and thaws annually and is called the "active layer" [9]. In practice, this means that permafrost occurs at a mean annual temperature of −2 °C (28.4 °F) or below. Active layer thickness varies with the season but is 0.3 to 4 m thick (shallow along the Arctic coast; deep in southern Siberia, illustrated in Fig. D.6, and on the Qinghai–Tibetan Plateau).

The extent of permafrost varies with the climate: in the Northern Hemisphere today, 24% of the ice-free land area, equivalent to 19 million square kilometers [10], is more or less influenced by permafrost. Of this area, slightly more than half is underlain by continuous permafrost, around 20% by discontinuous permafrost, and a little less than 30% by sporadic permafrost [11]. Note that the Tibetan Plateau lies between the Himalayan range to the south and the Taklamakan Desert to the north (composite image). Most of this area is found in Siberia, northern Canada, Alaska, and Greenland. Beneath the active layer, annual temperature swings of permafrost become smaller with depth. The deepest depth of permafrost occurs where geothermal heat maintains a temperature above freezing. Above that bottom limit there may be permafrost with a consistent annual temperature—"isothermal permafrost" [12].
D.6 Continuity of coverage
Permafrost typically forms in any climate where the mean annual air temperature is less than the freezing point of water. Exceptions are found in humid boreal forests, such as in northern Scandinavia and the northeastern part of European Russia west of the Urals, where snow acts as an insulating blanket. Glaciated areas may also be exceptions.
Fig. D.7 (Red lines) Seasonal temperature extremes; (dotted lines) average.
Since all glaciers are warmed at their base by geothermal heat, temperate glaciers, which are near the pressure-melting point throughout, may have liquid water at the interface with the ground and are therefore free of underlying permafrost [13]. "Fossil" cold anomalies in the geothermal gradient, in areas where deep permafrost developed during the Pleistocene, persist down to several hundred meters (Fig. D.7). This is evident from temperature measurements in boreholes in North America and Europe [14].
D.7 Alpine permafrost
Alpine permafrost occurs at elevations with low enough average temperatures to sustain perennially frozen ground; much alpine permafrost is discontinuous [15]. Estimates of the total area of alpine permafrost vary. Bockheim et al. [16] combined three sources and made tabulated estimates by region, totaling 3,560,000 km² (1,370,000 mi²). Alpine permafrost in the Andes has not been mapped [17]. Its extent has been modeled to assess the amount of water bound up in these areas [18]. In 2009, a researcher
from Alaska found permafrost at the 4,700 m (15,400 ft) level on Africa's highest peak, Mount Kilimanjaro, approximately 3° north of the equator [19].
D.8 Subsea permafrost
Subsea permafrost occurs beneath the seabed and exists in the continental shelves of the polar regions [20]. These areas formed during the past ice age, when a larger portion of Earth's water was bound up in ice sheets on land and when sea levels were low. As the ice sheets melted to again become seawater, the permafrost became submerged shelves under relatively warm and salty boundary conditions compared to surface permafrost. Therefore, subsea permafrost exists in conditions that lead to its diminishment. According to Osterkamp, subsea permafrost is a factor in the "design, construction, and operation of coastal facilities, structures found on the seabed, artificial islands, subsea pipelines, and wells drilled for exploration and production" [21]. It also contains gas hydrates in places, which are a "potentially abundant source of energy" but may also destabilize as subsea permafrost warms and thaws, producing large amounts of methane gas, a potent greenhouse gas [21-22]. More details and other information about permafrost can be found in the Wikipedia article cited as Ref. [2].
References
[1] J.P. Holman, Thermodynamics, fourth ed., McGraw-Hill, New York, 1988.
[2] https://en.wikipedia.org/wiki/Permafrost.
[3] https://en.wikipedia.org/wiki/Siemon_Muller.
[4] W.H. Jesse, Frozen in time, permafrost and engineering problems review, Arctic 63 (4) (December 2010) 477, doi:10.14430/arctic3340.
[5] L.L. Ray, Permafrost, USGS [United States Geological Survey] Library Publications Warehouse. (Archived from the original on May 02, 2017. Retrieved November 19, 2018.)
[6] US Geological Survey, United States Army Corps of Engineers, Strategic Intelligence Branch, Permafrost or permanently frozen ground and related engineering problems, Strategic Engineering Study 62 (1943) p. 231. OCLC 22879846.
[7] Occurrences on Google Books.
[8] S.W. Muller, Permafrost, or, Permanently Frozen Ground and Related Engineering Problems, Edwards, Ann Arbor, Michigan, 1947. OCLC 1646047.
[9] International Permafrost Association Staff, What is Permafrost? (Archived from the original on November 08, 2014. Retrieved February 02, 2014.)
[10] C. Tarnocai, et al., Soil organic carbon pools in the northern circumpolar permafrost region, Global Biogeochemical Cycles 23 (2) (2009) GB2023. Bibcode:2009GBioC..23.2023T, doi:10.1029/2008gb003327.
[11] A.J. Heginbottom, J. Brown, O. Humlum, H. Svensson, State of the earth's cryosphere at the beginning of the 21st century: glaciers, global snow cover, floating ice, and permafrost and periglacial environments, p. A435.
[12] G. Delisle, Near-surface permafrost degradation: How severe during the 21st century?, Geophysical Research Letters 34 (L09503) (2007) 4. Bibcode:2007GeoRL..34.9503D, doi:10.1029/2007GL029323.
[13] R.P. Sharp, Living Ice: Understanding Glaciers and Glaciation, Cambridge University Press, London, UK, 1988, p. 27. ISBN 978-0-521-33009-1.
[14] J. Majorowicz, Permafrost at the ice base of recent Pleistocene glaciations—inferences from borehole temperature profiles, Bulletin of Geography 5 (2012) 7–28.
[15] Alpine permafrost, Encyclopedia Britannica. (Retrieved April 16, 2020.)
[16] J.G. Bockheim, J.S. Munroe, Organic carbon pools and genesis of alpine soils with permafrost: a review, Arctic, Antarctic, and Alpine Research 46 (4) (2014) 987–1006, doi:10.1657/1938-4246-46.4.987. (Archived from the original on September 09, 2016. Retrieved April 25, 2016.)
[17] G. Azocar, Modeling of Permafrost Distribution in the Semi-arid Chilean Andes, University of Waterloo, Waterloo, Ontario, 2014. (Archived from the original on 2016-05-30. Retrieved 2016-04-24.)
[18] L. Ruiz, D.T. Liaudat, Mountain Permafrost Distribution in the Andes of Chubut (Argentina) Based on a Statistical Model (PDF), Tenth International Conference on Permafrost, Mendoza, Argentina, Instituto Argentino de Nivología, Glaciología y Ciencias Ambientales, 2012, pp. 365–370. (Archived from the original May 13, 2016. Retrieved April 24, 2016.)
[19] N. Rozell, Permafrost near equator; hummingbirds near subarctic, Capitol City Weekly, Juneau, Alaska, November 18, 2009.
[20] International Permafrost Association Editors, What is Permafrost? (Archived from the original August 11, 2014. Retrieved August 11, 2014.)
[21] T.E. Osterkamp, Sub-sea permafrost, Encyclopedia of Ocean Sciences (2001) 2902–2912, doi:10.1006/rwos.2001.0008. ISBN 9780122274305.
[22] IPCC AR4, Climate Change 2007: Working Group I: The Physical Science Basis, 2007. (Retrieved April 12, 2014.)
APPENDIX E
Glossary
The following is a list of high-level glossary definitions that are useful for understanding this book.

Automatic monitoring and targeting (AM&T): Products specifically designed to measure, record, and distribute energy data, and to analyze and report on energy consumption.

CUSUM analysis: The difference between the baseline (expected consumption) and the actual consumption of energy over a period of time; provides a trendline and shows variations in performance.

Degree days (heating or cooling): The difference between the outside temperature and a static theoretical indoor temperature (often referred to as the base temperature) that is comfortable for carrying out everyday activities without the need for heating or cooling. For example, if the outside temperature is higher than the base temperature, the heating system should not need to be turned on, and the heating degree days equal zero.

Discounted cash flow (DCF): Uses future cash flow projections and discounts them to arrive at a present value estimate of an investment opportunity.

Energy audit: An inspection, survey, and analysis of energy use within a building, process, or other energy system, undertaken to identify areas of wasted energy and the corresponding opportunities to improve energy efficiency.

Energy efficiency: The use of the minimum amount of energy while maintaining a desired level of economic activity or service; the amount of useful output achieved per unit of energy input.

Energy management: The systematic approach to continuous improvement of energy efficiency within an organization.

Energy performance: A measure of the energy efficiency or energy use of an organization, process, building, or other assets; can include aspects such as shifting energy demand or using waste energy.

Energy performance indicators (EnPIs): Metrics by which an organization can relate its energy demand to the various driving factors that have an impact on that consumption.

Energy policy of an organization: A written statement of senior management's commitment to managing energy. Often it forms part of a wider Corporate Social Responsibility (CSR) policy. For large organizations an energy policy should be no more than two pages long; a few paragraphs may be sufficient for smaller organizations.
Energy strategy: A working document setting out how energy will be managed in an organization. It should contain an action plan of tasks, which will initially involve understanding the organization's current position and establishing the management framework.

Exception report: A document that identifies what is abnormal or not as forecasted and requires attention or explanation.

Internal rate of return (IRR): The discount rate at which the net present value of costs equals the net present value of profits for a particular project or investment. A valuation method used to estimate the profitability of an investment opportunity.

Life cycle cost analysis (LCCA): A tool to determine the most cost-effective option among different competing alternatives to purchase, own, operate, maintain, and finally dispose of an object or process.

Measurement and verification (M&V): The process of quantifying savings delivered through an energy-saving action or measure; enables savings to be properly evaluated.

Net present value (NPV): The sum of the values of incoming and outgoing cash flows, as valued at specified times. This takes into account the time value of money, where a cash flow today is worth a different amount from the same cash flow in the future. This difference in values is due to the interest-earning potential of money and can also take into account inflation and other variables.

Simple payback period (SPP): The period of time, measured in years or operating hours, required to recover the funds expended in an investment.
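Several of the financial terms above (DCF/NPV and the simple payback period) reduce to short calculations. The Python sketch below uses made-up cash flows purely to illustrate the definitions.

```python
def npv(rate, cash_flows):
    """Net present value; cash_flows[t] is the cash flow at the end of year t."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def simple_payback(initial_cost, annual_saving):
    """Simple payback period: years required to recover the funds expended."""
    return initial_cost / annual_saving

# Illustrative example: a 10,000 investment that saves 2,500 per year for 6 years.
flows = [-10000] + [2500] * 6
print("NPV at 8%:", round(npv(0.08, flows), 2))      # positive => worthwhile at 8%
print("Simple payback:", simple_payback(10000, 2500), "years")
```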
Index
Page numbers followed by f indicate figures.
A
Absolute pressure, 435 ADELE, 457, 462 Advanced Adiabatic Compressed Air Energy Storage (AA-CAES), 459, 462 Alabama Electric Cooperative (AEC), 460 Alternating current (AC), 284 Artificial intelligence (AI), 510 Artificial neural networks (ANNs), 517 Atomic Energy Commission (AEC), 323 Automatic monitoring and targeting (AM&T), 333 Axial Power Rating (APC), 391
B Back forward algorithm, 517 Big Data (BD), 517 Billion cubic feet per Day (BCFD), 35 Boiling water reactors (BWR), 90, 132f British Petroleum (BP), 322 Brooklyn Microgrid (BMG), 280 Business Resilience System (BRS), 519, 545 Business Risk (BR), 545 Business-to-business (B2B), 337
C
California Energy Commission (CEC), 470 California Public Utilities Commission (CPUC), 470 Capillary Pumped Loop (CPL), 381 Carbon Capture and Sequestration (CCS), 354 Chief Information Officer (CIO), 529 Chief Strategy Officer (CSO), 529 Closed systems, 438 Closed-Cycle Cooling System, 128 Coal Power Plants (CPPs), 134 Coiled tube air heat exchanger (CTAH), 467 Combined cycle, 74 Combined cycle combustion turbine (CCCT), 362 Combined cycle gas turbine technology (CCGT), 86
Combined-cycle gas turbine (CCGT), 274 Combustion turbine (CT), 362 Compressed air energy storage (CAES), 460 Computational Fluid Dynamics (CFD), 444 Conduction, 422 Constant conductance heat pipe (CCHP), 382 Control valves (CV), 522 Cooling towers, 60 Critical point, 133 Cumulative sum control chart (CUSUM), 342 Cyber attacks, 511, 513 Cyber physical systems (CPS), 510 Cyber surety, 517
D
Deep learning (DL), 517 Defense-in-depth (DiD), 516 Demilitarized zones (DMZs), 524 Deming cycle, 329 Density, 71, 198 Department of Energy (DOE), 33, 56, 121, 123 Department of Homeland Security (DHS), 520 Deutsches Elektronen-Synchrotron (DESY), 298 Device-to-machine (D2M), 519 Direct current (DC), 333 Distributed control systems (DCS), 514, 540, 541f Dry region, 133
E
Economic and Managements (EMs), 401 Electric Power Research Institute (EPRI), 460 Electric vehicles (EVs), 385f, 393, 408 Emergency response facility (ERF), 510 Energy information administration (EIA), 223, 238 Energy insight (EI), 321 Energy management system (EnMS), 321, 324, 326, 329, 330f Energy performance indicators (EnPIs), 333–334 Energy Savings Opportunity Scheme (ESOS), 327 Engineering safety feature (ESF), 510 English (E) system, 431
Enriched fuel, 61 Enrico Fermi, 45, 45f, 48, 53, 53f Environmental Management Systems (EMS), 329 Environmental Protection Agency (EPA), 88 European Fusion Development Agreement (EFDA), 186 European Transmission System Operation (ETSO), 281 European Union (EU), 186, 326 European Union’s Energy Efficiency Directive, 326
F
Fast breeder reactors (FBR), 59, 90 Fast ignition (FI), 208–209 Fast Initial Restart (FIR), 555 Federal Energy Regulatory Commission (FERC), 256, 319 Feed Forward Neural Network, 517 Final control elements (FCEs), 522 Firebrick Resistance-Heated Energy Storage (FIRES), 466, 467f Fission, 44, 47 Fixed conductance heat pipe (FCHP), 382 Florida Reliability Coordinating Council (FRCC), 264 Fluoride salt-cooled High-temperature Reactor (FHR), 43, 67 Forschungszentrum Karlsruhe (FZK), 298
G Gage pressure, 435 Gas fast reactor (GFR), 488 General Design Criteria (GDC), 67 General Electric (GE), 462, 463f Generation IV, 2, 58 Generation IV International Forum (GIF), 94, 488 Gen-III Reactors, 487 German Aerospace Center (DLR), 462 Global Business Network (GBN), 88 Greenhouse gas (GHG), 249 Gross domestic product (GDP), 1
H
Hanford, 50 Heat exchanger (HX), 76, 99 Heat flux vector, 371 Heat Recovery Steam Generator (HRSG), 145 Heating, Ventilating, and Air Conditioning (HVAC), 327, 371 Heavy water (D2O), 59 Heavy Water Reactors (HWR), 61 Helically Symmetric Experiment (HSX), 197f High Temperature Gas Cooled Reactor (HTGR), 90, 466 High-Temperature Superconducting (HTS), 352 High Voltage Direct Current (HVDC), 283, 285f Human Development Index (HDI), 1, 3f Hybrid Energy System (HES), 435 Hybrid Renewable Energy Systems (HRES), 447
I
Idaho National Laboratory (INL), 78, 102, 495 Independent System Operator (ISO), 264 Individuals and Moving Range (I-MR), 553 Industrial control systems (ICS), 520, 520f, 525f, 533f Industrial Network Perimeter (INP), 542 Inertial Confinement Fusion (ICF), 186, 188f, 206–207 Information Administration (EIA), 223, 224f, 238, 244 Information and Communications Technology (ICT), 450 Information Technology (IT), 519 Information Technology Laboratory (ITL), 527 Inherently risky, 523–524 Injection Laser System (ILS), 217 Instrumentation and Control (I&C), 510, 513f Integrated circuits (ICs), 520 Intensive variables, 433 Intermediate heat exchanger (IHX), 98, 146 International Atomic Energy Agency (IAEA), 64, 94, 115, 121 International Energy Agency (IEA), 7, 88 International Renewable Energy Agency (IRENA), 496 International Standards Organization (ISO), 331 International Thermonuclear Experimental Reactor (ITER), 187 Internet of Things (IoT), 256, 517 Intrusion Detection Systems (IDS), 513
J Japan Atomic Energy Research Institute (JAERI), 297 Joint European Torus (JET), 187
K
Kelvins degree, 193 Kinetic energy recovery system (KERS), 295 Kinetic energy storage (KES), 293, 293f
L
Lawrence Livermore National Laboratory (LLNL), 188, 215f Lawson criterion, 191 Lead Fast Reactor (LFR), 488 Leo Szilard, 48 Levitated Dipole Experiment (LDX), 203 Licensing Technical Support (LTS), 124 Life cycle cost analysis (LCCA), 336 Light emitting diode (LED), 336 Light water (H2O), 59 Light water reactor (LWR), 55, 131f Line replaceable units (LRUs), 218 Liquid Controlled Heat Pipe (LCHP), 383 Liquid metal fast breeder reactor (LMFBR), 74, 377 Liquid natural gas (LNG), 259 Lithium-ion (Li-ion), 303 Load-serving entity (LSE), 265 Local Area Network (LAN), 532, 540f Long-run marginal cost (LRMC), 259 Loop heat pipe (LHP), 376, 379f Los Alamos, 50, 52
M
Machine learning (ML), 517 Machine-to-human (M2H), 517 Machine-to-machine (M2M), 509 Magnetic confinement fusion (MCF), 189 Magnetic field effect (MFE), 191 Manhattan Project, 53 Manipulated Variable (MV), 521 Maximum expected operating pressures (MEOP), 402 Measurement and verification (M&V), 338 Metal oxide silicon (MOS), 520 Micro nuclear reactor (MNR), 53 Microbial Fuel Cells (MFCs), 318 Million barrels per day oil equivalent (MBDOE), 11 Minimum Offer Price Rule (MOPR), 264 Mirror fusion test facility (MFTF), 193 Mixtures of pure substances, 433 Molten salt reactor (MSR), 67 Monitoring and Targeting (M&T), 324, 332

N
National Aeronautics and Space Administration (NASA), 63 National Ignition Facility (NIF), 188, 215f National Institute of Standards and Technology (NIST), 527 National Renewable Energy Laboratory (NREL), 504 Nationally determined contributions (NDCs), 2 Natural gas, 6 Natural gas plant liquids (NGPL), 230, 232f Near Infrared (NIR), 191 New Mexico, 50 Newtonian mechanics, 430 Next Generation Nuclear Plant (NGNP), 95, 465 Non-condensable Gas (NCG), 384 North American Electric Reliability Council (NERC), 516 North American Electric Reliability Corporation (NERC), 281 Nuclear air-Brayton combined cycle (NACC), 129, 467f Nuclear air-Brayton recuperated cycle (NARC), 129 Nuclear criticality, 52 Nuclear energy (NE), 53 Nuclear power plant (NPP), 44, 62 Nuclear Regulatory Commission (NRC), 68, 123
O
O.D. (Outer Diameter), 373 Oak Ridge, 50–51 Observation–Plan–Do–Check–Act (OPDCA), 329 Open systems, 306, 509 Open-air cooling system, 128 Operating systems (OS), 306, 509 Operation and maintenance (O&M), 366 Operational technology (OT), 519 Organization for Economic Co-operation & Development (OECD), 1, 107, 322 Organization of Petroleum Exporting Countries (OPEC), 106 Organizations/Independent System Operators (RTOs/ISOs), 264
P
Parts Per Million (PPM), 306, 485 Pennsylvania–New Jersey–Maryland (PJM), 259
Permafrost, 405 Personal Digital Assistant (PDA), 510 Photovoltaic (PV), 30, 277 Physical Security Protection (PSP), 510 Plan–Do–Check–Act (PDCA), 324, 329f Plant Control Systems (PCS), 510 Plug-in Hybrid Electric Vehicle (PHEV), 303 Point-of-Care (POC), 318 Polar direct drive (PDD), 221 Power Conditioning System (PCS), 295 Power Conversion System (PCS), 95, 123, 129 Power Purchase Agreements (PPAs), 31, 336 Preamplifier Modules (PAMs), 217 Pressurized Water Reactor (PWR), 61, 61f, 89, 119, 485 Princeton Plasma Physics Laboratory (PPPL), 197–198 Probabilistic Risk Assessment (PRA), 485 Process variables (PVs), 522 Program on Energy and Sustainable Development (PESD), 477 Programmable Logic Controllers (PLC), 514, 542, 543f Programmable Read-Only Memory (PROM), 197–198 Proportional-Integral-Derivative (PID), 542 Protection system (RPS), 125, 286, 512 Public relations (PR), 307 Pure substances, 433
Q Quality Assurance (QA), 430
R Radwaste Treatment System (RTS), 510 Rayleigh–Taylor (RT) instabilities, 212 Reactor Kinetics, 65 Reactor protection system (RPS), 510 Reactor Safety Study (RSS), 70 Reliability-critical network, 510 Remote Access Trojans (RATs), 545 Remote Terminal Units (RTUs), 523 Renewable energy power system (REPS), 448 Research and Development (R&D), 488 Return On Investment (ROI), 124 Reversed Field Pinch (RFP), 200 Rheinisch-Westfälisches Elektrizitätswerk (RWE), 462
S
Saturation dome, 133 Security Operations Centers (SOCs), 542 SERC Reliability Corporation (SERC), 263–264 Set Points (SPs), 521 Set Value (SV), 521 Short-Run Marginal Cost (SRMC), 259 Simple cycle (SC), 362 Simple payback period (SPP), 336 Small Modular Reactor (SMR), 53, 64–65, 119 Sodium-Cooled Fast Reactor (SFR), 94, 488 Solar PV (SPV), 31 Southern Company (SOCO), 264 Specific heat, 150 Specific volume, 133, 416 Starting Lighting Ignition (SLI), 302 Steady Flow-Steady State (SFSS), 420 Super Critical Water Reactor (SCWR), 488 Superconducting Magnetic Energy Storage (SMES), 295, 296f Supervisory Control and Data Acquisition (SCADA), 522 System Codes, 439 System International (SI), 431
T Tandem Mirror Experiment (TMX), 193 Temperature, 76 Tennessee, 50 Tennessee Valley Authority (TVA), 264 Thermal conductivity, 423 Thermal energy, 128, 188 Thermal Hydraulics (T/H), 429 Thermal reactor, 59, 61 Thermonuclear Experimental Research, 579 Tokamak à Configuration Variable (TCV), 190 Tokyo Electric Power Co. LTD (TEPCO), 313 Total Cost of Ownership (TCO), 130 Transmission Organizations (RTOs), 264 Turbine Control System (TCS), 510
U
Ultraviolet (UV), 220 Unidirectional Security Gateways (USG), 542 Uniform Flow-Uniform State (UFUS), 420 Uninterruptible Power Supply (UPS), 294, 294f, 300
United Nations Development Program (UNDP), 1 United Nations Framework Convention on Climate Change (UNFCCC), 4f United States, 33 United States Air Force (USAF), 298 United States of America (USA), 480, 520 University of Chicago, 45 Use Case (UC), 7, 519
V Vacuum pressure, 435 Value-Added Tax (VAT), 274
Variable Conductance Heat Pipe (VCHP), 375, 386 Very High Temperature Reactor (VHTR), 94, 488f VYCON Direct Connect (VDC), 293
W
Washington, 50 Water-cooled graphite-moderated reactor, 90 Wendelstein 7-X experiment, 194
Z Zero Energy Thermonuclear Assembly (ZETA), 194