Women in Engineering and Science
Jill S. Tietjen Marija D. Ilic Lina Bertling Tjernberg Noel N. Schulz Editors
Women in Power
Research and Development Advances in Electric Power Systems
Women in Engineering and Science Series Editor Jill S. Tietjen, Greenwood Village, CO, USA
The Springer Women in Engineering and Science series highlights women’s accomplishments in these critical fields. The foundational volume in the series provides a broad overview of women’s multi-faceted contributions to engineering over the last century. Each subsequent volume is dedicated to illuminating women’s research and achievements in key, targeted areas of contemporary engineering and science endeavors. The goal for the series is to raise awareness of the pivotal work women are undertaking in areas of keen importance to our global community.
Editors

Jill S. Tietjen
Technically Speaking, Inc.
Greenwood Village, CO, USA

Marija D. Ilic
Massachusetts Institute of Technology
Cambridge, MA, USA

Lina Bertling Tjernberg
KTH Royal Institute of Technology
Stockholm, Sweden

Noel N. Schulz
School of Electrical Engineering and Computer Science
Washington State University Pullman
Pullman, WA, USA
ISSN 2509-6427 ISSN 2509-6435 (electronic)
Women in Engineering and Science
ISBN 978-3-031-29723-6 ISBN 978-3-031-29724-3 (eBook)
https://doi.org/10.1007/978-3-031-29724-3

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Foreword
Women have been impacting power engineering for over 100 years. In 1922, Edith Clarke became the first woman to be professionally employed as an electrical engineer in the United States; she later became the country’s first female professor of electrical engineering. She was the first woman to deliver a paper before, and to be named a Fellow of, the American Institute of Electrical Engineers (which became the Institute of Electrical and Electronics Engineers, IEEE, in 1963). She specialized in electrical power system analysis and wrote the famous and influential book Circuit Analysis of A-C Power Systems. For the last 85 years, every power engineering student has learned the Clarke transformation. Her fundamental work on symmetrical components enabled the power engineering community to analyze large power systems and has been applied in energy management systems at power grid control centers across the world.

I learned the Clarke equations in 1984 as a junior in college. I used them right away at my first job – designing generator exciters and applying flexible AC transmission system (FACTS) technologies to the power grid, with several technical papers published in IEEE journals. Through IEEE publications, conferences, and volunteer work, I met many inspiring women in power engineering who made important contributions, including Wanda Reder, the first female President (2008–2009) of the IEEE Power & Energy Society (PES) in its 124-year history, and Noel Schulz, the second female PES President (2012–2013). President Schulz initiated this book project.

Research shows that belonging, mentorship, and seeing someone like yourself in positions of leadership are key to keeping women in the engineering field. In 2012, President Reder and President Schulz established the PES Women in Power Committee, which nominated me as a candidate for the PES Secretary position in 2015.
I was elected and served as the PES Secretary from 2016 to 2019, PES President-Elect from 2020 to 2021, and became the third female PES President for 2022–2023. In addition, the 2024–2025 PES President will be a woman as well – two consecutive female Presidents – the first for the Society.
The power industry is undergoing one of its most dramatic transformations in a century, driven by the need to reduce dependence on fossil fuels for generation, integrate clean energy technologies, and adapt to the realities of climate change. “Where are all the women in this energy transformation?” This book takes a look at innovative women in the power industry and the technological advances they have made, ranging from shipboard power systems to converters to microgrids to machine learning. It is also a book about innovation and an exploration of the most creative (female) minds in the field, in areas such as electricity regulation and energy justice.

Even as we accomplish a successful energy transition to a low-carbon power system, the public still expects the lights to stay on as its reliance on electricity grows. Lina Bertling Tjernberg, a former PES Secretary (2014–2015) who coached me on how to be effective in that role, shares her findings in her chapter Reliability-Centered Asset Management with Models for Maintenance Optimization and Predictive Maintenance: Including Case Studies for Wind Turbines, and discusses how policy makers should update existing planning, investment, and operational frameworks to maintain reliability.

Severe weather events (flooding, drought, strong winds, ice, snowstorms, extreme heat or cold, wildfires, earthquakes, etc.) have been happening more frequently and with increasing severity. Anamika Dubey’s chapter Preparing the Power Grid for Extreme Weather Events: Resilience Modeling and Optimization addresses how to reduce extreme weather impacts on the power grid. The decarbonization of the electric power industry also requires the development of a workforce equipped with the skills needed to run the new system, e.g., offshore wind.
Furthermore, according to the National Center for Science and Engineering Statistics, women account for over half of the college-educated workforce, but only 16% of engineers and 27% of computer and mathematical scientists. As the Society of Women Engineers reports, over 32% of women switch out of degrees in STEM (science, technology, engineering, and mathematics), and female engineers earn 10% less than male engineers. The chapter written by Henriksen, Schmidt, and Farsee offers advice on Attracting, Training, and Retaining a Skilled, Diverse Energy Workforce in the USA.

Jill Tietjen’s chapter Those Electrifying Women! is a collection of stories from female power leaders, including one of the women who inspired me and whom I mentioned above, Edith Clarke. It further shines a spotlight on women’s accomplishments in the field and inspires up-and-coming female engineers to join in. Geared toward women who are considering jumping into the electric utility industry, this book is a great read if you are just launching your career or looking for a boost further up the career ladder. Or you may want to get a few copies of this book to pass on to your mentees or to women you want to encourage into the field.
Dr. Jessica Bian, P.E.
President, IEEE Power & Energy Society
Vice President, Grid-X Partners
Preface
Having spent my career in the power systems engineering field and knowing that electrification has been called the greatest engineering achievement of the twentieth century (by the National Academy of Engineering), I was delighted to consider serving as the volume editor for the Women in Power volume of the Springer Women in Engineering and Science series. There are now four co-volume editors, including myself, and this Women in Power volume is a reality. Thank you to Jill, Marija, and Lina.

The idea for this volume came from a meeting with Jill in Denver during the 2015 IEEE Power and Energy Society Annual Conference. My original vision was that the topics addressed would cover the breadth of the field, from generation to transmission to distribution to policy to research. And so it came to be. Women pioneers in the field are profiled. Policy areas, including energy justice, workforce concerns, and electricity regulation, have chapters. Nuclear power is discussed, as are substation automation and system protection. Areas of research from around the world, from preparing the grid for resiliency to evaluating transient stability, are presented. Authors from the United States are joined by those from Sweden, Belgium, India, China, and other countries.

Electricity makes our modern world possible. Women contribute to that infrastructure miracle in myriad ways. Enjoy reading about their significant accomplishments. I think as you read their bios, you will understand the satisfaction that they gained over their careers – knowing that they were making meaningful contributions to the quality of life and standard of living around the world.
Noel N. Schulz, Ph.D.
Edmund O. Schweitzer III Chair in Power Apparatus and Systems, School of Electrical Engineering and Computer Science, Washington State University Pullman
Chief Scientist, Pacific Northwest National Laboratory
Pullman, WA, USA
Noel Schulz Co-Volume Editor Biography
Perhaps it was fated that, growing up in Blacksburg, Virginia, as the daughter of an electrical engineer and an elementary school teacher, I would not only attend Virginia Tech but also graduate with bachelor’s and master’s degrees in electrical engineering. Like all of us, I am multi-dimensional and was a walk-on tennis player during my time in college. Later, I earned my Ph.D. from the University of Minnesota-Twin Cities while my kids were young.

In the third grade, I toured a manufacturing facility with my family, and when I was in middle school, my dad had a Heathkit TV set and I helped him solder parts. I even had resistor earrings with different colors. I always liked math and science. I really like power engineering because it is a field where you feel you can make a difference – because there are electrical utilities everywhere. I love teaching engineering because you get to work on engineering problems, solving them creatively with other people.

Mentoring is a big part of my job as a faculty member. I help connect students with internships, projects, and promotions. I consider myself an “academic mom” to my students, whether they are undergraduate, master’s, or doctoral students. I still have former students who call me to catch up and give me their life updates.

Ensuring diversity in the engineering workforce is very important to me, and I have worked for years to bring more women and multicultural students into science, technology, engineering, and mathematics professions. I was gratified to receive the 2014 Institute of Electrical and Electronics Engineers (IEEE) Education Society Hewlett-Packard Harriet B. Rigas Award for outstanding contributions in advancing recruitment and retention of women in IEEE. I am also concerned about the power engineering workforce and put effort into that area as well. As IEEE Power and Energy Society President in 2012–2013, I helped launch Women in Power activities.
Part of encouraging future students to pursue STEM careers is showing them that you can integrate work and life. I have a calligraphy sign that says “Love an Engineer, they build families too!” By working together through mentors and networks and by sharing best practices, we can have productive careers and families too.
I enjoy taking care of my corgi and making handmade cards. I also love to travel with my husband, Kirk. I am very proud of my two sons, Tim and Andrew.
Left to right: Ward Nunnally (brother), Andrew Schulz, Kirk Schulz, Noel Schulz, Tim Schulz, Joan Nunnally (mother) and Charles “Butch” Nunnally (father)
Contents
1 Those Electrifying Women! . . . . . 1
Jill S. Tietjen

2 Attracting, Training, and Retaining a Skilled, Diverse Energy Workforce in the USA . . . . . 21
Missy Henriksen, Rosa Schmidt, and Angie Farsee

3 Electricity Regulation in the USA . . . . . 39
Angela V. (Angie) Sheffield and Christina V. Bigelow

4 Algorithms for Energy Justice . . . . . 67
Johanna L. Mathieu

Part I Planning and Generation

5 Reliability-Centered Asset Management with Models for Maintenance Optimization and Predictive Maintenance: Including Case Studies for Wind Turbines . . . . . 87
Lina Bertling Tjernberg

6 Nuclear Power in the Twenty-First Century? – A Personal View . . . . . 157
Jasmina Vujic

7 Security of Electricity Supply in the Future Intelligent and Integrated Power System . . . . . 189
Gerd H. Kjølle

8 Preparing the Power Grid for Extreme Weather Events: Resilience Modeling and Optimization . . . . . 209
Anamika Dubey

Part II Operation, Automation and Control: End-to-End Power Systems

9 Power Systems Operation and Control: Contributions of the Liège Group, 1970–2000 . . . . . 247
Mania Pavella, Louis Wehenkel, and Damien Ernst

10 Reinforcement Learning for Decision-Making and Control in Power Systems . . . . . 265
Xin Chen, Guannan Qu, Yujie Tang, Steven Low, and Na Li

11 System Protection . . . . . 287
Ariana Hargrave

12 Interaction Variables-Based Modeling and Control of Energy Dynamics . . . . . 307
Marija D. Ilic

13 Facilitating Interdisciplinary Research in Smart Grid . . . . . 351
Yanli Liu

Part III Operation, Automation and Control: Local Distribution Power Systems

14 Substation Automation . . . . . 377
Mini Shaji Thomas

15 Electric Power Distribution Systems: Time Window Selection and Feasible Control Sequence Methods for Advanced Distribution Automation . . . . . 399
Karen Miu Miller and Nicole Segal

16 Intelligent and Self-Sufficient Control for Time Controllable Consumers in Low-Voltage Grids . . . . . 419
Stephanie Uhrig, Sonja Baumgartner, and Veronika Barta

17 Discrete-Time Sliding Mode Control for Electrical Drives and Power Converters . . . . . 443
Č. Milosavljević, S. Huseinbegović, B. Peruničić-Draženović, B. Veselić, and M. Petronijević

18 Self-Healing Shipboard Power Systems . . . . . 467
Karen Butler-Purry, Sarma (NDR) Nuthalapati, and Sanjeev K. Srivastava

Index . . . . . 493
Chapter 1
Those Electrifying Women!
Jill S. Tietjen
1.1 Introduction

Women have participated in every phase of the development of the US electric utility industry. Many made their contributions while employed in the two big corporations of the industry’s early days – Westinghouse and General Electric. Some designed turbine generators and motors while others worked in the area of heat transfer. A woman developed significant tools for the analysis of long electric transmission lines and wrote the textbook on the topic. A woman made significant contributions to the advancement of solar energy. Another helped set the standards for grid reliability. Others designed control systems or served in the regulatory arena. Recently, one woman started a company to retrofit non-powered dams so that they can produce power. Let us learn about these electrifying women.
1.2 Bertha Lamme (Feicht) (1869–1943)

The first woman to graduate with an engineering degree in a field other than civil engineering, Bertha Lamme (Fig. 1.1) earned her degree in mechanical engineering (with an electrical engineering emphasis) from The Ohio State University in 1893; she was only the second woman to receive an engineering degree of any kind and the only woman in her class. Lamme was considered an expert in motor design and had joined her brother (who had also graduated in mechanical engineering from The Ohio State University) at the Westinghouse Electric and Manufacturing Company in Pittsburgh, Pennsylvania. She was a member of her brother’s team at Westinghouse, and their projects together
J. S. Tietjen () Technically Speaking, Greenwood Village, CO, USA © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 J. S. Tietjen et al. (eds.), Women in Power, Women in Engineering and Science, https://doi.org/10.1007/978-3-031-29724-3_1
Fig. 1.1 Bertha Lamme. (Courtesy George Westinghouse Museum Collection, Detre Library & Archives, Senator John Heinz History Center)
included the first turbogenerators for the hydroelectric plant at Niagara Falls and the motors that operated the power plant of the Manhattan Elevated Railroad. An unpublished document from Westinghouse files refers to Lamme as a “designing engineer” and “one of the few women who have made a notable success of this work.” Although she was part of a team and restricted from the shop floor or the field due to her gender, the Pittsburgh Dispatch in 1907 reported:

. . . even in that hothouse of gifted electricians and inventors. She is accounted a master of the slide rule and can untangle the most intricate problems in ohms and amperes as easily and quickly as any man expert in the shop.
Earlier, in December 1899, the Woman’s Journal – based on a report in the New York Sun from the previous month – called Lamme “the particular star among American women electricians,” and noted that she “designs machinery, makes calculations, and does exactly the work of a male electrical engineer” [1]. Both her brother and her husband attained fame for their accomplishments, and it is not clear how much she contributed to either one’s work. Her brother lived with Lamme and her husband for the remainder of his life. Lamme remained at Westinghouse until 1905, when she married her supervisor (and was required to leave employment). The Ohio State University named the Lamme Power Systems Laboratory in honor of Bertha and her brother Benjamin. Lamme’s slide rule and a drawing she made of a drill bit are in the permanent “Pittsburgh: A Tradition of Innovation” exhibit at the Senator John Heinz History Center. The Center opines “A woman in a man’s world. Never
before had a female sat at the drafting table with men to compute, calculate and design the tools, motors and machinery that powered the new era of electricity.” The Society of Women Engineers (SWE) annually awards a Bertha Lamme Memorial Scholarship, established in 1973, in conjunction with the Westinghouse Educational Foundation. Her daughter became a physicist [2–5].
1.3 Edith Clarke (1883–1959)

A woman engineer with many firsts to her name, Edith Clarke (Fig. 1.2) grew up in Maryland without any intention of even going to college. After graduating from Vassar with an A.B. in mathematics and astronomy in 1908 (Phi Beta Kappa), Clarke taught math and science for 3 years in San Francisco and West Virginia. But teaching did not hold her interest, and she decided to pursue becoming an engineer instead. She enrolled as a civil engineering undergraduate at the University of Wisconsin and remained there for a year. Then she went to work for the American Telephone & Telegraph Company (AT&T) as a computing assistant. She intended to return to the University of Wisconsin to complete her engineering studies but found the work at AT&T so interesting that she stayed for 6 years. During World War I, she supervised the women at AT&T who did computations for research engineers in the Transmission Department. She simultaneously studied radio at Hunter College and electrical engineering at Columbia University at night. Eventually, she enrolled at MIT and received her master’s degree in electrical engineering in 1919, the first woman to be awarded that degree by MIT.

Upon graduation, she wanted to work for either General Electric (GE) or Westinghouse. But even with her stellar credentials, no one would hire her as an engineer because of her gender – they had no openings for a woman engineer! In 1920, after a long job search, GE offered Clarke a computing job, directing women computers who were calculating the mechanical stresses in turbines for the turbine engineering department at GE.

Fig. 1.2 Edith Clarke. (Courtesy Walter P. Reuther Library, Wayne State University)
But Clarke wanted to be an electrical engineer! Since that was not the job she was offered, and since she wanted to travel the world, she left GE in 1921 to teach physics at the Constantinople Women’s College (now Istanbul American College) in Turkey. A year later, GE did offer her a job as an electrical engineer in the central station engineering department. When she accepted this job, she became the first professionally employed female electrical engineer in the USA.

Clarke’s area of specialty was electric power systems and problems related to their operation. She made innovations in long-distance power transmission and in the development of the theory of symmetrical components and circuit analysis. Symmetrical components are a mathematical means by which engineers can study and solve problems of power system losses and the performance of electrical equipment. Clarke literally wrote the textbook: Circuit Analysis of A-C Power Systems, Symmetrical and Related Components (1943), followed by a second volume in 1950. This textbook, in its two volumes, was used to educate power system engineers for many years. She published 18 technical papers during her employment at GE, reflecting her status as an authority on hyperbolic functions, equivalent circuits, and graphical analysis within electric power systems. “Simplified Transmission Line Calculations,” which appeared in the General Electric Review in May 1926, provided charts for transmission line calculations. She was also involved in the design of hydroelectric dams in the Western USA. Clarke received a patent in 1925 (1,552,113) for her “graphical calculator” – a method of accounting for the effects of capacitance and inductance on long electrical transmission lines (Fig. 1.3). It greatly simplified the calculations that needed to be done.
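As a brief illustration of the method (added here for context; the notation is the standard one rather than taken from Clarke’s text), symmetrical components resolve an unbalanced set of three-phase voltages into zero-, positive-, and negative-sequence components:

```latex
\begin{bmatrix} V_0 \\ V_1 \\ V_2 \end{bmatrix}
= \frac{1}{3}
\begin{bmatrix} 1 & 1 & 1 \\ 1 & a & a^2 \\ 1 & a^2 & a \end{bmatrix}
\begin{bmatrix} V_a \\ V_b \\ V_c \end{bmatrix},
\qquad a = e^{\,j2\pi/3}
```

Each sequence network can then be analyzed independently, which is what made hand calculation of faults and losses on large power systems tractable.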
In 1926, she was the first woman to address what is today the Institute of Electrical and Electronics Engineers (IEEE) – at the time it was the American Institute of Electrical Engineers (AIEE). Her topic was “Steady-State Stability in Transmission Systems.” In 1932, Clarke became the first woman to present a paper before the AIEE; her paper, “Three-Phase Multiple-Conductor Circuits,” was named the best paper of the year in the northeastern district. This paper examined the use of multiple conductor transmission lines with the aim of increasing the capacity of the power lines. In 1948, Clarke was named one of the first three women fellows of IEEE. She had previously become the first female full-voting member of IEEE. Clarke was one of the few women who were licensed professional engineers in New York State. A year after her retirement from GE in 1945, Clarke became an associate professor of electrical engineering at the University of Texas. In 1947, she rose to full professorship becoming the first woman professor of electrical engineering in the USA. She served on numerous committees and provided special assistance to graduate students through her position as a graduate student advisor. In 1954, Clarke received the SWE Achievement Award “in recognition of her many original contributions to stability theory and circuit analysis.” In 2015, she was posthumously inducted into the National Inventors Hall of Fame for her invention of the graphical calculator [1, 3, 4, 6–8].
Fig. 1.3 Edith Clarke’s Patent Number 1,552,113 calculator
1.4 Hilda Counts Edgecomb (1893–1989)

Hilda Counts Edgecomb first received an A.B. degree from the University of Colorado. After 2 years of teaching high school mathematics and physics, she returned to the University, earning her B.S. in electrical engineering in 1919. This was the first electrical engineering degree awarded to a woman by the University of Colorado. In those days, character was important – and one young woman was not accepted as an engineering student because she smoked and swore! After her graduation, Edgecomb joined the Westinghouse Electric Corporation, where she had been selected from a field of over 300 applicants for one of 31 positions in a student training course. After 2 years with Westinghouse, she returned to the University of Colorado for further education. There, she met and married her husband, retiring from Westinghouse; she did not work as an engineer again until after his death. After 14 years of retirement, she re-entered the engineering workforce and served as an electrical engineer on the staff of the Rural Electrification Administration (today the Rural Utilities Service). Her work at the REA consisted of reviewing engineering plans for transmission lines and substations and making sure that power could be reliably delivered to rural areas [5, 7].
1.5 Florence Folger Buckland (1898–1967)

Florence Folger Buckland (Fig. 1.4) went to work for General Electric in 1921 after completing her college degree at MIT. She performed calculations to find ways to increase the power output of steam turbines. She earned a master’s degree in electrical engineering from Union College in 1925. Later, she was regarded as a heat transfer expert at GE, as she could find ways to cool a motor or to make a heating element smaller and lighter. She also wrote reference manuals to assist future generations of GE engineers. Buckland worked for GE throughout her career, except for a 16-year hiatus during which she raised her two children [5].
1.6 Maria Telkes (1900–1995)

A celebrated innovator in the field of solar energy, one of the first people to research practical ways for humans to use solar energy, and the so-called Sun Queen, Maria Telkes (Fig. 1.5) was born in Budapest, Hungary. She built her first chemistry laboratory when she was 10 years old. Educated at Budapest University as a physical chemist (B.A. in 1920 and Ph.D. in 1924), she became interested in solar energy as early as her freshman year in college, when she read a book titled Energy Sources of the Future by Kornel Zelowitch, which described experiments with solar energy that were taking place, primarily in the USA.
Fig. 1.4 Florence Folger Buckland. (Courtesy of MIT)
Fig. 1.5 Maria Telkes receiving the Society of Women Engineers’ Achievement Award. (Courtesy Walter P. Reuther Library, Wayne State University)
Telkes served as an instructor at Budapest University after receiving her Ph.D. Her life changed significantly, however, when she traveled to Cleveland, Ohio to
visit her uncle, who was the Hungarian consul. During her lengthy visit, she was offered a position as a biophysicist at the Cleveland Clinic Foundation, working with the American surgeon George Washington Crile. She accepted in 1925. Telkes would spend her entire professional career in the USA. In 1937, the same year she became a naturalized citizen, Telkes began her employment with Westinghouse Electric, where for 2 years she developed and patented instruments for converting heat energy into electrical energy – so-called thermoelectric devices. In 1939, she began her work with solar energy as part of the Solar Energy Conversion Project at MIT. Initially, her role was working on thermoelectric devices that were powered by sunlight.

During World War II, Telkes served as a civilian advisor to the U.S. Office of Scientific Research and Development (OSRD), where she was asked to figure out how to develop a device to convert saltwater into drinking water. This assignment resulted in one of her most important inventions: a solar distiller that vaporized seawater and then recondensed it into drinkable water. Its significant advancement was the use of solar energy (sunlight) to heat the seawater so that the salt was separated from the water. This distillation device (also referred to as a solar still) was included in the military’s emergency medical kits on life rafts and saved the lives of both downed airmen and torpedoed sailors. It could provide one quart of fresh water daily through the use of a clear plastic film and the heat of the sun, and it was very effective in warm, humid, and tropical environments. Later, the distillation device was scaled up and used to supplement the water demands of the Virgin Islands. For her work, Telkes received the OSRD Certificate of Merit in 1945. Telkes was named an associate research professor in metallurgy at MIT in 1945.
During her years at MIT, she created a new type of solar heating system – one that converted the solar energy to chemical energy through the crystallization of a sodium sulfate solution (Glauber’s salt). In 1948, Telkes and architect Eleanor Raymond developed a prototype five-room home built in Dover, Massachusetts. Called the Dover Sun House, this was the world’s first modern residence heated with solar energy and it used Telkes’s solar heating system. The system was both efficient and cost-effective. She next spent 5 years at New York University (NYU) (1953–1958) as a solar energy researcher. At NYU, Telkes established a laboratory dedicated to solar energy research and continued working on solar stills, heating systems, and solar ovens. Her solar ovens proved to be cheap to make, simple, and easy to build and could be used by villagers worldwide. Her work also led her to the discovery of a faster way to dry crops. In 1954, she received a $45,000 grant from the Ford Foundation to further develop her solar ovens. After NYU, she worked for Curtis-Wright Company as director of research for their solar energy laboratory (1958–1961). Here, she worked on solar dryers as well as the possible use of solar thermoelectric systems in outer space. She also designed the heating and energy storage systems for a laboratory building constructed by her employer in Princeton, New Jersey. This building included solar-heated rooms, a swimming pool, laboratories, solar water heaters, dryers for fruits and vegetables, and solar cooking stoves.
1 Those Electrifying Women!
9
In 1961, she moved to Cryo-Therm where she spent 2 years as a researcher working on space-proof and sea-proof materials for use in protecting sensitive equipment from the temperature extremes that would be experienced in those environments. Her work at Cryo-Therm was used on both the Apollo and Polaris projects. Subsequently, she served as the director of Melpar, Inc.'s solar energy laboratory looking at obtaining freshwater from seawater (1963–1969) before returning to academia at the University of Delaware. At the University of Delaware, Telkes served as a professor and research director for the Institute of Energy Conversion (1969–1977) and emerita professor from 1978. Here, she worked on materials used to store solar energy as well as heat exchangers that could efficiently transfer energy. The experimental solar-heated building constructed at the University of Delaware, known as Solar One, used her methods. In addition, she researched air-conditioning systems that could store coolness during the night to be used during the heat of the following day. After her retirement, she continued to serve as a consultant on solar energy matters. In 1980, after the 1970s oil crisis and a renewed interest nationwide in solar energy, Telkes was involved with a second experimental solar-heated house, the Carlisle House, which was built in Carlisle, Massachusetts. In 1952, Telkes was the first recipient of SWE's Achievement Award. The citation reads "In recognition of her meritorious contributions to the utilization of solar energy." In 1977, she received the Charles Greeley Abbot Award from the American Section of the International Solar Energy Society in recognition of her standing as one of the world's foremost pioneers in the field of solar energy. In that same year, she was honored by the National Academy of Sciences Building Research Advisory Board for her work in solar-heated building technology.
The holder of more than 20 patents (shown in Table 1.1), in 2012, Telkes was inducted into the National Inventors Hall of Fame. In addition to her patents, Telkes also had many publications on the topics of using sunlight for heating, thermoelectric/solar generators and distillers, and the electrical conductivity properties of solid electrolytes. She believed so strongly in using solar energy that she said, "Sunlight will be used as a source of energy sooner or later . . . Why wait?" [4, 9–16].

Table 1.1 Maria Telkes patents

Number     Date                Title
2,229,481  January 21, 1941    Thermoelectric couple
2,229,482  January 21, 1941    Thermoelectric couple
2,246,329  June 17, 1941       Heat absorber
2,289,152  July 7, 1942        Method of assembling thermoelectric generators
2,366,881  January 9, 1945     Thermoelectric alloys
2,595,905  May 6, 1952         Radiant energy heat transfer device
2,677,243  May 4, 1954         Method and apparatus for the storage of heat
2,677,367  May 4, 1954         Heat storage unit
2,677,664  May 4, 1954         Composition of matter for the storage of heat
2,808,494  October 1, 1957     Apparatus for storing and releasing heat
2,856,506  October 14, 1958    Method for storing and releasing heat
2,915,397  December 1, 1959    Cooking device and method
2,936,741  May 17, 1960        Temperature-stabilized fluid heater and a composition of matter for the storage of heat therefor
2,989,856  June 27, 1961       Temperature stabilized container and materials therefor
3,206,892  September 21, 1965  Collapsible cold frame
3,248,464  April 26, 1966      Method and apparatus for making large celled material
3,270,515  September 6, 1966   Dew collecting method and apparatus
3,415,719  December 10, 1968   Collapsible solar still with water vapor permeable membrane
3,440,130  April 22, 1969      Large celled material
3,695,903  October 3, 1972     Time/temperature indicators
3,986,969  October 19, 1976    Thixotropic mixture and making of same
4,010,620  March 8, 1977       Cooling system
4,011,190  March 8, 1977       Selective black for absorption of solar energy
4,034,736  July 12, 1977       Solar heating method and apparatus
4,187,189  February 5, 1980    Phase change thermal storage materials with crust forming stabilizers
4,250,866  February 17, 1981   Thermal energy storage to increase furnace efficiency
4,954,278  September 4, 1990   Eutectic composition for coolness storage
1.7 Emma Barth (1912–1995) After receiving her B.A. (1931) and M.A. (1937), both in German, and her M.A. in education, Emma Barth (Fig. 1.6) took the only employment that was available at the time (the Depression era) – part-time work in evening and summer schools. Fig. 1.6 Emma Barth. (Courtesy Walter P. Reuther Library, Wayne State University) Wanting full-time work and being interested in engineering, Barth enrolled in drafting classes
at the Pittsburgh Aeronautics Institute in 1942 and that same year secured a draftsman position with the H.J. Heinz Co. Wartime generated employment opportunities. At Heinz, she drafted wooden wings for gliders. In 1944, she was able to secure employment as a draftsman in the Turbine Generator Division at Westinghouse's East Pittsburgh Office. In order to advance in her career, Barth decided to pursue a degree in engineering. Paying for her education herself and taking classes at night from 1944 to 1951, Barth enrolled at the University of Pittsburgh and earned her B.S. in general engineering in 1951 – the only female in her class and the first woman to graduate in engineering from the University of Pittsburgh's evening school. With her degree, she was immediately promoted from draftsman to associate engineer. In 1960, she was promoted to engineer, and in 1975, to advanced engineer. Her career at Westinghouse was spent designing turbines and generators. She was also committed to the engineering profession. She was a founding member and later president of the Pittsburgh Section of SWE. She was the first editor of the SWE Newsletter. She became licensed as a professional engineer in Pennsylvania in 1961. Barth was the first woman to receive the Westinghouse Community Service Award (in 1977). Outside of work, she was a serious drama student, participated in a local theater group, and practiced voice exercises daily [17].
1.8 Mabel MacFerran Rockwell (1902–1981) Credited as possibly the first female aeronautical engineer in the USA, Mabel MacFerran Rockwell (Fig. 1.7) received her B.S. in science, teaching, and mathematics from MIT in 1925, and a B.S. from Stanford University in electrical engineering. Fig. 1.7 Mabel MacFerran Rockwell. (Courtesy Walter P. Reuther Library, Wayne State University) Before World War II, she served as a technical assistant with the Southern California Edison Company, where she was a pioneer in the application of symmetrical components to transmission relay problems in power systems. Through
this work, she made it easier to diagnose system malfunctions and to enhance the reliability of multiple-circuit lines. Rockwell then worked for the Metropolitan Water District in Southern California where she was a member of the team that designed the Colorado River Aqueduct's power system and the only woman to participate in the creation of the electrical installations at the Hoover Dam. Later, Rockwell joined Lockheed Aircraft Corporation and worked to improve the manufacturing operations of aircraft. Her many innovations included refining the process of spot welding and developing techniques for maintaining cleaner working surfaces so that the welds completely fused. After the war, Rockwell went to work for Westinghouse where she designed the electrical control system for the Polaris missile launcher. At Convair, she developed the launching and ground controls for the Atlas guided missile systems. In 1958, President Eisenhower named her Woman Engineer of the Year. Also in 1958, she received the Society of Women Engineers' (SWE) Achievement Award "in recognition of her significant contributions to the field of electrical control systems" [4, 18].
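The symmetrical-components method that Rockwell applied to relay problems, introduced by Charles Fortescue in 1918, resolves any unbalanced set of three-phase phasors into three balanced sets (zero-, positive-, and negative-sequence), which greatly simplifies fault analysis. As an illustrative sketch not drawn from this chapter (the function name is ours), the standard transformation can be written in a few lines of Python:

```python
import cmath

# Rotation operator a = 1 at an angle of 120 degrees, the heart of
# Fortescue's transformation.
A = cmath.exp(2j * cmath.pi / 3)

def symmetrical_components(va, vb, vc):
    """Return the (zero-, positive-, negative-sequence) phasors
    for one unbalanced set of three complex phase phasors."""
    v0 = (va + vb + vc) / 3
    v1 = (va + A * vb + A**2 * vc) / 3
    v2 = (va + A**2 * vb + A * vc) / 3
    return v0, v1, v2

# A perfectly balanced abc-sequence set reduces to a pure
# positive-sequence component; v0 and v2 vanish.
v0, v1, v2 = symmetrical_components(1, A**2, A)
```

For a balanced system only the positive-sequence term survives, so any nonzero zero- or negative-sequence component signals an unbalance or fault on the line.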
1.9 Nancy Deloye Fitzroy (1927–) The first woman President of the American Society of Mechanical Engineers (ASME), at a young age Nancy Deloye Fitzroy (Fig. 1.8) asked her parents for a record player – and she got one – completely unassembled. With dogged determination, she built that record player. Fig. 1.8 Nancy Deloye Fitzroy. (Courtesy Walter P. Reuther Library, Wayne State University) At Rensselaer Polytechnic Institute, she was the only woman. Her classmates encouraged her and she graduated in chemical engineering in 1949. Fitzroy worked at General Electric from 1950 to 1987. Her contributions to heat transfer have led to improvements in space reentry vehicles (heat shields), rocket engines (fuel use), nuclear submarines (cooling systems and shielding), nuclear reactors (cooling and shielding), household appliances (including electric motors), steam and gas turbines (electric power generation), and heat transfer physics. She said "I worked on everything from toasters and television
tubes to submarines and satellites." The recipient of SWE's Achievement Award in 1972 "In recognition of her significant contributions to the fields of heat transfer, fluid flow, properties of materials, and thermal engineering," Fitzroy was elected to the National Academy of Engineering in 1995. She wrote more than 100 papers and holds three patents. An avid sailor and flyer, Fitzroy was one of the first female helicopter pilots in the world. She also spent much of her time encouraging women to pursue engineering careers. In 2008, the ASME awarded her honorary membership to recognize "her tireless efforts and lasting influence as an advocate of the mechanical engineering profession." In 2011, the ASME established the Nancy Deloye Fitzroy and Roland V. Fitzroy Medal which "recognizes pioneering contributions to the frontiers of engineering leading to a breakthrough(s) in existing technology or leading to new applications or new areas of engineering endeavor" [18–21].
1.10 Ada Pressman (1927–2003) Ada Pressman (Fig. 1.9) was a pioneer in combustion control and burner management for supercritical power plants, including the input logic and fuel-air mixes associated therewith. She was directly involved in early design efforts toward more automated controls of equipment and systems, new packaging techniques, and breakthroughs in improved precision and reliability of sensors and controls. Fig. 1.9 Ada Pressman at work. (Courtesy Walter P. Reuther Library, Wayne State University) As she progressed through the management ranks at Bechtel (earning her MBA during the process), she was recognized as one of the nation's outstanding experts in power plant controls and process instrumentation and worked on fossil-fired and nuclear
power plants. Pressman is credited with significantly improving the safety of both coal-fired and nuclear power plants for workers as well as nearby residents. Planning to become a secretary after she graduated from high school in Ohio, Pressman was encouraged to attend college by her father. She earned her B.S. in mechanical engineering from The Ohio State University. Pressman characterized her professional experience as including the engineering management of millions of individual hours of power generation plant design and construction and of economic studies and proposals for potential projects. She continually monitored the costs for each project as well as the technical engineering details as the design progressed. A dedicated advocate for women who served as Society President of SWE, Pressman received SWE’s Achievement Award in 1976 “For her significant contribution in the field of power control systems engineering” [22, 23].
1.11 Virginia Sulzberger (1941?–) After receiving her B.S. (1962) and M.S. (1966) degrees from the Electrical Engineering Department at the Newark College of Engineering (today the New Jersey Institute of Technology), Virginia Sulzberger worked as a senior planning advisor for Exxon Corporation and in various positions at Public Service Electric & Gas (PSE&G). She worked from 1985 until her retirement in 2006 at the North American Electric Reliability Corporation (NERC). During those years, Sulzberger served as the director of engineering and coordinated planning and engineering activities of NERC’s Planning Committee and its many subgroups, which involved more than 200 engineer representatives from the USA and Canada. Sulzberger also developed one of NERC’s flagship documents, the long-term reliability assessment. She then served as a consultant for Electric Power Systems. In 2015, she was elected to the National Academy of Engineering “For leadership and development of electric power system reliability standards.” She received the 2014 IEEE Power and Energy Society Lifetime Achievement Award with a citation that reads “For pioneering leadership in developing reliability analysis methods, performing reliability assessments of interconnected transmission systems, and establishing transmission reliability planning standards applied in North America during a 50-year career.” On the occasion of this award, Gerry Cauley, president and CEO of NERC, said “NERC is pleased that Virginia’s many contributions are being recognized by IEEE. Virginia paved the way for NERC’s independent reliability assessments of the North American bulk power system and also was instrumental in developing the original NERC planning standards.” Her other honors include Fellow in the IEEE, and membership in Eta Kappa Nu and Tau Beta Pi [24–27].
1.12 E. Gail de Planque (1944–2010) The first woman and the first health physicist to be appointed to the U.S. Nuclear Regulatory Commission, Dr. E. Gail de Planque was a trailblazer for women throughout her entire career. When she joined the Atomic Energy Commission's Health and Safety Laboratory (HSL) as an entry-level physicist, she was told not to expect much in the way of opportunities for advancement because women would eventually leave for marriage. She did not leave and eventually became the lab's director. During her tenure at HSL, she earned her M.S. in physics and her Ph.D. in environmental health sciences. Her master's thesis was titled "Radiation Induced Breast Cancer from Mammography"; ironically, she was later a breast cancer survivor. Dr. de Planque was the recipient of numerous awards for her pioneering role as a woman in science and her contributions to the peaceful uses of nuclear energy. One of the most significant was her election to the National Academy of Engineering (NAE) with the citation "For leadership of the national nuclear programs and contributions to radiation protection devices and standards." In 2003, she received the Henry DeWolf Smyth Award for Nuclear Statesmanship from the American Nuclear Society and the Nuclear Energy Institute for her contributions to the peaceful use of nuclear energy. In 2015, she was inducted into the Maryland Women's Hall of Fame. Her areas of expertise included nuclear physics and environmental radiation studies. While at the U.S. NRC, Dr. de Planque often had a pivotal role in matters relating to equal employment opportunities, flexiplace and flexitime, sexual harassment policy, and management. After her tenure at the U.S. NRC was complete, Dr. de Planque was sought after nationally and internationally, including by the United Nations' International Atomic Energy Agency.
As Chair of the NAE's Celebration of Women in Engineering Steering Committee, she led the national effort to change the dialogue on increasing the number and percentage of women in engineering and to launch wide-reaching national and local programs aimed at moving closer to parity [28].
1.13 Shirley Jackson (1946–) The first African American woman to receive a Ph.D. from the Massachusetts Institute of Technology (MIT), theoretical physicist Shirley Jackson (Fig. 1.10) is now the President of Rensselaer Polytechnic Institute (RPI). As a physicist, Jackson's area of expertise is particle physics – the branch of physics that predicts the existence of subatomic particles and the forces that bind them together. Fig. 1.10 Shirley Jackson. (Courtesy of Rensselaer Polytechnic Institute) Jackson was encouraged in her interest in science by her father who helped her with projects for her science classes. She took accelerated math and science
classes in high school and graduated as valedictorian. At MIT, she was one of fewer than 20 African American students on campus, the only African American studying physics, and one of about 43 women in the freshman class of 900. After obtaining her B.S. at MIT, she opted to stay for her doctoral work in order to encourage more African American students to attend the institution. She completed her dissertation and obtained her Ph.D. in 1973. After postdoctoral work at prestigious laboratories in the USA and abroad, Jackson joined the Theoretical Physics Research Department at AT&T Bell Laboratories in 1976. She served on the faculty at Rutgers University from 1991 to 1995 and then became the first woman and African American Chairman of the Nuclear Regulatory Commission. In 1999, she became the first African American and first woman President of RPI. Her numerous honors and awards include induction into the National Women's Hall of Fame and the Women in Technology International Hall of Fame, the Thomas Alva Edison Science Award, and the CIBA-GEIGY Exceptional Black Scientist Award. Jackson actively promotes women in science [3, 29–31].
1.14 Kristina Johnson (1957–) Inducted into the National Inventors Hall of Fame in 2015, Dr. Kristina Johnson (Fig. 1.11) was the co-founder and CEO of Cube Hydro Partners, a company that acquired and modernized hydroelectric facilities and developed power at unpowered dams. She is a strong advocate for clean energy. Since 2020, she has served as President of The Ohio State University. Prior to Cube Hydro Partners, Johnson served as Under Secretary of Energy at the U.S. Department of Energy. As Under Secretary, Johnson was responsible for unifying and managing a broad $10.5 billion Energy and Environment portfolio, including an additional $37 billion in energy and environment investments from the American Recovery and Reinvestment Act (ARRA).
Fig. 1.11 Kristina M. Johnson. (Courtesy of Colorado Women’s Hall of Fame)
Prior to joining the Department of Energy, Dr. Johnson served as Provost and Vice President for Academic Affairs at Johns Hopkins University. From 1999 to 2007, Dr. Johnson was Dean of the Pratt School of Engineering at Duke University, the first woman to serve in that position. Before joining Duke University, Dr. Johnson served as a professor of electrical and computer engineering at the University of Colorado at Boulder, where she was a leader in interdisciplinary research on optoelectronics, a field that melds light with electronics. In 1994, Johnson helped found the Colorado Advanced Technology Institute Center for Excellence in Optoelectronics. She also co-founded several companies including ColorLink Inc., KAJ, LLC, and Southeast Techinventures (STI). ColorLink makes color components for high-definition television and other image projection devices utilizing the polarization, or vibrational, states of light. KAJ, LLC is an intellectual property licensing company that assists new firms using technology pioneered at the Optoelectronics Computing Systems Center at the University of Colorado at Boulder. STI is a technology acceleration company for commercializing intellectual property developed at Duke and other universities in the Southeast USA. She is also the inventor of RealD 3D, the technology used worldwide for 3D movies today. In addition to her academic career, Johnson is an inventor and entrepreneur, holding 118 US and international patents. Johnson has received numerous recognitions for her contributions to the fields of engineering, entrepreneurship, and innovation, including the John Fritz Medal, considered the highest award made in the engineering profession [32–34].
References

1. Meade, Jeff, "Ahead of Their Time," chapter included in Margaret E. Layne, P.E., editor, Women in Engineering: Pioneers and Trailblazers, Reston, Virginia: ASCE (American Society of Civil Engineers) Press, 2009.
2. Letizia, Anthony, Bertha Lamme: Pioneering Westinghouse Engineer, http://www.geekpittsburgh.com/innovation/bertha-lamme, October 26, 2015.
3. Profitt, Pamela, editor, Notable Women Scientists, Detroit, Michigan: The Gale Group, 1999.
4. Ogilvie, Marilyn and Joy Harvey, editors, The Biographical Dictionary of Women in Science: Pioneering Lives from Ancient Times to the Mid-20th Century, New York, New York: Routledge, 2000.
5. Hatch, Sybil E., Changing Our World: True Stories of Women Engineers, Reston, Virginia: ASCE (American Society of Civil Engineers) Press, 2006.
6. Goff, Alice C., Women Can Be Engineers, Ann Arbor, Michigan: Edwards Brothers, Inc., 1946.
7. Ingels, Margaret, "Petticoats and Slide Rules," Western Society of Engineers, September 4, 1952.
8. National Inventors Hall of Fame, Edith Clarke, https://www.invent.org/inductees/edith-clarke, accessed November 30, 2018.
9. National Inventors Hall of Fame, Maria Telkes, https://www.invent.org/inductees/maria-telkes, accessed November 30, 2018.
10. Rafferty, John P., Maria Telkes: American Physical Chemist and Biophysicist, https://www.britannica.com/biography/Maria-Telkes, accessed November 30, 2018.
11. Society of Women Engineers – Philadelphia Section, Maria Telkes, http://philadelphia.swe.org/hall-of-fame-m%2D%2D-z.html, accessed November 30, 2018.
12. Maria Telkes: The Telkes Solar Cooker, https://lemelson.mit.edu/resources/maria-telkes, accessed December 1, 2018.
13. Maria Telkes, https://www.encyclopedia.com/history/encyclopedias-almanacs-transcripts-andmaps/telkes-maria, accessed December 1, 2018.
14. Nichols, Burt E., and Steven J. Strong, "The Carlisle House: An All-Solar Electric Residence," DOE/ET/20279-133.
15. Telkes, Maria, Preliminary Inventory of the Maria Telkes Papers 1893–2000 (Bulk 1950s–1980s), http://www.azarchivesonline.org/xtf/view?docId=ead/asu/telkes_acc.xml, accessed December 1, 2018.
16. Boyd, Andrew, Engines of Our Ingenuity: No. 2608, Maria Telkes, https://www.uh.edu/engines/epi2608.htm, accessed December 1, 2018.
17. Kata, Lauren, "Spotlight on Emma Barth, P.E.: A Typical Woman Engineer?," chapter included in Margaret E. Layne, P.E., editor, Women in Engineering: Pioneers and Trailblazers, Reston, Virginia: ASCE (American Society of Civil Engineers) Press, 2009.
18. Society of Women Engineers, Historical Record of Policy and Interpretation, in the author's possession, approved November 7, 1990.
19. "Nancy Deloye Fitzroy," Edison Tech Center, Engineering Hall of Fame, https://edisontechcenter.org/NancyFitzroy.html, accessed March 20, 2022.
20. Nancy Fitzroy, http://www.nancyfitzroy.org/, accessed March 20, 2022.
21. American Society of Mechanical Engineers, Nancy Deloye Fitzroy and Roland V. Fitzroy Medal, https://www.asme.org/about-asme/honors-awards/achievement-awards/nancy-deloyefitzroy-and-roland-v-fitzroy-medal, accessed March 20, 2022.
22. Tietjen, Jill S., "Honoring the Legacy of Ada Pressman, P.E.," SWE: Magazine of the Society of Women Engineers, Fall 2008.
23. Oakes, Elizabeth H., "Pressman, Ada Irene," Encyclopedia of World Scientists, Revised Edition, New York, New York: Facts on File, 2007.
24. National Academy of Engineering, National Academy of Engineering Elects 67 Members and 12 Foreign Members, https://www.nae.edu/Projects/MediaRoom/20095/130169/130172.aspx, February 5, 2015.
25. NJIT News Room, NJIT Alumna Will be Inducted into the National Academy of Engineering, http://www.njit.edu/news/2015/2015-203.php, August 4, 2015.
26. NERC, Virginia Sulzberger Receives Prestigious IEEE Lifetime Achievement Award, http://www.nerc.com/news/Headlines%20DL/Sulzberger%2017APR14.pdf, April 17, 2014.
27. Business Wire, IEEE Power & Energy Society Awards Recognize Important Member Contributions at Its 2014 General Meeting, http://www.businesswire.com/news/home/20140729006030/en/IEEE-Power-Energy-Society-Awards-Recognize-Important, July 30, 2014.
28. Unpublished, E. Gail de Planque, Nomination to the Maryland Women's Hall of Fame, 2014, in the files of the author.
29. Ambrose, Susan A., Kristin L. Dunkle, Barbara B. Lazarus, Indira Nair and Deborah A. Harkus, Journeys of Women in Science and Engineering: No Universal Constants, Philadelphia: Temple University Press, 1997, pp. 422–425.
30. "First Lady Hillary Rodham Clinton to Speak at Inaugural Gala for Rensselaer's 18th President, The Honorable Dr. Shirley Ann Jackson," Press Release, September 17, 1999, www.rpi.edu/dept/NewsComm/New_president/presshillary.htm, accessed November 23, 1999.
31. Perusek, Anne M., "Saluting African Americans in The National Academy of Engineering Class of 2001," SWE: Magazine of the Society of Women Engineers, February/March 2002, pp. 24–26.
32. National Inventors Hall of Fame, Kristina M. Johnson, http://invent.org/inductees/johnsonkristina/, accessed June 6, 2015.
33. Unpublished nomination in the files of the author.
34. Kristina M. Johnson, Office of the President: About the President, The Ohio State University, https://president.osu.edu/about-president-johnson, accessed March 20, 2022.

Jill S. Tietjen, P.E., entered the University of Virginia in the Fall of 1972 (the third year that women were admitted as undergraduates after a suit was filed in court by women seeking admission) intending to be a mathematics major. But midway through her first semester, she found engineering and made all of the arrangements necessary to transfer. In 1976, she graduated with a B.S. in Applied Mathematics (minor in Electrical Engineering) (Tau Beta Pi, Virginia Alpha) and went to work in the electric utility industry. Galvanized by the fact that no one, not even her Ph.D.
engineer father, had encouraged her to pursue an engineering education and that only after her graduation did she discover that her degree was not ABET-accredited, she joined the Society of Women Engineers (SWE) and for more than 40 years has worked to encourage young women to pursue science, technology, engineering, and mathematics (STEM) careers. In 1982, she became licensed as a professional engineer in Colorado. Tietjen started working on jigsaw puzzles at age two and has always loved to solve problems. She derives tremendous satisfaction seeing the result of her work – the electricity product that is so reliable that most Americans just take its provision for granted. Flying at night and seeing the lights below, she knows that she had a hand in this infrastructure miracle. An expert witness, she works to plan new power plants. Her efforts to nominate women for awards began in SWE and have progressed to her acknowledgment as one of the top nominators of women in the country. Her nominees have received the National Medal of Technology and the Kate Gleason Medal; they have been inducted into the National Women’s Hall of Fame and state Halls including Colorado, Maryland and Delaware; and have received university and professional society recognition. Tietjen believes that it is imperative to nominate women for awards – for the role modeling and knowledge of women’s accomplishments that it provides for the youth of our country. Tietjen received her MBA from the University of North Carolina at Charlotte. She has been the recipient of many awards
including the Distinguished Service Award from SWE (of which she has been named a Fellow and is a Society Past President), and the Distinguished Alumna Award from both the University of Virginia and the University of North Carolina at Charlotte. She has been inducted into the Colorado Women's Hall of Fame and the Colorado Authors' Hall of Fame. Tietjen sits on the board of Georgia Transmission Corporation and spent 11 years on the board of Merrick & Company. Her publications include the bestselling and award-winning books Her Story: A Timeline of the Women Who Changed America for which she received the Daughters of the American Revolution History Award Medal and Hollywood: Her Story, An Illustrated History of Women and the Movies which has received numerous awards. Her award-winning book Over, Under, Around and Through: How Hall of Famers Surmount Obstacles was published in 2022.
Chapter 2
Attracting, Training, and Retaining a Skilled, Diverse Energy Workforce in the USA Missy Henriksen, Rosa Schmidt, and Angie Farsee
2.1 Introduction The Center for Energy Workforce Development (CEWD) was created in 2006 to unite the energy industry in action on widespread concerns about projected retirements. At the time, it was thought the organization would address the retirement fears and disband. However, those involved recognized the incredible value in working together on critical workforce development issues and, accordingly, the Center continues its work today. This history of collaboration and strategic action has proved essential, as there has never been a time in recent history with as many critical workforce-related challenges as the present day. Business leaders are responding to a myriad of issues that will impact the future of work, including the impacts of COVID-19; the "Great Resignation"; generational transition; the gig economy; diversity, equity, and inclusion; carbon reduction commitments; the shortage of people in the workforce; the need to modernize training; changing skill sets; and more. Complicating matters is the fact that some traditional advantages of the energy industry – including higher wages, expectations of lifetime employment, and community engagement – are viewed differently by younger generations, and there is greater competition from other sectors. Non-retirement attrition is rising, and there are increasing reports of organizations not being able to fill critical roles in a timely manner [1].
M. Henriksen () · R. Schmidt Center for Energy Workforce Development, Washington, DC, USA e-mail: [email protected] A. Farsee Georgia Transmission Corporation, Tucker, GA, USA © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 J. S. Tietjen et al. (eds.), Women in Power, Women in Engineering and Science, https://doi.org/10.1007/978-3-031-29724-3_2
2.2 State of the Utility Industry Workforce According to the 2021 Gaps in the Energy Workforce Report published by CEWD, there are approximately six million people working in the energy sector, with 613,623 of these individuals working in utilities, employed by investor-owned electric companies, public power utilities, and rural electric cooperatives [1]. Since initial concerns about the industry's aging workforce arose in 2006, the workforce has continued to grow younger. With a focus on the creation of energy education pathways in high schools, community colleges, and universities, companies have seen an improvement in the size and capability of the talent pool for recruiting and hiring into high-skill positions. Jobs such as lineworkers, skilled technicians, and plant operators generally require some level of postsecondary education prior to hiring, and companies have made considerable progress in partnering with education providers and workforce systems to develop education that leads to the competencies needed for these high-skilled, high-paying careers. CEWD regularly surveys the industry and has documented a shift in the age distribution between its 2020 survey and earlier years. While the first survey in 2006 showed an older population, the 2018 survey continued to show a younger population within the sector, though still with a considerable share in the age ranges 53 and higher [1]. In the 2021 data, there is a considerable increase in the younger populations within the energy workforce (Gen Z – 1.5%, Millennials – 32%, Gen X – 37%, and Boomers – 28.9%). Electric cooperatives showed the youngest workforce, with 16% of their employees under the age of 32. On the other side of the age range, investor-owned electric companies have the largest population of workers older than 53, representing 29.8% of their workforce [1].
Although energy companies historically have had lower attrition rates than other industries, non-retirement attrition has been rising within select key jobs (lineworkers, power plant operators, technicians, and engineers) since 2012. The 5-year non-retirement attrition average within these key jobs is around 15.4%, and 64% of total company non-retirement attrition occurs within the first 5 years of employment. The percentage varies among job categories, but it is significant enough to require organizations to focus on retention efforts and strategies for new hires. The highest turnover occurs among the youngest age groups, with 60% of non-retirement attrition coming in the 23–37 age range. Considering the high costs of hiring and training new workers, as well as the time invested in that training, this turnover can represent a significant expense for companies and a reduced return on investment in building a talent pipeline. Creating strategies to promote retention is therefore essential [1].
2 Attracting, Training, and Retaining a Skilled, Diverse Energy Workforce in the USA
2.3 Expected Workforce Demand

A variety of data sets explore the future demand of America's Clean Energy Economy. A recent report by CEWD suggests the industry will hire 215,000 employees for select positions over the next 3 years [2]. Deloitte projects that the power sector could triple its workforce over the next 10 years under certain carbon reduction scenarios [3]. The 2021 U.S. Energy Employment Report (USEER) noted that prior to the COVID-19 pandemic, "the energy sector had been one of the country's fastest growing job markets, citing that from 2015 to 2019, the annual growth rate for energy employment in the United States was 3%—double compared to 1.5% in the general economy" [4]. While various reports and scenarios offer different outlooks on the growth of the energy workforce, they all point toward the same conclusion: a larger workforce that needs to be prepared for an increasingly dynamic energy industry, and one that needs to be identified, courted, trained, and nurtured.
2.4 Commitment to Diversity, Equity, and Inclusion

When building the talent pipeline, energy companies are focusing not only on the numbers and qualifications in the talent supply but on the diversity of that pipeline as well. This diversity, whether through the hiring of minorities, veterans, women, or those with differing abilities, strengthens businesses and promotes economic prosperity in the regions companies serve. Energy industry leaders believe that the industry's workforce should reflect the communities and customers it serves. Minorities (defined as racial and ethnic diversity) currently represent 24% of the workforce, up from 22% in CEWD's 2019 survey [1]. This movement reflects broadened recruitment priorities and inclusion initiatives that have supported workforce diversity. The strategies taken by energy companies, whether partnering with elementary schools to attract more girls into engineering or running specialized training academies that prepare underrepresented communities for entry into a variety of apprenticeships, have helped diversify education pathways. Hiring and retention of a diverse population are helping ensure that employee populations more closely reflect the communities they serve.

Women now comprise 22% of the energy workforce, slightly lower than in 2019, though more women were actually hired into the energy workforce over the past 2 years [1]. This may be because women were heavily affected by the global pandemic, with many having left their jobs to stay at home and care for their families [5].
Veterans represent approximately 8% of the workforce [1]. While this value is lower than in 2019, when veterans represented 9.6% of the workforce population, it should be noted that the percentage of veterans in the national workforce is decreasing and now stands at 5.6%.

Forward-leaning companies continue to explore workplace culture, recruiting practices, outreach, and engagement opportunities to determine how best to enrich their teams with the inclusion of those who have historically been underrepresented. Recognizing the importance of creating a more diverse, equitable, and inclusive workforce, industry leaders are creating a collaborative roadmap to support the entire sector's advances in this area, and through CEWD they are sharing examples of effective programs to increase diversity at all levels of the workforce. One of the most challenging areas in which to increase diversity has been skilled trade roles. The International Brotherhood of Electrical Workers and other unions are leading programs to build diversity within their membership bases and collaborating with their business partners to ensure entry-level positions in the trades are attainable by all who are qualified. Many energy companies are creating pre-apprenticeship and training programs for communities of people historically underrepresented in the industry. These training groups allow individuals with shared life experiences to establish bonds with one another and more readily build peer groups when they eventually begin their energy careers. There also has been an increased focus on reaching out to low-income communities with these training opportunities, recognizing that energy careers offer family-supporting wages and can support economic development in the country's most vulnerable communities.
2.5 Changing Skill Sets for Today's Electric Utility Workforce

Jobs in the energy industry are decidedly going digital, requiring the recruitment or retraining/upskilling of staff. Information technology (IT) jobs constituted 35% of all utility job openings in mid-2021, according to a report by Deloitte [3]. That is the second-highest share of any non-tech industry. The trend puts the energy industry in direct competition with other industries at a time when there already is a shortage of workers with IT, engineering, and other technical skills. Job functions that did not exist or were rare at many energy companies a decade ago – cybersecurity, data science, AI, cloud-based computing, etc. – now are in high demand. A report from Deloitte noted that significantly reducing carbon emissions from the US electric power sector "hinges on a significant workforce transformation. The changes may exacerbate existing inequities, skills shortages, and talent pipeline difficulties."

Digitization also is impacting existing jobs in the energy industry. A detailed 2018 study initiated by National Grid in response to New York State's 2014 energy plan noted that customers and market participants will demand more information and control in the future [6]. "The capability of our engineering, data
management, customer service, and IT professionals must evolve to provide these new services to customers," National Grid's study said. It also noted that because of grid modernization and smart technology, field workers "will require materially different work methods and capabilities." Given all the changes, "Not far in the future every [field] tech will be an IT person," said Will Markow, vice president of Applied Research – Talent at labor market analytics firm EMSI Burning Glass [7].

In addition to the skill set changes expected from an increasingly digital economy, the energy industry will experience skill set shifts associated with how we power and fuel America. At the close of 2020, the Bureau of Labor Statistics estimated there were fewer than 50,000 coal mining jobs remaining [8]. Business and labor leaders are working to reskill this workforce, providing the knowledge to transition into new areas of energy production. Leaders in the fossil fuel industry are exploring opportunities with hydrogen and renewable natural gas and assessing the skills that will be required of the future natural gas workforce. Moreover, nearly 22 million electric vehicles (EVs) are expected on US roads in 2030, supported by more than $3.4 billion in investments from electric companies to deploy EV charging infrastructure and accelerate electric transportation [9]. With these changes and the modernization of the energy grid, we can expect the future workforce to prepare, train, and work differently, and the currently identified core competencies will evolve. In fact, corporate learning officers already are working to identify the knowledge, skills, and abilities that will be required of the workforce of 2035.
2.6 Focus on Career Awareness

Utilities have never been as challenged to recruit into their workforce as they are today. They have traditionally enjoyed recognition as stalwart employers in their communities, providing stable employment, good benefits, strong earning potential, and the opportunity to serve and support the community. In fact, career longevity has been a hallmark of the energy workforce, particularly among those who were impacted by the Great Recession and therefore place a high value on job security.

That comfortable and perhaps taken-for-granted position has changed dramatically in recent years. More than 76% of US utilities have reported difficulty in hiring new employees [10]. Values-driven career searches, a heightened commitment to diversity, equity, and inclusion, and the lack of qualified people to carry out the work that needs to be done are forcing energy companies to re-examine how they position themselves and how they message about the industry to appeal to their future workforce. Gallup dubbed Millennials (born between 1980 and 1996) the "Job-Hopping Generation" [11]. As of autumn 2021, more than a third of Millennials and Gen Zs were reported to be looking for a new job [12]. They feel empowered by the current labor shortage and motivated by social issues, and they are looking for employers who will accommodate their desires for remote work and flexible schedules.
Some competing industries are becoming more aggressive in recruiting and more competitive in compensation. Fewer utility company employees are covered by the "golden handcuffs" of defined-benefit retirement plans (66% in 2021 vs. 81% in 2011 [13]), making it easier to change employers in mid- and late-career. Energy companies, often stereotyped for their conservative reputation, now find it essential to rebrand themselves in the competition for talent. They must bring greater awareness to the breadth of opportunities that exist in the workforce, promote their sometimes under-recognized use of technology, challenge misperceptions about industry employment, and present themselves in new ways that resonate with what today's workers seek. For instance, Millennials and Gen Zs are more concerned than past generations about working for companies that align with their attitudes on environmental, social, and governance (ESG) issues and other values [14]. While energy companies have made significant strides in this area and often are among the top performers, their reputation still lags in some quarters.

In bringing greater awareness to industry careers, energy companies must also tap into their "hip side" to engage with their target workforce. That means they must be active and engaged where their messaging will be most visible and best received, such as on social media platforms like TikTok and Instagram, and they might even work to gain traction with the aid of social media influencers. They must continue to look for alliances with schools, community-based organizations, workforce systems, philanthropic groups, and others to raise awareness and understanding of employment opportunities in the sector. The industry would be well served to unify its messaging and showcase the vast opportunities available within the greater energy sector.
Many industries today have well-oiled marketing, advertising, and public relations campaigns that offer sizzle and shine to students, their influencers, and career seekers. The energy industry is behind in that positioning, demonstrating an opportunity for collaborative action.
2.7 Talent Retention

Given today's challenge of recruiting talent, especially in an industry that relies so heavily on those with science, technology, engineering, and mathematics (STEM) skills – disciplines in high demand – energy companies are paying increasing attention to retention. Accenture recently authored a "Point of View" on Reinventing the Utility Employee Experience, addressing employer questions about how to ensure they do not lose the talent they have worked so hard to attract and train. The publication suggests five things companies should focus on to create a work environment that supports employee longevity: enabling continuous learning; listening to what employees need; using technology to enable flexible work arrangements; championing workforce well-being and equality; and setting and sharing people metrics [10]. The authors, and others who study workforce cultures, stress the importance of focusing on the full needs of employees,
including their mental health and personal aspirations, realities that have never been more apparent than during the pandemic.

Much has been written about the new work environments spawned by the pandemic. Millions of Americans successfully worked from home for 2 years, and they expect such flexibility in a post-pandemic era. Those who struggled working remotely want to be in the office on a routine basis for social, professional, and personal reasons. Still other employees, especially those on the front lines of generating and transmitting energy, had to show up to work daily while they watched coworkers and peers work from the convenience of anywhere and everywhere. Energy companies and other businesses will need to navigate the "what's next" in office norms. Hybrid work models are likely here to stay, as is reliance on technology to connect with people, likely diminishing some expectations of travel. Energy leaders will need to keep a close eye on what is taking place in other sectors to remain competitive under newly defined norms and to ensure they do not lose employees to businesses with more flexibility or cultures that better nurture individuals' needs.
2.8 Conclusion

The energy sector preceded many industries in its long-term focus on workforce development. Many are catching on and catching up now as labor shortages wreak havoc on business operations, and these newly focused actions will further complicate the challenges of staffing the energy workforce. Energy leaders must engage in strategic workforce planning to ensure they have what has been dubbed the "5 Rights" – the "right number of people, with the right skills, in the right place, at the right level and at the right cost" [15]. Some examples of energy leaders follow.

Raquel Mercado
VP of Environmental, Health & Safety
AVANGRID
There has never been a more exciting time to be in energy. Traditional service providers have been shifting from a singular focus of "keeping the lights on" to becoming an integral part of our customers' lives, as the dependency on technology and the quest for energy options continue to expand. The competencies of the
future workforce will need to include a propensity for data, whether they are direct data wranglers or interpreters for decision-making. The notion of building fully connected, highly digitized utilities to effectively manage the increasing complexity of the grid is quickly becoming a reality – and with that comes the demand for data-driven decision-makers. Another key attribute is the ability to adapt to change at unprecedented levels. Here, I am referencing skills beyond traditional change management: the skills required to succeed amid a myriad of multi-level changes and to keep up. Today's energy companies are on a steep learning curve, trying to keep up with new technology enablers and regulatory demands, all geared to providing optimal service to their customers. So, how does one survive in a high-stakes game of building the very best energy options to meet new demands? Personal resilience and emotional intelligence will serve as a foundation for the interpersonal influencing skills needed to collaborate with many stakeholders in this very agile environment. Creativity and innovation to advance the energy models will require highly effective teams – non-cognitive and cognitive skills will both be needed (e.g., interpersonal interaction and integrity will be just as important as critical thinking).

Being a woman in the energy industry has come with its challenges, but it has also been a great motivator that pushes me to continue to grow. Now, more than ever, companies are focusing on the power of diversity to propel new models that can advance the agenda. It is a great time to be a woman in energy – know that you will be challenged and that you are on a journey with peaks and valleys. If you want to stay on top of your game, get comfortable with change, develop habits around continuous learning, and do not lose sight of balancing deliverables with kindness – your personal brand matters.
The key is to gather as many tools as possible for your toolbox along the way. Celebrate your accomplishments and learn from your failures. In my own journey, I have a varied skill set in traditional transmission and distribution (T&D) engineering and operations, but I have also taken on stretch assignments when asked. I have managed major project portfolios, served as chief of staff (T&D) for the CEO Office, led my company's Innovation Office, and I am currently leading our workforce sustainability effort as part of the Human Resources team.

As you shape your own path, focus on building your network. Your network should include allies, mentors, and sponsors – and yes, you will need all three. An ally typically is someone you trust and who can speak positively about you to higher levels; a mentor typically is a few levels above you and can provide advice on work and career trajectory. Finally, a sponsor has the ability to champion you and can, in fact, impact your trajectory (e.g., they can leverage their assets to help you advance). Building your network should be a constant. Know that you will evolve along the way. Your team of supporters should be both inside and outside your organization. Do not forget to show gratitude to those who take the time to play a key role in your development – and finally, do not forget to pay it forward.
Angie Farsee
Vice President, Human Resources
Georgia Transmission Corporation
I have to start with my first mentor, Clara Shorter. I started working when I was 14 for a youth employment organization, and I reported to Clara. As a person of color, I could relate to Clara. Not only did she set an example for being a business professional, but she also taught me office etiquette. I used what she taught me to pursue a business career and took advantage of my high school's cooperative education program, which allowed me to finish high school and sharpen my skills while on the job.

When I transitioned into human resources, I had an opportunity to get to know and reach out to many talented human resources professionals. I became a member of the Society for Human Resource Management (SHRM) early in my human resources career. The organization provides a great opportunity for individuals to learn and share best practices among peers. As a human resources professional, I give credit to Judy Bohrofen for being a great mentor and playing a huge role in helping me learn the business side of human resources. I reported to Judy while I was working for InterCoast Energy, the unregulated subsidiary of MidAmerican Energy. This came about when our corporate offices moved from Davenport, Iowa, to Des Moines, Iowa. With the move, our CEO wanted to add a human resources executive to the team. Initially, I felt slighted, but when I met Judy, I understood why the organization and I needed her! Judy understood the financials and the strategic role human resources played in the success of the company. Judy was a great teacher, and through her I gained an understanding of how to look at the "big picture."

Recently, I have transitioned into "paying it forward." I, along with several of my electric cooperative human resources peers, contributed to developing a mentoring program for electric cooperative human resources professionals.
The mentor program pairs experienced human resources professionals with those who are new to the role, and, in addition to helping develop the program, I served as a mentor. Finally, within my organization, we have a job rotation program, and each program participant is assigned an executive. To date, I have had the privilege of mentoring two female rotation program participants.
Clarissa Michaud
Southern Company
Women are underrepresented in the energy industry, despite being just as qualified to enter it as men. I would recommend that other women consider careers in energy: the industry is growing rapidly, with departments covering many aspects of operations and beyond, from finance to safety. There is significant room for growth within companies and across these departments, enabling employees to become well-rounded and find niches that appeal to them.

The SkillBridge program enables transitioning servicemembers to try a new career without long-term commitment or financial instability. While retaining their military salary, servicemembers gain valuable civilian work experience in a field of their choosing, build a network, have an interview opportunity, and gain insight into the functioning of civilian organizations, all without creating an obligation if the industry or company ends up not being an appropriate fit. Even those who choose alternate opportunities have gained valuable work experience and networking opportunities. I participated in the SkillBridge program as an intern at a wind operations facility and was offered full-time employment upon completion, enabling me to leave the military with another career already started.

The military trains people in many skills that are valuable in the workplace, including time management, leadership, attention to detail, and especially how to work with individuals of all backgrounds. Regardless of military occupation, servicemembers learn how to treat others with dignity and respect, adhere to a set of values, and place safety in a position of utmost importance. Servicemembers learn many transferable skills such as supply chain management, personnel management, computer skills, and information collection and dissemination, especially as officers. Many servicemembers are not aware of the skills and talents that they have, or how those can be useful in the civilian sector, as it does take some translation to convey.
I was a 19A Armor Officer in the Army, and I spent most of my time on tanks. Therefore, I had a lot of experience with organizing maintenance schedules and ordering parts, resourcing all classes of supply, property management, and personnel management. I am now in renewable operations on a wind site, and I have found that much of my Army knowledge can be applied to wind turbines, as the basic concepts remain the same. Tracking maintenance status and service schedules,
using digital logistics systems, anticipating resourcing needs, and coordinating contracts are all skills that seamlessly carried over to my new career. Once the learning curve from military terminology to the field's terminology is complete, the general concepts remain the same.
Suzy Macke
Journeyman Lineman/Troubleman
Duke Energy
In 2009, I was 34 years old and struggling to find a career that would allow me to help support my family while holding my interest and providing daily challenges. I did not have a degree and found myself working multiple jobs just to make ends meet. One day my husband suggested that I apply for an open lineworker position. "What's a lineworker?" I asked. I had always been independent and had worked at jobs that were physically challenging, outdoors, or outside the norm for what was "expected" of a young woman, and while I loved parts of all of them, they all had me yearning for something more. Through all of my experiences, I had never thought of the opportunities of the energy industry. Little did I know that a career in one of the most exciting positions in this industry would offer me all of those things and so much more. It has not always been the easiest of roads to travel, but, most times, the truly rewarding ones never are.

It took multiple rounds of applying and testing before I was offered a position to start my apprenticeship. I was going to get paid to learn. I excitedly began a 5-year journey to become a lineworker. I was trained by amazing professionals who taught me the trade and the importance of safety. They encouraged me to learn and grow every day. There were, of course, challenges. Working in a male-dominated field full of strong personalities presents an interesting environment. From finding proper-fitting gear and equipment to learning to leverage my tools instead of my body, I continued to improve and grow. I earned the respect
of my peers with each accomplishment and as I passed each test. And then, one day, I became a journeyman lineman!

I have found not just a job, but a CAREER. The industry is constantly growing and evolving, allowing endless opportunities to learn and teach. Electricity is used in almost every corner of the globe, so the possibilities are limitless on where you can choose to live and work. The rewards are never-ending when you think of the service you provide to family, neighbors, and even strangers. I am honored by the gratitude customers express when I turn their lights back on, when a furnace turns on to heat their home, and when their refrigerator hums back to life and begins to cool a week's worth of groceries. I am humbled by the cheers from little boys and girls when they no longer have to be afraid of the dark.

As a woman in the field, I have a unique opportunity and responsibility to help diversify our industry by being a role model and showing that women belong in this career. I take pride in the fact that I am challenging the long-standing belief, held in some companies and schools, that this work is for men only. I am also proud that I can inspire and support our next generation of women. I am proud of the struggles I have endured, the challenges that I have faced, and the accomplishments I have made. I can truly say that the sky is the limit as a lineworker, and the lights are not the only thing shining brightly at the end of my workday.

Kim Greene
Chairman, President, and Chief Executive Officer
Southern Company Gas
If you care about making a difference and want to make the world better, you need to be a part of the energy industry. Energy makes life better, and innovation is critical for the energy industry to continue to deliver both traditional and new forms of energy safely, reliably, and efficiently.

The energy industry is changing and growing. At a time when we need to move faster toward a clean energy transition and provide new products and services to our customers, it is crucial that this industry has the right mix of mindsets and perspectives to meet the challenges ahead. Involving more women and more people from diverse backgrounds in the work of the energy sector is about making sure we capture all the perspectives we will need to navigate the transition well. As a woman in the energy industry, I have found success by building strong business relationships. This large network, filled with talented and passionate men and women, fuels my passion for energy and my professional drive. This industry needs the brightest minds and most dynamic thinkers to solve the world's energy challenges, and we are embracing innovation and new technologies.
Do not assume that the energy sector is not for you based on stereotypes. Misconceptions can quickly discourage people from pursuing a role in a STEM field before they have properly researched it. Speak to people in the industry to get first-hand insight into what it is like and what the opportunities are, and ask questions to find out if it is for you. This is an exciting time to become part of the energy industry workforce. If you want to positively impact our communities, our economy, and our environment, then a role in energy will inevitably be the perfect fit.
References

1. Center for Energy Workforce Development, "Gaps in the Energy Workforce: 2021 Pipeline Survey Results," https://cewd.org/about/2021-gaps-in-the-energy-workforce-pipeline-survey-results/
2. Center for Energy Workforce Development, "Gaps in the Energy Workforce: 2019 Pipeline Survey Results," https://cewd.org/about/2021-gaps-in-the-energy-workforce-pipeline-survey-results/
3. Jim Thomson, Brad Denny, Ben Jones, and Kate Hardin, "The decarbonized power workforce," Deloitte Insights, June 9, 2021, https://www2.deloitte.com/us/en/insights/industry/power-and-utilities/decarbonization-strategy-power-workforce.html
4. US Department of Energy, United States Energy & Employment Jobs Report, 2021, https://www.energy.gov/us-energy-employment-jobs-report-useer
5. Misty L. Heggeness, Jason Fields, Yazmin A. García Trejo, and Anthony Schulzetenberg, "Moms, Work and the Pandemic: Tracking Job Losses for Mothers of School-Age Children During a Health Crisis," United States Census Bureau, March 3, 2021, https://www.census.gov/library/stories/2021/03/moms-work-and-the-pandemic.html
6. Reforming the Energy Vision (REV) – New York Workforce Impact Study, National Grid – Strategic Workforce Planning, Human Resources, April 2018.
7. Interview with Will Markow, vice president of Applied Research – Talent, EMSI Burning Glass, November 16, 2021.
8. U.S. Bureau of Labor Statistics, National Industry-Specific Occupational Employment and Wage Estimates, May 2020, https://www.bls.gov/oes/2020/may/oessrci.htm
9. Edison Electric Institute, Issues & Policy: Electric Transportation, https://www.eei.org/issuesandpolicy/electrictransportation/Pages/default.aspx, accessed April 7, 2022.
10. Accenture, "Reinventing the Utility Employee Experience: A framework for a new work environment," 2021, https://cewd.org/wp-content/uploads/2021/02/Accenture-CEWD-utility-workforce-POV.pdf
11. Amy Adkins, "Millennials: The Job-Hopping Generation," Gallup Workplace, May 2016, https://www.gallup.com/workplace/231587/millennials-job-hopping-generation.aspx
12. Morgan Smith, "Gen Z and millennial workers are leading the latest quitting spree—here's why," CNBC, September 7, 2021, https://www.cnbc.com/2021/09/03/gen-z-and-millennial-workers-are-leading-the-latest-quitting-spree-.html
13. U.S. Bureau of Labor Statistics, "Percent of private industry workers participating in retirement benefits: defined benefit plans; in utilities," National Compensation Survey – Benefits (Series ID NBU29044000000000026291), https://www.bls.gov/ncs/ebs/benefits/2021/home.htm
14. Ed O'Boyle, "4 Things Gen Z and Millennials Expect From Their Workplace," Gallup Workplace, March 30, 2021, https://www.gallup.com/workplace/336275/things-gen-millennials-expect-workplace.aspx
15. Ryan Hill, "Building Reliable Workforce Forecasts," presentation, Gartner, Inc., 2020.
M. Henriksen et al.
Missy Henriksen has always prioritized advancing programs, causes, and processes by connecting people. This has been evident in her lifelong commitment to volunteerism and engagement. She was the Student Government, newspaper editor, service-club chair child who grew into the PTA president, Girl Scout leader, and team mom adult. It is no surprise that she has valued those same opportunities to mobilize people into effective action throughout her professional life as well. Other than a brief stint on Capitol Hill, she has spent her entire career working with and for nonprofits, mostly trade associations, uniting stakeholders to move the needle within their industry or profession. Today she serves as the Executive Director of the Center for Energy Workforce Development (CEWD), a post she has held since December 2021. CEWD is an organization committed to ensuring a skilled, diverse energy workforce through its programming to (1) increase awareness of energy careers; (2) support the industry's efforts to develop a diverse, equitable, and inclusive workforce; (3) support energy companies in preparing people for increasingly technical and dynamic energy careers; and (4) support the industry's workforce development practitioners with the resources, data, information, and education they need. She recognizes the tremendous significance of leading workforce development initiatives in the energy sector in today's environment, a time that calls for increasing innovation, stewardship, and corporate responsibility. She began focusing on the workforce development space a few years earlier at the National Association of Landscape Professionals (NALP), where she helped to establish its workforce development initiative while also working to increase consumer understanding of the importance of healthy lawns and landscapes.
Similarly, she worked for several years as the Executive Director of the Professional Pest Management Alliance, educating the public about the health and property risks associated with pests. One of the most interesting times of her career was serving as a primary spokesperson for the pest management industry during the resurgence of bed bugs in 2010 when she conducted more than 500 interviews on this pest through almost every major print and broadcast news outlet. The majority of her career was spent with the American Composites Manufacturers Association where she held many positions, including service as its Executive Director. Henriksen studied Rhetoric and Communication and Government Relations at the University of Virginia, where she received her Bachelor of Arts degree.
2 Attracting, Training, and Retaining a Skilled, Diverse Energy Workforce in the USA
Rosa Schmidt received a master's degree in Organization Development and Human Resources with high honors and distinction from American University in 1995. She never dreamed she would go to college, let alone receive a master's degree. She grew up in a very strong traditional ethnic background with the expectation that, as a woman, she would marry by age 18 and either have children and stay home as a housewife or work as a secretary. Her parents had never completed high school; instead, they had to work to help support their families. So Rosa met everyone's expectations in her community: she married at age 18, had a child a year later, and worked as a secretary. Growing up, Rosa worked as a clerk and a travel agent, becoming a legal secretary upon her high school graduation. Her parents saw this as an accomplishment, but for Rosa it was very unfulfilling. Sitting at a desk typing all day was something she did not enjoy, nor did she find it challenging or rewarding. After all, Rosa was always very social and loved being around people. One day, she learned of a clerk position in the energy industry by chance through a friend, as many others do when it comes to the energy industry. Even though the job was similar to that of a secretary, it had variety, and it also provided Rosa with great insight into the many career options available in the industry. That is when she discovered Human Resources and fell in love with that kind of work. After all, the job consisted of dealing with people, and how awesome was that? Rosa was fortunate to have a leader in the organization who saw her potential and took the time to encourage her to recognize her talents and strengths and to believe in her abilities. As a mentor, he guided her career by encouraging her to go back to school and obtain her degrees. Without this mentor, she would never have made the leap to apply for positions that catapulted her from an administrative role to one in management.
She went back to school while raising a family and working full time, completing both her associate's degree and bachelor's degree through the tuition reimbursement program offered by Public Service Electric & Gas. Rosa learned quickly that the opportunities within PSE&G were many; she felt challenged by every job and was promoted throughout her career within the company. She is proud to say that she began her career as a clerk and ended up in a senior position in human resources, helping lines of business with the development and implementation of human resources strategies. During her tenure, she also earned her Senior Professional in Human Resources (SPHR) certification from HRCI and a Senior Certified Professional credential from the Society for Human Resource Management (SHRM), and she has maintained those certifications to this day. Rosa's experiences growing up have fueled her commitment to helping young women understand that opportunities are endless and that they can achieve anything they set out to do. The need to believe in yourself is so critical, especially when you do not have someone else who believes in you. As a result, over the years Rosa has mentored many young ladies living in underprivileged areas, helping them see their value and believe in their own capabilities. She finds every opportunity to take a young girl under
her wing and help her find those hidden talents, begin believing in her abilities, and dream big. Rosa has been the recipient of the Tribute to Women and Industry Award as a result of her growth and accomplishments in the energy industry. She was also honored to receive the Women of Distinction Award from Soroptimist International, recognizing her many accomplishments. In addition to these awards, Rosa has been recognized for many of her volunteer efforts to help youth and the underprivileged through her board leadership: she has served as Chair of Junior Achievement of New Jersey and currently sits on the board of the Salvation Army. Giving back to those in need is at the core of who Rosa is. Rosa is also proud to say that she retired from the industry and started her own Human Resources consulting firm, something she never thought possible, and now, as part of the Center for Energy Workforce Development, her mission is to help others learn about the energy industry and the many great careers it offers. Very often, she asks: "Would I be where I am today if it were not for the energy industry and its 25 years of endless opportunities, learning, and growing?" She wants everyone to know that anything is possible if you set your mind to it, especially girls and young ladies who often undervalue their worth and talents.
Angie Farsee established her human resources career shortly after graduating from high school. Life experiences greatly influenced and prepared Angela (Angie) Farsee, PHR, for a career in human resources. The completion of her education, however, took a longer path. Farsee entered Kennesaw State University as a non-traditional student 20 years after graduating from high school to complete her major in business administration while working full-time. Farsee recognized that, while she had a wealth of on-the-job experience, her career efforts would not be complete until she obtained her degree. She stayed true to her objectives and completed her degree with the distinction magna cum laude. Not wanting to stop there, she entered Kennesaw State University's MBA program and completed her MBA. Along with her degrees, she holds her professional certification in human resources. Farsee grew up the eldest of five siblings in a small industrial community in the Midwest during the height of the Civil Rights Movement. While the employment opportunities for many members of the community, including African Americans, were focused on work in the factories, career pathways for African Americans were beginning to broaden. Farsee was exposed to African Americans who were entrepreneurs, service workers, educators, and business professionals. This was also a time when women were becoming more independent, as showcased on television shows such as "The Lucille Ball Show" and "Julia." Farsee envisioned herself working as a secretary in an office, just like Lucille Ball.
Farsee's development into a human resources professional began early. Being the eldest of five siblings taught her to be accountable for others and how to lead others. Farsee's first glimpse into a professional career presented itself in elementary school when she was nominated to be class secretary. In junior high school, she began working for a youth-oriented work program as an office assistant. There she met her first mentor, an African American woman who taught her business etiquette. During her high school senior year, she entered the cooperative education program to gain additional secretarial work experience. She was assigned to work for a financial institution and, during her assignment, was given an opportunity to assist the secretary to the chairman of the board and other senior executives. Upon her high school graduation, the financial institution hired her full-time, and there she began her career in human resources and subsequently became the first African American to be appointed to the title of assistant vice president. Her next opportunity introduced her to a human resources career in the energy industry. In addition to Farsee's nearly 40 years of service in human resources, she has been active in numerous organizations promoting the human resources profession. While living in the Midwest, she served on the boards of her favorite local United Way agencies and volunteered as a mentor for an at-risk female youth group. Upon relocating to the Southeast, she became active in the local SHRM chapter. Farsee shifted her focus after recognizing that the energy industry could face a wave of attrition due to retirements. She subsequently became one of the founding organizers of Georgia's statewide energy workforce consortium, which builds a talent pipeline and promotes careers in the energy industry, and she currently serves on its executive committee.
Farsee has assisted in developing a mentoring program utilized by NRECA to assist new cooperative energy human resource professionals. Farsee also serves as parliamentarian of the executive committee for the generation and transmission energy cooperative’s human resources organization.
Chapter 3
Electricity Regulation in the USA
Angela V. (Angie) Sheffield and Christina V. Bigelow
3.1 Introduction

Electric utilities began to spring up in the 1880s and were rapidly spreading across the USA by the early 1900s. As electricity became a valuable, even critical, commodity for the country, Congress identified the need to regulate the electric utility industry and, in particular, to oversee the financial management and operations of electric utilities to protect consumers. As a result, the first federal law governing electric utilities – the Federal Water Power Act of 1920 – was enacted. Since that time, the electric utility industry has continued to grow, evolve, and become even more critical to national security, economic security, and public health and safety. Throughout this evolution and growth, additional laws and regulations, at both the state and federal levels, have been enacted to ensure that the critical infrastructure serving as this country's electric backbone remains secure, reliable, and accessible. Now, this complex, multi-jurisdictional regulatory framework addresses topics including:

• Entity structure, securities, and financial health (e.g., set rates of return)
• Reliable, secure system operations
• Non-discriminatory market access and market power mitigation
• Just and reasonable consumer rates
• Environmental, labor/employment, tax, and other typical business regulations
A. V. (Angie) Sheffield
Georgia Transmission Corporation, Tucker, GA, USA
e-mail: [email protected]
C. V. Bigelow
Pine Gate Renewables, Asheville, NC, USA
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
J. S. Tietjen et al. (eds.), Women in Power, Women in Engineering and Science, https://doi.org/10.1007/978-3-031-29724-3_3
This framework underlies the overall environment in which electric utilities operate and has shaped the industry's approaches and commitment to compliance. Today, electric utilities regard compliance as an essential element of operational excellence, reliability, and security. Simply put, compliance and compliance assurance are both recognized as vital to successful, reliable, and secure utility operations. In recognition of the criticality of compliance and compliance assurance to the electric utility industry (a national critical infrastructure sector) and to electric utility operations, the purpose of this chapter is to provide a broad perspective of current electric utility regulation and how it has shaped, and continues to shape, compliance as an essential part of electric utility culture. We will provide an overview of the players, policies, and issues involved in the regulation of the electricity sector. The chapter will also touch on common utility structures and the electricity regulatory landscape particular to those structures. Finally, since the electric utility industry is such a complex, highly regulated industry, we will also explore the various compliance approaches typically implemented by utilities and discuss best practices for ensuring compliance and, thereby, operational excellence, reliability, and security.
3.2 Electric Utility Industry Structures

The electric utility industry is immense, both economically and geographically. Per the US Energy Information Administration (EIA), in 2021 there were an estimated 3300 electric utilities serving retail customers in the USA [1]. Across these various electric utilities, the ownership, governance, management, and regulatory structures are diverse and complex. The structures of the utilities are distinguished primarily by ownership: private or public. This section describes the various electric utility structures and their governance, highlighting their similarities and differences. Private sector electric utilities include entities such as investor-owned utilities (IOUs) and independent power producers (IPPs). These electric utilities typically are owned by shareholders and operate on a for-profit basis. IOUs are generally larger than public sector electric utilities from both a financial and a service territory perspective. Serving larger urban and rural areas, IOUs have been estimated by the EIA to serve the majority of load in the USA, likely greater than 72% today [2]. Quite a few IOUs are organized as holding companies, or parent companies, with multiple subsidiaries or affiliated companies. Board members elected by each electric utility's shareholders govern the IOU and set overall policy for each electric utility. IPPs have emerged in the electric utility industry more recently. Their emergence was prompted, in part, by the issuance of regulations intended to foster competition and equal access within the electric utility industry and, in part, by the movement toward retail competition. IPPs are private entities that own and operate electric
generation facilities to sell the resultant energy to a customer – such as an electric utility, government agency, or large end user. IPPs are typically private sector entities but may also be public. Additionally, a company that generates power for its own operations, but whose primary business is not electric generation, can feed its excess energy back into the system and, therefore, may also be considered an IPP. In contrast to private sector electric utilities, public sector utilities are consumer-owned (COUs), i.e., the ratepayers own the utility rather than corporate shareholders. COUs include municipally owned utilities (Munis), public power and irrigation districts (PPUs), rural electric cooperative utilities (Co-ops), tribal utilities (TUs), state agencies or entities, and other government or quasi-governmental entities. COUs also include, at the federal level, the Tennessee Valley Authority (TVA) and federal power marketing administrations (PMAs), such as the Bonneville Power Administration. COUs serve most of the remaining load in the USA [2]. Munis and PPUs are not-for-profit entities operated as divisions of local government (similar to public schools and public libraries). They are often governed by a local city council or board, which can be elected or appointed. Since Munis and PPUs are owned by the community, citizens have a direct voice in decisions such as establishing retail electricity rates and determining sources of generation. Co-ops are similar to Munis and PPUs in that they are not-for-profit and member-controlled, typically via member-elected Boards of Directors. The growth of Co-ops was fueled by the establishment of the Rural Electrification Administration (REA), an agency born out of Franklin D. Roosevelt's New Deal in 1935. The purpose of the REA was to promote rural electrification by providing low-cost federal loans to encourage utilities to build electric distribution facilities to serve rural areas of the USA.
Co-ops consist of distribution Co-ops that deliver power directly to their members and generation and transmission (G&T) Co-ops that provide wholesale power to the distribution Co-ops [3]. Other players in the electric utility industry include the PMAs and hybrid entities such as market operators. PMAs operate and sell the electrical output of federally owned and operated hydroelectric dams [4]. Although their primary objective is to market power, they also operate the electric systems that enable that mission. PMAs operate as separate agencies under the authority of the Department of Energy (DOE) and may be overseen by boards with appointed directors. Market operators typically operate as not-for-profit entities that are privately governed by a Board of Directors. Market operators are classified as either independent system operators (ISOs) or regional transmission organizations (RTOs). Their primary purpose is to promote economic efficiency, ensure grid reliability, and enforce non-discriminatory practices. The concept of ISOs was first introduced in 1996 [5]. Then, in 1999, the Federal Energy Regulatory Commission (FERC or sometimes "the Commission") encouraged the voluntary formation of market operators [6]. Figure 3.1 provides a breakdown of US electric utilities by ownership type as of 2017.
Fig. 3.1 US Electric utilities by ownership type (2017)
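The ownership-based taxonomy described in this section can be summarized in a small data model. The sketch below is illustrative only: the enum, dictionary, and helper names are our own, and the one-line descriptions paraphrase the text above rather than any official classification.

```python
from enum import Enum, auto

class Ownership(Enum):
    PRIVATE = auto()  # shareholder-owned, for-profit
    PUBLIC = auto()   # consumer-owned (COU) or governmental

# Utility structures from this section, keyed by common abbreviation.
UTILITY_STRUCTURES = {
    "IOU":   (Ownership.PRIVATE, "investor-owned utility"),
    "IPP":   (Ownership.PRIVATE, "independent power producer"),
    "Muni":  (Ownership.PUBLIC,  "municipally owned utility"),
    "PPU":   (Ownership.PUBLIC,  "public power or irrigation district"),
    "Co-op": (Ownership.PUBLIC,  "rural electric cooperative (distribution or G&T)"),
    "TU":    (Ownership.PUBLIC,  "tribal utility"),
    "TVA":   (Ownership.PUBLIC,  "Tennessee Valley Authority (federal)"),
    "PMA":   (Ownership.PUBLIC,  "federal power marketing administration"),
}

def is_cou(abbreviation: str) -> bool:
    """Return True if the structure is a public-sector (consumer-owned) utility."""
    return UTILITY_STRUCTURES[abbreviation][0] is Ownership.PUBLIC
```

For example, `is_cou("Co-op")` is true, while `is_cou("IOU")` is false; market operators (ISOs/RTOs) are deliberately omitted because the text describes them as hybrid entities rather than an ownership category.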
3.3 Electric Utility Regulation

Electric utility regulation is complex, multi-faceted, and multi-jurisdictional. Electric utilities may be regulated by more than one agency or jurisdiction, as the regulatory framework has been established to respect the jurisdictional boundary between the federal and state governments. Where an electric utility's regulatory authority sits within this framework is driven by the scope, location, and characteristics of its structure and operations. Simply put, states and, sometimes, municipalities or other local governments regulate an electric utility's local distribution and retail sale of electricity for both the private and public sectors. Where an electric utility's operations are solely intrastate, public service or utility commissions (PS/UCs) or other local or state agencies have sole regulatory authority over rates and other utility operations within the state. Congressional authority to regulate the interstate transmission of electricity and the wholesale sale of electricity, whether bilaterally or through markets, has been delegated to the DOE and/or its independent agency, FERC, which has, in some instances, further delegated its authority. Exceptions do exist, even within this already complex framework. PMAs are typically regulated through their DOE structure and their Boards of Directors. TUs are subject to regulation by their sovereign tribal authority. These electric utilities, along with Co-ops that utilize financing provided through the Rural Utilities Service (RUS), are exempt from regulation by FERC, subject to limited exceptions, such as the regulation of the reliability of the Bulk Electric System (BES), wholesale power sales, and reciprocal transmission services. COUs are typically subject to oversight by one or more of the following: their board, their local government's regulatory body, or their applicable PS/UC.
Conversely, Co-ops using financing provided through the RUS are regulated by the RUS and, in some jurisdictions, their applicable PS/UC. Finally, utilities operating within the State of Texas may be subject solely to the authority of the Texas PS/UC, given the state's operation of a separate interconnection and market that are wholly contained intrastate. It is important to note that, despite the complexity of the jurisdictional boundaries discussed above, electric utilities are typically organized as corporations or, in some
cases, municipal or governmental departments or agencies. As a result, these organizations are also subject to the same regulatory authorities as any other business or governmental/quasi-governmental entity, such as the Securities and Exchange Commission (SEC), Internal Revenue Service (IRS), and the Occupational Safety and Health Administration (OSHA). Furthermore, other federal agencies with utility-specific regulations include the Nuclear Regulatory Commission (NRC), the Environmental Protection Agency (EPA), and the Commodity Futures Trading Commission (CFTC). The North American Electric Reliability Corporation (NERC) is an international regulatory authority that derives its authority from a delegation by FERC. The division in state and federal oversight that affects electric utilities originates from and correlates directly to the division of federal and state powers contemplated in the US Constitution. In particular, the Commerce Clause, which is Clause 3 of Article I, Section 8, of the Constitution, allows the federal government to “ . . . regulate Commerce with foreign Nations, and among the several States, and with the Indian Tribes.” Simply put, the Commerce Clause grants the federal government broad authority to affirmatively regulate activities that have an effect on commerce between the states. The Commerce Clause cannot be read in isolation – it must be read in consideration of the Tenth Amendment to the Constitution – which provides that “[p]owers not delegated to the United States by the Constitution, nor prohibited to it by the States, are reserved to the States respectively, or to the people.” This amendment serves to reinforce that all powers that were not specifically enumerated to the federal government were retained by the states and provides each state with broad authority to govern and regulate persons and businesses for the protection of the health, safety, and welfare of its citizens. 
As a direct result of these provisions, electric utilities operate within a complex regulatory framework and, often, with both federal and state oversight. When the commercial electric industry began to develop in the late nineteenth century, regulation at both the state and federal levels also began to develop. The defining delineation of oversight authority between the federal and state governments occurred in 1996 with the issuance by FERC of Order 888 [7]. This delineation of jurisdictional oversight, which was affirmed by the Supreme Court in New York v. FERC, 535 U.S. 1 (2002), still provides the foundation for overall jurisdictional oversight of electric utilities within the USA. In Order 888, FERC determined that it has authority over wholesale sales of electric energy, wholesale transmission service, and the transmission component of unbundled retail rates, while PS/UCs have authority over the distribution and generation components of retail service as well as the transmission component of bundled retail service. Since Order 888, there has been significant, rapid evolution and change within the electric utility industry, which has necessitated further evolution of its regulatory oversight. The following sections delve into the basics of jurisdictional oversight, the applicability of general business obligations, and the evolution of the regulatory framework. Figure 3.2 provides a high-level illustration of the regulatory constraints that comprise electric utility regulation.
Fig. 3.2 Electric Utility Regulatory Constraints
3.3.1 State Commission Jurisdiction

As the electric utility industry began scaling into commercial operation, the states, acknowledging that electricity had become an essential service and recognizing that utilities were natural monopolies and therefore not subject to competitive price controls, identified the need to regulate the services and rates of electric utilities. By 1920, several states had developed regulatory frameworks to oversee utilities – predominantly regulation by PS/UCs or other government entities. Currently, all states subject most of the electric utilities operating within their borders to regulatory oversight through a defined regulatory body and process. Under state law, PS/UCs have an obligation to ensure that customers receive safe and reliable utility services and that those services are provided at rates that are fair and reasonable for customers. PS/UCs primarily regulate IOUs, but, while not common, the scope of PS/UC oversight occasionally extends to COUs. The traditional state-level regulatory scheme provides each electric utility with a franchised/dedicated service territory in which the electric utility offers the distribution and sale of electric energy to end users as a bundled product. Alternatively, some states have introduced a regulatory scheme that allows for retail choice, which preserves the concept of a franchised territory for distribution but allows electric energy to be sold as a commodity to end users. Electricity is an essential service, the loss of which can adversely impact the health and safety of a state's citizens. Accordingly, consistent with the Tenth Amendment, PS/UCs are typically focused on ensuring that utilities (whether electric or otherwise) provide their customers with access to adequate, reliable, safe, and efficient service at just and reasonable rates. This focus and the overall scope of oversight have evolved as the electric utility industry has evolved.
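The "just and reasonable rates" standard is typically implemented through cost-of-service ratemaking. As a sketch (the formula below is the standard textbook formulation rather than anything taken from this chapter, and the numbers are hypothetical), a commission-approved revenue requirement recovers the utility's cost of providing service plus a reasonable return on its invested capital, the "rate base":

```python
def revenue_requirement(operating_expenses: float,
                        depreciation: float,
                        taxes: float,
                        rate_base: float,
                        allowed_rate_of_return: float) -> float:
    """Textbook cost-of-service formula (illustrative): recover the cost of
    providing service plus a reasonable return on the rate base."""
    return operating_expenses + depreciation + taxes + rate_base * allowed_rate_of_return

# Hypothetical figures in millions of dollars: $100M expenses, $20M
# depreciation, $10M taxes, a $500M rate base, and an 8% allowed return.
rr = revenue_requirement(100.0, 20.0, 10.0, 500.0, 0.08)  # $170M
```

Rates are then designed so that expected sales across customer classes recover approximately this amount, which is why rate cases center on disputes over the rate base and the allowed rate of return.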
3.3.2 Federal Agency Jurisdiction

There was no federal regulation of the electric utility industry until 1920, when Congress enacted the Federal Water Power Act (FWPA), which focused on regulating the development of hydroelectric plants on navigable US waters. Since then, the overall delineation of jurisdictional oversight between the state and federal governments has been determined through various Congressional actions and the court decisions interpreting those enactments. One of the earliest cases, Rhode Island PUC v. Attleboro Steam and Electric Company, 273 U.S. 83 (1927), resulted in the reservation of the power to regulate interstate rates to the federal government. That decision created a gap in overall oversight that was not addressed until the enactment of the Federal Power Act in 1935. Since then, other enactments have continued to define, refine, change, and expand the scope of federal oversight via grants of authority to FERC. In addition to FERC, the RUS promoted and continues to promote electrification of rural America by providing financing to Co-ops. As discussed above, Co-ops with financing from the RUS are obligated to comply with various loan covenants and regulations that affect their operations but are exempted from the majority of other regulatory oversight. In 1946, as the development of nuclear energy within the USA advanced, the federal government created the Atomic Energy Commission, whose regulatory functions eventually transitioned to the NRC. Currently, electric utilities with nuclear generation facilities are subject to the oversight of the NRC for those facilities. The NRC focuses on the protection of health and safety as related to the development and operation of nuclear generation facilities. NRC regulations and oversight have also had to evolve as the electric utility industry has evolved.
3.3.3 General Business Obligations

Under the Tenth Amendment, states are free to adopt any scheme for regulating businesses operating within their borders, and other federal agencies may have oversight authority over an electric utility based on the scope of its business operations. In addition to the likely oversight by FERC and any applicable PS/UC, electric utilities that are constituted as business entities are, at a minimum, subject to the same level of regulation as other, similarly situated, business entities. More specifically, based on the scope of their business operations, electric utilities are also subject to oversight by numerous federal agencies such as the SEC, OSHA, IRS, EPA, CFTC, the Department of Labor (DOL), the Federal Trade Commission (FTC), and the Department of Justice (DOJ), as well as any corresponding state agencies and state Attorneys General. Areas of oversight and regulation include safety and health, labor and employment practices, revenue and taxation, anticompetitive and unfair trade practices, transportation, public health, and environmental regulation, among others. Electric utilities do not, by virtue
of their unique regulatory framework, experience any reduction in their overall oversight as business entities and, further, must be acutely aware of potential intersections between these agencies' jurisdictions and the unique regulatory framework in effect between FERC and their applicable PS/UC. These general business obligations are part of an electric utility's function as a business entity and must also be actively managed over the long term. This active management includes responding to changes in regulatory oversight and regulations as those regulations evolve and as utility operations transform.
3.4 Current Regulatory Landscape

3.4.1 Evolution of State/Federal Oversight

As discussed above, states were the first regulatory authorities to recognize the need to regulate electric utilities. This oversight began as a way to ensure that a state's citizens had the opportunity to receive the services of a utility company at a reasonable cost and has since evolved significantly to address the financial aspects of utilities and siting decisions for transmission lines and generation facilities. Currently, most state PS/UCs have regulatory authority to:

1. Assign a franchised service territory to each utility serving its citizens.
2. Ensure that utilities serving its citizens supply reliable, cost-effective service on a non-discriminatory basis within their service territories.
3. Approve rates for service based on each utility's cost of providing service plus a reasonable return on investment, and prohibit the charging of any other, unapproved rates.
4. Require utilities to make specific service improvements, including approval of resource or transmission expansion plans.
5. Approve any mergers, acquisitions, or transfers of control of significant company assets.
6. Oversee financial affairs such as PS/UC-approved accounting methods, debt and security issuances, and any proposed affiliate transactions.

Driven by the rise of market operators, IPPs, and other structures within the electric utility industry and the industry's evolving complexity, many PS/UCs have added to their traditional authorities and roles. These new authorities include areas such as considering and implementing market deregulation, establishing renewable generation targets or standards, and extending their regulatory authority to reliability and security. Indeed, as the electric utility industry has evolved, so have PS/UCs, their regulatory frameworks, and their overall areas of responsibility and coordination.
Such evolutions have included the development of state regulations to ensure the reliable, secure provision of service within a state, multi-state committees where a utility spans multiple states or where a market operator has members that span multiple states,
3 Electricity Regulation in the USA
and joint state/state and federal/state task forces intended to address issues identified as having complex, multi-jurisdictional impacts. A similar evolution has occurred within the regulatory oversight of electric utilities at the federal level. Since the FWPA, the following congressional actions and issuances have evolved and increased federal oversight of electric utilities: the Federal Power Act of 1935 (FPA), the Public Utility Regulatory Policies Act of 1978 (PURPA), the Energy Policy Act of 1992, and the Energy Policy Act of 2005 (EPAct 2005), with more issuances and actions on the horizon. Currently, FERC has regulatory authority over:
1. "[T]he transmission of electric energy in interstate commerce and . . . the sale of electric energy at wholesale in interstate commerce" (16 United States Code (U.S.C.) § 824(b)(1))
2. The "sale of electric energy at wholesale," which ". . . means a sale of electric energy to any person for resale" (16 U.S.C. § 824(d))
3. "[A]ll facilities for such transmission or sale of electric energy" (16 U.S.C. § 824(b)(1))
FERC's authority is generally applicable to "public utilities," defined as "any person who owns or operates facilities subject to the jurisdiction of the Commission" (16 U.S.C. § 824(e)). This jurisdiction specifically excludes "the United States, a State or any political subdivision of a State [, or] an electric cooperative that receives financing under the Rural Electrification Act of 1936" "or that sells less than 4,000,000 megawatt hours of electricity per year" in the absence of a specific provision to the contrary (16 U.S.C. § 824(f)). Prior to EPAct 2005, exercises or grants of authority to FERC that specifically included COUs or other exempt entities were infrequent and narrowly construed.
However, in EPAct 2005, FERC was granted specific authority over all electric utilities relative to the development and enforcement of electric reliability standards (ERS), an important evolutionary development discussed in detail in the next section. Otherwise, FERC's jurisdictional authority encompasses:
1. The assurance of the principle of Open Access through the filing and approval of all rates and charges "made, demanded, or received by any public utility" relating to FERC-jurisdictional activities (16 U.S.C. § 824d(1))
1.1. This includes the filing and approval of rules governing the interconnection of generators to the transmission system.
2. Licensing and regulation of hydroelectric generating facilities (16 U.S.C. § 797)
3. Adoption of rules encouraging cogeneration and small power production (16 U.S.C. § 824a-3)
4. Approval of proposed dispositions, mergers, consolidations, acquisitions, or changes in control involving assets subject to FERC's transfer authority (16 U.S.C. § 824a-4)
5. Approval of applications for authority to construct transmission facilities in national interest electric transmission corridors under limited circumstances (16 U.S.C. § 824p)
A. V. (Angie) Sheffield and C. V. Bigelow
6. Market regulation, which includes, but is not limited to:
6.1. Market power transparency and mitigation
6.2. Approval of the formation of ISOs/RTOs and the development and operation of their organized markets
6.3. Monitoring and enforcing market rule adherence, including sanctioning market manipulation (16 U.S.C. § 824v)
6.4. Ensuring price transparency in markets for the sale and transmission of electric energy in interstate commerce (16 U.S.C. § 824t)
As indicated above, a very important evolution in FERC's recent history was the congressional grant of authority to FERC (16 U.S.C. § 824o) to develop, implement, and enforce mandatory ERS for the BES, which includes all electric transmission elements operated at 100 kilovolts (kV) or higher and real and reactive power resources connected at 100 kV or higher. The BES does not include facilities used in the local distribution of electric energy. FERC was also granted authority to impose penalties in amounts now greater than $1 million per day per violation and to form an oversight Electric Reliability Organization (ERO) with delegated authority to facilitate FERC's oversight of reliability and security. This grant of authority was a significant expansion of FERC's powers: prior to EPAct 2005, FERC's authority to sanction or penalize public utilities was extremely limited, and FERC previously had little authority over a utility's operational or technical practices unless those practices affected Open Access or resulted in an abuse of market power or affiliate self-dealing. Since EPAct 2005, this expansion of FERC's regulatory framework and authority has driven major changes in electric utility practices and compliance efforts, becoming a major, unprecedented force for change across the entire electric utility industry.
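The 100 kV bright-line in the BES definition above lends itself to a small illustration. The sketch below is illustrative only: it assumes a toy element record with a nominal voltage and a local-distribution flag, and all names are hypothetical. The actual BES definition contains numerous inclusions and exclusions beyond this basic threshold.

```python
# Illustrative sketch of the basic BES bright-line test described above.
# Simplified model only; the real BES definition has many additional
# inclusions and exclusions (e.g., radial exclusions).
from dataclasses import dataclass

@dataclass
class Element:
    name: str
    nominal_kv: float          # operating voltage in kilovolts
    local_distribution: bool   # used in local distribution of electric energy

def is_bes(element: Element, threshold_kv: float = 100.0) -> bool:
    """Basic test: 100 kV or higher, and not a local distribution facility."""
    if element.local_distribution:
        return False
    return element.nominal_kv >= threshold_kv

elements = [
    Element("345 kV transmission line", 345.0, False),
    Element("115 kV substation bus", 115.0, False),
    Element("69 kV subtransmission line", 69.0, False),
    Element("138 kV local distribution feeder", 138.0, True),
]
for e in elements:
    print(f"{e.name}: {'BES' if is_bes(e) else 'non-BES'}")
```

Under this simplified test, the 345 kV and 115 kV elements are classified as BES, while the 69 kV line falls below the threshold and the local distribution feeder is excluded regardless of voltage.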
3.4.2 Evolution to Include Critical Infrastructure, Reliability, and Security

While EPAct 2005 was the single action that expanded FERC's authority over electric utilities in this way, it was not a sudden development or decision. In fact, it was the culmination of the evolution of the electric utility industry over several preceding decades. As the provision of electric service grew and expanded across the USA, the industry evolved from individual, isolated systems serving local loads to a broad, interconnected system in which a problem in one system could adversely affect an adjacent system. This increased interconnectedness and interdependency brought to light the need for cooperation and coordination between neighboring systems in both operations planning and real-time operations. Then, in November 1965, the Great Northeast Blackout occurred, affecting some 30 million customers and bringing electric utility operations to the forefront of the public eye. Shortly after the 1965 Northeast Blackout, the public, as well as government, began to focus
attention on the need for a reliable supply of electricity and the challenges of maintaining the stability and reliability of such a highly integrated system. One result of this focus was proposed legislation known as the Electric Power Reliability Act of 1967, which would have given the Federal Power Commission (FPC), the predecessor to FERC, some authority over electric power grid reliability and required the formation of regional reliability councils to help establish best practices and ensure cooperation and coordination within regions. This focus, along with recommendations by the FPC to create "a council on power coordination made up of representatives from each of the nation's regional coordinating organizations to exchange and disseminate information on regional coordinating practices to all of the regional organizations and to review, discuss, and assist in resolving matters affecting interregional coordination" [8], led to the establishment in 1968 of the National Electric Reliability Council, the predecessor to today's NERC. Over the next three decades, NERC expanded its organization and activities to meet its mission of promoting the reliability of the interconnected BES. During this time, NERC's rules and regulations were voluntary, with no legal repercussions if violated. Then, in August 2003, everything changed. The largest blackout in US history blanketed the northeastern section of North America. Regulation often follows events, and this event was no different. The devastation caused by the blackout led to the passage of EPAct 2005 and the formation of an Electric Reliability Organization (ERO). The role of the ERO is to improve the reliability and security of the bulk power system by developing and enforcing mandatory electric reliability standards (ERS), assessing seasonal and long-term reliability, monitoring the bulk power system through system awareness, and educating, training, and certifying industry personnel.
FERC approved NERC as the ERO, subject to FERC oversight, in 2006. Since then, NERC has developed, and FERC has approved, an evolving body of ERS, beginning with the initial version approved in 2007, known as "Version 0" and consisting of 83 standards. Since the initial ERS were approved, there has been tremendous, ongoing change in the body of the standards. One of the most significant additions was the 2008 approval of a collection of critical infrastructure protection (CIP) ERS addressing the cyber and physical security of BES assets. Figure 3.3 provides a timeline of the major milestones leading up to the development of mandatory ERS. Over time, to address the myriad emerging risks facing the industry, other ERS have been added in areas such as geomagnetic storms (solar weather), protection and control system loadability, generator and system models, and supply chain cyber security risk management. Additionally, many of the "Version 0" ERS have been changed, in some cases numerous times, to improve the requirements or address emerging risks. These risks range from cyber and physical attacks on energy providers and infrastructure (e.g., Stuxnet, the Metcalf substation attack, and cyberattacks on Saudi Aramco, Ukraine's power grid, and Korea Hydro & Nuclear Power) to events and impacts arising from the electric utility industry's transition to smart devices and renewable resources (e.g., the Blue Cut Fire inverter event). Figure 3.4 shows the various drivers influencing the current state of reliability and security regulation.
Fig. 3.3 Timeline of major events in the development of the ERS [9]
Fig. 3.4 Drivers for the continued evolution of the ERS
Concurrent with this transformation of the ERS, there has also been meaningful change to how compliance with the ERS is monitored. These changes have been pivotal drivers of change and maturation for electric utility compliance frameworks and cultures. They continue to drive change and maturity some 15 years after the “Version 0” ERS took effect.
3.5 Electric Utility Compliance Frameworks and Culture

A convergence of factors has driven interest in and emphasis on the compliance culture and programs of electric utilities, the most influential of which has been the enactment and maturation of regulatory and enforcement authority by applicable regulators. The first of these, an initiating driver for electric utilities, was the US Federal Sentencing Guidelines for Organizations (USFGO), which became effective November 1, 1991 [10]. Interestingly, however, it wasn't until the early 2000s, when two major financial scandals erupted at Enron, an energy company, and WorldCom, a telecommunications company, that utilities and their compliance culture and programs fell under more focused regulatory scrutiny. While electric utilities have always been subject to oversight and had some type of monitoring framework around their riskier regulatory and financial activities, this new, trickle-down focus on an organization's culture of compliance and compliance programs drove electric utilities toward a more holistic focus on corporate compliance culture and entity-wide compliance programs. The Enron and WorldCom scandals involved and affected numerous electric utilities, which further reinforced the need for corporate cultures that appropriately prioritize compliance and for frameworks that provide reasonable assurance of this objective. Since the early 2000s, the expansion of regulatory requirements and oversight of electric utilities has driven increased focus by utilities on their compliance culture and programs. When Congress granted FERC enforcement and penalty authority in its enactment of EPAct 2005, FERC recognized a need to set forth its expectations relative to compliance and enforcement. It began issuing policy statements on compliance, enforcement, and penalty assessment in 2007. These policy statements further influenced electric utilities' development of and investment in compliance programs and culture.
3.6 An Overview of the Development of the US Federal Sentencing Guidelines for Organizations (USFGO)

Following the Sentencing Reform Act of 1984, the United States Sentencing Commission (USSC) was created and charged with developing and maintaining a
set of sentencing guidelines for use in determining the appropriate sentence or punishment in the prosecution of federal crimes for which an offender has been found guilty [11]. The initial set of sentencing guidelines became effective in 1987 and applied solely to individual offenders [12]. It wasn't until November 1991, 4 years later, that the USFGO became effective [10]. When these new guidelines were promulgated, their focus was not only deterrence but also providing organizations with an incentive to develop and implement an effective compliance program. Unfortunately, the first issuance of the USFGO did not provide business entities with any information that could be used in determining whether a downward departure to a potential sentence was warranted under §8C2.5(f), Effective Program to Prevent and Detect Violations of Law. As well, any mitigating credit would only be applied once an organization was subject to sentencing. That changed on June 16, 1999, when an internal DOJ memorandum provided guidance to prosecutors that included an effective compliance program as a factor to consider when deciding whether to charge an organization [13]. This important issuance elevated an effective compliance program and culture from merely a factor that could mitigate the criminal penalty imposed on an organization to a factor that could also help to defer or avoid federal prosecution following an investigation. Still, it was not until the issuance of amendments to the USFGO on November 1, 2004, that the USSC made clear the criteria to be used to judge whether an organization had an effective compliance program and culture. The 2004 USFGO at §8B2.1., Effective Compliance and Ethics Program, enumerated specific elements or hallmarks that would be considered indicative of an effective compliance program and qualify as a mitigating factor for sentencing [14]. These elements were the following:
1. Compliance standards and procedures must be established to deter crime.
2. High-level personnel must be involved in oversight.
3. Substantial discretionary authority must be carefully delegated.
4. Compliance standards and procedures must be communicated to employees.
5. Steps must be taken to achieve compliance through the establishment of monitoring and auditing systems and of reporting systems with protective safeguards.
6. Standards must be consistently enforced.
7. Any violations require appropriate responses, which may include modification of compliance standards and procedures and other preventive measures [14].
The USFGO and, in particular, §8B2.1., Effective Compliance and Ethics Program, have, since 2004, been a driving force behind the evolution of compliance as a discipline and career path, generating significant job creation from the C-Suite to individual contributors. Notably, the USFGO are not the only important issuances that have shaped the development of compliance frameworks and programs for electric utilities. Other influential issuances include the Justice Manual (formerly known as the US Attorneys' Manual) [15], A Resource Guide to the US Foreign Corrupt Practices
Act (now in its second edition) [16], and, most recently, the Evaluation of Corporate Compliance Programs guidance [17]. All these publications provide corporations with guidance about what their regulators consider an effective compliance program. It is important to note that these publications influence more than just the policies and actions of the DOJ and the SEC. Numerous regulatory agencies, such as FERC, have taken their cue from the DOJ and SEC and issued policy statements or other publications to incentivize organizations under their regulatory oversight and authority to develop effective compliance programs. For electric utilities, the early compliance programs driven by the USFGO became the foundations upon which their ERS compliance programs and other regulatory compliance programs were built.
3.7 FERC Policy Statements: An Overview

The Administrative Procedure Act is a federal act that governs how federal administrative agencies make rules. The Act gives federal agencies the authority to issue "general statements of policy" and to do so more swiftly, as such policy statements are exempted from the notice-and-comment rulemaking provisions set forth at 5 U.S.C. § 553 [18]. General Statements of Policy (Policy Statements) have been defined as "statements issued by an agency to advise the public prospectively of the manner in which the agency proposes to exercise a discretionary power" [19]. Federal agencies, including FERC, issue Policy Statements to provide guidance and regulatory certainty regarding the statutes, orders, rules, and regulations that they administer. When given enforcement authority through the enactment of EPAct 2005, FERC issued its first Policy Statement on Enforcement [20]. It set forth the factors that FERC intended to consider when "determining remedies for violations, including applying the enhanced civil penalty authority provided by the Energy Policy Act of 2005 (EPAct 2005)" [20]. The Policy Statement stated that its purpose was to "encourage regulated entities to have comprehensive compliance programs, to develop a culture of compliance within their organizations . . . " and discussed a concurrently issued Notice of Proposed Rulemaking that set forth new regulations for the imposition of civil penalties [20]. Importantly, FERC described its review of other agencies' Policy Statements on enforcement, including those of the DOJ, SEC, and CFTC [20]. In a section titled "Internal Compliance," its review of other agency guidance and approaches is evident, as FERC outlined its intent, where applicable, to give credit for an entity's commitment to compliance [20].
Just three short years later, FERC revised its Policy Statement on Enforcement [21], providing electric utilities with a much more robust description of how it conducts its enforcement activities and, when necessary, imposes civil penalties. This revised issuance included an expanded section on how it will evaluate and determine whether or not to give entities credit for their “Commitment
to Compliance" [21]. Significantly, FERC stated that "the most important [factors] in determining the amount of the penalty are the seriousness of the offense and the strength of the entity's commitment to compliance" [21]. FERC's reiteration of the importance of a commitment to compliance through these Policy Statements drove electric utilities to review and, in some cases, increase the resources invested in their internal compliance programs. Confirmation of FERC's emphasis on internal compliance programs came in the form of a third Policy Statement issued 5 months after the Revised Policy Statement on Enforcement. In October 2008, FERC issued its Policy Statement on Compliance [22]. This Policy Statement was solely focused on providing guidance on the factors or elements that comprise an effective compliance program as well as on how FERC will evaluate such factors when it is considering applying a civil penalty. The factors enumerated in the Policy Statement on Compliance echo those utilized by the DOJ. FERC explicitly stated its reliance upon and adoption of approaches to penalty imposition similar to those utilized by other executive agencies such as the DOJ, the SEC, and the EPA [22]. This issuance most clearly guided electric utilities on what their compliance programs should include and how they should be structured. It also assured electric utilities that they could model programs focused on compliance with FERC's regulations after their existing corporate compliance programs. Finally, in 2010, FERC issued its first Policy Statement on Penalty Guidelines, with a revision following just 6 months thereafter [23]. This Policy Statement is modeled directly on the USFGO and includes specific provisions to provide credit for effective internal compliance programs. In fact, under the Revised Policy Statement on Penalty Guidelines, the culpability reduction for an effective compliance program can be up to 95% [23].
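The arithmetic of such a culpability credit can be sketched simply. The figures below are hypothetical inputs chosen for illustration, not values taken from the Penalty Guidelines; the actual guidelines derive the credit from a culpability score built from several aggravating and mitigating factors.

```python
# Illustrative arithmetic only: a hypothetical base penalty reduced by a
# compliance-program credit, capped at the 95% maximum reduction noted
# above. The inputs are hypothetical; the real Penalty Guidelines use a
# culpability score built from multiple aggravating/mitigating factors.
def adjusted_penalty(base_penalty: float, credit_pct: float) -> float:
    """Apply a credit percentage (capped at 95%) to a base penalty,
    rounded to the nearest cent."""
    capped = min(max(credit_pct, 0.0), 95.0)
    return round(base_penalty * (1.0 - capped / 100.0), 2)

# A hypothetical $1,000,000 base penalty with the maximum 95% credit
print(adjusted_penalty(1_000_000, 95.0))  # -> 50000.0
```

Under these assumptions, the maximum credit reduces a hypothetical $1,000,000 penalty to $50,000, which illustrates why the Policy Statements created such a strong incentive to invest in effective compliance programs.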
As well, the Revised Policy Statement on Penalty Guidelines includes an explicit statement about the importance of compliance, providing that “[a]chieving compliance, not assessing penalties, is the central goal of the Commission’s enforcement efforts” [24]. FERC’s Policy Statements have filtered down to the compliance and enforcement approaches used by the ERO and Regional Entities (collectively, the ERO Enterprise). Although the ERO Enterprise’s enforcement approach does not track as closely to the USFGO as FERC’s does, its objective is also not the imposition of penalties. Rather, it is compliance with the ERS and, as a result, enhanced reliability and security of the BES. This shared objective across multiple agencies has led to remarkable similarity in the elements used to determine what comprises an effective compliance program. Table 3.1 compares the elements used by the DOJ, FERC, and NERC to identify an effective compliance program. This focus on obtaining and maintaining compliance and effective compliance programs has heavily influenced electric utilities’ investment in and prioritization of compliance programs and frameworks as discussed next.
Table 3.1 Comparison of compliance program elements

DOJ elements                      | FERC elements                   | NERC elements
Standards, policies, procedures   | Standards, policies, procedures | Standards, policies, procedures
Compliance officer and governance | Senior management leadership    | Oversight
Communication and education       | Communication and education     | Education and training
Internal monitoring               | Monitoring and auditing         | Internal monitoring and audit functions
Reporting and investigation       | Reporting                       | Reporting and investigation
Enforcement and discipline        | Enforcement and discipline      | Incentives, enforcement, and discipline
Response and prevention           | Response and prevention         | Effective detection, response, and implementation
3.8 Current Compliance and Control Program Approaches

3.8.1 Addition of the ERS to Compliance and Control Frameworks

When the ERS initially became effective and subject to mandatory enforcement, electric utilities raced to implement the processes, documentation, and procedures needed to achieve compliance. These initial efforts focused on achieving operational compliance with the "letter of the law" in each ERS. Shortly after initial implementation, the ERO Enterprise began conducting compliance monitoring focused on assuring that electric utilities were adhering to the newly effective ERS. In the early days of the mandatory regime, the ERO Enterprise audited electric utilities on a regular schedule: every 3 years for those performing transmission operations, balancing, and reliability coordination activities, and every 6 years for the remaining electric utilities, such as transmission owners and planning coordinators. The scope of these audits was based on a "one-size-fits-all" assessment of risk without much, if any, consideration of each electric utility's inherent risk profile or past performance. Audits were typically weeklong engagements that required more than a year of preparation. Audit results were subjective and often led to compliance enforcement activities that provided little value to the overall reliability and security of the BES. These early audit results came about because the ERO Enterprise enforcement process and approach was highly mechanistic. Every violation of an ERS was handled in the same manner, regardless of the severity or risk associated with the lapse in compliance, resulting in a "zero tolerance" approach to compliance. If all the requirements contained in the ERS addressed major threats to system reliability, such an approach might have been more successful; however, a good portion of the ERS requirements were administrative tasks that posed little risk.
This meant that enforcement of these minor violations resulted in the same long, drawn-out enforcement proceedings as larger, more operational violations. This approach imposed significant costs in time and resources on the ERO Enterprise and the electric utility industry in areas that provide little benefit to reliability or security. One example of this lack of discernment in processing violations was described in a September 2011 FERC filing, where a small entity did not maintain contact information for its local FBI office to report possible sabotage events, as required by CIP-001, Requirement R4. The resulting notice of penalty was over 40 pages long and took nearly 2 years to resolve. As these zero-tolerance activities continued, violations continued to amass, resulting in an overwhelming processing backlog. As the ERO Enterprise's monitoring and enforcement activities ramped up, electric utilities began to expand their compliance programs to encompass compliance with the ERS. Entire manuals and documents devoted to ERS compliance programs and processes were developed. Departments and resources were dedicated to extending processes typically used to ensure regulatory compliance in other functions, such as financial, environmental, and safety, to compliance with the ERS. New vendors with ERS-focused software emerged to help electric utilities implement their compliance monitoring processes, and several existing vendors modified their offerings to make them useful for ERS compliance and monitoring. Simply put, both the regulated and the regulator were learning, growing, and maturing together. Concurrent with the growing enforcement backlog and efforts to develop ERS-focused compliance programs, the ERS also continued to evolve and expand. These coincident pressure points on the ERO Enterprise and electric utilities led NERC to conclude that its processes were "not practical, effective, or sustainable" and to initiate development of a more tailored approach to compliance monitoring and enforcement [25].
This began a transformation of how the ERO Enterprise viewed the ERS and how it monitored and enforced compliance, prompting similar transformations in electric utilities' ERS-focused compliance programs. Out of these changes grew an important transition for both the ERO Enterprise and the electric utility industry: the transition from highly mechanistic programs and processes to those that better internalized and considered important characteristics such as risk likelihood and the effectiveness of internal controls. The ERO Enterprise sought and received near-unanimous input from the electric utility industry regarding the significant level of effort required to maintain and document compliance with a multitude of administrative requirements having little to no reliability impact. This input, coupled with the ever-increasing backlog of potential violations, allowed the ERO Enterprise to begin evolving its processes toward more risk-based compliance and enforcement in 2013. To accomplish this, the ERO Enterprise undertook several multi-year initiatives: a review of the current body of ERS to identify and remove requirements that involve little risk to the BES, development of improved auditing and monitoring procedures, and a review of the enforcement processes used to resolve minimal-risk violations. In the years since these transformational efforts began, the ERO Enterprise has matured its program and processes to consider the effectiveness of an electric utility's internal
controls in determining the depth and breadth of compliance enforcement and monitoring required for each electric utility. As well, ERS compliance programs have matured to include internal control frameworks and to consider risk as an essential element of such programs, e.g., relative to internal compliance monitoring processes.
3.8.2 Increased Focus on and Use of Internal Control Frameworks

The ERO Enterprise has recognized that effective internal controls support the reliability of the BES by identifying, assessing, and correcting issues as they occur. In 2017, NERC finalized and published an ERO Enterprise Guide for Internal Controls describing how internal controls will be considered during compliance monitoring and enforcement activities [26]. It further included guidance on how the evaluation of internal controls will be used in the development of an electric utility's compliance oversight plan. Leading up to this important publication, electric utilities had been working collaboratively with the ERO Enterprise to understand and define how compliance and monitoring activities would be informed by an electric utility's internal compliance and/or controls program and how the effectiveness of such programs would be evaluated. By this time, electric utilities had been managing compliance with the ERS for almost a decade and had many established practices, processes, and procedures. Recognizing that both compliance and reliability benefits could be derived from a well-designed and effective program of internal controls, electric utilities began to formalize risk-based internal control programs designed to provide reasonable assurance of both compliance and reliability. While this was uncharted territory in the ERS regime, internal controls programs are common in other regulatory environments, most notably under the Sarbanes-Oxley Act of 2002 (SOX), which was enacted in response to widespread and highly publicized corporate financial scandals and fraud. SOX Section 404 requires that management establish and maintain effective internal controls over financial reporting. As such, electric utilities often leveraged their existing internal controls expertise to help define what an internal controls program for ERS compliance could or should look like.
Internal control models, such as the Committee of Sponsoring Organizations of the Treadway Commission's (COSO) Internal Control – Integrated Framework, hitherto familiar only to audit and financial professionals, soon became an invaluable tool for ERS compliance professionals. Figure 3.5 provides an illustration of the COSO Internal Control – Integrated Framework principles [27]. This leveraging of existing internal control frameworks allowed electric utilities to advance the maturity of their ERS compliance programs. Similarly, the ERO Enterprise continued to refine its compliance monitoring and enforcement approaches and methods to consider this important aspect of each electric utility's
Fig. 3.5 COSO Internal Control – Integrated Framework Principles
compliance program. Although these new approaches have not in and of themselves changed or affected the ERS applicable to each utility, they have brought improved clarity and focus to overall reliability and security and have improved the management of risks to the BES across the ERS regime.
3.9 On the Horizon

3.9.1 From Corporate Social Responsibility to Environmental, Social, and Governance

In the last decade, while all these developments regarding reliability and security were ongoing, the entire corporate sector began to experience pressure from consumers and investors to account for the totality of its impacts on society and the environment. In this age of modern technology and conveniences, the 24-hour news cycle, and social media, the general populace has access to much more information than ever before. Whether it is one person's experience with a company or a media report on a corporation's donations, lobbying, or business practices, information and data can be shared more quickly and to a much broader audience than ever before. This influx and disclosure of information and data has resulted in increased scrutiny of corporations and, in some cases, has led to boycotts, protests, government action, and internal management changes. Simply put, consumers and investors demand ethical behavior from the companies with which they spend their hard-earned money, and technology gives them a platform to "crowd-fund" the expression of those expectations. As corporations began to see impacts to the bottom line where their corporate behavior did not meet the expectations of their consumers and investors, the concept of Corporate Social Responsibility (CSR) took shape. CSR is a type of self-regulation focused on corporate accountability for sustainable business practices. It involves reviewing, structuring, and publicizing a company's overall business practices so that they have a positive impact on its employees, consumers, investors, environment, and community and society in general. While CSR was an important development, it was more conceptual and philosophical than actionable and impactful; however, without the CSR movement, the Environmental, Social, and Governance (ESG) concept would not have had the opportunity to develop.
Whereas CSR was focused on identifying and publicizing the right business practices, ESG is focused on using criteria to determine whether those practices are actually effective in achieving their objectives. ESG is how corporations prove that their business practices are, in fact, sustainable, ethical, socially responsible, and/or environmentally sound. The obvious question about the ESG movement is, “Why would this affect electric utilities, which have captive customers?” Given their business models, it is reasonable to assume that electric utilities would be less impacted by societal
A. V. (Angie) Sheffield and C. V. Bigelow
pressures from CSR and ESG. This assumption discounts the corporate structure of most electric utilities, which are subject to the expectations of their shareholders and investors. As well, the exercise of each state’s PS/UC authority over electric utilities cannot be discounted – especially where the composition of such PS/UCs is determined by public elections. For these reasons, electric utilities (in particular, IOUs and IPPs) are acutely aware of the CSR and ESG movements and have been responding accordingly. Several electric utilities have developed or are developing ESG programs and reporting on the status of those programs. This trend is likely to continue, mature, and evolve as ESG expectations and programs become the norm in terms of business practices. Where these processes and programs live from an operational and organizational perspective is undefined and varies from company to company. A trend toward management of CSR and/or ESG programs within compliance organizations is emerging. Internal compliance departments typically already have significant expertise and experience in data and program management and may already have, or be involved in, data collection, monitoring, and reporting processes that would also support a company’s CSR or ESG program reporting.
3.9.2 Supply Chain and Critical Infrastructure Risk Management

Another important and developing area of focus for electric utilities is the evolving threat landscape. These threats have manifested through both the cyber and physical environments. Examples of these manifestations were discussed in Sect. 3.4.2 above. The ERO Enterprise and electric utilities have taken many steps to respond to this continuously evolving threat landscape. With the approval of the initial body of CIP standards in 2008, the ERO Enterprise mandated a risk-based, defense-in-depth approach to cybersecurity of the BES. Since their first approval, the CIP standards have been modified multiple times to address emerging risks. Two of the most significant additions to the CIP standards occurred in 2014, with the approval of a standard to protect the physical security of critical transmission stations (CIP-014), and in 2017, with the addition of supply chain cybersecurity risk management requirements (CIP-013). As is common with many government mandates, the precursor to the regulation is often a highly publicized event. For CIP-014, the precursor was the Metcalf (California) substation attack. In April 2013, a shooter, who has yet to be identified, fired on and severely damaged numerous electrical transformers in the Metcalf transmission substation. The sophisticated attack resulted in more than $15 million of damage to equipment but, fortunately, did not result in any loss of electrical load. In the wake of the attack, FERC ordered the development of mandatory physical security requirements for critical substations and control centers. The resulting ERS
3 Electricity Regulation in the USA
(CIP-014) was developed by NERC and became mandatory with FERC’s approval in 2014. In July 2016, precipitated by an attack on the Ukrainian power grid and several malware campaigns targeting supply chain vendors, FERC directed NERC to develop a standard to address supply chain risk management for industrial control system hardware, software, and computing and networking services associated with BES operations. After much collaboration between stakeholders and the regulators, CIP-013-1 was approved by FERC and became mandatory and enforceable on October 1, 2020. The requirements in CIP-013-1 were designed to protect the BES by limiting the potential for exposure to malware, tampering, and other cyber risks that can originate with third-party vendors and suppliers. Since the initial implementation of the standard, revisions have been proposed and are being developed to expand the scope of the assets to which CIP-013 applies. In addition to activity by FERC and the ERO Enterprise to protect the BES from a threat landscape that is continuously evolving and changing, the electric utility industry has seen efforts by DOE and the Department of Homeland Security to ensure that critical infrastructure (such as BES assets) is protected from physical and cyberattacks that could damage, degrade, or destroy such infrastructure. As well, electric utilities have observed legislative efforts at both the state and federal levels to address risks to BES reliability and security. Indeed, many states have developed and enacted legislation intended to enhance the security and reliability of the BES within their borders. These efforts are occurring contemporaneously with federal legislative efforts that have the same objectives but are focused on the interconnected, interstate BES.
What the overarching regulatory regime for electric utilities will ultimately look like is unknown, but electric utilities are proactively working with their regulators at all levels and within all branches of government to ensure that new enactments are complementary and serve the public interest without overburdening ratepayers. Electric utilities are also engaged in the continuous evaluation of threats to ensure that their existing compliance programs and associated processes and controls address such threats. Where additional risk mitigation is needed, processes and controls that will effectively mitigate the risk posed by emerging threats are developed and implemented.
3.10 Conclusion

Electric utility regulation continues to evolve and expand, as do regulator expectations for electric utility compliance programs. To ensure sustainable compliance with applicable regulations, electric utilities are finding significant value in unified compliance programs and frameworks. As regulators align on the elements and value of an effective compliance program and compliance program expectations converge, the portability of internal controls frameworks is paying dividends by
providing foundations upon which to build compliance programs addressing new areas of regulation. Given the rapid evolution of electric utility regulation, electric utilities have been in a continuous state of flux, evolution, and transformation for at least the last decade. With the current shifts, changes, and expansion in electric industry operations, this state is expected to continue, as is continued investment in, and prioritization of, compliance programs. In the future, electric utilities are hopeful that economies of scale will develop across their compliance and control programs and frameworks. Whether or not such economies of scale materialize, there is little doubt that compliance programs will continue to be valuable assets across the electric utility industry for the foreseeable future.
References

1. U.S. Energy Information Administration, Electric Power Annual 2020 (reissued March 2022).
2. EIA (2019, August 15), Investor-owned utilities served 72% of U.S. electricity customers in 2017, https://www.eia.gov/todayinenergy/detail.php?id=40913
3. National Rural Electric Cooperative Association (2022) at https://www.electric.coop/electric-cooperative-fact-sheet
4. EIA (2013, June 12) at https://www.eia.gov/todayinenergy/detail.php?id=11651
5. Promoting Wholesale Competition Through Open Access Non-discriminatory Transmission Services by Public Utilities and Recovery of Stranded Costs by Public Utilities and Transmitting Utilities, Order No. 888, 61 FR 21,540 (May 10, 1996) (Order 888).
6. Regional Transmission Organizations, Order No. 2000, 65 Fed. Reg. 809 (January 6, 2000), FERC Stats. and Regs. ¶ 31,089 (1999).
7. Order No. 888, 61 FR 21,540 (May 10, 1996).
8. U.S. Federal Power Commission, “Prevention of Power Failures, v. I-III,” Washington, DC: US Government Printing Office, June-July 1967.
9. Adapted from the “History Of NERC – August 2020” at https://www.nerc.com/news/Documents/HistoryofNERC_20AUG20.pdf
10. United States Sentencing Commission (USSC), U.S. Sentencing Guidelines Manual, Chapter 8 (effective November 1, 1991).
11. Sentencing Reform Act of 1984, Pub. L. No. 98-473, 98 Stat. 1987 (codified as 18 U.S.C. §§ 3551-3742 (1994) and 28 U.S.C. §§ 991-998 (1994)) (also known as Title II of the Comprehensive Crime Control Act of 1984).
12. USSC, U.S. Sentencing Guidelines Manual (effective November 1, 1987).
13. DOJ, Memorandum from Eric J. Holder, Bringing Criminal Charges Against Corporations (June 16, 1999) at https://www.justice.gov/sites/default/files/criminal-fraud/legacy/2010/04/11/charging-corps.PDF
14. USSC, U.S. Sentencing Guidelines Manual, §8B2.1, Effective Compliance and Ethics Program (effective November 1, 2004).
15. DOJ, Justice Manual, at https://www.justice.gov/archive/usao/usam/index.html
16. DOJ and SEC, A Resource Guide to the U.S. Foreign Corrupt Practices Act, at https://www.justice.gov/criminal-fraud/fcpa-resource-guide
17. U.S. Department of Justice, Criminal Division, Evaluation of Corporate Compliance Programs (June 2020) at https://www.justice.gov/criminal-fraud/page/file/937501/download
18. 5 U.S.C. § 553.
19. Attorney General’s Manual on the Administrative Procedure Act 30 n.3 (1947).
20. Enforcement of Statutes, Orders, Rules, and Regulations, 113 FERC ¶ 61,068 (2005).
21. Revised Policy Statement on Enforcement, 123 FERC ¶ 61,156 (2008).
22. Compliance with Statutes, Regulations, and Orders, 125 FERC ¶ 61,058 (2008).
23. Enforcement of Statutes, Orders, Rules, and Regulations, 130 FERC ¶ 61,220 (2010) and Enforcement of Statutes, Orders, Rules, and Regulations, 132 FERC ¶ 61,216 (2010).
24. Enforcement of Statutes, Orders, Rules, and Regulations, 132 FERC ¶ 61,216 (2010) at §1A1.1, ¶2.
25. NERC (2012), Incorporating risk concepts into the implementation of compliance and enforcement, retrieved from http://www.nerc.com/pa/comp/Reliability%20Assurance%20Initiative/White%20Paper%20%E2%80%93%20The%20Need%20for%20Change%20%28paper%201%29.pdf (Need for Change Whitepaper).
26. NERC, ERO Enterprise Guide for Internal Controls Version 2 (September 2017) at https://www.nerc.com/pa/comp/Reliability%20Assurance%20Initiative/Guide_for_Internal_Controls_Final12212016.pdf
27. See COSO website at https://www.coso.org/Documents/COSO-ICIF-11x17-Cube-Graphic.pdf
Angela V. (Angie) Sheffield CIA, CFE, studied Psychology at the University of South Alabama – intending to become an organizational psychologist. Little did she know when she took a summer job at Gulf Power Company to help pay for college that it would be the first step toward her dream job. Upon graduating cum laude with a Bachelor’s degree in Psychology, she applied to the master’s program at the University of West Florida in Pensacola, Florida. She enjoyed her psychology classes and did well in them, but, in the back of her mind, she knew that numbers were really her thing. This is what had led her to minor in Business. So, rather than accepting the graduate assistant position she was offered, Sheffield decided to take a short break from school to ponder her future. During this time, she was offered a full-time position in the customer call center at Gulf Power, where she had worked for two summers. She took the position, and her career in the electric utility industry was born. After 2 years working the customer service call line at Gulf Power, Sheffield decided to return to school and satisfy her need to work with numbers by finishing up her Accounting degree. She resigned her full-time position, enrolled again in the University of West Florida, and immediately signed up for the cooperative education program. This led her back to Gulf Power again – this time as an accounting co-op student. Upon graduation, she accepted a second full-time position at the utility and began working in the Financial Planning department. After a couple of years of financial planning, and working on two separate rate cases, Sheffield was recruited into the Internal Auditing group. This is where she finally found her niche and felt at home. Sheffield has worked in the Internal Audit field for close to 30 years now. She considers internal auditing her passion – it is the driving force behind her earning the designations of Certified Internal Auditor and Certified Fraud Examiner.
In 2002, Sheffield left the investor-owned utility segment and went to work for Georgia Transmission Corporation (GTC), a cooperative utility located in Tucker, Ga., as the Director
of Internal Audit. While out on maternity leave in 2007, and subsequent to the approval of new mandatory reliability standards for electric utilities, Sheffield received a call from GTC’s CEO asking if she’d be willing to serve as the Chief Compliance Officer for the company in addition to her internal audit role. Always interested in new challenges, Sheffield accepted and was soon promoted to Vice President, General Auditor, and Chief Regulatory Compliance Officer – all titles she still holds today. Sheffield later received her MBA from Kennesaw State University, fulfilling her goal of being the first person in her family to earn an advanced degree. From a professional perspective, Sheffield has served as the Chair of the North American Transmission Forum (NATF) Risk, Controls, and Compliance Practice Group and served as the Secretary of the local chapter of the Institute of Internal Auditors. On a personal level, Sheffield currently serves on the Board, and as past Chair, of the Spectrum Autism Support Group, a not-for-profit foundation which provides support to families impacted by autism in the Atlanta metro area. This cause is close to Sheffield’s heart as she is the mom to a 21-year-old son with autism, as well as a 14-year-old daughter. Both keep her family life busy. Her son is an avid and accomplished fisherman and her daughter is active in the high school marching band and swim team.
Christina V. Bigelow is currently a Senior Director, ISO/RTO Affairs for Pine Gate Renewables. Always inquisitive (and often called troublesome), she entered college intent on becoming a journalist. However, after her first year, she could not resist the allure of science and changed her major to Pre-med Biology. In 1997, she completed her Bachelor of Science in Biology at the University of New Orleans, having discovered a love of environmental science along the way. In 1998, she entered a Master of Science program at Louisiana State University’s Center for Environmental Studies. It was there that she took her first prelaw class and felt an instant connection to the field of law. After an internship at the Environmental Protection Agency in 2000, she began working in the environmental consulting field and, in 2001, she began attending law school at the Loyola University New Orleans College of Law. Working full-time, Christina attended law school at night, completing her Juris Doctor in 2004 and passing the Louisiana bar exam in 2005. After receiving her bar license, Christina accepted employment in a small regional firm, where, after demonstrating aptitude in commercial litigation, she acted as the case manager for the firm’s medical malpractice clients. Still drawn to more technical areas of focus and administrative/regulatory law, she left the firm, after having her first child, to pursue an opportunity in the electric utility industry. It was there, at Entergy Services, Inc., shortly after the Energy Policy Act of 2005 became effective, that Christina found her true professional love and passion, electric transmission and compliance – although, it must be noted, she is still an environmentalist at heart!
Since 2007, Christina has had another beautiful child and held various legal and compliance positions in the electric utility industry in organizations ranging from market operators to vertically integrated utilities to independent power producers. Christina earned her Corporate Compliance and Ethics Professional certification in 2011 and is passionate about energy regulatory law and policy, compliance, and ethics. She is an avid advocate for her organizations and for strong compliance and ethics programs and cultures. Christina is always working on her work-life balance and enjoys reading, Zumba, hiking, and camping. She also tries to find time to volunteer. Christina currently lives in Georgia with her family, two cats, and dog.
Chapter 4
Algorithms for Energy Justice

Johanna L. Mathieu
According to the Initiative for Energy Justice, “Energy justice refers to the goal of achieving equity in both the social and economic participation in the energy system, while also remediating social, economic, and health burdens on those historically harmed by the energy system (‘frontline communities’). Energy justice explicitly centers the concerns of marginalized communities and aims to make energy more accessible, affordable, and clean and democratically managed for all communities” [1]. Given this definition, energy justice clearly has social, economic, and health components; less explicit is the technological component that, at least in part, underlies our ability to make energy more accessible, affordable, and clean. Power system researchers do not often put our work into the context of energy justice. Energy justice papers are much more common in social science communities. However, I argue that power system engineers have a unique role to play in supporting and directly contributing to energy justice. Given the techno-economic focus of our field, however, to make an impact we must do this work in collaboration with social scientists and, ideally, community-based stakeholders. In this chapter, I develop an energy justice research agenda for power systems researchers, define an example problem, and propose algorithmic approaches to solve it.
J. L. Mathieu, University of Michigan, Ann Arbor, MI, USA; e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
J. S. Tietjen et al. (eds.), Women in Power, Women in Engineering and Science, https://doi.org/10.1007/978-3-031-29724-3_4

4.1 Introduction

In response to climate change, the energy sector is going through a massive transition. In addition to decarbonizing the electricity sector by transitioning to renewable electricity sources, we are electrifying other sectors that have traditionally used fossil fuels, including the transportation sector, the industrial sector, and the commercial/residential building sector, which still uses fossil fuels for space and water heating, cooking, and so on. Access to clean and affordable energy is inequitably distributed in our global society, both across countries and within countries. For example, low-income households and African American households have the highest energy burdens in the USA, where energy burden is defined as “the percent of household income that is spent on energy bills” [2]. The energy transition may exacerbate inequities unless we take a holistic approach that considers existing inequities; explores how inequities may change (improve or get worse) as a result of technological, economic, environmental, and social changes that will occur through the energy transition; and considers these factors in decision-making processes. As the International Institute for Sustainable Development explains, “Energy transitions are about people: the ones who make the decisions and the ones affected by those decisions. A ‘just transition’ approach ensures that the affected people are considered by those making decisions” [3]. The “just transition” is related to the concept of environmental justice, which the U.S. Environmental Protection Agency (EPA) defines as “the fair treatment and meaningful involvement of all people regardless of race, color, national origin, or income with respect to the development, implementation and enforcement of environmental laws, regulations and policies” [4]. In 2021, the Biden Administration issued an executive order establishing “a White House Environmental Justice Interagency Council . . . to prioritize environmental justice and ensure a whole-of-government approach to addressing current and historical environmental injustices, including strengthening environmental justice monitoring and enforcement . . . ” [5].
It also created the “Justice40 Initiative with the goal of delivering 40% of the overall benefits of relevant federal investments to disadvantaged communities . . . ” [5]. While focused broadly on environmental investments, the Justice40 Initiative also targets energy efficiency and clean energy investments [6]. For researchers in the USA, a key impact of the Justice40 Initiative is increased federal funding for research projects that address the needs of underserved communities. Regardless of the availability of funding, which undoubtedly will ebb and flow with politics, power system researchers have an important role to play in the “just transition.” The technologies that we design and develop have a direct or indirect impact on people’s lives – through the cost of electricity, the frequency of power outages, the health impacts of fossil fuel plants, and so on. More and more homes have smart, connected devices that allow residents to see their electricity consumption in real time and modify it. Distributed Energy Resources (DERs) in or at homes, such as electric vehicles, flexible appliances, solar photovoltaics (PV), and battery energy storage, are taking a more active role in power system operations. However, these impacts and innovations affect different people differently. Minority households in Detroit, Michigan, are disproportionately impacted by sulfur dioxide (SO2) pollution from nearby power plants [7]. Increased adoption of solar PV by high-income homes can increase electricity costs for low-to-moderate-income (LMI) homes without solar PV [8, 9]. LMI households less able to afford the switch to electric heating, water heating,
and clothes drying will be stuck paying for legacy gas infrastructure costs [10]. One might argue that the historic and ongoing energy inequity that has led to these unequal energy outcomes stems from sources other than the technology itself – racism, classism, and so on – and so it is not the role of power systems researchers to right these wrongs. However, I would argue that through technology innovation we have levers to mitigate inequity. And if we can mitigate inequity, why would we not? Specifically, power systems researchers can directly contribute to “energy justice.” Related to the concepts of environmental justice and the just transition, the Initiative for Energy Justice defines “energy justice” as “the goal of achieving equity in both the social and economic participation in the energy system, while also remediating social, economic, and health burdens on those historically harmed by the energy system (‘frontline communities’). Energy justice explicitly centers the concerns of marginalized communities and aims to make energy more accessible, affordable, and clean and democratically managed for all communities” [1]. Power system researchers do not often put our work into the context of energy justice. However, technology, at least in part, underlies our ability to make energy more accessible, affordable, and clean, and therefore can contribute to the goal of energy justice. Of course, technology alone will not mitigate all inequalities, and so, to make a significant impact, we must work in collaboration with social scientists, and ideally community-based stakeholders. In this chapter, I begin with a discussion of key concepts within the field of energy justice and briefly review the broad literature in this field. I then connect the power systems literature to the field of energy justice, including linking work that has not been explicitly linked to the term “energy justice” before.
Next, I describe a set of energy justice-related challenges that power system engineers are uniquely positioned to tackle. Finally, I define an example problem and describe algorithmic approaches to address that problem, which my collaborators and I are currently developing and testing to improve energy access, affordability, and equity in LMI homes in Detroit, Michigan.
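The energy-burden metric quoted in the introduction – the percent of household income spent on energy bills – can be made concrete with a small sketch. The household figures below are invented for illustration only and are not drawn from the cited studies.

```python
# Hedged illustration of the "energy burden" metric: the percent of
# household income spent on energy bills. The two household profiles
# below are synthetic examples, not data from the studies cited above.

def energy_burden(annual_energy_cost: float, annual_income: float) -> float:
    """Return energy burden as a percent of annual household income."""
    if annual_income <= 0:
        raise ValueError("annual_income must be positive")
    return 100.0 * annual_energy_cost / annual_income

households = {
    "low-income household": {"income": 25_000, "energy_cost": 2_100},
    "median-income household": {"income": 70_000, "energy_cost": 2_400},
}

for label, h in households.items():
    burden = energy_burden(h["energy_cost"], h["income"])
    print(f"{label}: {burden:.1f}% of income spent on energy")
# → low-income household: 8.4% of income spent on energy
# → median-income household: 3.4% of income spent on energy
```

Note that although the two energy bills are similar in absolute terms, the burdens differ sharply; a threshold of roughly 6% of income is often cited as a marker of a "high" energy burden.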
4.2 What Is Energy Justice?

Energy justice is a concept that has appeared in social science, engineering, economics, and policy literature. Sovacool and Dworkin’s book on “Global Energy Justice” provides a broad overview of energy justice issues from the perspective of political theory [11]. Jenkins et al. provide a social science review and research agenda for energy justice, which they state should evaluate “(a) where injustices emerge, (b) which affected sections of society are ignored, (c) which processes exist for their remediation in order to (i) reveal, and (ii) reduce such injustices” [12]. Hernández argues for four energy-justice-related rights, that is, the right to (1) healthy sustainable energy production, (2) best available energy infrastructure, (3) affordable energy, and (4) uninterrupted energy service [13].
More technically focused modeling and/or data analysis studies have tried to characterize energy inequity in terms of racial/ethnic and socioeconomic disparities, as a step toward rectifying those inequities and achieving energy justice. For example, Reames used data-driven models to explore disparities in heating energy use intensity (EUI), a metric used to characterize energy efficiency, in Kansas City, Missouri, and argued that an understanding of these disparities can facilitate the targeting of energy efficiency interventions [14]. A later study explored disparities in heating energy consumption and efficiency in Detroit, Michigan [15]. Both studies found significant correlations between heating efficiency and both racial/ethnic makeup and income, that is, houses in areas with lower incomes and/or more racial/ethnic minority households had higher heating EUIs. Recently, Tong et al. used fine-scale spatial data from Tallahassee, Florida, and St. Paul, Minnesota, to explore these same relationships between EUI and income/race for both electricity and gas energy consumption and obtained similar results [16]. They also found distinct income and racial effects [16]. Cong et al. explore hidden energy poverty through a data-driven approach that estimates energy-limiting behavior (e.g., delaying turning on air conditioning) in low-income households [17]. Other studies focus on the role of policy. In [18], Bednar and Reames argue that we should recognize energy poverty, defined as “the inability of a household to meet their energy needs” [18], as a distinct problem, different from general poverty, to enable a more effective response to it. They review federally funded US energy programs that aim to reduce energy bills and find that these programs’ metrics are not well-aligned with the overall goal of reducing energy poverty. Straddling social science, engineering, and policy, Baker et al.
describe the challenges of and solutions to including qualitative understandings of stakeholder preferences within quantitative electricity system models used for sustainable and equitable electricity system planning [19]. This chapter details some of the key challenges we, as power systems engineers, face in addressing energy justice in our own work – specifically, how do we bring energy justice concepts into our formulations as metrics, objectives, constraints, and costs?
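One way to make the question above concrete is a toy allocation problem that folds an energy justice goal into an optimization as a constraint. The sketch below – invented for illustration, and not a formulation from the cited literature – allocates an efficiency-retrofit budget across neighborhoods to maximize total bill savings, subject to a Justice40-style requirement that at least 40% of the benefits flow to disadvantaged communities. All neighborhood names and numbers are hypothetical.

```python
# Toy sketch: equity-constrained budget allocation (all data invented).
# Maximize total savings subject to a minimum share of savings accruing
# to disadvantaged communities (DACs), solved by brute-force enumeration.
from itertools import product

# (name, savings per $1 spent, is the neighborhood a DAC?)
neighborhoods = [
    ("A", 0.30, False),
    ("B", 0.25, True),
    ("C", 0.20, True),
]
BUDGET = 100       # total budget, in $1000s
STEP = 10          # allocation granularity, in $1000s
MIN_DAC_SHARE = 0.40

best = None
steps = BUDGET // STEP
for alloc in product(range(steps + 1), repeat=len(neighborhoods)):
    if sum(alloc) != steps:          # spend the whole budget
        continue
    spend = [a * STEP for a in alloc]
    savings = [s * rate for s, (_, rate, _) in zip(spend, neighborhoods)]
    total = sum(savings)
    dac = sum(s for s, (_, _, is_dac) in zip(savings, neighborhoods) if is_dac)
    # Justice40-style equity constraint on the share of benefits to DACs
    if total > 0 and dac / total + 1e-9 >= MIN_DAC_SHARE:
        if best is None or total > best[0]:
            best = (total, spend)

total, spend = best
print(f"optimal spend ($1000s): {spend}, total savings: {total:.1f}")
# → optimal spend ($1000s): [50, 50, 0], total savings: 27.5
```

Without the equity constraint, the entire budget would go to neighborhood A (the most cost-effective); the constraint shifts half of it to neighborhood B at a modest cost in total savings. Real planning studies would replace this enumeration with a mixed-integer or linear program, but the structural question – where equity enters as metric, objective, constraint, or cost – is the same.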
4.3 Energy Justice in Power Systems Research

As of April 2022, an Institute of Electrical and Electronics Engineers (IEEE) Xplore search for “energy justice” brings up just three items: a conference panel abstract and two conference papers [20–22]. Kostyk et al. discussed the impact of energy justice on the energy transition on a panel at the 2011 IEEE International Symposium on Technology and Society [20]. More recently, Ref. [21] proposed an approach to explicitly include energy justice in the electrification planning process. The paper extends an agent-based planning approach introduced by Ref. [23], developed for electricity system planning in developing countries. Finally, Ref. [22] studies residential solar adoption in Connecticut and shows that Connecticut’s
Residential Solar Investment Program (RSIP), designed to increase access to solar by LMI homes, has reduced racial and ethnic disparities in solar adoption. Arguably, Ref. [21] (and therefore [23]) is the closest to the field of power systems. Of course, not all power systems papers are IEEE publications. Nock et al. formulate a generation expansion planning model incorporating preferences for equity and budget and discuss their findings in the context of energy access and energy poverty [24]. There is a significant body of power systems research that directly relates to energy justice even if the authors of that work have not explicitly connected their research to the term energy justice. Many papers published in the IEEE Power and Energy Society PowerAfrica Conference address electricity technology challenges and solutions for marginalized communities. For example, Refs. [25–27] describe the feasibility and challenges of renewable-energy-based microgrids in underserved communities in Africa. In an IEEE Transactions on Sustainable Energy article, Arriaga et al. explore a variety of scenarios for transitioning remote communities in Northern Ontario, Canada, away from diesel to renewable energy resources [28]. In an IEEE SmartGridComm paper, Porter et al. design an algorithm to coordinate energy storage with diesel generation for a rural community in the Philippines and describe how prepaid electricity tariffs can be used to finance storage investment [29]. This list is not meant to be exhaustive (and I apologize to the authors of all the papers that have been missed) but is meant to demonstrate that power systems researchers already work on a variety of research problems with the goal of making energy more accessible, affordable, and/or clean for marginalized communities. I would argue that by explicitly linking our work to energy justice and putting energy justice goals front and center, we can have an even greater impact in mitigating inequity.
A key linkage between power systems research and energy justice is the proliferation of DERs, which are a significant component of the energy transition. In his seminal 1976 article “Energy Strategy: The Road Not Taken?” [30], Amory Lovins argues for “soft” energy technologies – renewable, diverse, flexible, low technology, and matched in scale and quality to end-user needs. Taylor et al. later linked this argument directly to the need for and benefits of DERs [31]. Lovins, the co-founder and former chief scientist of the Rocky Mountain Institute, argued against nuclear, coal, gas, and other “hard” energy technologies for a variety of environmental, political, and technical reasons, but also on grounds of society and equity. He explains, “Though neither glamorous nor militarily useful [referring to nuclear], these [soft] technologies are socially effective—especially in poor countries that need such scale, versatility and simplicity even more than we do” [30]. He goes on, The soft path has novel and important international implications. Just as improvements in end-use efficiency can be used at home (via innovative financing and neighborhood self-help schemes) to lessen first the disproportionate burden of energy waste on the poor, so can soft technologies and reduced pressure on oil markets especially benefit the poor abroad. Soft technologies are ideally suited for rural villagers and urban poor alike, directly helping the more than two billion people who have no electric outlet nor anything to plug into it but who need ways to heat, cook, light and pump. Soft technologies do not carry with them inappropriate cultural patterns or values; they capitalize on poor countries’ most abundant resources (including such protein-poor plants as cassava, eminently suited to making fuel
alcohols), helping to redress the severe energy imbalance between temperate and tropical regions; they can often be made locally from local materials and do not require a technical elite to maintain them; they resist technological dependence and commercial monopoly; they conform to modern concepts of agriculturally based eco-development from the bottom up, particularly in the rural villages. [30]
Laying out a roadmap for “Power Systems without Fuel,” power systems researchers Taylor et al. discuss the fundamental challenges and power systems research opportunities associated with the transition to 100% renewable power systems [31]. However, they do not link back to Lovins’s argument about the role of “soft” energy technologies/DERs¹ in mitigating inequities and therefore advancing energy justice. If we do make this link, it opens up a wide array of research questions that we, as power systems researchers, are uniquely positioned to tackle, as I will describe next.
4.4 An Energy Justice Research Agenda for Power Systems Researchers Many of the papers described in the previous sections, directly or indirectly, relate to power systems research. I next attempt to organize these ideas into a list of power systems research topics in order to create an energy justice research agenda for power systems researchers. Given the definition of energy justice above, all work we do to increase the affordability of electrical energy (which can increase access) and/or enable a cleaner energy system (e.g., through renewables) is nominally related to energy justice; however, here I focus on problems that place energy justice objectives front and center. I note that this list is not comprehensive.
4.4.1 Equitable Electricity System Planning Arguably, the most straightforward way of integrating energy justice into power systems research is within electricity systems planning, that is, planning of new/expanded generation, transmission systems, distribution systems, and demand-side management programs. Several of the studies mentioned above have proposed electricity system planning paradigms that include energy justice objectives, both for planning to increase access in developing countries and planning to mitigate historic inequities in countries with more advanced electricity infrastructure. Electricity planning studies often involve simulation modeling and/or formulating and solving optimization problems. Key research questions include how to define quantitative energy justice metrics that can be used to evaluate simulation outcomes
¹ Albeit not all “soft” energy technologies are DERs, and vice versa.
4 Algorithms for Energy Justice
and/or can be embedded within optimization formulations. For example, how do we define the “cost” of unequal electricity reliability or unequal health impacts due to fuel extraction, processing, and use, or should we include constraints to enforce more equitable solutions? Do the answers to these questions differ when we consider traditional grid development and expansion (large-scale conventional power plants, transmission) versus the development and deployment of DERs (which people interact with much more directly)? How will the distributional impacts of power systems change throughout the energy transition, and how can we steer the transition to achieve energy justice?
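One concrete starting point for the metrics question is to compute candidate equity measures from planning or simulation outputs. The sketch below (all values invented for illustration) computes the Gini coefficient of annual outage minutes across neighborhoods, one possible way to quantify unequal electricity reliability.

```python
# Hypothetical sketch: quantify unequal reliability across neighborhoods
# with a Gini coefficient over annual outage minutes. All numbers are
# invented for illustration.
def gini(values):
    """Gini coefficient: 0 = perfectly equal, near 1 = maximally unequal."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    # standard formula using the cumulative rank-weighted sum
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * total) - (n + 1) / n

outage_minutes = [55, 60, 58, 240, 62]  # one neighborhood far worse off
print(round(gini(outage_minutes), 3))  # -> 0.315
```

A metric like this can be reported alongside cost in a simulation study, or bounded by a constraint inside an optimization-based planning formulation.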
4.4.2 Equitable Electricity System Operation and Control Beyond planning to improve equity, can we operate and control existing grids in ways that promote equity? The usual goal of security-constrained economic dispatch is to achieve the lowest-cost power plant dispatch that meets the grid's technical and reliability constraints. How can we embed energy justice metrics within this optimization problem, for example, by penalizing spatially unequal reliability and pollutant emissions outcomes? How can we design power system controls to achieve more equitable outcomes, for example, by ensuring that emergency load shedding does not always affect the same neighborhoods? The energy transition will lead to the need for fundamentally different approaches to operate and control grids dominated by highly distributed inverter-interfaced energy resources [31]; how do we embed energy equity objectives within these approaches?
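As a toy illustration of embedding an equity constraint in dispatch, the sketch below uses a simple merit-order heuristic as a stand-in for full security-constrained economic dispatch, with invented plant data: it caps the hourly emissions of a plant sited beside an already-overburdened neighborhood.

```python
# Toy merit-order economic dispatch with an added equity constraint:
# cap the emissions that a plant next to an overburdened neighborhood
# may emit this hour. All numbers are invented for illustration.
plants = [  # (name, cost $/MWh, emissions t/MWh, capacity MW)
    ("coal_A", 20.0, 0.9, 100.0),   # sits beside an overburdened area
    ("gas_B", 35.0, 0.4, 100.0),
    ("unit_C", 50.0, 0.1, 100.0),
]
demand = 180.0
emis_cap_A = 60.0  # equity constraint: tons allowed from coal_A

dispatch = {}
remaining = demand
for name, cost, emis, cap in sorted(plants, key=lambda p: p[1]):
    limit = cap
    if name == "coal_A":
        limit = min(limit, emis_cap_A / emis)  # equity-constrained output
    p = min(limit, remaining)
    dispatch[name] = p
    remaining -= p
print(dispatch)
```

Without the cap, the cheap high-emitting plant would run at full output; with it, some generation shifts to cleaner units, making the equity–cost trade-off explicit.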
4.4.3 Equitable DER Adoption and Coordination Digging a little deeper into the last point, we expect DER coordination to be a key element of future low-carbon power grids. Residential DERs, such as flexible appliances, battery energy storage, electric vehicles, and rooftop solar, can be coordinated to help balance renewable/load variability and also manage grid constraints. However, LMI households are less able to afford the upfront costs of DERs. Therefore, these households will shoulder more and more of the cost of legacy energy infrastructure before they are ever able to benefit from these technologies. LMI households that adopt DERs may also find that their DERs are unequally coordinated. There are a variety of approaches to coordinate DERs including price-based control, market-based control (e.g., transactive energy), and direct control by a utility or third-party aggregator, which generally involves establishing a contract detailing the flexibility and compensation. If prices/markets drive coordination decisions, LMI households are more likely to shoulder a greater burden of coordination; for example, they may be more likely to shift appliance load and vehicle charging to inconvenient times to obtain lower electricity rates, and/or to reduce heating/air
conditioning to uncomfortable temperatures to reduce electricity costs during high-price hours. While these types of actions are exactly what DER coordination tries to achieve, unequal energy burdens mean LMI homes will have far more to lose if they make decisions inconsistent with economic signals. Even direct control does not necessarily solve this problem; LMI homes participating in direct control may choose to offer more flexibility to the utility or aggregator than higher-income homes to achieve higher compensation (lower electricity rates and/or higher participation incentives), which could make their homes less comfortable. Therefore, power systems researchers have a variety of research questions to address. First, with respect to adoption, how can we increase equitable adoption, for example, through the design of innovative business models linking DER adoption and coordination for LMI households? While designing “business models” may not seem like the task of a power systems researcher, in this case the “business model” is inherently linked with the coordination strategy (i.e., an optimization and/or control-based approach tailored to the physical capabilities and constraints of the DERs and the grid) and so this research question is well-aligned with power systems research. An example model is one in which an aggregator owns/operates DERs within homes, delivering contracted services (renewable power, heating, cooling, etc.) to the homes for a fee, while coordinating the DERs to provide grid services, in turn providing income to the aggregator, some of which is passed on to the homeowner. The economics of this model – both whether it is profitable to the aggregator and appealing to LMI homeowners – are a function of the ability of the aggregator to provide reliable grid services with the DERs, which is a power systems research topic. The second research question is how to design DER coordination algorithms that do not place additional burdens on LMI households.
Can DER coordination also serve to mitigate existing inequality, for example, by increasing the comfort of LMI households?
4.4.4 Equitable Electricity Rate and Demand-Side Management Program Design As mentioned above, LMI households will end up paying more for electricity to cover grid costs as higher-income homes adopt solar PV. This is due to how most existing electricity rates are structured, with flat per-kWh charges that cover both energy (fuel) costs and network fees. As homes with solar PV consume fewer kWh, they will pay for less energy and also a lower portion of the network fees, though their network connection will still support essentially the same level of reliability. This may increase per-kWh charges, which will lead to a higher energy burden for LMI households without solar PV. We could adapt electricity rates to this new reality, for example, by having homes with solar pay a separate network charge. This is highly controversial as it is seen as a “tax” on solar PV, discouraging homes from investing in renewables.
Clearly, we need new rate designs, but how do we do this in fair and effective ways? Rate design is a topic often explored by economists; for example, Ref. [32] explores how electricity rates in California should change to ensure equity through the energy transition. However, power systems researchers also have a role to play in rate design, since rate designs affect electricity consumption patterns, which directly affect the operation of power grids. Research questions include how proposed “equitable” rates affect consumption, and in turn operations and control, and subsequently grid costs, reliability across space and time, and health impacts across populations. Can we design equitable rates through formulations that specifically consider these dependencies, for example, via bilevel optimization problems that optimize power system operation including cost, reliability, and equity objectives subject to the optimization problem of electricity consumers? If we agree that a basic level of electricity access is a human right [33], how can we design electricity rates to provide that level for free or at very low cost while ensuring sufficient cost recovery for the utility and dynamic stability? Beyond rates, power systems researchers can also contribute to the design of demand-side management programs that also affect electricity consumption patterns. Energy assistance programs enable LMI homes to make home/appliance upgrades and provide a variety of other energy-related support. For example, the US government’s Low Income Home Energy Assistance Program (LIHEAP) “assists eligible low-income households with their heating and cooling energy costs, bill payment assistance, energy crisis assistance, weatherization and energy-related home repairs” [34]. As Ref. [18] explains, assistance program metrics are not always aligned with overall programmatic goals to reduce energy poverty and achieve energy justice.
Energy efficiency programs are more broadly accessible to the population and aim to reduce energy consumption. Demand response programs aim to achieve load flexibility by incentivizing demand shedding or shifting at key times to reduce peak load, manage wholesale electricity price volatility, improve grid reliability, help manage renewable energy generation intermittency, and so on. All three types of programs need to evolve to drive toward energy justice and enable the energy transition. How can we redesign energy assistance program metrics to better align with overall programmatic goals? How can we restructure energy efficiency programs to reach the homes that can benefit the most, that is, those that suffer the highest energy burden? How should we consider energy justice goals within demand response programs, which traditionally do not consider equity or justice goals at all? While broad, these research questions must be tackled by researchers in energy policy, economics, and power systems, ideally working in collaboration with one another.
4.4.5 Recommender Systems for Electricity Rates and Demand-Side Management Programs In addition to designing new electricity rates and demand-side management programs, we also need to develop better ways to link homeowners with the programs that best suit their needs. “Recommender systems” have been popularized through web and social media applications, for example, the Netflix Prize Competition [35]. However, recommender systems can also be used for energy applications. Designing a recommender system to recommend electricity rates and/or demand-side management programs to LMI households requires not only expertise in the machine-learning approaches that underpin the algorithms, but also expertise in power systems research. More details are provided in the next section.
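To make the idea concrete, here is a deliberately tiny nearest-neighbor “recommender” sketch: it suggests to a new household the rate plan that benefited the most similar household, where similarity is cosine similarity between coarse load profiles. The profiles and labels are invented; a real system would use richer features, eligibility rules, and the power systems models discussed below.

```python
# Toy nearest-neighbor rate recommender (invented data).
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

# (coarse daily load profile, rate that saved that household money)
history = [
    ([1, 1, 4, 4], "flat"),  # evening-heavy home: TOU peak prices hurt it
    ([4, 4, 1, 1], "TOU"),   # off-peak-heavy home: TOU saved it money
]
new_home = [1, 2, 4, 3]      # evening-heavy, like the first household
_, recommendation = max(history, key=lambda hv: cosine(new_home, hv[0]))
print(recommendation)  # -> flat
```

Even this caricature shows why power systems expertise matters: the labels themselves must come from bill simulations under real tariffs and load models, not from web-style click data.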
4.4.6 Reducing Bias in Data-Driven Algorithms for Power Systems In mentioning recommender systems, I would be remiss if I did not also discuss bias in machine learning and artificial intelligence (AI) systems. The National Renewable Energy Laboratory recently held a Workshop on Responsible and Trustworthy AI in Clean Energy with the goal of “establish[ing] practices, principles, and behaviors needed for responsible and trustworthy AI for clean energy” [36]. More and more power systems research is leveraging emerging tools from machine learning and AI to solve power systems challenges in data-driven ways; however, we need to be aware that these tools and techniques can be biased, which can lead to the perpetuation of inequities. Can we characterize the bias inherent in these tools and understand how it would impact the use of machine learning and AI in power systems applications? How can we redesign these tools to mitigate bias in our applications?
4.5 Example Problem and Algorithmic Approaches This section describes an example problem that we, as power systems researchers, are well-positioned to tackle. I describe the inherent challenges to solving this problem and some proposed algorithmic solutions to the problem. I also highlight open problems.
4.5.1 Problem: How Can We More Effectively Recommend Energy Assistance, Energy Efficiency, and Electricity Rate Programs to LMI Homes? As part of a U.S. National Science Foundation Smart and Connected Communities project, my collaborators and I are currently developing data-driven algorithms to increase energy access, affordability, and equity in LMI households in Detroit, Michigan. In collaboration with researchers in power systems, energy justice, public health, and survey research, along with several community-based organizations, the team is conducting an intervention in 100 LMI households to explore the effectiveness of energy case managers in improving access to energy assistance, energy efficiency, and electricity rate programs, and in turn reducing energy burdens and/or improving comfort in these LMI households. The energy case managers will use household data, including survey data, smart meter data, and submetering data (when available) to develop energy improvement plans and make recommendations for programs. Available energy data can be leveraged to determine whether households qualify for specific programs and whether they would benefit from participation in programs. To qualify for energy assistance programs, a homeowner usually needs to prove their LMI status. Some energy efficiency and rate programs also have qualification requirements that depend on income or other factors. How much households would benefit from these programs – in terms of monetary savings and/or increased comfort – depends upon how the household currently consumes electricity and how the program will change their energy consumption patterns. It also depends on their electricity rate. One method of estimating these benefits is by modeling the household’s electricity consumption and leveraging available household data to parameterize the models. 
Then, one can simulate how the household’s electricity consumption would change if the house were weatherized, old appliances were replaced, and so on. To determine the best electricity rate, one could run historic consumption data through alternate rate structures to determine whether homes would save money simply by switching to another rate. Different rates can also induce behavior change, for example, time-of-use rates encourage households to shift consumption to off-peak hours. To estimate the benefits of a new rate plus behavioral change, one can again use models to simulate how the household’s electricity consumption would change.
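The “run historic data through alternate rates” step is straightforward to prototype. Below is a minimal sketch with made-up rates and one illustrative day of hourly consumption; a real comparison would use a year of smart meter data and the utility's actual tariff sheets.

```python
# Hypothetical flat vs. time-of-use (TOU) bill comparison (invented rates).
def flat_bill(kwh_by_hour, rate=0.15):
    return rate * sum(kwh_by_hour)

def tou_bill(kwh_by_hour, on_peak=0.25, off_peak=0.10, peak_hours=range(15, 20)):
    return sum((on_peak if h % 24 in peak_hours else off_peak) * kwh
               for h, kwh in enumerate(kwh_by_hour))

day = [2.0] * 24             # baseline 2 kWh every hour...
for h in range(15, 20):
    day[h] = 3.0             # ...with an evening peak
print(round(flat_bill(day), 2), round(tou_bill(day), 2))  # -> 7.95 7.55
```

For this invented household, switching to the TOU rate saves money even without any behavior change; simulating the behavior change described in the text (shifting evening kWh off-peak) would widen the gap further.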
4.5.2 Challenges Developing accurate models of household electricity consumption and human behavior is nontrivial. To capture the impact of weatherization (which affects heating and cooling energy consumption), appliance replacements, and behavioral change affecting the usage of specific appliances, we need to model the major loads
within the home. Moreover, while modeling the impact of behavioral change on electricity consumption (e.g., the impact of a homeowner shifting clothes drying from 6 pm to 10 pm) is difficult, modeling the behavioral change itself (e.g., the new clothes drying time, here assumed to be 10 pm) is much harder.
4.5.3 Algorithmic Solutions 1. Modeling Loads within a Home Most homes in the USA now have smart meters that record data hourly or half-hourly. How do we go from this data to household-specific models of all of the major electric loads within a house? The first step could be to use Nonintrusive Load Monitoring (NILM), also referred to as Energy Disaggregation, to disaggregate time-series home-level electric load data into estimates of time-series individual electric load data [37]. More formally, NILM takes in a time series of smart meter measurements s and uses a supervised or unsupervised learning algorithm f to output a set of time series Y representing the consumption of a particular household’s electric loads, that is, Y = f(s). Additional data, such as partial submetering data from one or multiple household loads Z, can also be leveraged, that is, Y = f(s, Z). NILM is not a new field – the problem was first posed in the early 1990s [38] – and many papers exist with different methods and case studies; however, a key open problem is how well existing NILM methods can disaggregate data from LMI households or households of racial/ethnic minorities, which may have different load consumption patterns than other households. NILM methods generally require training on real data, ideally from the household in question, or at least on data from similar households. However, there is a dearth of publicly available data from diverse households in the USA. Pecan Street Inc., which provides Dataport [39], a massive database with high-resolution submetering data from homes in multiple states, has little data from LMI households. Our team is working with Pecan Street Inc. to add submetering to 75 owner-occupied LMI households in two Detroit neighborhoods with predominantly Black and Hispanic populations.
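A caricature of event-based NILM in the spirit of Hart [38] helps fix ideas: detect step changes in the aggregate smart-meter signal and match each step to the nearest known appliance signature. The signatures and the aggregate trace below are invented; modern NILM methods use learned models rather than this nearest-signature rule.

```python
# Toy event-based NILM (invented signatures and signal, in watts).
signatures = {"fridge": 150, "kettle": 2000, "ac": 1200}

aggregate = [100, 250, 250, 2250, 2250, 250, 1450, 1450, 1300, 100]
events = []
for t in range(1, len(aggregate)):
    delta = aggregate[t] - aggregate[t - 1]
    if abs(delta) < 50:
        continue  # ignore small fluctuations as noise
    # match the magnitude of the step to the closest appliance signature
    name = min(signatures, key=lambda k: abs(abs(delta) - signatures[k]))
    events.append((t, name, "on" if delta > 0 else "off"))
print(events)
```

The open problem noted above shows up even here: if a household's appliances have signatures unlike those in the training set, the matching step silently mislabels their loads.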
Once appliance consumption is estimated, it can be used to fit the parameters a ∈ ℝⁿ of physics-based load models g, for example, air conditioning models that capture how their power consumption p depends on their on/off mode m and temperature θ dynamics, that is, p = g(m, θ, a) where m ∈ {0, 1} [40, 41]. However, using messy real-world data for load parameter identification is difficult. While we would like to identify the parameters of a model that captures the salient physical processes, we may struggle to measure all desirable states and the parameters may not be identifiable from the available measurements. A key open problem is how to determine the model complexity that presents the best trade-offs between simplicity and accuracy in predicting the electricity consumption impacts of changes in appliance efficiency or usage.
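For intuition, here is a minimal discrete-time thermostatically controlled load in the spirit of the physics-based models referenced above [40, 41]: a first-order thermal model with hysteretic on/off control. Every parameter value is illustrative, not identified from data.

```python
# Sketch of p = g(m, theta, a): first-order air-conditioning model with
# a thermostat deadband. All parameters are invented for illustration.
import math

dt = 1 / 60        # time step (hours)
C, R = 2.0, 2.0    # thermal capacitance (kWh/degC), resistance (degC/kW)
P_th = 6.0         # thermal cooling power when on (kW)
eta = 2.5          # coefficient of performance (electrical = thermal/eta)
theta_out = 32.0   # outdoor temperature (degC)
setpoint, db = 22.0, 1.0  # thermostat setpoint and deadband width (degC)

a = math.exp(-dt / (C * R))   # discrete-time thermal parameter
theta, m = 22.0, 0            # indoor temperature and on/off mode
energy = 0.0                  # electrical energy consumed (kWh)
for _ in range(24 * 60):      # simulate one day at minute resolution
    theta = a * theta + (1 - a) * (theta_out - m * R * P_th)
    if theta > setpoint + db / 2:
        m = 1  # too warm: compressor on
    elif theta < setpoint - db / 2:
        m = 0  # cool enough: compressor off
    energy += m * (P_th / eta) * dt
print(round(energy, 1), round(theta, 2))
```

With appliance-level estimates from disaggregation, parameters like C, R, and P_th could be fit per household; the same model then predicts how weatherization (larger R) or an efficiency upgrade (higher eta) would change consumption.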
2. Modeling Behavioral Change While modeling behavior itself is more or less out of our purview as power systems researchers (though some of our colleagues do work on this), we do routinely model optimal decision-making. Of course, people are not rational decision-makers, and so it is insufficient to model behavior as the outcome of an optimization problem. However, behavioral choices that can be automated, such as appliance settings, can be informed by optimization models. The results of these optimization models can be presented to householders together with program/rate recommendations, for example, “If you switch to x rate you will save y, and if you optimally schedule your appliances/thermostats, you will save z.” The generic optimization problem can be formulated as minimizing h(x) subject to q(x) = 0 and r(x) ≤ 0, where x represents the decisions, for example, appliance and thermostat settings, battery and/or vehicle charging schedules, and so on. The function h(x) encodes the “costs” including the cost of electricity and the (negative) benefit of home comfort and conveniences. The constraints q(x) = 0 and r(x) ≤ 0 encode the equality and inequality constraints, respectively, around load usage needs and physical load capabilities/constraints. There are a large number of papers focused on formulating and solving such decision-making problems to schedule and control residential electric loads, energy storage, and solar PV systems. Commercial Home Energy Management Systems (HEMS) can already be used to schedule smart appliances and thermostats to reduce energy costs. But a key open problem is whether existing approaches accommodate the diverse needs of LMI households or households of racial/ethnic minorities, such that the outputs of these optimization problems are actually useful to diverse homes. Moreover, how can we embed energy justice-promoting costs and constraints into these formulations?
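A bare-bones instance of the generic problem min h(x) s.t. q(x) = 0, r(x) ≤ 0 is choosing the start hour of a shiftable appliance under a time-of-use rate. The rate, appliance power, and deadline below are all invented, and brute force over the small feasible set stands in for a real solver.

```python
# Toy appliance-scheduling instance of min h(x) subject to constraints
# (invented rate and appliance data).
rate = [0.10] * 24             # $/kWh, off-peak baseline
for h in range(15, 20):
    rate[h] = 0.25             # evening on-peak
rate[5] = rate[6] = 0.18       # morning ramp

power, duration = 1.2, 2       # kW draw, hours of runtime
feasible_starts = range(0, 6)  # constraint: run must finish by 7 am

def run_cost(start):  # h(x): electricity cost of starting at `start`
    return sum(power * rate[start + k] for k in range(duration))

best = min(feasible_starts, key=run_cost)
print(best, round(run_cost(best), 2))  # -> 0 0.24
```

Embedding energy justice here could mean, for example, adding a comfort or convenience floor to the inequality constraints, or weighting h(x) so the schedule never trades essential end uses for bill savings.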
4.6 Conclusions This chapter detailed the intersection of energy justice and power systems research. I described a set of energy justice-related challenges that power system engineers are uniquely positioned to tackle, along with an example problem and some proposed algorithmic approaches to address it. My goal was to link together some seemingly disparate but inherently interlinked literature in order to make the case that power systems researchers can and should contribute directly to energy justice within their own work. The energy transition is not straightforward – there are innumerable paths we could take – but one that centers energy justice together with the need to combat climate change will lead to better distributional outcomes and a better chance of an overall equitable, fair, and stable solution. Acknowledgments This work was supported by NSF Grant EECS-1952038. I would like to thank my collaborators and students for stimulating discussions that inspired this chapter – Tony Reames, Carina Gronlund, Marie O’Neill, Gibran Washington, Joshua Brooks, and Xavier Farrell.
References 1. Initiative for Energy Justice, “Section 1 – Defining Energy Justice: Connections to Environmental Justice, Climate Justice, and the Just Transition,” https://iejusa.org/section-1-definingenergy-justice/, Accessed: January 16, 2022. 2. Ariel Drehobl and Lauren Ross, “Lifting the High Energy Burden in America’s Largest Cities: How Energy Efficiency Can Improve Low Income and Underserved Communities,” Tech Report: American Council for an Energy Efficient Economy, 2016. 3. International Institute for Sustainable Development, “Just Transition,” https://www.iisd.org/ topics/just-transition. Accessed: May 2, 2022. 4. US EPA, “Learn About Environmental Justice,” https://www.epa.gov/environmentaljustice/ learn-about-environmental-justice#:~:text=Environmental%20justice%20(EJ)%20is%20the, environmental%20laws%2C%20regulations%20and%20policies. Accessed: May 2, 2022. 5. The White House, “FACT SHEET: President Biden Takes Executive Actions to Tackle the Climate Crisis at Home and Abroad, Create Jobs, and Restore Scientific Integrity Across Federal Government,” https://www.whitehouse.gov/briefing-room/statements-releases/2021/ 01/27/fact-sheet-president-biden-takes-executive-actions-to-tackle-the-climate-crisis-athome-and-abroad-create-jobs-and-restore-scientific-integrity-across-federal-government/, January 27, 2021. 6. Shalanda Young, Brenda Mallory, and Gina McCarthy, “The Path to Achieving Justice40,” https://www.whitehouse.gov/omb/briefing-room/2021/07/20/the-path-to-achieving-justice40/ July 20, 2021. 7. Sheena E. Martenies, Chad W. Milando, and Stuart A. Batterman, “Air Pollutant Strategies to Reduce Adverse Health Impacts and Health Inequalities: A Quantitative Assessment for Detroit, Michigan,” Air Quality, Atmosphere & Health 11 (2018): 409–422. 8. 
Erik Johnson, Ross Beppler, Chris Blackburn, Benjamin Staver, Marilyn Brown, and Daniel Matisoff, “Peak Shifting and Cross-Class Subsidization: The Impacts of Solar PV on Changes in Electricity Costs,” Energy Policy 106 (2017): 436–444. 9. Eric O’Shaughnessy, Galen Barbose, Ryan Wiser, Sydney Forrester, and Naïm Darghouth, “The Impact of Policies and Business Models on Income Equity in Rooftop Solar Adoption,” Nature Energy 6 (2021): 84–91. 10. Lucas W. Davis and Catherine Hausman, “Who Will Pay for Legacy Utility Costs?” Energy Institute at Haas White Paper 317R, Mar 2022. 11. Benjamin K. Sovacool and Michael H. Dworkin, Global Energy Justice: Problems, Principles, and Practice, Cambridge University Press, 2014. 12. Kirsten Jenkins, Darren McCauley, Raphael Heffron, Hannes Stephan, Robert Rehner, “Energy Justice: A Conceptual Review,” Energy Research & Social Science 11 (2016): 174–182. 13. Diana Hernández, “Sacrifice Along the Energy Continuum: A Call for Energy Justice,” Environmental Justice 8.4 (2015): 151–156. 14. Tony G. Reames, “Targeting energy justice: Exploring spatial, racial/ethnic and socioeconomic disparities in urban residential heating energy efficiency,” Energy Policy 97 (2016): 549–558. 15. Dominic J. Bednar, Tony G. Reames, and Gregory A. Keoleian, “The Intersection of Energy and Justice: Modeling the Spatial Racial/Ethnic and Socioeconomic Patterns of Urban Residential Heating Consumption and Efficiency in Detroit, Michigan,” Energy & Buildings 143 (2017): 22–34. 16. Kangkang Tong, Anu Ramaswami, Corey (Kewei) Xu, Richard Feiock, Patrick Schmitz, and Michael Ohlsen, “Measuring Social Equity in Urban Energy Use and Interventions Using Fine-Scale Data,” Proceedings of the National Academy of Sciences 118.24 (2021): e2023554118. 17. Shuchen Cong, Destenie Nock, Yueming (Lucy) Qiu, Bo Xing, “Unveiling Hidden Energy Poverty Using the Energy Equity Gap,” Nature Communications 13 (2022): 2456. 18. Dominic J. Bednar and Tony G.
Reames, “Recognition of and Response to Energy Poverty in the United States,” Nature Energy 5 (2020): 432–439. 19. Erin Baker, Destenie Nock, Todd Levin, Samuel A. Atarah, Anthony Afful-Dadzie, David Dodoo-Arhin, Léonce Ndikumana, Ekundayo Shittu, Edwin Muchapondwa, Charles Van-Hein
Sackey, “Who is Marginalized in Energy Justice? Amplifying Community Leader Perspectives of Energy Transitions in Ghana,” Energy Research & Social Science 73 (2021): 101933. 20. Timothy Kostyk, Joseph Herkert, Clinton J. Andrews, and Clark Miller, “Energy and Society: Challenges Ahead,” Proceedings of the IEEE International Symposium on Technology and Society, 2011. 21. Bethel Tarekegne and Mark Rouleau, “An Energy Justice Based Approach for Electrification Planning – An Agent-Based Model,” Proceedings of the IEEE Global Humanitarian Technology Conference, 2019. 22. Emily Holt and Deborah A. Sunter, “Historical Patterns for Rooftop Solar Adoption Growth Rates in Connecticut,” Proceedings of the IEEE Photovoltaics Specialists Conference, 2021. 23. Jose F. Alfaro, Shelie Miller, Jeremiah X. Johnson and Rick R. Riolo, “Improving rural electricity system planning: An agent-based model for stakeholder engagement and decision making,” Energy Policy, 101 (2017): 317–331. 24. Destenie Nock, Todd Levin, and Erin Baker, “Changing the Policy Paradigm: A Benefit Maximization Approach to Electricity Planning in Developing Countries,” Applied Energy 264 (2020): 114583. 25. Andrew Harrison Hubble and Taha Selim Ustun, “Scaling renewable energy based microgrids in underserved communities: Latin America, South Asia, and Sub-Saharan Africa,” Proceedings of IEEE PES PowerAfrica, 2016. 26. Oluleke Babayomi and Taiwo Okharedia, “Challenges to Sub-Saharan Africa’s Renewable Microgrid Expansion – A CETEP Solution Model,” Proceedings of IEEE PES/IAS PowerAfrica, 2019. 27. Oluleke Babayomi, Tobi Shomefun, Zhenbin Zhang, “Energy Efficiency of Sustainable Renewable Microgrids for Off-Grid Electrification,” Proceedings of IEEE PES/IAS PowerAfrica, 2020. 28. Mariano Arriaga, Claudio A. Cañizares, and Mehrdad Kazerani, “Renewable Energy Alternatives for Remote Communities in Northern Ontario, Canada,” IEEE Transactions on Sustainable Energy 4.3 (2013): 661–670. 29.
Jared Porter, Michael Pedrasa, Andy Woo, Kameshwar Poolla, “Combining storage and generation for prepaid electricity service,” Proceedings of the IEEE International Conference on Smart Grid Communications, 2015. 30. Amory Lovins, “Energy Strategy: The Road Not Taken?” Foreign Affairs 55 (1976): 65–96. 31. Joshua A. Taylor, Sairaj V. Dhople, and Duncan S. Callaway, “Power Systems Without Fuel,” Renewable and Sustainable Energy Reviews 57 (2016): 1322-1336. 32. Severin Borenstein, Meredith Fowlie, and James Sallee, “Designing Electricity Rates for an Equitable Energy Transition,” Next10 https://www.next10.org/publications/electricity-rates, 2021. 33. Gordon Walker, Neil Simcock, and Rosie Day, “Necessary energy uses and a minimum standard of living in the United Kingdom: Energy justice or escalating expectations?” Energy Research & Social Science 18 (2016): 129–138. 34. Benefits.gov, “Low Income Home Energy Assistance Program (LIHEAP),” https:// www.benefits.gov/benefit/623 Accessed: April 29, 2022. 35. Wikipedia, “Netflix Prize,” https://en.wikipedia.org/wiki/Netflix_Prize Accessed: May 8, 2022. 36. Brooke Van Zandt, “Building Trust in AI To Ensure Equitable Solutions: NREL Leads Research to Address Bias in Clean Energy Innovation,” National Renewable Energy Laboratory, https://www.nrel.gov/news/program/2022/building-trust-in-ai-to-ensureequitable-solutions.html, 2022. 37. K. Carrie Armel, Abhay Gupta, Gireesh Shrimali, and Adrian Albert, “Is Disaggregation the Holy Grail of Energy Efficiency? The Case of Electricity,” Energy Policy 52 (2013): 213–234. 38. G.W. Hart, “Nonintrusive appliance load monitoring.” Proceedings of the IEEE, 80.12 (1992): 1870–1891. 39. Pecan Street Inc. “Pecan Street Dataport” https://www.pecanstreet.org/dataport Accessed: May 8, 2022.
40. Satoru Ihara and Fred C. Schweppe, “Physically based modeling of cold load pickup,” IEEE Transactions on Power Apparatus and Systems, PAS-100.9 (1981): 4142–4150. 41. C.Y. Chong and A.S. Debs, “Statistical synthesis of power system functional load models,” Proceedings of the IEEE Conference on Decision and Control, 1979.
Johanna L. Mathieu is an Associate Professor of Electrical Engineering and Computer Science at the University of Michigan, Ann Arbor. Her research focuses on ways to reduce the environmental impact, cost, and inefficiency of electric power systems via new operational and control strategies. In particular, much of her work develops new methods to actively engage distributed energy resources such as energy storage, flexible electric loads, and distributed renewable resources in power system operation. This is especially important in power systems with high penetrations of intermittent renewable energy sources such as wind and solar. She uses methods from a variety of fields including control systems and optimization. Mathieu received her SB from MIT in Ocean Engineering with a minor in Ancient and Medieval Studies in 2004. She had always loved math but was also intrigued by archaeology and decided to declare ocean engineering as her major as the result of a field trip to the Woods Hole Oceanographic Institution (WHOI) during her freshman year at college. There she learned about underwater archaeology and how WHOI researchers were developing underwater robots for underwater archaeological site exploration and mapping. In her senior year, she learned about another side of ocean engineering – wave energy harvesting for electricity generation. Her senior design project was to develop a wave energy harvester that she and her team built and deployed in the Charles River, her first foray into renewable energy technologies. Burnt out after 4 years at MIT and wanting to do something that made a direct impact on people’s lives, Mathieu joined the US Peace Corps after college and spent a year in Tanzania teaching high school math and physics. In 2006, she started her MS/PhD at the University of California, Berkeley, in Mechanical Engineering without a clear plan for what exactly she planned to do, though with a declared focus on control systems. 
She had hoped to marry her interest in engineering with her experiences working in a developing country, and so took a class called “Design for Sustainable Communities” which led to her MS research with Prof Ashok Gadgil on arsenic remediation of drinking water for Bangladesh. She traveled to Bangladesh several times to test the technology and build prototypes. However, she found herself doing more chemistry than math, and no control systems at all. Working with Prof Gadgil, Mathieu was an affiliate at Lawrence Berkeley National Laboratory (LBNL), and so she started attending more seminars and meetings there with researchers working on building efficiency and demand response. She volunteered to help with some projects within the LBNL Demand Response Research Center and also started working with Prof Duncan Callaway who had recently joined Berkeley within the Energy and Resources Group. The work was an ideal match
of math-heavy engineering with social impact; she had found her calling. Her PhD research focused on demand response – both characterizing commercial building response to demand response signals and developing control system approaches to coordinate residential load participation in grid ancillary services. In the final year of her PhD, Mathieu was unsure if she should pursue a career in a national laboratory or academia. She had collaborated on a project with a visiting student from ETH Zurich and was intrigued by the research coming out of his group. With advice from a trusted mentor Sila Kiliccote, then at LBNL, that Europe was the place one should go if one really wanted to learn about cutting-edge grid/renewable energy technologies, she reached out to Prof Göran Andersson at ETH Zurich and was offered a postdoc. She spent a year and a half in Zurich working for Prof Andersson contributing to a number of research projects related to energy storage and stochastic optimal power flow, and also learning how to mentor PhD students. In January 2014, Mathieu started as an Assistant Professor at the University of Michigan. She is still continuing research related to her PhD and postdoc but has broadened her research to also work on topics such as coordination of coupled infrastructure networks, learning-based approaches to disaggregate feeder load, and characterizing the environmental impacts of storage. Now, post-tenure, she is also pivoting back to some of the fields that she was once passionate about, in particular, developing technologies for marginalized communities. She currently has an NSF Smart and Connected Communities project to develop data-driven approaches to increase access to energy assistance, efficiency, and rate programs by LMI households in Detroit, MI. Mathieu is the recipient of a 2019 NSF CAREER Award, the Ernest and Bettine Kuh Distinguished Faculty Award, and the U-M Henry Russel Award. 
She was a speaker at the 2021 National Academy of Engineering US Frontiers of Engineering Symposium. Mathieu is a Senior Member of the IEEE and currently serves as the Chair of the IEEE Power and Energy Society Technical Committee on Smart Buildings, Loads, and Customer Systems.
Part I
Planning and Generation
Chapter 5
Reliability-Centered Asset Management with Models for Maintenance Optimization and Predictive Maintenance: Including Case Studies for Wind Turbines Lina Bertling Tjernberg
5.1 Introduction

5.1.1 Energy Transition and the Role of Wind Power Operation and Maintenance

The energy system is in a global transition, motivated by climate and energy goals and by growing energy needs. The United Nations adopted a resolution for sustainable development with 17 goals (the sustainable development goals, SDGs) to be achieved by 2030 [1]. In 2020, the EU Framework Program for Research and Innovation presented the European Green Deal Call, a €1 billion investment to boost the green and digital transition, with a target of no net emissions of greenhouse gases by 2050 [2]. As society moves toward carbon neutrality, electricity is becoming the dominant energy carrier. In this process, electrical power systems are going through a paradigm shift with three key features: decarbonization, decentralization, and digitalization. Meeting these challenges will require coordinating widely ranging parts of the solution to reach a reliable, economic, and sustainable system in an efficient way. Major opportunities come from the rapid development of digitalization and data processing, including artificial-intelligence methods. An intelligent and adaptive grid is necessary to provide a sustainable, cost-effective, and resilient power supply. Digital technologies will be applied in devices and systems to enable enhanced monitoring and control of grid elements, through more widespread communication, more powerful computing, and finer control. This should achieve smart solutions for the grid edge, system-level
L. B. Tjernberg () KTH, Stockholm, Sweden e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 J. S. Tietjen et al. (eds.), Women in Power, Women in Engineering and Science, https://doi.org/10.1007/978-3-031-29724-3_5
control, local automation, and secure interaction among all actors, as well as grid reliability, cybersecurity, and resilience. In the past decades, there has been a huge increase in new electricity generation from renewable energy resources (RE), especially from wind and solar. The International Energy Agency (IEA) has predicted that electricity generation from renewable energy resources will expand by over 8% in 2021, reaching 8300 TWh, the highest annual rise since the 1970s [3]. Further, solar and wind energy are expected to account for two-thirds of renewable growth. According to GWEC Market Intelligence, the sector's total onshore and offshore wind installations will exceed 1 TW by 2025. This development is a result both of a transformation of the energy system for sustainable development and of technology developments in electrification and digitalization. The struggle with the pandemic and the war in Europe, following the invasion of Ukraine in February 2022, are also factors that have been accelerating this transition of the energy system. The future energy system should reach not only goals for sustainable development but also goals for economic growth and for independence in energy supply. Offshore wind power fulfills many of these factors. In order for it to succeed, however, high availability and low costs for operation and maintenance (O&M) are needed. Thus, O&M optimization is becoming ever more crucial in accelerating the expansion of wind energy and facilitating the energy transition. With larger capacity factors than other RE sources, onshore and especially offshore wind provide better energy dependability to rising economies where power demand is increasing, particularly when aggregated across vast geographical regions. Onshore wind is a mature and mainstream energy source that is cost competitive with new coal and gas facilities and, in many markets, undercuts the operational costs of fully depreciated conventional generating assets [4].
Because many failures occur not as a consequence of a single incident but as a result of continual wear and tear, it is feasible to discover degradation at an early stage by continuously monitoring the critical components of the wind turbine (WT). Due to the low cost of supervisory control and data acquisition (SCADA) systems and their ability to provide complete system coverage, serving as a digital footprint of a WT's current operating conditions, the use of SCADA signals for advanced fault identification in WTs has been the subject of much research, with the goal of preventing severe failures [5–15]. Furthermore, the operating conditions of the primary components, as well as any abnormalities, are determined using algorithms that include critical parameters such as speed, temperature, relative humidity, operational torque, and operating vibrations throughout the whole frequency spectrum. This allows the operating state of critical components to be continually monitored in real time, reducing unexpected downtime by optimizing maintenance interventions, which is particularly significant for offshore wind applications where early defect detection is critical. This chapter includes a selected number of case studies demonstrating models for O&M of WTs.
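The SCADA-based early-warning idea described above can be sketched with a simple statistical rule: flag a signal sample that deviates strongly from its own recent history. This is a minimal illustration, not any of the published methods in [5–15]; the window length, threshold, and temperature values are assumptions made for the example.

```python
from collections import deque
from statistics import mean, stdev

def scada_anomaly_flags(readings, window=20, k=3.0):
    """Flag readings deviating more than k standard deviations
    from the rolling mean of the previous `window` samples."""
    history = deque(maxlen=window)
    flags = []
    for x in readings:
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            flags.append(sigma > 0 and abs(x - mu) > k * sigma)
        else:
            flags.append(False)  # not enough history yet
        history.append(x)
    return flags

# Illustrative gearbox bearing temperatures (deg C): stable, then a jump
temps = [65.0 + 0.1 * (i % 5) for i in range(40)] + [80.0]
flags = scada_anomaly_flags(temps)
print(flags[-1])  # the final jump is flagged as a potential fault
```

In practice such residual rules are only one building block; production condition monitoring systems combine many signals (temperature, vibration, torque) and often replace the rolling statistics with learned models.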
5.1.2 GreenGrids Vision and Thematic Areas

There are high ambitions for the energy transition into a sustainable society. In Europe, the EU taxonomy for sustainable activities [16] has been developed as a tool to support industry and investors in the transition of the energy system, to avoid "greenwashing," and to drive investments toward the transition. In the revised EU taxonomy, sustainable electricity generation includes both generation from renewable energy resources (such as wind and solar) and generation from nuclear power and natural gas, which are nonrenewable energy resources. For the latter, there are specific requirements on low emissions and the handling of waste. In Germany, for example, natural gas is considered an intermediate step between oil and coal on the one hand and renewable energy resources on the other (i.e., continued development of large offshore wind parks). A sustainable society, however, implies not only sustainable electricity generation but also changes at the consumption end, such as electrified road transportation and low-emission industrial processes. In the current debate, the electricity grid is typically portrayed as inadequate to support this transition, exemplified by industrial development hampered by lack of grid connection or by delays in establishing new data centers. Another perspective, however, is to see the power grid as an enabler of the energy transition, which is what the vision of GreenGrids represents [17]. Much as the Internet shaped the development of the entertainment industry through efficient distribution of media, the electricity grid will facilitate the increase of renewables and the reshaping of industries and societies by providing secure and flexible access to electricity at a competitive cost.
Framed in the SDGs [1], GreenGrids clearly contributes to Goal #7 for sustainable and clean energy, but equally to Goals #9 and #11 by enabling the transition of industry, infrastructure, and society toward sustainability. Admittedly, development of electricity grids, like many societal infrastructures, is prone to be slow, focused very much on economies of scale, and requiring large investments. Significant legal and regulatory constraints therefore usually lead to long processes to build grids. Once built, keeping them operational and reliable at a low cost is the ruling paradigm. The GreenGrids vision is presented in three thematic areas that provide a 360-degree view on the ingredients of electrification toward a sustainable society, as shown in Fig. 5.1. There is an overlying system-level aspect, including the key expertise areas of asset management (AM) and circular economy, aiming toward the overall goal of fulfilling the SDGs. The three interlinked thematic areas of GreenGrids are as follows:

1. The intelligent and adaptable grid, to create new value for utilities and customers through, e.g., intelligent tools for real-time monitoring and management, predictive grid management for transmission and distribution networks, and medium- and low-voltage integration
Fig. 5.1 GreenGrids vision for a sustainable society with interlinked thematic areas [17]
2. The flexible grid infrastructure, to be ready for the evolution of power grids and the decarbonization trend, through new power electronic converters for integration of renewables, uncertainty management toward grid flexibility, and optimal sizing of renewable, non-dispatchable sources and of energy storage systems to mitigate variability in production, enabling installation of large amounts of variable sources without compromising grid stability

3. Improved future power devices, with enhanced material capability and environmentally friendly solutions, through novel transformer, cable, and switching technologies and enhanced material capability and predictability, to understand materials aging and to achieve reliable aging models

This chapter focuses on AM, with contributions to two parts of the GreenGrids topics: firstly, to thematic area (1), proposing novel methods for predictive maintenance, specifically including a fault detection framework using neural networks for condition monitoring of high-voltage equipment in wind turbines; and, secondly, to (3), proposing methods for reliability-centered maintenance and life cycle cost (LCC) analysis for a circular economy.
5.1.3 Power System Reliability Definition

The concept of reliability is widely used for engineering systems, meaning the ability of a system or component to perform its required functions under stated conditions for a specified time. Reliability engineering can be considered a subdiscipline of systems engineering emphasizing dependability in the life cycle management of a product. This includes designing in the ability to maintain, test, and support the product throughout its total life cycle [18].
5.1.3.1 Basic Definitions for Reliability Performance
Availability
According to [19], availability is defined as the probability that the system is capable of functioning at a time t. Different types of availability are distinguished: asymptotic availability, asymptotic average availability, average availability, and instantaneous availability [20]. The asymptotic availability is a fundamental measure of reliability performance and is commonly referred to simply as availability. It is defined as

A(∞) = lim_{t→∞} A(t)

and can be expressed as

A = m_s / (m_s + MDT)

where m_s denotes the average service time and MDT the mean downtime. This expression can be used if the failure intensity and the repair intensity are constant; under these conditions, the definitions of availability and asymptotic availability coincide, and both are commonly called availability. According to [19], the asymptotic availability is identical to the asymptotic average availability when the limit value for the asymptotic availability exists, and this is then called the availability A, defined as

A = MTTF / (MTTF + MTTR) = E(T) / (E(T) + E(D))
where T is the lifetime and D is the repair time, and MTTF and MTTR are defined below.

Mean Time Between Failures
Mean time between failures (MTBF) is the most common way to determine a maintenance interval. It indicates the time from a component failure to its next failure. This time includes mean downtime (MDT) and mean time to failure
(MTTF):

MTBF = MDT + MTTF
According to [19], MTTF can be written as:

MTTF = E(T) = ∫_0^∞ t·f(t) dt
With the help of historical data, MTBF can be estimated to get an approximate value. The law of large numbers applies to this estimation: as the number of observations N goes to infinity, the estimated parameter approaches its true value [19]:

MTBF ≈ (1/N) · Σ_{n=1}^{N} t_n

where t_n is the time between failures for failure n and N is a large population. If the component is not in operation 100% of the time, this must be taken into consideration.

Mean Time to Recovery (MTTR)
MTTR together with the mean waiting time (MTW) makes up the MDT:

MDT = MTTR + MTW

where MTW is also estimated with the help of historical data. MTTR is estimated from historical data in the same way as MTBF:

MTTR ≈ (1/N) · Σ_{n=1}^{N} r_n

where r_n is the repair time for failure n and N is a large population.

Failure Rate
The failure rate, λ, indicates how often a component fails, while MTBF indicates how much time passes between failures. The relation between them is therefore:

λ = 1 / MTBF

5.1.3.2 Power System Reliability Evaluation Evolving Over Time
A classical definition of power system reliability is given by: “The Reliability of a power system is the degree to which the performance of the elements of the system
results in power being delivered to consumers within accepted standards and in the amount desired." Figure 5.2 illustrates the traditional classification of power system reliability into security and adequacy [21]. Security is the ability of the power system to withstand sudden disturbances. It refers to the degree of risk in the system's ability to survive imminent disturbances (contingencies) without interruption of customer service. It relates to the robustness of the system in the context of imminent disturbances and depends on the power system operating condition before the disturbance and the contingent probability of disturbances. Adequacy is the ability of the system to supply the aggregate electric power and energy requirements of the customers at all times, taking into account scheduled and unscheduled outages of system components. Power system reliability, with applications to power systems, has evolved over time together with the development of the power system and the power grid. A central focus has always been on power system operation and transmission. Distribution, however, has grown in focus with the increased complexity of the distribution system, and market and customer perspectives entered with the introduction of the electricity market in the mid-1990s. Activities in the field have gathered experts from industry and academia in Institute of Electrical and Electronics Engineers (IEEE) subcommittees since the 1970s and are today held within the subcommittee on Reliability, Risk and Probability Applications (RRPA). A bibliography of papers on the subject of power system reliability evaluation has been published regularly, from [22, 23] to [24].
In the latter, the following categories of topics were included: static generation capacity reliability evaluation, multiarea reliability evaluation, composite generation-transmission reliability evaluation, operating reserve evaluation, transmission and distribution system reliability evaluation, equipment outage data, and reliability cost/worth analysis. The GreenGrids vision was introduced above in the context of the energy transition. Each of the three GreenGrids themes opens new perspectives and possibilities for power system reliability assessment. As one example, digitalization (thematic area 1) provides access to more data and information on the power system, its components, and its users. This access to data can be used to develop more advanced tools for operation and also more advanced models for power system reliability.
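To make the basic measures of Sect. 5.1.3.1 concrete, the estimators can be applied to a small set of historical failure records. This is an illustrative sketch: the failure and repair times are invented, and the mean waiting time (MTW) is assumed to be zero, so that MDT = MTTR.

```python
# Illustrative historical records for one component (hours)
times_between_failures = [900.0, 1100.0, 1000.0, 1200.0]  # t_n
repair_times = [8.0, 12.0, 10.0, 10.0]                    # r_n

N = len(times_between_failures)
mtbf = sum(times_between_failures) / N   # MTBF ~ (1/N) * sum(t_n)
mttr = sum(repair_times) / N             # MTTR ~ (1/N) * sum(r_n)
mttf = mtbf - mttr                       # from MTBF = MDT + MTTF, with MTW = 0
availability = mttf / (mttf + mttr)      # A = MTTF / (MTTF + MTTR)
failure_rate = 1.0 / mtbf                # lambda = 1 / MTBF

print(mtbf, mttr, round(availability, 4))  # 1050.0 10.0 0.9905
```

With real operating data, N would be far larger, and periods when the component is not in operation would have to be accounted for, as noted in the text.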
5.2 Asset Management and Predictive Maintenance

5.2.1 Asset Management

Asset management (AM) is defined as the coordinated activity of an organization to realize value from assets [25]. The aim of AM is to handle assets in an optimal way in order to fulfill an organization's goals while considering risk [26]. Typical goals of an organization are maximizing asset value, maximizing benefit, or minimizing life cycle cost. Recently, another goal related to the circular economy has come into focus. The circular economy is an economic model developed to minimize resource input and waste while also minimizing the climate footprint. The risks to an organization could include a lack of service, e.g., unavailability of power supply to electricity customers, the risk of cyberattacks, or a lack of resources in personnel or material. The unavailability of power supply can be characterized by the probability of failure occurrence and its consequences. There are different possible actions to handle these assets, including: acquire, maintain, replace, and redesign. From a circular-economy perspective, assets could also be given a second lifetime or a lifetime extension. In brief, AM provides inputs to decision-makers on [26]:

• What assets to apply actions to
• What actions to apply
• How to apply the actions
• When to apply the actions
In order to understand the choices, decision theory presents two alternative methods: normative or optimal decision theory. In normative theory, the outcomes of a decision are analyzed, and a specific situation comes into focus. In optimal theory, the focus is on investigating why choices are made and on their assumptions. In decision theory, studies are made of the logic and mathematical properties of decision-making under uncertainty [27]. In applying decision theory, scenarios or alternative case studies are typically used. In order to make comparative assessments between different decision analysis techniques, a set of defined scenarios can be used. An example of scenarios for the energy transition is given by "The Long-Term Energy Scenarios for the Clean Energy Transition campaign" [28]. These are also known as the LTES and were initiated under the Clean Energy Ministerial (CEM) to promote the improved use and development of LTES for the clean energy transition. In practice, a specific set of alternative investment and design solutions can be investigated, e.g., formulated as different cases. The information needed to make these decisions includes [26]:

• Condition data
• Failure statistics
• Reliability modeling techniques
• Reliability assessment tools
• Maintenance planning tools
• Systematic techniques for maintenance planning, for example, the reliability-centered maintenance (RCM) and reliability-centered asset management (RCAM) methods

The next section provides an overview of some of these input needs for AM, with a special focus on predictive maintenance and methods for maintenance optimization.
5.2.2 ISO 55000 Standard

The ISO 55000 family of standards is the first set of international standards for AM [25]. They emerged from the Publicly Available Specification (PAS) 55. PAS 55 was launched by the British Standards Institution (BSI) in 2004, as a result of an effort led by the Institute of Asset Management (IAM), and is considered the first internationally recognized specification for AM. PAS 55 considers the optimal management of physical assets, whereas ISO 55000 is a standard for any asset type. The ISO 55000 family of standards aligns with other major management system standards, including, e.g., ISO 9001 for quality management, ISO 14001 for environmental management, and ISO 31000 for risk management. The ISO 55000 family provides the first management standard to implement the ISO Annex SL, a high-level structure that provides a universal structure, identical core text, and common terms and definitions for all management standards. Selected terminology is shown in Table 5.1.

Table 5.1 Selected terminology for asset management (AM) from the ISO 55000 [25]

AM: Coordinated activity of an organization to realize value from assets
Asset: Item, thing, or entity that has potential or actual value to an organization
AM plan: Documented information that specifies the activities, resources, and timescales required for an individual asset, or a grouping of assets, to achieve the organization's AM objectives
AM system: Management system for AM whose function is to establish the AM policy and AM objectives
Capability: Measure of capacity and the ability of an entity (system, person, or organization) to achieve its objectives
Corrective action: Action to eliminate the cause of a nonconformity and to prevent recurrence
Competence: Ability to apply knowledge and skills to achieve intended results
Continual improvement: Recurring activity to enhance performance
Table 5.1 (continued)

Effectiveness: Extent to which planned activities are realized and planned results achieved
Life cycle: Stages involved in the management of an asset
Management system: Set of interrelated or interacting elements of an organization to establish policies and objectives and processes to achieve those objectives
Monitoring: Determining the status of a system, a process, or an activity
Measurement: Process to determine a value
Objective: An objective can be strategic, tactical, or operational
Policy: Intentions and direction of an organization as formally expressed by its top management
Predictive action: Action to monitor the condition of an asset and predict the need for preventive action or corrective action
Preventive action: Action to eliminate the cause of a potential nonconformity or other undesirable potential situation
Requirement: Need or expectation that is stated, generally implied or obligatory
Risk: Effect of uncertainty on objectives
Strategic AM Plan (SAMP): Documented information that specifies how organizational objectives are to be converted into AM objectives, the approach for developing AM plans, and the role of the AM system in supporting achievement of the AM objectives
5.2.3 Maintenance as a Strategic Tool

Maintenance is one of the main tools of AM. Its goal is to increase the duration of useful component life and postpone failures that typically would require expensive repairs. Figure 5.3 illustrates the impact of maintenance policies. Increasing deterioration is expressed in terms of decreasing asset value. The asset value curves in the diagram are referred to as life curves. The life curves illustrate how maintenance can be used as a tool for AM by showing the benefits of different maintenance policies. The figure illustrates conditions for two maintenance policies and for the case where no maintenance is carried out at all. Maintenance has its own costs, which have to be taken into account when policies are compared and the most cost-effective policy is chosen. The costs of maintenance should be balanced against the gains resulting from increased reliability. When costs are considered, Policy 2 may or may not be superior to Policy 1. Maintenance is a combination of all the technical, administrative, and managerial actions taken during the life cycle of an item. It can be carried out in different ways. Figure 5.4 illustrates the different types of maintenance.
Fig. 5.3 Illustration of the impact of maintenance policies on life curves. (Amended from [26] and in original from [29])
Fig. 5.4 Overview of maintenance concepts. (Reproduced from standard EN13306 Maintenance Terminology [26])
Corrective maintenance (CM) is carried out after fault recognition and is intended to put an item into a state in which it can perform a required function. It is typically performed when there are no cost-effective means to detect or prevent a failure. CM can be used if there is no way to detect or prevent a failure, or it is not worth doing. It simply means that the asset is run until a failure occurs and then the system function is restored. This may of course not always be an option if the consequences of a breakdown are severe. Preventive maintenance (PM) is carried out at predetermined intervals or according to prescribed criteria and intended to reduce the probability of failure or the degradation of the functioning of an item. Predetermined maintenance, or scheduled maintenance, is a preventive maintenance carried out in accordance with established intervals of time or number of units of use but without previous condition investigation. Predetermined maintenance
would be an option if a failure is age related and the probabilities of failure over time can be established. Depending on the consequences of a failure, different maintenance intervals can be chosen. If the consequences of a failure are not too severe and the costs of predetermined tasks are high, one might allow longer intervals between tasks than if the functional failure leads to a safety hazard whose consequences cannot be tolerated. A scheduled restoration is an example of a predetermined maintenance task. There are two main approaches to preventive maintenance strategies:

1. Time-based maintenance (TBM) is preventive maintenance carried out in accordance with established intervals of time or number of units of use but without previous condition investigation. TBM is suitable for failures that are age related and for which the probability distribution of failure can be established.

2. Condition-based maintenance (CBM) is preventive maintenance based on performance and/or parameter monitoring. CBM covers all maintenance strategies involving inspections or permanently installed condition monitoring systems (CMS) to decide on maintenance actions. Inspection can involve the use of human senses (noise, visual, etc.), monitoring techniques, or function tests. CBM can be used for non-age-related failures if the activity can detect or diagnose the degradation in time in a cost-effective manner. The ability to detect deterioration in time is linked to the concept of the P-F curve, which represents a typical deterioration of the state of a component over time, as shown in Fig. 5.5. A CBM strategy is effective if it can observe the deterioration sufficiently in advance, and ideally give a prognosis of the time to failure, in order to schedule a repair or replacement of the component before failure. This has the advantage of minimizing downtime cost and often reduces further consequential damage.
If no cost-effective maintenance strategy exists for critical components or failures, design/manufacturing improvement should be considered in order to increase the inherent reliability of the component. Fig. 5.5 P-F curve concept. P represents the point in time when the indication of a potential failure can be first detected. F represents the point in time when the deterioration leads to a failure
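The trade-off between corrective and time-based preventive maintenance discussed above can be illustrated with a small Monte Carlo simulation. The Weibull lifetime parameters, cost figures, and replacement interval below are assumptions chosen for illustration only, not values from any of the case studies in this chapter.

```python
import random

random.seed(1)

def avg_cost_per_hour(interval, scale=1000.0, shape=3.0,
                      c_preventive=1.0, c_corrective=10.0, horizon=200_000.0):
    """Average cost per hour of a time-based maintenance policy:
    replace preventively every `interval` hours, or correctively
    on failure, whichever comes first. Lifetimes are Weibull."""
    t, cost = 0.0, 0.0
    while t < horizon:
        life = random.weibullvariate(scale, shape)
        if life < interval:        # failure before planned replacement
            t += life
            cost += c_corrective
        else:                      # preventive replacement happens first
            t += interval
            cost += c_preventive
    return cost / t

corrective_only = avg_cost_per_hour(float("inf"))  # run-to-failure policy
tbm_500h = avg_cost_per_hour(500.0)                # replace every 500 h
print(tbm_500h < corrective_only)                  # preventive policy is cheaper here
```

With an aging failure behavior (Weibull shape > 1), preventive replacement pays off; with a constant failure rate (shape = 1) it would not, which mirrors the text's point that TBM suits age-related failures.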
5.2.4 Data Analytics, Artificial Intelligence, and Machine Learning

Data sets can consist of either qualitative or quantitative variables: qualitative properties are observed and can generally not be measured, whereas quantitative properties have numerical values. Statistics is a branch of mathematics covering the collection, analysis, interpretation, presentation, and organization of data. Statistical analysis can be exploratory or confirmatory. Exploratory data analysis (EDA) aims to find patterns and relationships in data. Confirmatory data analysis (CDA), in contrast, draws conclusions from data that are subject to random variation, building on probability theory; it applies statistical techniques to determine whether hypotheses about a data set are true or false. Artificial intelligence (AI) is the field of developing computers and robots capable of behaving like humans or even going beyond human capabilities. Examples of AI applications include web search engines (e.g., Google), recommendation systems (e.g., Netflix), and self-driving cars (e.g., Tesla). Machine learning (ML) is a subset of AI and a subfield of computer science that gives "computers the ability to learn without being explicitly programmed" [30]. ML algorithms are inspired by the human learning process and learn iteratively from data. Mathematical models built using ML algorithms can be used for object recognition and classification. The goal of ML is to enable machines to learn by themselves from data. Artificial neural networks (ANNs, or neural networks, NNs) are computing systems inspired by the biological neural networks that constitute animal brains. An ANN is based on a collection of connected units called artificial neurons. The connections, called edges, can transmit a signal to other neurons.
The signal at a connection is a number, and the output of each neuron is computed by some nonlinear function of the sum of its inputs. Neurons and edges have weights that adjust as learning proceeds. Neurons are aggregated into layers, and different layers may perform different transformations of their inputs. Signals travel from the input layer (the first layer) to the output layer (the last layer); additional layers can help to refine the accuracy of the model. Deep learning is a subset of ML that structures algorithms in three or more layers to create an ANN. Data analytics (DA) is the process of examining data sets. Three examples of DA methods that are applicable to AM and predictive maintenance are:
1. Data mining, which involves sorting through large data sets, e.g., to identify trends, patterns, and relationships
2. Predictive analytics, which seeks to predict such things as equipment failures and other future events
3. ML and AI, which use automated algorithms, making it possible to solve larger problems faster compared to traditional analytical models
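As a minimal illustration of the feedforward computation described above — a weighted sum of inputs passed through a nonlinear function, layer by layer — the following Python sketch uses random, untrained weights purely for demonstration:

```python
import numpy as np

def relu(x):
    # Nonlinear activation applied to the weighted sum of inputs
    return np.maximum(0.0, x)

def forward(x, layers):
    # Each layer is a (weights, biases) pair; the signal travels from the
    # input layer through any hidden layers to the output layer.
    for W, b in layers:
        x = relu(W @ x + b)
    return x

# Illustrative 2-3-1 network with fixed (untrained) weights
rng = np.random.default_rng(0)
layers = [(rng.standard_normal((3, 2)), np.zeros(3)),
          (rng.standard_normal((1, 3)), np.zeros(1))]
y = forward(np.array([0.5, -1.2]), layers)
print(y.shape)
```

Training such a network means adjusting the weights and biases iteratively from data, which is what ML libraries automate.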
L. B. Tjernberg
5.3 Reliability-Centered Maintenance (RCM)

5.3.1 RCM

The RCM methodology provides a framework for developing optimally scheduled maintenance programs. The term RCM identifies the role of focusing maintenance activities on reliability aspects. The aim of RCM is to optimize the maintenance achievements (efforts, performance) in a systematic way. The RCM method requires maintenance plans and leads to a systematic maintenance effort. Central to this approach is identifying the items that are significant for system function. The aim is to achieve cost-effectiveness by controlling the maintenance performance, which implies a trade-off between corrective and preventive maintenance and the use of optimal methods.
5.3.2 The Concept of RCM

The RCM concept originated in the civil aircraft industry in the 1960s with the creation of the Boeing 747 series of aircraft (the Jumbo). One prerequisite for obtaining a license for this aircraft was having in place an approved plan of PM. However, this aircraft type was much larger and more complex than any previous aircraft type; thus, PM was expected to be very expensive. Therefore, it was necessary to develop a new PM strategy. United Airlines led the developments, and a new strategy was created. It was primarily concerned with identifying maintenance tasks that would eliminate the cost of unnecessary maintenance without decreasing safety or operating performance. The resulting method included an understanding of the time aspects of reliability (aging) and the identification of maintenance actions critical for system functions. The maintenance program was a success. The good outcome raised interest and the program spread. It was further improved, and in 1975 the US Department of Commerce defined the concept as RCM and declared that all major military systems should apply RCM. The first full description was published in 1978 [31], and in the 1980s the Electric Power Research Institute (EPRI) introduced RCM to the nuclear power industry. Today, RCM has been implemented by many electric power utilities for managing maintenance planning. RCM provides a formal framework for handling the complexity of maintenance issues but does not add anything new in a strictly technical sense. RCM principles and procedures can be expressed in different ways; however, the concept and fundamental principles of RCM remain the same. The following features originate from the first definition of RCM [32] and define and characterize the RCM method. The RCM method facilitates the:
1. Preservation of system function
2. Identification of failure modes
3. Prioritizing of function needs
4. Selection of applicable and effective maintenance tasks

A comprehensive introduction to the RCM method is given in [33], which summarizes the key attributes of RCM in seven basic questions:
1. What are the functions and associated desired standards of performance of the asset in its present operating context?
2. In what ways can it fail to fulfil its functions?
3. What causes each functional failure?
4. What happens when each failure occurs?
5. In what way does each failure matter?
6. What should be done to predict or prevent each failure?
7. What should be done if a suitable proactive task cannot be found?
5.3.3 The Concept of RCAM

Reliability-centered asset maintenance (RCAM) merges the proven systematic approach of RCM with quantitative maintenance optimization techniques. While RCM alone, as a qualitative method, is limited in assessing the cost-effectiveness of different maintenance strategies, mathematical maintenance optimization techniques alone do not ensure that the maintenance efforts address the most relevant components and failures. RCAM has been formulated based on an understanding of RCM concepts and experience gained from RCM application studies. RCAM identifies the central role of defining the relationship between component behavior and system reliability, which is established through the evaluation of the causes of failures. The RCAM method was first published in [34] with applications for electrical distribution systems. The RCAM approach has since been applied to power transmission systems, hydro plants, nuclear power plants, and wind power systems and their equipment. The book [26] gives a detailed overview of both the RCM and RCAM methods, including case studies. In this chapter, selected applications for wind power are included: firstly, studies focusing on the implementation of RCM and maintenance optimization and, secondly, recent work on using machine learning methods for predictive maintenance as an input to RCAM. The mathematical models required for the implementation of the RCAM method, though not at all limited to this method, are explained in the following sections.
5.3.3.1 The Different Steps in an RCAM Analysis
Table 5.2 presents the steps of an RCAM analysis, which constitute the main procedures for developing RCM plans and consequently also the first result in the development process. The table also identifies the following issues: (i) the logical order of the different procedures
Table 5.2 Steps in performing an RCAM analysis [26]

| Step | Procedure                             | Level | Data required                            | Results                                     |
|------|---------------------------------------|-------|------------------------------------------|---------------------------------------------|
| 1    | Reliability analysis                  | S     | Component data                           | Reliability indices                         |
| 2    | Sensitivity analysis                  | C     | Component data                           | Critical components                         |
| 3    | Analysis of critical components       | C     | Failure modes                            | Critical components affected by maintenance |
| 4    | Analysis of failure modes             | C     | Failure modes, causes of failures, etc.  | Frequency of maintenance                    |
| 5    | Estimation of composite failure rate  | C     | Maintenance frequency                    | Composite failure rate                      |
| 6    | Sensitivity analysis                  | S     | Frequency of maintenance                 | Relationship between reliability and PM     |
| 7    | Cost/benefit analysis                 | S     | Costs                                    | LCC – an RCAM plan                          |
required, (ii) the need for interaction between the system and the component levels, and (iii) an indication of the different input data needed. The following three main stages can be identified for the procedures:
• Stage 1. System reliability analysis (system-level analysis) defines the system and evaluates which components are critical for system reliability (Steps 1 and 2).
• Stage 2. Evaluation of PM and component behavior (component-level analysis) analyzes the components in detail; with the support of the necessary input data, a quantitative relation between reliability and PM measures can be defined (Steps 3–5).
• Stage 3. System reliability and cost/benefit analysis (system-level analysis) puts the understanding of component behavior gained into a system perspective. The effect of PM on components is analyzed with respect to system reliability and cost benefit for different PM strategies and methods (Steps 6 and 7).
Figure 5.6 shows a detailed logic diagram for performing an RCAM analysis. It illustrates the different stages and steps in the method as well as the systematic process for analyzing the system components and their failure causes. It includes the same three stages as Table 5.2, moving from a system, to a component, and back to a system perspective. The final result is a life cycle cost (LCC) assessment. More details on the different steps of RCAM are presented in the following parts of the chapter.
[Fig. 5.6 flowchart — Stage 1: System reliability analysis (define the reliability model and required input data; identify critical components by reliability analysis). Stage 2: Component reliability analysis, for each critical component i, failure cause k, and preventive maintenance method j (identify failure causes by failure mode analysis; define the strategy for preventive maintenance: when, what, how; model the effect of preventive maintenance on reliability; define a failure rate model; estimate the component failure rate; loop while there are more causes of failure or alternative preventive maintenance methods). Stage 3: System reliability cost-benefit analysis (compare reliability for preventive maintenance methods and strategies; identify the cost-effective preventive maintenance strategy; deduce preventive maintenance plans and evaluate the resulting model; repeat while there are more critical components).]
Fig. 5.6 Logic diagram of the RCAM method. (Adapted from [26])
The central part of RCAM is the definition of a relationship between reliability and preventive maintenance, that is, Stage 2 of the RCAM analysis discussed previously. In practice, this could be accomplished through a simple approach: an organization can build the best RCM program by analyzing the statistical performance and failure parameters of each component, identifying important indicators that are failure precursors, and setting up inspection or sensor regimes to monitor for those precursor signs in crucial components. In a more advanced approach, condition monitoring could be used for selected equipment. The condition monitoring system features intelligent data storage and data analysis. Classification methods with standardized categories are typically preferred for interpreting measurement data to identify different types of failure modes [35]. In some cases, due to a lack of labelled data, regression models can be utilized to model normal behavior [36]. Furthermore, as data volumes gradually increase, deep learning models such as recurrent neural networks are applied to model operational data of electrical equipment [37]. Based on the analysis results, health indices are proposed to prioritize the condition status of the equipment and to determine the corresponding preventive maintenance, replacement, and refurbishment [38]. The last two examples included in this chapter show results from RCAM using these advanced models with wind power applications.
5.3.4 Failure Mode and Effects Analysis (FMEA)

Failure mode and effects analysis (FMEA) is a useful tool when performing an RCM analysis. FMEA is a way to evaluate potential failure modes and their effects and causes in a systematic and structured manner. Failure modes are the ways in which something could fail; effects analysis refers to studying the consequences of those failures. The purpose of the FMEA is to take actions to eliminate or reduce failures, starting with the highest-priority ones. By itself, an FMEA is not a problem solver; it should be used in combination with other problem-solving tools. The analysis can be done in either a qualitative or a quantitative way. The basic steps in performing an FMEA could be [26]:
1. Define the system to be analyzed. A complete system definition includes definition of system boundaries, identification of internal and interface functions, expected performance, and failure definitions.
2. Identify failure modes associated with system failures. For each function, identify all the ways failure could happen. These are the potential failure modes.
3. Identify potential effects of the failure modes. For each failure mode, identify all the consequences for the system: "What happens when the failure occurs?"
4. Determine and rank how serious each effect is. The most critical pieces of equipment, those that affect the overall function of the system, need to be identified.
5. For each failure mode, determine all the potential root causes.
6. For each cause, identify available detection methods.
7. Identify recommended actions for each cause that can reduce the severity of each failure.
The FMEA can be extended with a criticality analysis (CA). The CA provides estimates of system critical failure rates based on past history and current information.
The resulting FMECA (failure mode, effects, and criticality analysis) is a reliability evaluation and design technique that examines the potential failure modes within a system in order to determine their effects on the overall system and the equipment within it. The FMECA should be initiated as soon as preliminary design information is available. The FMECA is a living document that is beneficial not only in the design phase but also during system use. As more information on the system becomes available, the analysis should be updated in order to provide the most benefit. Generic maintenance data is a valuable tool for establishing a maintenance baseline for a new system when historical information is not available.
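A common quantitative extension of FMEA/FMECA ranks failure modes by a risk priority number (RPN), the product of severity, occurrence, and detection ratings, so that actions can start with the highest-priority modes. The sketch below uses invented ratings for a few hypothetical wind turbine failure modes:

```python
# Minimal FMECA-style criticality ranking. RPN = severity x occurrence x
# detection (each rated 1-10); all ratings here are illustrative assumptions.
failure_modes = [
    {"item": "gearbox", "mode": "bearing wear",  "sev": 9, "occ": 6, "det": 5},
    {"item": "blade",   "mode": "delamination",  "sev": 8, "occ": 4, "det": 7},
    {"item": "sensor",  "mode": "signal drift",  "sev": 3, "occ": 7, "det": 4},
]

for fm in failure_modes:
    fm["rpn"] = fm["sev"] * fm["occ"] * fm["det"]

# Address failure modes starting with the highest RPN
ranked = sorted(failure_modes, key=lambda fm: fm["rpn"], reverse=True)
for fm in ranked:
    print(f'{fm["item"]:8s} {fm["mode"]:14s} RPN={fm["rpn"]}')
```

With these example ratings the gearbox bearing-wear mode ranks first, consistent with the gearbox criticality discussed later in the chapter.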
5.3.5 Life Cycle Cost Analysis

The LCC for a technical system is its total cost during its lifetime. The goal is to minimize the total lifetime cost. The total cost includes all costs associated with
planning, purchasing, O&M, and liquidation of the system. Power plant financial concerns would typically include investment, maintenance, production loss, and residual value. The LCC is different for different technical systems. For circular economy evaluations, the LCC needs to include a perspective that also incorporates waste recovery and could include new possibilities such as a second lifetime. An example of an LCC model for wind turbines is provided later in this chapter in order to allow a full understanding of the RCAM analysis. For a general detailed introduction to LCC analysis, the reader is referred to the available literature, e.g., [26].
5.3.6 Systematic Asset Management Process with RCM and RCAM

The overall goal of RCM and RCAM is to reach a systematic AM process. This means not only implementing RCM and RCAM but, most importantly, finding a process for updating the maintenance programs that result from performing RCM analysis. Figure 5.7 gives an overview of the process of performing RCM and RCAM analysis, including the needed input data. An important reflection is that RCM and RCAM are both part of the design and development process and the operation and
Fig. 5.7 An overview of the process of performing RCM and RCAM, including the needed data [26]
support phase and consequently represent a continuous process during the lifetime of the system and the equipment being considered. The RCM process starts in the design phase and continues for the life of the system, as shown in Fig. 5.7. There are several major tasks required to implement the RCM concept. These tasks can be grouped into two main categories as follows:
1. Conduct supporting analyses: RCM is an information-intensive process. Supporting analyses providing these data include the FMEA, fault tree analysis, functional analysis, and others.
2. Conduct the RCM analysis: The RCM analysis consists of using a logic tree to identify effective, economical, and required preventive maintenance.
Planning to implement an RCM approach to define the preventive maintenance for a system or product must address each of the tasks noted above. The plan must address the supporting design-phase analyses needed to conduct the RCM analysis. Based on the analysis, an initial maintenance plan is developed, consisting of the identified preventive maintenance, with all other maintenance being corrective by default. This initial plan should be updated through life exploration, during which the initial analytical results concerning frequency of failure occurrence, effects of failure, costs of repair, and so on are modified based on actual operating and maintenance experience. Thus, the RCM process is iterative, with field experience being used to improve upon analytical projections.
5.3.7 From RCM to RCAM and Quantitative Maintenance Optimization (QMO)

5.3.7.1 Approaches for Maintenance Optimization
The approaches to reaching optimal maintenance can in principle be separated into two classes: qualitative and quantitative methodologies. This chapter has introduced both the RCM and RCAM approaches. RCM is a qualitative approach, widely established and successfully applied in a variety of industries. The RCAM approach, on the other hand, is a quantitative extension of RCM that relates the preventive maintenance of equipment to system reliability and total cost, with several power system applications, and involves quantitative maintenance optimization (QMO). Figure 5.8 illustrates the interrelations between the three concepts. QMO techniques are characterized by the utilization of mathematical models which quantify both the cost and the benefit of maintenance and determine an optimum balance between the two [40]. The task in QMO is often to find the minimum total cost consisting of:
• The direct maintenance costs, e.g., for labor, materials, and administration, which increase with the intensity of maintenance actions
[Fig. 5.8: RCM alone, as a qualitative method, is limited in determining which maintenance strategies are the most cost-effective options available; QMO techniques alone do not ensure that maintenance efforts focus on the relevant components; RCAM combines the two.]
Fig. 5.8 Interrelation of RCM, QMO, and RCAM [39]
• The costs resulting from not performing maintenance as required, i.e., due to loss of production and due to additional labor and materials after component breakdowns
The main purpose of quantitative maintenance optimization is to assist management in decision-making by utilizing available data and thus reducing reliance on the subjective judgment of experts. A significant limitation of RCM as a purely qualitative method is its lack of capability to determine which maintenance strategies are the most cost-effective options available. On the other hand, a drawback of QMO techniques to be aware of is that these alone do not ensure that the maintenance efforts are targeted at the relevant components.
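The trade-off between these two cost terms can be sketched numerically: direct maintenance costs grow with maintenance intensity while expected breakdown costs shrink, and the QMO task is to find the minimum of the total. The cost functions and parameters below are illustrative assumptions, not taken from the chapter's case studies:

```python
import numpy as np

def total_cost(m, c_pm=10.0, c_breakdown=200.0, k=1.5):
    # Direct PM cost grows linearly with maintenance intensity m, while the
    # expected cost of breakdowns decays with intensity (assumed exponential).
    return c_pm * m + c_breakdown * np.exp(-k * m)

# Evaluate the total cost on a grid and locate the cost-optimal intensity
m = np.linspace(0.0, 5.0, 501)
costs = total_cost(m)
m_opt = m[np.argmin(costs)]
print(round(float(m_opt), 2))
```

For these assumed parameters the analytic optimum is m = ln(k·C_breakdown/C_PM)/k ≈ 2.27, which the grid search recovers; real QMO models replace these toy functions with the failure and LCC models discussed below.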
5.3.7.2 Mathematical Models for Maintenance Optimization and Stage 3 of RCAM
A mathematical model for maintenance strategy optimization is composed of:
• An objective function, e.g., to minimize the total maintenance costs, maximize the availability, or minimize the risk of failure for the system
• Constraints, e.g., on the maintenance budget or on minimum availability requirements
• A model of the failures and of the impact of the maintenance on the failure process
The failure and maintenance modelling is the most challenging part of the RCAM process. It has to be sufficiently realistic for the results to be accurate and sufficiently simple to be tractable and practical. Many models have been proposed in the literature, but few have been applied in practice due to complexity and/or lack of
data [40]. Common mathematical models for quantitative maintenance optimization are discussed in [41–43]. The modelling of the failures can often be separated into three types of approaches:
1. Black box models, where the failure occurrences are modelled with a probability distribution. Examples from the literature are given by [41–44].
2. Grey box models, where the deterioration process underlying the failure is modelled. This implies that the deterioration can be observed (classified or measured) directly or indirectly by relevant deterioration indicators. The mathematical model is often a regression model or a stochastic process. Examples from the literature are given by [45–48].
3. White box models, where the physical process behind the deterioration is modelled, using stress factor inputs (such as load) to estimate the deterioration of the system in time. Examples from the literature are given by [49].
While the black and grey box approaches require comprehensive statistical data to validate the models, the white box approach requires detailed knowledge of the physical processes that lead to failures. In the context of maintenance modelling, it is of particular importance to consider whether the component is repairable. If the component is non-repairable, the maintenance can be modelled by means of renewal processes [41]. The modelling of repairs is an advanced subject that is discussed in [44, 50]. When an economic objective function is considered, as is implied in Stage 3 of the RCAM method, LCC models are often used in order to take into account the time value of money. This is especially important when considering CMS maintenance strategies that have an initial investment cost. An LCC model is the sum of the discounted capital and operational expenditure over the lifetime of a system. For maintenance costs, an example of a general LCC model for a wind turbine is given by [51]:

LCC = C_{Inv} + \sum_{i=0}^{N} \left( C_{PM}^{i} + C_{CM}^{i} + C_{PL}^{i} + C_{Ser}^{i} \right) (1 + r)^{-i}

where
• C_{Inv} is the initial investment for the maintenance strategy (equipment, monitoring system).
• C_{PM}^{i}, C_{CM}^{i}, and C_{PL}^{i} are the costs for preventive maintenance, corrective maintenance, and production losses, respectively, during year i.
• C_{Ser}^{i} is the cost for service (monitoring, analysis, administration) during year i.
• N is the expected lifetime of the system (in years).
• r is the interest rate, which depends on the bank if the investment is financed by a loan, or on the expected return rate from other investments (opportunity cost) if the investment is financed by the company's own funds.
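A rough numerical evaluation of this LCC model can be sketched as follows; the cost figures, lifetime, and interest rate below are illustrative assumptions, not data from the chapter:

```python
def lcc(c_inv, yearly_costs, r):
    # LCC = C_Inv + sum over years i of the discounted yearly cost, where
    # yearly_costs[i] = C_PM + C_CM + C_PL + C_Ser for year i
    return c_inv + sum(c * (1 + r) ** (-i) for i, c in enumerate(yearly_costs))

# Illustrative figures: 100 kEUR monitoring-system investment, 20-year
# lifetime, 15 kEUR/year combined maintenance, losses, and service, 5% rate
N, r = 20, 0.05
total = lcc(100.0, [15.0] * (N + 1), r)
print(round(total, 1))
```

Discounting matters here: the same 315 kEUR of undiscounted yearly costs contributes only about 202 kEUR at a 5% interest rate, which is exactly why CMS strategies with up-front investment costs must be compared on a discounted basis.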
5.4 RCAM for Wind Power Plants: From Statistics to Optimization

This section aims to give an understanding of the maintenance optimization problem for wind power plants, with a particular focus on maintenance strategy optimization by means of the RCAM concept. Results are included from several previous studies, starting with a survey of failures in wind power systems [52], which is widely cited in wind power research and provides an important input for the reliability evaluation of wind turbines. The section includes two applications for maintenance optimization which have previously been published in [39].
5.4.1 The Need for Maintenance Optimization

Maintenance of wind power plants, especially offshore facilities, is known to be extensive and costly. This is due to frequent and unexpected failures, spare part and equipment availability, and weather conditions that may lead to long downtimes. In order to meet the SDGs, there is, however, a need for the rapid development of electricity generation from renewable energy, with huge ongoing and planned investments in wind power including onshore, offshore, and floating structures. However, the uncertainties of the economic returns of the projects (due to maintenance costs and production incomes) may slow down the investment rate necessary to reach the SDG target. There is thus a need to implement models like RCAM in order to support the development of a sustainable energy system. A failure in a wind turbine leads to direct costs for the spare parts, maintenance equipment, and maintenance staff required for correcting the failure, as well as indirect costs due to production losses. In order to reduce the present high maintenance costs, the maintenance strategies and organization need to be clearly defined, implemented, and optimized. The maintenance optimization can be separated into interconnected tasks:
• Definition of suitable maintenance strategies for the components of the system according to their failure modes, probabilities, and consequences, studied in [53–55]
• Scheduling of the maintenance activities, studied in [56–58]
• Building a support organization, i.e., spare part and staff management, as well as investment decisions for transportation and maintenance equipment, studied in [59–61]
Maintenance optimization aims at determining the optimal balance between the maintenance costs (investments, intensity of implemented maintenance) and the benefits with respect to the energy income, in order to maximize the profit over the lifetime of the wind power system.
[Fig. 5.9 bar chart: average failure frequency (f/yr, up to 0.06) and downtime (up to 280 h) per subsystem: electrical system, sensors, blades/pitch system, hydraulic systems, control system, gearbox, yaw system, generator, entire unit, structure, mechanical brakes, main shaft and bearing, hub]
Fig. 5.9 Average failure frequency and downtime per subsystem and year in wind turbines in Sweden in the period 1997–2005. (Adapted from [39] with data from [52])
5.4.2 Reliability of Wind Turbines

A. Definition of Reliability and Availability
Reliability, as discussed previously, can be defined as the ability of a component or system to perform required functions under stated conditions for a stated period of time [62]. One commonly used way to estimate reliability performance is by means of the yearly failure frequency and the associated average downtime per component or subsystem failure, as shown in Fig. 5.9. Together these define the availability (see also the definitions in the previous section on reliability performance and availability).
B. Reliability Data Sources
An example of the failure frequency distribution per subsystem for a population of wind turbines in Sweden is shown in Fig. 5.9. The data shown are based on manual reporting [52]. In a similar way, data have been collected by institutes in Germany [55, 63–66] and Finland [67]. Discrepancies can be observed in the average number of failures per year and turbine between the different surveys (e.g., 0.8 f/year in Sweden, 1.5 f/year in Finland, and 2.4 f/year in Germany). This can be attributed to differences in the populations (age, size, wind turbine model, environment) as well as differences in the quality of reporting. Failure data have usually been reported at a high component structure level, with the cause of the failures. However, the data do not provide information on the failure modes, i.e., the way a system fails to perform its function, or the maintenance activity undertaken to correct the failure. This is an important limitation of the data available for maintenance optimization purposes. An example of an investigation of the reliability of multi-MW wind turbines is given by [68], which includes a detailed analysis of failure modes.
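From a failure frequency and an average downtime per failure, an approximate availability follows as one minus the expected annual downtime fraction. The per-subsystem figures below are illustrative, chosen only to match the order of magnitude of the survey data in Fig. 5.9:

```python
HOURS_PER_YEAR = 8760

def availability(failures_per_year, downtime_hours_per_failure):
    # Expected annual downtime expressed as a fraction of the year
    unavailability = failures_per_year * downtime_hours_per_failure / HOURS_PER_YEAR
    return 1.0 - unavailability

# Illustrative subsystem data: (failure frequency [f/yr], downtime [h/failure])
subsystems = {"gearbox": (0.045, 260.0), "sensors": (0.054, 35.0)}
for name, (f, d) in subsystems.items():
    print(f"{name}: availability = {availability(f, d):.4f}")
```

Note how a subsystem with a moderate failure frequency but long per-failure downtime (the gearbox) can dominate unavailability, which is the criticality pattern discussed under "Component Reliability" below.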
C. Component Reliability
Sensors and pitch control systems in wind turbines have, on average, the highest failure rates. It has also been found that larger wind turbines have higher failure frequencies due to their more sophisticated control systems and electrical components [66]. Failures of large and heavy components, such as the gearbox, the generator, or the main shaft, lead to long downtimes. High failure frequencies coincide with considerable downtimes in the case of the gearbox, making this subsystem especially critical for the technical availability of the whole wind turbine. High gearbox failure rates are also reported in a recent survey, which states that the average lifetime of the gearboxes is no longer than 6–8 years [69].
D. Reliability Through Time
An average failure frequency over the lifetime does not carry any time information (i.e., whether the failure has a higher probability of occurring during specific periods of the life cycle of a component). For this reason, average failure rates can only be used to assess the average risk and criticality of component failures; they are often not suitable for the purpose of maintenance strategy optimization, e.g., if the components are subject to aging. The development of the reliability of wind turbines over time has recently been discussed in the literature, e.g., in [70] for a large population of wind turbines and in [71] for specific components in sub-MW wind turbines. An overall tendency is that the average failure rate of wind turbines decreases with the operational age and production age, due to the continuous improvement of component design [65]. However, age-related failures (probably due to wear) can be observed for specific components, such as the gearbox and generator [70].
5.4.3 Examples of Applications to Wind Power Systems

This section presents two applications of maintenance strategy optimization for wind turbines [39]. The first application focuses on the cost/benefit of a vibration CMS for the drive train. The second application compares different maintenance strategies for the blades. The applications focus on critical components according to Stage 1 of the RCAM method: components whose failure leads not only to high direct maintenance costs for spare parts, transportation, and maintenance equipment but also to large production losses (Fig. 5.9). The models described in Sections A and B are cost/benefit analyses of maintenance strategies in Stage 3 of the RCAM method. In both cases, due to a lack of reliability and deterioration data, plausible assumptions were used for certain model parameters based on information from the literature and expert discussions. Sensitivity analyses were then used in order to assess the effect of the model parameters on the resulting total maintenance costs.
A. Cost/Benefit Analysis of a Vibration CMS for the Drive Train
The components of the drive train in a wind turbine are the generator, the gearbox, and the main shaft and bearing. In order to minimize the cost of maintenance for the drive train, a vibration CMS can be used to identify incipient failures and plan preventive repair or replacement. Sensors are mounted at specific locations on the main bearing, gearbox, and generator to monitor the vibration content, i.e., the spectrum, of the components and their parts. The type of sensor depends on the rotational frequency of the parts: accelerometers are common for high frequencies, while velocity and position transducers are used for middle and low frequencies, respectively. An incipient failure will modify the spectrum of the components, which can be identified automatically if the level of vibration exceeds defined threshold limits. Vibration analysts are generally required to determine what the failure is. The objective of the model is to estimate the cost/benefit of the vibration CMS as a function of:
• The failure probability for the components of the drive train (gearbox, generator, and main bearing)
• The efficiency of the CMS and its benefits in damage reduction
• The advantage in maintenance planning
• The investment discount rate
The model consists of an LCC model where failures over the lifetime of the wind turbine are modelled according to a renewal process. The failures are assumed to follow a Weibull probability distribution (black box approach). At failure, the components are replaced or repaired depending on the severity of the failure. The analysis was performed analytically for the sensitivity analysis, by estimating the number of renewals per year using a recursive approach based on [43], and by means of Monte Carlo simulation for the risk analysis.
The CMS is assumed to have an efficiency of 90%, which means that 90% of the failures can be corrected with preventive repair, preventing production losses and lowering the cost of consequential damage by 50%. Moreover, the cost of the CMS was assumed to be €100,000 according to [51], which is high compared to solutions proposed today and thus a conservative assumption regarding the cost/benefit. An example of the results from this investigation is shown in Fig. 5.10. The gearbox is the most critical component of the drive train with respect to its impact on maintenance costs. The reliability of the gearbox has a large impact on the economic benefit of the CMS. Even under the conservative assumption made regarding the CMS investment cost, the CMS is expected to be beneficial if the MTTF of the gearbox is lower than 18 years, which is clearly the case for the majority of wind turbine gearboxes in operation today. Moreover, it could be shown that the use of the CMS reduces the risk of high maintenance costs; see [54] for more details.
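The renewal-process reasoning behind this cost/benefit analysis can be sketched with a simple Monte Carlo simulation: Weibull-distributed failure times (black box approach), renewal at each failure, and a CMS that detects 90% of failures and halves the consequential cost, as assumed above. The cost figures and the Weibull shape are illustrative, not the published model's parameters:

```python
import numpy as np

def simulate_costs(mttf_years, lifetime=20, c_failure=300.0, cms=False,
                   cms_cost=100.0, efficiency=0.9, reduction=0.5,
                   n_sim=20000, seed=1):
    # Weibull failure times (shape 2 -> aging); after each failure the
    # component is renewed, so failure times accumulate along each history.
    rng = np.random.default_rng(seed)
    shape = 2.0
    scale = mttf_years / 0.8862  # mean = scale * gamma(1 + 1/2) ~= scale * 0.8862
    total = np.full(n_sim, cms_cost if cms else 0.0)
    t = rng.weibull(shape, n_sim) * scale
    while np.any(t < lifetime):
        active = t < lifetime          # failures occurring within the lifetime
        cost = np.where(active, c_failure, 0.0)
        if cms:
            # With probability `efficiency` the CMS catches the incipient
            # failure and the consequential cost is cut by `reduction`
            detected = rng.random(n_sim) < efficiency
            cost = np.where(active & detected, c_failure * (1 - reduction), cost)
        total += cost
        t += rng.weibull(shape, n_sim) * scale
    return total.mean()

# Expected benefit of the CMS for a gearbox MTTF of 10 years (cost units kEUR)
benefit = simulate_costs(10.0, cms=False) - simulate_costs(10.0, cms=True)
print(benefit > 0)
```

With these assumptions the CMS pays for itself at a 10-year MTTF, qualitatively matching the Fig. 5.10 finding that the CMS is beneficial for gearbox MTTFs below roughly 18 years.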
5 Reliability-Centered Asset Management with Models for Maintenance. . .

Fig. 5.10 Expected economic benefit of vibration CMS (in thousand Euros) for a wind turbine as a function of the Mean Time to Failure (MTTF) for the gearbox (in years) [54]
B. Condition-Based Maintenance Optimization for Blades
Another component that is critical for the reliability, availability, and profitability of wind turbines is the rotor, today usually consisting of three rotor blades. Wind turbine blades are usually inspected once a year at service maintenance and on additional occasions, e.g., after a lightning strike on a turbine equipped with lightning detectors. Condition monitoring can be used to detect cracks and delamination, either at inspection or continuously. Inspection monitoring devices consist of infrared or ultrasound sensors installed on an inspection robot that scans and examines the inner material of the blade [72]. Online condition monitoring can be achieved by inserting fiber-optical sensors, whose emission frequency is modified by the measured properties, i.e., strain or temperature, mounted on an optical line in the material of the blades [72]. The objectives of the proposed maintenance optimization model are the following:
• To compare the expected maintenance costs of three maintenance strategies: (a) visual inspection, (b) inspection with a condition monitoring technique, and (c) online condition monitoring
• To optimize the inspection interval for the inspection strategies
The model consists of an LCC model for the lifetime of the wind turbine. A Markov chain is used for the deterioration model (grey-box approach) and for the maintenance modelling.
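As a sketch of how such a grey-box model works, the snippet below simulates a three-state Markov deterioration chain (intact → crack → fault) with periodic inspections; a detected crack triggers a cheap preventive repair, an undetected one ends in an expensive corrective replacement. All transition probabilities and costs are invented placeholders, not the values used in [56].

```python
import random

def blade_lcc(inspect_interval, p_detect, lifetime=20.0, dt=1 / 12,
              p_crack=0.02, p_fault=0.08,
              c_inspect=2.0, c_repair=30.0, c_fault=100.0, n_sims=10_000):
    """Expected maintenance cost (thousand Euros) of one blade over the
    turbine lifetime, simulated on a monthly time grid."""
    total = 0.0
    for _ in range(n_sims):
        state, cost, t, next_insp = "OK", 0.0, 0.0, inspect_interval
        while t < lifetime:
            t += dt
            if state == "OK" and random.random() < p_crack:
                state = "CRACK"                      # crack initiation
            elif state == "CRACK" and random.random() < p_fault:
                state, cost = "OK", cost + c_fault   # corrective replacement
            if t >= next_insp:
                cost += c_inspect
                if state == "CRACK" and random.random() < p_detect:
                    state, cost = "OK", cost + c_repair  # preventive repair
                next_insp += inspect_interval
        total += cost
    return total / n_sims

# Frequent, reliable inspections trade inspection cost against fault cost.
print(blade_lcc(inspect_interval=0.5, p_detect=0.9))
print(blade_lcc(inspect_interval=3.0, p_detect=0.9))
```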
L. B. Tjernberg

Fig. 5.11 Expected total maintenance costs (in thousand Euros) of one blade over the lifetime of a wind turbine for the maintenance strategies condition monitoring inspection, visual inspection, and condition monitoring system, as a function of the inspection interval (in years) [56]
The model takes into account sudden failures, such as those caused by lightning impact. The analysis has been performed by simulating the maintenance over the finite lifetime horizon. The proposed optimization model is presented in further detail in [56]. The central result of this analysis is shown in Fig. 5.11, where the expected maintenance costs for the different maintenance strategies are plotted as a function of the inspection interval. Note that in the case of a condition monitoring system, the resulting cost is independent of any inspection interval. It can be observed that the optimal strategy among the alternatives (a–c) named above is to install a condition monitoring system (c), followed by inspection with condition monitoring techniques every 6 months (b) and, as the least cost-effective solution, visual inspection of the blades every 3 months (a). The benefit of the different maintenance strategies depends, however, strongly on the dynamics of the failure process (see the P-F curve in Fig. 5.5), as can be observed in Fig. 5.12. Figure 5.12 presents the expected maintenance costs for the three maintenance strategies a–c (with optimal inspection intervals in cases (a) and (b)) as a function of the crack time to failure, i.e., the time from when a crack begins to form until it leads to a fault. The shorter this time is, the less probable the detection of a crack by offline inspection and the more beneficial condition monitoring systems prove to be. This is due to the short inspection intervals needed for inspection strategies to be effective and the resulting high inspection costs.
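The qualitative argument — a short P-F window makes offline inspection unlikely to catch the crack — can be made concrete with a small probability sketch. It assumes, purely for illustration, that the crack onset is uniformly distributed over the inspection cycle and that each inspection inside the window detects the crack with a fixed probability.

```python
def p_crack_detected(pf_interval, inspect_interval, p_detect=0.9):
    """Probability that at least one periodic inspection both falls inside
    the P-F window (crack time to failure) and detects the crack."""
    # Number of inspections guaranteed to fall inside the window, plus a
    # chance `frac` of one more, for a uniformly distributed crack onset.
    n_sure = int(pf_interval // inspect_interval)
    frac = (pf_interval % inspect_interval) / inspect_interval
    p_miss = (1 - p_detect) ** n_sure * (1 - p_detect * frac)
    return 1 - p_miss

# Detection collapses as the crack time to failure shrinks (cf. Fig. 5.12).
for pf in (2.0, 1.0, 0.5, 0.25):
    print(pf, round(p_crack_detected(pf, inspect_interval=0.5), 3))
```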
Fig. 5.12 Expected maintenance costs (in thousand Euros) of one blade over the lifetime of a wind turbine for the maintenance strategies condition monitoring inspection, visual inspection, and condition monitoring system, as a function of the crack time to failure (in years) [56]
5.5 A Reliability-Centered Maintenance Analysis of Wind Turbines

This section presents results from an RCM analysis performed for two different wind turbines, the Vestas V44-600 kW and the V90-2 MW. The RCM analysis has been carried out within a workgroup involving a wind turbine owner and operator, a maintenance service provider, a provider of condition-monitoring services, and a wind turbine component supplier, as well as researchers in academia. Selected results are included in this section; results from the RCM study have previously been published in [73]. The RCM study forms the basis for the development of quantitative models for maintenance strategy selection and optimization. Taking into account both failure statistics and expert opinion, the analysis focuses on the most critical subsystems with respect to failure frequency and consequences. The analysis provides the most relevant functional failures and their failure causes, as well as suitable measures to prevent either the failure itself or critical secondary damage. In this chapter, results for the subsystems gearbox, generator, and rotor current control/converter are included. Challenges identified by the RCM workgroup that are considered to impede cost-effective O&M of wind turbines are discussed together with proposed solutions. Standardized and automated collection of in-depth failure and maintenance data, enhanced training of maintenance personnel, and the utilization of quantitative methods for decision
support in wind turbine maintenance are identified as important steps to improve the reliability, availability, and profitability of wind turbines.
5.5.1 Introduction to the RCM Analysis

The RCM analysis presented in this section is an essential part of the implementation of RCAM. Its purpose in the context of RCAM is to reveal the components, the failure modes, and the major underlying failure causes that are most relevant for system reliability and availability, and to identify suitable preventive maintenance measures. In this way, the RCM study forms the basis of RCAM: it ensures that the subsequent development and application of mathematical models focus on the practically relevant items and failures. The RCM analysis covers two wind turbine models, described in the following sections:
1. The Vestas V44-600 kW, a turbine with an early, limited variable-speed capability, the design of which was state of the art in the mid-1990s. There are 35 turbines in operation in Sweden and more than 300 turbines still operating worldwide.
2. The Vestas V90-2 MW, a variable-speed wind turbine of contemporary design, with 124 turbines in operation in Sweden and approximately 2800 delivered worldwide [74, 75].
The RCM study has been carried out in a workgroup with representatives from the owner and operator of wind turbines of these types, the maintenance service provider, the provider of condition-monitoring services, and the wind turbine component supplier, as well as the research group leading the study. The combination of practical experience and theoretical expertise reflected in the makeup of the group is inherent to the RCAM method and is considered to be of crucial importance for the development of maintenance management and decision support tools for wind turbine O&M. The parallel analysis of the two wind turbines V44-600 kW and V90-2 MW has been chosen to account for the different reliability characteristics of turbines originating from different generations of technology (see [52, 76]) but also for the potentially different applicable preventive maintenance measures.
It is important to note that the turbines have been selected for analysis not due to any abnormal occurrence of failures but because they are of particular interest to the project partners and, in the case of the V44-600 kW, because of the available experience with O&M of this turbine type in the RCM workgroup.
5.5.2 Implemented RCM Process

The RCM analysis summarized in this section follows the methodology of a study described in [26], which combined statistical analysis and practical experience. In addition, it is based on the guideline given in [33]. The implemented limited-scope RCM analysis has covered the following steps:
• System selection and definition
• Identification of system functions and functional failures
• Selection of critical items
• Data collection and analysis
• FMECA including failure causes and mechanisms of the dominant failure modes
• Selection of maintenance actions
The determination of maintenance intervals and the comparative analysis of preventive maintenance measures by means of mathematical models are not considered in this chapter. The level of analysis moves from the system level (whole wind turbine) to the subsystem level (e.g., electrical system, gearbox) for which failure data is available, and further on to selected critical components (e.g., resistor bundle in the rotor current control unit, gearbox bearings) of these subsystems. The range of analysis has been limited to the most relevant subsystems of each turbine with respect to failure frequency and resulting downtime, as well as their dominant failures. The focus has been on providing an in-depth understanding of the functions, main failure modes, failure consequences, failure causes, and the underlying failure mechanisms on the one hand, and suitable maintenance measures to prevent these on the other hand. The consequences of failure have been assessed for four criteria:
1. Safety of personnel
2. Environmental impact (in a wind turbine, e.g., discharge of oil or glycol)
3. Production availability (i.e., the impact on electricity generation)
4. Material loss (including primary damage to the component itself but also secondary damage to other equipment)
5.5.3 System Description

In the following, the two wind turbine models that are the subject of the RCM analysis are described together with their system-level function and the functional failures of interest in this context.
Fig. 5.13 Structure of the wound rotor asynchronous generator with OptiSlip technology [77] (labelled elements: stator, rotor, rotor current I_Rotor, reference power P_ord, current transformer, PWM, IGBT, external resistance)

5.5.3.1 The V44-600 kW Wind Turbine
The Vestas V44-600 kW was launched in 1996. It is an upwind turbine with three blades and an electrically driven yaw system. Its rotor has a diameter of 44 m and a weight of 8.4 t. The rated rotational speed is 28 rpm. A hydraulically actuated pitch system is used for speed control, optimization of power production, and for start-up and stop (aerodynamic braking) of the turbine. Additional braking functionality is provided by a disc brake located on the high-speed side of the gearbox. During operation, the main shaft transmits the mechanical power from the rotor to the gearbox, which has either a combined planetary-parallel design or a parallel-shaft design (as in the case of the V44 turbine analyzed in this study). The gearbox and the generator are connected by a Cardan shaft. The generator is an asynchronous 4-pole generator with an integrated electronically controllable resistance of the wound rotor (the so-called OptiSlip technology, see Fig. 5.13), which requires neither brushes nor slip rings. The variability of the rotor resistance is provided by the Rotor Current Control unit (RCC), which is bolted to the non-drive end of the generator rotor and thus permanently rotates during wind turbine operation. It consists of a microprocessor unit, to which the control signal is optically transmitted, a power electronics unit, and a resistor bundle. The rotor resistance is varied by short-circuiting the resistor bundle at varying frequency by means of IGBTs in the power electronics unit. This OptiSlip technology allows the rotational speed of the generator to vary between 1500 rpm (idle) and 1650 rpm. The generator stator is connected to the electric power grid through a thyristor unit, which limits the cut-in current of the asynchronous generator during connection to the grid and smoothly reduces the current to zero during disconnection from the grid.
The reactive power required by the generator is partially provided by a capacitor bundle at the bottom of the tower, the power factor correction (phase compensation) unit. The main function of the V44 system is the conversion of kinetic wind energy to electric energy which is provided to the electric power grid. More specifically, the system function is to provide up to 600 kW electric power at 690 V
and 50 Hz to the electric power grid, in an operating temperature range of −20 °C to +40 °C and at wind speeds of 4–20 m/s. Failures on the system level are both a complete and a partial loss of the energy conversion capability of the turbine. The wind turbine system has four operating states (RUN, PAUSE, STOP, EMERGENCY). The turbine can be connected to the electric power grid and fulfil the system function defined above only in the operating state RUN.

Fig. 5.14 Structure of the V90-2 MW system [77]: 1 Hub controller, 2 Pitch cylinders, 3 Blade hub, 4 Main shaft, 5 Oil cooler, 6 Gear box, 7 Mechanical disc brake, 8 Service crane, 9 VMP-Top controller with converter, 10 Ultrasonic wind sensors, 11 High voltage transformers, 12 Blade, 13 Blade bearing, 14 Rotor lock system, 15 Hydraulic unit, 16 Machine foundation, 17 Yaw gears, 18 Composite disc coupling, 19 OptiSpeed generator, 20 Air cooler for generator
5.5.3.2 The V90-2 MW Wind Turbine
Figure 5.14 shows the structure of the Vestas V90-2 MW system. The first turbines of this type were installed in 2004. Like the V44, the V90-2 MW is an upwind turbine with three blades and active, electrically driven yaw. Its rotor has a diameter of 90 m and a weight of 38 t. The nominal rotor speed of 14.9 rpm is about half the rotor speed of the V44. The so-called OptiTip pitch control system, with individual pitching capability for each blade, continuously adapts the blade angle to the wind conditions and in this way provides optimum power output and low noise levels. In addition, it provides for speed control, turbine start-up, and stopping by aerodynamic braking. Similarly to the V44, an additional disc brake is located on the high-speed shaft. Unlike the V44 turbine, all V90-2 MW systems apply gearboxes with one planetary and two parallel stages, from which the torque is transmitted to the generator through a composite coupling. A major difference from the V44 system
is the generator concept: the V90-2 MW contains a 4-pole doubly fed asynchronous generator (DFIG) with a wound rotor. A partially rated converter controls the current in the rotor circuit of the generator, which allows control of the reactive power and provides for a smooth connection to the electric power grid. In particular, the applied DFIG concept (the so-called OptiSpeed technology) allows the rotor speed to vary by 30% above and below synchronous speed. The electrical connection between the power converter and the generator rotor requires slip rings and carbon brushes. The generator stator is directly connected to the electric power grid [77–80]. The system function of the V90-2 MW is to provide up to 2 MW of electric power at 690 V and 50 Hz to the grid, in a standard operating temperature range of −20 °C to +30 °C and at wind speeds of 4–25 m/s. The system-level failures of interest are (1) the complete loss of the energy conversion capability or (2) the partial loss of the energy conversion capability of the turbine. As with the V44, the V90-2 MW wind turbine system has four operating states, among which RUN is the only state allowing connection to the electric power grid. For the RCM analysis of the V90-2 MW, it is important to note that the series is not fully consistent, i.e., small changes in design have been implemented in every year of production [59].
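As a quick sanity check of the quoted speed range (a standard machine calculation, not taken from the chapter's references), a 4-pole machine on a 50 Hz grid has a synchronous speed of 1500 rpm, so a ±30% band corresponds to roughly 1050–1950 rpm:

```python
# Synchronous speed of a 4-pole machine on a 50 Hz grid, in rpm,
# and the +/-30% operating band enabled by the DFIG (OptiSpeed) concept.
f_grid_hz, poles = 50.0, 4
n_sync = 120 * f_grid_hz / poles        # = 1500 rpm
n_min, n_max = 0.7 * n_sync, 1.3 * n_sync
print(n_sync, n_min, n_max)
```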
5.5.4 Subsystem Selection Based on Statistics and Practical Experience

In the RCM study, failure statistics of the investigated wind turbines have been used in combination with expert judgment in order to prioritize the wind turbine subsystems for detailed analysis. The failure data used for the statistical analysis covers the failures of 32 V44-600 kW turbines in the period 1996–2005 [81]. Statistical data analysis for the V90-2 MW system has been carried out based on data from [82]; it includes failures of 57 V90-2 MW turbines located in Germany, from the period 2004–2008. In order to also include the expert opinion of the RCM workgroup members in the identification of the most critical subsystems, all group members with professional experience in wind turbine O&M were asked to fill in questionnaires and in this way provide a subsystem ranking with respect to failure frequency and downtime per failure. Table 5.3 summarizes the results of the questionnaire evaluation as well as of the statistical failure data analysis. Both the failure frequency and the downtime resulting from a failure are relevant for the criticality assessment of components. Therefore, it was found advantageous to combine these two measures by multiplication, resulting in the average downtime per wind turbine and year related to failures of a specific subsystem (see also [83]):

$$
t_{\text{lost}} = \frac{\sum_{i=1}^{I} d_i}{\sum_{i=1}^{I} X_i T_i}
$$
Table 5.3 Criticality of wind turbine subsystems with respect to failure frequency and downtime, according to expert judgment and statistical data analysis [73]

V44-600 kW (ranks 1–4):
  Expert judgment, failure frequency: Gearbox, Generator, Hydraulic system, Rotor
  Expert judgment, downtime per failure: Gearbox, Generator, Yaw system, Rotor
  Data analysis, downtime per year and turbine: Electrical system, Generator, Control system, Generator

V90-2 MW (ranks 1–4):
  Expert judgment, failure frequency: Gearbox, Generator, Converter, Hydraulic system
  Expert judgment, downtime per failure: Gearbox, Generator, Converter, Rotor
  Data analysis, downtime per year and turbine: Generator incl. converter, Rotor, Drivetrain incl. gearbox, Control and protection system
with d_i being the downtime due to failures of a subsystem in time interval i, X_i the number of wind turbines reporting to the database in time interval i, and T_i the duration of time interval i. Based on the results of both the failure data analysis and the questionnaire assessment, the subsystems (a) gearbox, (b) generator, (c) electrical system, (d) hydraulic system, and (e) rotor were chosen for in-depth analysis in the RCM study. In spite of the significant contribution of the control system to the average downtime per wind turbine and year, it was decided not to include this system in the RCM analysis because its failures can hardly be influenced by means of preventive maintenance.
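The combined criticality measure defined above is a simple ratio of sums; a direct implementation (with invented example numbers, not the study's data) looks as follows:

```python
def downtime_per_turbine_year(downtimes, turbines, durations):
    """t_lost = sum(d_i) / sum(X_i * T_i): total subsystem downtime divided
    by the cumulated turbine-years of observation."""
    turbine_years = sum(x * t for x, t in zip(turbines, durations))
    return sum(downtimes) / turbine_years

# Two hypothetical reporting intervals of one year each: 120 h of gearbox
# downtime logged by 30 turbines, then 200 h logged by 32 turbines.
print(downtime_per_turbine_year([120, 200], [30, 32], [1, 1]))
```

With these placeholder numbers the measure evaluates to 320 h over 62 turbine-years, i.e., about 5.2 h of downtime per turbine and year.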
5.5.5 Results and Discussion

A comprehensive presentation of the results obtained during the RCM study is beyond the scope of this chapter. The presentation is thus limited to a tabulated compilation of selected analysis results for the three most critical subsystems identified above: the gearbox, the generator, and the converter (V90)/rotor current control (V44) as critical parts of the electrical system. Due to the broad similarity of failure modes, mechanisms, and applicable countermeasures found for the V44-600 kW and the V90-2 MW systems, the results for the two turbines are presented in a single table for each analyzed subsystem. Table 5.4 summarizes the results of the RCM analysis of the gearbox. Bearings, gearwheels, and the lubrication system are identified as the components with the highest relevance for gearbox failure. Failure of shafts in the gearbox is considered to occur only as secondary damage and has thus not been included in the RCM analysis. Gearbox failure can have severe consequences: in the case of complete demolition, parts of the gearbox can constitute a risk for personnel. Oil spills
of up to 120 l (V44) or 300–400 l (V90) of the lubrication oil contained in the respective gearboxes can cause environmental impact. Gearbox failure is among the failures resulting in the longest average downtime and thus has a strong impact on production availability, and it can cause severe secondary damage, e.g., in the main bearing or the rotor shaft. The central results for the generator subsystem are compiled in Table 5.5. Generator failure usually does not constitute a risk to personnel safety or the environment but often implies a significant loss of production availability and costly down-tower repair. Secondary damage to other subsystems can occur, e.g., in the case of excessive vibrations from damaged generator bearings or strong heat release. Table 5.6 summarizes selected analysis results for the subsystems providing rotor current control: these are the RCC unit in the V44-600 kW (being, according to the statistical analysis, the most frequently failing component in the category "electrical system"; see Table 5.3) and the partially rated converter in the V90-2 MW, respectively. The consequences of RCC failure in the V44 are usually limited to production losses: while failure of the power electronics unit or the microprocessor unit still allows operation at a reduced power of 300 kW, failure of the resistor unit fully prohibits turbine operation. In the case of the V90-2 MW system, failure of the converter results in a full loss of the power generation capability. As the results in Tables 5.4, 5.5, and 5.6 show, a particularly frequent cause of failure is vibration. Excessive vibration is often a result of bearing damage. Among the variety of proposed preventive measures, those aiming at the prevention or early detection of bearing damage are thus considered to be especially cost-effective.
In the case of the gearbox, early detection of impending bearing failure can prevent severe secondary damage and enable up-tower repair instead of significantly more expensive removal and external repair. Moreover, in the case of a necessary replacement, the loss of the residual value of the gearbox (e.g., due to internal shaft fracture) can be avoided. Suitable measures to detect impending bearing failure are vibration-based CMS and temperature measurements. A major difference between vibration and temperature monitoring is that vibration CMS usually provide a pre-warning time (P-F interval) in the range of several weeks to months, while temperature-based detection only provides a pre-warning time in the range of hours to days. On V44 turbines, CMS are usually not installed; the "Elida" turbine of Gothenburg Energy, equipped with a CMS prototype from SKF since 2001, is an exception in this respect. On V90-2 MW turbines, vibration-based CMS are not part of the standard equipment provided by the wind turbine manufacturer but are in practice installed in virtually all turbines of this type. However, vibration monitoring and vibration-based diagnosis of planetary stages in gearboxes are at present still challenging, and the improvement of condition monitoring technology for this purpose is the subject of intensive development activities today. An interesting finding has been obtained with respect to present practices in wind turbine maintenance: according to the RCM workgroup, experience has shown that the better the schedules and plans for service maintenance are followed, the more reliably a wind turbine operates. This apparently trivial statement suggests that the present service intervals of 6 months are appropriate. Moreover, in a variety
Table 5.4 Selected analysis results for the gearbox (statements valid for the V44-600 kW only are marked with index "1"; index "2" indicates those limited to the V90-2 MW) [73]

Subsystem: Gearbox

Item: All gearbox components
  Function: Transmission of torque from the rotor to the generator shaft, providing the desired conversion ratio for speed and torque
  Failure mode: Loss of torque transmission capability
    Failure cause: Manufacturing or installation deficiencies
    Failure mechanism: Increased friction or inappropriate high-cycle loading which leads to damage
    Failure characteristic: Damage accumulating
    Proposed task, PM action: Training of technicians for improved quality in manufacturing, installation, and repair; alignment check; temperature and vibration monitoring (for enhanced planning, secondary damage prevention)

Item: Bearings
  Function: Keep shafts in position while allowing rotary motion at minimal friction
  Failure mode: High friction
    Failure cause: Overloading, often due to design or installation deficiencies
      Failure mechanism: Increased friction due to plastic deformations, high temperature → material fatigue
      Failure characteristic: Damage accumulating
      Proposed task, PM action: Endoscopy, temperature, and vibration monitoring for early damage detection (not preventing bearing damage)
    Failure cause: Inappropriate lubrication (insufficient lubrication, over-lubrication, wrong lubricant)
      Failure mechanism: Increased friction → high temperature → surface damages
      Failure characteristic: Damage accumulating
      Proposed task, PM action: Oil analysis; measurement of oil pressure and temperature; online particle counting; follow lubrication scheme
    Failure cause: Moisture in oil
      Failure mechanism: Corrosion by oxidation; reduction of steel strength possibly due to H2 ingression → surface failure
      Failure characteristic: Damage accumulating
      Proposed task, PM action: Filter dryer in gearbox casing to dehumidify air inflow; oil analysis; online moisture detection

Item: Gearwheels
  Function: Transmit mechanical power while converting speed and torque at the desired ratio
  Failure mode: Pitting
    Failure cause: Overloading
      Failure mechanism: High local stress → surface fatigue, damage to tooth surface → high friction
      Failure characteristic: Damage accumulating
      Proposed task, PM action: Alignment check; avoid emergency stops with mechanical brake
    Failure cause: Metal particles in lubricant (e.g., a consequence of overloading)
      Failure mechanism: Results in high friction and particle release
      Failure characteristic: Damage accumulating
      Proposed task, PM action: Oil analysis; online particle counting; particle indication with magnet; online magnetic filtering
    Failure cause: Loading during standstill
      Failure mechanism: False brinelling
      Failure characteristic: Damage accumulating
      Proposed task, PM action: Avoid loading during standstill
  Failure mode: Scuffing
    Failure cause: Insufficient lubricant film; load exceeding the scuffing load capacity of the oil
      Failure mechanism: Metal-to-metal contact, welding and tearing of tooth surface → high friction, particle release
      Failure characteristic: Damage accumulating
      Proposed task, PM action: Correct oil type; oil analysis to ensure oil quality, particularly additive content; avoid emergency stops with mechanical brake
  Failure mode: Teeth breakage
    Failure cause: Overload; consequence of material removal by pitting or scuffing; fatigue
      Failure characteristic: Random failure
      Proposed task, PM action: Alignment check; measures against pitting and scuffing as above; vibration CMS for early damage detection

Item: Lubrication system
  Function: Supply lubricant to gearwheels and bearings at the right temperature and viscosity; filter lubricant; cooling of the gearbox
  Failure mode: Loss of lubrication, filtering, or cooling function
    Failure cause: Too high oil temperature, mostly due to a defective thermostat
      Failure mechanism: Insufficient lubricant film thickness (with consequences as above)
      Proposed task, PM action: Control system modifications; introduction of warning thresholds for oil temperature instead of the present alarm levels
    Failure cause: Too low oil temperature, due to a failed heating system or insufficient mixing/circulation
      Failure mechanism: Too low viscosity → insufficient lubrication
      Proposed task, PM action: Oil temperature measurements; avoid start-up below a threshold value (preventing secondary damage)
    Failure cause: Insufficient oil filtration due to a blocked inline filter → filter by-passed until replacement
      Failure mechanism: (See consequences of particles in oil)
      Failure characteristic: Risk increasing with filter age
      Proposed task, PM action: Additional filter stages (standard inline filter in V44; coarse inline and fine offline filter in V90); online particle counting; differential pressure measurement over filters; bleeding point for air in filter enclosure
    Failure cause: Loss of oil, low oil pressure (e.g., due to leakage at shaft seals or badly installed filters; sudden oil hose rupture)
      Failure characteristic: Random failure or increasing failure frequency
      Proposed task, PM action: Visual control at service; pressure measurement (standard in V90-2 MW); training of technicians to improve filter installation; filter design for foolproof mounting
Table 5.5 Selected analysis results for the generator (statements valid for the V44-600 kW only are marked with index "1"; index "2" indicates those limited to the V90-2 MW) [73]

Subsystem: Generator

Item: All generator components
  Function: Convert shaft power to electric power; provide up to 600 kW¹/2 MW² at 690 V and 50 Hz to the grid within the specified range of rotational speed
  Failure mode: No power conversion capability
    Proposed task, PM action: Temperature measurement in the windings (done already); vibration CMS for early detection of bearing failures and unbalances

Item: Stator and rotor windings
  Function: Lead electric currents in order to provide electric power
  Failure mode: Short circuit failure
    Failure cause: Melting of the insulation due to overheating, material degradation (aging) of the insulation, or mechanical impact/vibrations; can cause excessive gearbox loading
    Failure characteristic: Increasing failure frequency
  Failure mode: Open circuit failure (loss of conduction)
    Failure cause: Broken windings or failed contacts at connections
    Failure mechanism: Fatigue as a consequence of excessive vibrations; material defects in copper conductors
    Failure characteristic: Sudden event or accumulating damage
    Proposed task, PM action: Avoid excessive vibrations → vibration CMS; early diagnosis of impending failure by means of thermography or using a motor tester; vibration CMS for detection of electric unbalance

Item: Bearings
  Function: Keep the generator rotor shaft in position while allowing rotary motion at minimal friction
  Failure mode: High friction
    Failure cause: Inappropriate lubrication (insufficient lubrication, over-lubrication, wrong lubricant)
      Failure mechanism: Increased friction → high temperature → surface damages
      Failure characteristic: Damage accumulating
      Proposed task, PM action: Follow lubrication scheme; automatic lubrication; temperature and vibration monitoring for early damage detection
    Failure cause: Bearing currents¹ caused by winding damage in the generator rotor or by ground currents (only V44; hybrid bearings applied in V90-2 MW)
      Failure mechanism: Formation of sparks → local destruction of lubricant film
      Failure characteristic: Damage accumulating
      Proposed task, PM action: Bearing current detection via measurement of high-frequency electromagnetic emissions from sparks; scheduled bearing replacement; temperature measurements; vibration CMS
    Failure cause: Design deficiencies (under-dimensioning)
      Proposed task, PM action: Design change required; preventive measures limited to early damage detection

Item: Brushes, slip rings² (V90-2 MW only)
  Function: Provide electrical contact to the generator rotor²
  Failure mode: Loss of contact; high friction and contact resistance
    Failure cause: Over-worn brushes
      Proposed task, PM action: Monitoring of brush thickness, e.g., by means of an integrated electrical contact or a camera system; condition-based replacement
    Failure cause: Slip ring damage
      Failure mechanism: Carbon dust from wear of brushes causes electric spark-over
      Proposed task, PM action: Solved successfully with a suction system for removal of carbon dust; condition-based replacement
Table 5.6 Selected analysis results for the rotor current control/converter subsystem (statements valid for the V44-600 kW only are marked with index "1"; index "2" indicates those limited to the V90-2 MW) [73]

Subsystem: Rotor Current Control (RCC unit in V44-600 kW, converter in V90-2 MW)

Items: All RCC components (V44 only); RCC micro-proc. unit; RCC power electronics unit
Function: Control the generated electric power by regulating the rotor current
Failure modes: Loss of current control capability; loss of rotor resistance variability; no target value for current control
Failure causes: Failure of power electronics components due to electrical operating conditions; failure of power electronics components due to overheating; failure of cables or contacts, predominantly due to mechanical impact (loose contacts due to vibration, cable twist)
Failure mechanisms: Ambient temperature exceeds design operating conditions, e.g., due to a failure in the nacelle/gearbox cooling system; failure correlates with grid disturbances/frequent transients in the grid
Failure characteristics: Seldom failure; increasing failure frequency
Proposed tasks, PM actions: Clarify the correlation between grid disturbances and failures; ensure design operating conditions; avoid strong vibrations (e.g., due to damaged bearings) → vibration CMS
128 L. B. Tjernberg
Table 5.6 (continued)

Items: All converter components (V90 only); converter micro-proc. unit; converter power electronics unit (IGBT); RCC resistor unit
Function: Feed the generator rotor circuit with electric current of desired amplitude and frequency for control of the power output to the grid
Failure modes: Loss of current control capability; overheating; open circuit failure (loss of conduction) in the rotor circuit
Failure causes and mechanisms: Open circuit/contact failure due to mechanical impact, e.g., cracking of welds due to impacts of rotation and vibration; overheating, e.g., due to a failed cooling system; aging; moisture ingression on condensation; mechanical impact, e.g., from vibration; electrical impact from grid disturbances; frequent operation at high slip; failed temperature monitoring
Failure characteristics: Seldom failure; increasing failure frequency; random failure
Proposed tasks, PM actions: Temperature monitoring; at high temperatures, operation at reduced power; avoid strong vibrations (by design or using a vibration CMS for early detection); software update; avoid strong vibrations (e.g., due to damaged bearings) → vibration CMS
of cases, a lack of or badly executed service maintenance has been found to lead to low availability and costly secondary damage. This shows that failing to perform maintenance in the right way can result in high O&M costs in practice. A challenge identified in the context of the RCM analysis is the large number of new personnel in wind turbine maintenance with limited experience in this field, a consequence of the strong growth of wind turbine installations. Correct installation and de-installation routines as well as proper alignment of components have a strong impact on the reliability of wind turbines. There is thus a need for enhanced training and education of wind turbine maintenance personnel in this respect. A fundamental problem revealed during the RCM study is that maintenance decisions are at present usually made with the aim of a short-term minimization of cost per kWh, not with a focus on long-term minimization of total life cycle cost. A difficulty is perceived in practically justifying the installation of additional equipment for the prevention of failures in wind turbines because quantifying the benefit of such investments is challenging. This issue can be addressed by means of data-based, quantitative methods for maintenance optimization. However, it must be noted that the broad practical application of quantitative methods in maintenance decision-support tools will require the structured and automated collection of in-depth failure and maintenance data of wind turbines. Thus, further and intensified efforts toward such systematic data collection, e.g., using the RDS-PP component designation structure combined with the EMS designation structure for maintenance activities (see [84, 85]) as proposed in [55, 75], are strongly needed in order to tap the full potential of quantitative maintenance optimization for cost reduction of wind energy.
5.5.6 Conclusions

A limited-scope RCM analysis of the wind turbines Vestas V44-600 kW and Vestas V90-2 MW has been carried out. The RCM study forms the basis for the development of quantitative models for maintenance strategy selection and optimization within the framework of the RCAM approach. The analysis has focused on the subsystems that have contributed most to the average downtime of these wind turbine models in the past. For these subsystems, it has identified the most relevant functional failures and their failure causes as well as suitable measures to either prevent the failure itself or avoid critical secondary damage. Analysis results for the subsystems gearbox, generator, and rotor current control (V44-600 kW)/converter (V90-2 MW) have been presented here. It has been found that a considerable number of the preventive measures proposed by the RCM workgroup for the V44-600 kW turbine have been implemented in the V90-2 MW series. Measures for the prevention or early detection of bearing damage are concluded to be particularly effective due to the identified central role of vibration as a cause of mechanical failure in a variety of components. In addition to the analysis of specific wind turbine failures and appropriate preventive measures, comprehensive background
information regarding current maintenance practices has been obtained during the RCM study. Challenges that are at present preventing the operation and maintenance of wind turbines from becoming more cost-effective have been identified, and solutions have been proposed. Standardized and automated collection of in-depth failure and maintenance data, enhanced training of maintenance personnel, and the utilization of quantitative methods for decision support in wind turbine maintenance are considered important steps toward improving the reliability, availability, and profitability of wind turbines.
5.6 A Fault Detection Framework Using Recurrent Neural Networks for Condition Monitoring of Wind Turbines

This section proposes a fault detection framework for the condition monitoring of wind turbines. The framework models and analyzes the data in supervisory control and data acquisition systems. For the log information, each event is mapped to an assembly based on the Reliawind taxonomy [86]. For the operation data, recurrent neural networks are applied to model normal behaviors; these networks can learn the long-time temporal dependencies between various time series. Based on the estimation results, a two-stage threshold method is proposed to determine the current operation status. The method evaluates the shift values deviating from the estimated behaviors and their duration in order to attenuate the effect of minor fluctuations. The generated results from the framework can help to understand when the turbine deviates from normal operations. The framework is validated with data from an onshore wind park. The numerical results show that the framework can detect operational risks and reduce false alarms. The framework and the case study were first presented in [87].
5.6.1 Introduction

The energy system is experiencing a global transition toward sustainability. For electricity generation, one key trend is the large-scale introduction of wind power. However, compared with fossil fuels, wind power still needs to drive down its cost, and one of the main costs comes from operations and maintenance (O&M). An accurate fault detection framework can help to achieve early anomaly diagnosis and to support efficient asset management with lower costs for O&M. Traditional fault detection for wind turbines is mainly based on mechanical signals subjected to time- and frequency-domain analysis [88]. With the development of supervisory control and data acquisition (SCADA) systems, a large amount of data can be accessed in the industry, which makes it possible to conduct data mining to detect potential degradations. Hence, recent studies have started to focus
on exploring the utilization of these data in different applications [89–90]. SCADA data usually consist of two parts: operation data and log files. The former records the real-time values of operation signals, and the latter records the operation status transitions and the corresponding triggered events. Compared with the operation data, log events are less studied in the existing literature. In [91] the authors used a time-sequence and probability-based method to analyze the log information. Specific log events are discussed and predicted in [92] using data-driven methods. Both papers reveal that the log files can also provide rich information for condition monitoring. However, what is rarely discussed is how to analyze both parts in the application of fault detection. Generally, anomalies can be categorized into three types: point anomalies, contextual anomalies, and collective anomalies [93]. For wind turbines, these anomalies can represent either incipient degradations or fully formed failures, both of which are operational risks. Concerning the modeling techniques, the existing machine learning methods can be classified into three categories depending on how many labels are available: unsupervised, semi-supervised, and supervised methods [94]. Specifically, for data from wind turbines, labels are missing in most cases. Hence, the developed methodology mainly assumes that most data are recorded under normal operations with no actual faults reported. Under this assumption, different methods are used to model normal behaviors and evaluate the deviations between the current status and normal operations, which is a kind of semi-supervised learning. Currently, two sorts of methods have been applied to model the operation data of wind turbines: probability-based methods and neural networks. In [95] a Bayesian-based framework is proposed to formulate the operation status of wind turbines.
In [96] a Gaussian process is proposed to model power generation in order to detect significant yaw misalignment. Both methods build a node-to-node relationship between input features and output signals from a statistical view. In [97] neural networks are used with a fuzzy inference system to detect faults in electrical and hydraulic pitch systems. In [98] the authors built a multi-agent system based on three-layer perceptrons to detect wind turbine faults, which adds past data as input to learn the dependencies between adjacent records. In [99] regressive neural networks are deployed to detect faults in temperature signals. The model can establish limited dependencies with a known lag model, but not in an automatic way. In light of the increasing volume of the datasets, deep learning has also been applied in this application. In [102] the authors used three-layer feedforward networks to model the gear lubricant pressure. Deep autoencoders are applied in [101] to detect blade breakage failures. However, similar to the probability-based methods, these networks still fail to model the dependencies in a proper way. For the methods mentioned above, the main challenge is how to automatically learn the temporal dependencies between adjacent records, which are critical for identifying underlying contextual anomalies in time series. Recurrent neural networks (RNNs) are a class of dynamic networks that can effectively capture long-time dependencies. Therefore, the model could be a feasible solution to the above challenge. RNNs were initially designed for natural language processing and have since been applied to recognizing the patterns of different time series, such as spacecraft data and electric loads.
Fig. 5.15 Illustration of proposed fault detection framework as an input for the reliability-centered asset maintenance [87]
Figure 5.15 illustrates the proposed fault detection framework as an input for reliability-centered asset maintenance. In summary, the framework: (1) analyzes SCADA data, including both the log information and the operation data, to understand when the turbine deviates from normal operations; (2) uses RNNs to model the normal behaviors of wind turbines, which can automatically learn the long-time dependencies inside time series; and (3) proposes a two-stage threshold method that evaluates the duration of the extreme shift values to reduce false alarms. This section reports on the potential of RNNs to model the operational data of wind turbines and on employing such models for fault detection. With regard to the alarm procedure, the existing studies mainly apply simple principles. In [99] temperature values beyond three-sigma deviations from the mean are deemed anomalies. In [100] the exponentially weighted moving average (EWMA) control chart is used to detect anomalous shifts of the lubricant pressure. Furthermore, in [106] the authors proposed an approach based on extreme value theory without any assumptions on specific distributions. However, in these methods, false alarms can be triggered when the turbine suffers from minor fluctuations, like transient disturbances or individual spikes, even though the shift from normal operations is still in control. Hence, the design of a reasonable alarm procedure can largely affect the fault detection performance. To tackle the above challenges, this section proposes a fault detection framework for wind turbines using SCADA data as the primary input. The framework monitors and evaluates the critical signals, which are sensitive to variations in the corresponding components' health conditions.
5.6.2 Mathematical Modelling

RNNs are dynamic neural networks, which are powerful tools for modeling sequential datasets. However, with increasing network complexity, RNNs suffer from gradient vanishing and explosion problems, making it difficult for them to learn long-time dependencies. To tackle these difficulties, long short-term memory (LSTM) was proposed in [108], which introduces an input gate and an output gate into the structure. In subsequent developments, different variants have been made to improve the effectiveness of the model. In [109] an adaptive forget gate is used to solve the saturation problem. In [110] the authors proposed peephole connections to perform highly nonlinear tasks, by which the cell state is fed back as part of the gate input. A bidirectional structure is applied in [102] to deal with complicated speech processing tasks. Given that this section aims to diagnose anomalies in the near future, the unidirectional LSTM with peepholes is applied to model the signal sequences of wind turbines. The main features of the network are a memory cell unit and three gates: the input gate, the output gate, and the forget gate. For details of the mathematical modelling, the reader is referred to [87].
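As a rough, self-contained illustration of such a cell (a scalar toy with random weights, not the trained network from [87]; the class name and weight layout are invented for this sketch), one forward step of a peephole LSTM can be written as:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class PeepholeLSTMCell:
    """Single-unit LSTM cell with peephole connections (scalar state for brevity)."""

    def __init__(self, seed=0):
        rng = random.Random(seed)
        # per gate: weight on input x, recurrent h, peephole c, and a bias
        self.w = {name: [rng.uniform(-0.1, 0.1) for _ in range(4)]
                  for name in ("i", "f", "o", "g")}

    def step(self, x, h, c):
        wi, wf, wo, wg = self.w["i"], self.w["f"], self.w["o"], self.w["g"]
        # peepholes: input and forget gates see the previous cell state c
        i = sigmoid(wi[0] * x + wi[1] * h + wi[2] * c + wi[3])
        f = sigmoid(wf[0] * x + wf[1] * h + wf[2] * c + wf[3])
        g = math.tanh(wg[0] * x + wg[1] * h + wg[3])  # candidate: no peephole
        c_new = f * c + i * g
        # the output gate peeks at the *new* cell state
        o = sigmoid(wo[0] * x + wo[1] * h + wo[2] * c_new + wo[3])
        h_new = o * math.tanh(c_new)
        return h_new, c_new

cell = PeepholeLSTMCell()
h, c = 0.0, 0.0
for t in range(10):                      # feed a short synthetic signal
    h, c = cell.step(math.sin(0.3 * t), h, c)
print(-1.0 < h < 1.0)                    # hidden state stays in tanh range
```

The peephole terms are the c-dependent summands in the input and forget gates and the c_new-dependent summand in the output gate; removing them recovers the standard LSTM cell.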
5.6.3 The Proposed Fault Detection Model

This section proposes a fault detection framework that analyzes SCADA log events and operation data. The framework monitors and evaluates the critical signals, which are sensitive to variations in the corresponding components' health conditions. The framework uses RNNs as the main model and proposes a two-stage threshold method as post-processing. The framework aims to generate alarms for potential operational risks while reducing false alarms during normal operations. The flow chart of the framework can be found in Fig. 5.15. The generated results of the framework can be used as input for the RCAM.
5.6.3.1 Log Analysis
The SCADA system supervises the operational status of wind turbines and protects them from extreme loads. Once a critical signal exceeds a predefined operating threshold, an event is triggered and recorded in the log file. These events reveal potential operational risks, which can be seen as point anomalies that consume more component lifetime than normal operation. Analyzing the log events therefore helps operators understand when the turbine deviates from normal operations. A log event usually contains the following information: the event code, the event description, the triggered time, and the acknowledgment time. To interpret the log files' information, this section maps each triggered event to the assembly level based on the taxonomy and manufacturer documents. Owing to its wide application, the Reliawind taxonomy is used in this study with minor modifications; the resulting structure, covering the turbine's main modules (drive train, electrical module, control & communication system, nacelle module, rotor module, and support structure) and assemblies such as the gearbox, generator, main shaft, frequency converter, auxiliary electrical system, power electrical system, hydraulic system, nacelle, yaw system, blade, hub, pitch system, foundation, and tower, can be found in Fig. 5.16.

Fig. 5.16 Reliawind taxonomy used in the proposed framework for fault detection and RCAM [87]
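As an illustration of this mapping step (the event codes and assembly assignments below are hypothetical; real codes are manufacturer-specific and taken from the turbine documentation):

```python
from collections import Counter

# Hypothetical mapping from SCADA event codes to Reliawind assemblies.
EVENT_TO_ASSEMBLY = {
    101: "Gearbox",
    102: "Gearbox",
    205: "Generator",
    301: "Pitch system",
    999: None,  # operation-mode transition: not related to a component failure
}

def classify(log_events):
    """Count triggered events per assembly, dropping pure operation logs."""
    assemblies = (EVENT_TO_ASSEMBLY.get(code) for code, _time in log_events)
    return Counter(a for a in assemblies if a is not None)

log = [(101, "2015-05-15T13:00"), (102, "2015-05-15T13:05"),
       (999, "2015-05-15T14:00"), (301, "2015-06-01T09:30")]
print(classify(log))   # Counter({'Gearbox': 2, 'Pitch system': 1})
```

Events without a component-level mapping (here code 999) are excluded, mirroring the treatment of operation-mode transitions in the study.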
5.6.3.2 Input Feature Selection
The proposed framework evaluates operational behaviors by monitoring the critical signals that are sensitive to the equipment's health conditions. The input features are applied to estimate the defined critical signals. In this section, the critical signals are decided based on domain knowledge. The input features are selected depending on how much relevant information they contain. The selection process consists of three steps: evaluation, ranking, and forward search. In the evaluation step, normalized mutual information (NMI) is introduced to measure the relevance and dependence between the input features and the critical signals, defined as:

NMI(X, Y) = 2·I(X, Y) / (H(X) + H(Y))

where H(X) and H(Y) are the marginal entropies of the random variables X and Y, and I(X, Y) denotes their mutual information, which can be calculated as:
I(X, Y) = Σ_{y∈Y} Σ_{x∈X} p(x, y) · log[ p(x, y) / (p(x)·p(y)) ]

where p(x, y) is the joint probability function and p(x) and p(y) are the marginal probability functions. The input features are ranked based on the obtained results, and the top ones are kept for the following analysis. Since the normalized mutual information does not measure the joint effect of multiple input features, different subsets of the top features are examined by cross-validation tests. The forward search starts from the individual features and adds one more feature at each step until all the features are exhausted. For instance, assume that three features are selected after ranking. The forward search then consists of three steps: a one-feature search, a two-feature search, and a three-feature search. At each step, the different combinations of the features are evaluated separately. Finally, the subset with the minimum error is used to train the estimation models for the critical signals.
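The evaluation step can be sketched for discretized (binned) signals as follows; the variable names and the toy series are illustrative only:

```python
import math
from collections import Counter

def entropy(xs):
    """Empirical Shannon entropy of a discrete series."""
    n = len(xs)
    return -sum((c / n) * math.log(c / n) for c in Counter(xs).values())

def mutual_information(xs, ys):
    """Empirical mutual information I(X, Y) from joint and marginal counts."""
    n = len(xs)
    joint = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in joint.items())

def nmi(xs, ys):
    """Normalized mutual information: 2·I(X, Y) / (H(X) + H(Y))."""
    hx, hy = entropy(xs), entropy(ys)
    if hx + hy == 0:
        return 0.0
    return 2.0 * mutual_information(xs, ys) / (hx + hy)

# identical (binned) series share all information: NMI = 1
x = [0, 0, 1, 1, 2, 2, 0, 1]
print(round(nmi(x, x), 6))          # 1.0
# a constant series carries no information about x
print(nmi(x, [5] * len(x)))         # 0.0
```

In practice the continuous SCADA signals would first be binned; the ranking step then simply sorts candidate features by their NMI against the critical signal.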
5.6.3.3 Pre-processing
In this study, the preprocessing consists of three parts: filtration, transformation, and normalization. Given that the raw data can contain noisy records, which can be either missing or erroneous, the filter aims to remove bad data through the following steps:

1. Delete the records with missing values.
2. Delete the records outside the operational ranges of normal behaviors. The normal operation ranges are determined based on the analysis of historical data and are listed in Table 5.7.
3. Delete the records with a wind speed either less than the cut-in speed or larger than the cutoff speed, which ensures that the RNNs mainly capture the features of the production mode.
4. Delete possible erroneous records using the isolation forest algorithm. The method builds an ensemble of search trees to partition sample individuals. For a sample with n instances, the anomaly score s(x, n) of an individual x is calculated as:

s(x, n) = 2^(−E(h(x)) / c(n))

where E(h(x)) is the average of the path length h(x) and c(n) is the average path length of unsuccessful searches in binary search trees, which is used as an estimate of the average h(x). In the algorithm, exceptional values are more likely to be separated in the early partitions. Hence, instances with shorter search path lengths are deemed anomalies.
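The score above can be computed directly once the average path length is known; a small sketch (the sample size and path lengths below are illustrative, not taken from the study):

```python
import math

EULER = 0.5772156649  # Euler-Mascheroni constant

def c(n):
    """Average path length of an unsuccessful BST search over n instances."""
    if n <= 1:
        return 0.0
    harmonic = math.log(n - 1) + EULER      # harmonic-number approximation
    return 2.0 * harmonic - 2.0 * (n - 1) / n

def anomaly_score(avg_path_len, n):
    """Isolation forest score s(x, n) = 2^(-E(h(x)) / c(n))."""
    return 2.0 ** (-avg_path_len / c(n))

n = 256
# a path length equal to the expectation gives the neutral score 0.5
print(round(anomaly_score(c(n), n), 6))   # 0.5
# much shorter paths (isolated early) push the score toward 1
print(anomaly_score(1.0, n) > 0.9)        # True
```

Scores close to 1 therefore mark likely erroneous records, while scores well below 0.5 mark ordinary ones.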
Table 5.7 Input data for operation ranges of wind turbines [87]

Signal                              | Unit  | Operation range
Active power                        | kW    | [−100, 2100]
Nacelle temperature                 | °C    | [−20, 40]
Rotor speed                         | rpm   | [0, 14.8]
Gearbox oil temperature             | °C    | [0, 65]
Gearbox bearing temperature         | °C    | [0, 80]
Pitch angle                         | °     | [−5, 90]
Spinner temperature                 | °C    | [−15, 40]
Yaw position                        | °     | [0, 360]
Grid busbar temperature             | °C    | [5, 60]
Hydraulic oil temperature           | °C    | [10, 65]
Hydraulic oil pressure              | bar   | [0, 200]
Grid inverter temperature           | °C    | [25, 50]
Rotor inverter temperature          | °C    | [25, 55]
Top controller temperature          | °C    | [10, 50]
Hub controller temperature          | °C    | [0, 50]
Generator speed                     | rpm   | [0, 1685]
Generator front bearing temperature | °C    | [0, 90]
Generator phase temperature         | °C    | [0, 150]
Generator slip ring temperature     | °C    | [−10, 40]
Considering the imbalanced distributions present in the operational data, which are shown in Fig. 5.17, the refined data are transformed to approximately normal distributions through the Box-Cox transformation, which can be expressed as follows:

T(y) = (y^λ − 1) / λ,  if λ ≠ 0
T(y) = log(y),          if λ = 0

where λ is the parameter that maximizes the likelihood function of the given data y. After that, the transformed data are normalized to the range [0, 1].
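A minimal sketch of the transformation and normalization steps (with a fixed λ for illustration; in the study λ is fitted by maximum likelihood, and the sample values are invented):

```python
import math

def box_cox(y, lam):
    """Box-Cox transform for y > 0; lam would normally be fitted by ML."""
    if lam == 0:
        return math.log(y)
    return (y ** lam - 1.0) / lam

def min_max(values):
    """Normalize a list of values to the range [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

raw = [2.0, 5.0, 9.0, 20.0, 60.0]   # right-skewed toy data
lam = 0.0                            # log transform, the lam -> 0 limit
transformed = [box_cox(y, lam) for y in raw]
scaled = min_max(transformed)
print(scaled[0], scaled[-1])         # 0.0 1.0
```

The log case compresses the long right tail before scaling, which is what brings the skewed SCADA distributions closer to normality.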
5.6.3.4 Two-Stage Threshold
To generate a prompt reminder of operational risks and to reduce false alarms during normal operations, a two-stage threshold method is designed as post-processing in the framework. In the three-sigma principle, a fixed threshold is usually set at least three standard deviations away from the mean value. However, false alarms can be triggered when the turbine suffers from minor fluctuations, although the shift is still in control. Hence, instead of focusing on the shift values alone, the two-stage threshold evaluates both the values and their durations, which aims to attenuate the effect of individual spikes and transient disturbances. The method consists of the following two stages:

Stage I: Apply the three-sigma principle to generate a threshold for the shift values deviating from the behaviors estimated by the RNNs.
Stage II: Calculate the area values composed of the shifts that exceed the threshold obtained at Stage I. The area values are monitored by the three-sigma principle at Stage II. The shifts that lead to areas larger than the three-sigma threshold are deemed operational risks.

The method thus evaluates the areas generated by the extreme shifts that exceed the first-stage threshold to determine the current operation status. The area values measure both the shift values and their duration and can reduce false alarms triggered by minor fluctuations compared with the plain three-sigma principle.

Fig. 5.17 Histograms to illustrate the imbalanced distributions of the signals under normal operations. For the rotor speed, oil temperature, and bearing temperature, most of the data can be observed at the tail of the corresponding distributions. For the power generation, more data exist at both the head and tail parts. (Data is shown for one turbine, Turbine 9 in the case study) [87]

Fig. 5.18 The schematic diagram of the two-stage threshold. The blue lines represent the shift values deviating from the estimated behaviors. The orange dashed line denotes the first-stage threshold. The shifts beyond the first-stage threshold generate the gray and pink areas. The red crossings are the alarms triggered at the second-stage threshold [87]

Figure 5.18 demonstrates how the two-stage threshold works with the estimated result obtained by the RNNs. In Fig. 5.18, three groups of shifts exceed the first-stage threshold and, respectively, generate the gray and pink areas. Due to either transient disturbances or individual spikes, the gray areas fail to trigger the second-stage threshold and can be seen as minor fluctuations. For the pink area, the large shifts and their long duration make the area exceed the second-stage threshold, hence triggering alarms, which are deemed potential operational risks.
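The two stages can be sketched as follows (an illustrative implementation only: the fixed `area_threshold` used in the demo, and the fallback when too few areas exist to estimate a second three-sigma limit, are assumptions of this sketch):

```python
from statistics import mean, stdev

def three_sigma(values):
    """Fixed threshold: mean plus three standard deviations."""
    return mean(values) + 3.0 * stdev(values)

def two_stage_alarms(shifts, area_threshold=None):
    """Return (start, end, area) runs whose excess area crosses the stage-II limit.

    shifts: absolute deviations from the RNN-estimated normal behavior."""
    t1 = three_sigma(shifts)                  # stage I: 3-sigma on shift values
    runs, start, area = [], None, 0.0
    for k, s in enumerate(shifts + [0.0]):    # sentinel closes a trailing run
        if s > t1:
            if start is None:
                start = k
            area += s - t1                    # accumulate excess over stage I
        elif start is not None:
            runs.append((start, k, area))
            start, area = None, 0.0
    if not runs:
        return []
    areas = [a for _, _, a in runs]
    # stage II: 3-sigma on the areas (or a fixed limit supplied by the caller)
    t2 = area_threshold if area_threshold is not None else (
        three_sigma(areas) if len(areas) > 1 else areas[0])
    return [r for r in runs if r[2] >= t2]

# quiet signal with one single spike (ignored) and one sustained excursion
shifts = [0.1] * 50 + [5.0] + [0.1] * 50 + [5.0] * 12 + [0.1] * 50
alarms = two_stage_alarms(shifts, area_threshold=3.0)
print(len(alarms))   # 1 -> only the sustained excursion triggers an alarm
```

Both the spike and the excursion cross the first-stage threshold, but only the excursion accumulates enough area to cross the second-stage limit, which is exactly the spike-suppression behavior described above.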
5.6.4 Case Study for an Onshore Wind Park

This section gives an overview of the validation of the proposed fault detection framework with data from an onshore wind park in northern Europe.
5.6.4.1 Data Set Description
The wind park is located in northern Europe and consists of 16 wind turbines. SCADA data have been available for each turbine in the wind farm for a 2.5-year period. The data are from January 2014 to June 2016 and include operation data and log files. The operation data are sampled every 10 min, and 40 signals are available, including electrical signals, temperatures, and rotational speeds. According to maintenance reports, the earliest failure during the time period happened in early January 2015. To ensure that the training data are under normal operations without losing much information, the framework uses 11 months of data for training and the remaining part for testing. This rule applies to all the wind turbines for consistency. The models' performance is measured by the normalized root-mean-square error (NRMSE) and the mean absolute percentage error (MAPE), which are defined as follows:

NRMSE = [1 / (y_max − y_min)] · √[(1/n) Σ_{i=1}^{n} (ŷ_i − y_i)²] · 100%
MAPE = (100% / n) · Σ_{i=1}^{n} |ŷ_i − y_i| / y_i

where:
• ŷ_i is the estimated value and y_i is the target value.
• n is the length of the vector y.
• y_max and y_min are, respectively, the maximum and minimum of the target values.

This case study considers the gearbox as the critical component. For further studies on the criticality assessment of components, the reader is referred to [52, 73, 123]. The gear oil and bearing temperatures are selected as the critical signals to represent the health status of the rotational assemblies and their lubrication systems. In the feature selection, the power generation and rotor speed are the top-ranked features. The nacelle temperature is widely used to model component temperatures in wind turbines; hence, it is considered in this case as well. Through the forward search, the nacelle temperature, rotor speed, and power generation are chosen as the optimal features.
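The two error metrics can be sketched as plain functions (the toy temperature series is illustrative only):

```python
import math

def nrmse(y_hat, y):
    """Root-mean-square error normalized by the target range, in percent."""
    n = len(y)
    rmse = math.sqrt(sum((a - b) ** 2 for a, b in zip(y_hat, y)) / n)
    return rmse / (max(y) - min(y)) * 100.0

def mape(y_hat, y):
    """Mean absolute percentage error; assumes nonzero targets."""
    return 100.0 / len(y) * sum(abs(a - b) / abs(b) for a, b in zip(y_hat, y))

y     = [50.0, 55.0, 60.0, 65.0, 70.0]   # e.g. gearbox oil temperature targets
y_hat = [51.0, 54.0, 61.0, 64.0, 71.0]   # model estimates, each off by 1 degree
print(round(nrmse(y_hat, y), 2))         # 5.0 -> RMSE of 1 over a range of 20
print(round(mape(y_hat, y), 2))          # 1.69
```

NRMSE weighs errors relative to the target's range, while MAPE weighs them relative to each target value, which is why both are reported in Table 5.8.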
5.6.4.2 Benchmark Test of Estimation Models
Many factors can affect the estimation results of RNNs. This section discusses the effects of hyperparameters and model structures and employs analysis of variance (ANOVA) to assess the differences among the group means of the estimation models. In this study, the batch size is sampled from the commonly used values from 32 to 512. Figure 5.19 presents the box plots of tenfold cross-validation for modeling the gear oil temperature. It can be seen that the error increases as the batch size gets larger, regardless of whether the mean, the median, or the whole range is considered. The ANOVA result shows that the mean errors for batch sizes 32 and 64 are significantly smaller than those of the other three groups. Since both the mean (MAPE = 2.13%) and median (MAPE = 2.15%) errors are smaller than for size 64, the batch size is set to 32 for the following analysis. In [119] the authors concluded that the learning rate is the most critical hyperparameter, followed by the network size. In this study, the learning rate and the other parameters of ADAM use the values from [115], which are the suggested settings for the tested applications. Figure 5.20 shows the estimation performance of multiple RNN variants, namely the simple RNN, LSTM, and the gated recurrent unit (GRU), and their different structures, from one layer to three layers. The network size is tested with the values [30, 50, 80, 100, 120]. It can be seen that the simple RNN performs the worst among the three variants in the 15 tested structures. This is also confirmed by ANOVA, which shows a significant difference between the simple RNN and GRU/LSTM. Although no significant difference arises between GRU and LSTM, a slight superiority of LSTM over GRU can be observed almost universally, except for the three-layer structure with a network size of 30 and the two-layer structure with a network size of 100. With respect to the
Fig. 5.19 MAPEs of tenfold cross-validation with different batch sizes to model the gear oil temperature. (The median and mean errors are denoted as the red lines and red dots. The bottom and top edges of boxes, respectively, indicate the 25th and 75th percentiles. The whiskers show the whole range of MAPEs) [87]
Fig. 5.20 MAPEs of tenfold cross-validation to model the gear oil temperature with different structures, network sizes, and model variants. (The lines denote the results with the network size from 30 to 120) [87]
network size and structure, no significant difference is observed among the different values for either GRU or LSTM. Given that the two-layer LSTM with a network size of 30 achieves the minimum error (MAPE = 2.0685%), this structure is used for the following analysis. To validate the effect of RNNs, the above LSTM is compared with two methods that have been widely applied in this application, namely the Gaussian process
Table 5.8 Errors of 10-time multistep estimation to model the normal gearbox temperature

ID | Temperature | Error     | LSTM    | DNN     | GP
3  | Oil         | NRMSE (%) | 10.1508 | 14.5425 | 14.4383
3  | Oil         | MAPE (%)  | 2.5021  | 3.7894  | 3.7713
3  | Bearing     | NRMSE (%) | 6.0059  | 9.0434  | 9.2559
3  | Bearing     | MAPE (%)  | 1.4498  | 2.2742  | 2.2742
7  | Oil         | NRMSE (%) | 9.0302  | 13.0936 | 13.2458
7  | Oil         | MAPE (%)  | 2.2222  | 3.4248  | 3.4457
7  | Bearing     | NRMSE (%) | 5.2831  | 8.1686  | 8.3464
7  | Bearing     | MAPE (%)  | 1.2780  | 2.0383  | 2.0691
9  | Oil         | NRMSE (%) | 8.7538  | 13.0687 | 13.2462
9  | Oil         | MAPE (%)  | 2.1546  | 3.4145  | 3.4469
9  | Bearing     | NRMSE (%) | 5.2173  | 8.2233  | 8.3452
9  | Bearing     | MAPE (%)  | 1.2599  | 2.0476  | 2.0694
(GP) and deep neural networks (DNNs) [96]. The Gaussian process is implemented using the GPML package with a zero mean function, squared exponential covariance, Gaussian likelihood method, and Laplace inference [120]. For the DNN, rectified linear units are used as the activation functions, and the structure has three layers with a network size of 100. The parameters are initialized by the method in [121]. The methods are examined with three wind turbines from the case study, with IDs 3, 7, and 9. Table 5.8 lists the mean values of the NRMSEs and MAPEs after 10 runs of multistep estimation, meaning that each estimated result is independent of the previous ones. The best estimation result for each row is emphasized in bold. Among the three examples, Turbine 3 yields the largest estimation errors while Turbine 9 yields the smallest. With regard to the methods, except for the oil temperature of Turbine 3, GP achieves the worst estimation among the three methods, followed by DNN. LSTM outperforms the other two methods, with average improvements of 34.71% over DNN and 35.41% over GP. This significant difference is also detected by ANOVA. From the above analysis, it can be concluded that LSTM is superior for modeling wind turbines' temperature signals compared with the other two methods.
5.6.4.3 Results of the Classified Log Events
This section presents the classified log events using the Reliawind taxonomy. The turbine with ID 15 is taken as an example. According to the maintenance report, a fault in the intermediate speed shaft was detected in this turbine on November 30, 2015. The operation data show that the turbine was stopped from 20:20 on November 28 to 15:40 on December 1. Figure 5.21 presents the distributions of the triggered SCADA log events classified at the assembly level during 2015. These events indicate the anomalous statuses highlighted by the SCADA system. Here, the
Fig. 5.21 The distribution of the triggered log events at the assembly level for Turbine 15 with a fault in the intermediate speed shaft [87]
operation logs report the operation mode transitions, which are irrelevant to any failures of components; hence, they are not included in the Reliawind taxonomy. It can be seen that an event about the gearbox was triggered at 13:00 on May 15, which reported a feedback problem of the gear oil cooling system. The problem was cleared on the same day. Beyond that, neither alarms about the temperature of oil nor bearing is reported, which means the temperature values are still within the SCADA threshold.
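Classifying the triggered log events at the assembly level, as in Fig. 5.21, can be sketched as follows (the event codes, their mapping to assemblies, and the operation-log codes are all hypothetical, since the chapter does not reproduce the Reliawind code tables):

```python
# Hypothetical event codes and assembly mapping -- the actual Reliawind
# taxonomy codes are not reproduced in the chapter, so these are illustrative.
ASSEMBLY_OF_EVENT = {
    "GearOilCoolingFeedback": "Gearbox",
    "PitchMotorOvercurrent": "Pitch System",
    "GeneratorOvertemp": "Generator",
}
OPERATION_LOGS = {"ModeTransition", "RemoteStop", "RemoteStart"}

def classify_events(log):
    """Group triggered SCADA log events at the assembly level, dropping
    operation-mode logs, which do not indicate any component failure."""
    counts = {}
    for timestamp, code in log:
        if code in OPERATION_LOGS:
            continue
        assembly = ASSEMBLY_OF_EVENT.get(code, "Unknown")
        counts[assembly] = counts.get(assembly, 0) + 1
    return counts

log = [("2015-05-15 13:00", "GearOilCoolingFeedback"),
       ("2015-05-15 13:05", "ModeTransition"),
       ("2015-05-15 14:00", "RemoteStop")]
print(classify_events(log))  # -> {'Gearbox': 1}
```

A histogram of such counts per assembly over the year gives the distribution shown in Fig. 5.21.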
5.6.4.4 Results of the Two-Stage Threshold
This section presents the analysis of Turbine 15 using the proposed two-stage threshold. Figure 5.22 presents the diagnosis result of the bearing temperature. The upper subplot shows the variation of the percentage errors (PEs) obtained by LSTM, which are defined as follows:

PE = (ŷ − y)/y · 100%

where ŷ is the estimated temperature value and y is the target value. In Fig. 5.22, the estimation values are missing in two periods: from 4:50 on November 2 to 21:00 on November 19 and from 12:50 on November 21 to 17:50 on November 27. During these periods, the turbine was not in production mode. Since the framework is intended to operate in production mode, these values are neglected accordingly. There is no indication that these periods are related to the actual failure. In the upper subplot, PEs are within 5% at the beginning, which confirms
L. B. Tjernberg
Fig. 5.22 The result of modeling the bearing temperature using the two-stage threshold for the normal Turbine 3. (Upper subplot: PE [%] over calendar time with the first-stage threshold and the area over the first-stage threshold marked; lower subplot: accumulated area over calendar time with the second-stage threshold.) The orange dashed lines are the upper and lower limits [87]
that the LSTM achieves an accurate estimation of the normal temperature values. From 21:20 on November 1, the PEs start to exceed the lower limit, which means that the current temperature is much higher than the estimated value. The accumulated area values begin to exceed the second-stage threshold at 3:10 on November 2, and alarms are triggered to warn operators of the anomalous overheating. Unlike the anomalies detected by the log events, these do not exceed the extreme operation limits defined by SCADA. However, they deviate so far from the estimated temperature values that the area values exceed the second-stage threshold; hence, they are deemed contextually anomalous compared with the normal operation established by the RNNs. In addition, it is noted that the PEs vary within [−9.7598%, −9.2622%] from 8:40 to 9:50 on October 21, which is large enough to cross the first-stage threshold. However, due to the short duration, the generated area values fail to trigger the second-stage threshold. These temporary spikes are treated as minor fluctuations rather than potential operational risks. The two-stage method is also examined with a normal turbine, Turbine 3, and the result can be found in Fig. 5.22. In the upper subplot, some PEs lie outside the upper and lower limits, although most are within 5%. In the lower subplot, the area values generated by the extreme PEs are far below the second-stage
Fig. 5.23 The result of modeling the bearing temperature using the EWMA control chart for the normal Turbine 3. (The orange dashed lines are the upper and lower limits. The red dots denote the triggered alarms) [87]
threshold; hence, no alarms are triggered. Furthermore, the log files are checked as well, but no log events are recorded during the production mode. In addition, the proposed two-stage threshold is compared with the EWMA control chart used in [100]. Figure 5.23 shows the result of EWMA using the same PEs as in Fig. 5.22. The parameter ψ is set here to 0.5. It can be seen that continuous alarms are triggered throughout the whole year. The results validate that the proposed two-stage threshold reduces false alarms for normal operations compared with the EWMA control chart.
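The contrast between the two alarm policies can be sketched on a synthetic PE series (a minimal illustration: the threshold values, the area/reset rule, and the in-control statistics are assumptions, not the tuned values from the chapter; only ψ = 0.5 is taken from the text):

```python
import math

def two_stage_alarm(pes, first=5.0, second=50.0):
    # Stage 1: flag PEs outside the +/-first band; Stage 2: accumulate the
    # excess area and alarm only once it exceeds `second`. The reset-on-return
    # rule and both threshold values are illustrative assumptions.
    area, alarms = 0.0, []
    for pe in pes:
        area = area + (abs(pe) - first) if abs(pe) > first else 0.0
        alarms.append(area > second)
    return alarms

def ewma_alarms(pes, mu0=0.0, sigma0=1.0, psi=0.5, L=3.0):
    # Textbook EWMA control chart; mu0/sigma0 are assumed in-control
    # statistics of the PEs. Only psi = 0.5 is taken from the chapter.
    z, alarms = mu0, []
    for t, x in enumerate(pes, start=1):
        z = psi * x + (1 - psi) * z
        limit = L * sigma0 * math.sqrt(psi / (2 - psi) * (1 - (1 - psi) ** (2 * t)))
        alarms.append(abs(z - mu0) > limit)
    return alarms

# A short spike crosses the first stage but accumulates too little area to
# alarm; only a sustained deviation triggers the second stage.
spike = [0.0, -9.5, 0.0, 0.0, 0.0, 0.0]
sustained = [0.0] + [-9.0] * 14
print(any(two_stage_alarm(spike)), any(two_stage_alarm(sustained)))  # -> False True
print(ewma_alarms([0.5, -0.3, 0.2, 6.0, 5.5, 0.1, -0.2]))
```

Note that in the last line the EWMA chart still alarms one step after the deviation has ended, because the smoothed statistic decays slowly; this hints at why the chart in Fig. 5.23 produced continuous alarms for a normal turbine, while the two-stage threshold stays silent on short spikes.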
5.6.4.5 Diagnosis Results of the Onshore Wind Park
Finally, the proposed fault detection framework is examined with the complete data set of the onshore wind park. Table 5.9 shows the detailed failure information according to the maintenance report together with the diagnosis results of the developed framework. The framework detects a possible fault in the gearbox by monitoring the gear oil and bearing temperatures. The anomalous overheating in either the gear oil or the bearing temperature can be caused by many factors. Hence, the root causes of the specific failures need to be identified with the aid of expert knowledge or field inspection, which is not explored in this section. In the wind park, seven turbines have actual faults reported in the gearbox; two of them (Turbines 6 and 12) are in the lubrication system, and the others are in the rotational components. Three turbines (IDs 3, 7, and 9) are reported as operating normally. The remaining turbines have faults reported in other components. Since the correlation between faults in different components is not yet clear, these turbines are not discussed in this section. In Table 5.9, no alarms are triggered for the normal turbines. For the failure cases, Turbines 6 and 11 are diagnosed with anomalous values in the gear oil temperature and the bearing temperature, respectively, 3 months ahead. The anomalous oil temperature is found for
Table 5.9 Failure information in the maintenance report and diagnosis results of the onshore wind park [87]

ID  Failure information                                              Failure time  Diagnosis time
3   Normal                                                           –             –
6   No filtrate and the pump was full of air                         2015-09-02    2015-06-01
7   Normal                                                           –             –
9   Normal                                                           –             –
11  A crack in the low speed shaft wheel                             2015-09-11    2015-06-02
12  A leakage in the gear oil                                        2015-01-15    2014-12-22
13  Spalling in the intermediate speed bearing                       2015-07-07    2015-02-26
14  High speed shaft bearing replacement                             2015-09-07    2015-04-13
15  Fault in the intermediate shaft bearing                          2015-11-30    2015-11-02
16  Damage in the high speed shaft pinion with half a tooth broken   2015-01-05    2014-12-14

Note: Dates are formatted as YYYY-MM-DD
Turbine 12 less than 1 month ahead of the actual failure time. Turbines 13 and 14 are diagnosed with anomalies in both the oil and the bearing temperature about 5 months earlier than the actual failure time. Turbines 15 and 16 are detected with potential risks in the bearing temperature less than 1 month in advance. From the above results, it can be concluded that the proposed fault detection framework can raise alarms for impending operational risks while reducing false alarms during normal operations.
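The lead times implied by Table 5.9 can be checked directly from its two date columns (a small sketch; the dates are taken from the table as printed):

```python
from datetime import date

# Failure and diagnosis dates (YYYY-MM-DD) from Table 5.9
cases = {
    6:  ("2015-09-02", "2015-06-01"),
    11: ("2015-09-11", "2015-06-02"),
    12: ("2015-01-15", "2014-12-22"),
    13: ("2015-07-07", "2015-02-26"),
    14: ("2015-09-07", "2015-04-13"),
    15: ("2015-11-30", "2015-11-02"),
    16: ("2015-01-05", "2014-12-14"),
}

def lead_time_days(failure, diagnosis):
    # Days between diagnosis and the reported failure.
    return (date.fromisoformat(failure) - date.fromisoformat(diagnosis)).days

for tid, (fail, diag) in cases.items():
    print(tid, lead_time_days(fail, diag))  # e.g., Turbine 6 -> 93 days
```

The resulting gaps, e.g., 93 days for Turbine 6, 24 for Turbine 12, 147 for Turbine 14, and 28 for Turbine 15, roughly match the "3 months," "5 months," and "less than 1 month" statements in the text.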
5.6.4.6 Conclusions
This section has proposed a fault detection framework for the condition monitoring of wind turbines. The framework: analyzed both SCADA log events and operation data to understand when the turbine deviates from normal operation; used RNNs, which can automatically learn the dependencies between adjacent records, to model the operation data; and proposed a two-stage threshold method to identify underlying operational risks and to reduce false alarms during normal operations. The framework is validated with data from an onshore wind park. In the benchmark test, the RNNs are compared with a DNN and a GP, and ANOVA confirms their superiority in modeling time series. The two-stage threshold is compared with the EWMA control chart, and the results show that the method reduces false alarms as a post-processing step of fault detection. The framework is tested with both normal operations and failure examples. The results validate that the proposed framework can generate alarms for operational risks and reduce false alarms during normal operations.

Acknowledgments The RCAM approach was first presented in the PhD thesis by Prof. Lina Bertling in 2002. It was later published, together with an extensive selection of applications, in the book Infrastructure Asset Management with Power System Applications, by L. Bertling
Tjernberg, CRC Press Taylor & Francis, First Edition, 2018. Since 2018, the work has developed toward using machine learning models for predictive maintenance, as presented in the last section of this chapter. The RCAM research group was founded at KTH in 2002 by Dr. Bertling. During 2009–2013, Bertling Tjernberg held a position at Chalmers University of Technology, where she initiated the Wind Power Asset Management group (WindAM). In 2013, the WindAM group merged back into RCAM at KTH. Numerous students and researchers have worked within the research group and have performed studies for the various applications. The first studies were performed by master thesis students Johan Ribrant (focusing on failure statistics) and Julia Nilsson (focusing on LCC analysis and optimization). The first PhD student, Francois Besnard, focused on maintenance optimization (with examples included in this chapter). The first post doc, Kateryna Fisher, performed an extensive RCM study (which is partly included in this chapter). A second-generation PhD student, Pramod Bangalore, started work with predictive models using machine learning and input data from SCADA. These models were further developed by a third-generation PhD student, Cui Yue (examples from these predictive models are included in this chapter), who later continued as a post doc with the RCAM group. Pramod Bangalore continued to contribute to the research and provided the necessary input data for the case studies from GreenByte. The research funding has mainly come from Swedish industry and the Swedish Energy Agency. Projects have been funded through the following centers: the Centre of Excellence on Electric Engineering (EKC), the Swedish Wind Power Technology Center (SWPTC), and the Swedish Centre for Smart Grids and Energy Storage (SweGRIDS).
The wind power applications were also funded within the Vindforsk research program, Vattenfall, Gothenburg Energy, and the China Scholarship Council (CSC) collaboration with KTH. The post doc project that continued the work of developing the models was sponsored within the SweGRIDS research program. The author sincerely thanks all the students and researchers who have contributed to this work over the years. Without all your effort, this work would not have been possible! The author also thanks the different sponsors of the research projects for their engagement in the projects and for providing input data.
References

1. The 2030 Agenda for Sustainable Development, Adopted by all United Nations Members in 2015. Including the 17 SDGs https://sdgs.un.org/goals. 2. European Union, European Green Deal Call: A €1 billion investment to boost the green and digital transition | European Circular Economy Stakeholder Platform (europa.eu), Sept. 2020. 3. Global Energy Review 2021, “Assessing the effects of economic recoveries on global energy demand and CO2 emissions in 2021”, available: https://www.iea.org/ 4. GWEC, “GLOBAL WIND REPORT 2021”, available: https://gwec.net/ 5. N. Renström, P. Bangalore, and E. Highcock, “System-wide anomaly detection in wind turbines using deep autoencoders”, Renewable Energy, 157 (2020), 647–659. 6. Y. Cui, P. Bangalore and L. Bertling Tjernberg, “An Anomaly Detection Approach Using Wavelet Transform and Artificial Neural Networks for Condition Monitoring of Wind Turbines’ Gearboxes”, 2018 Power Systems Computation Conference (PSCC), 2018, pp. 1–7, https://doi.org/10.23919/PSCC.2018.8442916. 7. L. Wang, Z. Zhang, H. Long, J. Xu and R. Liu, “Wind Turbine Gearbox Failure Identification With Deep Neural Networks”, in IEEE Transactions on Industrial Informatics, vol. 13, no. 3, pp. 1360–1368, June 2017, https://doi.org/10.1109/TII.2016.2607179. 8. P. Bangalore and L. B. Tjernberg, “An Artificial Neural Network Approach for Early Fault Detection of Gearbox Bearings”, in IEEE Transactions on Smart Grid, vol. 6, no. 2, 2015, pp. 980–987, https://doi.org/10.1109/TSG.2014.2386305. 9. Q. Huang, Y. Cui, L. B. Tjernberg and P. Bangalore, “Wind Turbine Health Assessment Framework Based on Power Analysis Using Machine Learning Method”, 2019 IEEE PES
Innovative Smart Grid Technologies Europe (ISGT-Europe), 2019, pp. 1–5, https://doi.org/ 10.1109/ISGTEurope.2019.8905495. 10. Y. Cui, P. Bangalore and L. B. Tjernberg, “An Anomaly Detection Approach Based on Machine Learning and SCADA Data for Condition Monitoring of Wind Turbines” 2018 IEEE International Conference on Probabilistic Methods Applied to Power Systems (PMAPS), 2018, pp. 1–6, https://doi.org/10.1109/PMAPS.2018.8440525. 11. Y. Cui, P. Bangalore, and L. B. Tjernberg, “A fault detection framework using recurrent neural networks for condition monitoring of wind turbines”, Wind Energy, 2021, 24(11), pp. 1249– 1262, https://doi.org/10.1002/we.2628. 12. J. S. Lal Senanayaka, H. Van Khang and K. G. Robbersmyr, “Autoencoders and Recurrent Neural Networks Based Algorithm for Prognosis of Bearing Life”, 2018 21st International Conference on Electrical Machines and Systems (ICEMS), 2018, pp. 537–542, https:// doi.org/10.23919/ICEMS.2018.8549006. 13. Z. Sun and H. Sun, “Stacked Denoising Autoencoder With Density-Grid Based Clustering Method for Detecting Outlier of Wind Turbine Components”, in IEEE Access, vol. 7, pp. 13078–13091, 2019, https://doi.org/10.1109/ACCESS.2019.2893206. 14. X. Wu, G. Jiang, X. Wang, P. Xie and X. Li, “A Multi-Level-Denoising Autoencoder Approach for Wind Turbine Fault Detection”, in IEEE Access, vol. 7, pp. 59376–59387, 2019, doi: https://doi.org/10.1109/ACCESS.2019.2914731. 15. J. Eduardo Urrea Cabus, Y. Cui, P. Bangalore, and L. Bertling Tjernberg, An Anomaly Detection Approach Based on Autoencoders for Condition Monitoring of Wind Turbines In proceedings of the International Conference on Probabilistic Methods Applied to Power Systems (PMAPS), Manchester, UK, June 2022. 16. EU taxonomy for sustainable activities | European Commission (europa.eu). 17. L. B. 
Tjernberg, “Chapter 11: Sustainable electricity grids – A prerequisite for the energy system of the future,” in Towards the energy of the future – The invisible revolution behind the electrical socket, Stockholm, Vetenskap & Allmänhet, 2022. (Available from www.energiantologi.se). 18. IEEE, IEEE Recommended Practice for the Design of Reliable Industrial and Commercial Power Systems, The Gold Book. IEEE Std 493-2007, February 2007. 19. Rausand, M. and Hoyland, A., Introduction, in System Reliability Theory: Models, Statistical Methods, and Applications, 2nd edition. John Wiley & Sons, Inc., Hoboken, NJ, 1994. http://onlinelibrary.wiley.com/doi/10.1002/9780470316900.ch1/summary. 20. IEC, 60050-901:2013, International Electrotechnical Vocabulary — Part 901: Standardization, 2013. 47pp. 21. Billinton, R. and Allan, R., Reliability Evaluation of Power Systems, 2nd edition. Plenum Press, New York, ISBN 0-306-45259-6, 1996. 22. Billinton, R., Bibliography on the application of probability methods in power system reliability evaluation, IEEE Transactions on Power Apparatus and Systems, Volume PAS-91, Issue 2, IEEE, 1972. 23. Billinton, R., Bibliography on the application of probability methods in power system reliability evaluation, 1972, PAS-91. 24. Billinton, R., Fotuhi-Firuzabad, M., and Bertling, L., Bibliography on the application of probability methods in power system reliability evaluation 1996–1999, IEEE Transactions on Power Systems, 16(4), 595–602, November 2001. 25. ISO, Standard ISO 55000:2014, Asset Management—Overview, Principles and Terminology, ISO, 2014. Available at: https://www.iso.org/standard/55088.html 26. Tjernberg, L. B. (2018). Infrastructure Asset Management with Power System Applications. CRC Press. 27. G. Parmigiani, Decision Theory: Bayesian, in International Encyclopedia of the Social & Behavioral Sciences, 2001. 28. Scenarios for the energy transition – global experience and best practices, International Renewable Energy Agency (IRENA), September 2020. ISBN: 978-92-9260-267-3 (Available online 2022-05-23 Scenarios for the Energy Transition: Global experience and best practices (irena.org))
29. Endrenyi J., Anders G., Bertling L., Kalinowski B., Comparison of Two Methods for Evaluating the Effects of Maintenance, Invited paper to special session at the 8th International Conference on Probabilistic Methods Applied to Power Systems (PMAPS), Ames, Iowa, September 2004. 30. Samuel, A. L. (1959). “Some Studies in Machine Learning Using the Game of Checkers”. IBM Journal of Research and Development. 3 (3): 210–229. 31. Nowlan, F. S. and Heap, H. F., Reliability Centered Maintenance. Technical Report. National Technical Information Service, U.S. Department of Commerce, Springfield, Virginia, 1978. 32. Smith, A. M., Reliability Centred Maintenance. McGraw-Hill, USA, 1993. 33. J. Moubray, “Reliability-Centered Maintenance”, Industrial Press Inc., New York, USA, 1997, ISBN 0-8311-3078-4 34. Bertling, L., Allan, R. N., and Eriksson, R., A reliability-centred asset maintenance method for assessing the impact of maintenance in power distribution systems, IEEE Transactions on Power Systems, 20(1), 75–82, February 2005. 35. CIGRE, “Guide on transformer intelligent condition monitoring (TICM) systems,” CIGRE WG A2.44, Tech. Rep., 2015. 36. A. Heydari et al., “A Hybrid Intelligent Model for the Condition Monitoring and Diagnostics of Wind Turbines Gearbox,” IEEE Access, pp. 89878–89890, Vol. 9, 2021. 37. Y. Cui, P. Bangalore and L. B. Tjernberg, “A fault detection framework using RNNs for condition monitoring of wind turbines,” Wind Energy, 2021. (This is the same reference as number [11]). 38. CIGRE, “Condition assessment of power transformers,” CIGRE WG A2.49, Tech. Rep., 2019. 39. Besnard, F.; Fischer, K.; Bertling, L.: Reliability-Centred Asset Maintenance – A step towards enhanced reliability, availability, and profitability of wind power plants. In Proceedings of IEEE PES ISGT Europe 2010, October 2010, Gothenburg, ISBN/ISSN: 978-142448510-9. 40. R. Dekker, “Application of maintenance optimization models: a review and analysis”, Journal of Reliability Engineering and System Safety, 1996, 51(3):229–240. 41. A. Hoyland and M. Rausand, “System Reliability Theory: Models and Statistical Methods – Second Edition”, Wiley, New Jersey, USA, 2004. ISBN 0-471-47133-X 42. R.E. Barlow and F. Proschan, “Mathematical Theory of Reliability”, Wiley, New York, USA, 1965, ISBN 978-0-898713-69-5 43. A.K.S. Jardine and A.H.C. Tsang, “Maintenance, Replacement, and Reliability – Theory and Applications”, Taylor and Francis, Boca Raton, USA, 2006, ISBN 0-8493-3966-9 44. J.A. Nachlas, “Reliability Engineering – Probabilistic Models and Maintenance Methods”, Taylor and Francis, Boca Raton, USA, 2005. ISBN 0-8493-3598-1. 45. D. M. Frangopol, M.-J. Kallen and J. M. van Noortwijk, “Probabilistic models for life-cycle performance of deteriorating structures: review and future directions”. Journal of Progress in Structural Engineering and Materials, 2004, 6(4):197–212. 46. J.M. van Noortwijk. “A survey of application of gamma processes in maintenance”. Reliability Engineering and System Safety, 94(1):2–21, 2009. 47. W. Q. Meeker and L. A. Escobar, “Statistical Methods for Reliability Data”, Wiley, New York, USA, 1998. ISBN 978-0-471-14328-4. 48. T. Welte, J. Vatn, and J. Heggset, “Markov state model for optimization of maintenance and renewal of hydro power components”. In proc. of the 9th International Conference on Probabilistic Methods Applied to Power Systems, Stockholm, Sweden, 11–12th June, 2006. 49. C. S. Gray and S. J. Watson, “Physics of Failure approach to wind turbine condition based maintenance”, Wind Energy, Published online, 2009. 50. T. Welte, “Using state diagrams for modeling maintenance of deteriorating systems”, IEEE Transactions on Power Systems, 24(1):58–66, 2009. 51. J. Nilsson and L.
Bertling, “Maintenance Management of Wind Power Systems Using Condition Monitoring Systems – Life Cycle Cost Analysis for Two Case Studies”, IEEE Transactions on Energy Conversion, vol. 22, no. 1, pp.223–229, March 2007.
52. J. Ribrant and L.M. Bertling, “Survey of failures in wind power systems with focus on Swedish wind power plants during 1997–2005”, IEEE Transactions on Energy Conversion, 2007, 22(1):167–173. 53. J.A. Andrawus, “Maintenance optimization for wind turbines”, PhD thesis, Robert Gordon University, Aberdeen, United Kingdom, 2008. 54. F. Besnard, J. Nilsson and L. Bertling, “On the Economic Benefits of using Condition Monitoring Systems for Maintenance Management of Wind Power Systems”, In Proc. of Probabilistic Methods applied to Power Systems, Singapore, 14–17 June 2010. 55. S. Faulstich, P. Lyding, B. Hahn and D. Brune, “A Collaborative Reliability Database for Maintenance Optimisation”, In Proc. of European Wind Energy Conference 2010, Warsaw, Poland, 20–23 April 2010. 56. Besnard, F., Bertling L.: An Approach for Condition-Based Maintenance Optimization Applied to Wind Turbine Blades. IEEE Transactions on Sustainable Energy, Vol. 1, No. 2, pp. 77–83, July 2010. 57. F. Besnard, M. Patriksson, A. Strömberg, A. Wojciechowski and L. Bertling. “An Optimization Framework for Opportunistic Maintenance of Offshore Wind Power System”, In Proc. of IEEE PowerTech 2009 Conference, Bucharest, Romania, 28 June – 2 July 2009. 58. Z. Hameed and J. Vatn, “Grouping of maintenance and optimization by using genetic algorithm”, In proc. of ESREDA 2010, Pecs, Hungary, 4–5 May 2010. 59. M. Lindqvist and J. Lundin, “Spare Part Logistics and Optimization of Wind Turbines – Methods for Cost-Effective Supply and Storage”, Master Thesis, Uppsala University, 2010. 60. L.W.M.M. Rademakers, H. Braam, T.S. Obdam, P. Frohböse and N. Kruse, “Tools for Estimating Operation and Maintenance Costs of Offshore Wind Farms: State of the Art”, In Proc. of European Wind Energy Conference 2008, Brussels, Belgium, 31 March–3 April 2008. 61. Besnard F., Fischer K., Bertling Tjernberg L., A Model for the Optimization of the Maintenance Support Organization for Offshore Wind Farms, IEEE Transactions on Sustainable Energy, Vol. 4, No. 2, pp. 443–450, April 2013. 62. “Std 100 – The Authoritative Dictionary of IEEE Standards Terms”, Standards Information Network, IEEE Press, 2000, New York, USA, ISBN 0-7381-2601-2 63. S. Faulstich, B. Hahn, P. Lyding and P. Tavner, “Reliability of offshore turbines – identifying risks by onshore experience”, In Proc. of European Offshore Wind 2009, Stockholm, Sweden, 14–16 September 2009. 64. S. Faulstich, B. Hahn, H. Jung and K. Rafik, “Suitable failure statistics as a key for improving availability”, In Proc. of European Wind Energy Conference 2009, Marseille, France, 16–19 March 2009. 65. S. Faulstich, B. Hahn, H. Jung, K. Rafik and A. Ringhandt, “Appropriate failure statistics and reliability characteristics”, In Proc. of DEWEK 2008, Bremen, Germany, 26–27 September 2008. 66. S. Faulstich, B. Hahn and P. Lyding, “Electrical subassemblies of wind turbines – a substantial risk for the availability”, In Proc. of European Wind Energy Conference 2010, Warsaw, Poland, 20–23 April 2010. 67. A. Stenberg, “Analys av vindkraftsstatistik i Finland” (Analysis of wind power statistics in Finland), Master Thesis, Aalto University, 2010 (In Swedish). 68. M. Wilkinson et al., “Methodology and Results of the Reliawind Reliability Field Study”, In Proc. of European Wind Energy Conference 2010, Warsaw, Poland, 20–23 April 2010. 69. P. Asmus and M. Seitzler, “The Wind Energy Operations & Maintenance Report”, Wind Energy Update, 2010. 70. F. Spinato, P.J. Tavner, G.J.W. van Bussel, and E. Koutoulakos, “Reliability of wind turbine subassemblies”, IET Proceedings Renewable Power Generation, 2008, 3(4): 387–401. 71. E. Echavarria, B. Hahn, G.J.W. van Bussel and T. Tomiyama, “Reliability of wind turbine technology through time”, Journal of Solar Energy Engineering, 2008, 130(3):1–7. 72. M.A. Drewry and G.A.
Georgiou, “A review of NDT techniques for wind turbines”, Insight, 49(3):137–141, 2007.
73. Fischer, K.; Besnard, F.; Bertling, L.: A Limited-Scope Reliability-Centred Maintenance Analysis of Wind Turbines. In Scientific Proceedings of the European Wind Energy Conference & Exhibition (EWEA) 2011, Brussels, March 2011. 74. Vattenfall, Driftuppföljning Vindkraft www.vindstat.nu, visited Dec.2010. 75. Vestas: www.vestas.com, visited Jan. 2011. 76. L. Lin, F. Sun, Y. Yang, Q. Li, Comparison of Reactive Power Compensation Strategy of Wind Farm Based on Optislip Wind Turbines. Proc. of the SUPERGEN Conference 2009, Nanjing, China, 6–7 April 2009. 77. Vestas Wind Systems A/S, V90-1.8MW & 2MW – Built on experience. Product brochure, Randers, Denmark, 2007 78. Vestas Wind Systems A/S: General Specification V90-1.8MW/2MW OptiSpeed Wind Turbine. Randers, Denmark, 2005. 79. Vestas Wind Systems A/S: V90-1.8MW/2MW. Product brochure, Randers, Denmark, 2009. 80. Vestas Wind Systems A/S: V90-1.8MW/2MW. Product brochure, Randers, Denmark, 2010. 81. SwedPower AB, Felanalys – Database of failures for Swedish wind turbines 1989-2005. Data compiled by SwedPower AB, Stockholm, on behalf of STEM and ELFORSK, 2005. 82. R. Balschuweit, Beanspruchungs- und Schadensanalyse von Windenergieanlagen am Beispiel der Vestas V90-2MW. Diplomarbeit, TFH Berlin in cooperation with Vattenfall, Germany, 2009. 83. E. Koutoulakos, Wind turbine reliability characteristics and offshore availability assessment. Master’s thesis, TU Delft, 2008. 84. VGB PowerTech, Guideline Reference Designation System for Power Plants, RDS-PP – Application Explanations for Wind Power Plants. VGB 116 D2, 1 st Ed., Germany, 2007. 85. VGB PowerTech, Richtlinie EMS – Ereignis-Merkmal-Schlüsselsystem. VGB-B 109, Germany, 2003. 86. RELIAWIND, FP7-ENERGY, European Commission, 2008-2011. 87. Yue C., Bangalore P, Bertling Tjernberg L. A fault detection framework using recurrent neural networks for condition monitoring of wind turbines. Wind Energy. 2021;1–14. 
https://doi.org/10.1002/we.2628 (This is the same reference as [11] and [37]). 88. Yang W, Tavner P, Tian W. Wind turbine condition monitoring based on an improved spline-kernelled chirplet transform. IEEE Trans Indust Electron. 2015;62(10):6565–6574. 89. Qiao W, Lu D. A survey on wind turbine condition monitoring and fault diagnosis-part I: components and subsystems. IEEE Trans Indust Electron. 2015;62(10):6536–6545. 90. Qiao W, Lu D. A survey on wind turbine condition monitoring and fault diagnosis-part II: signals and signal processing methods. IEEE Trans Indust Electron. 2015;62(10):6546–6557. 91. Qiu Y, Feng Y, Tavner P, Richardson P, Erdos G, Chen B. Wind turbine SCADA alarm analysis for improving reliability. Wind Energy. 2012;15(8):951–966. 92. Kusiak A, Li W. The prediction and diagnosis of wind turbine faults. Renew Energy. 2011;36(1):16–23. 93. Chalapathy R, Chawla S. Deep learning for anomaly detection. https://arxiv.org/abs/1901.03407. 94. Liu FT, Ting KM, Zhou Z. Isolation forest. In: IEEE International Conference on Data Mining; Pisa, Italy; 2008:413–422. 95. Song Z, Zhang Z, Jiang Y, Zhu J. Wind turbine health state monitoring based on a Bayesian data-driven approach. Renew Energy. 2018;125:172–181. 96. Pandit R, Infield D. SCADA-based wind turbine anomaly detection using Gaussian process models for wind turbine condition monitoring purposes. IET Renew Power Gen. 2018;12(11):1249–1255. 97. Chen B, Matthews PC, Tavner PJ. Automated on-line fault prognosis for wind turbine pitch systems using supervisory control and data acquisition. IET Renew Power Gen. 2015;9(5):503–513. 98. Zaher A, Mcarthur S, Infield DG, Patel Y. Online wind turbine fault detection through automated SCADA data analysis. Wind Energy. 2009;12(6):574–593.
99. Bangalore P, Tjernberg LB. An artificial neural network approach for early fault detection of gearbox bearings. IEEE Trans Smart Grid. 2015;6(2):980–987. 100. Wang L, Zhang Z, Long H, Xu J, Liu R. Wind turbine gearbox failure identification with deep neural networks. IEEE Trans Indust Inform. 2017;13(3):1360–1368. 101. Wang L, Zhang Z, Xu J, Liu R. Wind turbine blade breakage monitoring with deep autoencoders. IEEE Trans Smart Grid. 2018;9(4):2824–2833. 102. Graves A, Schmidhuber J. Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Netw. 2005;18:602–610. 103. Cho K, Merrienboer B, Gulcehre C, et al. Learning phrase representations using RNN encoder-decoder for statistical machine translation. https://arxiv.org/abs/1406.1078 104. Hundman K, Constantinou V, Laporte C, Colwell I, Soderstrom T. Detecting spacecraft anomalies using LSTMs and nonparametric dynamic thresholding. In: International Conference on Knowledge Discovery and Data Mining; London, United Kingdom; 2018:387–395. 105. Kong W, Dong ZY, Jia Y, Xu Y, Zhang Y. Short-term residential load forecasting based on LSTM recurrent neural network. IEEE Trans Smart Grid. 2019;10(1):841–851. 106. Siffer A, Fouque P, Termier A, Largouet C. Anomaly detection in streams with extreme value theory. In: Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; 2017; Halifax, NS, Canada:1067–1075. 107. Hochreiter S. The vanishing gradient problem during learning recurrent neural nets and problem solutions. Int J Uncertainty, Fuzz Knowl-Based Syst. 1998;6(2):107–116. 108. Hochreiter S, Schmidhuber J. Long short-term memory. Neural Comput. 1997;9(8):1735–1780. 109. Gers FA, Schmidhuber J, Cummins F. Learning to forget: continual prediction with LSTM. In: International Conference on Artificial Neural Networks; Edinburgh, Scotland; 1999:850–855. 110. Gers FA, Schraudolph NN, Schmidhuber J.
Learning precise timing with LSTM recurrent networks. J Machine Learn Res. 2002;3:115–143. 111. Pascanu R, Mikolov T, Bengio Y. On the difficulty of training recurrent neural networks. In: International Conference on Machine Learning; 2013; Atlanta, GA, USA. https://arxiv.org/ abs/1211.5063 112. Glorot X, Bengio Y. Understanding the difficulty of training deep feedforward neural networks. In: International Conference on Artificial Intelligence and Statistics; Chia Laguna Resort, Sardinia, Italy; 2010:249–256. 113. Kingma DP, Ba JL. Adam: A method for stochastic optimization. In: International Conference on Learning Representations; 2015; San Diego, CA, USA. https://arxiv.org/abs/1412.6980 114. Wilkinson M, Hendriks B, Gomez E, et al. Methodology and result
Professor Dr. Lina Bertling Tjernberg (Tjernberg from 2011) was born in Huddinge, a suburb of Stockholm, Sweden, where she grew up with her parents, sister, and family dog, and a large family spread all over the country (from Karlskrona in the south to Luleå in the north). Summer holidays were spent in the cottage just south of Stockholm at Muskö or at sea sailing. The family business in advanced medical diagnostic methods was founded by her father, Jan Bertling, and had expanded internationally. Family life and work have always been in focus, with a great joint interest in technology developments, international exchange, cultural art history, nature, and gardening, guided by her mother, Margareta Bertling. Lina has always enjoyed writing, and documenting life in diaries and photographs has always been a keen interest. Her vision for professional life was to follow her grandfather, Gunnar Berglund, in becoming a lawyer. She was however inspired by
her older sister Sofia Bertling and decided to take the technical program in high school. For the last years of high school, Lina moved to Östhammar and followed the energy program at the Forsmark school. This led both her and her sister to studies at KTH Royal Institute of Technology in Stockholm. Lina started in 1992 in the vehicle engineering program and later specialized in systems engineering and applied mathematics. In 1997 she started PhD studies in the electrical engineering department, inspired to continue exploring her interest in applied reliability theory in a new application area: power grid technology. Here, her international advisor Professor Ron Allan at the University of Manchester, UK, and her two supervisors at KTH, Professor Roland Eriksson and Professor Göran Andersson (later with ETH Zurich), became her role models and provided inspiration for academic life and exchange with the power industry. International collaborations have always been an important part of her work, with extensive volunteer work within IEEE and the Power & Energy Society (PES), including several leadership positions. She has served in the PES board as Secretary (2014–2016) and as Treasurer (2012–2014), as Chair of the IEEE Sweden Chapter (2009–2019), as an editorial board member of the IEEE Transactions on Smart Grid (2010–2015), as Chair of the IEEE PES ISGT Europe steering committee (2010–2014), and as an officer in the board of the IEEE PES Subcommittee on Risk, Reliability, and Probability Applications (RRPA) (2007–2013). Currently, she is a Distinguished Lecturer of PES, a member of the IEEE PES Industry Technical Support Leadership Committee, the IEEE PES ISGT Europe steering committee, and the 2023 IEEE Herman Halperin Electric Transmission & Distribution Award Committee. She has so far had three international research visits. Firstly, in 2000, with the power system reliability group led by Professor Roy Billinton at the University of Saskatchewan, Saskatoon, Canada.
The second, in 2002/2003, was for postdoctoral studies at the University of Toronto, associated with Kinectrics, where she worked with Dr. John Endrenyi and Professor George Anders. The latest research visit was as a guest professor in 2014 at Stanford University, in the Civil and Environmental Engineering Department, with the research group and laboratory for creating sustainable engineering systems with renewable energy systems (led by Associate Professor Ram Rajagopal). Lina presented her PhD thesis on power systems at KTH in 2002 and then founded her own research group, the Reliability-Centered Asset Management (RCAM) group. At the Swedish national grid operator (2007–2009), she served as Director of Research and Development and worked in the asset management group for the first year. In 2009, she was recruited to Chalmers University of Technology as Professor in Sustainable Power Systems and served as chair of the power systems division. In 2013 she returned to KTH as professor and chair in power grid technology. Since 2018, she has served as Director of the Energy Platform, which stimulates and initiates new collaborations among energy researchers at KTH and provides a platform for collaboration and exchange with society. In 2021 she was appointed as the coordinator for lifelong learning at the School of Electrical Engineering and
Computer Science. Her research expertise is in applied reliability theory and predictive maintenance, and in system solutions for the GreenGrids vision (power grid technologies for the smart grid). She is the author of 100+ scientific papers in journals and international conferences, including the book Infrastructure Asset Management with Power System Applications (CRC Press/Taylor & Francis, 2018). In 2021, she was awarded the Power Woman of the Year award. The award is presented by the Minister of Energy in Sweden on behalf of the association Kraftkvinnor (Power Women). Kraftkvinnor is an association and network that aims to raise the profiles of and promote highly talented women in the energy sector and, at the same time, dispel the notion that there is a shortage of experienced women in the sector. The network aims to increase the proportion of women in management groups and boards of energy sector companies and organizations: having more female role models in executive positions will attract more women to the sector. The Power Woman of the Year award is therefore a tool for achieving this goal and thereby accelerating the transition of the energy system. The citation reads: "This year, the Power Woman of the year Award goes to a woman, who in the traditionally male dominated energy sector, has been able to become a strong force and guarantor for a more gender balanced talent provision within the sustainable energy systems of both today and tomorrow. With her unmistakable enthusiasm, leadership and staying power, she has systematically and in a successful way, built foundations and platforms in a farsighted study programme to educate tomorrow's energy sector employees. Via her work, she is an important bridge between education and enterprise in shaping future know-how in a rapidly changing energy system.
She highlights how vital and crucial it is for the energy sector to reach out to coming generations by actively communicating with school age children and teenagers, not least girls, in painting an exciting and attractive picture of a more digitalised and electrified energy system. With strength, patience and concrete actions in her role as a professor in the academic arena, she has been able to build a strong scientific and media platform on her own merits to raise energy issues far beyond being purely a technical issue. In naming Lina Bertling Tjernberg this year's Power Woman, we wish to give her more wind in her sails in her quest to persuade more young people, not least girls, to choose and be welcomed into the energy sector and thereby contribute to genuine and long term gender equality." Lina has a great passion for communicating research and for building bridges between different actors. She has been a volunteer in the society for members of parliament and researchers and is currently its vice chair, representing the researchers. In 2022 the KTH Energy Platform published a popular science anthology on paving the way for the transformation of the energy system. Lina is the author of two chapters in the book Towards the energy of the future – The invisible revolution behind the electrical socket, published by VA Public & Science.
In 2022 she was included in the list of the 20 most powerful people in the energy sector in Sweden. At the top of the list, both in 2019 and in 2022, is Anna Borg, the CEO of Vattenfall. Also in 2022, she was elected a Fellow of the Royal Swedish Academy of Engineering Sciences (IVA).
Chapter 6
Nuclear Power in the Twenty-First Century? – A Personal View
Jasmina Vujic
6.1 Introduction
Nuclear power, which accounts for about 10% of the world's electricity supply, is currently the only technology with a secure base-load electricity supply and no greenhouse gas emissions that has the potential to expand at a large scale and effectively replace fossil fuels [1]. In addition, serious concerns about energy independence, sustainability, competitiveness, and security keep the nuclear energy option open in the USA and around the world. Currently, 32 countries worldwide operate 442 nuclear power reactors for electricity generation (with a total net installed capacity of 392,612 MW), and 52 new nuclear power reactors are under construction in 19 countries [1]. The largest numbers of reactors under construction are in China (13) and India (7). The largest numbers of reactors planned for construction are in China (29), followed by Russia (19) and Japan (9). The USA, with 94 operating nuclear power reactors in 31 states (with a total installed net capacity of 97 GW(e) and an average capacity factor of 92.5%, producing about 20% of total US electricity), is the country with the largest number of operating nuclear power plants (NPPs) [2]. All 94 operating reactors in the USA are Light Water Reactors (LWRs), with 63 Pressurized Water Reactors (PWRs) and 35 Boiling Water Reactors (BWRs). There are only two nuclear power reactors under construction in the USA at present. France has 56 nuclear power reactors operated by Électricité de France (EDF), with a total capacity of over 61 GW(e), which generated 71% of the country's total electricity production in 2020 [3]. The nuclear share of electricity generation in France is
J. Vujic () University of California at Berkeley, Berkeley, CA, USA e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 J. S. Tietjen et al. (eds.), Women in Power, Women in Engineering and Science, https://doi.org/10.1007/978-3-031-29724-3_6
Fig. 6.1 Generations of nuclear power reactors [5]: Gen I, early prototype reactors (from the 1950s); Gen II, commercial power reactors (from the 1970s); Gen III, advanced LWRs (from the 1990s); Gen III+, evolutionary designs (from the 2010s); Gen IV, revolutionary designs (around 2030)
the largest in the world. France decided to focus on PWR reactors only; currently, there is only one PWR reactor under construction in France. In 1974, after the world oil crisis, the French government decided to rapidly expand the country's nuclear power capacity to minimize oil imports and achieve greater energy security. Russia has 38 operating reactors totaling 29 GW(e), which provide close to 21% of its electricity generation, and 3 power reactors under construction [1]. The fleet comprises various reactor types: 4 first-generation VVER-440/230 or similar pressurized water reactors; 2 second-generation VVER-440/213 pressurized water reactors; 11 third-generation VVER-1000 pressurized water reactors with a full containment structure, mostly of the V-320 type; 8 RBMK light water graphite reactors; and 4 small graphite-moderated BWR reactors in eastern Siberia, constructed in the 1970s for cogeneration. In addition, Russia operates the only two large commercial sodium-cooled fast-breeder reactors in the world, BN-600 and BN-800, and has the world's largest fleet of nuclear-powered icebreakers and merchant ships. The first floating nuclear power plant, "Academician Lomonosov," started commercial operation in May 2020, providing electricity and heat to the town of Pevek in the far northeast of Siberia [4]. It houses two Small Modular Reactors (SMRs) of the KLT-40S type, 35 MWe each, a modified icebreaker propulsion reactor, to be used for electricity and heat generation as well as for desalination.
Since its beginnings in the early 1950s, nuclear power technology has evolved through the following generations of system designs: Generation I, mostly early prototypes and first-of-a-kind reactors built between the 1950s and 1970s (such as Shippingport, Dresden, and Magnox); Generation II, reactors built from the 1970s to the 1990s, most of which are still in operation today (such as PWR and VVER, BWR, and CANDU); and Generation III, evolutionary advanced reactors with active safety systems built by the turn of the twenty-first century (such as General Electric's Advanced BWR and Framatome's EPR) (Fig. 6.1) [5]. The newest Westinghouse AP1000 and GE's ESBWR designs, which feature passive safety systems, belong to Generation III+.
After expansion in the 1970s and early 1980s, growth of nuclear power slowed in most countries in the 1980s and 1990s. Many restrictions on the construction of new NPPs were imposed after the accidents at Three Mile Island (USA) in 1979 and at Chernobyl (the Soviet Union) in 1986. While most other countries continued to build NPPs, the last NPP ordered in the USA was in 1979. At the beginning of the 2000s, there was worldwide interest in building new NPPs, not only in the USA, Russia, the UK, and France but also in developing countries with large populations, such as China and India. Some called it a nuclear renaissance, because many countries that had never had NPPs considered building one. After the Fukushima accident in 2011, the predicted growth of nuclear power slowed considerably, despite the fact that no one was killed in the accident and no long-term environmental or health impacts are expected. However, there is no doubt that countries with large populations that rely mostly on fossil fuels for electricity production will continue to invest seriously in nuclear power. In order to accelerate the development of the next generation of power reactors, the Generation IV International Forum (GIF) was established in 2000 by 9 countries, with an additional 4 countries joining later [5]. The original "Technology Roadmap for Generation IV Nuclear Energy Systems" was published in 2002 [6], presenting six selected reactor systems: (1) gas-cooled fast reactor (GFR), (2) lead-cooled fast reactor (LFR), (3) molten salt reactor (MSR), (4) sodium-cooled fast reactor (SFR), (5) supercritical-water-cooled reactor (SCWR), and (6) very-high-temperature reactor (VHTR).
These reactors were selected from among some 100 designs in order to satisfy the following goals [6]: (1) sustainability, to generate energy sustainably, promote long-term availability of nuclear fuels, and minimize nuclear waste; (2) safety and reliability, to excel in safety and reliability, to have a very low likelihood and degree of reactor core damage, and to eliminate the need for offsite emergency response; (3) economics, to have a life cycle cost advantage over other energy sources and a level of financial risk comparable to other energy projects; and (4) proliferation resistance and physical protection, to be a very unattractive route for diversion or theft of weapon-usable materials and to provide increased physical protection against acts of terrorism. Over the last 10 years, the focus has shifted to the development of advanced SMRs, with a power output between 20 MWe and 300 MWe, as well as microreactors (1–20 MWe) [6]. During the Trump administration, two pieces of legislation were passed by the US Congress and then signed into law (in 2018 and 2019) [7]: the Nuclear Energy Innovation and Modernization Act (NEIMA) directs "the Nuclear Regulatory Commission to make regulations move more quickly with respect to new nuclear reactors and to establish a better and faster licensing structure for advanced nuclear reactors," and the Nuclear Energy Innovation Capabilities Act (NEICA) updates the Department of Energy mission and objectives to support deployment of advanced reactors and fuels, to enable the private sector to partner with national laboratories for the purpose of developing novel reactor concepts, to leverage its supercomputing infrastructure, and to enable the private sector to construct and operate privately funded reactor prototypes at DOE sites. At the end of 2020, DOE awarded TerraPower and X-energy $80 million each in initial funding under
the Advanced Reactor Demonstration Program, to build "two advanced reactors that can be operational within 7 years" [8]. In addition, DOE selected five teams to receive $30 million in FY2020 funding for Risk Reduction for Future Demonstration projects (all awards are cost-shared) to "design and develop safe and affordable reactor technologies that can be licensed and deployed over the next 10 to 14 years," including the Hermes Reduced-Scale Test Reactor (Kairos Power), the eVinci™ Microreactor (Westinghouse Electric Company), the BWXT Advanced Reactor (BWXT Advanced Technologies), the Holtec SMR-160 Reactor (Holtec Government Services), and the Molten Chloride Reactor Experiment (Southern Company Services) [9]. In mid-April 2022, BloombergNEF announced the 12 winners of its 2022 BNEF Pioneers award – "early-stage companies that are pursuing significant low-carbon opportunities." Among the awardees is Kairos Power, which received the award for "development of a novel advanced nuclear reactor technology to complement renewable energy sources" as "the first advanced nuclear company recognized in the award's 13-year history" [10]. An interesting fact is that Kairos Power [11] was founded by University of California Department of Nuclear Engineering faculty and former students, as the first "nuclear start-up" in Silicon Valley. In summary, it is clear that a new phase in advanced nuclear reactor development in the USA has begun. The key drivers for this development, particularly with regard to SMRs, include rising demand for energy in general, requirements for reduction in carbon emissions, energy independence and security, as well as successful management of nuclear proliferation risks. Demand for nuclear energy is also amplified in part by its application to a broader range of energy services beyond electricity generation (e.g., desalination, process heat, propulsion, hydrogen production, transportation fuels, etc.).
6.2 Why Nuclear Power: A Personal Note
On February 11, 1939, a "one-page note" by Lise Meitner and her nephew Otto Robert Frisch, entitled "Disintegration of Uranium by Neutrons: A New Type of Nuclear Reaction," appeared in the journal Nature; there, for the first time, a theoretical explanation for the splitting of uranium atoms was published, and the term "fission" was coined for the process, by analogy with cell division in biology [12]. They also calculated that the two fission fragments should gain a total kinetic energy of about 200 MeV due to the conversion of mass into energy, according to Einstein's famous mass-energy relation (E = mc²). The Royal Swedish Academy of Sciences awarded the 1944 Nobel Prize in Chemistry (in 1945) to Otto Hahn for the discovery of nuclear fission. Lise Meitner's work was overlooked by the Nobel Prize Committee, as was the work of her nephew Otto Robert Frisch and of the French team of Joliot-Curie and Savich. Meitner, a physicist who collaborated for 30 years with Otto Hahn, a chemist, had to flee Nazi Germany in the summer of 1938, on the brink of the discovery of nuclear fission. Otto Hahn never acknowledged Meitner's or anybody else's contribution to the
discovery of nuclear fission. The dispute over who has priority for this discovery that changed the world continues to this day [13]. Although Meitner did not receive a Nobel Prize for her contributions to the discovery of nuclear fission, she has a chemical element named in her honor – Meitnerium. The German team of Otto Hahn, Lise Meitner, and Fritz Strassmann had the best radiochemistry and nuclear physics expertise at that time, but from 1935 to 1937 they were not able to correctly identify the newly formed "transuranium" radioisotopes produced after heavy nuclides were bombarded by neutrons. The French team of Irene Joliot-Curie and Pavle Savich then entered the competition and devised their own experiments. In the resulting papers published in 1937 [14] and in October 1938 [15], Joliot-Curie and Savich pointed out a new radioisotope with a relatively long half-life of 3.5 hours that had the properties of Lanthanum (in the middle of the periodic table, Z = 57). They were within a hair's breadth of discovering nuclear fission but did not rule out the possibility that it could be some unknown transuranium isotope, with Z > 92. The experimental results published in the 1938 Joliot-Curie and Savich paper helped Meitner (who had left Nazi Germany and fled to Sweden) convince F. Strassmann and O. Hahn to repeat their experiments. O. Hahn and F. Strassmann's famous paper was published on January 6, 1939, in the German scientific journal Naturwissenschaften [16], where they pointed out: "We must name Barium, Lanthanum and Cerium, what was called previously Radium, Actinium and Thorium. This is a difficult decision, which contradicts all previous nuclear physics experiments." Meitner was not listed as one of the authors on this paper, because that was impossible due to the political situation in Germany. Although O. Hahn and F.
Strassmann confirmed in their paper the presence of radioactive species that behaved as chemical elements in the middle of the periodic table, namely, Barium (Z = 56) and Lanthanum (Z = 57), they failed to explain the physics behind the process and did not recognize that the atomic numbers (i.e., the numbers of protons) of the elements formed after a uranium nucleus is split must add up to 92. The theory of the new nuclear process named "fission" was explained for the first time by Lise Meitner and Otto Robert Frisch in their famous paper published in the British journal Nature on February 11, 1939 [12], after Frisch conducted his "recoil" experiment, in which he was able to detect large ionization signals due to the presence of fission fragments. Remarkably, within less than 2 months after the papers by Hahn and Strassmann [16] and by Meitner and Frisch [12] appeared in 1939, a large number of experimental and theoretical results were published around the world, and the possibility that the fissioning of heavy nuclei might be used to produce large amounts of energy was discussed. A French team led by Joliot-Curie in Paris and an American team led by Fermi in New York were rushing to prove that neutron-induced fission events in heavy nuclei would produce additional neutrons capable of causing new fissions, thus establishing the foundations for the chain reaction [17, 18]. The first controlled chain reaction was demonstrated experimentally by a team of scientists led by Fermi in Chicago on December 2, 1942, as part of the Manhattan Project [19]. Leo Szilard (one of only 50 people who were allowed to be present
at this remarkable event) recalled [20]: "There was a crowd there and when it dispersed, Fermi and I stayed there alone. I shook hands with Fermi and I said that I thought this day would go down as a black day in the history of mankind. I was quite aware of the dangers. Not because I am so wise but because I have read a book written by H. G. Wells called The World Set Free. He wrote this before the First World War and described in it the development of atomic bombs, and the war fought by atomic bombs. So, I was aware of these things. But I was also aware of the fact that something had to be done if the Germans get the bomb before we have it. They had knowledge. They had the people to do it and would have forced us to surrender if we didn't have the bomb also. We had no choice, or we thought we had no choice." In 1945, the USA dropped two atomic bombs on two Japanese cities, Hiroshima on August 6 and Nagasaki on August 9. The number of deaths within the first few months after the bombings was estimated at around 90,000–166,000 in Hiroshima and 60,000–80,000 in Nagasaki, with roughly half of the deaths in each city occurring on the first day [21]. Sadly, the world was introduced to nuclear energy in its most destructive form. The leading nuclear scientists, including Fermi, Joliot-Curie, and Oppenheimer, expressed their deep disappointment and sorrow that the use of nuclear energy was not limited to peaceful energy production. In 1952, Fermi said: "It was our hope during the war years that with the end of the war, the emphasis would be shifted from weapons to the development of these peaceful aims. Unfortunately, it appears that the end of the war really has not brought peace. We all hope as time goes on that it may become possible to devote more and more activity to peaceful purposes and less and less to the production of weapons" [20].
My fascination with nuclear science, and nuclear fission in particular, started during my high school years in the small town of Sabac in the former Yugoslavia, after our physics teacher told us the story of the discovery of fission and nuclear energy. I enrolled in the School of Electrical Engineering in Belgrade and shifted from Engineering Physics to Nuclear Engineering during my undergraduate studies. Immediately after graduating, I started working at the Nuclear Sciences Institute "Vinca" near Belgrade, which was established after World War II by Dr. Pavle Savich, who was also its first director. I had a chance to meet Dr. Pavle Savich in 1985, when Dr. Chihiro Kikuchi, a professor from the University of Michigan who was attending a conference in Europe, asked me to arrange a meeting with Savich in Belgrade in order to hear the first-hand story of the discovery of nuclear fission. Professor Kikuchi stopped in Belgrade to interview me as a graduate student applicant to the Nuclear Science program at the University of Michigan (Fig. 6.2). I was mesmerized by Dr. Savich's recollections of his work with Irene Joliot-Curie and his regret that their team was not confident enough to include the possibility of the splitting of uranium in their 1937 and 1938 papers, although they were the first to chemically isolate and identify the lighter nuclei (products of the uranium nucleus splitting), and they had also privately discussed this possibility. The answer to "Why Nuclear Power?" comes from different points of view, including the fact that about one-third of the world's population still does not have access to electricity and that underdeveloped and developing countries mostly
Fig. 6.2 Dr. Pavle Savich (bottom left), Professor Chihiro Kikuchi (top left) and his wife Grace (bottom right), me (top middle), and Dr. Pavle Jovic (top right, the Director of the Nuclear Science Institute "Vinca"), 1985 (private archives)
use fossil fuels as the major source of energy. Nuclear power is currently the only technology with a secure base-load electricity supply and no greenhouse gas emissions that has the potential to expand at a large scale and effectively replace fossil fuels. While one can discuss the advantages and disadvantages of nuclear power utilization, it must be emphasized that an ideal source of energy/electricity that is at the same time efficient, cost-effective, environmentally friendly, and risk-free does not exist. In order to ensure optimal use of energy resources while limiting negative environmental and health impacts, various trade-offs need to be made. I would point out two advantages of nuclear power that distinguish it from any other source of energy: 1. Incredible "energy density": 8.2 × 10¹³ J/kg, as compared to 2.9 × 10⁷ J/kg for fossil fuels [22]. This large energy density means that a considerably smaller amount of nuclear fuel is needed to produce the same amount of electric power as fossil fuels; it also means that the amount of waste produced is proportionally much smaller. A quick calculation, as presented in Fig. 6.3, shows that the fuel consumed by a 1000-MWth nuclear power plant (NPP) is only about 1 kg/day of uranium-235, or about 3.2 kg/day of uranium-235 for a 1000-MWe nuclear plant (the thermal efficiency of an average NPP is about 32%). Assuming an annual fuel cycle of 320 days per year, we obtain that the annual consumption of fissile U-235 in a 1000-MWe NPP is about 1000 kg (1 metric ton), while the total amount of uranium fuel (enriched to between 3% and 5% in U-235) is about 30 metric tons. A coal-burning plant would need more than 2 × 10⁶ times more fuel to produce the same amount of electricity. This concentrated nuclear energy also means that the relative land use per unit
Fig. 6.3 Fuel consumption in a nuclear power plant

Table 6.1 The capacity factors (%) for utility-scale generators in the USA [23]

Year  Coal  Natural gas (a)  Geothermal  Hydro  Nuclear  Biomass  Solar PV  Wind  Wood
2016  52.8  55.4             71.6        38.2   92.3     62.7     25.0      34.5  58.3
2017  53.1  51.2             73.2        43.0   92.3     61.8     25.6      34.6  60.2
2018  53.6  55.0             76.0        41.9   92.5     61.8     25.1      34.6  60.6
2019  47.5  57.3             69.6        41.2   93.4     62.5     24.3      34.4  59.0
2020  40.5  57.0             69.1        40.7   92.4     62.5     24.2      35.3  57.8
(a) Natural gas, combined cycle
of electricity generated is incredibly small as compared to solar, wind, hydropower, or biomass energy sources. In addition, the amount of construction material needed (cement, metals, concrete, glass, etc.) is several times smaller than the materials needed for solar PV, hydro, wind, geothermal, etc. Another important related parameter that shows the huge advantage of nuclear power is its capacity factor. Capacity factors measure the amount of electricity actually produced compared to the theoretical production of a facility (its "installed capacity") if it operated 100% of the time, 365 days a year, 24 hours a day. Many confuse "installed capacity" with "capacity factor," particularly when comparing solar and wind to other electricity sources. Table 6.1 shows the capacity factors for utility-scale generators in the USA for several consecutive years [23]. These are real, measured data that show the huge difference in capacity factors for NPPs as compared to other electricity generators, particularly those classified as "renewables," such as solar, wind, and hydro. 2. Ability to produce additional fuel in the nuclear core. Typically, in the majority of commercial LWRs, the fuel consists of uranium oxide, with an isotopic composition that is mostly U-238, enriched to between 3% and 5% in U-235. While U-235 is a fissile nuclide (capable of sustaining nuclear reactions by absorbing a thermal neutron), U-238 is a fertile nuclide that can produce a new fissile nuclide (Plutonium-239) through nuclear reactions. In different reactor concepts, where the base nuclide is Thorium-232 instead of U-238, there is likewise a transmutation of fertile Th-232 into fissile U-233, as shown in the following nuclear reactions [22]:
238U(n,γ)239U →(β−, 23 min) 239Np →(β−, 2.3 d) 239Pu

232Th(n,γ)233Th →(β−, 22 min) 233Pa →(β−, 27 d) 233U

Fig. 6.4 A single PWR spent fuel assembly contains 96% recyclable material: recyclable U, 475–480 kg (~95%), and Pu, 5 kg (~1%); the final "waste" is the fission products and minor actinides (MA), 15–20 kg (~4%)
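The time scales in the first chain above can be illustrated with a short numerical sketch (an illustrative calculation, not part of the chapter; it applies the standard two-step Bateman solution to the half-lives quoted above):

```python
import math

# Neutron capture on U-238 yields U-239, which beta-decays to Np-239
# (T1/2 = 23 min) and then to fissile Pu-239 (T1/2 = 2.3 d), as in the
# chain quoted in the text. Pu-239 is treated as stable here, since its
# 24,000-year half-life is enormous on this time scale.
LN2 = math.log(2.0)
LAM_U = LN2 / (23 * 60)        # decay constant of U-239 [1/s]
LAM_NP = LN2 / (2.3 * 86400)   # decay constant of Np-239 [1/s]

def chain_populations(n0, t):
    """Bateman solution: fractions of U-239, Np-239, and Pu-239 at
    time t [s], starting from n0 freshly produced U-239 nuclei."""
    n_u = n0 * math.exp(-LAM_U * t)
    n_np = (n0 * LAM_U / (LAM_NP - LAM_U)
            * (math.exp(-LAM_U * t) - math.exp(-LAM_NP * t)))
    n_pu = n0 - n_u - n_np     # conservation of nuclei
    return n_u, n_np, n_pu

# After about a month, essentially every captured neutron has ended up
# as fissile Pu-239.
n_u, n_np, n_pu = chain_populations(1.0, 30 * 86400)
```

This is why plutonium builds up in any uranium-fueled core within weeks of operation: the intermediate species decay away quickly compared to the refueling cycle.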
Every 18–24 months, about one-third of the fuel in a reactor core is removed and replaced with fresh fuel. This spent nuclear fuel is never referred to as "nuclear waste" by the nuclear community, because it still contains a large amount of fertile U-238 (for uranium-fueled reactors), fissile U-235, and the newly produced fissile Pu-239. For example, the French company Areva specified [24] that a single PWR assembly (a standard nuclear reactor core has several hundred of these assemblies) contains about 500 kg of uranium before irradiation in a reactor core. After several years of "burning" inside a reactor core, the spent PWR assembly contains about 475–480 kg (~95%) of uranium, 5 kg (~1%) of plutonium, and 15–20 kg (~4%) of fission products and minor actinides (the real waste) (Fig. 6.4). Thus, 96% of the spent fuel in a "burned" PWR assembly could be recycled, while only 4% is considered "real waste" that needs to be placed in an underground waste repository. Nuclear power reactors can be designed as "burners," which consume more fissile material than they produce, or as "breeders," which "breed" more fissile material than they consume. In that sense, a conversion factor can be defined as the number of new fissile nuclides produced per fissile nuclide consumed. If the conversion factor is larger than 1, it is called a "breeder factor." Most commercial LWRs have a conversion factor of around 0.6, meaning that they consume more fissile material than they produce and need to be shut down every 18–24 months for refueling. As specified above, a 1000-MWe NPP produces about 30 metric tons of spent fuel per year, consisting of 28.5 tons of uranium (95%), 0.3 tons of plutonium (1%), and only about 1.2 tons (4%) of "real" nuclear waste.
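The 1 kg/day and 3.2 kg/day figures, the annual U-235 consumption, and the capacity-factor definition quoted above can all be reproduced with a few lines of arithmetic (a sketch using the values given in the text: 200 MeV per fission, 32% thermal efficiency, a 320-day operating year; small differences come from rounding):

```python
# Back-of-the-envelope check of the fuel-consumption figures in the text.
MEV_TO_J = 1.602e-13        # 1 MeV in joules
E_FISSION = 200 * MEV_TO_J  # energy released per U-235 fission [J]
AVOGADRO = 6.022e23         # nuclei per mole
M_U235 = 235.0              # molar mass of U-235 [g/mol]

def u235_kg_per_day(thermal_power_w):
    """Mass of U-235 fissioned per day [kg] at a given thermal power."""
    fissions_per_day = thermal_power_w * 86400 / E_FISSION
    return fissions_per_day / AVOGADRO * M_U235 / 1000.0

daily_th = u235_kg_per_day(1000e6)         # 1000-MWth core: ~1 kg/day
daily_e = u235_kg_per_day(1000e6 / 0.32)   # 1000-MWe plant at 32%: ~3.3 kg/day
annual_kg = daily_e * 320                  # ~1 metric ton of U-235 per year

# Capacity factor: actual generation relative to continuous full-power
# operation. At the ~92.4% US fleet average (Table 6.1), a 1000-MWe NPP
# generates roughly 8 million MWh per year.
annual_mwh = 1000 * 0.924 * 8760
```

The same function makes the coal comparison concrete: dividing the energy densities quoted earlier (8.2 × 10¹³ vs. 2.9 × 10⁷ J/kg) gives the factor of more than 2 × 10⁶ in fuel mass.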
In summary, my fascination with nuclear science, nuclear energy, and nuclear fission started during my high school years, and I spent close to 4 decades studying and doing research in the areas of advanced reactor designs and the development of
numerical methods for modeling and simulation of the processes inside a nuclear reactor core.
6.3 Analyzing and Designing Advanced SMR Reactors
Over the last 3 decades, the UC Berkeley Department of Nuclear Engineering has established a strong team of faculty, graduate students, and postdocs with world-renowned expertise in the analysis and design of advanced reactors, particularly SMRs. Many alumni have joined nuclear reactor vendor companies, including those mentioned above that received recent awards from the US DOE to design and construct the new generation of advanced reactors, such as TerraPower, Southern Company Services, Kairos Power, Westinghouse, General Electric, and General Atomics. The main goals have been to design simplified reactor cores; to extend core lifetime to 10, 20, or more years; to improve thermal efficiency by going to higher temperatures; to use non-light-water coolants; and to work on "breed and burn" fuel cycles. The development of small reactors began in the early 1950s for naval propulsion (as power sources for nuclear submarines). Over the years, several countries have been continuously working on the development of SMRs, which can be broadly classified as integral PWRs, marine-derivative PWRs, gas-cooled, lead- and lead-bismuth-cooled, sodium-cooled, and various nonconventional designs [25]. Another possible classification subdivides SMRs into two broad groups: those for early deployment, based on proven LWR technology, and those for longer-term deployment, based on other advanced designs. SMRs could be beneficial in providing electric power to remote areas that lack transmission and distribution infrastructure but could also be used to generate local power even for larger population centers. Small reactors are ideal for providing electricity to countries with small, limited, or distributed electricity grid systems, as well as for countries with limited financial resources for investment in large nuclear power plants.
Most of the proposed small reactors offer combined electricity and process heat to be used by industrial complexes, water desalination, and district heating. Overall, SMRs have the following advantages:
1. Power generating systems for areas difficult to access or without infrastructure for transportation of fuel
2. Modular concept that reduces the amount of work on site, making construction simpler and faster
3. Long life cycle and reduced need for refueling (perhaps every 10–15 years)
4. Design simplicity
5. Passive safety
6. Expanded potential siting options, since more sites are suitable for SMRs
7. Smaller nuclear island and footprint of the whole nuclear power plant
8. Low operation and maintenance costs
9. Lower initial costs and risks
10. Proliferation resistance
6 Nuclear Power in the Twenty-First Century? – A Personal View
167
However, the following disadvantages of SMRs must be overcome if SMRs are to be broadly deployable in the near future:
1. The economics of SMRs needs more analysis in order to show possible advantages over large LWRs.
2. Spent nuclear fuel from small reactors could be located in remote areas, which will make its transport more difficult. Also, spent fuel will be spread across many more sites, while currently it is congregated at a limited number of locations.
3. Public acceptance of new concepts.
4. Obtaining design certification and licensing may take longer than expected.

As mentioned at the beginning of this section, our teams worked on many advanced reactor designs, and I will mention only those that I was involved with:
1. International Reactor Innovative and Secure (IRIS), a 100–335 MWe reactor in the conceptual design stage with conventional PWR assemblies containing 5% enriched fuel rods in 17 × 17 bundles. It was developed by an international consortium of industry, laboratory, university, and utility establishments, led by Westinghouse. Even though it is based on LWR design, this concept has an integral reactor vessel (which means that the steam generators are inside the reactor vessel) and offers natural coolant circulation versus the typical forced circulation in PWRs. The core fueling options include four-year and eight-year core lifetimes [26].
2. Sodium-cooled Fast Reactor (SFR) for sustained "breed and burn" operations. It was shown that it was feasible to design an SFR core to generate close to 50% of the total core power from a large depleted-uranium-fueled radial blanket that operates in the "breed and burn" mode without reprocessing. The modified core design with a rated power of 1000 MWth had a conversion ratio (CR) of 1, meaning that fuel-self-sustaining operation was accomplished [27].
3. Self-sustaining thorium-fueled BWR.
This Th-233U fuel cycle reduced-moderation BWR (RBWR-Th) is charged only with thorium and discharges only fission products, recycling all actinides, and was a modification of a similar Hitachi design. The goal was to have a reduced-moderation BWR-type assembly, with thorium instead of uranium fuel, that could achieve fuel-self-sustaining operation within the standard Advanced BWR pressure vessel [28, 29].
4. Encapsulated Nuclear Heat Source Reactor (ENHS). This is a lead-bismuth- or lead-cooled novel SMR concept with a nominal power of 125 MWth [30]. The goal in designing this reactor was to have a nuclear-battery type of SMR, with a very simplified design and a long self-sustaining core life of up to 30 years without refueling.

The last SMR, the ENHS, will be explained in more detail. Several countries have been developing small lead/lead-bismuth (Pb/Pb-Bi) cooled reactors, including Russia, the USA, Germany, Sweden, Italy, Japan, and South Korea. Among the newer Russian designs is the 75–100 MWe Lead-Bismuth Fast Reactor (SVBR). This is an integral design, with the steam generators sitting in the same 400–480 °C Pb-Bi pool as the reactor core. It is designed to be able to use a wide variety of fuels: the reference
Fig. 6.5 Vertical cross-section of ENHS [25]
model uses uranium oxide enriched to 16.5%, but mixed-oxide fuel (MOX), uranium nitride, and uranium-plutonium fuels are also considered [31]. The unit was to be factory-made and shipped as a module and then installed in a tank of water, which provides passive heat removal and shielding. Key technological milestones for lead-bismuth coolant technology included achieving corrosion resistance of structural materials, controlling the mass transfer processes, and assuring radiation safety during operation (polonium-210). Since the mid-1990s, various research projects were initiated in the USA to identify highly proliferation-resistant reactor concepts that would have simplified designs and user-friendly operations. One of these projects gave birth to STAR (Secure, Transportable, Autonomous Reactor). Several STAR concepts were developed, including STAR-LW, STAR-LM, ENHS, and SSTAR (which was selected as the US concept for the Generation IV LFR category of advanced reactors) [32]. The ENHS reactor is presented in Fig. 6.5 [25, 30]. It is a liquid-metal-cooled reactor that can use either lead (Pb) or a lead-bismuth (Pb-Bi) alloy as coolant. The lead-based coolants are chemically inert with air and water and have higher boiling temperatures and better heat transfer characteristics for natural circulation [32]. The ENHS fuel is loaded into the module in the factory, and it can operate at full power for 15 or more years without refueling or fuel reshuffling. The ENHS fuel is a metallic alloy of uranium and zirconium (U-Zr, enriched to 13%) or, optionally, of uranium, plutonium or transuranics (TRU), and zirconium (U-Pu-Zr; U-TRU-Zr), and it exhibits good stability under irradiation. The core is surrounded by six groups of
segmented tungsten reflectors. The total weight of an ENHS module when fueled and loaded with coolant to the upper core level is estimated at 300 tons, which could pose a shipping challenge, especially to remote areas. The module height is dictated by the requirement of natural circulation. The Small STAR (SSTAR) [32] is a small natural-circulation fast reactor of 20 MWe/45 MWt that can be scaled up to 180 MWe/400 MWt. SSTAR shares many characteristics of the ENHS reactor, including the use of lead as coolant and a long-life sealed core in a small, modular system. The SSTAR fuel consists of transuranic nitride, enriched in N-15 (nitrogen), with five radial zones with enrichments of 1.5, 3.5, 17.2, 19.0, and 20.7%. Core lifetime is 30 years, with an average burnup of 81 and a peak burnup of 131 MWd/kgHM (megawatt-days per kilogram of heavy metal). The coolant circulation is by natural convection, with a core inlet temperature of 420 °C and a core outlet temperature of 567 °C. The core height is 0.976 m, and the core diameter is 1.22 m. For the power conversion, a supercritical carbon dioxide (CO2) Brayton cycle is used. Overall, the STAR concept (ENHS and SSTAR) has a robust design, excellent safety potential, and proliferation resistance as well as good economic performance. Studies performed by the UC Berkeley team on ENHS [33] showed the difficulties in using the modeling and simulation methods and codes developed for standard LWR designs on non-water-cooled advanced reactors. While our methodology, including improved cross-section libraries (we discovered that the cross-sections for certain lead isotopes were not correct), showed that it was possible to design a self-sustaining core with a life cycle of 30 years without refueling, different established methodologies/codes were not able to obtain the same results (Fig. 6.6). The effective multiplication factor, as shown in Fig.
6.6, must be above 1 during the entire life cycle of the reactor, i.e., the total number of fission reactions must be maintained nearly constant in order to sustain the reactor operation. It also means that the conversion ratio for ENHS is maintained around 1 (or slightly above 1), provided that the amount of fuel in the reactor stays approximately constant throughout 30 years of operation, i.e., the amount of fuel "burned" maintains the same ratio to the amount of fuel "bred." Also, if we never refuel this reactor during its 30-year operation, no spent fuel is produced during that time either! This is why we sometimes refer to ENHS as a "nuclear battery" – as with an ordinary battery, one uses it without ever opening it, until it is depleted and needs to be disposed of. The advantages of this design are a sealed core; no onsite refueling; transportability (the entire core and reactor vessel remain as a unit); a long-life core, with a 30-year core life as a target; autonomous controls, with minimum operator intervention required; local and remote observability; minimum industrial infrastructure required at the host location; and a very small operational (and security) footprint.
[Figure: effective multiplication factor (0.980–1.030) vs. irradiation time (0–35 years), comparing MCNP-4C/ORIGEN2.1 (ENDF/B-VI.7, Vinca library), REBUS3/ANL (230-group ENDF/B-VI), and REBUS3/KAERI (80-group ENDF/B-VI) results]
Fig. 6.6 Calculation of the ENHS effective multiplication factor showing the possibility of a 30-year life cycle without refueling
6.4 High-Performance Computing Tools in Nuclear Reactor Analysis

For a complete understanding of the behavior of a nuclear reactor core, a detailed description of its neutron population is required. In the most general form, such a description must provide the neutron distribution in space (x, y, z), direction of motion (Ω̂), energy (E), and time (t). The general integro-differential form of the linear Boltzmann transport equation, written in terms of the neutron number density (or neutron angular flux density) in seven-dimensional phase space, describes a balance between the various nuclear processes affecting the neutron population. The neutron number density n(r, Ω̂, E, t) represents the number of neutrons per unit volume (cm³), per unit solid angle (steradian), and per unit energy (MeV) at time t. The angular neutron flux density is then defined as:

$$\psi(\vec r, \hat\Omega, E, t) = v(E)\, n(\vec r, \hat\Omega, E, t)$$
where v(E) is the neutron speed in cm/s. While the full derivation of the linear Boltzmann neutron transport equation can be found elsewhere [22, 34], we will briefly describe the processes and quantities included in this equation. In general, the neutrons within some volume (a reactor core) might leak out or collide with the nuclei and either scatter (with the change of direction and energy) or be absorbed (producing additional neutrons needed for the chain reaction or some other secondary particles or radiation). For each type of neutron interaction with nuclei, there are probabilities for it to happen that depend on neutron energies, type of nuclide that a neutron is colliding with, nuclide number densities, temperatures, etc. These probabilities are introduced into the transport equations as “cross-sections,”
$\Sigma_i(\vec r, E)$, where i is the type of interaction. In addition, the energy distributions of the prompt neutrons (the neutrons released promptly during the fission reactions) are represented by the prompt probability density function, χp(E), while the energy distributions of the delayed neutrons (the neutrons released up to a minute after the corresponding fission reaction, by the decay of fission fragments called "the delayed neutron precursors") are represented by delayed probability density functions, χd(E). Usually, six groups of delayed neutron precursors are introduced, based on the values of their decay constants, λ. For delayed neutron precursor group j, Cj(r, t) is the concentration, χd,j(E) is the energy distribution, and λj is the decay constant. Below is a set of seven coupled integro-differential equations with seven independent variables, (r, Ω̂, E, t), that describe the neutron distribution and behavior inside of the reactor core:

$$\frac{1}{v(E)}\frac{\partial\psi(\vec r,\hat\Omega,E,t)}{\partial t} + \hat\Omega\cdot\nabla\psi(\vec r,\hat\Omega,E,t) + \Sigma_t(\vec r,E)\,\psi(\vec r,\hat\Omega,E,t) = \int_0^\infty dE' \int_{4\pi} d\hat\Omega'\,\Sigma_s(\vec r, E'\to E, \hat\Omega'\cdot\hat\Omega)\,\psi(\vec r,\hat\Omega',E',t) + \frac{\chi_p(E)}{4\pi}\int_0^\infty dE' \int_{4\pi} d\hat\Omega'\,\nu_p(E')\,\Sigma_f(\vec r,E')\,\psi(\vec r,\hat\Omega',E',t) + \sum_{j=1}^{6}\frac{\chi_{d,j}(E)}{4\pi}\,\lambda_j\,C_j(\vec r,t) + S_{\mathrm{external}}$$

$$\frac{\partial C_j(\vec r,t)}{\partial t} = -\lambda_j\,C_j(\vec r,t) + \int_0^\infty dE' \int_{4\pi} d\hat\Omega'\,\nu_{d,j}(E')\,\Sigma_f(\vec r,E')\,\psi(\vec r,\hat\Omega',E',t), \qquad j = 1,\dots,6$$

These sets of equations describe only part of the complex reactor core physics – this part is referred to as "neutronics," and it assumes a constant temperature in the core. However, the moment our reactor starts operating, the core temperature will start rising, causing changes in material densities and coolant properties. Thus, we have to add temperature/fluid feedback and couple the neutronics and thermal-hydraulics equations. In addition, the fuel composition will change from fresh fuel to basically the entire periodic table of elements after the first splitting of a uranium nucleus. We should not forget corrosion, material degradation due to irradiation, high pressure, and high temperatures. The bottom line is that the modeling and simulation of nuclear reactor systems is one of the most challenging tasks even today, with access to powerful supercomputers, due to the many different scales (in time, in space, in energy) as well as intertwined/coupled multi-physics phenomena. In this section, I will focus only on my contributions to the development of numerical solutions for the linear Boltzmann neutron transport equation, starting from the time-independent integro-differential equation with prompt neutrons only. This equation is basically a balance equation with the neutron losses due to streaming and collisions on the left-hand side, and the neutron sources (in-scattering
source, fission source, and external source) on the right-hand side:

$$\hat\Omega\cdot\nabla\psi(\vec r,\hat\Omega,E) + \Sigma_t(\vec r,E)\,\psi(\vec r,\hat\Omega,E) = \int_0^\infty dE' \int_{4\pi} d\hat\Omega'\,\Sigma_s(\vec r, E'\to E, \hat\Omega'\cdot\hat\Omega)\,\psi(\vec r,\hat\Omega',E') + \frac{\chi_T(E)}{4\pi}\int_0^\infty dE' \int_{4\pi} d\hat\Omega'\,\nu_T(E')\,\Sigma_f(\vec r,E')\,\psi(\vec r,\hat\Omega',E') + S_{\mathrm{external}}$$

Although one would like to be able to solve this equation for the most general cases, this formidable task remains one of the most challenging engineering and scientific problems. Many numerical methods were derived over several decades to solve this or more complex neutron transport equations, but, due to limitations in computer power and memory, most of the reactor core analysis codes until recently had to rely on simplified analytical/numerical models. Only within the last 10–15 years, with the development of the latest supercomputers and high-performance computing, have detailed and accurate full reactor core calculations become more efficient and affordable. There are two classes of computational methods used to solve the neutron transport equations numerically: deterministic and stochastic (Monte Carlo) [34]. The deterministic approach basically discretizes every single independent variable over its domain and assumes that the quantity of interest has a constant value over the mesh (cell, discretized region). Each independent variable can be discretized using various methods: for the spatial variables (x, y, z), either finite difference or finite element methods are usually used; for the angular variable (Ω̂), the discrete ordinates or spherical harmonics approaches are used; while for the energy variable (E), the multigroup method is used. The existing integrals and derivatives are also represented in corresponding discretized form. The end result of these numerical procedures is a system of coupled linear algebraic equations that is usually solved by employing efficient iterative methods.
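As a minimal illustration of this pipeline – discretize, obtain a coupled linear system, solve iteratively – the following sketch applies a finite-difference mesh and Gauss-Seidel iteration to a one-group, one-dimensional diffusion fixed-source problem (all numerical values are illustrative and not taken from any production code):

```python
# Finite-difference discretization of -D*phi'' + Sigma_a*phi = S on a 10 cm
# slab with phi = 0 at both faces; the resulting coupled linear system is
# solved with Gauss-Seidel iteration. All values are illustrative.
N = 50                       # number of spatial meshes
h = 10.0 / N                 # mesh width (cm)
D, sigma_a, S = 1.2, 0.05, 1.0

phi = [0.0] * (N + 1)        # flux at mesh points; boundary values stay at 0
for _ in range(5000):        # Gauss-Seidel sweeps
    for i in range(1, N):
        phi[i] = (S + D * (phi[i - 1] + phi[i + 1]) / h**2) / (2 * D / h**2 + sigma_a)

print(f"peak flux ~ {max(phi):.3f} at the slab midplane")
```

Refining the mesh, or adding angular and energy variables, multiplies the number of unknowns in exactly the way described in the text.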
This approach has a list of disadvantages – higher accuracy requires a finer discretization mesh for each independent variable, considerably increasing the number of equations to be solved. For example, if we use a very modest discretization with only 100 meshes in each of x, y, and z, 100 directions that the neutrons could move in, and only 100 energy groups, we end up with a system of 10^10 coupled linear equations. A more serious disadvantage is that once the system of discretized equations has been developed for a particular spatial meshing (for example, rectangular meshing for standard LWRs, or hexagonal/triangular meshing for fast reactors), those codes cannot be used for a different geometry/meshing. In contrast, Monte Carlo methods are capable of modeling very complex three-dimensional configurations without any changes to the algorithms and have continuous treatment of all independent variables – energy, angle, space, and time. Thus, the Monte Carlo (MC) methods do not suffer from any discretization errors, but there are statistical errors attached to each MC result. For the MC methodology
(the neutron history-based algorithm), each neutron history is followed from the neutron source, through scattering collisions, until that neutron is absorbed or escapes from the observed volume. Each step in a neutron's life is related to a certain probability and needs to be "sampled" from a corresponding probability density distribution. This process is repeated for hundreds of millions of neutrons (the larger the volume of interest, the larger the number of neutrons that need to be processed) in order to reduce the statistical error, which is proportional to one over the square root of the total number of neutrons processed. Thus, in order to produce physically accurate results in geometrically large and complex volumes (such as a full nuclear reactor core), the run time can be extremely long (weeks instead of hours). This is why the MC codes were the first to be ported to supercomputers to run in parallel on many processors – the history-based MC algorithms are inherently parallel, because each neutron history is independent of every other neutron history, particularly in a master-slave parallel approach. However, this history-based structure is also the main disadvantage of these algorithms, as they cannot run efficiently on the current supercomputers whose CPUs are accelerated by several GPUs [35].
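The 1/√N behavior of the statistical error is easy to demonstrate with a toy estimator (a minimal sketch; the per-collision absorption probability of 0.3 is an arbitrary illustrative value):

```python
import random

def estimate_absorption(n_neutrons, p_absorb=0.3, seed=1):
    """Analog MC estimate of a per-collision absorption probability; the
    standard error of the estimate shrinks like 1/sqrt(n_neutrons)."""
    rng = random.Random(seed)
    absorbed = sum(rng.random() < p_absorb for _ in range(n_neutrons))
    mean = absorbed / n_neutrons
    stderr = (p_absorb * (1.0 - p_absorb) / n_neutrons) ** 0.5
    return mean, stderr

for n in (10**2, 10**4, 10**6):
    mean, err = estimate_absorption(n)
    print(f"N = {n:>7}: estimate = {mean:.4f}, sigma ~ {err:.5f}")
```

Every 100-fold increase in the number of histories buys only one extra decimal digit of accuracy, which is why full-core MC runs are so expensive.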
6.4.1 New HPC Approaches for Deterministic Neutron Transport Methods

During my undergraduate and graduate studies, as well as during my work at two institutes – the Nuclear Sciences Institute "Vinca" in Belgrade and Argonne National Laboratory (ANL) near Chicago – I developed a number of deterministic and Monte Carlo codes and implemented them on early vector and parallel computers [36–41]. Thus, I became very familiar with various computing methodologies for solving the neutron transport equation, as well as with the advantages and disadvantages of each methodology. When I started working at ANL in 1989, my first assignment was to develop a computing methodology (other than Monte Carlo) to analyze the Modular High-Temperature Gas-Cooled Reactor (MHTGR) of a modified General Atomics design. A single hexagonal assembly, as shown in Fig. 6.7, consists of a hexagonal graphite block, with smaller holes for cylindrical fuel compacts, somewhat larger holes for helium coolant circulation, and two large holes for insertion of lithium-6 compacts for tritium production [41]. This was an extremely challenging geometric configuration that no deterministic codes at that time could analyze without distorting the geometric pattern, and use of Monte Carlo codes was very slow. In addition to the complex geometric configuration, the material properties were also very complex from the neutron transport point of view – from very diffusive graphite and helium regions to very absorbing (particularly for thermal neutrons) regions of fuel compacts and Li-6 rods, creating steep flux gradients. The idea that solved this problem was to combine what is unique and good in the MC methodology (i.e., the ability to represent accurately any complex geometric configuration without simplifications) and what is good in deterministic methods –
Fig. 6.7 Modular High-Temperature Gas-Cooled Reactor (MHTGR) of a modified General Atomics design
ability to solve problems much faster than the MC methods. In addition, as will be shown below, the powerful MC ray tracing (i.e., the calculation of interception points between a straight line and complex surfaces in general 3D configurations) could be used to numerically determine the multiple integrals in the calculation of collision probabilities in the collision probability (CP) method and to predetermine the pseudo-neutron paths in the method of characteristics (MOC). This approach, which led to the development of deterministic methods (CP and MOC) that can be applied to any arbitrary reactor geometry and any type of meshing (rectangular, hexagonal, triangular, circular, etc.) without a need to modify the original algorithm, was awarded a US patent in 1991 [42]. Figure 6.8 shows the MC "random walk" process for a neutron moving and scattering in general 3D. In this example, the neutron is born in the center of the cube. The dots represent scattering collision points (each a change in neutron direction), the green "X" represents a surface crossing, and the red "X" represents an absorption point and the termination of the neutron "walk." Most Monte Carlo (MC) codes use similar combinatorial geometry or constructive solid geometry methodologies to describe the physical shapes used in a problem and to track particles in a random walk. The combinatorial geometry approach allows the description of a physical domain (cell, region) by the combination of certain basic geometric shapes such as planes and second-order surfaces (rectangular parallelepipeds, right circular cylinders, hexagonal prisms, etc.). The basic predefined shapes are then combined using the logical operations OR, AND, and NOT in order to define complex volumes (zones).
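The random walk of Fig. 6.8 can be sketched in one dimension (a toy analog MC for a homogeneous slab; the cross-section values are illustrative, and the geometry handling is nothing like the combinatorial-geometry machinery of a production code):

```python
import math
import random

SIGMA_T, SIGMA_S = 1.0, 0.7   # illustrative total and scattering cross-sections (1/cm)
SLAB_WIDTH = 4.0              # slab thickness (cm)

def history(rng):
    """One analog neutron history in a homogeneous 1-D slab: sample the
    flight distance, test for a boundary crossing (leakage), otherwise
    collide and either scatter (new direction) or be absorbed."""
    x, mu = 0.0, 1.0                                # born at the left face, moving right
    while True:
        s = -math.log(1.0 - rng.random()) / SIGMA_T  # sampled distance to next collision
        x += mu * s
        if x < 0.0:
            return "leaked_left"
        if x > SLAB_WIDTH:
            return "leaked_right"
        if rng.random() < SIGMA_S / SIGMA_T:         # scattering collision
            mu = rng.choice((-1.0, 1.0))             # new (1-D isotropic) direction
        else:                                        # absorption terminates the walk
            return "absorbed"

rng = random.Random(42)
fates = [history(rng) for _ in range(20000)]
for f in ("absorbed", "leaked_right", "leaked_left"):
    print(f, fates.count(f) / len(fates))
```

Each history is independent of every other, which is the inherent parallelism mentioned above.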
The usual procedure used in an MC code to track a particle in its random walk is to determine the zone in which a starting point is located, find the distance along a given direction to the exit point from this zone, determine the next zone the ray will enter, find the distance to the exit point of that zone, and so on. This process is continued until the next collision point is encountered or a particle reaches the outer system boundary. If
Fig. 6.8 The Monte Carlo random neutron walk process
Fig. 6.9 Ray tracing in Monte Carlo method, left (a), and in CP and MOC, right (b). C, collision site; BC, boundary crossing
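The geometric kernel behind both tracking styles in Fig. 6.9 is the interception of a straight line with a second-order surface. A minimal sketch for a sphere (the function name and numerical values are illustrative):

```python
import math

def ray_sphere_intersections(origin, direction, center, radius):
    """Distances t >= 0 along a unit-direction ray at which it crosses a
    sphere (a second-order surface), found by solving the quadratic
    |o + t*d - c|^2 = r^2. Returns a sorted list (0, 1, or 2 entries)."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius ** 2
    disc = b * b - 4.0 * c                      # a == 1 for a unit direction
    if disc < 0.0:
        return []
    root = math.sqrt(disc)
    return sorted(t for t in ((-b - root) / 2.0, (-b + root) / 2.0) if t >= 0.0)

# Track length through a unit sphere for a ray along +x through its center:
ts = ray_sphere_intersections((-2.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 0.0, 0.0), 1.0)
print(ts, "chord length:", ts[1] - ts[0])   # entry at t=1, exit at t=3, chord 2
```

In combinatorial geometry, such intersection distances for all bounding surfaces of a zone are combined to find the entry and exit points, and hence the track length, through that zone.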
a particle undergoes a collision, the type of collision (absorption or scattering) is determined. If a particle is absorbed, its history is finished (analog MC). In the case of scattering, the new directional vector and the new particle energy are determined, and the tracking procedure is repeated. Thus, the geometric ray tracing in MC is not decoupled from the rest of the calculation (i.e., the "particle physics"), as shown in Fig. 6.9a, and the abovementioned procedure must be repeated for all neutrons during their life histories. For simplicity, we show it in 2-D. Thus, the ray (or neutron) tracing algorithm must provide, in general 3-D configurations: the particle location in the general 3-D geometry at any time, the neutron trajectory in the current coordinate system, the distance that a neutron can travel before having a collision, and the length of a neutron track through a particular zone/cell/material, from the entrance to the exit of each zone/cell/material. The
Fig. 6.10 The angular neutron flux at r in the direction Ω̂ is the result of adding up all uncollided source neutrons in volume V produced along the neutron trajectory from r − R_sΩ̂ to r, plus all uncollided neutrons entering V through surface S at r_s = r − R_sΩ̂, in the direction Ω̂
difficulty arises from the fact that the shape of each 2-D (or 3-D) zone/cell/material is determined by a combination of multiple planes and second-order surfaces, and the algorithm needs to determine which interception points and tracks belong to the "real" neutron track. However, as shown in Fig. 6.9b, the MOC and CP require similar data (the interception points between a straight line and any general surface in 3-D, as well as the length of each segment along the straight line belonging to any cell/mesh/region/material that a neutron crosses in its pseudo walk). The ray tracing approach shown in Fig. 6.9b has several advantages over the standard MC ray tracing shown in Fig. 6.9a – the geometric part (ray tracing) is decoupled from the rest of the calculation. The ray tracing data in the case of MOC and CP can be pre-calculated and reused multiple times, thus drastically accelerating the solution algorithm. The CP and MOC methodologies are based on the integral form of the neutron transport equation (the derivation can be found elsewhere [44–46]), with the relevant quantities shown in Fig. 6.10. For simplicity, the integral form of the neutron transport equation for the angular flux density is shown in its multigroup form, with the energy variable already discretized, g = 1, . . . , G:

$$\psi_g(\vec r, \hat\Omega) = \psi_g(\vec r_s, \hat\Omega)\, e^{-\tau_g(\vec r, \vec r_s)} + \int_0^{R_s} Q_g(\vec r\,', \hat\Omega)\, e^{-\tau_g(\vec r, \vec r\,')}\, dR$$

where r′ = r − RΩ̂ and r_s = r − R_sΩ̂, and where the optical length is defined as

$$\tau_g(\vec r, \vec r - R\hat\Omega) = \int_0^R \Sigma_{t,g}(\vec r - R'\hat\Omega)\, dR'$$
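In a zone-wise-uniform geometry, the optical length reduces to a sum of cross-section-weighted track lengths, which is exactly what the pre-computed ray tracing supplies. A minimal sketch (values illustrative):

```python
import math

def optical_length(segments):
    """Optical length tau along one ray: the sum of (total macroscopic
    cross-section) x (track length) over the piecewise-uniform zones the
    ray crosses. Values are illustrative."""
    return sum(sigma_t * length for sigma_t, length in segments)

# (Sigma_t in 1/cm, track length in cm) for three zones crossed by one ray
segments = [(0.5, 2.0), (1.2, 0.8), (0.5, 2.0)]
tau = optical_length(segments)
print("tau =", tau, "-> uncollided attenuation exp(-tau) =", round(math.exp(-tau), 4))
```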
Fig. 6.11 Neutron track in a zone/cell V_i in 3-D (left) and the neutron tracks through hexagonal/circular zones/cells in 2-D (right). With this MC-based ray tracing, the MOC can, without any modifications, analyze any complex/advanced nuclear reactor configuration
and the volumetric sources in energy group g,

$$Q_g(\vec r, \hat\Omega) = \sum_{g'=1}^{G} S_{g'\to g}(\vec r, \hat\Omega) + \frac{\chi_g}{k}\sum_{g'=1}^{G} F_{g'}(\vec r)$$

consisting of in-scattering neutron sources

$$S_{g'\to g}(\vec r, \hat\Omega) = \int_{4\pi} d\hat\Omega'\, \Sigma_{s,g'\to g}(\vec r, \mu_0)\, \psi_{g'}(\vec r, \hat\Omega'), \qquad \mu_0 = \hat\Omega'\cdot\hat\Omega$$

and fission neutron sources

$$F_{g'}(\vec r) = \frac{1}{4\pi}\, \nu_{g'}\, \Sigma_{f,g'}(\vec r)\, \phi_{g'}(\vec r)$$

If a volume of interest is discretized into zones/cells/materials with uniform properties, the integral transport equation for the angular neutron flux density boils down to an algebraic expression for the angular flux at the exit of a zone/cell/material, provided the angular neutron flux density at the entrance to the zone is known – we start from the known values (i.e., the boundary conditions) and "march" along predetermined neutron pseudo-tracks, as shown in Fig. 6.11:

$$\psi_{g,m,i,k}(s_{m,i,k}) = \psi_{g,m,i,k}(0)\, e^{-\Sigma_{g,i}\, s_{m,i,k}} + \frac{Q_{g,i}}{\Sigma_{g,i}}\left(1 - e^{-\Sigma_{g,i}\, s_{m,i,k}}\right)$$

where s_{m,i,k} is the length of track k through zone i in direction m. For the application of this powerful combinatorial geometry-based ray tracing algorithm in CP [39–43], the integral form of the neutron transport equation for the
angular neutron flux density is integrated over the angular variable to obtain the scalar neutron flux and the outgoing scalar neutron current:

$$\phi(\vec r) = \int_V d\vec r\,'\; \frac{Q(\vec r\,')\, e^{-\tau(\vec r,\vec r\,')}}{4\pi\,|\vec r - \vec r\,'|^2} + \int_S dS_s\; \frac{2 J^{in}(\vec r_s)\,\left|\hat e_s\cdot\hat\Omega\right| e^{-\tau(\vec r,\vec r_s)}}{2\pi\,|\vec r - \vec r_s|^2}$$

$$J^{out}(\vec r) = \int_V d\vec r\,'\;\left(\hat e_s\cdot\hat\Omega\right) \frac{Q(\vec r\,')\, e^{-\tau(\vec r,\vec r\,')}}{4\pi\,|\vec r - \vec r\,'|^2} + \int_S dS_s'\;\left(\hat e_s\cdot\hat\Omega\right) \frac{2 J^{in}(\vec r_s\,')\,\left|\hat e_s'\cdot\hat\Omega\right| e^{-\tau(\vec r,\vec r_s\,')}}{2\pi\,|\vec r - \vec r_s\,'|^2}$$

with

$$\hat\Omega = \frac{\vec r - \vec r\,'}{|\vec r - \vec r\,'|}, \qquad \vec r\,' = \vec r - R\hat\Omega, \qquad \vec r_s = \vec r - R_s\hat\Omega$$

$$J^{in}(\vec r_s, \hat\Omega) = \left|\hat e_s\cdot\hat\Omega\right| \psi(\vec r_s, \hat\Omega), \qquad \text{with } \hat e_s\cdot\hat\Omega < 0$$

After discretization of the spatial and energy variables, the following set of algebraic equations is obtained:

$$\Sigma_{t,g,i}\,\phi_{g,i}\, V_i = \sum_{i'} V_{i'} Q_{g,i'}\, P_g(V_i \leftarrow V_{i'}) + \sum_{\alpha} S_\alpha J^{in}_{g,\alpha}\, P_g(V_i \leftarrow S_\alpha)$$

$$S_\alpha J^{out}_{g,\alpha} = \sum_{i'} V_{i'} Q_{g,i'}\, P_g(S_\alpha \leftarrow V_{i'}) + \sum_{\alpha'} S_{\alpha'} J^{in}_{g,\alpha'}\, P_g(S_\alpha \leftarrow S_{\alpha'})$$

with

$$V = \bigcup_{i=1}^{N_r} V_i, \qquad S = \bigcup_{\alpha=1}^{N_b} S_\alpha$$
Although this system of algebraic equations appears to be very simple, the most time-consuming part of the entire CP formalism is the evaluation of the coefficients of the collision, escape, and transmission probability system matrices. The calculation time increases rapidly with the number of energy groups and space zones/cells. The usefulness of the collision probability method is closely related to the efficiency with which the collision, escape, and transmission probability matrices are numerically calculated, which involves the determination of multiple integrals over volume/surface elements, as presented below for the volume-to-volume collision probability. The integrals are difficult to calculate, particularly for complex geometric configurations and irregular shapes of discretized zones/cells.
$$P_g(V_i \leftarrow V_{i'}) = \frac{\displaystyle\int_{V_i} d\vec r\; \Sigma_{t,g}(\vec r) \int_{V_{i'}} d\vec r\,'\; \frac{\hat Q_g(\vec r\,',\hat\Omega)\, e^{-\tau_g(\vec r,\vec r\,')}}{|\vec r - \vec r\,'|^2}}{\displaystyle\int_{V_{i'}} d\vec r\,' \int_{4\pi} d\hat\Omega\; \hat Q_g(\vec r\,',\hat\Omega)}$$
As an example, and with a simplification (2-D instead of 3-D) for better understanding of the process, the volume-to-volume collision probability can be written as [39–43]

$$P_{j\to i} = \frac{\Sigma_{t,i}}{4\pi V_j} \int_0^{2\pi} d\varphi \int_{y_{min}}^{y_{max}} dy \int_{x_i-\Delta x_i}^{x_i} dx \int_{x_j-\Delta x_j}^{x_j} dx' \int_0^{\pi} d\theta\; e^{-\tau(x,x')/\sin\theta}$$
where the last integral can be calculated analytically (it is the tabulated Bickley-Naylor function):

$$\int_0^{\pi} d\theta\; e^{-\tau(x,x')/\sin\theta} = Ki_1\!\left(\tau(x,x')\right)$$
while the other four integrals are determined by numerical integration:

$$P_{j\to i} = \frac{\Sigma_{t,i}}{4\pi V_j} \sum_{k=1}^{K} w_k \sum_{\ell=1}^{L} w_\ell \int_{x_i-\Delta x_i}^{x_i} dx \int_{x_j-\Delta x_j}^{x_j} dx'\; Ki_1\!\left(\tau(x,x')\right)$$
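The Bickley-Naylor values that production CP codes read from tables can be checked with direct quadrature of the θ-integral above (a sketch using the midpoint rule; the integrand is well behaved numerically because it vanishes where 1/sin θ blows up):

```python
import math

def bickley_integral(tau, n=2000):
    """Numerically evaluate the innermost angular integral
    int_0^pi exp(-tau / sin(theta)) d(theta), which the collision-probability
    formalism tabulates as a Bickley-Naylor function, via the midpoint rule
    (midpoints avoid the endpoints theta = 0 and pi, where 1/sin diverges)."""
    h = math.pi / n
    total = 0.0
    for k in range(n):
        theta = (k + 0.5) * h
        total += math.exp(-tau / math.sin(theta))
    return total * h

for tau in (0.1, 1.0, 3.0):
    print(f"tau={tau}: {bickley_integral(tau):.6f}")
```

The value decreases monotonically with the optical length τ, as expected for an attenuation kernel.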
The volume-to-volume collision probability equations and their numerical integration, as presented in Fig. 6.12, show that the powerful combinatorial geometry-based ray tracing and the calculation of the track lengths between the various interception points are the key to the efficient determination of the system matrix coefficients, involving multiple integrals, for arbitrary geometric configurations and any shapes of cells/zones. Figure 6.13 shows the ability of this methodology to generate any type of unstructured meshing and to solve the integral form of the neutron transport equation for very different reactor designs without any changes to the codes. Examples of neutron flux distributions calculated by the CP and/or MOC codes (Fig. 6.14) show the versatility of the developed methodology in the analysis of very different nuclear reactor designs – BWRs (General Electric), with a square assembly lattice, where the discretization meshing consists of a combination of rectangular and circular shapes, and MHTGRs (General Atomics), with a hexagonal assembly lattice, where the discretization meshing consists of a combination
Fig. 6.12 Numerical integration of collision probabilities utilizing ray tracing
Fig. 6.13 Examples of unstructured meshes
of triangular, hexagonal, and circular shapes. Due to the combination of very different materials in these reactor designs, the neutron flux gradients are steep, particularly for the neutron energies in the thermal region, and thus difficult to accurately analyze by other deterministic methods. With the new CP and MOC methodologies equipped with powerful MC-based combinatorial geometry and ray tracing, it was straightforward to implement the codes on the HPC parallel supercomputers [43, 47].
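The MOC "marching" step from the previous pages – attenuate the incoming angular flux and add the flat-source contribution, segment by segment along a pre-computed track – can be sketched as follows (illustrative values, not from any production code):

```python
import math

def march_track(psi_in, segments):
    """March the angular flux along one characteristic track: per segment,
    psi_out = psi_in * exp(-sigma_t * s) + (q / sigma_t) * (1 - exp(-sigma_t * s)).
    `segments` holds (sigma_t, q, s) per crossed zone; values are illustrative."""
    psi = psi_in
    for sigma_t, q, s in segments:
        att = math.exp(-sigma_t * s)
        psi = psi * att + (q / sigma_t) * (1.0 - att)
    return psi

# Vacuum boundary condition (psi_in = 0), two zones with flat sources:
track = [(1.0, 0.5, 2.0), (0.8, 0.2, 1.0)]   # (Sigma_t, Q, track length)
print(march_track(0.0, track))
```

For a very long track through a single zone, the flux saturates at q/sigma_t, the infinite-medium limit, which is a convenient sanity check of the update.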
Fig. 6.14 Examples of neutron flux distribution analyses performed by the CP and/or MOC codes. Left, a BWR assembly (General Electric), the neutron flux is highly absorbed in the fuel pins containing gadolinium for the thermal neutron energies; right, a MHTGR assembly (General Atomics), the neutron flux is highly absorbed in large regions containing Li-6 for the thermal neutron energies
6.4.2 New HPC Approaches for Monte Carlo Neutron Transport Methods

Graphics processing units (GPUs) [48] are the programmable powerhouses of today. The fastest supercomputers of today use a hybrid architecture – a combination of central processing units (CPUs) and GPUs. For example, the fastest supercomputer in the USA is the Oak Ridge National Laboratory Summit supercomputer – it has 4356 nodes, each housing two Power9 CPUs with 22 cores each and six NVIDIA Tesla V100 GPUs, delivering 148.8 Pflop/s on the HPL benchmark [49]. Since CPU-optimized parallel MC algorithms are not directly portable to GPU architectures and thus cannot efficiently use the acceleration power of GPUs, their performance is greatly degraded. Thus, the MC neutron (or, in general, particle) transport codes need to be rewritten (or specifically tailored) to execute efficiently on GPUs. Unless this is done, the legacy nuclear reactor codes, with decades of person-years invested in them, cannot take full advantage of these new supercomputers. The team at UC Berkeley chose to develop an entirely new MC code, specifically tailored for high-performance execution on today's supercomputers. WARP, which can stand for "Weaving All the Random Particles," is one of the first three-dimensional (3D) continuous-energy Monte Carlo neutron transport codes
Fig. 6.15 Neutron tracking: initialization, surface crossing detection, collision location detection, done in a batched, data-parallel way [50]
developed specifically for the hybrid CPU/GPU supercomputing architecture [50, 51]. WARP uses an "event-based" algorithm for neutron tracking, as compared to the "history-based" approach described earlier. In "history-based" neutron tracking, each neutron is tracked from the beginning to the end of its "life/history." Each neutron history is independent of any other neutron history; thus, "history-based" neutron tracking is inherently parallel and easily implemented on CPU-based supercomputers. However, it is not well suited for CPU/GPU-based supercomputers, because the computational flow is constantly interrupted to answer questions such as: "Where is the neutron in the general 3D geometry? Is it crossing a boundary? Is it colliding? What type of collision? What is its new direction?" That is, the code constantly switches between neutron ray tracing and neutron collision physics.

The solution was to "feed" enough data to the GPUs and let them do their incredibly fast calculations without any interruptions. Thus, in the "event-based" approach, initialization, surface detection, collision location, and particle movement calculations are performed across all rays simultaneously, as shown in Fig. 6.15, instead of taking a single ray from start to finish and then moving on to the next ray.

Moving data is expensive, so WARP uses a remapping vector of pointer/index pairs to direct GPU threads to the data they need to access. The remapping vector is sorted by reaction type after every transport iteration using a high-efficiency parallel radix sort, which keeps the reaction types as contiguous as possible and removes completed histories from the transport cycle. The sort reduces the amount of divergence in GPU "thread blocks," keeps the Single Instruction/Multiple Data (SIMD) units as full as possible, and avoids spending memory bandwidth on checking whether a neutron in the batch has been terminated.
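The contrast between history-based and event-based tracking can be sketched in a few lines. The toy Monte Carlo model below (a one-dimensional slab with invented cross-sections; this is not WARP's data layout or kernels) processes each event type for the whole batch of particles at once and uses an index (remapping) vector so the batched steps touch only still-active histories:

```python
import numpy as np

# Toy event-based Monte Carlo tracking in a 1-D slab. Cross-sections, slab
# size, and the scattering model are invented for illustration only.
rng = np.random.default_rng(42)

SIG_T, SIG_A = 1.0, 0.3            # total and absorption cross-sections (1/cm)
SLAB = 5.0                          # slab thickness (cm)
N = 10_000

x = np.zeros(N)                     # particle positions
mu = rng.choice([-1.0, 1.0], N)     # directions (left/right)
alive = np.ones(N, dtype=bool)

leaked = absorbed = 0
while alive.any():
    # Remapping vector: indices of active particles, so the dense event
    # kernels below stay contiguous and skip terminated histories.
    idx = np.flatnonzero(alive)
    # Event 1: free-flight (streaming) for all active particles at once.
    d = -np.log(rng.random(idx.size)) / SIG_T
    x[idx] += mu[idx] * d
    # Event 2: leakage check, batched.
    out = (x[idx] < 0.0) | (x[idx] > SLAB)
    leaked += out.sum()
    alive[idx[out]] = False
    # Event 3: collision physics for the remaining particles, batched.
    col = idx[~out]
    absorb = rng.random(col.size) < SIG_A / SIG_T
    absorbed += absorb.sum()
    alive[col[absorb]] = False
    mu[col[~absorb]] = rng.choice([-1.0, 1.0], (~absorb).sum())

# Every history ends in either leakage or absorption: leaked + absorbed == N
```

On a GPU, each batched step would map to a kernel launch; the index vector plays the role of WARP's sorted remapping vector, keeping same-event particles contiguous so the SIMD units stay full.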
Using a remapping vector means the data access pattern is irregular, but this is mitigated by using large batch sizes, which allow the GPU to effectively hide the high cost of irregular global memory access. In addition, WARP modifies the standard approach of going back to main memory to fetch the data needed for the neutron collision physics. The large cross-section data libraries need to be accessed randomly for each neutron collision in order to extract
Fig. 6.16 WARP versatility
a particular cross-section as a function of neutron energy, the type of collision, the type of nuclide the neutron is colliding with, and the temperature in the reactor core. To reduce this memory traffic, a unionized energy grid was implemented. Instead of storing a matrix of pointers indexed by reaction type and energy, WARP stores three matrices: the first contains cross-section values, the second contains pointers to angular distributions, and the third contains pointers to energy distributions. This linked-list type of layout increases memory usage but lowers the number of data loads needed to determine a reaction by eliminating a pointer load to find a cross-section value.

As an example, in initial testing where 10^6 source neutrons per criticality batch were used, WARP was capable of delivering results as accurate as the broadly used history-based MC codes, but with run times that were 11–100 times lower, depending on problem geometry and materials. On average, WARP's performance on an NVIDIA K20 (available to us at that time) was equivalent to approximately 45 AMD Opteron 6172 CPU cores. Larger batches of neutrons typically performed better on the GPUs, but memory limitations of the K20 card restricted the batch size to 10^6 source neutrons. Many WARP tests and comparisons were performed [50, 51]; as an example, a difficult hexagonal assembly geometry easily analyzed by WARP is shown in Fig. 6.16. WARP, as one of the first feature-rich programs for performing continuous energy Monte Carlo neutron transport in general 3D geometries on GPUs, blazed the path for similar codes around the world that are still being developed and implemented on the supercomputers of today.
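The unionized-energy-grid idea described above can be illustrated with a small sketch. The energies and cross-section values below are invented, and the layout is simplified to a single value matrix rather than WARP's three matrices, but the point is the same: one binary search per collision locates the data for every reaction at once.

```python
import numpy as np

# Two reactions tabulated on different native energy grids (toy numbers, eV/barns).
e_scatter = np.array([1e-5, 1.0, 1e3, 2e7]); xs_scatter = np.array([20.0, 10.0, 4.0, 2.0])
e_capture = np.array([1e-5, 0.1, 1e2, 2e7]); xs_capture = np.array([50.0, 5.0, 0.5, 0.1])

# Union grid: merge all energy points, then pre-interpolate every reaction onto it.
e_union = np.union1d(e_scatter, e_capture)
xs_table = np.vstack([
    np.interp(e_union, e_scatter, xs_scatter),
    np.interp(e_union, e_capture, xs_capture),
])  # shape: (n_reactions, n_union_points)

def lookup(energy):
    """One binary search on the shared grid gives the index for ALL reactions."""
    i = np.searchsorted(e_union, energy) - 1
    i = np.clip(i, 0, e_union.size - 2)
    t = (energy - e_union[i]) / (e_union[i + 1] - e_union[i])
    return xs_table[:, i] + t * (xs_table[:, i + 1] - xs_table[:, i])

sigma = lookup(1.0)  # cross-sections of both reactions at 1 eV, from one search
```

Because the union grid contains every breakpoint of every native grid, the pre-interpolated table reproduces the original piecewise-linear data exactly, while each collision now needs only a single grid search instead of one per nuclide and reaction.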
6.5 Summary

Since the end of World War II, nuclear energy has been extensively used for electricity production, desalination, process heat, propulsion, hydrogen production, production of transportation fuels, etc. If we want to provide affordable and clean electrical power to the more than 7 billion people in the world, nuclear power must continue to play an expanding role in meeting increased energy demands in a safe and proliferation-resistant manner and with minimal waste production. To achieve this goal, extensive research and technology development must continue in advanced fast and small modular reactors, in advanced spent nuclear fuel recycling and chemical separations processes, and in a comprehensive understanding of nuclear fuel cycles and the global environment.

In 2022, we celebrate 111 years since the discovery of the atomic nucleus, 90 years since the discovery of the neutron, 83 years since the discovery of nuclear fission, and 80 years since the first nuclear chain reaction. My fascination with nuclear physics and nuclear power started early, during my high school years, thanks to a fantastic physics teacher, and never stopped. During my decades-long career, I worked on nuclear reactor analysis and design, development of numerical methods in reactor core analysis, high-performance computing, biomedical applications of radiation, and nuclear security and nonproliferation. In this chapter, I briefly summarized my contributions that are directly related to advanced nuclear reactor analysis and design. On a personal note: although a controversial source of electricity for some, nuclear power is here to stay.
Although technical challenges still exist, in order to fully achieve a global energy security that includes nuclear power, we need a long-term strategy for further building research capabilities and for teaming among international and national laboratories, universities, and industry, to bring the next generation of nuclear reactors to the commercialization level.
References

1. IAEA Nuclear Power Reactors in the World, Reference Data Series No. 2, 2021 Edition, https://www-pub.iaea.org/MTCD/Publications/PDF/RDS-2-41_web.pdf (Accessed April 2022)
2. U.S. Nuclear Generating Statistics, Nuclear Energy Institute, https://www.nei.org/resources/statistics/us-nuclear-generating-statistics (Accessed April 2022)
3. World Nuclear Power Reactors, The World Nuclear Association, https://www.world-nuclear.org/information-library/facts-and-figures/world-nuclear-power-reactors-and-uranium-requireme.aspx (Accessed April 2022)
4. Nuclear Power in Russia, The World Nuclear Association, https://www.world-nuclear.org/information-library/country-profiles/countries-o-s/russia-nuclear-power.aspx (Accessed April 2022)
5. Technology Roadmap Update for Generation IV Nuclear Energy Systems, Generation IV International Forum (GIF), 2014. Issued by the OECD Nuclear Energy Agency for the Generation IV International Forum, https://www.gen-4.org/gif/jcms/c_60729/technology-roadmap-update-2013 (Accessed April 2022)
6. A Technology Roadmap for Generation IV Nuclear Energy Systems, Generation IV International Forum (GIF), 2002. Issued by the U.S. DOE Nuclear Energy Research Advisory Committee and the Generation IV International Forum, https://www.gen-4.org/gif/jcms/c_9352/technology-roadmap (Accessed April 2022)
7. Advanced Small Modular Reactors (SMRs), U.S. Department of Energy Office of Nuclear Energy, https://www.energy.gov/ne/advanced-small-modular-reactors-smrs (Accessed April 2022)
8. U.S. Department of Energy Announces $160 Million in First Awards under Advanced Reactor Demonstration Program, Office of Nuclear Energy, October 13, 2020, https://www.energy.gov/ne/articles/us-department-energy-announces-160-million-first-awards-under-advanced-reactor (Accessed April 2022)
9. Energy Department's Advanced Reactor Demonstration Program Awards $30 Million in Initial Funding for Risk Reduction Projects, Office of Nuclear Energy, December 16, 2020, https://www.energy.gov/ne/articles/energy-departments-advanced-reactor-demonstration-program-awards-30-million-initial (Accessed April 2022)
10. BloombergNEF Announces 12 Climate Innovators as BNEF Pioneers in 2022, April 14, 2022, https://about.bnef.com/blog/bloombergnef-announces-12-climate-innovators-as-bnef-pioneers-in-2022/ (Accessed April 2022)
11. Kairos Power, https://kairospower.com/company/
12. L. Meitner and O.R. Frisch, "Disintegration of Uranium by Neutrons: A New Type of Nuclear Reaction," Nature, 143, 239–240, February 11, 1939
13. R. Lewin Sime, "Lise Meitner: A Life in Physics," University of California Press, 1996
14. I. Curie and P. Savitch, "Sur les radioéléments formés dans l'uranium irradié par les neutrons," J. Phys. Rad. 8, pp. 385–387, 1937
15. I. Curie and P. Savitch, "Sur le radioélément de période 3.5 heures formé dans l'uranium irradié par les neutrons," Comptes Rendus 206, pp. 906–908, 1938
16. O. Hahn and F. Strassmann, "Über den Nachweis und das Verhalten der bei der Bestrahlung des Urans mittels Neutronen entstehenden Erdalkalimetalle," Naturwiss. 27 (6 January 1939)
17. H. von Halban, F. Joliot, and L. Kowarski, "Liberation of Neutrons in the Nuclear Explosion of Uranium," Nature 143, Issue 3620, pp. 470–471, 1939
18. H.L. Anderson, E. Fermi, and H.B. Hanstein, "Production of Neutrons in Uranium Bombarded by Neutrons," Phys. Rev. 55, 797–798 (1939)
19. S. Groueff, "Manhattan Project: The Untold Story of the Making of the Atomic Bomb," Little, Brown and Company, 1967
20. Moments of Discovery: Discovery of Fission, The American Institute of Physics, https://history.aip.org/exhibits/mod/fission/fission1/10.html (Accessed April 2022)
21. D. Listwa, "Hiroshima and Nagasaki: The Long Term Health Effects," Center for Nuclear Studies, Columbia University, 2012, https://k1project.columbia.edu/news/hiroshima-and-nagasaki (Accessed April 2022)
22. J.J. Duderstadt and L.J. Hamilton, Nuclear Reactor Analysis, John Wiley & Sons, Inc., 1976
23. Independent Statistics & Analysis, U.S. Energy Information Administration, Electric Power Annual, https://www.eia.gov/electricity/annual/ (Accessed April 2022)
24. AREVA
25. J. Vujic, R. Bergmann, M. Miletic, and R. Skoda, "Small Modular Reactors: Simpler, Safer, Cheaper?", Energy: The International Journal (Elsevier, ISSN 0360-5442), Vol. 45, Issue 1, pp. 288–295 (2012)
26. B. Petrovic, M.D. Carelli, E. Greenspan, M. Milosevic, J. Vujic, E. Padovani, and F. Ganda, "First Core and Refueling Options for IRIS," Proc. of ICONE 10, 10th International Conference on Nuclear Engineering, April 14–18, 2002, Arlington, VA (2002)
27. G. Zhang, E. Greenspan, A. Jolodoski, and J. Vujic, "SFR with Once-Through Depleted Uranium Breed & Burn Blanket," Progress in Nuclear Energy, Vol. 82, pp. 2–6 (2015)
28. J.E. Seifried, G. Zhang, C.R. Varela, P.M. Gorman, E. Greenspan, and J.L. Vujic, "Self-sustaining thorium-fueled reduced moderation BWR feasibility study," Energy Procedia (Elsevier), Vol. 71, pp. 69–77, 2015
29. F. Ganda, F. Arias, J. Vujic, and E. Greenspan, "Self-Sustaining Thorium Boiling Water Reactors," Sustainable Nuclear Energy, Vol. 4, pp. 2472–2497 (2012)
30. E. Greenspan, N.W. Brown, M.D. Carelli, L. Conway, M. Dzodzo, Q. Hossain, D. Saphier, J.J. Sienicki, and D.C. Wade, "The Encapsulated Nuclear Heat Source Reactor – A Generation IV Reactor," Proc. of GLOBAL-2001, Paris, France, September 2001
31. A. Chebeskov, "SVBR-100 Module-Type Fast Reactor of the IV Generation for Regional Power Industry," The 4th Asia-Pacific Forum on Nuclear Technology: The Small and Medium Reactors, UC Berkeley, June 17–19, 2010
32. C. Smith, "Lead-cooled Fast SMRs: (S)STAR, ENHS and ELSY," The 4th Asia-Pacific Forum on Nuclear Technology: The Small and Medium Reactors, UC Berkeley, June 17–19, 2010
33. D. Barnes, M. Milosevic, H. Sagara, E. Greenspan, J. Vujic, K. Grimm, R. Hill, S.G. Hong, and Y.I. Kim, "The ENHS Core Benchmark," PHYSOR 2002: The International Conference on the New Frontiers of Nuclear Technology: Reactor Physics, Safety and High-Performance Computing, Seoul, Korea, October 7–10, 2002
34. E.E. Lewis and W.F. Miller, Jr., Computational Methods of Neutron Transport, John Wiley & Sons (1984)
35. R.M. Bergmann and J.L. Vujic, "Optimizing Monte Carlo Transport on a GPU," Joint International Conference on Supercomputing in Nuclear Applications and Monte Carlo 2013 (SNA+M&C 2013), Paris, France, October 27–31, 2013
36. W.R. Martin, J. Vujic, and T.C. Wan, "Monte Carlo Particle Transport on Parallel Processors," Proc. of 12th IMACS World Congress, Paris, France (July 1988)
37. J.L. Vujic and W.R. Martin, "Two-Dimensional Collision Probability Algorithm with Anisotropic Scattering for Vector and Parallel Processing," Proc. of PHYSOR-90, International Conference on the Physics of Reactors: Operations, Design and Computation, Marseille, France, April 23–27, 1990, Vol. IV, p. PIV-104 (1990)
38. J.L. Vujic and W.R. Martin, "Solution of Two-Dimensional Boltzmann Transport Equation on a Multiprocessor IBM 3090," Proc. of ASME International Computers in Engineering Conference and Exposition, Boston, Massachusetts, August 5–9, 1990, Vol. 2, p. 545 (1990)
39. J.L. Vujic and R.N. Blomquist, "Collision Probability Method in Arbitrary Geometries – Preliminary Results," Trans. Am. Nucl. Soc., 62, 286 (1990)
40. J.L. Vujic, "GTRAN2: A General Geometry Transport Theory Code in 2D," Proc. Int. Topl. Mtg. on Advances in Mathematics, Computations, and Reactor Physics, Pittsburgh, PA, April 28 – May 2, 1991, Vol. 5, p. 30.4 1-+-1, American Nuclear Society (1991)
41. J.L. Vujic, "Validation of the GTRAN2 Transport Method for Complex Hexagonal Assembly Geometry," Trans. Am. Nucl. Soc., 64, 523 (1991). Best Paper Award, The 1991 ANS Winter Meeting, San Francisco, CA, November 10–14, 1991, awarded by the Reactor Physics Division of the American Nuclear Society
42. J. Vujic, "Neutron Transport Analysis for Nuclear Reactor Design," United States Patent No. 5,267,276, issued November 30, 1993; filed November 12, 1991
43. S. Slater and J. Vujic, "Parallel Solutions of the Neutron Transport Equation in Three Dimensions by Collision Probability Method," Trans. Am. Nucl. Soc., 78, 112 (1998)
44. T. Postma and J. Vujic, "The Method of Characteristics in General Geometry with Anisotropic Scattering," Proc. of the International Conference on Mathematics and Computation, Reactor Physics and Environmental Analysis in Nuclear Applications, Madrid, Spain, September 27–30, 1999, Vol. 2, pp. 1215–1234 (1999)
45. J. Vujic, T. Jevremovic, T. Postma, and K. Tsuda, "MOCHA: An Advanced Method of Characteristics for Neutral Particle Transport," Nuclear Sciences Bulletin, 1–4/98 (1998)
46. T. Jevremovic, J. Vujic, and K. Tsuda, "ANEMONA – A Neutron Transport Code for General Geometry Based on the Method of Characteristics and R-Function Solid Modeler," Annals of Nuclear Energy, 28, Issue 2, pp. 125–152 (2001)
47. S. Slater and J. Vujic, "Optimization of Parallel Solution of Integral Transport Equation by Utilizing Domain Decomposition," Proc. of the Joint International Conference on Mathematical Methods and Supercomputing for Nuclear Applications, Saratoga Springs, NY, October 5–10, 1997, Vol. 1, pp. 581–590 (1997) (Invited paper)
48. Powering the World's Fastest Supercomputers, NVIDIA, https://www.nvidia.com/en-us/industries/supercomputing/ (Accessed April 2022)
49. TOP500, https://www.top500.org/lists/top500/2021/11/ (Accessed April 2022)
50. R. Bergmann and J. Vujic, "Algorithmic Choices in WARP – A Framework for Continuous Energy Monte Carlo Transport in General 3D Geometries on GPU," Annals of Nuclear Energy, 77, pp. 176–193 (2015)
51. R. Bergmann, K. Rowland, N. Radnovic, R. Slaybaugh, and J. Vujic, "Performance and Accuracy of Criticality Calculations Performed Using WARP, A Framework for Continuous Energy Monte Carlo Neutron Transport in General 3D Geometries on GPUs," Annals of Nuclear Energy, Vol. 103, pp. 334–349 (2017)
Jasmina Vujic, today renowned as a pioneer in computer modeling for nuclear reactor analysis and design, received an M.S. degree in 1987 and a Ph.D. degree in 1989 in Nuclear Science from the University of Michigan, Ann Arbor. After graduation, she worked at Argonne National Laboratory in the Reactor Analysis Division for 3 years before joining the faculty at UC Berkeley in 1992.

With a well-honed taste for the most challenging and difficult classes as a young girl, particularly those she was warned away from, Jasmina Vujic excelled in sports, math, and physics growing up in Sabac, Serbia, a small town about 50 miles west of Belgrade. "But I always wanted to study nuclear engineering," she says. "It was the most exotic of the engineering physics fields, something unusual and different." When she was 17, she moved to Belgrade to begin her university studies in electrical engineering at the University of Belgrade. A medal-winning high school athlete who went on to play college basketball, Vujic also won her native city's scholarship for college studies, honoring her as one of Sabac's best students. She soon shifted her studies to engineering physics, then into nuclear engineering, earning both her B.S. and M.Sc., the latter while working full time. Between sports and school work, Vujic had little time for anything else, making it easy to take her mother's sage advice: "Just study. Don't worry about cooking." As a result, she says, her husband often teased her that she had gotten married before she even learned how to fry an egg. "It was the early 1970s and nuclear energy was a booming topic," says Vujic. "An idea of my high school hero, Nikola Tesla, 'free energy for all,' resonated strongly with me." Her first job, at the Vinca Nuclear Science Institute in Belgrade on a radiation protection project, included computer simulations of radiation interactions with matter (including the human body), a field she would return to years later.
As the former Yugoslavia began disintegrating, Vujic was encouraged by her lab director at the Institute to pursue her doctoral studies in the USA. With her husband, an electrical engineer, and young daughter, Vujic left her country for Ann Arbor. “I came to the University of Michigan knowing no English at all,” she says. “I learned it all there. I took most of my courses with two guys from the Nuclear Navy pursuing their M.S. degrees, and we formed an agreement that I would help them with their homework and they would teach me English.”
In Belgrade in the late 1970s and early 1980s, researchers at the Vinca Institute had access to a single mainframe computer with punched computer cards as input. It was hard to be productive when the computer turnaround time was weeks instead of seconds. Initially awed by the sophistication, abundance, and easy access American students had to advanced computers, she immersed herself in the then-nascent studies of parallel processing, with the goal of speeding up nuclear reactor analysis. She earned a second master of science degree and a doctorate in nuclear science, both from the University of Michigan, in 4 years.

At Argonne National Laboratory, where she worked for 3 years following graduation, Vujic earned a patent for her work developing simulation methods for advanced reactor designs, as well as an Exceptional Performance Award. The methods she developed were later selected by the Department of Energy for advanced reactor analysis, while General Electric picked up her breakthrough software, which for the first time coupled, as she says, "the Monte Carlo codes' excellent geometrical flexibility with the speed and efficiency of deterministic methods."

But apparently the destiny of this daughter of two elementary school teachers was also to teach. "It must have been in the genes," says Vujic of her offer to teach at UC Berkeley in 1992. "It was the chance of a lifetime. I was raising my daughter, who graduated from Berkeley in civil engineering in 2002, at the time. So, I walked the crazy tightrope of intense work and family during my first few years at Berkeley." She broke the mold as a young girl and continued to do so at Berkeley. The first woman to join Berkeley's nuclear engineering department in its 47 years, she also chaired the department from 2005 to 2009, the first woman to do so among the top NE departments in the USA.
In 2009/2010 she also chaired the US Nuclear Engineering Department Heads Organization (NEDHO), representing all 50 nuclear engineering departments and programs in the nation. Vujic teaches courses in introduction to nuclear engineering, nuclear reactor theory, radiation biophysics, and numerical methods in reactor design and analysis. "I love to teach young people and watch my students develop and learn." Her research includes reactor analysis and design, numerical methods in reactor core analysis, high-performance computing, biomedical applications of radiation, nuclear security, and nonproliferation. She is the author of close to 300 technical publications and three books, and editor of six monographs and international conference proceedings. Under her mentorship, 30 students received Ph.D. degrees and 30 received M.S. degrees. Over the last 10 years, Vujic has attracted over $80 million in research funding as a Principal Investigator (PI), and an additional $6 million as a co-PI. She established two research centers at Berkeley, including the Nuclear Science and Security Consortium, supported by DOE-NNSA grants. Vujic currently leads teams from 11 major universities, consisting of more than 200 students, faculty, and researchers, with the goal of supporting the nation's nuclear nonproliferation mission. She is a Fellow of the American Nuclear Society. Vujic has also been elected as an international member of the Academy of Engineering Sciences of Serbia and the Academy of Sciences and Arts of the Republic of Srpska.
Chapter 7
Security of Electricity Supply in the Future Intelligent and Integrated Power System Gerd H. Kjølle
7.1 Security of Electricity Supply in a Changing Power System

Society is critically dependent on electricity to maintain its functionality and cover basic needs such as food and water supply, heating, safety, financial services, etc. A secure electricity supply is vital for value creation in industry and other business sectors. This will become even more important in the future as the various sectors are gradually electrified to substitute fossil fuels in order to meet the climate targets for greenhouse gas reductions [1].

The sustainability targets for the future cannot be met unless the necessary transformation of the power system is addressed. The power grids of today are ageing infrastructures not designed for the integration of vast amounts of intermittent electricity generated by renewables (such as wind and solar), or for the increasing electrification, e.g. of the transport sector. The power system, especially at the distribution level, needs to be digitalised, i.e. to utilise sensors, communication technologies, and other smart components. This makes it possible to monitor and operate the electricity grid in new ways – to improve usage of the existing grid and reduce the need for new investments. The digitalised power system, the Smart Grid, will help to realise the flexibility resources1 inherent in electricity production, consumption, and storage [2].
1 Flexibility is defined as the ability and willingness to modify generation injection and/or consumption patterns, on an individual or aggregated level, often in reaction to an external signal, to provide a service within the energy system or maintain stable grid operation. Flexibility resources (or flexible resources) can be defined as production and/or consumption resources, and/or energy
G. H. Kjølle () SINTEF Energy Research, Trondheim, Norway e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 J. S. Tietjen et al. (eds.), Women in Power, Women in Engineering and Science, https://doi.org/10.1007/978-3-031-29724-3_7
Fig. 7.1 Sustainability – the energy trilemma
Providing electricity with a high security of supply, which is also affordable, while at the same time meeting environmental goals, is often termed the energy trilemma2 (illustrated in Fig. 7.1), as these three goals are partly contradictory. In the energy trilemma, the aim is to find an optimal balance between the three. The future sustainable power system is environmentally friendly, secure, and cost-efficient. The transformation of the electricity system should be made at acceptable cost, without jeopardising the security of electricity supply, which is increasingly important in an ever more electrified society. The challenges regarding security of electricity supply in the future power system are discussed below.
7.2 How Do We Understand the Security of Electricity Supply?

Security of supply means the ability of the electric power system to supply end-users with electricity of a certain quality on a continuous basis.3 The security of supply is composed of four main elements, as illustrated in Fig. 7.2:

storage where injected or consumed power can be modified on an individual or group level, as a response to, e.g. a price signal [2]. See also [3].
2 A trilemma might be interpreted as a choice between three unfavourable options, a trade-off between three goals, in which two are pursued at the expense of the third. The objective might be to achieve all three goals within the preferences and interests of the stakeholders in question. (For instance, the World Energy Council (WEC) has developed an energy trilemma index, based on the three dimensions: energy security, energy equity (accessibility and affordability), and environmental sustainability, https://www.worldenergy.org/transition-toolkit/world-energy-trilemma-index.)
3 As defined by the Norwegian Energy Regulatory Authority. In the European Directive 2005/89/EC concerning measures to safeguard security of electricity supply and infrastructure investment, security of electricity supply means the ability of an electricity system to supply final customers with electricity.
Fig. 7.2 Security of electricity supply
• Energy availability: availability of primary energy resources to produce electricity
• Power capacity: available capacity in generation, transformers, and power lines to cover the instantaneous demand
• Voltage quality: the quality of the supply voltage at the end-user
• Reliability of supply: the ability of the power system to supply electricity to the end-users, also termed continuity of supply, related to the frequency and duration of power supply interruptions

In addition, the operational reliability (also referred to as power system security) is part of the ability to supply end-users with electricity on a continuous basis. It can be defined as the ability of the power system to withstand sudden disturbances such as short circuits or non-anticipated loss of system components [4]. Power system security refers to the degree of risk in its ability to survive imminent disturbances (contingencies) without interruption of customer service.

Energy availability and power capacity are typically measured by the energy and power balance within an area or country, considering import/export possibilities. Reliability of supply is measured by power system failures and their consequences in terms of the number and duration of interruptions [5]. The degree of reliability may be measured by the frequency, duration, and magnitude of adverse effects on customer service [6]. The severity of interruptions may also be measured using indices like interrupted power, energy not supplied, and the corresponding societal cost.

The main unwanted events which can threaten the security of electricity supply are energy shortages, capacity shortages, power system failures, or combinations of these [6]. The consequences of shortages for society and end-users can be extremely high prices or curtailment, while failures can cause wide-area power supply interruptions (blackouts) and major harm to society [7]. Energy and capacity
Fig. 7.3 Energy not supplied (ENS) and cost of energy not supplied (CENS) in Norway 2009–2020
shortage, or situations where components are out for maintenance or other causes, may give rise to strained power situations, increasing the probability of wide-area interruptions. This shows that there are interactions between the different elements of security of supply that must be considered.

Figure 7.3 shows an example, for Norway, of the consequences of power supply interruptions, measured in terms of energy not supplied (ENS) to end-users and the corresponding cost of energy not supplied (CENS) according to the CENS scheme used in the incentive-based regulation of grid companies in Norway [8]. The data presented in Fig. 7.3 are collected using the FASIT system for reliability data management in Norway [9]. The figure shows the total ENS and CENS covering all system levels and end-user interruptions. The consequences of interruptions vary from year to year, mainly due to the weather, which is the major cause of interruptions in Norway. It can be observed that in 2011 and 2013 both total ENS and CENS were considerably larger than in other years, affected by major storms.

Different systems are in use for collecting data on failures and interruptions. There is no standardised system in place across countries covering all voltage levels, failures, interruption events, etc. The information collected, the indicators in use, and how they are calculated vary. This makes it difficult to compare reliability of supply levels between countries. Work is ongoing in Europe on harmonisation and standardisation of systems, indicators, etc. [e.g. 5, 10]. There are examples of standardised systems for collecting and reporting such data [9, 11]. The FASIT system is a standardised system for reliability data management in Norway, in use by all grid companies and the transmission system operator (TSO) Statnett [9]. In FASIT, data on component failures and end-user interruptions are collected for all voltage
Fig. 7.4 Failure causes for the transmission system 33–420 kV, period 2009–2014, left: (a) no. of failures, right: (b) ENS
levels. The overall reliability of supply is about 99.98%, meaning that, on average, Norwegian end-users experience interruptions of 2–3 hours per year.

Figure 7.4 shows an example of failure causes for the transmission system (33–420 kV), from the FASIT system for the period 2009–2014. The left part of the figure shows the distribution of triggering failure causes, while the right part shows the causes of energy not supplied (ENS). For both indicators, environmental causes comprise the major part: in this period, almost 80% of the ENS had environmental causes, again mainly due to the major storms. At these voltage levels, technical failures and human causes represented 21% and 12% of the failures, respectively; however, they caused a relatively smaller portion of the consequences.
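The quantities used above (ENS, CENS, and the availability figure) can be tied together in a small sketch. The interruption events and the cost rate below are invented for illustration; they are not FASIT data or the actual rates of the Norwegian CENS scheme.

```python
# Each interruption event: (interrupted power in MW, duration in hours,
# cost rate in NOK per kWh not supplied) -- all numbers are illustrative.
interruptions = [
    (120.0, 1.5, 110.0),
    (40.0, 0.5, 110.0),
    (300.0, 4.0, 110.0),
]

# Energy not supplied (ENS): sum of interrupted power x duration, in kWh.
ens_kwh = sum(p_mw * 1000 * h for p_mw, h, _ in interruptions)

# Cost of energy not supplied (CENS): ENS weighted by the cost rate.
cens_nok = sum(p_mw * 1000 * h * rate for p_mw, h, rate in interruptions)

# A reliability of supply of 99.98% corresponds, over the 8760 hours of a
# year, to roughly this many interrupted hours per average end-user:
hours_interrupted_per_year = (1 - 0.9998) * 8760   # about 1.75 h/year
```

The last line shows how the reported ~99.98% availability translates into the "on the order of a couple of hours per year" figure for the average end-user; in practice the CENS rate also varies by customer group and time of interruption.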
7.3 Extraordinary Events in Power Systems

The power system at the transmission level is usually dimensioned and operated according to the N-1 criterion,4 meaning that the system should withstand the loss of a single principal component without causing interruptions of electricity supply [e.g. 12]. Power system failures occur occasionally in both the transmission and the distribution systems, most often with minor consequences. Distribution systems are still mostly operated as radials, and a failure will, in general, lead to an interruption. The duration may be rather limited, depending on reserve supply possibilities.
4 The N-1 criterion is a principle according to which the system should be able to withstand at all times a credible contingency – i.e. unexpected failure or outage of a system component (such as a line, transformer, or generator) – in such a way that the system is capable of accommodating the new operational situation without violating operational security limits [12]. Note 1: The N-1 criterion is a deterministic reliability criterion. Note 2: There is no common definition of N-1 in literature. It is defined in several different ways which are more or less similar.
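The deterministic N-1 check described in the footnote can be sketched as a screening loop over single outages. This is a toy illustration only: the line names, flows, and the naive equal-redistribution rule are invented, whereas real contingency analysis tools re-solve the power flow equations for each outage:

```python
# Toy N-1 contingency screening: remove each component in turn and check
# whether the remaining system stays within its operational limits.

def n_minus_1_secure(flows: dict[str, float], capacities: dict[str, float]) -> list[str]:
    """Return the list of single outages that would violate a line limit."""
    violations = []
    for outage in flows:
        remaining = [line for line in flows if line != outage]
        if not remaining:
            continue  # total islanding; out of scope for this toy model
        # Naive assumption: the lost line's flow spreads equally over the rest.
        extra = flows[outage] / len(remaining)
        for line in remaining:
            if flows[line] + extra > capacities[line]:
                violations.append(outage)
                break
    return violations

flows = {"L1": 60.0, "L2": 40.0, "L3": 30.0}        # MW, hypothetical
capacities = {"L1": 100.0, "L2": 80.0, "L3": 50.0}  # MW, hypothetical

# Losing L1 would overload L3, so this system is not N-1 secure.
print(n_minus_1_secure(flows, capacities))  # -> ['L1']
```

An empty result would mean the system withstands every single credible contingency, which is the operational meaning of the criterion.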
G. H. Kjølle
Severe consequences of interruptions will most likely be caused by combinations of events, such as two or more failures in the transmission grid, malfunctioning of the protection system together with a grid failure, a grid failure overlapping an outage of a large power plant, or a storm causing damage to power lines. In distribution systems, failures with severe consequences may, for example, be those resulting in loss of service in interconnected infrastructures such as transport and telecommunication. Analyses of historic blackouts, e.g. [13–17], show that their causes are often at the system or organisational level, representing a combination of factors, for example [18]: • Strong winds and treefall (common causes) resulting in extensive damage to power lines [19] • Malfunction of critical equipment such as protection and cable joints or sleeves [20, 21] • Strained operating situations where the system is operated close to its limits [16, 22] • Human factors – lack of situational awareness and lack of coordination and planning of activities that may have an impact on the power system (for example, digging work) [13, 21] Extraordinary events, involving coinciding failures and severe consequences, are events with a high societal impact and a low probability of occurrence, often referred to as HILP (High Impact Low Probability) events. Many extraordinary events (or blackouts) that have occurred during the last decades are thoroughly described in the literature, e.g. [6, 13, 17, 23]. Examples are shown in the two-dimensional consequence diagram in Fig. 7.5, where the blackouts are classified according to the amount of interrupted power (MW) and stipulated average (weighted) duration. The largest in terms of interrupted power was the US/Canada blackout in 2003, while the largest in terms of interruption duration was the Canadian ice storm in 1998. Most of the events in Fig. 7.5 occurred in the transmission system.
The storm Gudrun in Sweden, and the Steigen and Oslo S. events in Norway, are examples mainly affecting the distribution system and partly the sub-transmission. The latter two were more local in character; still, they are regarded as extraordinary events due to the very long interruption duration (Steigen: the entire community was affected for 6 days) and the loss of service to critical infrastructures (Oslo S: railway traffic, telecommunication, Internet). Extraordinary events in power systems typically fall within two main categories. Events due to natural hazards (marked with a square in the figure) are characterised by extensive physical damage. Events in the second category (marked with a dot) are of various types, such as technical failures, operational failures, and human errors. They often involve a multitude of human, organisational, and technical factors, e.g. the US/Canada 2003 event [15]. Such events typically affect a larger part of the system (shown by a larger interrupted power) with a relatively short duration compared to the weather-related events. The Europe event marked with a triangle is the largest blackout that has occurred in Europe, in November 2006 [16]. This event differs
Fig. 7.5 Examples of historic blackouts – consequence dimensions stipulated duration and interrupted power, based on [18]
from the others mentioned above as it was triggered by a planned outage (not by failures) of two lines for ship passage. This led to overloading and a sequence of events – cascading loss of lines, frequency drop, etc. – affecting 15 million end-users. As a result, the UCTE European interconnected transmission system at that time was divided into three electrical islands. In the study of blackouts worldwide reported in [17], the blackouts were classified according to natural calamities (heavy storms, earthquakes, hurricanes, lightning strikes, and temperature fluctuations), transmission and generation failure, cyber issues, and human/equipment/unknown error. The natural calamities accounted for about 90% of the consequences in terms of power outage duration, while cyber issues accounted for 0.7%. Emerging cyber risks are of great concern along with the digitalisation of the power system and its dependence on communication and control systems. Various attempted cyberattacks on power systems have been reported around the world, such as data/security breaches, malware, denial of service, etc. [17]. So far, there have been few incidents of cyberattacks causing large-scale power outages. The Ukraine attack in 2015 is the first known event of a larger scale, affecting several power stations for a few hours; however, it took a much longer time to restore system operation and control functions. To illustrate the variety of failures, both in information systems and in the power system itself, and the complexity that can be involved in a blackout event, the course of events for the Northeast blackout 2003 (US/Canada) is shown in Table 7.1 below, based on [15, 23, 24]. The blackout was caused by a rolling cascade stemming from inadequate response to failures in Ohio: a combination of the loss of an important generation unit and the tripping of several 345 kV lines due to line sagging and contact with overgrown vegetation.
The system operator failed to deal with these events due to loss of computer systems’ alarm and surveillance functions earlier that day, allowing a cascade to sweep over large areas of the USA and Canada. Lack of joint
Table 7.1 Overview of the course of events for the Northeast blackout 2003 (US/Canada), based on [15, 23, 24]
USA/Canada, August 14, 2003 – overview of the course of events (note a). The map highlights the affected regions.

Time | Type | Event
Aug 14 | (info) | System in normal state, within prescribed limits. High, but not abnormally high loads
12:15 | (info) | Erroneous input data put the Midwest Independent System Operator's state estimator and real-time contingency analysis tool out of service
13:31 | Failure | Generator at Eastlake power plant trips – loss of an important source of reactive power
14:02 | Failure | 345 kV line trips due to tree contact caused by high temperature and line sagging
14:14 | Failure | Control room operators at First Energy lose the alarm function (with no one in the control room realising this)
15:05–15:41 | Failures | Three 345 kV lines into the Cleveland-Akron area trip due to tree contact. Load increase on other lines
15:42 | (info) | Operators at First Energy begin to realise that their computer system is out of order and that the grid is in serious jeopardy
– | Failures | Decreased voltage and increased loading of the underlying 138 kV system cause 16 lines to fail in rapid order
16:06 | Failure | Loss of the 345 kV Sammis-Star line between eastern and northern Ohio due to overload. Triggers the cascade: uncontrolled power surges and overloads cause relays to trip lines and generators. Northeastern US and Ontario form a large electrical island, which quickly becomes unstable due to lack of generation capacity to meet the demand
16:13 | Cascade | Further tripping of lines and generators breaks the area into several electrical islands, and most of these black out completely. Some smaller islands with sufficient generation manage to stabilise. Cascade over. 50 million people deprived of power
Aug 15 | Restoration | Approx. 80% of electricity restored
Aug 22 | Restoration | Restoration completed

(a) As indicated, two types of failures contributed to this event: failures in the information system and failures in the power system itself (lines and generators)
procedures, clear responsibility, and mandatory reliability standards in the larger interconnected grid have also been identified as major causes.
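The self-reinforcing overload mechanism at the heart of such a cascade can be illustrated with a toy model. The line names, flows, capacities, and the equal-redistribution rule below are all invented for illustration; real cascades involve power flow physics, protection logic, and operator actions:

```python
# Toy cascading-overload model: when a line trips, its flow is shed onto the
# remaining lines; any line pushed beyond its capacity trips in the next round.

def cascade(flows: dict[str, float], capacities: dict[str, float],
            initial_outage: str) -> list[str]:
    """Return the sequence of tripped lines, starting from one initial outage."""
    tripped = [initial_outage]
    flows = dict(flows)               # work on a copy
    shed = flows.pop(initial_outage)  # flow that must go somewhere else
    while flows:
        extra = shed / len(flows)     # naive equal redistribution
        for line in flows:
            flows[line] += extra
        overloaded = [l for l, f in flows.items() if f > capacities[l]]
        if not overloaded:
            break                     # system re-stabilises
        shed = sum(flows.pop(l) for l in overloaded)
        tripped.extend(overloaded)
    return tripped

flows = {"A": 50.0, "B": 45.0, "C": 30.0, "D": 20.0}       # MW, hypothetical
capacities = {"A": 60.0, "B": 55.0, "C": 60.0, "D": 60.0}  # MW, hypothetical

# A single trip can take the whole system with it when margins are small:
print(cascade(flows, capacities, "A"))  # -> ['A', 'B', 'C', 'D']
```

With larger margins the same initial outage is absorbed after one round, which is the behaviour the N-1 criterion is meant to guarantee.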
7.4 Power System Risk, Vulnerability, and Resilience
Learning from past events, such as the blackouts described above, is important for the prevention of similar events in the future. The analyses of causes, contributing factors, and the course of actions leading to the events show that they are usually composed of various factors. HILP events are very unlikely, in the sense that they are nearly inconceivable and have a low probability of occurring, but they result in a high impact on the security of electricity supply, such as wide-area power supply interruptions. Conventional probabilistic reliability assessment of power systems attempts to answer three fundamental questions (in analogy to risk analysis [25]): (1) What can go wrong? (2) How likely is it to happen? (3) What are the consequences? HILP events and their consequences are easily missed in conventional risk and reliability assessment, because the probability of the sequences of events is regarded as negligible, the number of sequences is unmanageable, or the causal mechanisms (e.g. human and organisational factors) are not considered or modelled. The analysis of extraordinary events requires approaches complementary to risk and reliability assessment, to better understand and communicate the risks associated with extraordinary events. In recent years, the concepts of vulnerability and resilience have been developed for power systems to complement the traditional reliability of supply analysis approaches appropriate for more frequent, ordinary events. Vulnerability and resilience are defined in various ways in the literature. Herein, we use the following definition of vulnerability [26]: Vulnerability is an expression for the problems a system faces to maintain its function if a threat leads to an unwanted event and the problems the system faces to resume its activities after the event occurred. Vulnerability is an internal characteristic of the system.
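The three questions map naturally onto a contingency enumeration: a set of possible failure events (what can go wrong), each with a probability (how likely) and a consequence (what happens). The sketch below, with invented probabilities and consequences, also shows why expectation-based analysis can be misleading for HILP events:

```python
# Sketch of probabilistic reliability assessment as contingency enumeration.
# Each contingency: (description, annual probability, ENS consequence in MWh).
# All values are invented for illustration.

contingencies = [
    ("single line outage",          0.2,       5.0),
    ("transformer failure",         0.05,     50.0),
    ("HILP: storm + double fault",  0.001, 20000.0),
]

# Expected ENS aggregates probability and consequence into one number; a HILP
# event with a "negligible" probability can still dominate it - or, if it is
# left out of the contingency list entirely, vanish from the analysis.
expected_ens = sum(p * c for _, p, c in contingencies)
print(round(expected_ens, 2))  # -> 23.5

# Ranking by raw consequence instead of expectation highlights HILP events:
worst = max(contingencies, key=lambda c: c[2])
print(worst[0])  # -> HILP: storm + double fault
```

This is why the text argues for complementary vulnerability and resilience analysis: the expectation hides exactly the tail events that matter most.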
The unwanted event is here understood as one or more power system failures which may lead to power supply interruptions. A system is vulnerable if it fails to carry out its intended function, its capacity is significantly reduced, or the system has problems recovering to normal function [18]. This definition of vulnerability comprises the susceptibility towards threats and the coping capacity to recover from the unwanted event. Figure 7.6 illustrates the internal dimensions (susceptibility and coping capacity) and the external dimensions, threats and criticality (i.e. to society). Threats may be related to natural hazards, humans, or the operational/technical conditions. The term criticality refers to the level of criticality of the consequences to the end-users, dependent on factors as shown in the figure: duration of the interruption, affected population/area, as well as economic and social consequences. The power system is susceptible to a threat if the threat leads to a disruption in the system. Susceptibility depends on, e.g., the technology, the work force, and the organisation. The coping capacity describes how the operator and the system itself can cope with the situation, limit negative effects, and restore the function of the system after a disruption (unwanted event). Many factors influence the vulnerability. These can be sorted into three categories – technical, workforce, and organisational – as shown in Fig. 7.6 and exemplified in Table 7.2.
Fig. 7.6 Illustration of the vulnerability framework [26]. External dimensions: threats (operational/technical – generation/demand; natural hazards – meteorological, terrestrial, extra-terrestrial; human threats – intended/unintended) to which the power system is exposed, and criticality (interruption duration, interrupted power, impacted areas/populations, economic consequences, social consequences, health/life) of the societal consequences. Internal dimensions: susceptibility and coping capacity, each with technical, human-related, and organisational factors

Table 7.2 Vulnerability influencing factors – examples, based on [26]

Technical
– Susceptibility: Technical condition of components; operational stress
– Coping capacity: Equipment for repair; spare parts

Human related (work force)
– Susceptibility: Availability of skilled personnel; competence in condition assessment
– Coping capacity: Availability of personnel; operative competence and situational awareness; skills in system restoration and repair of critical components

Organisational
– Susceptibility: Availability of information; coordination between system operators; structure of the electricity sector
– Coping capacity: Availability of communication; coordination of restoration; contingency plans; operational security limits
The relationship between the four dimensions of vulnerability as shown in Fig. 7.6 can further be illustrated by the bow tie model shown in Fig. 7.7. This model is a concept to help structure and visualise the causes and consequences of unwanted events (here power system failures). As indicated in the figure, several barriers exist to prevent threats from developing into unwanted events and to prevent or reduce the consequences of unwanted events. Barriers are associated with either the susceptibility or the coping capacity of the power system. A system is more vulnerable to the relevant threats if the barriers are either missing, weak, or malfunctioning. Barriers on the left-hand side are intended to prevent threats from causing power system failures and thus reduce the probability. These barriers can be associated with preventive actions taken by the system operator. Barriers to the right are corrective actions (automatic and manual operator response with respect to certain power system failures) and actions taken to restore normal operation after power supply interruptions. As can be seen, the vulnerabilities are closely related to the barriers. Examples of barriers affecting susceptibility and coping capacity, respectively, are illustrated in Table 7.3 in relation to major storms. More examples of barriers are given in [26].
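The bow tie logic can be sketched as a small data structure: preventive barriers sit between threat and unwanted event, while corrective and restorative barriers sit between event and consequence, and a missing or weak barrier raises vulnerability. The barrier names, states, and the halving rule below are purely illustrative:

```python
# Toy bow-tie representation: barriers on the left-hand side reduce the
# probability that a threat becomes an unwanted event; barriers on the
# right-hand side limit its consequences. All values are invented.

from dataclasses import dataclass

@dataclass
class Barrier:
    name: str
    functioning: bool

def event_probability(base: float, preventive: list[Barrier]) -> float:
    """Each functioning preventive barrier halves the event probability (toy rule)."""
    for b in preventive:
        if b.functioning:
            base *= 0.5
    return base

preventive = [
    Barrier("vegetation management", True),
    Barrier("condition monitoring", False),  # weak barrier -> higher susceptibility
]
corrective = [
    Barrier("restoration plan", True),
    Barrier("spare parts and repair crews", True),
]

p = event_probability(0.1, preventive)
print(p)  # only one of the two preventive barriers is functioning

# Functioning right-hand-side barriers determine the coping capacity:
print([b.name for b in corrective if b.functioning])
```

In a barrier analysis such as Table 7.4, each non-functioning entry in a structure like this marks an improvement potential.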
Fig. 7.7 Bow tie model associated with power system vulnerability [26]. Threats (causes) on the left lead to the unwanted event (power system failures) in the centre, which leads to consequences (criticality: interruption duration, interrupted power, impacted areas/populations, economic consequences, social consequences, health/life) on the right. Barriers related to susceptibility sit on the left-hand side, barriers related to coping capacity on the right-hand side

Table 7.3 Examples of vulnerability indicators in relation to major storms, based on [19]

Threat: Storm (wind prognosis: speed, direction, duration)
Susceptibility: Location in the terrain (exposure); vegetation management; technical condition of power lines; competence on condition evaluation; competence on risk and vulnerability analysis
Coping capacity: Competence on repair of power lines; availability of spare parts and transport for repair of power lines; availability of communication systems and reserve generators
Criticality: Location of critical loads; types of end-users; temperature
Revisiting the historic blackout events in Fig. 7.5, a summary of the analysis of barriers is given in Table 7.4. The table shows how different (malfunctioning or lacking) barriers contributed to the course of events for the analysed blackouts. The improvement potential and the costs of strengthening the barriers will vary significantly between the different barriers identified. The vulnerability concept and bow tie model presented above [26] have been further developed to incorporate cyber security threats and cyber–power system interdependencies, providing decision support for controlling risks in cyber-physical systems [27]. The relationship between the concept of vulnerability as described above and power system resilience is illustrated in Fig. 7.8. Resilience is here defined5 as the inverse or dual of vulnerability: Resilience is an expression for the ability of a system to maintain its function if a threat leads to an unwanted event and the ability of the system to resume its activities after the event occurred. 5 In the literature, resilience is defined and interpreted in different ways. IEEE defines resilience as follows: The ability to withstand and reduce the magnitude and/or duration of disruptive events, which includes the capability to anticipate, absorb, adapt to, and/or rapidly recover from such an event [28].
Table 7.4 Malfunctioning barriers as contributing factors to the historical events. The circles (orange for weather-related blackouts, grey for others) indicate that the barriers have a potential for improvement. Events (columns): Oslo 07, Steigen 07, Sweden 05, W. Norway 04, SE/DK 03, US/CA 03, Canada 98. Barriers (rows):

Barriers to prevent component failure
• Strength and design of construction
• Vegetation management and adequate choice of right-of-ways
• Condition monitoring and preventive measures
Barriers to prevent power system failure
• Design and dimensioning criteria for the system
• Redundancy, transmission, and generation capacity
• System operator response
• Generator house-load operation and black start
• Adequate protection schemes
Barriers to prevent long-term power system failure
• Good and known restoration plan
• Access to personnel and material
• Communication
• Coordination and clarification of responsibility
Barriers to reduce end-users' consequences
• Alternative energy supply
• Back-up in connected infrastructure
• Information to the public

Legend: Full circle •: Improvement potential. Semi-full circle ○: Some improvement potential
As indicated in Fig. 7.8, a high resilience implies a low vulnerability and vice versa. Resilience is commonly characterised in the literature by the two dimensions "robustness" and "rapidity". These dimensions are shown in Fig. 7.8b, a diagram similar to the two-dimensional diagram in Fig. 7.5. The left-hand side of the figure (Fig. 7.8a) shows the time development of a power supply interruption event with the two dimensions interrupted power and interruption duration. Resilience encompasses all hazards and events, including HILP events that are commonly not revealed through traditional reliability calculations [28].

Fig. 7.8 (a) The progression of an extraordinary event: power supplied (MW, system performance) versus time (h), from the unwanted event (power system failures) until power supply is restored, where the interrupted power (MW) corresponds to the "reverse of" robustness and the interruption duration (h) to rapidity; and (b) the relationship between vulnerability and resilience according to the two consequence dimensions interrupted power and interruption duration, where high vulnerability corresponds to low resilience [26]

The resilience concept involves a more detailed characterisation of the preparations prior to the event (preventive actions), the process during the event (including corrective actions), and the restoration process. Similarly to the vulnerability analysis, resilience analysis aims to capture the dimensions of criticality and human, organisational, and technical factors.
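The robustness and rapidity dimensions of Fig. 7.8 can be made concrete by computing both from a system performance time series. The hourly values below are invented for illustration:

```python
# Compute resilience-related indicators from a toy time series of power
# supplied (MW), sampled hourly. The dip represents the unwanted event.

supplied = [100, 100, 40, 40, 60, 80, 100, 100]  # MW per hour, hypothetical
nominal = 100.0

# Robustness dimension: the largest interrupted power during the event
interrupted_power = max(nominal - s for s in supplied)

# Rapidity dimension: number of hours with degraded supply
interruption_duration = sum(1 for s in supplied if s < nominal)

# Energy not supplied: the area between nominal and actual performance
ens_mwh = sum(nominal - s for s in supplied)

print(interrupted_power)      # -> 60.0
print(interruption_duration)  # -> 4
print(ens_mwh)                # -> 180.0
```

A robust system keeps the first number small; a rapid one keeps the second small; together they bound the ENS area that the resilience curve in Fig. 7.8a depicts.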
7.5 Future Intelligent and Integrated Power System: Challenges and Opportunities
There is a strong need to transform power systems around the world to meet the sustainability targets supporting the transition to the net zero emission society. The energy transition is accelerating, in line with new reports from the UN [29], the IEA [30], and others on accelerating climate change that call for more rapid action. New policies are being shaped, fossil-fuel-based generation is gradually being substituted with renewable generation (wind and solar), and grid companies are facing an influx of electrification requests. The efforts to address climate change are leading to the rapid electrification of transport and industry and to new types of loads, such as data centres and battery factories, driving a massive increase in power demand as well as the need to generate as much of it as possible from renewable sources. The result is a dramatic transformation of power systems globally. This calls for new approaches to how power systems are designed and operated [1]. The power system is gradually transforming from today's centralised system, where large power plants follow the load, to a decentralised system, where the load follows the generation to a much higher degree. According to the ETIP SNET Vision 2050 [31], the energy system is fully integrated (electricity, heating/cooling, gas) in 2050, with the electricity system as the backbone, the customer is fully
Fig. 7.9 The future flexible and intelligent power system – illustration [33]
engaged, and digitalisation is everywhere. Moreover, there will be increased sector coupling between the power system and communication systems, electric transport, hydrogen, etc. [31]. Further, "In 2050 the grids will be fully resilient to endure natural disasters (e.g. due to extreme weather events), physical attacks and cyberattacks" [32]. A flexible energy system is crucial to realise the energy and climate targets, to achieve a sustainable system, and to ensure the security of electricity supply. Electrification is highlighted as one of the most important means of reducing CO2 emissions. The electricity grid is hence a crucial enabler for the energy transition. Intensified electrification will pose new challenges and requirements for the electricity grid, through the connection of more variable renewable distributed generation, new types of electrical loads, and increased demand. There is an increased need for flexibility and reserves in the operation of the power system. To utilise flexibility in generation, consumption, and storage, digitalisation is needed for better monitoring and control. The challenges will differ between the different grid levels, and they must be met in a cost-efficient way without jeopardising the security of electricity supply. To meet these sustainability targets, the future power system should be renewable, flexible, intelligent, secure, and resilient. At the same time, a higher level of electrification will increase society's dependence on electricity and pose increased requirements regarding the security of electricity supply. To deal with the challenges and meet the requirements, power systems are developing into cyber-physical systems (Smart Grids) where the power system is merged with information and communication systems. An illustration of such a future flexible and intelligent power system is given in Fig. 7.9.
The electricity grids are developing into active grids, where sensors, communication technologies, and vast amounts of new data are utilised for improved monitoring, control, and decision support, e.g. for a higher utilisation of the existing grid. The transition to the cyber-physical, flexible, and intelligent electricity grid introduces new types of components, new operating patterns, increased complexity, and new types of (cyber) threats and vulnerabilities, leading to a changed risk picture related to the security of electricity supply. At the same time, the power system will be more exposed to natural disasters in the future due to climate change and increased weather-related stress. It is important to understand the new risk picture and find the right strategies for dealing with the security of electricity supply in the future power system. The current level of security of electricity supply is high; however, it is challenged by new and increasing threats and vulnerabilities: • Natural hazards and extreme weather due to climate change • Cyber threats and interdependencies due to intensified digitalisation and integration • Operational stress due to changed and more strained operation patterns (following electrification, energy system integration, etc.) Climate change poses strong challenges to the future power system through new weather patterns and more extreme weather that might lead to extraordinary events. Power systems are already exposed to and largely affected by climate hazards, such as heatwaves, wildfires, cyclones, and floods, which have been the dominant causes of large-scale outages [34]. Digitalisation, new technologies, and flexibility will, on the one hand, lead to operational stress, cyber threats, and interdependencies with the power system. On the other hand, these advances provide opportunities for new ways of handling the security of supply. The geopolitical development in, e.g. Europe due to the war in Ukraine limits the availability of energy resources, threatening energy security. The higher utilisation of power grids due to increased electrification leads to more strained power balances. There is a risk of energy shortages and capacity shortages in certain situations, depending on the circumstances in different parts of the world. All in all, there is a question of whether we might experience new types of HILP events, due to the increased complexity, dependencies, and increased uncertainties in the power system and its environment. Also, will there be a stronger interaction between the different elements of the security of electricity supply? There are many new knowledge needs related to security of electricity supply in the future flexible and intelligent power system [e.g. 3]. Two major research questions are mentioned here [35]: • How will electrification, digitalisation, and integration influence the security of supply? • How can the new opportunities represented by flexibility and digitalisation be utilised to deal with the security of supply in the future?
7.6 Conclusion
In a rapidly changing power system and a changing world, it is essential to analyse and monitor the development of the security of electricity supply, to ensure a secure and resilient power system. Comprehensive changes are to be expected as we move closer to 2050, and history cannot tell us what to expect. New ideas and approaches are needed to ensure resilience. For example, microgrids and local energy communities are solutions that might make people more self-sufficient. The concepts of vulnerability and resilience, as presented here, can be further developed and used to study these aspects of the security of supply in the future. We need to move from a reliability-oriented to a resilience-oriented approach, comprising, e.g.: new tools for increased situational awareness; new analysis methods and techniques to understand the risks and vulnerabilities and to find the right mix between preventive, corrective, and restorative actions; new concepts and methods for the utilisation of digitalisation and flexibility in the power system; and an increased focus on emergency preparedness and training, including cyber incidents. To manage the green transition, support societal security, and enable future value creation, it is crucial to ensure a stable power supply and safeguard the security of electricity supply. This implies securing energy availability, a satisfactory power capacity, and an acceptable reliability of supply.
References
1. IEA, https://www.iea.org/fuels-and-technologies/electricity
2. G. Kjølle, K. Sand, E. Gramme, Scenarios for the future electricity distribution grid, CIRED 2021, Geneva, 20–23 September 2021, https://doi.org/10.1049/icp.2021.1527
3. I. B. Sperstad, M. Z. Degefa, G. H. Kjølle, The impact of flexible resources in distribution systems on the security of electricity supply: a literature review, Electric Power Systems Research, Vol. 188, November 2020, https://doi.org/10.1016/j.epsr.2020.106532
4. IEEE/CIGRE Joint Task Force on Stability Terms and Definitions, Definition and classification of power system stability, IEEE Transactions on Power Systems, Vol. 19, No. 3, Aug 2004
5. CEER Benchmarking Reports on Quality of Supply, www.ceer.eu, e.g. the sixth report: https://www.ceer.eu/documents/104400/-/-/d064733a-9614-e320-a068-2086ed27be7f
6. NordSecurEl, Risk and vulnerability assessments for contingency planning and training in the Nordic electricity system, Final report, 2009, Statens Energimyndighet, Eskilstuna
7. G. Doorman, K. Uhlen, G. Kjølle, E. Huse, Vulnerability analysis of the Nordic power system, IEEE Transactions on Power Systems, Vol. 21, No. 1, February 2006, https://doi.org/10.1109/TPWRS.2005.857849
8. G. Kjølle, H. Vefsnmo, Customer interruption costs in quality of supply regulation: methods for cost estimation and data challenges, CIRED 2015, Lyon, 15–18 June 2015
9. G. Kjølle, H. Vefsnmo, J. Heggset, Reliability data management by means of the standardised FASIT system for data collection and reporting, CIRED 2015, Lyon, 15–18 June 2015
10. ENTSO-E, https://www.entsoe.eu/news/2021/12/15/entso-e-publishes-a-progress-report-on-probabilistic-risk-assessment-in-europe-1/
11. ENTSO-E, https://eepublicdownloads.azureedge.net/clean-documents/SOC%20documents/Nordic/Nordic_and_Baltic_Grid_Disturbance_Statistics_2020.pdf
12. GARPUR (EU project), Deliverable 1.1, State of the art on reliability assessment in power systems, April 2014, https://www.sintef.no/projectweb/garpur/deliverables/
13. E. Johansson, K. Uhlen, A. Nybø, G. Kjølle, O. Gjerde, Extraordinary events – understanding sequence, causes and remedies, ESREL 2010, Rhodes
14. J. W. Bialek, Blackouts in the US/Canada and continental Europe in 2003: Is liberalisation to blame? IEEE PowerTech 2005, St. Petersburg
15. U.S.-Canada Power System Outage Task Force, Final Report on the 2003 Blackout in the United States and Canada: Causes and Recommendations, 2004
16. UCTE, System disturbance on 4 November 2006, Union for the Co-ordination of Transmission of Electricity (UCTE), 2007
17. N. Sharma, A. Acharya, I. Jacob, S. Yamujala, V. Gupta, R. Bhakar, Major blackouts of the decade: underlying causes, recommendations and arising challenges, 2021 9th IEEE International Conference on Power Systems (ICPS), https://doi.org/10.1109/ICPS52420.2021.9670166
18. G. H. Kjølle, O. Gjerde, M. Hofmann, Vulnerability and security in a changing power system – Executive summary, SINTEF Energy Research, Trondheim, 2013, Report no. TR A7278
19. G. Kjølle, R. H. Kyte, Major storms – main causes, consequences and crisis management, CIRED 2013, Stockholm
20. Analysis of disturbance in Western Norway in February 2004 (in Norwegian: «Analyse av driftsforstyrrelsen på Vestlandet»), Statnett, 2004
21. Fire in a cable culvert at Oslo Central Station in 2007 (in Norwegian: «Brann i kabelkulvert – Oslo Sentralstasjon 27.11.2007»), Directorate of Societal Security and Emergency Preparedness, 2008
22. E. Hillberg et al., Power system reinforcements – the Hardanger connection, ELECTRA, 2012(260): p. 4–15
23. IEA, Learning from the blackouts: Transmission system security in competitive electricity markets, 2005, IEA, Paris
24. Eurelectric, Power outages in 2003 – Task force power outages, 2004, Union of the Electricity Industry
25. M. Rausand, Risk Assessment: Theory, Methods, and Applications, Vol. 115, 2013, John Wiley & Sons
26. I. B. Sperstad, G. H. Kjølle, O. Gjerde, A comprehensive framework for vulnerability analysis of extraordinary events in power systems, Reliability Engineering & System Safety, Vol. 196, p. 106788, 2020, https://doi.org/10.1016/j.ress.2019.106788
27. I. A. Tøndel, H. Vefsnmo, O. Gjerde, F. Johannessen, C. Frøystad, Hunting dependencies: using bow-tie for combined analysis of power and cyber security, 2nd International Conference on Societal Automation – SA, IEEE, 2021
28. IEEE PES Industry Technical Support Task Force, The definition and quantification of resilience, IEEE, 2018
29. United Nations (UN) climate reports: "Code red for humanity", 2021, and "It's 'now or never' to limit global warming to 1.5 degrees", 2022, https://news.un.org/en/story/2022/04/1115452
30. IEA, Net Zero by 2050 – A Roadmap for the Global Energy Sector, 2021, https://www.iea.org/reports/net-zero-by-2050
31. ETIP SNET Vision 2050, 2018, https://www.etip-snet.eu/etip-snet-vision-2050/
32. ETIP SNET–CIGRE 10 key messages for the European Green Deal, 2016, https://smart-networks-energy-transition.ec.europa.eu/sites/default/files/publications/ETIPSNET-CIGRE-10-keymessages-European-Green-Deal.pdf
33. Illustration of the future flexible and intelligent power system, Centre for Intelligent Electricity Distribution (CINELDI), a centre for environment-friendly energy research (FME) in Norway, www.cineldi.no
34. IEA, Power systems in transition: Climate resilience, 2020, https://www.iea.org/reports/power-systems-in-transition/climate-resilience
35. G. Kjølle, Security of electricity supply in the future intelligent and integrated power system, invited presentation at the Centre of Natural Hazards and Disaster Science (CNDS) Research Seminar, Uppsala University, 15 September 2021
Gerd H. Kjølle, PhD, is Chief Scientist at SINTEF Energy Research in Norway. She has also been a Professor at the Department of Electric Power Engineering at the Norwegian University of Science and Technology (NTNU). Gerd is the Centre Director of the Centre for Intelligent Electricity Distribution (CINELDI), a Norwegian centre for environment-friendly energy research. Throughout her career, Gerd has combined contract research at SINTEF with teaching and supervising Master's and PhD students at NTNU, as well as employees at grid companies and others in the electricity sector. Her research areas comprise reliability and security of electricity supply, Smart Grids, power system planning, fault and interruption statistics, and interruption cost assessment. As a professor, she was responsible for the PhD course in power system reliability. She has also served as opponent at the thesis defenses of many PhD candidates across Europe. Gerd had many interests when she grew up, from mathematics to languages, the physiology of nutrition, and medicine. After high school, it took some time to decide what kind of education she wanted; finally, her choice fell on undergraduate studies in engineering and science. She entered the Norwegian Institute of Technology (now NTNU) in electrotechnical engineering in 1979, specialised in electric power engineering, and graduated in 1984. A year earlier she had become a mother, and thanks to arrangements made by the university she managed to combine her studies with caring for her baby, delaying her studies by only a few months. She started her career at the university as a research assistant for one year. At the same time, she pursued training in the theory and practice of teaching, taking courses in psychology, pedagogy, and didactics, followed by teaching practice in mathematics and electronics for engineering students. In 1985/86, Gerd started working for the Norwegian Electric Power Research Institute (now SINTEF Energy Research).
At the beginning, she was one of only two female research scientists there (2%); the share of female scientists at SINTEF Energy Research has since increased to about 25%. Quite early, she started working in the field of power system reliability, which soon became a favourite topic. She began her PhD studies in 1993 and received her PhD degree in 1996. This led to the development and broadening of the field into a large research area at SINTEF in reliability and security of electricity supply, today covering many researchers, topics, and projects. In her various roles as scientist, project manager, mentor, and centre director, she has been actively engaged in the development of individual researchers and research teams at SINTEF. She has
taken part in the recruitment and development of researchers and research areas. In addition, she has been active in interdisciplinary research at SINTEF and NTNU. Gerd is a Senior Member of IEEE and a member of CIGRE. She has served on boards of directors and on various expert committees, e.g. for the evaluation of professor candidates, and she is a frequent keynote speaker. She has cooperated with many grid and energy companies, system operators, and energy regulators, nationally and internationally, and she has been the project manager of many research projects and a participant in various European projects. Gerd's scientific achievements have resulted in solutions in use by grid companies, transmission system operators, and energy regulators, and have provided foundations for handbooks, guidelines of good practice, and requirement specifications, as well as for the regulation of grid companies. In 2020, Gerd became the first woman ever to receive the honorary award of the Norwegian Academy of Technological Sciences (NTVA), for her contributions to societal security, a more reliable electricity distribution grid, and the security of electricity supply. This field has been her passion as a researcher because of the importance of a secure electricity supply for society. She is grateful for an employer that has given her many opportunities and interesting tasks, making it possible to concentrate on and further develop her main field of interest throughout her career. She looks forward to continuing the research on security of supply through the research centre CINELDI, supporting the transformation of electric power systems and the transition to a net-zero-emission society.
Chapter 8
Preparing the Power Grid for Extreme Weather Events: Resilience Modeling and Optimization

Anamika Dubey
8.1 Introduction

The electric power grid is one of the nation's most critical infrastructures. It faces severe threats from extreme weather events that lead to extended disruptions of the electricity supply, adversely affecting critical services and potentially jeopardizing personal safety and national security. Approximately 78% of outages from 1992 to 2012 can be attributed to extreme weather events, which cost the US economy an estimated $18–33 billion per year while affecting around 178 million metered customers [1, 2]. Figure 8.1 shows the number of outages affecting more than 50,000 customers per outage from 2000 to 2019, separated into weather-related and non-weather-related events. As can be seen, most large outages were caused by weather-related events. The electric power grid is traditionally designed and operated for known and credible threats and is therefore not resilient to rarely occurring disruptions, or high-impact low-probability (HILP) events [3, 4]. Current capabilities to restore the power supply rely mostly on a top-down restoration approach; the "last mile" of recovery, which depends on repairs in low- and medium-voltage power distribution systems, therefore often takes several days. The staggering cost of power system outages and their impacts on personal safety demand that resilience to extreme weather events be rapidly built into the aging and stressed power distribution systems [5–8]. Additionally, the increasing intensity and frequency of extreme weather events due to climate change further exacerbate the need for immediate resilience solutions, especially in high-risk communities, including residential customers. Fortunately, recent advances in the distribution grid with the integration of
A. Dubey () Washington State University, Pullman, WA, USA e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 J. S. Tietjen et al. (eds.), Women in Power, Women in Engineering and Science, https://doi.org/10.1007/978-3-031-29724-3_8
Fig. 8.1 Number of outages in the USA affecting more than 50,000 customers (Source: "Power OFF: Extreme Weather and Power Outages", Climate Central)
distributed generators (DGs) and distribution automation capabilities provide potential means of improving the grid's operational resilience [9]. However, holistically improving grid resilience requires (1) metrics to quantify baseline performance, (2) methods for resource planning capable of service restoration that sustains the community in the aftermath of an extreme weather event, and (3) algorithms for automated restoration, including intentional islanding to supply critical loads.
8.2 Extreme Weather Events and Their Impacts on the Power Grid

In the USA, extreme weather caused nearly 70% more power outages from 2010 to 2019 than in the previous decade. Utility customers experienced 1.33 billion outage hours in 2020, up 73% from roughly 770 million in 2019, according to PowerOutage.US, an aggregator of utility blackout data. Figure 8.2 shows the weather and climate events that caused the most frequent disasters in each US state [10]. As can be observed, a multitude of climatic and weather-related events is responsible for frequent and prolonged disruptions to power grids. While the northwest and southwest states are primarily affected by wildfires, most of the midwest, northeast, and southeast states are affected by severe storms. In what follows, we provide some examples of recent severe power outages caused by extreme weather events in the USA.
• California Wildfires (2019): High temperatures and heat waves affect transmission and distribution lines by limiting their power transfer capacity, thus increasing the risk of powerline-initiated wildfires. To avoid large-scale wildfires, utility companies preemptively shut off electricity to their customers. For example, in 2019, Pacific Gas and Electric Company shut off electricity to ~3 million customers to protect the power lines [11].
Fig. 8.2 Billion-dollar weather and climate disaster events in the USA from 1980 to 2019 (Source: Climate Matters, climatecentral.org)
• Texas Power Crisis (2021): The winter blackout experienced by Texas in February 2021 left more than ten million people in darkness for several days. The event exacerbated the operational crisis in ERCOT while causing $4 billion worth of financial impact on wind farms, significantly more than their annual gross revenue.
• Hurricane Ida (2021): Storm- and hurricane-induced high-wind events result in faults and damage to power lines. Hurricane Ida is estimated to have caused about $64 billion in flooding damage in the northeastern USA. It took approximately 15 days to restore electric power entirely, and almost 1.2 million people were affected.
There has also been a significant rise in the frequency of extreme weather events in the past 5 years compared to the previous decades. From 1980 to 2020, the USA experienced an average of 7.1 events/year, which increased to 16.2 events/year during the 2016–2020 timeframe. The percentage change in weather-related outages for each US geographic region is shown in Table 8.1 [12]. These disasters are also becoming increasingly expensive. In 2020, the economic loss suffered by the USA due to extreme weather events was around $98.9 billion, with a total loss of $243.3 billion over the past 3 years [13, 14]. Beyond the grid itself, critical systems such as high-tech industries, hospitals, and transportation systems face severe economic losses and significant impacts on personal safety and security due to extended power outages. Among the different power grid components and subsystems, the medium- and low-voltage (MV/LV) distribution systems are the most susceptible to extreme weather events. The bulk transmission system is built to withstand high wind speeds and, having a meshed topology, continues to supply electricity while sustaining partial damage or disruption.
However, distribution systems, being radial in topology with low levels of visibility and limited options for reconfiguration, are extremely
Table 8.1 Weather-related outages for the past two decades [12]

Region                  Weather-related outages   Weather-related outages   % change
                        2000–2009                 2010–2019
Northeast                      127                       329                  159%
Southwest                       24                        51                  113%
Southern Great Plains           42                        88                  110%
Northwest                       17                        32                   88%
Southeast                      209                       282                   35%
Midwest                        131                       203                   55%
HI&PR                            6                         3                  −50%
Northern Great Plains            2                         2                    0%
vulnerable to external threats. Furthermore, because distribution systems are the last mile of power delivery to end customers, their service can be restored only after repairs are performed on the rest of the system. Thus, damage to power distribution systems is often responsible for extended customer outages. As per [15], more than 90% of customer outages are due to damage in the MV/LV power distribution systems.
8.3 Electric Power Grid: Reliable, Not Resilient

Reliability is the most widely used performance criterion for assessing and planning the power grid. According to the North American Electric Reliability Corporation (NERC), power grid reliability is the system's ability to supply electric power at all times while accommodating routine or unexpected outages and withstanding sudden disturbances such as short circuits or the unanticipated loss of system components [16]. Thus, the power grid is reliable if it can continue to provide electric service to its customers in the event of previously known/anticipated contingencies [17]. Electric power systems can efficiently manage known and credible contingencies and continue to supply high-quality power to end customers with very few interruptions. Therefore, although outages occur regularly due to equipment failures or scheduled maintenance, the majority of these events lead to short-duration service interruptions for a few customers. MV/LV power distribution systems, responsible for supplying power to the end customer, are equipped with remote-controlled (or manually operated) sectionalizing and tie-switches to quickly isolate a fault and restore service to affected customers. To this end, Fault Location, Isolation, and Service Restoration (FLISR) is one of the key distribution automation applications, capable of automatically isolating the faulted area and restoring power to affected customers. The reliability of power distribution systems is measured by the frequency and duration of customer interruptions [18]. Existing power distribution systems in the United States are highly reliable, with a per-customer system average interruption duration (SAIDI) and interruption frequency
(SAIFI) of 116 minutes and ~1.013 events, respectively, in the year 2020, excluding major events [19]. However, the reliability indices are not suitable for gauging the power grid's performance when it is impacted by extreme weather events or similar HILP events [20]. If recent outages driven by extreme weather events are any indication, power distribution systems are highly inadequate at keeping the lights on during HILP events. As the intensity and frequency of extreme weather events continue to increase due to climate change, we need additional criteria to quantify the power system's performance. This motivates bringing the concept of resilience to power grid infrastructure and operations. Generally, resilience refers to the ability of an object to return to its original state after being stressed. In the context of infrastructure systems, resilience describes how fast the infrastructure can recover from an unexpected and catastrophic event. For power systems, discussions around resilience highlight different aspects of recovery and service quality requirements in the context of HILP events. Resilience definitions focus on the ability of power systems to rapidly and effectively anticipate, absorb, and recover to their pre-disaster operational state. A few conceptual definitions of resilience based on government advisory reports and policy directives are detailed below.
• As per a multi-lab consortium report [21], infrastructure resilience is defined as the ability to reduce the magnitude and/or duration of disruptive events. The effectiveness of a resilient infrastructure or enterprise depends upon its ability to anticipate, absorb, adapt to, and/or rapidly recover from a potentially disruptive event.
• As per the US Government's 2013 policy directive [22], resilience is defined as the ability to anticipate, prepare for, and adapt to changing climate conditions and to withstand, respond to, and recover rapidly from disruptions.
• Specific to power distribution systems, report [23] defines resilience based on three factors: prevention, recovery, and survivability. Here, system recovery refers to quickly restoring service to the affected customers. Survivability refers to the use of innovative technologies to aid consumers, communities, and institutions in continuing some level of normal function without complete access to the grid.
• A recent National Academy of Engineering (NAE) report on resilience [4] defines it as the ability of the power system to deal with large-area, long-duration outages by minimizing their impact when they occur, restoring service quickly, and improving future performance.
All of the aforementioned definitions incorporate the following four system performance criteria as measures of resilience: (a) preparedness, (b) robustness, (c) resourcefulness, and (d) recovery. Although these aspects may not be comprehensive, they are instructive for power utilities in assessing their system performance. Thus, most of the research on resilience quantification and enhancement in the electric power domain is driven by these definitions. Reliability and resilience, although frequently used together to evaluate system performance, are conceptually different [18]. Typically, reliability refers to the
ability of the power grid to provide a continuous power supply to its customers without any long and frequent service disruptions. On the other hand, resilience refers to the ability of the electricity infrastructure to withstand and recover from extreme events such as natural disasters and cyber-physical attacks. Additionally, reliability quantifies the system’s response to highly probable events that have low impacts on system performance, whereas resilience considers the impacts of HILP events in characterizing the system performance. Moreover, unlike reliability calculations that are based on observed system performance, resilience requirements are typically forward-looking with the ability to project system performance for future HILP events. This motivates the need for a resilience modeling and quantification framework for the electric power grids.
8.4 Resilience Curve

Figure 8.3 shows a conceptual resilience curve commonly used to describe the performance deterioration in power systems during HILP events [24]. The baseline system performance is given by Ro, which may simply measure the total load supplied, the total energy supplied, or the total number of customers served. At this stage, the power system is operating within the expected parameters. Assume that the system is impacted by an extreme event at te, after which it enters a post-event degraded state in which service to a significant portion of the system is interrupted; this reduces system performance to Rpe. In this stage, the key features for reducing performance loss (i.e., maintaining a higher level of resilience) are resourcefulness, redundancy, and adaptive self-organization. These features provide the system with the corrective operational flexibility to adapt to unseen events and help minimize the degradation in system performance (Ro − Rpe) prior to initiating a restoration plan. At tr, the utility company initiates automated restoration plans or dispatches crew members to restore the system. The restoration actions primarily leverage the operational flexibility built into the system ahead of time to reduce the effects of external threats on system performance and raise the system performance level to Rpr. Note that restoration actions alone are not sufficient to restore the system to its pre-event level of resilience, Ro. Depending upon the event severity, the physical infrastructure might be damaged, requiring crews to be dispatched to repair or replace the damaged components. At the end of the infrastructure recovery stage, tpir, the system is back to its original level of performance, Ro. When impacted by an event, a highly resilient system will suffer less degradation and recover more quickly to its original operational state. Thus, any attempt to increase resilience should aim at the following multidimensional system performance criteria:
• Reduce the degradation of resilience levels during the progress of an event (Ro − Rpe).
Fig. 8.3 Conceptual Resilience Curve
• Achieve controlled and slow system degradation and avoid cascading failures (tpe − te).
• Reduce the overall system recovery time from both operational (tpr − tr) and infrastructural (tpir − tr) standpoints.
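These trajectory-based criteria can be computed directly from a sampled performance curve R(t). The sketch below is illustrative: the piecewise-linear trajectory, the times, and the baseline value are invented for the example, and only the metric definitions (degradation Ro − Rpe, total performance loss as the area between the baseline and the curve, and recovery time) follow the discussion above.

```python
# Illustrative sketch: resilience-curve metrics from a sampled trajectory R(t).
# The trajectory values below are hypothetical, not from any real system.

def resilience_metrics(t, R, R0):
    """t: sample times; R: system performance at those times; R0: baseline."""
    R_pe = min(R)                 # degraded post-event performance level
    degradation = R0 - R_pe       # Ro - Rpe
    # total performance loss = area between the baseline and the curve
    # (trapezoidal rule over the sampled points)
    loss = sum(0.5 * (2 * R0 - R[i] - R[i + 1]) * (t[i + 1] - t[i])
               for i in range(len(t) - 1))
    # recovery time: first time after the degraded state at which
    # performance returns to the baseline
    t_low = t[R.index(R_pe)]
    recovery_time = next(ti for ti, ri in zip(t, R) if ri >= R0 and ti > t_low)
    return degradation, loss, recovery_time

# Hypothetical curve: event at t=2, restoration from t=5, full recovery at t=9
t = [0, 2, 3, 5, 7, 9, 10]
R = [100, 100, 40, 40, 80, 100, 100]
deg, loss, t_rec = resilience_metrics(t, R, 100)
# deg -> 60, loss -> 250.0, t_rec -> 9
```

The three returned quantities correspond directly to the bullet list above: the performance drop, the area-based severity of the event, and the recovery interval that hardening and restoration measures aim to shrink.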
8.5 Resilient Power Distribution Systems

Power distribution systems are greatly affected by HILP events, which lead to widespread power outages due to equipment failures and damage to the infrastructure. Nevertheless, the system needs to meet certain requirements to maintain the quality of the power supply. Measures to improve power distribution system resilience can be broadly categorized into operational resilience and infrastructural resilience (see Fig. 8.4). In Fig. 8.4, the resilience curve is simplified to three stages – avoid, react, and recover – used to describe the resilience enhancement techniques [25]. In the avoid stage, possible resilience enhancement techniques include methods to defer the event and reduce the likelihood of it impacting the system. This can be achieved by reinforcing the system, upgrading the infrastructure, and maintaining reserves/back-ups. Post-event, the system enters the react stage, in which a resilient system enables proactive actions to reduce impacts by maintaining a basic level of service to customers and communities. For example, DGs can form flexible-boundary islands or microgrids ahead of an upcoming event and continue to supply critical loads within the island. Proactive isolation/islanding of feeder sections can also prevent cascading failures and reduce the number of affected customers. Finally, during the recovery phase, rapid damage assessment with advanced outage management systems, prompt crew deployment with the help of geographical information systems and asset management systems, and visualization tools for situational awareness can be used to improve the overall operational
Fig. 8.4 Different Stages of Resilience Planning and Enhancement Strategies
resilience. In what follows, we describe the methods used for quantifying and improving the resilience of the power distribution systems.
8.6 Resilience Metrics

The resilience of the power grid is characterized by its ability to respond to and recover from HILP events with minimum performance degradation. The focus of this chapter is on electric power distribution systems. Traditionally, the performance of the power distribution system is measured using post-event reliability metrics such as SAIDI, SAIFI, and MAIFI, which provide an evidence-based indicator of how well a specific distribution grid responded to normal chance failures. Unlike routine outages, adequately responding to HILP events is an inherently difficult and different process. The impacts of HILP events cannot be properly quantified using reliability metrics, calling for further considerations beyond the classical reliability-oriented view. Additionally, enhancing power grid resilience requires proactive disruption management driven by HILP events rather than by persistent costs. Planning for resilience requires metrics that can not only quantify the impacts of potential HILP events on the grid but also help evaluate and compare different planning alternatives for their contributions toward improving the grid's resilience. The related work on resilience quantification [21, 26, 27] can be categorized into methods using (1) scoring metrics for desirable system properties [28–30], (2) metrics similar to reliability indices [31–33], and (3) risk-based metrics that
either employ statistical analysis of past storms [34–36] or simulations for impact assessment [37–40]. Simulation-based approaches with risk-based metrics are most appropriate, as they incorporate a probabilistic framework to capture the risks of historical, predicted, and hypothetical extreme event scenarios [39, 40]. A risk-centric approach is needed to quantify the value of investment decisions for HILP events. The existing literature, however, is limited in developing risk-based metrics to quantify power grid resilience. Moreover, there has been little to no effort to develop a detailed resilience quantification framework that can help quantify grid resilience and evaluate the impacts of planning measures on resilience improvements. In what follows, we introduce risk-based metrics to quantify the performance-based resilience of power distribution systems [40]. Resilience metrics should quantify the progression and extent of system performance loss when the system is impacted by an event. Thus, before defining resilience metrics, we introduce a mathematical model for the event under study and a functional characterization of system performance loss. Next, we present a simulation-based framework to model the impacts of extreme events on power system components and the system as a whole. Since the occurrence of HILP events and their impacts on the power grid are not deterministic, we introduce a probabilistic framework to characterize system performance loss and quantify system resilience.
8.6.1 Event/Threat Characterization

An event or threat is characterized by two parameters: the intensity of the impact and the probability of its occurrence. The intensity affects the failure probability of the system equipment, which in turn affects the system performance loss function. Let I be a random variable indicating the intensity of the weather-related event under consideration and p(I) be the corresponding probability density function. For example, if windstorms are the weather-related events under consideration, then p(I) is given by the regional wind profiles shown in Fig. 8.5.
8.6.2 Weather-Grid Impact Model

In the related literature, component-level fragility curves have been used to model the impacts of hurricanes or other high-wind events on power system components. A fragility function maps the intensity of the hazard (e.g., wind speed) to the probability of failure of distribution system components. An example fragility curve is shown in Fig. 8.6a, which relates the failure probability of distribution system components to wind speed. The fragility curve can be generated using empirical data gathered during disaster events. The observed damage from past windstorms can yield damage estimates for a given wind profile. These damage estimates can be coupled with an infrastructure database and geographical information system
Fig. 8.5 Extreme-event characterization
Fig. 8.6 Component-level impact assessment model: (a) Fragility curves for a wind-profile and (b) prototype curve fit model for percentage of equipment damage as a function of wind speed [41]
to construct an equivalent fragility curve for the component. For example, in [41], the recorded data on observed component damages within a given region is used to develop a fragility curve of percentage component damage as a function of maximum sustained wind speed (see Fig. 8.6b). The fragility modeling can be improved using advanced structural analysis or better data-driven approaches. This is an active area of research in the power systems community.
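For illustration, a fragility curve of this kind is often approximated by a smooth, monotone function of wind speed. The sketch below uses a logistic form with invented parameters (v50 = 45 m/s, steepness k = 0.25); these are placeholders, not the empirical values from [41], and the Bernoulli sampling of component damage states is likewise a minimal stand-in for a full weather-grid impact model.

```python
import math
import random

def fragility(v, v50=45.0, k=0.25):
    """Hypothetical logistic fragility curve: failure probability of one
    component as a function of sustained wind speed v (m/s). v50 is the
    speed at which the failure probability reaches 50% (assumed value)."""
    return 1.0 / (1.0 + math.exp(-k * (v - v50)))

def sample_failures(wind_speed, n_components, rng):
    """Bernoulli-sample the damage state of each component for one event."""
    p = fragility(wind_speed)
    return [rng.random() < p for _ in range(n_components)]

rng = random.Random(1)
failed = sample_failures(wind_speed=55.0, n_components=100, rng=rng)
# higher wind speeds damage a larger fraction of the components
```

In practice the curve would be fitted to recorded damage data, as described above, rather than assumed; the sampling step is what couples the event intensity model p(I) to component outages in the simulations that follow.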
8.6.3 System Loss Function, U(I)

Resilience quantifies changes in system performance over time when the system is impacted by an event. Here, we mathematically characterize the system performance loss. The
Fig. 8.7 System performance loss function
performance loss in power distribution systems when impacted by a HILP event of intensity Ie is represented as a nonlinear function of the loss of load, L(Ie), and the total time taken to recover the system back to an acceptable level of performance, t(Ie). We define the system loss, U(Ie), under the specified weather event of intensity Ie as the shaded area shown in the performance curve (see Fig. 8.7). The probabilistic performance loss function, U(I), is then obtained by sampling the entire probability distribution function for the event, p(I), and simulating the system performance for each realization of the event, I.
8.6.4 Risk-Based Resilience Metrics

We define two risk-based resilience metrics: Value-at-Risk (VaRα) and Conditional Value-at-Risk (CVaRα) [40]. The proposed resilience metrics are motivated by the risk-management literature on quantifying the risks involved in a given financial investment. Specifically, VaRα and CVaRα are two measures commonly used in the risk-management literature to evaluate the impacts of low-probability events that can cause extreme losses for traders. Since both metrics specifically quantify extreme losses due to low-probability events, they are suitable choices for quantifying resilience.
8.6.4.1 Value-at-Risk (VaRα)
VaRα calculates the maximum loss expected over a given time period and for a specified degree of confidence (see Fig. 8.8). Simply put, VaRα is the lowest amount ζ such that, with probability α, the system loss U(I) does not exceed ζ:

ξ(ζ) = ∫_{U(I) ≤ ζ} p(I) dI        (8.1)
where ξ is the cumulative distribution function of the system loss when the system is impacted by an event characterized by p(I). Then, by definition, with respect to a specified probability level α ∈ (0, 1), VaRα is given by (8.2):

VaRα = min{ζ ∈ ℝ : ξ(ζ) > α}        (8.2)

8.6.4.2 Conditional Value-at-Risk (CVaRα)
CVaRα quantifies the expected system performance loss conditioned on the events being HILP. For resilience quantification, the metric CVaRα is computed from the decrease in system performance caused by those probabilistic threat events that cause the highest impacts. Mathematically, CVaRα is defined as the conditional expectation of the losses greater than those associated with VaRα, i.e., of the (1 − α)% highest-impact events in the tail of the U(I) distribution (see Fig. 8.8). Accordingly, the CVaRα metric characterizing resilience is defined in (8.3):

CVaRα = (1 − α)⁻¹ ∫_{U(I) > VaRα} U(I) p(I) dI        (8.3)

Fig. 8.8 Probabilistic loss function and CVaRα calculation
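In a Monte Carlo setting, the two metrics reduce to an empirical quantile and a tail mean over sampled losses. A minimal sketch follows; the loss samples are synthetic, chosen so that a few HILP-like outcomes dominate the tail.

```python
def var_cvar(losses, alpha=0.95):
    """Empirical VaR_alpha and CVaR_alpha from Monte Carlo loss samples.
    VaR: smallest loss not exceeded with probability alpha (a quantile).
    CVaR: mean loss over the (1 - alpha) tail of worst outcomes."""
    s = sorted(losses)
    idx = int(alpha * len(s))      # index of the alpha-quantile
    var = s[idx]
    tail = s[idx:]                 # the (1 - alpha)% highest-impact losses
    cvar = sum(tail) / len(tail)
    return var, cvar

# Synthetic losses: mostly small, with a few extreme (HILP-like) outcomes
losses = [1.0] * 90 + [10.0] * 8 + [100.0, 200.0]
var, cvar = var_cvar(losses, alpha=0.95)
# var -> 10.0; cvar -> 66.0 (mean of the five worst losses)
```

Because CVaRα averages over the entire tail rather than reading off a single quantile, it is sensitive to the extreme outcomes that VaRα alone ignores, which is exactly why it is the preferred resilience measure here.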
8.6.5 Resilience Quantification Framework

We introduce a simulation-based framework to quantify the resilience of power distribution systems against extreme weather events. The framework requires models for extreme weather events, models to quantify the impacts of weather events on power grid components and systems (i.e., weather-grid impact models), and an approach to evaluate the system loss and the associated probabilities. Specifically, the proposed risk modeling framework uses the event intensity model (such as a regional wind speed profile) and component-level fragility curves (failure probabilities of equipment conditioned on the event's intensity) to compute the risk posed by the extreme event in terms of the system's performance loss. The overall framework is shown in Fig. 8.9 [40]. The probability density function (pdf) for the weather event in a specified region can be obtained from meteorological data collected by weather sensors (e.g., probabilistic regional wind-speed data; see Fig. 8.5). The fragility curve is empirically generated to model the probability of equipment being damaged as a function of the intensity of the extreme event (Fig. 8.6). As expected, these probabilities increase sharply with the increase in the event
Fig. 8.9 From fragility modeling to probabilistic loss for resilience metric characterization [40]
intensity (e.g., wind speed). Since weather intensity and its impact on equipment are not deterministic, we sample the respective pdfs using Monte Carlo simulations to evaluate the probabilistic impacts of the weather event and obtain the pdf for U(I). The probability density function of U(I) is used to compute the CVaRα metric measuring the risks of the HILP event. Note that, for a given confidence level α ∈ (0, 1), CVaRα measures the conditional expectation of the system loss due to the (1 − α)% highest-impact events. Several proactive planning measures can be applied to enhance the distribution system's resilience, and the resilience quantification framework should be able to evaluate their impacts on system resilience. From an operational standpoint, a decision-maker can improve resilience by allocating resources to lessen the average impact Li, decrease the damage assessment time (tr − te) so the system quickly enters the restorative state, and/or apply advanced restoration to decrease the impact in the restorative state (tir − tr). The aforementioned resilience quantification framework can seamlessly incorporate any operational or planning solutions and evaluate their impacts on system resilience and the resilience metrics. We discuss two specific proactive planning measures and approaches to model their impact on the resilience curve and resilience metric.

Robust Network – Hardening/Undergrounding: Hardening the distribution lines, although expensive, is one of the most effective methods of protecting the system against extreme windstorms. Hardening solutions mainly boost infrastructure resilience, thus reducing the initial system loss as the event strikes and progresses. Hardening strategies include overhead structure reinforcement, vegetation management, and undergrounding of distribution lines. Through hardening, components are made robust against extreme wind events by reducing the probability of wind-induced damage.
In essence, hardening modifies the fragility curve of the distribution system components. With this planning measure, a decision-maker can expect reduced loss during the state of event progress, as shown by the green curve in Fig. 8.7. Smart Network-Improved Response and Automated Restoration Alternatively, we can employ operational solutions that enhance the grid's resilience by quickly restoring the distribution system. Distribution systems equipped with sufficient smart meters and remote-controlled switches allow for speedy service restoration. In the presence of grid-forming DGs, intentional islands can be formed to supply critical loads prior to infrastructure recovery. In addition, adequate situational awareness tools enable effective and timely decision-making for damage assessment and for commencing appropriate actions to restore the system's critical loads. This planning measure assists the system operator in damage assessment and speedy system restoration, which helps achieve post-disturbance resilience by reducing the duration of the damaged system state. This effect is characterized by the blue curve in Fig. 8.7, where the area under Phase III (tir − tr ) decreases due to restoration action.
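The fragility-to-risk pipeline described above can be sketched in a few lines: sample a hazard intensity, sample component failures from a fragility curve, and read VaRα and CVaRα off the resulting loss distribution. The fragility curve, wind-speed distribution, and all parameter values below are invented for illustration; they stand in for the hazard and fragility models of the actual framework.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def fragility(v, v50=45.0, k=0.25):
    # Illustrative logistic fragility curve: failure probability of one
    # component as a function of wind speed v (m/s). v50 and k are
    # hypothetical parameters, not values from this chapter.
    return 1.0 / (1.0 + np.exp(-k * (v - v50)))

def simulate_losses(n_scenarios=10000, n_components=100):
    # Sample a wind-speed intensity per scenario (a Weibull shape is a
    # common assumption), then sample component failures from the
    # fragility curve; loss = fraction of components damaged.
    v = 30.0 + 15.0 * rng.weibull(2.0, size=n_scenarios)
    p_fail = fragility(v)
    failures = rng.random((n_scenarios, n_components)) < p_fail[:, None]
    return failures.mean(axis=1)

def var_cvar(losses, alpha=0.95):
    # VaR_a: the alpha-quantile of loss; CVaR_a: mean loss over the
    # (1 - alpha) tail of highest-impact events.
    var = np.quantile(losses, alpha)
    cvar = losses[losses >= var].mean()
    return var, cvar

losses = simulate_losses()
var, cvar = var_cvar(losses)
print(f"VaR_0.95 = {var:.3f}, CVaR_0.95 = {cvar:.3f}")
```

Hardening would shift the fragility curve (e.g., raise `v50`), and re-running the same Monte Carlo loop would then show the reduced CVaRα.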
8 Preparing the Power Grid for Extreme Weather Events: Resilience. . .
Fig. 8.10 Example with IEEE 123-bus test system [39, 40]
8.6.6 Example of Resilience Modeling and Metric
We have previously applied the above framework to quantify the impacts of high-speed wind events on the electric power grid under specified resource allocations [40]. The analysis includes two classes of reinforced networks: one with underground cables (hardened for wind events) and the other capable of smart and proactive islanding using DGs. The study showed a reduction in the risks of extreme weather events for the reinforced systems (see Fig. 8.10). The CVaRα values measuring the system performance loss due to the 5% highest-impact events decreased for both the hardened and the smart networks. The simulation studies show that the proposed resilience metrics are well suited to compare different resilience enhancement strategies. Moreover, a hybrid network (stronger and smarter), obtained by optimizing the investment costs over the possible resilience enhancement measures, might offer better resilience. This motivates methods for resilience planning for power distribution systems using risk-averse methods.
8.6.7 Further Considerations
The existing simulation-based risk quantification, including our prior work on quantile risk measures, uses simplified component-level fragility models that do not capture multi-hazard impacts (e.g., combined wind damage and flooding) [42] and simplified extreme-event models that do not capture the complex spatiotemporal characteristics of windstorms [43] (see Fig. 8.11). Accordingly, modeling the impacts of extreme weather events on power grid components requires detailed simulators for extreme weather event models and their impact parameters (wind field, flooding levels, etc.). The component fragility models also need to be substantially improved to capture the multi-hazard impacts of extreme events. Although long-term weather prediction models have improved in recent years, they have not been used effectively to assess climate impacts on the power grid. The SLOSH model, for example, can estimate storm surges from an impending hurricane; however, designed for its intended applications, it rightfully does not provide information on how the surge affects the power grid infrastructure [44]. Moreover, the existing risk-based resilience modeling approaches do not incorporate the impacts of climate change projections on HILP events or how those effects manifest on the power grid infrastructure [43]. Thus, there is a critical need to combine the existing climate and weather models with improved component-level fragility models to develop representative climate-grid impact models that can accurately quantify the impacts of climate-driven HILP events on the power grid.
Fig. 8.11 Spatiotemporal effects of hurricanes [61]
8.7 Infrastructure Planning for Resilience
Effective management of disruptions in critical infrastructure systems requires long-term planning to improve resilience [45, 46]. Upgrading the distribution system infrastructure by system hardening and investing in smart grid technologies enhances grid resilience [47–49]. Existing distribution system planning methods primarily consider the persistent cost of expected events (such as faults and outages
likely to occur) and aim at improving system reliability. Resilience to extreme weather events, in contrast, requires reducing the impacts of HILP events, which are characterized by the tail probability of the event impact distribution. Resilience-oriented system upgrade solutions therefore need to be driven by the risks that extreme weather events impose on the power grid infrastructure rather than by persistent costs. This requires a mechanism to quantify the reduction in the risks associated with the highest-impact events under a given resource allocation. The previously introduced metrics (CVaRα , VaRα ) specifically model and quantify these aspects. Thus, they can be employed to optimally allocate resources across different planning activities to enhance system performance or reduce the risk of outages from HILP events. Toward this goal, we envision a risk-averse framework for resource planning to manage disruptions in the power distribution grid and to inform optimal planning decisions. Specifically, we introduce a risk-based approach for infrastructure planning in active power distribution systems for resilience against extreme weather events. We employ CVaRα to quantify the risks of system outages imposed by HILP events. The planning problem for resilience can be modeled as a CVaRα optimization problem, a method widely used for risk-averse financial planning [50–52], which is apt when HILP events are a primary concern. The main idea is to optimize a CVaRα metric, which measures the risk associated with loss of resilience when the system under study is subjected to a given percentage of the highest-impact events. We present a two-stage stochastic optimization framework to optimize smart grid investments, such as line hardening or DG/microgrid placement, that enable advanced system operations such as DG-assisted restoration and intentional islanding.
The proposed approach models (1) the impacts of HILP events using a two-stage risk-averse stochastic optimization framework, thus explicitly incorporating the risks of HILP events in infrastructure planning decisions, and (2) the advanced distribution grid operations in the aftermath of the event, such as intentional islanding, within the infrastructure planning problem. The inclusion of risk in the planning objective gives grid planners additional flexibility to analyze the trade-off between risk-neutral and risk-averse planning strategies. Since the proposed approach to infrastructure planning is based on the system's response to HILP events, the resulting upgraded grid will ensure that the power supply to critical loads is more resistant to disasters.
8.7.1 Two-Stage Planning Framework
Figure 8.12 shows a general representation of the two-stage planning framework. The problem objective is to identify the first-stage optimal planning decisions that minimize the expected operational cost in the second stage. For example, optimal DG siting and sizing are first-stage planning decisions that improve the system response to extreme weather events. The second-stage objective is to minimize the prioritized load loss once a scenario is realized by optimally restoring the
Fig. 8.12 Two-stage resilience planning and second-stage response for a specific scenario
power supply using the deployed DGs. We use DGs with grid-forming inverters to describe the problem formulation. Such grid-forming DGs can be used for intentional islanding when some area of the distribution grid is disconnected from the system due to an extreme event. For example, in the case shown in Fig. 8.12, once faults occur in the system, the tie switches and sectionalizing switches can be toggled to isolate the faulted sections, i.e., Island 3 and Island 4. The DGs can then form two islands, Island 1 and Island 2, and continue to supply the loads within them [53]. It is important to understand that the two stages are not decoupled but rather solved as a single optimization problem. The first-stage decisions should account for the optimal system response from the second-stage optimization problem under different realizations of extreme weather events. Since the first-stage decisions are taken ex-ante, they are fixed for each of the second-stage scenarios/optimizations. The overall two-stage planning optimization problem solves for planning decisions that remain optimal over every realization of scenarios in the second stage. A general two-stage stochastic integer programming problem is formulated as follows [42]:
$$
\begin{aligned}
\min_{x}\;\; & f(x) = c^{T}x + \mathbb{E}_{P}\left[Q(x,\xi)\right]\\
\text{subject to}\;\; & Ax = b,\quad x \in \mathbb{R}_{+}^{m_1} \times \mathbb{Z}_{+}^{n_1},
\end{aligned}
$$
where
$$
\begin{aligned}
Q(x,\xi) = \min_{y}\;\; & q^{T}y\\
\text{subject to}\;\; & Wy = h - Tx,\quad y \in \mathbb{R}_{+}^{m_2} \times \mathbb{Z}_{+}^{n_2},
\end{aligned}
\tag{8.4}
$$
where x is the first-stage decision variable, y is the second-stage decision variable, ε denotes the set of uncertain scenarios with a known probability distribution P, and (q, h, T, W) are scenario-dependent data that vary for each
ξ ∈ ε. The objective in a general two-stage stochastic optimization is to solve (8.4) for the first-stage decision x that minimizes the first-stage cost plus the expected cost of the second-stage decisions, E_P[Q(x, ξ)]. The second-stage decisions, also known as recourse decisions, are scenario dependent.
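For intuition, the structure of (8.4) can be illustrated with a toy deterministic equivalent: choose a DG capacity (first stage) by enumerating candidate sizes and, for each scenario, evaluating the recourse cost of shedding whatever stranded load the DG cannot serve. All costs, probabilities, and outage magnitudes below are invented for illustration.

```python
# First stage: pick DG capacity x (MW). Second stage: per scenario,
# shed the stranded load the DG cannot pick up. Numbers are made up.
scenarios = [   # (probability, load stranded by the event, MW)
    (0.90, 0.0),
    (0.07, 4.0),
    (0.03, 10.0),
]
c_dg = 1.0      # installation cost per MW (first-stage cost)
c_shed = 20.0   # penalty per MW of unserved load (second-stage cost)

def recourse(x, stranded):
    # Q(x, xi): optimal second-stage cost for one realized scenario.
    return c_shed * max(stranded - x, 0.0)

# Deterministic equivalent: minimize first-stage cost + expected recourse.
cost, x_opt = min(
    (c_dg * x + sum(p * recourse(x, s) for p, s in scenarios), x)
    for x in range(0, 11)
)
print(f"optimal DG size = {x_opt} MW, expected total cost = {cost:.1f}")
```

Note that the risk-neutral optimum here (4 MW) covers only the moderate outage scenario and leaves the rare 10-MW event unhedged, which is exactly the gap the risk-averse criterion of Sect. 8.7.2 addresses.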
8.7.2 Risk-Averse Methods for Resilient Infrastructure Planning
Our objective is to develop a risk-based framework for long-term infrastructure planning to manage disruptions in the electric power grid and to provide guidance on the trade-offs among different resilience planning activities based on their cost and risk-mitigation potential. The general stochastic optimization model considers only the expected cost of the second stage, as shown in (8.4), and does not directly incorporate the tail probability of the outcomes of P. The two-stage problem defined in (8.4) is therefore modified to include a risk-averse optimization criterion in the planning process. We introduce a two-stage risk-averse stochastic optimization problem in which the first stage minimizes the cost of planning plus a weighted sum of the expected value and the CVaRα of the second-stage cost. The notional objective function is
$$
f(x) = c^{T}x + (1-\lambda)\,\mathbb{E}\left[Q(x,\xi)\right] + \lambda\,\mathrm{CVaR}_{\alpha}\left[Q(x,\xi)\right],
$$
where
$$
\mathbb{E}\left[Q(x,\xi)\right] = \sum_{\xi \in \varepsilon} p_{\xi}\, Q(x,\xi),\qquad
\mathrm{CVaR}_{\alpha}\left[Q(x,\xi)\right] = \eta + \frac{1}{1-\alpha} \sum_{\xi \in \varepsilon} p_{\xi}\, \upsilon_{\xi},
\tag{8.5}
$$
with η an auxiliary variable that equals VaRα at the optimum and υξ ≥ max{Q(x, ξ) − η, 0} the auxiliary tail-loss variables of the standard Rockafellar–Uryasev reformulation,
and where λ ∈ [0, 1] is the risk multiplier that defines the trade-off between E[Q(x, ξ)] and CVaRα [Q(x, ξ)]. Depending on the value of λ, the first-stage decisions are termed risk-neutral (λ = 0), mean-risk (λ = 0.5), or risk-averse (λ = 1). Thus, depending on the value of λ, the formulation in (8.5) minimizes not only the expected loss but also the losses due to the highest-impact events.
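The λ trade-off can be sketched on the same toy DG-sizing problem: with a discrete scenario set, CVaRα of the second-stage cost is simply the probability-weighted mean of the worst (1 − α) tail, so the risk-averse objective can be evaluated directly. Here λ weights the CVaR term, so λ = 1 is fully risk-averse, matching the risk-preference labels in the text; all numbers are invented.

```python
scenarios = [(0.90, 0.0), (0.07, 4.0), (0.03, 10.0)]  # (prob, stranded MW)
c_dg, c_shed, alpha = 1.0, 20.0, 0.95

def recourse(x, stranded):
    # Second-stage cost Q(x, xi): shed what the DG cannot serve.
    return c_shed * max(stranded - x, 0.0)

def cvar(cost_prob, alpha):
    # Mean second-stage cost over the worst (1 - alpha) probability tail.
    tail = 1.0 - alpha
    taken, total = 0.0, 0.0
    for q, p in sorted(cost_prob, reverse=True):   # worst scenarios first
        w = min(p, tail - taken)
        total += w * q
        taken += w
        if taken >= tail:
            break
    return total / tail

def objective(x, lam):
    costs = [(recourse(x, s), p) for p, s in scenarios]
    expected = sum(p * q for q, p in costs)
    return c_dg * x + (1 - lam) * expected + lam * cvar(costs, alpha)

for lam in (0.0, 1.0):
    _, x_opt = min((objective(x, lam), x) for x in range(0, 11))
    print(f"lambda = {lam}: optimal DG size = {x_opt} MW")
```

With λ = 0 the plan covers only the likely 4-MW outage; with λ = 1 it sizes the DG to cover the rare 10-MW event, showing how λ steers the plan toward the tail.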
8.7.3 DG Siting and Sizing for Distribution System Resilience
Next, we briefly formulate a notional problem of siting and sizing grid-forming DGs as a two-stage risk-averse optimization problem to improve power distribution resilience. Note that the DGs allow for intentional islanding and support critical
loads in the aftermath of a disaster. Thus, optimally siting and sizing these DGs will improve the system response when the system is impacted by HILP events. Problem Objective The stage-1 decision variables are the optimal sizes and locations of the grid-forming DGs. The stage-1 objective is to minimize the cost of DG deployment plus the weighted sum of the expected value and CVaRα of the stage-2 cost. The stage-2 objective is to minimize the prioritized load loss, or equivalently maximize the restoration of prioritized loads, for each scenario ξ ∈ ε. The stage-2 decision variables define an optimal restoration plan using the available resources, including the deployed grid-forming DGs. Thus, the stage-2 costs correspond to the optimal restoration decisions once a scenario is realized, and the stage-2 decision variables are scenario dependent. Stage-1 decisions, in contrast, are scenario independent and need to be optimal across all scenarios; this couples the stage-1 and stage-2 optimization problems. First-Stage Constraints The first-stage constraints correspond to the planning decisions made in the first stage. For the DG siting and sizing problem, they can limit the total cost of the installed DGs, the maximum allowable DG sizes, or the budget for individual DG locations and sizes. Second-Stage Constraints The second stage of the stochastic optimization problem is the operational stage, in which DG-assisted restoration is performed for each scenario ex-post. This inner operational stage consists of several constraints corresponding to the restoration problem, including (1) connectivity constraints, (2) power flow constraints, (3) power system operational constraints, and (4) DG operational constraints. The optimal restoration plan for each scenario includes prioritized load restoration using the available feeders and grid-forming DGs to reduce the total system outage [53].
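The connectivity constraints above can be pictured with a toy check of which post-fault islands retain a grid-forming DG. The bus numbers, line set, faulted lines, and DG sites below are invented, and a pure-Python union-find stands in for the graph constraints of the actual formulation.

```python
def islands_after_faults(buses, lines, faulted):
    # Union-find over the lines that remain closed after the faulted
    # sections are switched out; each root identifies one island.
    parent = {b: b for b in buses}
    def find(b):
        while parent[b] != b:
            parent[b] = parent[parent[b]]   # path halving
            b = parent[b]
        return b
    for u, v in lines:
        if (u, v) not in faulted and (v, u) not in faulted:
            parent[find(u)] = find(v)
    groups = {}
    for b in buses:
        groups.setdefault(find(b), set()).add(b)
    return list(groups.values())

buses = [1, 2, 3, 4, 5, 6]
lines = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6)]
faulted = {(2, 3), (4, 5)}   # lines opened to isolate the damage
dg_buses = {1, 6}            # assumed grid-forming DG locations

for isl in islands_after_faults(buses, lines, faulted):
    served = bool(isl & dg_buses)
    print(sorted(isl), "energized" if served else "de-energized")
```

Here the middle island {3, 4} contains no grid-forming DG and stays de-energized, which is the situation the DG siting decision is meant to avoid for prioritized loads.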
8.7.4 Example Scenarios
We have previously applied this approach to DG sizing and siting for resilience on a modified Institute of Electrical and Electronics Engineers (IEEE) 123-bus distribution test feeder. Several case studies with multiple DG locations, variable numbers of DGs, and varying risk preferences were conducted. It was observed that the risk parameters α and λ affect the planning decisions. The value of CVaRα depends on α, as α defines what is considered an HILP event. For example, a 5% tail (i.e., 1 − α = 5%) implies that the top 5% highest-impact events are considered risky and are used to quantify CVaRα . Likewise, λ denotes the required level of risk aversion in the planning decisions. Thus, higher values of α and λ lead to more risk-averse planning decisions. Figure 8.13 shows the optimal CVaRα of the prioritized loss of load for different values of α and λ. As discussed, CVaRα decreases when more scenarios are considered risky. Furthermore, for a fixed α, CVaRα decreases
Fig. 8.13 Comparison of CVaRα of prioritized loss of load for different values of the risk parameters
(Figure: probability distribution of the loss of load, with the VaRα and CVaRα levels marked; case shown: 6 DG locations.)
(Figure: decision tree whose test nodes threshold attributes such as Vs, Ps, Qs, and Ps/Ns, leading to terminal nodes classified stable or unstable; learning-set classification at CCT = 0.240 s: 1819 stable cases.)
Fig. 9.6 Example DT for TS assessment and control. (Adapted from Ref. [2])
its influence on TS. Note also that the IQ of a test attribute that appears at many tree nodes is the sum of its partial IQs. According to the above description, a DT provides the following information:
– A subset of relevant attributes (system parameters) that drive TS.
– A synthetic description of TS.
– The straightforward classification of a new, unseen case: placed at the top node, the case progresses down the DT through the successive tests that it meets until it reaches a terminal node, labeled stable or unstable.
– Means of control: to stabilize an otherwise unstable case, the DT indicates the attribute(s), along with the value(s), needed to stabilize it.
It is important to emphasize that the successful application of the DT method depends on close collaboration between its designer and the expert(s) of the physical problem under study. Incidentally, during our implementations of the DT approach to PSs, the experts were pleased, and at the same time proud, to observe that the relevant attributes selected by the DT, as well as their degrees of influence, corresponded nicely with their own intuition. This was certainly one of the best incentives in favor of the adoption of the approach in practice [15].
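The attribute-selection step underlying this behavior can be sketched as follows: among candidate attributes and thresholds, pick the test with the largest information gain (its "IQ") on the learning set. The operating-point values and stability labels below are invented, not taken from Fig. 9.6.

```python
import math

def entropy(labels):
    # Shannon entropy of a list of class labels, in bits.
    n = len(labels)
    out = 0.0
    for c in set(labels):
        p = labels.count(c) / n
        out -= p * math.log2(p)
    return out

def best_test(cases, labels):
    # Scan every attribute/threshold pair and keep the split with the
    # largest information gain over the learning set.
    base = entropy(labels)
    best = (0.0, None, None)   # (gain, attribute, threshold)
    for attr in cases[0]:
        for thr in sorted({c[attr] for c in cases}):
            left = [l for c, l in zip(cases, labels) if c[attr] <= thr]
            right = [l for c, l in zip(cases, labels) if c[attr] > thr]
            if not left or not right:
                continue
            child = (len(left) * entropy(left)
                     + len(right) * entropy(right)) / len(cases)
            if base - child > best[0]:
                best = (base - child, attr, thr)
    return best

cases = [   # hypothetical pre-fault operating points (MW, Mvar)
    {"Ps": 2500, "Qs": -100}, {"Ps": 3100, "Qs": 50},
    {"Ps": 3900, "Qs": -300}, {"Ps": 4400, "Qs": 200},
    {"Ps": 2800, "Qs": -250}, {"Ps": 4100, "Qs": 120},
]
labels = ["stable", "stable", "unstable", "unstable", "stable", "unstable"]
gain, attr, thr = best_test(cases, labels)
print(f"root test: {attr} > {thr} (information gain {gain:.2f})")
```

Recursing the same selection on each child node, and stopping when a node is (nearly) pure, yields a tree of the kind shown in Fig. 9.6.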
9 Power Systems Operation and Control: Contributions of the Liège Group ...
9.5 Hindsight
We have described one of the main R&D subject matters (physical problems) studied by the Liège group, and we have given an outline of two methodological approaches developed over the years to carry them out. The first methodology emanates from the Lyapunov direct criterion. It gave rise to the EEAC and its variants, which were further revisited and yielded SIME. SIME is a hybrid approach that aims at combining the advantages of direct methods and of time-domain numerical information by coupling them together. The obtained advantages go far beyond those of the two approaches put together. The other methodology calls upon machine learning techniques, of which decision trees are one member. We proposed them about four decades ago to tackle transient stability. Today, the outburst of machine learning gives rise to a rapidly growing number of applications in many domains, including the field of electric power systems reliability management [16]. In the meantime, the morphology of the electricity sector has undergone two major upheavals. One is the liberalization of the electricity sector, which has profoundly changed its very organization, going from
– a vertical, monopolistic organization, managed by a single operator, the Electricity Company, to
– a horizontal, unbundled organization, which depends on many different players, such as generation companies, transmission and distribution companies, transmission and distribution operators, suppliers, aggregators, market operators, and regulators.
(Incidentally, unlike the "ancient" vertical structure, this "modern" organization does not have the legal obligation to cover consumption needs at all times by adequate generation-transmission-distribution-supply schemes.) The other upheaval comes from a profound change in the morphology of electricity generation; it is linked to the advent and dazzling increase of renewable energies, aimed at taking the lion's share of total electricity production.
The management of this new multifaceted morphology brings up new, much tougher challenges: increasing complexity in size and in uncertainties; decreasing security of operation, etc. Nevertheless, the tailor-made SIME approach is still adaptable. Regarding machine learning methods, their last decades’ extraordinary outburst undoubtedly benefits enormously various aspects of power system operation and control, including transient stability. Acknowledgments This contribution is a small sample of the R&D work carried out collectively by the Liège group, mainly PhD students, in 1970–2000. I (Mania Pavella) am greatly pleased to express my sincere appreciation for the enthusiastic motivation, inspiration, and dedication in their research work, and the constantly smiling and cordial behavior of all my PhD students. Warm thanks go (in order of thesis defense) to: Thierry
M. Pavella et al.
Van Cutsem, Lamine Mili, Yusheng Xue, Patricia Rousseaux, Régine Belhomme, Louis Wehenkel, Ywei Zhang, Isabelle Houben, Arlan Bettiol, and Daniel Ruiz-Vega. My gratitude goes also to the co-authors of my three books (in order of publication): Ljubomir Grujić and A.A. Martynyuk; P.G. Murthy; Daniel Ruiz-Vega and Damien Ernst. I also wish to express my deepest appreciation to our industrial collaborators for sharing their engineering expertise, the indispensable complement for the success of practical achievements. Undoubtedly, a special tribute goes to my co-authors of this paper, for their unwavering friendship over the years, and for their skilled suggestions while writing up this paper, which reminded me of all the good memories of our many past collaborations.
References
1. M. Ribbens-Pavella and F.J. Evans. "Direct Methods for Studying Dynamics of Large-Scale Electric Power Systems – A Survey". Automatica 21: 1–21, 1981.
2. M. Pavella, D. Ernst, and D. Ruiz-Vega. Transient Stability of Power Systems: A Unified Approach to Assessment and Control. Kluwer Academic Publishers, 2000.
3. Y. Xue. A New Method for Transient Stability Assessment and Preventive Control of Power Systems. PhD Thesis, University of Liège, Belgium, 1988.
4. Y. Xue, T. Van Cutsem, and M. Ribbens-Pavella. "A Simple Direct Method for Fast Transient Stability Assessment of Large Power Systems". IEEE Trans. on PWRS, PWRS-3: 400–412, 1988.
5. Y. Xue, L. Wehenkel, R. Belhomme, P. Rousseaux, M. Pavella, E. Euxibie, B. Heilbronn, and J.F. Lesigne. "Extended Equal Area Criterion Revisited". IEEE Trans. on PWRS, PWRS-7: 1010–1022, 1992.
6. A. Bahmanyar, D. Ernst, Y. Vanaubel, Q. Gemine, C. Pache, and P. Panciatici. "Extended Equal Area Criterion Revisited: A Direct Method for Fast Transient Stability Analysis". Energies, 14(21), 2021.
7. Y. Zhang, L. Wehenkel, P. Rousseaux, and M. Pavella. "SIME: A Hybrid Approach to Fast Transient Stability Assessment and Contingency Selection". Journal of EPES, Vol. 19, No. 3: 195–208, 1997.
8. D. Ruiz-Vega. Dynamic Security Assessment and Control: Transient and Small Signal Stability. PhD Dissertation, University of Liège, Belgium, 2002.
9. M. Pavella, B. Thiry, L. Wehenkel, D. Ruiz-Vega, P. Dubois, B. Sak, and S. Servais. "Schéma de Fonctionnement du Pool-dispatch tel que Prévu dans le Cadre de la Libéralisation du Secteur de l'Electricité". Final Report, Convention ENER/5-97, étude réalisée à la demande du Ministère des Affaires Économiques - Administration de l'Energie, University of Liège, Belgium, 1998.
10. Single Machine Equivalent (SIME) Approach to Dynamic Security Assessment (DSA): Contingency Evaluation and Protection Control. EPRI, Palo Alto, CA: 2000. 1000412. M. Pavella and D. Ruiz-Vega, Principal Investigators, University of Liège, Belgium, 2000.
11. D. Ruiz-Vega, D. Ernst, C. Machado Ferreira, M. Pavella, P. Hirsch, and D. Sobajic. "A Contingency Filtering, Ranking and Assessment Technique for On-line Transient Stability Studies". Proc. of the International Conference on Electric Utility Deregulation and Restructuring, and Power Technologies, London, UK, pp. 459–464, 2000.
12. D. Ernst, D. Ruiz-Vega, M. Pavella, P. Hirsch, and D. Sobajic. "A Unified Approach to Transient Stability Contingency Filtering, Ranking and Assessment". IEEE Trans. on Power Systems, Vol. 16, No. 3, pp. 435–443, 2001.
13. A. Bihain, D. Cirio, M. Fiorina, R. López, D. Lucarella, S. Massucco, D. Ruiz-Vega, C. Vournas, T. Van Cutsem, and L. Wehenkel. "OMASES: A Dynamic Security Assessment Tool for the New Market Environment". Proc. of the IEEE Bologna Power Tech, Bologna, Italy, 2003.
14. L. Wehenkel, T. Van Cutsem, and M. Pavella. "Artificial Intelligence Applied to On-line Transient Stability Assessment of Electric Power Systems". Proc. of the 25th IEEE Conf. on Decision and Control, pp. 649–650, 1986.
15. L. Wehenkel. Automatic Learning Techniques in Power Systems. Kluwer Academic Publishers, 1998.
16. L. Duchesne, E. Karangelos, and L. Wehenkel. "Recent Developments in Machine Learning for Energy Systems Reliability Management". Proceedings of the IEEE, Vol. 108, No. 9, pp. 1656–1676, 2020.
Mania Pavella (Remembrances of a professional life) arrived in Liège (Belgium) at the end of September 1952. It was foggy, rainy, and chilly—quite different from the Greek sunny and warm weather I had just left. Deciding to study outside Greece was motivated by my curiosity to see whether “abroad” would correspond to the descriptions of my French and English Professor. My parents did not dissuade me, despite the fact that I was their only child and still under 18. On my arrival in Belgium, I had to suddenly adjust to my new and quite different surroundings. However, having to attend first-year University courses as a “free student” while at the same time working for my University entrance exams left me no time to experience homesick feelings. At the end of my undergraduate studies, with my diploma of electrical engineer in hand, I was leaving for my vacation in Greece without worrying about “what to do next.” So, when my main electrical engineering professor suggested that I apply for an assistantship, I did so without hesitation. During my first years of assistantship, I explored possible thesis topics related to my basic training (Electronics Engineer). However, after a while, I realized that I was reaching a dead end. I thus turned to the realm of Power Systems. My thesis subject was found, its elaboration accomplished, followed by its defense and my PhD graduation (1969). (Let me mention for fun, that at those times, the computing center of my university possessed a sole computer, which had the responsibility of executing all manner of tasks. Theses’ calculations would be considered after executing all other tasks. So, usually, during my thesis elaboration, I was allowed to go and use the computer around 4 am or 5 am—and not for as long as I needed.) After my PhD graduation, I joined academia as an associate professor. This was the real start of my professional career and the origin of the Liège group. My first trip to the States was quite an experience. 
The purpose was my participation in the 1971 Institute of Electrical and Electronics Engineers (IEEE) Power Engineering Society (PES) Winter meeting in New York, and presentation of a paper summarizing my PhD thesis. Strangely enough, being among an almost totally male assembly without knowing anyone did not affect me. However, I was in shock at the end of my presentation,
when a participant started criticizing my work, quite violently. I was flabbergasted inasmuch as I did not understand a single word of what he was saying, and was therefore unable to answer. The intervention of another participant, whom of course I did not know either, saved the situation, apparently by explaining the interest of applying Lyapunov theory. Later, I learned that he was Petar Kokotovic, a professor at the University of Illinois and an expert in Lyapunov theory. Petar invited me to visit his CSL (Coordinated Science Laboratory) in October of the same year, and to contribute a paper to the Allerton Conference, held at the "Allerton house," an idyllic place in the middle of a superb forest. This was my first encounter with the international community of scientists and researchers. It was a period of hard work and intense professional activity, locally and internationally. I even had to give up the bridge card game that I was so fond of. The second decisive encounter took place in 1975, during the COPOS conference in San Carlos, Brazil, where I met Tom Dy-Liacco. Tom, in his thesis, a masterpiece published in 1968, laid the foundations of the overall organization of power systems, identified the techniques needed to operate them from the control center, and promoted innovative theoretical approaches to meet these needs. By 1975 he was recognized as the "father of power system control centers." This is still valid today. Tom kindly invited me to go to Cleveland and visit the Cleveland Electric Illuminating (CEI) company and also his home, where I met his wonderful wife and children. I accepted his invitation enthusiastically. This was the beginning of collaborations and meetings at numerous conferences.
These efforts gave me the opportunity to appreciate even more Tom's exceptionally gifted personality: a polyglot, a deep connoisseur of fine arts, an enthusiastic lover of the "art de vivre," and an extremely kind and benevolent human being. This was a long time ago. At present, after many years of professional retirement, I would like to look back and share some personal thoughts. Looking back at the professionally active part of my life, I realize that my career has seldom been planned. Rather, it has consistently been guided by luck, whenever luck has been encountered and recognized. Among the happier circumstances were my interactions with three exceptional personalities:
– As a teenager, my cosmopolitan French and English Professor. He instilled in me the love of foreign languages, and introduced me to, and made me adore, French literature.
– During my university studies and my assistantship, my main Professor of EE, F. Dacos. He instilled in me the love of physics, and introduced me to Einstein's relativity, Eddington's small flatfish, and the magic of physics.
– Later, as a senior researcher and professor, Tom Dy-Liacco. A dear friend and guiding light.
I am and will always be most grateful to them, and appreciative of the chance of having crossed their paths and benefited from their personalities. My deepest feelings of love go to my late parents, for their unconditional and tactful love, and also for the confidence they put in me, which has been the dominant strength in my life. I will be thankful to them forever. Finally, the warmest thoughts and love go to my daughter Clio. Mania Pavella is an Emeritus Professor and an IEEE Life Fellow. Louis Wehenkel graduated in Electrical Engineering (Electronics) in 1986 and received the PhD degree in 1990, both from the University of Liège (Belgium), where he is a Full Professor of Electrical Engineering and Computer Science. His research interests lie in the fields of stochastic methods for modeling, optimization, machine learning, and data mining, with applications in complex systems, in particular large-scale power systems planning, operation and control, industrial process control, bioinformatics, and computer vision.
Damien Ernst received the MSc and PhD degrees in Engineering from the University of Liège, Belgium, in 1998 and 2003, respectively. He is currently a Full Professor at the University of Liège and a Visiting Professor at Télécom Paris. His research interests include electrical energy systems and reinforcement learning, a subfield of artificial intelligence. He is also the CSO of Haulogy, a company developing intelligent software solutions for the energy sector. He has co-authored more than 300 research papers and 2 books, and has won numerous awards for his research, among which the prestigious 2018 Blondel Medal. He is regularly consulted by industries, governments, international agencies, and the media for his deep understanding of the energy transition.
Chapter 10
Reinforcement Learning for Decision-Making and Control in Power Systems Xin Chen, Guannan Qu, Yujie Tang, Steven Low, and Na Li
10.1 Introduction
Electric power systems serve to deliver electricity from generation to load through transmission and distribution in a reliable and cost-effective manner. Driven by technological and economic growth and by sustainability requirements, power systems are undergoing an architectural transformation to become more sustainable, distributed, dynamic, intelligent, and open. On the one hand, recent years have witnessed a rapid proliferation of renewable generation and distributed energy resources (DERs), including solar energy, wind power, energy storage, responsive demands, electric vehicles, and so forth. This revolutionizes the way power is generated and energy is managed, with bi-directional power flow and multiple stakeholders. On the other hand, widely deployed smart meters, upgraded communication networks, and data management systems foster the emergence of Advanced Metering Infrastructures (AMIs) and Wide Area Monitoring Systems (WAMS), which greatly facilitate the real-time monitoring and control of power
X. Chen School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA G. Qu Carnegie Mellon University, Pittsburgh, PA, USA Y. Tang Peking University, Beijing, China S. Low California Institute of Technology, Pasadena, CA, USA N. Li () Gordon McKay Professor in Electrical Engineering and Applied Mathematics, Harvard University, Cambridge, MA, USA e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 J. S. Tietjen et al. (eds.), Women in Power, Women in Engineering and Science, https://doi.org/10.1007/978-3-031-29724-3_10
systems and enable bi-directional information flow. However, new challenges for system operation and control have emerged during the power grid evolution, including the following: 1. Growing complexity. The deployment of massive amounts of DERs and the interconnection of regional power grids dramatically increase system operation complexity and make it difficult to develop accurate system (dynamical) models. 2. Increasing uncertainty. The rapid growth of responsive loads and renewable generation significantly increases uncertainty, which complicates predictions and jeopardizes system reliability. 3. Intensifying volatility. The high penetration of power electronics converter-interfaced devices reduces system inertia and leads to faster dynamics, which necessitates advanced controllers with online adaptivity. Advanced decision-making and control techniques are needed to address these challenges and ensure the reliable and efficient operation of modern power grids. In particular, reinforcement learning (RL) [1] is a prominent machine learning paradigm that is concerned with how agents take sequential actions in an uncertain interactive environment and learn from the received feedback to optimize a certain performance measure. Leveraging artificial neural networks (ANNs) for function approximation, deep RL has been further developed to solve large-scale online decision problems. The most appealing virtue of RL is that it is model-free, i.e., it makes decisions without explicitly estimating the underlying models. Therefore, RL has the potential to capture hard-to-model dynamics and could outperform model-based methods in highly complex tasks. Moreover, the data-driven nature of (D)RL allows it to adapt to real-time observations and perform well in uncertain dynamical environments. The past decade has witnessed great success of RL in a broad spectrum of applications, including playing games, clinical trials, robotics, autonomous driving, and so forth.
Meanwhile, the application of RL to power system operation and control has attracted growing attention (see Chen et al. [2] and the references therein). RL-based decision-making mechanisms are envisioned to compensate for the limitations of existing model-based approaches and thus appear promising in addressing the emerging challenges described above. In this chapter, we present a comprehensive and structured overview of the RL methodology, from basic concepts and theoretical fundamentals to state-of-the-art RL techniques. Two key applications are selected as examples to illustrate the overall procedure of applying RL to control and decision-making in power systems. At the end, we discuss in depth the critical challenges and future directions for applying RL to power system problems.
10.2 Preliminaries on Reinforcement Learning Reinforcement learning (RL) is a branch of machine learning concerned with how an agent makes sequential decisions in an uncertain environment to maximize the
Fig. 10.1 The structure of the RL methodology with related literature
Fig. 10.2 Illustration of a Markov Decision Process
cumulative reward. The structure of the RL methodology with related literature is shown in Fig. 10.1. Mathematically, the decision-making problem is modeled as a Markov Decision Process (MDP), which is defined by a state space, an action space, a transition probability function that maps a state-action pair to a distribution on the state space, and a reward function. The state space and action space can be either discrete or continuous. As illustrated in Fig. 10.2, in an MDP setting, the environment starts with an initial state s0. At each time, given the current state st, the agent chooses action at and receives reward rt that depends on the current state-action pair, after which the next state st+1 is randomly generated from the transition probability. A policy π for the agent is a map from the state to a distribution on the action space, which prescribes what action to take in a given state. The agent aims to find an optimal policy π* that maximizes the expected infinite-horizon discounted reward. In the MDP framework, the so-called model specifically refers to the reward function and the transition probability. Accordingly, this leads to two different problem settings: 1. When the model is known, one can directly solve for an optimal policy by Dynamic Programming (DP) [3]. 2. When the model is unknown, the agent learns an optimal policy based on past observations from interacting with the environment, which is the problem of RL.
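When the model is known (setting 1 above), the optimal policy can be computed by dynamic programming. The following is a minimal value-iteration sketch on a hypothetical two-state, two-action MDP; the transition probabilities, rewards, and discount factor are invented purely for illustration.

```python
import numpy as np

# A toy 2-state, 2-action MDP (hypothetical numbers, for illustration only).
# P[s, a, s'] is the transition probability, R[s, a] the reward.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
gamma = 0.9  # discount factor

# Value iteration: repeatedly apply the Bellman optimality operator.
V = np.zeros(2)
for _ in range(1000):
    Q = R + gamma * (P @ V)        # Q[s, a] = R[s, a] + gamma * sum_s' P[s, a, s'] * V[s']
    V_new = Q.max(axis=1)          # act greedily over actions
    if np.abs(V_new - V).max() < 1e-8:
        break
    V = V_new

policy = Q.argmax(axis=1)          # greedy policy w.r.t. the converged Q-function
```

Because the Bellman optimality operator is a γ-contraction, the iteration converges geometrically from any initialization.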
The crux of finding the optimal policy is the concept of the Q-function. The Q-function Qπ(s, a) for a given policy π is defined as the expected cumulative reward when the initial state is s, the initial action is a, and all subsequent actions are chosen according to policy π. The Q-function Qπ satisfies the Bellman Equation, a recursive relation stating that each Q-value equals the immediate reward plus the discounted future value.
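In symbols, writing γ for the discount factor and s′ for the random next state, the recursion just described can be written as follows (the notation is assumed here, since the chapter states the relation only in words):

```latex
% Bellman equation for a fixed policy \pi
Q^{\pi}(s,a) \;=\; \mathbb{E}\Big[\, r(s,a) \;+\; \gamma \, \mathbb{E}_{a' \sim \pi(\cdot \mid s')}\, Q^{\pi}(s',a') \,\Big],
\qquad s' \sim P(\cdot \mid s, a).

% Bellman optimality equation, satisfied by the optimal Q-function Q^*
Q^{*}(s,a) \;=\; \mathbb{E}\Big[\, r(s,a) \;+\; \gamma \, \max_{a'} Q^{*}(s',a') \,\Big].
```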
10.2.1 Classical Reinforcement Learning Algorithms This subsection considers the RL setting where the environment model is unknown and presents classical RL algorithms for finding the optimal policy. Model-free RL algorithms are primarily categorized into two types: value-based and policy-based. Generally, for modest-scale RL problems with finite state/action spaces, value-based methods are preferred, as they almost always converge and do not assume a policy class. Conversely, policy-based methods are more efficient for problems with high-dimensional or continuous action/state spaces, but they are known to suffer from various convergence issues, e.g., high variance and convergence to local optima. Value-based RL algorithms directly learn the optimal Q-function, while the optimal policy is a by-product that can be retrieved by acting greedily. Q-learning [4] is perhaps the most popular value-based RL algorithm. Q-learning maintains a Q-function and updates it toward the optimal Q-function based on episodes of experience. Thus, the Q-learning algorithm is essentially a stochastic approximation scheme for solving the Bellman Optimality Equation. Policy-based RL algorithms restrict the optimal policy search to a policy class parameterized as πθ with parameter θ. With this parameterization, the objective can be rewritten as a function of the policy parameter, i.e., J(θ), and the RL problem is reformulated as an optimization problem maxθ J(θ) to find the optimal parameter θ*. A straightforward solution scheme is to employ the gradient ascent method with the Policy Gradient Theorem [5], which was a major breakthrough in addressing the gradient computation issue. In particular, the actor-critic algorithm is a prominent and widely used architecture based on policy gradient. It consists of two eponymous components: (1) the "critic" is in charge of estimating the Q-function, and (2) the "actor" conducts the gradient ascent step.
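A single tabular Q-learning update, viewed as one stochastic-approximation step toward the Bellman Optimality Equation, can be sketched as follows; the table sizes, learning rate, and transition values are illustrative assumptions.

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One Q-learning step on an observed transition (s, a, r, s_next).

    Q is a (num_states, num_actions) table; alpha is the learning rate and
    gamma the discount factor. All shapes/values here are illustrative.
    """
    td_target = r + gamma * Q[s_next].max()   # bootstrapped one-step target
    Q[s, a] += alpha * (td_target - Q[s, a])  # move Q(s, a) toward the target
    return Q

# Example: a single update on a 2-state, 2-action table.
Q = np.zeros((2, 2))
Q = q_learning_update(Q, s=0, a=1, r=1.0, s_next=1)
```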
Exploration Versus Exploitation A fundamental problem faced by RL algorithms is the dilemma between exploration and exploitation. Good performance requires taking actions in an adaptive way that strikes an effective balance between (1) exploring poorly understood actions to gather new information that may improve future rewards and (2) exploiting what is known for decisions to maximize immediate rewards. Generally, it is natural to achieve exploitation with the goal of reward maximization, while different RL algorithms encourage exploration in different ways. For on-policy value-based RL algorithms, an ε-greedy policy is commonly
utilized, which with probability ε explores a random action. In policy-based methods, exploration is usually realized by adding random perturbations to the actions or adopting a stochastic policy. Online RL Versus Batch RL The algorithms introduced above are referred to as "online RL," as they take actions and update the policy simultaneously. In contrast, there is another type of RL called "batch RL" [6], which decouples sample collection from policy training. Specifically, batch RL collects a set of episodes of experience generated by any behavior policies, and then fits the optimal Q-function or optimizes the target policy entirely from this sample dataset. To encourage exploration, batch RL typically iterates between exploratory sample collection and policy learning prior to application. The advantages of batch RL lie in the stability and data efficiency of the learning process. Representative batch RL algorithms include Least-Squares Policy Iteration (LSPI), Fitted Q-Iteration, and so forth.
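The ε-greedy rule just mentioned can be sketched as follows; the table shape and the value of ε are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def epsilon_greedy(Q, s, epsilon=0.1):
    """With probability epsilon take a uniformly random action (explore),
    otherwise act greedily w.r.t. the Q-table (exploit).

    Q is a (num_states, num_actions) table; this is a generic sketch,
    not tied to any particular power-system task.
    """
    num_actions = Q.shape[1]
    if rng.random() < epsilon:
        return int(rng.integers(num_actions))  # explore
    return int(Q[s].argmax())                  # exploit
```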
10.2.2 Deep Reinforcement Learning For many practical problems, the state and action spaces are large or continuous, and the system dynamics are complex. As a result, it is not possible for value-based RL to compute or store a gigantic Q-value table for all state-action pairs. To deal with this issue, function approximation methods have been developed to approximate the Q-function with some parameterized function class, such as linear or polynomial functions. As for policy-based RL, determining a policy class capable of achieving optimal control is nontrivial for high-dimensional complex tasks. Advances in DRL that leverage ANNs for function approximation or policy parameterization are increasingly used. Specifically, DRL can use ANNs to (1) approximate the Q-function with a Q-network and (2) parameterize the policy with a policy network. Examples of each are described as follows: Q-Function Approximation A Q-network can be used to approximate the Q-function in Q-learning. However, it is known that adopting a nonlinear function, such as an ANN, for approximation may cause instability and divergence. To this end, Deep Q-Network (DQN) [7] has been developed, which greatly improves the training stability of Q-learning with the following two techniques: 1. Experience Replay. Distinguished from classical Q-learning, which runs on consecutive episodes, DQN stores all transition experiences in a database called the "replay buffer." At each step, a batch of transition experiences is randomly sampled from the replay buffer for the Q-learning update. Recycling previous experiences enhances data efficiency and reduces the variance of the learning updates. More importantly, sampling uniformly from the replay buffer breaks the temporal correlations that jeopardize the training process, and thus improves the stability and convergence of Q-learning.
2. Target Network. The other technique is the introduction of a target network, which is a clone of the Q-network whose parameters are kept frozen and only updated periodically. This technique mitigates training instability, as short-term oscillations are circumvented. Policy Parameterization Due to their powerful generalization capability, ANNs are widely used to parameterize control policies, especially when the state and action spaces are continuous. The resultant policy network takes the state as input and outputs a probability distribution over actions. In actor-critic methods, it is common to adopt both a Q-network and a policy network simultaneously, and the backpropagation method can be used to compute the gradients of the ANNs efficiently.
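The two stabilization techniques of DQN can be sketched as follows; the buffer capacity, batch size, and update period are illustrative choices, and the network parameters are represented as plain dictionaries rather than actual ANNs.

```python
import random
from collections import deque

class ReplayBuffer:
    """A fixed-size buffer of transitions, sampled uniformly at random.

    Uniform sampling breaks the temporal correlation between consecutive
    transitions, as described above. Capacity and batch size are
    illustrative defaults.
    """
    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)  # old transitions drop off automatically

    def push(self, s, a, r, s_next, done):
        self.buffer.append((s, a, r, s_next, done))

    def sample(self, batch_size=32):
        return random.sample(list(self.buffer), batch_size)

def maybe_sync_target(step, online_params, target_params, period=1000):
    """Copy the online Q-network weights into the target network only every
    `period` steps, keeping the regression target frozen in between.
    Parameters are plain dicts here, standing in for network weights.
    """
    if step % period == 0:
        target_params.update(online_params)
    return target_params
```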
10.2.3 Other Modern Reinforcement Learning Techniques Several state-of-the-art modern RL techniques that are widely used for complex tasks are summarized in the following paragraphs. 1. Deterministic Policy Gradient: The RL algorithms described above focus on stochastic policies, while deterministic policies a = πθ(s) are often more desirable for real-world control problems with continuous state and action spaces. On the one hand, deterministic policies match practical control architectures, e.g., in power system applications. On the other hand, a deterministic policy is more sample-efficient, as its policy gradient only integrates over the state space, whereas a stochastic policy gradient integrates over both the state and action spaces. Correspondingly, there is a Deterministic Policy Gradient Theorem [8] showing that the policy gradient for a deterministic policy can be expressed in a simple form. One major issue with a deterministic policy is the lack of exploration owing to the determinacy of action selection. To encourage exploration, it is common to perturb the deterministic policy with exploration noise, e.g., adding Gaussian noise to the action during execution. 2. Modern Actor-Critic Methods: Although actor-critic methods achieve great success in many complex tasks, they are known to suffer from various problems, including slow convergence, high variance, and convergence to sub-optimal local optima. Thus, many variants have been developed to improve the performance of actor-critic methods, and a sampling of those methods follows. – Advantage Actor-Critic: The advantage function, i.e., the Q-function minus a baseline, is introduced to replace the Q-function in the "actor" update. One common choice for the baseline is an estimate of the state value function. This modification can significantly reduce the variance of the policy gradient estimate without changing its expectation.
– Asynchronous Actor-Critic: This variant uses parallel training to enhance sample efficiency and training stability. Multiple actors are trained in parallel with different exploration policies, and then
the global parameters are updated based on all the learning results and then synchronized to each actor. – Soft Actor-Critic (SAC): SAC is an off-policy deep actor-critic algorithm with stochastic policies, based on the maximum entropy RL framework. This variant adds an entropy term of the policy to the objective to encourage exploration. 3. Multi-Agent RL: Many power system control tasks involve coordination over multiple agents. For example, in frequency regulation, each generator can be treated as an individual agent that makes its own generation decisions; however, the frequency dynamics are jointly determined by all power injections. The multi-agent RL framework considers a set of agents interacting with the same environment and sharing a common state. At each time, each agent takes its own action given the current state and receives a reward; the system state then evolves to the next state based on the joint action profile. Multi-agent RL is an active and challenging research area with many unsolved problems. An overview of related theories and algorithms is provided in [9]. In particular, decentralized (distributed) multi-agent RL appears very attractive for power system applications. In a popular variant, each agent adopts a local policy, which determines the local action based on local observations (e.g., the local voltage or frequency of bus i). This method allows for decentralized implementation, as the policy of each agent only needs local observations, but it still requires centralized training, since the system state transition relies on the actions of all agents. The separate RL techniques discussed above can be integrated for a single problem to achieve better performance. For instance, one may apply the multi-agent actor-critic framework with deterministic policies, adopt ANNs to parameterize the Q-function and the policy, and use the advantage function for the actor update.
Accordingly, the resultant algorithm is usually named by combining the corresponding keywords, e.g., deep deterministic policy gradient (DDPG), asynchronous advantage actor-critic (A3C), and so forth.
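The exploration-noise idea for deterministic policies (as used in DDPG-style algorithms) can be sketched as follows; the noise scale and actuator limits are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def explore(policy, s, noise_std=0.1, a_min=-1.0, a_max=1.0):
    """Perturb a deterministic policy a = pi(s) with Gaussian noise during
    execution, then clip the result to the actuator limits. The limits and
    noise scale are illustrative, not from any specific application.
    """
    a = policy(s) + noise_std * rng.standard_normal()
    return float(np.clip(a, a_min, a_max))

# Example with a trivial linear deterministic policy.
a = explore(lambda s: 0.5 * s, s=1.0)
```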
10.3 Applications of RL in Power Systems Throughout the past decades, tremendous efforts have been devoted to improving the modeling of power systems. Consequently, schemes based on (optimal) power flow techniques and precise modeling of various electric facilities are standard for the control and optimization of power systems. However, the large-scale integration of DERs and renewable generation significantly increases the complexity, uncertainty, and volatility of power grid operation. It becomes increasingly arduous to procure accurate system models and power injection predictions, challenging the traditional model-driven approaches. Hence, model-free RL-based methodologies become an appealing alternative. As illustrated in Fig. 10.3, the RL-based schemes relieve the need for accurate system models and learn control policies based on
Fig. 10.3 RL schemes for the control and decision-making in power systems
data collected from real system operations or high-fidelity simulators, whereas the underlying physical models and dynamics of power systems are regarded as the uncertain and unknown environment. For power system operation, frequency level is one of the most critical indicators of system operating status, while reliable and efficient energy management is the core task. Accordingly, in this section, we focus on two key applications, i.e., frequency regulation and energy management, and review the recent work that applies RL to them. Frequency regulation is a fast timescale control problem, while energy management is a slower timescale decision-making problem. For each of these applications, we elaborate on how to formulate them as RL problems and the associated solution schemes. The emerging challenges and potential future directions are also discussed. A natural question to ask is why it is necessary to develop new RL-based approaches since traditional tools and existing controllers mostly work “just fine” in real-world power systems. The answer varies from application to application, and we explain some of the main motivations for these new approaches. 1. Although traditional methods work well for the current grid, as the grid evolves with higher penetrations of renewable generation and human user participation, that may no longer be the case. Most existing schemes rely heavily on sound knowledge of power system models. Those models and knowledge are being challenged by various emerging issues including the lack of accurate distribution grid models, highly uncertain renewable generation and user behavior, the growing deployment of EVs coupled with transportation, coordination among massive distributed devices, and so forth. 2. The research community has been studying various techniques to tackle these challenges, e.g., adaptive control, stochastic optimization, machine learning, zeroth-order methods, and so forth. 
RL has been shown to be a promising direction to investigate and will play an important role in addressing these challenges because of its data-driven and model-free nature. RL is capable of dealing with highly complex and hard-to-model problems and can adapt to rapid power fluctuations and topology changes. 3. We do not mean to suggest a dichotomy between RL and conventional methods. Instead, we believe that RL can complement existing approaches and improve them in a data-driven way. For instance, policy-based RL algorithms can be
integrated into existing controllers to adjust key parameters in real time for adaptivity and achieve hard-to-model objectives. The right application scenarios for RL need to be identified and RL schemes need to be used appropriately.
10.3.1 Application of RL for Frequency Regulation Frequency regulation (FR) maintains the power system frequency close to its nominal value, e.g., 60 Hz in the United States, by balancing power generation and load demand. Conventionally, three generation control mechanisms in a hierarchical structure are implemented at different timescales to achieve fast response and economic efficiency. In bulk power systems, primary FR generally operates locally to eliminate power imbalance at the timescale of a few seconds, e.g., via droop control, whereby the governor adjusts the mechanical power input to the generator around a setpoint based on the local frequency deviation. Secondary FR, known as automatic generation control (AGC), adjusts the setpoints of governors to bring the frequency and tie-line power interchanges back to their nominal values; it is performed in a centralized manner within minutes. Tertiary FR, namely economic dispatch, reschedules the unit commitment and restores the secondary control reserves within tens of minutes to hours. In this subsection, we focus on the primary and secondary FR mechanisms, as tertiary FR does not involve frequency dynamics and corresponds to the power dispatch that will be discussed in the subsection on energy management. For this discussion, we use multi-area AGC as the paradigm to illustrate how to apply RL methods, as the mathematical models of AGC are well established and widely used in the literature. We will present the definitions of the environment, state, and action; the reward design; and the learning of control policies, and then discuss several key issues in RL-based FR. The models presented herein are examples for illustration; there are other RL formulations and models for FR depending on the specific problem setting. 1. Environment, State, and Action.
The state can be defined to include the frequency deviation at each bus and the power flow deviation over each line from their nominal values. When controlling generators for FR, the action is defined as the concatenation of the generation control commands. The corresponding action space is continuous in nature but may be discretized in Q-learning-based FR schemes. The environment covers the frequency dynamics in power networks and the governor-turbine control models of the generators. The continuous-time system dynamics are generally discretized over a discrete-time horizon to fit the RL framework, and the time interval depends on the sampling or control period. 2. Reward Design. The design of the reward function plays a crucial role in successful RL applications. Although there is no general rule to follow, one principle is that the reward should effectively reflect the control goal. For multi-
area AGC, the reward function aims to restore the frequency and tie-line power flows to their nominal values after disturbances. Accordingly, the reward can be defined as the negative of the frequency deviation and tie-line flow deviation, e.g., in square-sum form [10]. Exponential functions, absolute-value functions, and other sophisticated reward functions involving the cost of generation change and penalties for large frequency deviations can also be used. 3. Policy Learning. Since the system states may not be fully observable in practice, the RL control policy is generally defined as the map from the available measurement observations to the action. The following two steps are critical for achieving a good control policy. – Select Effective Observations. The selection of observations typically faces a trade-off between complexity and informativeness. It is helpful to leverage domain knowledge to choose effective observations. For example, multi-area AGC conventionally operates based on the area control error (ACE) signal. Accordingly, its proportional, integral, and derivative (PID) counterparts are often adopted as observations. Other measurements, such as the power injection deviations, can also be included in the observation. – Select the RL Algorithm. Both value-based and policy-based RL algorithms have been applied to FR in power systems. In Q-learning-based FR schemes, the state and action spaces are discretized and the ε-greedy policy is used. The DDPG-based actor-critic framework can be applied to develop FR schemes with continuous actions and observations. In addition, multi-agent RL is applicable to coordinate multiple control areas or multiple generators, where each agent designs its own control policy with local observations. 4. Discussion. Several key observations surface as a result of the information above. – Environment Model.
Most of the references build environment models or simulators to simulate the dynamics and responses of power systems for training and testing their proposed algorithms. These simulators are typically too complex to be useful for the direct development and optimization of controllers. A potential solution is to train off-policy RL schemes using real system operation data. – Safety. Since FR is vital for power system operation, it necessitates safe control policies. Specifically, two requirements need to be met: (1) the closed-loop system dynamics are stable when applying the RL control policies; (2) the physical constraints, such as line thermal limits, are satisfied. Unfortunately, very few of the existing studies consider the safety issue of applying RL to FR. – Load-Side Frequency Regulation. Most existing research focuses on controlling generators for FR. Various emerging power devices, e.g., inverter-based PV units and ubiquitous fast-responding controllable loads, are promising complements to generator-based frequency control. These are potential FR applications of RL in smart grids.
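The square-sum reward for multi-area AGC described above can be sketched as follows; the weighting coefficients are illustrative tuning knobs, not values from the literature.

```python
import numpy as np

def agc_reward(delta_f, delta_p_tie, alpha=1.0, beta=1.0):
    """Negative weighted sum of squared frequency deviations and squared
    tie-line flow deviations, as in the square-sum form mentioned above.
    The weights alpha/beta are illustrative assumptions.
    """
    delta_f = np.asarray(delta_f)
    delta_p_tie = np.asarray(delta_p_tie)
    return -(alpha * np.sum(delta_f ** 2) + beta * np.sum(delta_p_tie ** 2))

# Reward is maximal (zero) only when frequency and tie-line flows are nominal.
r = agc_reward(delta_f=[0.02, -0.01], delta_p_tie=[0.1])
```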
10.3.2 Application of RL for Energy Management Energy management is an advanced application that utilizes information flow to manage power flow and maintain power balance in a reliable and efficient manner. Energy management systems (EMSs) are developed for electric power control centers to monitor, control, and optimize system operation. With the assistance of the supervisory control and data acquisition (SCADA) system, EMSs for transmission systems are technically mature. However, for many sub-regional power systems, such as medium-/low-voltage distribution grids and microgrids, EMSs are still under development due to the integration of various DER facilities and the lack of metering units. Moreover, a family of EMSs with a hierarchical structure is needed to facilitate different levels of energy management, including grid-level EMS, EMS for coordinating a cluster of DERs, home EMS (HEMS), and so forth. In practice, there are significant uncertainties in energy management, which result from unknown models and parameters of power networks and DER facilities, uncertain user behaviors and weather conditions, and so forth. Hence, many recent studies adopt (D)RL techniques to develop data-driven EMSs. In the rest of this subsection, we first introduce the RL models of DERs and adjustable loads, and then review the RL-based schemes for different levels of energy management problems. 1. State, Action, and Environment. We present the action, state, and environment models for typical DER facilities, building HVAC systems, and residential loads. – Distributed Energy Resources. We consider a bundle of several typical DERs, including a dispatchable PV unit, a battery, an EV, and a diesel generator (DG). The action is defined as the combination of the power outputs of the PV unit, battery, EV, and DG, which are continuous.
The DER state can be defined as the combination of the maximal PV generation power determined by the solar irradiance, the state of charge (SOC) levels of the battery and EV, as well as other related states of the EV, e.g., current location (at home or outside), travel plan, and so forth. – Building HVAC. Buildings account for a large share of total energy usage, about half of which is consumed by heating, ventilation, and air conditioning (HVAC) systems. Smartly scheduling HVAC operation has considerable potential to reduce energy costs, but building climate dynamics are intrinsically hard to model and are affected by various environmental factors. Generally, a building is divided into multiple thermal zones, and the action at each time is defined as the combination of the conditioned air temperature, the supply air temperature, and the supply air flow rate at each zone. The choice of states is subtle, since many exogenous factors may affect the indoor climate. A typical definition of the HVAC state includes the outside temperature, the indoor temperature of each zone, and the humidity and occupancy rate. In addition, the solar irradiance, the carbon dioxide concentration, and other environmental factors may be included in the state.
– Residential Loads. Residential demand response (DR) strives to drive changes in electricity consumption by end users in response to time-varying electricity prices or incentive payments. Domestic electric appliances are classified as (1) non-adjustable loads, e.g., computers and refrigerators, which are critical and must be served; and (2) adjustable loads, e.g., air conditioners and washing machines, whose operating power or usage time can be adjusted. The action for an adjustable load can be defined as the combination of a binary variable denoting whether the appliance switches from on to off, from off to on, or stays in the same mode, and the power consumption of the load, which can be adjusted either discretely or continuously depending on the load characteristics. The operational state of the load can be defined as the on/off status together with other related states. For example, the indoor and outdoor temperatures are contained in the state if the load is an air conditioner, and for a washing machine the state captures the task progress and the remaining time to the deadline. – Other System States: In addition to the operational states above, there are some critical system states for EMS, e.g., the current time t, the electricity price (from the past Kp time steps to the next Kf predicted steps), the voltage profile, the power flow, and so forth. The state can also include past values and future predictions to capture temporal patterns, and the previous actions may be considered as part of the state. For different energy management problems, the state and action are determined accordingly by selecting and combining the definitions above. The environment model is given by the transitions of the state and other related exogenous factors. 2. Energy Management Applications.
Energy management covers a broad range of sub-topics, including integrated energy systems (IES), grid-level power dispatch, management of DERs, building HVAC control, HEMS, and so forth. IES, also referred to as multi-energy systems, integrate power grids with heat and gas networks to accommodate renewable energy and enhance overall energy efficiency and flexibility. Grid-level power dispatch aims to schedule the power outputs of generators and DERs to optimize the operating cost of the entire grid while satisfying the operational constraints. Optimal power flow (OPF) is a fundamental tool of traditional power dispatch schemes. Device-level energy management focuses on the optimal control of DER devices and adjustable loads, including EVs, energy storage systems, HVAC, and residential electric appliances, and usually aims to minimize the total energy cost under time-varying electricity prices. 3. Discussion. The issues that arise for EMS are described below. – Challenges in Controlling DERs and Loads. Large-scale distributed renewable generation introduces significant uncertainty and intermittency into energy management, which requires highly accurate forecasting techniques and fast adaptive controllers to cope. The partial observability of complex facilities and the heterogeneity of various devices lead to further difficulties in coordinating massive loads. Moreover, the control of HVAC systems and residential
10 Reinforcement Learning for Decision-Making and Control in Power Systems
loads involves interaction with human users; thus, it is necessary to take user comfort into account and to learn unknown and diverse user behaviors.
– Physical Constraints. There are various physical constraints, e.g., the state-of-charge limits of batteries and EVs, that must be satisfied when taking control actions. Various techniques have been proposed to model these constraints, including adding a logarithmic barrier function as a penalty in the reward [11] and formulating a constrained MDP [12] that is solved with a constrained policy optimization method. Both impose the constraints in a "soft" manner, so there is still a chance of violating them.
– Hybrid of Discrete and Continuous States/Actions. Energy management often involves controlling a mix of discrete and continuous devices, yet basic RL methods handle either discrete or continuous actions, not both. Some Q-learning-based work discretizes the continuous action space to fit the algorithmic framework. Another technique is an ANN-based stochastic policy that handles both action types, combining a Bernoulli policy for on/off switching with a Gaussian policy for continuous actions [13].
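As a rough illustration of such a hybrid policy, the sampling step can be sketched as follows. This is a hypothetical standalone sketch, not the architecture of [13]; all parameter values are illustrative.

```python
import random

def sample_hybrid_action(p_on, mu_power, sigma_power, p_min, p_max, rng=random):
    """Sample a hybrid action for an adjustable load: a Bernoulli draw
    for the on/off switch and a clipped Gaussian draw for the power level."""
    on = 1 if rng.random() < p_on else 0
    if on:
        power = min(max(rng.gauss(mu_power, sigma_power), p_min), p_max)
    else:
        power = 0.0
    return on, power

random.seed(0)
on, power = sample_hybrid_action(p_on=0.9, mu_power=2.0, sigma_power=0.5,
                                 p_min=0.5, p_max=3.0)
```

In a full implementation, `p_on`, `mu_power`, and `sigma_power` would be outputs of the policy network rather than fixed numbers.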
10.4 Challenges and Perspectives This section presents the critical challenges of using RL in power system applications, namely safety and robustness, scalability, and data. Several future directions are then discussed.
10.4.1 Safety and Robustness Power systems are vital infrastructures of modern societies. Hence, it is necessary to ensure that the applied controllers are safe, in the sense that they do not drive the power system operational states to violate critical physical constraints or cause instability or reliability issues. For RL-based control schemes, there are two aspects of concern regarding safety: 1. Guarantee that the learning process is safe (also referred to as safe exploration). For this issue, off-policy RL methods are more desirable, where the training data are generated by existing controllers that are known to be safe. It is not yet known whether on-policy RL can guarantee safe exploration. Some works propose safe on-policy exploration schemes based on the Lyapunov criterion and Gaussian processes; the basic idea is to construct a certain safety region and then take special actions to drive the state back when safety boundaries are approached [14]. However, almost all existing works to date train their RL control
X. Chen et al.
policies based only on high-fidelity power system simulators, where the safe exploration problem is circumvented because unsafe actions carry no real-world consequences. Nevertheless, there may be a substantial gap between the simulator and the real-world system, leading to failures of generalization in real implementations. A possible remedy is to employ robust (adversarial) RL methods [15] in simulator-based policy training. 2. Guarantee that the final learned control policy is safe. It is generally hard to verify whether a policy is safe or whether the generated actions respect physical operational constraints. Common methods for dealing with constraints include (1) formulating the constraint violation as a penalty term in the reward, (2) training the control policy based on a constrained MDP, and (3) adding a heuristic safety layer that adjusts the actions so the constraints are respected. Specifically, the second method aims to learn an optimal policy π∗ that maximizes the expected total return subject to a budget constraint. By defining the physical constraint violation as a cost, the budget constraint imposes safety requirements to some degree. Typical approaches for solving a constrained MDP include Lagrangian methods, constrained policy update rules, and so forth.
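As a minimal sketch of the third approach (a heuristic safety layer), a proposed action can be projected onto whatever the current operating state permits before being applied. All numbers and the simple linear state-of-charge model below are illustrative assumptions.

```python
def safety_layer(action_kw, soc, capacity_kwh, soc_min=0.1, soc_max=1.0, dt_h=1.0):
    """Project a proposed battery charge/discharge action (kW, positive =
    charging) onto the set of actions that keep the state of charge (SoC)
    within its limits over one time step of dt_h hours."""
    max_charge = (soc_max - soc) * capacity_kwh / dt_h
    max_discharge = (soc - soc_min) * capacity_kwh / dt_h
    return min(max(action_kw, -max_discharge), max_charge)

# A 10 kWh battery at 90% SoC can absorb at most 1 kW for one hour,
# so a proposed 5 kW charging action is clipped to the safe value.
safe_action = safety_layer(5.0, soc=0.9, capacity_kwh=10.0)
```

Unlike the penalty-based methods, a projection of this kind enforces the constraint "hard" at every step, at the cost of distorting the learned policy near the boundary.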
10.4.2 Scalability Most existing studies run simulations and tests on small-scale power systems with a few decision-making agents; to our knowledge, no real-world implementation of RL control schemes has been reported yet. A crucial limitation for RL in large-scale multi-agent systems, such as power systems, is scalability, since the state and action spaces expand dramatically as the number of agents increases, a phenomenon known as the "curse of dimensionality." Multi-agent RL and function approximation techniques help improve scalability, but they are still under development and have many limitations. For example, there are limited provable guarantees on how well the Q-function can be approximated with ANNs, and it is unclear whether such approximations work for real-size power grids. Moreover, even though each agent can deploy a local policy to determine its own action, most existing multi-agent RL methods still require centralized learning among all the agents because the Q-function depends on the global state and all agents' actions. One potential direction for enabling distributed learning is to leverage local dependency properties (e.g., the fast-decaying property) to find near-optimal localized policies. Application-specific approximation methods can also be utilized to design scalable RL algorithms.
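To make the curse of dimensionality concrete, consider the size of a tabular joint Q-function over all agents. This is a back-of-the-envelope illustration with made-up per-agent space sizes:

```python
def joint_table_size(local_states, local_actions, n_agents):
    """Number of entries in a tabular joint Q-function when every agent
    has `local_states` local states and `local_actions` local actions."""
    return (local_states * local_actions) ** n_agents

# Even tiny per-agent spaces (10 states, 4 actions) explode with agent count.
sizes = [joint_table_size(10, 4, n) for n in (1, 2, 5)]
```

With only five such agents the joint table already exceeds 10^8 entries, which is why function approximation and localized policies are needed for grid-scale problems.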
10.4.3 Data 1. Data Quantity, Quality, and Availability. Evaluating the amount of data needed to train a good policy, namely the sample complexity, is a challenging and active research area in the RL community. For classical RL algorithms, such as Q-learning, the sample complexity depends on the size of the state and action spaces; the larger the state and action spaces are, the more data are generally needed to find a near-optimal policy. For modern RL methods commonly used in power systems, such as DQN and actor-critic, the sample complexity also depends on the complexity of the function class adopted to approximate the Q-function and on the intrinsic approximation error of that function class. In addition, data quality is one of the critical factors affecting learning efficiency. Real measurement and operational data of power grids suffer from various issues, such as missing data, outliers, noise, and so forth; thus, pre-processing of the raw data is needed. Theoretically, larger variance in noisy observations typically leads to higher sample complexity for achieving a given level of accuracy. Almost all existing works assume that high-fidelity simulators or accurate environment models are available to simulate the system dynamics and response, and these are the sources of sample data for training and testing RL policies. When such simulators are unavailable, data availability becomes an issue for the application of on-policy RL algorithms. 2. Potential Directions to Address Data Issues. Despite successful simulation results, theoretical understanding of the sample complexity of modern RL algorithms is limited. In addition, many power system applications use multi-agent training methods with partial observation and adopt ANNs for function approximation, further complicating the theoretical analysis. A key technique for improving the sample complexity of training RL policies is the use of warm starts.
Empirical results validate that good initialization can significantly enhance training efficiency. There are multiple ways to achieve a warm start, such as (1) utilizing existing controllers for pre-training, (2) encoding domain knowledge into the design of control policies, (3) transfer learning, which transplants well-trained policies to a similar task to avoid learning from scratch, and (4) imitation learning, which learns from available demonstrations or expert systems. Data availability and quality can be handled at the algorithmic and physical levels. At the algorithmic level, when high-fidelity simulators are unavailable, a potential solution is to construct training samples from existing system operational data and employ off-policy RL methods to learn control policies. Other training techniques, such as generating virtual samples from limited data to boost data availability, can also be adopted. There have been extensive studies on data quality improvement in the data science field, including data sanity checks, missing data imputation, bad data identification, and so forth. At the physical level, (1) deploying more advanced sensors and smart meters and (2) upgrading communication infrastructure and technologies can improve data availability and quality at the source.
3. Standardized Dataset and Testbed. Existing works in the power literature mostly use synthetic test systems and datasets to simulate and test the proposed RL-based algorithms, and they often provide few implementation details and little code. Hence, it is necessary to develop benchmark datasets and authoritative testbeds for power system applications to standardize the testing of RL algorithms and facilitate fair performance comparisons.
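As a toy illustration of the data pre-processing steps mentioned above (missing data imputation and bad data identification), a single measurement series can be cleaned as follows. The median/MAD outlier rule and the threshold are illustrative choices, not a prescription from the literature:

```python
import statistics

def clean_series(values, z_thresh=3.5):
    """Impute missing readings (None) with the median of observed values
    and flag outliers using a robust (median/MAD-based) z-score."""
    observed = [v for v in values if v is not None]
    med = statistics.median(observed)
    mad = statistics.median(abs(v - med) for v in observed)
    imputed = [med if v is None else v for v in values]
    outliers = [i for i, v in enumerate(imputed)
                if mad > 0 and abs(v - med) / mad > z_thresh]
    return imputed, outliers

# A gap and a 50 p.u. spike in an otherwise ~1.0 p.u. voltage series.
imputed, outliers = clean_series([1.0, 1.1, None, 0.9, 50.0])
```

A median-based rule is used here because, on short windows, a single bad reading inflates the mean and standard deviation enough to hide itself from a classical z-score test.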
10.4.4 Key Future Directions Regarding the challenges in applying RL to power systems, we present several potential future directions below. 1. Integrate Model-Free and Model-Based Methods: Actual power system operation is not a black box; abundant model information is available. Purely model-free approaches fail to exploit this information and suffer from their own limitations, such as the safety and scalability issues discussed above. Since existing model-based methods are well studied in theory and already applied in industry with acceptable performance, one promising future direction is to combine model-based and model-free methods so that their advantages complement each other. For instance, model-based methods can serve as warm starts or nominal models, or be used to identify critical features for model-free methods. Alternatively, model-free methods can coordinate and adjust the parameters of incumbent model-based controllers to improve their adaptivity while retaining baseline performance guarantees. There are three potential modes of integration: implementing model-based and model-free methods in series, in parallel, or embedding one as an inner module of the other. Despite limited work on this subject so far, integrating model-free RL with existing model-based control schemes is envisioned to be an important future direction. 2. Exploit Suitable RL Variants: RL is a fundamental and vibrant research field attracting a great deal of attention, and new algorithmic advances appear frequently. Besides the DRL, multi-agent RL, and robust RL mentioned above, a wide range of branches of the RL field, such as transfer RL, meta-RL, federated RL, inverse RL, integral RL, Bayesian RL, hierarchical RL, interpretable RL, and so forth, can improve learning efficiency and tackle specific problems in suitable application scenarios.
For instance, transfer RL can transplant policies well trained on one task to a similar task, so the agent does not have to learn from scratch, which enhances training efficiency. 3. Leverage Domain Knowledge and Problem Structures: Naive application of existing RL algorithms can cause many problems in practice. In addition to algorithmic advances, leveraging domain knowledge and exploiting application-specific structures to design tailored RL algorithms is necessary to achieve superior performance. Specifically, domain knowledge and empirical evidence
can guide the definition of the state and reward, the initialization of the policy, and the selection of RL algorithms. For example, the area control error (ACE) signal is often used as the state in RL-based frequency regulation. Specific problem structures are useful for determining the policy class, the approximation function class, hyperparameters, and so forth, to improve training efficiency and provide performance guarantees. 4. Satisfy Practical Requirements: The following concrete requirements for RL-based methods need to be met to enable practical implementation in power systems.
– The safety, scalability, and data issues of RL-based methods need to be addressed.
– RL-based algorithms should be robust and thus capable of dealing with noise and failures in measurement, communication, computation, and actuation, to ensure reliable operation.
– To be used with confidence, RL-based methods need to be interpretable and have theoretical performance guarantees.
– Since RL requires a large amount of data from multiple stakeholders, data privacy must be preserved.
– As power systems generally operate under normal conditions, ensuring that RL control policies learned from real system data explore sufficiently and perform well in extreme scenarios remains an unsolved problem.
– Since RL-based approaches rely heavily on information flow, cybersecurity must be guaranteed against various malicious cyberattacks.
– Existing RL-based algorithms typically take tens of thousands of iterations to converge, so training efficiency needs to be improved.
– The necessary computing resources and communications infrastructure and technology need to be deployed and upgraded to support the application of RL schemes. We elaborate on this requirement below. In many existing works, multi-agent DRL is used to develop scalable control algorithms with centralized (offline) training and decentralized (online) implementation.
To enable centralized training of DRL, the coordination center needs large-scale data storage, high-performance computers, and advanced computing technologies, such as accelerated computing (e.g., Graphics Processing Units (GPUs)), cloud and edge computing, and so forth. As for decentralized or distributed implementation, although the computational burden is lighter, each device (agent) typically requires local sensors, meters, microchip-embedded solvers, and automated actuators. Moreover, to support the application of DRL, advanced communication infrastructure is necessary to enable two-way communication and real-time streaming of high-fidelity data from massive numbers of devices. Various communication and networking technologies, such as (optic) cable lines, power line carrier, cellular, satellite, 5G, WiMAX, Wi-Fi, XBee, Zigbee, and so forth, can be used for different RL applications. In short, both algorithmic advances and infrastructure development are envisioned to facilitate the practical application of RL schemes.
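As one concrete (hypothetical) realization of the model-based/model-free integration envisioned in the first direction above, a learned correction can be layered on top of a trusted model-based controller and kept bounded, so the closed-loop behavior never strays far from the baseline. All gains, bounds, and the proportional baseline are illustrative assumptions:

```python
def nominal_controller(freq_dev_hz, k_p=0.5):
    """Model-based baseline: simple proportional response to frequency
    deviation (per-unit output, sign opposes the deviation)."""
    return -k_p * freq_dev_hz

def residual_policy(freq_dev_hz, learned_correction, bound=0.1):
    """Model-free layer: a learned correction, clipped to +/- bound,
    added on top of the trusted baseline controller."""
    correction = min(max(learned_correction(freq_dev_hz), -bound), bound)
    return nominal_controller(freq_dev_hz) + correction

# Before any learning (zero correction), the output equals the baseline.
u = residual_policy(0.2, lambda x: 0.0)
```

The clipping bound is what gives the scheme its baseline performance guarantee: even a badly trained correction can only perturb the trusted controller by a limited amount.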
10.5 Conclusion Although the promise of applying RL to the power system field appears enticing, many critical problems remain unsolved, and there is still a long way to go before practical implementation. On the one hand, this subject is new, still under development, and in need of much more study. On the other hand, it is time to step back and rethink the advantages and limitations of applying RL to power systems (the world's most complex and vital engineered systems) and figure out where and when to use RL. In fact, RL is not envisioned to completely replace existing model-based methods but could be a viable alternative for specific tasks. For instance, RL and other data-driven methods are promising when the models are too complex to be useful or when the problems are intrinsically hard to model, such as human-in-the-loop control (e.g., in demand response). The right application scenarios for RL, used appropriately, could bring significant benefits to the power grid, especially with high levels of renewable generation and distributed energy resources. Authors' Note This chapter was adapted from an Institute of Electrical and Electronics Engineers (IEEE) paper by Xin Chen, Guannan Qu, Yujie Tang, Steven Low, and Na Li titled "Reinforcement Learning for Selective Key Applications in Power Systems: Recent Advances and Future Challenges," published in 2022 in the IEEE Transactions on Smart Grid.
References
1. Sutton, R. S., & Barto, A. G. (2018). Reinforcement learning: An introduction. MIT Press.
2. Chen, X., Qu, G., Tang, Y., Low, S., & Li, N. (2022). Reinforcement learning for selective key applications in power systems: Recent advances and future challenges. IEEE Transactions on Smart Grid.
3. Bertsekas, D. (2012). Dynamic programming and optimal control: Volume I (Vol. 1). Athena Scientific.
4. Watkins, C. J., & Dayan, P. (1992). Q-learning. Machine Learning, 8(3), 279–292.
5. Sutton, R. S., McAllester, D., Singh, S., & Mansour, Y. (1999). Policy gradient methods for reinforcement learning with function approximation. Advances in Neural Information Processing Systems, 12.
6. Lange, S., Gabel, T., & Riedmiller, M. (2012). Batch reinforcement learning. In Reinforcement learning (pp. 45–73). Springer, Berlin, Heidelberg.
7. Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., ... & Hassabis, D. (2015). Human-level control through deep reinforcement learning. Nature, 518(7540), 529–533.
8. Silver, D., Lever, G., Heess, N., Degris, T., Wierstra, D., & Riedmiller, M. (2014). Deterministic policy gradient algorithms. In International Conference on Machine Learning (pp. 387–395). PMLR.
9. Zhang, K., Yang, Z., & Başar, T. (2021). Multi-agent reinforcement learning: A selective overview of theories and algorithms. Handbook of Reinforcement Learning and Control, 321–384.
10. Yan, Z., & Xu, Y. (2020). A multi-agent deep reinforcement learning method for cooperative load frequency control of a multi-area power system. IEEE Transactions on Power Systems, 35(6), 4599–4608.
11. Liu, W., Zhuang, P., Liang, H., Peng, J., & Huang, Z. (2018). Distributed economic dispatch in microgrids based on cooperative reinforcement learning. IEEE Transactions on Neural Networks and Learning Systems, 29(6), 2192–2203.
12. Li, H., Wan, Z., & He, H. (2019). Constrained EV charging scheduling based on safe deep reinforcement learning. IEEE Transactions on Smart Grid, 11(3), 2427–2439.
13. Li, H., Wan, Z., & He, H. (2020). Real-time residential demand response. IEEE Transactions on Smart Grid, 11(5), 4144–4154.
14. García, J., & Fernández, F. (2015). A comprehensive survey on safe reinforcement learning. Journal of Machine Learning Research, 16(1), 1437–1480.
15. Morimoto, J., & Doya, K. (2005). Robust reinforcement learning. Neural Computation, 17(2), 335–359.
Xin Chen is currently a postdoctoral associate affiliated with the MIT Energy Initiative at the Massachusetts Institute of Technology. He received B.S. degrees in both Engineering Physics and Economics and a master's degree in Electrical Engineering from Tsinghua University, Beijing, China, in 2015 and 2017, respectively, and his Ph.D. from Harvard University, MA, USA, in 2022. His research lies in the learning, optimization, and control of human-cyber-physical systems, with particular applications to power and energy systems. He is passionate about developing theoretical foundations and practically applicable algorithms that lead to intelligent, autonomous, and sustainable energy systems. His research has resulted in multiple high-impact publications and has been applied in several industry projects. He was a recipient of the Outstanding Student Paper Award at the IEEE Conference on Decision and Control in 2021, a Best Student Paper Award Finalist at the IEEE Conference on Control Technology and Applications in 2018, and the Best Conference Paper Award at the IEEE PES General Meeting in 2016.
Guannan Qu has been an assistant professor in the Electrical and Computer Engineering Department of Carnegie Mellon University since September 2021. He received his B.S. degree in Electrical Engineering from Tsinghua University in Beijing, China, in 2014, and his Ph.D. in Applied Mathematics from Harvard University in Cambridge, Massachusetts, in 2019. He was a CMI and Resnick postdoctoral scholar in the Department of Computing and Mathematical Sciences at the California Institute of Technology from 2019 to 2021. He is the recipient of the Caltech Simoudis Discovery Award, a PIMCO Fellowship, an Amazon AI4Science Fellowship, and an IEEE SmartGridComm Best Student Paper Award. His research interest lies in control, optimization, and machine/reinforcement learning, with applications to power systems, multi-agent systems, the Internet of Things, and smart cities. During college, Qu's focus of study was power system engineering. While interested in the practical side of engineering, Qu also became intrigued by how the math worked for solving complex power system equations. As a result, during his Ph.D., Qu began to pursue a more theoretical understanding of how engineering systems work, which eventually led to an interdisciplinary research profile spanning control theory, optimization theory, machine learning, and engineering systems.
Yujie Tang received his bachelor's degree in Electronic Engineering from Tsinghua University in 2013. He pursued his doctoral research in the Electrical Engineering department at the California Institute of Technology from 2013 to 2019, studying optimization methods in smart grids. He was a postdoctoral fellow at the School of Engineering and Applied Sciences at Harvard University from 2019 to 2022, working on distributed zeroth-order optimization and reinforcement learning. He is currently an Assistant Professor in the Department of Industrial Engineering and Management at Peking University, with research interests broadly in distributed optimization, control, and learning and their applications in cyber-physical systems.
Steven Low is the F. J. Gilloon Professor in the Departments of Computing and Mathematical Sciences and Electrical Engineering at Caltech. He received his B.S. from Cornell and his Ph.D. from Berkeley, both in EE. After graduation, he joined AT&T Bell Laboratories, Murray Hill, NJ, and then the faculty of the University of Melbourne, Australia, to work on communication networks, before moving to Caltech. He has held honorary/chaired professorships in Australia, China, and Taiwan. He is a co-recipient of IEEE best paper awards, an awardee of the IEEE INFOCOM Achievement Award and the ACM SIGMETRICS Test of Time Award, and a fellow of IEEE, ACM, and CSEE (Chinese Society for Electrical Engineering). He is well known for his work on Internet congestion control and on the semidefinite relaxation of optimal power flow problems in the smart grid. His research on networks has been accelerating more than 1 TB of Internet traffic every second since 2014. His research on the smart grid is providing large-scale, cost-effective electric vehicle charging to workplaces.
Na Li is a Gordon McKay professor in Electrical Engineering and Applied Mathematics at Harvard University. She received her B.S. degree in Mathematics from Zhejiang University in 2007 and her Ph.D. degree in Control and Dynamical Systems from the California Institute of Technology in 2013. She was a postdoctoral associate at the Massachusetts Institute of Technology from 2013 to 2014. Her research lies in the control, learning, and optimization of networked systems, including theory development, algorithm design, and applications to cyber-physical societal systems. She is or has been an associate editor for the IEEE CSS Conference Editorial Board (CEB), Systems and Control Letters, IEEE Control Systems Letters, and IEEE Transactions on Automatic Control, and she has served on the organizing and program committees for several conferences and workshops. She received the NSF CAREER Award (2016), the AFOSR Young Investigator Award (2017), the ONR Young Investigator Award (2019), the Donald P. Eckman Award (2019), and the McDonald Mentoring Award (2020), among other awards.
In college, besides mathematics, Li was very interested in biomedical areas. Her original plan was to pursue a Ph.D. in mathematical biology, a deliberately broad term, because she was not sure what exactly it meant or what exactly she wanted. Luckily, she participated in a research exchange program during the summer of her junior year, when she was a summer research intern in Prof. Jeff Shamma's lab at UCLA. Her summer started with reading textbooks on systems, signals, and control, followed by many systems biology papers. The summer was an eye-opening experience. Though she had taken many math classes and earned good grades, she was constantly concerned about how to apply the math to do meaningful biological research, because biology is so complex. In the first week, when she was reading about systems and control, she was deeply impressed by the beauty of the field: the application of mathematics was so elegant and meaningful. When she was reading systems biology papers (some of which were written by her future Ph.D. adviser, Prof. John Doyle), she became convinced that the systems approach was the one she wanted to pursue. Because of that summer, she applied to several control Ph.D. programs in addition to math Ph.D. programs, and she was lucky to be admitted to CDS (Control and Dynamical Systems) at Caltech, which led to her career in the control field. At Caltech, she had many opportunities to interact with other professors and researchers. While working on a project on exercise physiology, she participated in a study group led by her co-adviser, Prof. Steven Low. The group study brought Li a lot of inspiration: she saw how she could use her mathematical skills to make an impact on society by developing optimization and control algorithms for power grids to improve environmental and energy sustainability. She devoted her Ph.D. thesis to this topic and has been working in this interdisciplinary research area for the past decade.
Li has two little kids; the first was born around the time she started her tenure-track position at Harvard. Before the kids were born, Li enjoyed hiking, photography, and reading science fiction. Since the kids were born, most of her time has been spent on them, observing how "human intelligence" develops. Now, she enjoys reading books on brain development, emotional growth, and related topics. Interestingly, these books help her understand herself better, and she applies the skills learned from them in her professional career.
Chapter 11
System Protection Ariana Hargrave
11.1 Introduction Millions of times per day, people across the world flip on a light switch in their homes or businesses. These people all have the same expectations—the lights should come on, instantaneously, every time. Electricity is critical to everyday life in our modern society, and we have come to expect near-perfect reliability in the power delivered to us. And nearly 100% of the time, our expectations are met. Because the operation of the power system is usually so seamless, we rarely put much thought into what is truly involved behind the scenes to get power to our light switch. The electric power grid is an engineering marvel and the largest machine in the world. Every second of every day, power system engineers across the world are working together to operate, maintain, and improve the power grid—ensuring that safe, reliable, and economical power is constantly available to everyone, immediately, at the flip of a switch. Occasionally, events occur that disrupt the flow of power in the grid. A tree falls on a power line, a car crashes into a utility pole, or a bird relieves itself on a distribution line. These events are called faults, and they cause the power to divert away from its intended path. Faults cause damage to electrical equipment and are dangerous to the general public. System protection engineers are specialists who design protection systems for the power grid. These protection systems identify when and where faults occur and disconnect (or re-route) power until the problem can be corrected. Protection engineers work hard to make sure that the systems they design are fast, sensitive, selective, secure, and dependable. Protection engineers are responsible for analyzing records from the protection system after a fault
A. Hargrave () Schweitzer Engineering Laboratories, Inc., Fair Oaks Ranch, TX, USA e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 J. S. Tietjen et al. (eds.), Women in Power, Women in Engineering and Science, https://doi.org/10.1007/978-3-031-29724-3_11
A. Hargrave
to ensure that the equipment operated properly and that the system design was appropriate. This chapter explains why protection systems are necessary, describes the equipment that makes up a protection system, and explains the philosophies and basic operating principles of protective relays.
11.2 The Power Grid Figure 11.1 shows a simplified drawing of the power grid. Generating stations, or power plants, convert various sources of energy (coal, natural gas, hydro, nuclear, oil, wind, or solar) into electricity. Places that use electricity, like homes and businesses, are called loads. Loads are inherently diverse, meaning they do not all require energy at the same time. An air-conditioning unit at one house, for example, will likely not be cycling on and off at exactly the same time as that of the neighboring house. Because of this, it was discovered in the early days of the power grid that large, centralized generating stations were more efficient and economical than small generators near each individual load. A centralized power plant capable of generating a certain amount of power can actually serve an area that requires five times that capacity, because not all the load will use the power at the same time [1]. Although modern technology is making smaller, distributed generation sources more efficient, the bulk of our electric power still comes from centralized generation. Power lines (transmission and distribution) transport electricity from centralized generating stations to where it is used. The power delivery system resembles a grid—a network of transmission lines, distribution lines, and substations that routes power from generation to load. There are more than 492,000 miles of transmission lines and more than six million miles of distribution lines in North America—enough to go to the moon and back 14 times! [2, 3]. Many different generation sources can connect to the grid, and power can take many different paths through the system to reach a given load. This grid structure maintains overall reliability by ensuring that alternate paths are always available when a problem occurs on the system.
Fig. 11.1 Simplified power grid
11 System Protection
Fig. 11.2 Equipment in a typical substation
Substations are the "nodes" of the power grid, connecting transmission and distribution lines together. Substations also house important equipment that is necessary for the power grid to function. Figure 11.2 shows some important pieces of equipment found in a typical substation. Transformers increase or decrease the voltage of the power being sent over the power lines; higher voltages are required when sending power over long distances, and lower voltages are required when the power is ready to be used. Power circuit breakers are similar to the circuit breakers in your home's electrical panel, just much larger and capable of switching much more power. Circuit breakers open and close to stop or start the flow of electricity, thereby controlling the paths along which power flows through the grid. Circuit breakers are controlled by protective relays. Control houses hold the equipment that protects, operates, and controls the power grid—including protective relays, communication devices, satellite-synchronized clocks, and computers.
11.3 Power System Faults Delivering power to homes and businesses is an incredibly critical task, so it is important that our power grid is reliable. However, the grid is constantly exposed to a variety of hazards that can disrupt its intended operation. Overhead transmission and distribution lines are exposed to harsh weather, including snow, ice, wind, heat, and lightning. Figure 11.3 illustrates a couple of examples of these conditions. Snakes, rodents, insects, and other critters seek shelter in the warmth of power system equipment housed in substations, where they often build nests, chew through
A. Hargrave
Fig. 11.3 Lightning and ice are two examples of weather events that can cause faults on the power system
wires, and cause other damage. These external hazards can cause problems known as faults. A fault occurs on the power system when electricity is routed away from its intended path (e.g., from a generation center to a home) and instead flows through an unintended path. One example of a fault is a tree branch contacting a distribution line. Electricity flows from the generation station, through the distribution line, through the tree, and into the ground. In addition to faults caused by animals and forces of nature, some faults are due to accidents such as vehicles colliding with utility poles, kites flying into overhead lines, or homeowners digging into buried wires in their yards. Regardless of the cause, all faults must be identified and isolated from the grid as quickly as possible. Faults cannot be allowed to “sit” or remain on the energized power system. Not only do faults disrupt the flow of electricity to the intended load, but they also impact system stability and are incredibly dangerous! When a tree branch contacts a distribution line, the entire tree becomes electrified and a hazard to the public. Undetected faults can cause severe injury or death if an unknowing person contacts the fault. Finally, faults can cause the current flowing in the power system to increase dramatically. This elevated current can cause permanent damage to expensive power system equipment if it is allowed to persist.
11.4 Protection System Equipment

Protective relays are the brains of modern protection systems. These devices decide whether a fault is on the system and whether power should be disconnected. Protective relays are described in this section, as is the supplementary equipment needed for the full protection system to function.
Fig. 11.4 Instrument transformers in a substation
11.4.1 Instrument Transformers

Protective relays use current and voltage measurements on the power system to determine whether or not a fault exists. Standard current and voltage levels on the power system are very high—much higher than what the electronics in protective relays are rated to measure. Instrument transformers are used to step down the very high values of current and voltage on the power system to smaller values that protective relays are capable of measuring. Two common types of instrument transformers are used: current transformers (CTs) for measuring current, and potential transformers (PTs)—sometimes called voltage transformers (VTs)—for measuring voltage. These pieces of equipment sit in the substation yard and measure current and voltage at various locations on the power grid (see Fig. 11.4). The instrument transformers report their measurements to the protective relays, and the relays perform calculations on these values to decide whether or not there is a fault, and if power should be disconnected.
11.4.2 Circuit Breakers

If protective relays are the brains of the protection system, circuit breakers are the brawn. Circuit breakers sit in the substation yard and are shown in Fig. 11.5. They can be thought of as industrial-sized light switches that are controlled by protective relays and used to route power through the grid. The relay tells the circuit breaker when to trip (open and disconnect power) or close (reconnect power). Circuit breakers are often filled with oil or gas such as sulfur hexafluoride to help
Fig. 11.5 Circuit breaker in a substation
extinguish the electrical arc that occurs when circuit breakers operate to interrupt such large amounts of power.
11.4.3 Batteries

Protective relays and circuit breakers need power to function. Since these pieces of equipment are often tasked with disconnecting power, it would be unwise to power them from the same power source that they might need to disconnect in the case of a fault. Most protective relays are powered by large battery banks that exist in substation control houses, as shown in Fig. 11.6. These battery banks are charged by the power grid when it is healthy and can continue to power the protection equipment for hours during an outage.
11.4.4 Protective Relays

Protective relays take current and voltage measurements from instrument transformers, decide if there is a fault on the system, and make the decision to turn off the power by tripping (opening) a circuit breaker. Because their electronics cannot be
Fig. 11.6 Battery bank in a substation
Fig. 11.7 Protective relays in a panel in a control house
directly exposed to rain, relays are typically installed in panels in the control house (see Fig. 11.7). The very first protective relays used on the power system were electromechanical devices developed in the late 1800s and early 1900s. An example of an electromechanical relay is shown in Fig. 11.8. These devices used magnetic induction and moving parts to operate, and the settings required to define their operating thresholds
Fig. 11.8 IA-101 electromechanical relay [4]
were made using physical screws and dials. These electromechanical devices were the grandfathers of our modern protection systems, and their designs were impressive for their time. But like most mechanical devices, they required routine maintenance and testing to continue to operate reliably. Each relay was only capable of performing a single function, so large panels full of electromechanical relays were required to perform complete protection schemes. When an electromechanical relay detected a fault and called for a circuit breaker to trip, the relay dropped a small red flag as an indicator. Although this could be used by engineers to tell if the relay operated, it would not tell them how much current or voltage the relay measured to make its decision, where the fault was located, or if there even was an actual fault. With such limited data, any engineering analysis after the fault occurred was impossible.

In 1982, Edmund O. Schweitzer, III, invented the first digital (microprocessor-based) protective relay. Digital relays have no moving parts and can be thought of as industrial-hardened computers optimized for system protection. Digital relays sample the current and voltage measurements from the connected instrument transformers and turn them into digital signals. These signals can then be put through mathematical algorithms programmed in the microprocessor to determine if a fault exists on the system. These algorithms are called “protection elements,” and each one requires settings to define its operating thresholds. These settings are stored in non-volatile memory, so they are maintained if the relay loses power. When a protection element decides to issue a trip to the circuit breaker, the settings call for the relay to close a physical output contact that has been designated for tripping. This output contact is wired to trip (open) the connected circuit breaker. A simplified drawing of how a digital relay works is shown in Fig. 11.9.
Figure 11.10 shows an example of a modern digital relay. The front of the relay has a color touch-screen that allows operators easy access to the relay’s settings and reports. The pushbuttons on the front allow operators to turn on and off various functions of the device. The LEDs on the front light up when the relay issues a trip
Fig. 11.9 Simplified functional drawing of a digital relay
Fig. 11.10 Front and back of a digital protective relay
and give operators a visual indication as to what type of fault occurred and which elements in the relay operated. The silver terminals on the back of the relay are where the instrument transformers are connected, which the relay uses to measure current and voltage. The green terminals on the back of the relay are where the circuit breaker, and other various inputs and outputs, are connected to the relay. Digital relays have many benefits over their electromechanical grandfathers, some of which include the following: • Digital relays can perform many functions in one box. Figure 11.11 shows how several electromechanical devices in a panel can be replaced with a single digital relay. • Digital relays perform self-monitoring diagnostics that will alarm for failures. • Digital relays have no moving parts, resulting in reduced maintenance and testing costs. • Digital relays can be remotely accessed by engineers via Ethernet, allowing for analysis to begin immediately after a fault without anyone having to drive to the affected substation.
Fig. 11.11 A single digital relay on the left replaces a full panel of electromechanical devices on the right
• Digital relays are able to communicate with each other over long distances to create complex protection schemes. • Digital relays allow for supervisory functions and logic (e.g., issue a trip if both Element 1 and Element 2 have asserted) to be programmed internally. In the electromechanical days, these functions were performed with manual wiring, which was cumbersome and error prone. • Digital relays not only operate for faults—they tell engineers and technicians where the fault is located! This capability greatly reduces outage times, as utility crews know exactly where to drive to find the problem. The sooner utility crews can correct the problem, the sooner the lights can come back on. In case the previous list of benefits was not enough, there is one major capability of digital relays that has transformed the world of system protection: event reports! Event reports are generated by the relay after a fault and show the currents and voltages measured by the relay during the fault, as well as the outputs of all the protection elements inside the relay. The data available in these reports are incredibly useful in post-fault analysis, where system protection engineers work to understand what happened on the power system and confirm if the relay operated correctly based on the way it was programmed. An example event report is shown in Fig. 11.12. The currents measured by the current transformers are shown on the top graph, and the voltages measured by the potential transformers are shown in the middle graph. The bottom graph shows the outputs of the various protection elements in the relay. The top graph shows the current change from a normal prefault value to a higher faulted value. At the same time, the protection elements assert
Fig. 11.12 Example event report (top graph: phase currents IA, IB, and IC, with pre-fault, fault, and post-fault regions and the relay operation time marked; middle graph: phase voltages VA, VB, and VC; bottom graph: the digital elements 51G1P, 51G1T, TRIP, and OUT103)
on the bottom graph. The time between when the current goes high and when the TRIP element on the bottom graph asserts is the relay operation time: the time it took the relay to detect the fault and call for the breaker to trip. The assertion of the OUT103 element shows a physical output contact on the relay closing. This output allows the connected circuit breaker to trip. When the circuit breaker successfully interrupts the power, the current drops to zero on the top graph (post-fault).
11.5 System Protection Philosophies

System protection engineers design protection systems to identify and disconnect faults from the power grid. Protection engineers must ensure that the systems they design meet the following performance criteria:
• Speed—The protection system must operate to detect the fault as quickly as possible. Protection systems do not operate in seconds—they operate in
milliseconds. The newest digital relays can detect a fault in 2–4 milliseconds. That is 100 times faster than the blink of a human eye!
• Sensitivity—The protection system must be able to sense every fault, even those that do not draw a lot of current. This can be challenging, especially when current drawn by a fault is lower than current drawn during times of high power demand from the load.
• Selectivity—The protection system should only disconnect the part of the power grid that is absolutely necessary to remove the fault, and no more. Disconnecting more of the system cuts off the power to more load, losing service to homes and businesses.
• Security—The protection system should never operate when there is no fault on the power grid.
• Dependability—The protection system should always operate when there is a fault on the power grid.
Modern digital relays are not simple devices, and there is a lot of engineering and complexity that goes into protecting the power system. Relays do not come pre-set from the factory and ready to install on the power system right out of the box. Protection systems are complex, and every application is unique. Digital relays can have hundreds of settings, and they all must be set correctly for the relay to function properly for all possible faults. Coming up with correct settings requires detailed knowledge of the power system and numerous fault studies using power system analysis software. This software allows protection engineers to model the power system, place faults at various locations, and view the resulting currents and voltages the relays will measure for those conditions. Protection engineers use these results to calculate set points for each of their protection elements to ensure the protection is fast, sensitive, selective, secure, and dependable. It is challenging to design a protection system that maximizes all of these criteria for every possible fault. Oftentimes, trade-offs must be made in the design of the system in order to focus on the performance criteria that the engineer deems most important. Although the Institute of Electrical and Electronics Engineers (IEEE) has published guides on how to set various protection elements, protective relaying has been called an “art and science,” since true expertise requires sound engineering judgment that comes with years of experience [5]. Misoperations of protection systems are rare but potentially dangerous and costly occurrences. Whenever a protection system misoperates (fails to trip for a fault, or trips when a fault does not exist), its operation should be analyzed to determine the root cause.
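One of the many checks behind those set points can be sketched as a small calculation. The current values and margin factors below are hypothetical rules of thumb, not settings from this chapter or any standard; the point is only the shape of the security-versus-sensitivity trade-off:

```python
# Hypothetical sketch of one sizing check when choosing an overcurrent pickup:
# stay above maximum load current (security) yet well below the minimum fault
# current (sensitivity). All numbers and margins here are illustrative.

max_load_current = 400.0    # A, highest expected load current (from a load study)
min_fault_current = 1200.0  # A, smallest fault current (from fault study software)

security_margin = 1.25      # stay 25% above load so normal demand never trips
sensitivity_margin = 2.0    # detect the weakest fault with 2x margin

lowest_ok_pickup = max_load_current * security_margin       # 500.0 A
highest_ok_pickup = min_fault_current / sensitivity_margin  # 600.0 A

if lowest_ok_pickup <= highest_ok_pickup:
    pickup = (lowest_ok_pickup + highest_ok_pickup) / 2     # mid-range: 550.0 A
    print(f"Pickup setting: {pickup} A")
else:
    # No setting satisfies both criteria; this is where the trade-off
    # discussion above applies and engineering judgment takes over.
    print("No pickup satisfies both security and sensitivity margins")
```

When the two bounds cross (a weak system where fault current barely exceeds load current), no single pickup works, which is exactly the kind of case that demands the experience-based judgment the text describes.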
The North American Electric Reliability Corporation (NERC), the agency responsible for the reliable operation of the electric grid in the United States, mandates that utility companies report all misoperations (along with corrective actions) for transmission equipment operated at 100 kV and above. NERC reports that over the past 5 years, the rate of protection system misoperations has declined steadily [2]. Figure 11.13 shows that over this same time period, the most common cause of misoperations has been incorrect relay settings, logic, and design errors.
Fig. 11.13 Misoperations by Cause Code from 2015 to 2019 [2] (bar chart of the percent of misoperation causes per year for the categories AC System; As-Left Personnel Error; Communication Failures; DC System; Incorrect Settings, Logic, Design Errors; Other Explainable; Relay Failures/Malfunctions; and Unknown/Unexplainable)
Because the functions of protective relays are so important to keeping the lights on, critical power system assets (e.g., transmission lines) are often protected by both a primary and a backup digital relay. In these installations, either relay can trip the circuit breaker for a fault, thus providing redundancy in the unlikely scenario that one relay were to fail. In an effort to reduce failures that could be common to both devices, some electric utilities dictate that the primary and backup relays must be made by different manufacturers or designed on different hardware platforms. Notice, however, that the most common cause of misoperations in Fig. 11.13 is related to human error: incorrect settings, logic, and design errors. Setting and testing digital relays can be complex, which results in human errors being more common than equipment failures. The data could be used to argue that having the exact same relay as the primary and backup protective device would have the greatest impact on reducing the total number of misoperations. This philosophy simplifies protection schemes to the point where engineers and technicians who set, install, and support the relays only need to master one type instead of two.
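The redundancy argument can be made concrete with a back-of-the-envelope calculation. The failure rates below are purely hypothetical, and independence between the two relays is an assumption; the point is that redundancy squares away independent equipment failures but does nothing for a settings error applied identically to both devices:

```python
# Back-of-the-envelope illustration (hypothetical rates, independence assumed)
# of why a primary/backup pair helps with equipment failure but not with
# common-mode settings errors.

p_single_fail = 0.01              # assumed chance one relay fails to trip on demand
p_both_fail = p_single_fail ** 2  # both independent relays failing together

# A settings error applied identically to both relays is common mode:
# redundancy does not square it away.
p_settings_error = 0.02           # assumed chance of a shared settings mistake

print(f"one relay fails to trip:  {p_single_fail:.4%}")
print(f"both relays fail to trip: {p_both_fail:.4%}")
print(f"shared settings error:    {p_settings_error:.4%}")
```

Under these assumed numbers, the shared settings error dominates the residual risk, which mirrors the misoperation data in Fig. 11.13.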
11.6 Relay Operating Principles

To truly understand the art and science of system protection, the algorithms that exist inside digital relays must be examined. Inside a digital relay, different protection elements are used to detect faults and make the decision to trip. Unique protection elements exist to detect faults on various pieces of power system equipment. For example, the element used to detect a fault on a transmission line will be
Fig. 11.14 Elements in a digital protective relay for distribution lines [7] (one-line diagram showing the available protection elements by IEEE device function number, e.g., 25, 27, 32, 50, 51, 52, 59, 67, 79, and 81, along with metering, reporting, and communications functions and the relay's serial, USB, and Ethernet ports)
different from the one used to detect a fault inside a transformer. Even for the same piece of power system equipment, various elements are used in combination to detect different types of faults in that equipment. Figure 11.14 shows an example of the protection elements available in a modern digital relay. Each element is described by a unique function code. These codes are defined in C37.2—IEEE Standard Electrical Power System Device Function Numbers, Acronyms, and Contact Designations [6].
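For reference, a few of the C37.2 device function numbers that appear in this section can be collected in a small lookup table. This is a partial, illustrative list; the standard defines many more:

```python
# A few of the IEEE C37.2 device function numbers referenced in this chapter
# (partial list; see the standard for the complete set).
DEVICE_FUNCTIONS = {
    "25": "synchronism check",
    "27": "undervoltage",
    "32": "directional power",
    "50": "instantaneous overcurrent",
    "51": "time overcurrent (inverse-time)",
    "52": "circuit breaker",
    "59": "overvoltage",
    "67": "directional overcurrent",
    "79": "reclosing",
    "81": "frequency (over/under)",
}

def describe(code: str) -> str:
    """Return a human-readable name for a device function code."""
    return DEVICE_FUNCTIONS.get(code, "unknown function code")

print(describe("51"))  # -> time overcurrent (inverse-time)
```

A combination such as "67N" in Fig. 11.14 appends a suffix to the base number (here, N for neutral/ground), which is why relays list the base codes with qualifying letters.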
11.6.1 Inverse-Time Overcurrent Element

One of the simplest protection elements is the inverse-time overcurrent characteristic, defined as 51 (PGQ) and 51 (N) in Fig. 11.14. When current is high, the fault is nearby, and the relay should trip fast to protect equipment and personnel from high currents. When current is lower (but still above what is expected for normal conditions), the fault is farther away, so the relay should take a little more time to trip. Slowing the relay down allows for other relaying in closer proximity to the fault to operate first. This helps the protection system maintain selectivity by not disconnecting more of the grid than is absolutely necessary. Figure 11.15 shows a graph of a time-overcurrent characteristic. The X-axis is the current measured by
Fig. 11.15 Inverse time-overcurrent characteristic
Fig. 11.16 Logic for an inverse-time overcurrent element
the relay, and the Y-axis is the time it will take the relay to trip. As shown in Fig. 11.15, the higher the measured current, the shorter the time to trip. Figure 11.16 shows the logic for an inverse-time overcurrent element. The protection engineer sets the pickup setting threshold in the software, as well as a few other settings such as time dial and curve type. The element then compares the measured current to the pickup setting. If the measured current is higher than the pickup setting, the element asserts an output to indicate that the element has picked up. This signals that the minimum current threshold for a fault has been met, but the required time has not yet expired. Next, the relay runs the measured current through an equation to calculate how long the current must exist before the element times out. If the current still exists after that time has expired, the element asserts another output to indicate that the element has timed out. This output is mapped to trip logic in the relay, which takes care of issuing a trip to the circuit breaker by closing a physical output contact on the back of the relay that is wired to the circuit breaker. The example event report in Fig. 11.12 shows an inverse-time overcurrent element correctly operating for a fault. As soon as the measured current goes above a set threshold, the 51G1P bit (element pickup) asserts on the bottom graph of Fig.
11.12. A certain amount of time after that, since current is still above the threshold, the 51G1T bit (element timeout) asserts. When the element times out, the relay calls for a trip (TRIP bit asserts) and closes a physical output contact (OUT103 bit asserts) that is wired to trip the circuit breaker.
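The timing behavior described above can be sketched numerically. The curve equation and constants below are the moderately inverse characteristic from IEEE Std C37.112 (an assumption beyond this chapter, which does not cite specific curve constants), and the current and pickup values are illustrative:

```python
# Sketch of inverse-time overcurrent timing, using the IEEE "moderately
# inverse" curve from Std C37.112 (constants from that standard, not from
# this chapter). The time dial setting scales the whole curve up or down.

A, B, P = 0.0515, 0.1140, 0.02  # C37.112 moderately inverse constants

def trip_time(current, pickup, time_dial):
    """Seconds until the 51 element times out, or None if below pickup."""
    m = current / pickup  # multiples of the pickup setting
    if m <= 1.0:
        return None  # element does not pick up at all
    return time_dial * (A / (m**P - 1.0) + B)

# Higher current means a shorter time to trip (the "inverse" characteristic).
t_near = trip_time(current=5000.0, pickup=500.0, time_dial=1.0)  # 10x pickup
t_far = trip_time(current=1000.0, pickup=500.0, time_dial=1.0)   #  2x pickup
assert t_near < t_far
print(f"near fault: {t_near:.2f} s, far fault: {t_far:.2f} s")
```

Raising the time dial slows the relay uniformly, which is how engineers coordinate it to let relays closer to the fault operate first.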
11.6.2 Reclosing Element

Another example of a protection element is reclosing, defined as “79” in Fig. 11.14. If you have ever noticed your lights go out and come back on again very quickly during stormy weather, you have witnessed reclosing in action! The purpose of a reclosing element is to get the lights back on as quickly as possible after a fault occurs. The vast majority of faults that occur on overhead distribution lines are temporary and resolve quickly without any human intervention. For example, a small tree branch breaking off and falling onto a line will cause a fault, but the branch will fall off the line and no longer be a problem within a few seconds. Under normal conditions, the reclosing element sits in the “reset” state, as shown in Fig. 11.17. When a fault occurs, the relay trips the circuit breaker and the reclosing element enters the “cycle” state. In this state, the reclosing element starts a timer for a set amount of time. When the timer expires, the relay tries to close the circuit breaker back in (reclose), hoping that the fault has cleared and power can be restored. If the fault has cleared, the circuit breaker will stay closed, the power is restored, and the reclosing element returns to the reset state. However, if the fault has not cleared, the relay will see it and trip the circuit breaker again. Relays can be programmed for several reclose attempts if desired. If the fault is still detected after all of these attempts, the relay declares it to be a permanent fault, trips the circuit breaker, and the reclosing element enters the “lockout” state. At this point, the reclosing element does not allow any more reclose attempts, and the utility company must send a line worker or technician to the location of the fault to find the problem, correct it, and manually close the circuit breaker to restore power. The reclosing relay will only return to the reset state once all of these steps have been taken.
11.6.3 Underfrequency Element

Protection elements can also be used to protect the power system as a whole from other undesired conditions besides faults. This is the case for an underfrequency element, defined as 81 (U) in Fig. 11.14. On very hot days, many air-conditioning units will be running at the same time to keep homes and businesses cool. This can cause the load on the power system to become higher than the amount of generation available. When this happens, the frequency of the power grid drops. If the frequency drops too low, the system can become unstable and potentially collapse. The 2003 blackout in the northeast United States that affected 50 million
Fig. 11.17 Three states of a reclosing element
Fig. 11.18 Simplified logic for an underfrequency element
people is one such example of a system collapse that could have been mitigated by the use of underfrequency elements. After that blackout, NERC mandated the use of underfrequency elements by electric utilities [8]. Underfrequency elements monitor the frequency of the voltage (which should normally be 60 Hz) at the relay’s location. If the frequency drops below a set threshold for a set amount of time, the relay will trip the circuit breaker. Although this results in a loss of power to the customers on that line, the reduction in load helps stabilize the power grid as a whole. This is called “underfrequency load shedding” and is often implemented on non-critical distribution lines (lines that do not serve hospitals, fire stations, etc.). The logic for a basic underfrequency element is shown in Fig. 11.18.
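The threshold-plus-time-delay logic of Fig. 11.18 can be sketched in a few lines. The 59.3 Hz threshold and 0.3 s delay below are illustrative values, not settings from this chapter or from NERC:

```python
# Sketch of the basic underfrequency logic in Fig. 11.18: trip only if the
# measured frequency stays below the threshold for a set time. The threshold
# and delay values are illustrative, not from any standard.

def underfrequency_trip(freq_samples, sample_period_s=0.1,
                        threshold_hz=59.3, delay_s=0.3):
    """Return True if frequency stays below threshold for delay_s seconds."""
    timer = 0.0
    for f in freq_samples:
        if f < threshold_hz:
            timer += sample_period_s
            if timer >= delay_s:
                return True   # condition held long enough: shed this load
        else:
            timer = 0.0       # frequency recovered: reset the timer
    return False

# A brief dip that recovers does not trip...
print(underfrequency_trip([60.0, 59.2, 59.2, 59.8, 60.0]))  # -> False
# ...but a sustained decline does.
print(underfrequency_trip([59.2, 59.1, 59.0, 58.9]))        # -> True
```

The time delay is what makes the element secure: a momentary frequency dip (or a measurement glitch) does not shed customers, while a sustained generation shortfall does.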
11.7 Conclusion

Protective relays are the silent sentinels of the power system. They sit quietly in substations and continuously monitor the health of the power system, every millisecond, 24 hours a day, 7 days a week, 365 days a year. When a fault occurs, they operate in fractions of a second. The decision on whether or not to operate for a given fault must not only be fast—it must be correct. Operate incorrectly, and the relay takes down an otherwise-healthy part of the power system. Incorrect relay decisions can result in equipment damage, longer service interruptions, injury to
personnel and the public, and potentially a complete power system collapse. There is no room for error. The entire specialty of system protection is centered around designing, setting, troubleshooting, and analyzing data from protective relays in order to maintain the reliability of the power system. System protection engineers wear many different “hats” in order to fulfill many crucial roles. They are designers, creating the settings and schemes to protect various parts of the power system. They are researchers, always needing to stay up to date on the latest technology and protection elements to be sure that the equipment they are installing and the elements they are setting will adequately protect the system. They are detectives, analyzing records after a fault for clues as to what happened, deciding if equipment operated properly, and determining if settings need adjustments. The ultimate goal for system protection engineers is 100% power system reliability. Since faults will always occur, achieving perfection is unlikely. Regardless, these dedicated engineers work tirelessly to improve the power system and deliver safer, more reliable electric power to the world.

Acknowledgment Special thanks to Schweitzer Engineering Laboratories, Chris Wadley, and Victoria Arthur for their support and editing of this chapter.
References

1. E. O. Schweitzer, III, “Making Sure the Lights Stay On, No Matter What,” RMEL, Summer 2021. Available: electricenergy-digital.com/rmet/0221_summer_2021/MobilePagedArticle.action?articleId=1704630#articleId1704630.
2. NERC 2020 State of Reliability, July 2020. Available: nerc.com/pa/RAPA/PA/Performance%20Analysis%20DL/NERC_SOR_2020.pdf.
3. Harris Williams & Co., “Transmission & Distribution Infrastructure,” Summer 2014. Available: harriswilliams.com/sites/default/files/industry_reports/ep_td_white_paper_06_10_14_final.pdf.
4. A. Guzman, L. Anderson, C. Labuschagne, “Adaptive Inverse-Time Elements Take Microprocessor-Based Technology Beyond Emulating Electromechanical Relays,” proceedings of the 1st Annual PAC World Americas Conference, September 2014.
5. C. R. Mason, The Art and Science of Protective Relaying, John Wiley & Sons, Inc., New York, NY, 1956.
6. ANSI/IEEE C37.2, Standard for Electrical Power System Device Function Numbers, Acronyms, and Contact Designations.
7. SEL-351 datasheet. Available: cms-cdn.selinc.com/assets/Literature/Product%20Literature/Flyers/351_PF00198.pdf?v=20180607-232419.
8. K. Jones, “The Need for Faster Underfrequency Load Shedding: Avoid Blackouts and Prevent Generator Turbine Damage,” T&D World, August 2021. Available: tdworld.com/resources/webinars/webinar/21168731/the-need-for-faster-underfrequency-load-shedding-avoid-blackouts-and-prevent-generator-turbine-damage.
Ariana Hargrave has always been interested in math, science, and computers. Her dad taught her Ohm’s law and how to solder at a young age, and they spent time in the summers learning how to build computers together. Ariana received her CompTIA A+ Certification in computer repair as a teenager and spent her free time making modifications to her Chevy Cavalier. When Ariana started college at St. Mary’s University in San Antonio, Texas, she was unsure if she should major in computer engineering or electrical engineering, but as soon as her advisor told her “electrical is harder,” she had her answer. In her sophomore year of studying electrical engineering, Ariana applied for internships at several large microprocessor companies. None of them gave her an interview. Even though she had always worked hard in school and had excellent grades, the fact that she was only a sophomore made it tough to compete. After being turned down by the Geek Squad at Best Buy, Ariana decided that she was done filling out forms and applications. She drove downtown to the local electric utility company (CPS Energy), walked in, and asked if they had any internships. CPS Energy immediately arranged for an interview, and she was hired shortly thereafter in their Transmission Design group. Ariana had no idea what transmission design was, or what power engineers even did at utilities, but she was incredibly grateful that they took a chance on her. Ariana’s internship at CPS Energy changed her life. She learned how power systems worked, what a substation was, and how transmission lines were designed and built. One day while traveling around the service territory taking GPS coordinates of transmission towers, she found herself standing in the middle of a large field under several circuits of 345 kV transmission lines. It was completely silent except for the buzzing from the corona of the lines overhead.
As she thought about how much power was flowing above her and how important it was to the functioning of everyday life, it was there that she decided what she wanted to do for her career: she would be a protection engineer. Ariana continued to work at CPS Energy and completed her senior design project with their System Protection group before graduating from St. Mary’s University. Ariana attended graduate school at Texas A&M University in College Station, Texas, and obtained her Master’s degree in Electrical Engineering in 2009, specializing in Power Systems. In the summers, she interned as an Application Engineer with Schweitzer Engineering Laboratories, Inc. (SEL), the inventors of the first microprocessor relay. Upon graduation, Ariana started full time with SEL where she is now a Senior Application Engineer and leads a team of system protection engineers in Texas. Her job duties include writing application guides and technical papers, teaching engineers and technicians about system protection, helping them troubleshoot protection problems, and analyzing event reports after power outages. Ariana loves writing and specifically finds her purpose in being able to take a complex technical topic and explain it in a way that people can easily understand. She has published more than 30 technical papers and application guides and was
honored to receive the Walter A. Elmore Best Paper award from the Georgia Institute of Technology Protective Relaying Conference 2 years in a row (2017 and 2018). Ariana has also received both the Author Excellence Award and the People’s Choice Award from SEL for her technical papers in 2016 and 2017, respectively. She is a senior IEEE member and a registered Professional Engineer in the state of Texas. Ariana lives in Fair Oaks Ranch, Texas, with her husband Glenn (also a protection engineer) and their 3-year-old daughter Lumen (not a protection engineer). In her free time, Ariana works on a constant stream of DIY projects. She enjoys home improvement projects, building furniture, and fixing things when they break. Ariana credits many people for helping her get to where she is today. Her dad, who always encouraged her to not be afraid of learning new things. Her mom, who has always been her biggest fan and supported her in everything she has ever wanted to do. Her first supervisor at CPS Energy, David Luschen, who introduced her to electric power and gave her a better internship experience than any college sophomore could have hoped for. Her husband, Glenn, who is her forever teammate. And her daughter, Lumen, who is the light of her life.
Chapter 12
Interaction Variables-Based Modeling and Control of Energy Dynamics Marija D. Ilic
12.1 Introduction

This chapter is written based on a hindsight view of teaching and research ideas I have pursued in collaboration with close to 100 graduate students, postdocs, and colleagues over four decades. These ideas have emerged along with the evolution of electric power systems themselves from highly stationary, predictable, and manageable to what has become a very complex man-made dynamical network system. The complexity of today's systems is manifold, and it comes not only from the sheer size of these networks but also from the diversity of often non-standardized equipment deployed and the lack of systematic protocols for integrating new technologies. In this chapter I introduce unifying principles of modeling, simulation, and control of any given technology that can support systematic protocols and can overcome the complexity caused by highly diverse technologies. I have only recently arrived at the recognition that it is indeed possible to conceptualize the fundamentals of electric power systems as dynamic energy processing by diverse technologies, and to establish general feasibility and stability conditions for their control toward achieving provable performance. I revisit different dynamical problems by interpreting them as particular examples of a general unified modeling and control. By doing so, it becomes possible to pose power systems problems as large-scale dynamical systems with much inherent structure. This structure, when recognized, makes often unsolvable problems of complex nonlinear dynamical network systems solvable under well-understood assumptions. This approach, then, enables one to relate the causes of dynamical problems to the complexity of the models needed to assess them and to design implementable control.
M. D. Ilic () M.I.T., Cambridge, MA, USA e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 J. S. Tietjen et al. (eds.), Women in Power, Women in Engineering and Science, https://doi.org/10.1007/978-3-031-29724-3_12
In Sect. 12.2 I briefly introduce this unified multi-layered interactive modeling and energy control co-design. In the follow-up sections I then summarize several major dynamical problems which are of concern when operating electric power systems. This is done in increasing order of complexity, starting from the modeling of frequency dynamics and their control under typically made assumptions that generally hold in normal operations. I illustrate how, by following the unified modeling, one can begin to manage the scale of large interconnected power grids, assess causes of low frequency oscillations, and set the basis for minimal information exchange for coordinated frequency control. This is followed by posing the problem of frequency modeling, feasibility, and stability assessment during abnormal conditions, including the effects of high-penetration intermittent resources. I provide examples of: (1) low frequency inter-area oscillations in the Northeastern Power Coordinating Council (NPCC) system following past major outages, such as failures of large transmission lines and nuclear plants, and (2) torsional oscillations caused by electro-mechanical sub-synchronous resonance (SSR), in light of energy dynamics and its control. In Sect. 12.4, concerning voltage control, I revisit the problem of voltage "collapse" and its control by using our novel energy dynamical model which accounts for reactive power dynamics. This is an important section which points out that the differential algebraic equation (DAE) models currently used only hold under certain assumptions which may or may not be valid as new technologies are deployed. Instead of subjecting the dynamics of components to algebraic real and reactive power constraints, I propose to use an interactive model which relaxes the assumptions embedded in today's DAE modeling approaches. In Sect.
12.5, regarding systems with large penetration of intermittent resources, I further expand the use of the energy dynamics model introduced for frequency and voltage control to examples of emerging systems with large penetration of power-electronically controlled intermittent resources, known as Inverter-Based Resources (IBRs). Next, in Sect. 12.6, I pose the problem of electricity market design and, in particular, set the basis for the information exchange between different stakeholders and the resulting derivatives needed to provide incentives for feasible and stable market implementation; this, too, is done by starting from the unified modeling of energy dynamics. Notably, this approach provides the basis for multi-disciplinary modeling and control of diverse energy systems, beyond electric power grids. I propose a unified multi-layered interactive operating paradigm for the emerging energy systems, including three straightforward principles for future operations of these systems. I suggest that these principles and protocols are a natural outgrowth of today's Area Control Error (ACE) used for characterizing Balancing Authorities (BAs), formerly Control Areas (CAs), when performing Automatic Generation Control (AGC). An end-to-end cyber-physical platform for implementing these protocols, named Dynamic Monitoring and Decision Systems (DyMonDS), becomes, in turn, a natural outgrowth of today's Supervisory Control and Data Acquisition (SCADA) in which different layers communicate according to the protocols. Finally, I make some recommendations regarding challenges and opportunities based on the introduced framework.
12.1.1 Historical Note: Hindsight View of the Future

This chapter represents my hindsight view of what I have done collectively with many of my graduate students and, notably, how it has all been ahead of its time. It is very exciting to me that the concepts we developed over time are needed as the industry is changing. Throughout this chapter, I refer to the emerging operating challenges in which clearly the first bottleneck is a dynamical problem of one type or another. It is even more exciting that it is actually possible to evolve today's hierarchical control by following energy modeling and control principles that set the basis for minimal information exchange between the different layers of a complex energy system, so that the ranges of stable and robust use of intermittent resources in the legacy grid are greatly increased. I provide a summary of quantifiable benefits from such an approach. Important for moving forward is that the proposed approach is not radical; it is effectively a natural outgrowth of today's AGC implementation. All that is needed is a more granular temporal, spatial, and functional representation of intelligent Balancing Authorities (iBAs) in terms of interaction variables. I have been fortunate to have had the privilege of learning from many giants in the electric power systems field, too numerous to list. As an example, the idea of generalizing large BAs into more granular iBAs is fully supported by the observation of both John Zaborszky and Lester Fink, who almost twenty years ago noted that "BAs are moving closer to the end users" [1]. At the end of the day, in hindsight and with an eye to the future, we propose to conceptualize, model, and control energy dynamics at the interfaces of iBAs, namely their interaction variables.
Based on these many years of learning from others, combined with our own work, I am convinced now more than ever before that modeling, control, practical objectives, economics, and design for sustainable electricity services can become transparent, scalable, and the main catalyst of future innovation when thinking in terms of interaction variables [70]. Arriving at the unified energy dynamics modeling and control which draws on interaction variables has been an on-going process over several decades. I explain in what follows that, to recognize the unique structure, it was necessary to depart from expressing models in terms of conventional state variables, which differ across specific technologies, and instead to express how internal energy processing affects the dynamics of power interactions between different components within an interconnected system. Not knowing then that I was after this unified model, I first introduced with my MIT doctoral student Shell Liu in the early 1990s the basic idea that there exists an interaction variable which is a function of internal states only, and that its derivative is zero when the component is disconnected from the rest of the system [3–7]. In hindsight, this was a hidden lightning strike which occurred during an Electricite de France (EdF) sponsored MIT project [7, 8]. This early collaboration between EdF and MIT researchers, particularly with Christine Vialas and Michael Athans, led to the concept of the interaction variable which, as formalized in this chapter, was to become one of the major means of establishing the principles of my teaching and of introducing methods needed to conceptualize design and operations
of these systems as they grew in their complexity [9, 10, 71]. It became clear only relatively recently to me that the notion of an interaction variable was the guiding light underlying many years of research to come, culminating in the unified modeling based on energy dynamics described here. This chapter is an attempt to describe this process of connecting the dots between the follow-up industry problems as they evolved; the mathematical formulations for modeling, analyzing, and designing control of the specific phenomena causing industry problems; and, ultimately, the search for protocols for electricity markets and new technology integration into legacy complex systems. In what follows I acknowledge as best I can the collaborative and joint work, in particular with those contributing to the generalization of the original notion of the interaction variables and their first-of-the-kind proof-of-concept applications to relevant industry problems of today. In closing, in this chapter I describe research activities carried out with my graduate students and colleagues at the old MIT Energy Laboratory; at the Carnegie Mellon University (CMU) Electric Energy Systems Group (EESG) [95]; and, most recently, at the MIT Electric Energy Systems Group http://www.eesg.lids.mit.edu/. The work is mainly focused on unified energy modeling for systematic control design to ensure stable and efficient system operations over broad ranges of system inputs, disturbances, and equipment status. Historically, the modeling and control of electric power systems have evolved over time and have often targeted specific technical problems, for which the assumptions made resulted in simpler sub-problems. As power systems have grown in their complexity, it has become much harder to have general models for control design with the desired performance.
I suggest that this lack of systematic general modeling for control performance currently presents one of the major roadblocks to deploying large amounts of intermittent power, and, as such, it is one of the key problems on the path to decarbonization [11, 12]. This is because current industry practice relies on system-specific assessments of potential dynamical problems by performing extensive off-line simulations of what are considered to be the worst-case scenarios. The problem is that this approach does not support scheduling of the cleanest and/or the least expensive energy resources during normal operations in anticipation of such worst-case scenarios. This generally prevents efficient on-line use of already deployed energy resources. At the same time, dynamical problems, such as unacceptable frequency and voltage, still take place because the legacy grid system and its control no longer support the often qualitatively different power flow patterns. Such unacceptable frequency or voltage changes could lead to very fast disconnection of equipment by protective relays and consequently could result in a blackout which was not triggered by any large equipment failure [89]. Looking into the future, I strongly believe that events of this sort will come as a perfect storm unless today's hierarchical control is enhanced. Enhancing hierarchical control is a big job in its own right, and there are efforts, mainly in academia, concerning gradual relaxation of the assumptions underlying the hierarchy which no longer hold as the system is changing. Proposing a unified modeling enables one to relax these assumptions. This chapter describes different theoretical enhancements made at different times by relaxing critical assumptions
and showing that currently used models are particular cases of a much more general unifying modeling of energy dynamics.
12.2 Interaction Variables (intVar) for Modeling and Control of Energy Dynamics

Systematic control of any dynamical system requires modeling in a form which lends itself to applying general methods from systems control. The most developed are tools for analysis and control of dynamical systems whose models are in the form of linear Ordinary Differential Equations (ODEs) [13]. In the 1990s, breakthroughs took place in the nonlinear control of general dynamical systems, particularly for nonlinear ODEs expressing state space dynamics in the general form in which control enters in an affine way [14–16]. This led to the broadly studied feedback linearizing control (FBLC), which results in a closed-loop linear ODE model that further lends itself to well-established Linear Quadratic Regulator (LQR) methods for constant gain tuning and, consequently, to provable closed-loop performance over broad ranges of states and inputs. However, it quickly became obvious that FBLC is quite sensitive to model accuracy [18]. This fundamental problem can be overcome by implementing FBLC using Sliding Mode Control (SMC) and other Variable Structure Control (VSC) methods [19, 20]. These controllers belong to the class of high-gain controllers and are robust with respect to modeling uncertainties [21]. These nonlinear controllers have been widely studied and some are even used in practice in robotic and other mechanical systems [23, 25]. Another interesting and important class of nonlinear controllers is based on energy shaping [22, 25], particularly with applications to Hamiltonian systems [26]. These general nonlinear controllers all hold great potential for designing next-generation control in electric energy systems because power electronics technologies have now matured [74] and are just beginning to be utilized for systems with IBRs [75].
However, when one contrasts this state of the art in general control methods for dynamical systems with the state of the art in control methods actually used in electric power systems, one sees that the primary controllers of power plants are generally constant-gain controllers and, as such, cannot be relied upon when system conditions become highly dynamic. Because of this, they have been considered by utility operators to be fine-tuning devices of secondary importance. There are currently no computer applications for controlling the system during abnormal conditions [68]. Instead, frequency regulation and generation dispatch for predictable normal conditions use models which represent steady-state equilibria [72]. Transient stability analysis using dynamic models is only done off-line for the most severe scenarios. The situation is very similar in voltage regulation and dispatch computer applications [6, 72]. It is, however, necessary to begin to rely more proactively on systematic control design because system inputs are no longer as stationary and predictable as has been the case with Bulk Power Systems
(BPS). Flexible data-enabled operation is essential for ensuring reliable and resilient electricity service without excessive reserve requirements [11, 12, 27], described next in some detail.
12.2.1 Making the Case for the Key Role of Control in the Changing Industry

Consider the basic challenge of supplying demand during normal operations. Assuming no significant large storage, this is done by scheduling for the predicted demand first the slowest power plants, and then faster ones. Unless this temporal hierarchy is observed, it is hard to regulate frequency in response to hard-to-predict demand excursions around the predicted demand [84]. Also, faster resources are most often more expensive, which requires utilizing slow resources first. For the longest time, feed-forward economic dispatch of conventional power plants following such practice, together with AGC, has worked reliably, although the hidden cost of AGC has been significant [29]. Ensuring robust stable response to small fast fluctuations has been implemented using primary governor and Automatic Voltage Regulator (AVR) control. Tuning and performance validation of these primary controllers have been carried out by representing the rest of the system as a Thevenin equivalent. This effectively means that each controller is tested assuming a localized response, with no resonances or other oscillations between different components. Difficult problems like electro-mechanical sub-synchronous resonance [30] and, more recently, electromagnetic control-induced system stability (CISS) problems have been prevented by doing extensive simulations to estimate power schedules which do not cause such problems, and/or by designing special protection schemes (SPS) [44, 45] and remedial action schemes (RAS) [35] to prevent such problems from happening. These problems typically occur during abnormal conditions when certain large equipment fails to operate. This, in turn, requires nonlinear controllers. Moreover, given that the system can become unstable very quickly following such events, it is desirable to have distributed primary controllers that do not depend critically on fast system-level communications.
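The feed-forward scheduling practice described above (slowest, cheapest resources committed first, with faster ones reserved for deviations around the forecast) can be sketched as a toy merit-order dispatch. The resource names, capacities, and demand figure below are hypothetical, invented only to illustrate the temporal hierarchy; they are not values from this chapter.

```python
def merit_order_dispatch(predicted_demand_mw, resources):
    """Feed-forward dispatch sketch: fill the predicted demand from the
    slowest (typically cheapest) resources first, leaving fast resources
    in reserve for hard-to-predict deviations.

    `resources` is a list of (name, capacity_mw, ramp_class) tuples,
    ordered from slowest to fastest."""
    schedule, remaining = {}, predicted_demand_mw
    for name, capacity_mw, _ramp_class in resources:
        take = min(capacity_mw, remaining)
        schedule[name] = take
        remaining -= take
    return schedule, remaining  # remaining > 0 would mean unmet demand

# Hypothetical fleet, slowest first.
resources = [("nuclear", 800, "slow"),
             ("coal", 400, "slow"),
             ("gas_ct", 300, "fast")]
schedule, unmet = merit_order_dispatch(1000, resources)
# nuclear covers 800 MW, coal the remaining 200 MW; the fast gas turbine
# is left fully in reserve for regulation.
```

This is, of course, only the feed-forward layer; the feedback layers (AGC and primary control) act on the deviations that this schedule does not anticipate.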
There is no fast communications infrastructure in place which could be used to implement centralized controllers. This further leads to research questions concerning the performance of distributed control and the definition of minimal information exchange for provable performance. Finally, both primary and secondary controllers are entirely decentralized and do not cooperate for efficiency [36]. All these open questions present both opportunities and challenges as the industry evolves. This section briefly describes my approach to unified modeling of energy dynamics, which lends itself to characterizing diverse components by defining their aggregate energy processing dynamics resulting from internal technology-specific detailed physics and primary control. It is shown that the dynamics of the aggregate variables depend on the aggregate variables themselves and on the rate of change of the intVars between the component itself and the neighboring subsystems. System-level dynamics are then expressed using aggregate variables
and their intVars. This model takes on the form of a mathematical model for general physical systems [37] and, as such, lends itself to using many powerful systems control methods. For the first time it becomes possible to design conditions for provable performance that can be interpreted in terms of energy dynamics.
12.2.2 Modeling

I start by briefly summarizing the modeling of energy dynamics for any single- or multi-port subsystem comprising complex electric energy systems. I refer to any subsystem (a hardware component; a control area (CA), recently referred to as a balancing authority (BA); a non-utility-owned microgrid; a portfolio of distributed energy resources (DERs) combined with local storage; the power grid (wires)) as a "module." Depending on the complexity of the specific technologies and designs, these modules can be more or less complex. The basic idea of multi-layered energy dynamics modeling is one of zoom-in/zoom-out. The zoom-in technology-specific models are of the basic form shown in Fig. 12.1, are generally proprietary to the designers of technologies, and are used for local-level sensing,
Fig. 12.1 Interactive stand-alone model of a closed-loop component i: The lower-layer models are utilized to compute the outgoing interaction variables $\dot{z}_i^{r,out}$, which drive the higher-layer energy dynamics. The incoming interaction variables from the grid $\dot{z}_i^{r,in}$ are utilized by the lower-layer models to evaluate the extended state trajectories $\tilde{x}_i = [x_i, r_i]^T$ given their initial conditions. The incoming interactions are a result of outgoing interactions with neighbors
control, and computing so that the modular outputs of interest are within the ranges specified by various engineering standards. This design of specific hardware and its local sensing, control, and computing represents the first step of control co-design [88].
12.2.3 Unified Energy Dynamics Modular Model

The first published energy dynamics modular model can be found in [55]. Shown in Fig. 12.2 is the basic idea of considering a detailed technology-specific model that uses state variables x as variables with memory, and local feedback control u responding to deviations of the output variables of interest from the set points given to those controllers. These set points can be either pre-programmed, as is currently the case with most equipment in local distribution networks directly serving small end users, or given as higher-level commands from the BPS hierarchical levels. The basic early idea of unified energy modeling was to consider any module as a black box which stores some energy E and processes it over time. The rate of change of stored energy, defined as p, has to satisfy the conservation of energy law across the cutset surrounding the module: the rate of change of stored energy has to balance with the thermal power loss $E/\tau$ and the instantaneous power out of the module P; the net power out of the module can be a superposition of the power going to the rest of the system out of the port, $P^r$, and the power injected into the equipment from internal locally controlled sources, $P_u$, such as batteries, solar panels, or the exciter voltage and governor mechanical power of a synchronous machine. An important, unique feature of the proposed unified modeling is shown in Eq. (12.2), which states that the rate of power delivering work (the second derivative of stored energy) at the terminals of the module is proportional to the energy stored in tangent space, $E_t = \frac{1}{2}\dot{x}^T\dot{x}$, and is reduced by the oscillatory energy component $\dot{Q}$, which represents the effects of oscillations of generalized reactive power Q. Shown here are these core equations
Fig. 12.2 Two interconnected modules: Unified state space modeling
defining the unified energy dynamics of any module:

$$\dot{E} = p \qquad (12.1)$$

$$\dot{p} = 4E_t - \dot{Q} \qquad (12.2)$$
Shown in Fig. 12.2 is the internal mapping between the conventional state space model and the aggregate variables. The most general aggregate model comprises three variables: the stored energy E, the rate of change of stored energy $\dot{E} = p$, and the energy stored in tangent space $E_t$. The dynamics of the aggregate variables themselves take on the form shown in Fig. 12.2 and have quite a revealing structure. Throughout most of my research we have assumed that the stored energy in tangent space $E_t$ has a second-order effect on the other two variables, and it is represented as a state-dependent disturbance, resulting in the aggregate model shown in Fig. 12.2. It can be seen from these models that the dynamics of the aggregate variables $x_z = [E\ p]^T$ depend on the aggregate variables themselves, on the rate of change of the incoming interaction variable $\dot{z}^{in}$, and on the rate of change of the interaction variables $\dot{z}^m$ created by local physical disturbances. The higher-level aggregate control is the rate of controlled reactive power $u_z = \dot{Q}^m$.
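As a purely numerical illustration of Eqs. (12.1)–(12.2), the sketch below integrates the aggregate energy dynamics with forward Euler. The tangent-space energy signal $E_t(t)$ and the generalized reactive power rate $\dot{Q}(t)$ are stand-in functions invented for this example; they are not derived from any particular module in the chapter.

```python
import numpy as np

def simulate_energy_dynamics(E0, p0, Et, Qdot, dt=1e-3, steps=5000):
    """Forward-Euler integration of the aggregate model
       dE/dt = p,  dp/dt = 4*Et(t) - Qdot(t)   (Eqs. 12.1-12.2),
    with Et and Qdot supplied as functions of time."""
    E, p = E0, p0
    traj = np.empty((steps, 2))
    for k in range(steps):
        t = k * dt
        E += dt * p                        # Eq. (12.1)
        p += dt * (4.0 * Et(t) - Qdot(t))  # Eq. (12.2)
        traj[k] = (E, p)
    return traj

# Stand-in signals: a small oscillatory tangent-space energy and a
# constant generalized reactive power rate (illustrative only).
traj = simulate_energy_dynamics(
    E0=1.0, p0=0.0,
    Et=lambda t: 0.01 * np.sin(2.0 * np.pi * t) ** 2,
    Qdot=lambda t: 0.02,
)
```

The point of the sketch is only the structure: the stored energy E is driven by p, and p is driven by the balance between tangent-space energy and the oscillatory reactive term.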
12.2.3.1 Zoomed-in View: Technology-Specific Modeling and Control
For purposes of control design I model each stand-alone module as a set of coupled ODEs in standard state space form, defining the dynamics of the state variables $\frac{dx}{dt} = \dot{x}$ as a function of local state variables x, control inputs u, if-then saturation statements, and port variables r [17].¹ In contrast with today's practice of modeling wires and loads as parameters, a unified approach requires modeling the RLC dynamics of a wire, and load dynamics, much the same way as modeling the dynamics of power plants. These models of different modules (generators, wires, loads) and the specific form of the modular stand-alone dynamics in terms of their local state variables and their port variables jointly result in an extended state space which has the structure shown in the lowest row of Fig. 12.1 [17, 39, 52]. The extended state space comprises, in addition to the internal state variables x, the port variables r. Their dynamics are a function of the rate of change of the incoming interaction variable $z^{in}$ entering the module from the rest of the system, as shown in Fig. 12.1.
¹ To start with, all components are represented using electromagnetic, chemical, and mechanical sub-processes which are in the form of partial differential equations (PDEs) defined by first principles [38]. In electric power grids these models are usually simulated using Electromagnetic Transients Program (EMTP) software with detailed PDE dynamics and are not scalable to large systems. Much work has been under way on validating the component models against the lumped parameter models used in this chapter. We point out that a systematic process of approximating PDE models with ODE models is a theoretically challenging problem and is not discussed here [73]. It is important to study these approximations by further generalizing the unified energy dynamics of lumped parameter systems.
12.2.3.2 Interaction Variables (intVars)
The shared variable z of module i is key to characterizing the functionality of each module through its ability to interact with the rest of the system.² Because of this we name it the "interaction variable" [3], and I further differentiate between $\dot{z}^{out}$ and $\dot{z}^{in}$. $\dot{z}^{out}$ represents the instantaneous power $P^{out}$ and the rate of change of reactive power $\dot{Q}^{out}$ resulting from internal energy processing, sent from the module's port to the rest of the system. $\dot{z}^{in}$ represents the instantaneous power $P^{in}$ and the rate of change of generalized power $\dot{Q}^{in}$ sent from the neighboring module into this module as a result of its own energy processing. The existence of the interaction variable z is a direct consequence of the conservation of energy and the conservation of the rate of change of generalized reactive power [53], both following from the generalized Tellegen's theorem [54, 55]. Extremely relevant for establishing unifying principles of modeling for control of multi-layered interactive electric energy systems are the following two properties of the interaction variable z. Property 1 states that the rate of change of the interaction variable z is $\dot{z} = 0$ when the module is disconnected from the rest of the system. This is an obvious property, since neither instantaneous nor generalized reactive power flows when the module is disconnected. Property 2 states that the rate of change of the interaction variable z is a function of its local state variables x and their derivatives $\dot{x}$. This is a harder property to prove, and it can be checked by deduction for many components typically present in modern electric power systems. These properties jointly result in a multi-layered model of any module i as shown in Fig. 12.1 [10, 55].
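The two properties can be made concrete with a minimal, hypothetical single-port module in which the interaction-variable rate is taken as the pair (P^out, Q̇^out) computed from local port variables. The class, its port quantities, and the 0.1 factor standing in for the reactive-power rate are all invented for this sketch, not taken from the chapter's models.

```python
import numpy as np

class Module:
    """Hypothetical single-port module. Its interaction-variable rate
    z_dot = (P_out, Qdot_out) is computed from local port voltage and
    current only (Property 2: a function of local variables).
    Disconnection zeroes the port current, so z_dot = 0 (Property 1)."""

    def __init__(self, v_port, i_port, connected=True):
        self.v = v_port
        self.i = i_port if connected else 0.0  # islanding cuts the port

    def z_dot(self):
        P_out = self.v * self.i          # instantaneous power out of the port
        Qdot_out = 0.1 * self.v * self.i  # stand-in for the reactive-power rate
        return np.array([P_out, Qdot_out])

connected = Module(v_port=1.0, i_port=0.5)
islanded = Module(v_port=1.0, i_port=0.5, connected=False)
assert np.allclose(islanded.z_dot(), 0.0)  # Property 1: no interaction when islanded
```

The only substantive content of the sketch is structural: z_dot is computed from the module's own variables, and it vanishes identically when the module is disconnected.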
12.2.3.3 Zoomed-Out View: Aggregate Dynamics
Based on the above, we observe that the detailed zoom-in component models are very complex and hard to model in a scalable way for directly assessing and controlling dynamic interactions between many diverse modules. To overcome this problem, I have proposed that one can utilize aggregate variables associated with each module [55] for zoomed-out studies of system-level dynamics. The interactive model of energy dynamics in two interconnected modules is obtained by combining the aggregate models of the two modules and subjecting them to the generalized Tellegen's theorem constraints, which represent the conservation law applied to the rate of change of interaction variables at the interfaces of different modules. These concepts are basic to modeling, simulating, and controlling complex power systems. Instead of modeling modules as combined high-order models using conventional state variables x and subjecting them to static real and reactive power balance constraints, the modeling lends itself to mapping detailed models into aggregate models internal to the modules and then exchanging information between the modules about their own interaction variables. It is interesting to interpret this modeling approach in the context of the seminal work by J.C. Willems on behavioral
² For simplicity, we omit the subscript i; it is implied that the modeling is for any module i.
modeling of open systems. Willems forcefully argued that physical systems, in contrast with information systems, cannot have their components characterized in terms of unidirectional inputs and outputs only. He suggests that, in addition to the dynamics of the internal state variables x, it is necessary to model so-called shared variables at the interfaces between the components [37]. At the early stages of our research we were not aware of this connection. Thanks go to Professor Sanjoy Mitter at MIT, who recently pointed out this possible relation to us. Further research can and should be done to relate more formally the interaction variables to the shared variables of [37].
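The interface conservation idea discussed above (the rates of change of interaction variables must balance where modules meet) can be stated as a toy residual check. The lossless-interface assumption and the numerical values below are mine, chosen only to make the conservation constraint concrete.

```python
import numpy as np

def interface_residual(z_dot_out_i, z_dot_out_j):
    """Residual of the interface conservation constraint for two
    directly coupled modules, assuming a lossless interface: what
    module i sends out must be exactly what module j absorbs, so the
    two outgoing interaction-variable rates must sum to zero."""
    return np.asarray(z_dot_out_i) + np.asarray(z_dot_out_j)

zi = np.array([0.8, 0.05])  # (P_out, Qdot_out) of module i, illustrative
zj = -zi                     # module j absorbs exactly what i emits
assert np.allclose(interface_residual(zi, zj), 0.0)  # constraint satisfied
```

In a simulation of many interconnected modules, a nonzero residual at any interface would flag an infeasible set of interaction-variable schedules.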
12.2.4 Cooperative Interactive Control in Energy Space

We observe in Fig. 12.2 that, when declaring $\dot{z}^{out}$ to be an aggregate control variable $u_z$, the problem of local output feedback control design can be defined as an LQR problem which aligns aggregate input and output interaction variables between the neighboring modules. The aggregate model is linear in closed loop and, as such, it supports control design for provable performance even during large disturbances, generally a very difficult task [68]. The aggregate control $u_z$ has to be mapped into the physical primary control u of the zoom-in technology-specific modular dynamics. While the higher-level control design of $u_z$ is unifying and not technology-specific, its mapping into physically implementable internal control u is very technology-specific. It must be carried out carefully by observing various saturation and safety equipment-specific constraints. A very intriguing open R&D area concerns further understanding of this control of energy dynamics and its comparison with state-of-the-art advanced machine controls, such as variable speed drives [20]. This interactive control is fundamentally cooperative. The basic idea is that the lower-level controllers (primary controls of modules, such as governor control of generator local frequency and generator voltages) be designed to align their interaction variables at the interfaces with the neighboring modules so that the interconnected system meets conservation laws. Unique to the proposed energy dynamics is that the modeling accounts for both the conservation of energy and the second law of thermodynamics, which recognizes that not all potential work can be utilized. This is done in a distributed interactive way, with modules computing their own interaction variables and attempting to align them with the interaction variables of their neighbors. This approach lends itself to checking the system-level feasibility of the interconnected modules.
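The claim that the closed-loop aggregate model lends itself to LQR design can be illustrated with a minimal numerical sketch. Here the aggregate pair $x_z = [E\ p]^T$ is taken to obey $\dot{E} = p$ and $\dot{p} = -u_z$ (my simplified reading of the aggregate structure, with the tangent-space energy disturbance set to zero), and a standard continuous-time LQR gain is computed; the weighting matrices are arbitrary illustrative choices, not values from the chapter.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Aggregate model x_z = [E, p]^T:  dE/dt = p,  dp/dt = -u_z
# (Q-dot acting as the aggregate control; E_t disturbance omitted).
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [-1.0]])
Q = np.diag([10.0, 1.0])  # illustrative state weights on (E, p)
R = np.array([[1.0]])     # illustrative control weight

# Solve the continuous-time algebraic Riccati equation and form the
# constant LQR gain K, so that u_z = -K x_z.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# The closed loop A - B K must be Hurwitz (eigenvalues in the left
# half-plane), i.e., the regulated energy dynamics are stable.
closed_loop_eigs = np.linalg.eigvals(A - B @ K)
assert np.all(closed_loop_eigs.real < 0)
```

The constant gain K is exactly the kind of provable-performance design the text refers to; mapping $u_z$ back into the technology-specific primary control u is the separate, technology-dependent step described above.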
The basic sufficient feasibility conditions can be used in a feed-forward manner on-line to check whether the system will be feasible and, if not, the modules can adjust their interaction variables by means of their primary control [52]. These conditions are probably best utilized at the design stage for the ranges of system operating conditions of interest. For an application of near-autonomous cooperative control for the case of a reconfigurable microgrid, see [56]. We are currently pursuing a systematic design of a general digital twin of electric power grids based on the use of intVars [41, 42].
M. D. Ilic
The use of interaction variables-based multi-layered models for improved hierarchical control of frequency and voltage dynamics is summarized in the follow-up sections as examples of proof-of-concept use of the unified energy dynamics model under well-defined assumptions. The early work was to design distributed improved modular control which competitively compensates the effects of neighboring interaction variables and the effects of the modules' own disturbances [43]. This approach is "improved" when compared to fully decentralized control, which responds only to deviations in the outputs of interest, frequency and/or voltage in particular, and treats the interactions with the neighbors as small disturbances [6]. In the follow-up sections I summarize several dynamical problems which have become major concerns in operating electric power systems. This is done in increasing order of complexity, starting in Sect. 12.3 from the problem of frequency dynamics and their control and regulation under typically made assumptions that have generally been valid in the past. I describe how, following the unified modeling, one can begin to manage the scale of large power grids, assess the causes of low-frequency oscillations, and set the basis for minimal information exchange in support of coordinated control.
12.3 Keeping AC Systems in Synchronism

Keeping electric power systems in synchronism by balancing power supply, demand, and delivery losses over several time scales forms the basic skeleton of today's hierarchical control [70]. Shown in Fig. 12.3 is a typical system load decomposed into the slowest predictable component P̂_L[kT_t], hard-to-predict slow deviations from the predictions ΔP_L[kT_s], and near real-time continuous load fluctuations ΔP_L[pT_p] (Fig. 12.4). Hierarchical control today is arranged so that the largest coordinating entities, power pools comprising several Balancing Authorities (BAs), schedule generation P_G[kT_t] in a feed-forward way so that predicted load is compensated as economically as possible over the slowest time horizons [kT_t], Fig. 12.5. The effects of slow load deviations from the predictions are seen in slow frequency deviations Δω[kT_s] = ω[kT_s] − ω_0 from the nominal AC frequency ω_0 and are corrected for in a feedback manner. Each BA responds to the net power imbalance, known as the Area Control Error ACE[kT_s], defined as

ACE[kT_s] = 10 β Δω[kT_s] − ΔF[kT_s]    (12.3)
This error is compensated by changing the set points ω_set[kT_s] of the generator-turbine-governor (GTG) sets of conventional power plants. The term ΔF[kT_s] stands for the deviations of the net scheduled power exports from a BA to the neighboring BAs. The fastest power imbalances are controlled by the governor moving the valve position in response to fast fluctuations of the local frequency ω[pT_p] from the governor set points ω_set[kT_s]. For purposes of this chapter it is important to understand that these three controller levels are designed today somewhat independently from
12 Interaction Variables-Based Modeling and Control of Energy Dynamics
Fig. 12.3 Time scale decomposition of system load [70]
Fig. 12.4 Typical two-area system: power generated by individual power plants and the area interaction variables [72]
Fig. 12.5 Basis for today’s hierarchical control
each other, each assuming that the other levels meet their objectives. This makes today's hierarchical frequency control quite straightforward. Unfortunately, it may not work well in systems with highly fluctuating intermittent power characteristics, such as those experienced with renewable resources.
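The quasi-static ACE of Eq. (12.3) is simple enough to compute directly. The sketch below uses an illustrative frequency-bias value β; the function name and the numbers are hypothetical, chosen only to make the sign convention of the equation concrete.

```python
def area_control_error(freq_hz, net_interchange_mw,
                       freq_nominal_hz=60.0, sched_interchange_mw=0.0,
                       beta=25.0):
    """Quasi-static ACE per Eq. (12.3): ACE = 10*beta*dw - dF.

    beta is the area frequency bias in MW/0.1 Hz (illustrative value);
    dF is the deviation of net tie-line exports from schedule.
    """
    d_omega = freq_hz - freq_nominal_hz          # frequency deviation
    d_f = net_interchange_mw - sched_interchange_mw  # export deviation
    return 10.0 * beta * d_omega - d_f

# Example: frequency sags 0.02 Hz below nominal while exports match schedule.
ace = area_control_error(59.98, 0.0)   # ≈ -5.0 MW of under-generation
```

A negative ACE here signals that the BA must raise its GTG set points ω_set[kT_s], which is exactly the secondary-control action described above.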
12.3.1 Particular Case of Unified Energy Model

Looking back, it is interesting to observe that these frequency models are effectively a particular case of the unified models introduced here. To show this, I simply think of an interconnected electric power grid as a composition of single ports representing the GTGs of power plants and power loads, interconnected via two-port transmission lines. Without loss of generality, a sketch of a two-area system is shown in Fig. 12.4. Now, following the unified modeling approach, each module, single- or two-port, is specified by modeling its interfaces with other modules using its own interaction variables. To derive these models for frequency dynamics, the assumption is made that the electromagnetic sub-processes affecting voltage and reactive power dynamics have only second-order effects and are not modeled. Keeping this assumption in mind, each GTG set is a black box, shown in Fig. 12.6. Under the assumption that the electromagnetic energy stored in the coils is decoupled from the mechanical energy in the rotating shaft, the unified aggregate model shown in Fig. 12.1 becomes

J dω/dt = T_mech − T_G − D(ω − ω_0)    (12.4)
The typically used normalization assumes that frequency is nominal at 1 per unit, leading to the basic swing equation model of the open-loop generator,

J dω/dt = P_mech − P_G − D(ω − ω_0)    (12.5)
Fig. 12.6 Two-node power system with its dynamic components
The power control P_u is P_mech and, as shown in Fig. 12.6, it comprises the internal primary control module. Next, to understand how one derives the dynamical model of a power grid interconnecting power plants, and to identify often hidden assumptions, one can use the unified model of two-port transmission lines. Similarly to most of the BPS models used today, the dynamical model for assessing frequency stability assumes that the electromagnetic stored-energy dynamics in the LC parts of the lines are stable and much faster than the dynamics of power plants. As a result, their aggregate energy dynamics model becomes a static constraint simply saying that the power P_G entering the line is the same as the power P_L entering the load side, with the thermal losses added. Again, this directly follows from the unified model under these assumptions, and a transmission line connecting nodes 1 and 2, Fig. 12.6 for example, has two interaction variables at its ports, P^in_1−2 and P^in_2−1:

intVar_TL = [P^in_1−2  P^in_2−1] = [P_G  P_L]    (12.6)
Finally, the constant power load is characterized by its intVar being P_L. These three components are subject to instantaneous conservation of power at all interfaces (no inter-component dynamics under the modeling assumptions made) and, after also neglecting the thermal losses of the transmission line and equating the intVars at the interfaces, the interconnected system ends up having the simple ODE model

J dω/dt = P_mech − P_L − D(ω − ω_0)    (12.7)
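Equation (12.7) can be simulated directly to see the first-order frequency response to a supply-demand imbalance. All parameter values below are illustrative per-unit numbers, not data from this chapter; a forward-Euler loop is enough for this sketch.

```python
# Forward-Euler simulation of the closed-system ODE, Eq. (12.7):
#   J dw/dt = P_mech - P_L - D (w - w0)
# Parameter values are illustrative (per-unit), not from the chapter.
J, D, w0 = 10.0, 2.0, 1.0
P_mech, P_L = 1.0, 1.2        # a 0.2 p.u. load step above mechanical power
w, dt = w0, 0.01
for _ in range(10000):        # 100 s of simulated time (time constant J/D = 5 s)
    dw_dt = (P_mech - P_L - D * (w - w0)) / J
    w += dt * dw_dt

# Steady state: dw/dt = 0  =>  w = w0 + (P_mech - P_L) / D = 1.0 - 0.1
print(round(w, 4))            # prints 0.9
```

The steady-state frequency droop (P_mech − P_L)/D is what primary governor control then corrects by moving P_mech, as described in Sect. 12.3.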
Without loss of generality, a multi-power-plant system comprising n_G interconnected power plants has n_G ODEs of the form of Eq. (12.7), where all variables are vectors. We have also shown that, following the same steps as when deriving a continuous time interaction variable in a linearized frequency model of a single system, one can derive a dynamical model of each subsystem in terms of its own internal variables and the net tie-line flow exchanges F_e = F_G + D_P F_L with the neighboring subsystems,

ż = −l^T (Ḟ_e + D_P Ṗ_L)    (12.8)

where T = [0  l^T] and z = T x = l^T P_G [6, 7]. This is a generalization of today's ACE that accounts for locational differences in frequency, a model derived for the first time in [2, 3]. Using this model, and assuming primary control for frequency stabilization is effective, one can derive a quasi-static model for the AGC of each area by combining the quasi-static droops of all GTG sets participating in AGC with real power conservation (power flow) equations [6, 72]. These models are the only ones known to us which explicitly account for the demand fluctuations shown in Fig. 12.3 and also differentiate the locational aspects of AGC.
12.3.2 IntVar-Based Multi-Layered Scalable Frequency Stability Assessment

One of the most difficult recent challenges has been detecting the sources of inter-area oscillations [34]. The most widely practiced approach has been to compute the eigenvalues of the linearized dynamical model and to utilize participation factors for determining which states contribute most to these oscillations [46, 47]. For large-scale multi-BA systems this is computationally challenging, and much effort has gone into developing computer applications for this purpose. Similarly, important research was done on using vector Lyapunov functions for determining whether the instabilities are local to subsystems or are caused by strong tie-line flow interactions [48]. The well-known challenge here is defining non-conservative Lyapunov functions [49]. Another hidden problem with eigenvalue-based stability assessment methods is that using these computer applications requires knowledge of detailed GTG power plant models, including the parameters of their primary controllers. These are often either highly inaccurate or simply unknown for proprietary reasons. Here we propose a novel approach which overcomes many of these problems. The approach is based on computing energy functions in terms of interaction variables. The fundamental idea is to define a Lyapunov function as a quadratic function of the interaction variables in the BA, based on the fact that the existence of the intVar is a direct result of conservation of energy across the BA cutset, or any other subsystem cutset. It was further shown that this leads to structural singularity of the
subsystem matrix and that the eigenvector corresponding to its zero eigenvalue for a lossless transmission grid is the identity vector [3, 6, 72]. This further implies that the interaction variable, obtained by multiplying this eigenvector by the power outputs at all nodes within the subsystem, represents the energy imbalance between supply and demand within the subsystem. This then leads us to propose a Lyapunov function in terms of the interaction variable z_ss of the subsystem,

V_ss = z_ss^T z_ss    (12.9)
We are currently exploring the use of this Lyapunov function for applying the method proposed in [48]. Notably, this requires only knowledge/measurements of the power outputs from the subsystem since, as shown in [3] for the first time, for a lossless grid interconnecting the different nodes within the subsystem, or BA,

ż_ss = Σ_{i∈ss} P_i    (12.10)
Given that the power output from components is relatively straightforward to measure, one can assess the stability of a subsystem much more accurately than when full knowledge of the state variables is required. Moreover, this candidate Lyapunov function has a clear physical interpretation when considering electromechanical dynamics only, decoupled from electromagnetic dynamics. In addition, the use of Lyapunov functions does not require linearization, so the stability assessment results will be valid for larger regions of operation.
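The intVar-based assessment of Eqs. (12.9)-(12.10) needs only the measured power outputs across the subsystem cutset. The sketch below uses synthetic measurements (made-up numbers) to show how z_ss would be integrated and V_ss tracked in an on-line monitor.

```python
import numpy as np

# Sketch of intVar-based stability monitoring for one subsystem:
# integrate the measured net power output across the cutset to get z_ss
# (Eq. 12.10, lossless grid), then track V_ss = z_ss^T z_ss (Eq. 12.9).
# The measurement samples below are synthetic illustrations.
dt = 0.1
# Net power injections P_i at the subsystem's two nodes (rows = time steps).
P = np.array([[0.5, -0.3], [0.4, -0.35], [0.3, -0.32], [0.2, -0.25]])

z = 0.0
V_history = []
for row in P:
    z += dt * row.sum()      # z_dot = sum over i in ss of P_i  (Eq. 12.10)
    V_history.append(z * z)  # candidate Lyapunov function V_ss (Eq. 12.9)

# A decaying V_history over a window suggests the subsystem's supply-demand
# imbalance is being damped; sustained growth flags a stability problem.
```

Because only cutset power flows enter the computation, no proprietary GTG parameters are needed, which is precisely the advantage argued in the text.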
12.3.3 IntVar-Based Cooperative Frequency Stabilization and Regulation

Systems with high penetration of intermittent resources make it hard to stabilize and regulate frequency using today's temporally separated primary and secondary control. This is because the system states are continuously changing and it is not possible to use quasi-static models for secondary-level AGC. To overcome this problem we proposed enhanced AGC (E-AGC), whose objective is to minimize the integral of the energy imbalances between the control areas expressed in terms of their intVars [50]. Instead of having entirely distributed AGC in which each BA cancels its own quasi-static ACE, the E-AGC has the objective of minimizing at the system level a combination of the AGC control cost and the energy imbalances created by all interconnected BAs. The solution is a combination of least-cost resources from all BAs participating in frequency regulation. For the case of the two BAs shown in Fig. 12.4, BA1 has large amounts of intermittent solar power but no fast controllable equipment, while BA2 has slowly varying predictable demand and flexible hydro resources. Today's fully decentralized AGC would require BA1 to purchase expensive storage in order to meet its AGC performance requirement,
Fig. 12.7 Zoom-in and zoom-out view of complex energy systems [50]
and BA2 would not utilize its flexible hydro power. In contrast, E-AGC results in a cooperative solution in which BA2 provides its flexible hydro to BA1 because this is the least-cost solution for the interconnection comprising these two areas. The E-AGC is an example of intVar-based cooperative control which leads to the most efficient integration of clean resources. Notably, the proposed E-AGC does not require the hard-to-justify temporal separation of frequency stabilization and regulation. Another distinguishing feature of intVar-based frequency stabilization and regulation is that the information exchange between subsystems and the system coordination is in terms of interaction variables only and, as such, would not require major sensing or communications. Shown in Fig. 12.7 is a sketch of such minimal information exchange [50]. The performance of E-AGC depends to a large extent on whether interaction variables are coordinated at the level of each power plant, each area, or the system. Shown in Fig. 12.8 are the closed-loop frequency responses of the two-area power system to the solar radiance fluctuation in BA1 (Fig. 12.4), depending on which level of interaction variable coordination is implemented. It can be seen that the more granular the control is, the better the frequency response. It can also be seen that the interaction variable coordination regulates the net power imbalances between the areas dynamically.
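The cooperative flavor of E-AGC can be illustrated with a toy allocation problem. The quadratic regulation costs below are hypothetical, and the closed-form solution shown is the standard KKT result for an equality-constrained quadratic program, not the full E-AGC formulation of [50]; it only shows why the cheaper BA ends up carrying most of the regulation duty.

```python
import numpy as np

# Toy cooperative allocation: split a system-level imbalance among BAs by
# minimizing sum_i c_i * u_i^2 subject to sum_i u_i = dP.
# Costs c_i are hypothetical per-BA regulation costs.
def cooperative_allocation(costs, imbalance):
    costs = np.asarray(costs, dtype=float)
    # KKT conditions give u_i proportional to 1/c_i.
    w = (1.0 / costs) / np.sum(1.0 / costs)
    return w * imbalance

# BA1 (expensive storage) vs BA2 (cheap flexible hydro), 100 MW imbalance.
u = cooperative_allocation([10.0, 1.0], 100.0)
print(u)   # BA2 supplies ~90.9 MW, BA1 only ~9.1 MW
```

Fully decentralized AGC would instead force BA1 to cancel its own imbalance alone, which is the expensive outcome the text contrasts against.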
12.4 Keeping Voltages in AC Systems

Just as AC power grids need to operate in synchronism, they also need to maintain voltages close to nominal. This is necessary for a variety of reasons, in particular for supplying end users with high Quality of Service (QoS) power, preventing equipment damage, and enabling power delivery. One way to assess the relevance of voltage control is to use the notion of Available Transfer Capability (ATC) as
Fig. 12.8 Effects of interaction variable coordination in response to solar radiance fluctuations [36]
a multi-temporal and multi-spatial measure of the power grid's ability to perform these functions [78]. Determining ATC is complex since grid "congestion" does not occur only when the thermal limits of the wires are exceeded, but also when the maximum electric power transfer is reached [90] and/or when the system experiences stability problems [80]. Stability problems can be reflected as overly sensitive responses to parameter uncertainties or state deviations from system equilibrium [47], or as a sudden voltage "collapse" triggered by large sudden equipment failures, after which the system cannot transition in a stable manner to its steady state [49]. Keeping voltages close to nominal avoids these problems by compensating reactive power supply, demand, and delivery losses over several time scales. The well-established industry approach to keeping voltage close to nominal has been viewed mainly as a planning, and to a lesser extent an operating, problem. It has also been considered a localized problem which needs to be managed close to where the problem occurs [66]. As such, voltage support has been provided by deploying reactive power hardware, capacitors, and on-load tap-changing transformers (OLTCs) at the BPS planning stage so that distribution systems deliver power to grid users close to nominal voltages. Doing this in today's power grid operations is generally less systematic than real power dispatch and frequency regulation, for a variety of reasons, technical and historic. Detailed formulations, the state of the art, and industry practice are outside the scope of this chapter; see the more detailed references [80–82]. In much the same way as when studying frequency dynamics and their control, it is necessary to understand the potential instabilities created by deviations in reactive power from nominal, as well as the longer-term deviations from nominal voltages
and voltage dispatch for regulation and scheduling. In the early 1980s several real-world large BPSs experienced voltage collapse for the first time, and it turned out that these collapses were caused by the non-adaptive OLTC control during reactive power shortages [91]. The events were counter-intuitive to system operators, who expected the OLTCs to maintain voltages by decreasing the reactive power consumed. A large power system bifurcation-related voltage collapse was reported in Missouri [72]. Low-frequency inter-area oscillations were also affected by the control of field excitation voltage during a nuclear power plant outage [76]. The 2003 blackout in the United States was documented to have been caused by inadequate reactive power support, in this case by independent power producers [65]. More recently, a Southwest United States nuclear power plant was disconnected in response to inadequate grid voltage. Most recently, operating problems have been reported due to malfunctioning of the fast inverter controllers of utility-scale solar power plants. These are only some of the events which occurred in real-world power grids and were explained by voltage-related problems. These voltage operating problems often appear as a perfect storm, given what were believed to be well-established industry approaches to maintaining voltages [64]. Our experience with exploring the potential of optimally dispatching the set points of voltage-controllable equipment, in the same way as real power is dispatched, has taught us that it is no longer most efficient to manage voltage problems by adding new equipment; see [62, 63]. Instead, the existing equipment must be utilized in a more flexible way as system conditions vary and, notably, as large power plants retire and are replaced by different plants located elsewhere in the system, including the deployment of off-shore wind power resources [27, 28].
These changes fundamentally affect the electrical distances between resources and loads and require a different voltage dispatch for implementing the desired delivery than when the grid was built. As the BPS, and even lower-level local power grids, are beginning to be required to deliver power between different locations than those for which the grids were designed, it will become necessary to have a more systematic approach to operating the grid reliably while also utilizing the delivery capacity more efficiently. These new operating problems opened the era of studying reactive power and voltage dynamics and their effects on system frequency, beyond the previously studied synchronization, frequency regulation, and real power economic dispatch [67, 81]. After many years of R&D concerning the different causes of voltage collapse, it is fair to say that there is no full understanding of which of the studied phenomena can take place in real-world systems, and which are physically impossible and are the result of using incomplete piecemeal modeling and control approaches subject to often hidden assumptions. Our main conjecture is that taking a step back and rethinking voltage dynamics in terms of our unified energy dynamics may ultimately help identify these modeling and control deficiencies. This can help assess new technologies for their potential to support voltage over broad ranges of temporal and spatial scales.
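The maximum-power-transfer limit mentioned above, one of the mechanisms behind voltage "collapse," can be illustrated on a two-bus example. The derivation is the textbook lossless-line power flow; the per-unit values are illustrative, not from any system discussed in this chapter.

```python
import numpy as np

# Two-bus "nose curve" illustration: source E behind reactance X feeding a
# unity-power-factor load P. The power flow gives V^4 - E^2 V^2 + X^2 P^2 = 0,
# so a real load voltage exists only up to P_max = E^2 / (2X).
E, X = 1.0, 0.5
P_max = E**2 / (2 * X)               # = 1.0 p.u.

def load_voltage(P):
    disc = E**4 - 4 * X**2 * P**2
    if disc < 0:
        return None                   # past the nose: no equilibrium (collapse)
    return np.sqrt((E**2 + np.sqrt(disc)) / 2)  # high-voltage solution

print(load_voltage(0.5))   # healthy operating point, V ≈ 0.97 p.u.
print(load_voltage(1.2))   # beyond P_max -> prints None
```

Past P_max the OLTC actions described later in this section cannot help: there is simply no steady-state solution to restore, which is why they produce the counter-intuitive "opposite" response.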
12.4.1 IntVar-Based Voltage Stabilization, Regulation, and Dispatch

Voltage dynamics and their control have been studied since the 1980s, and there exists a vast literature on their different aspects. The main interest in this chapter, however, concerns systems questions about the interactions directly related to voltage and electromagnetic phenomena, and the existence of fundamental structures which can ultimately help model, analyze, and control them for better QoS and more efficient utilization of existing resources, and determine the most effective types and locations of voltage-controllable equipment to achieve these. We summarize our on-going research, which is focused on establishing the conditions under which a power grid exhibits a localized response to deviations in reactive power injections [66], and when the interactions must be accounted for. In this section we revisit transient stability modeling with this question in mind. We then consider state-of-the-art secondary and tertiary level voltage control and their industry implementations [6, 7, 57, 58]. Viewing this secondary-level control in light of the unified energy dynamics modeling helps one quickly identify potential problems and the enhancements needed. We interpret the potential benefits of optimizing voltage dispatch while dispatching real power. In addition, we document our experience with using AC Optimal Power Flow (AC OPF) for supporting power delivery over larger distances, more efficiently and within the acceptable operating limits. The upcoming relevance of voltage support for the gradual controlled degradation of services during extreme events and for power delivery as decarbonization efforts increase cannot and should not be underestimated.
12.4.1.1 Transient Models and Voltage Stabilization
To start with, the currently used models for simulating transients in response to reactive power deviations from nominal predictable conditions comprise coupled electromechanical and electromagnetic energy conversion models of conventional power plants, which, when interconnected by the power grid, are subject to static real and reactive power balancing constraints [39, 51]. These models do not account for the fast electromagnetic transients of electrical wires and also assume that the power resulting from the energy dynamics processed in loads is instantaneous. One of the consequences of not modeling fast system-level electromagnetic dynamics is that these currently used DAE models cannot capture fast electromagnetic oscillations caused by voltage deviations at the generator terminals. When using these DAE models one arrives at highly misleading conclusions. For example, when attempting to define voltage interaction variables, it appears at first sight that they do not exist. This is because if one follows the same quest for a transformation which would map reactive power injections into a net reactive power injection, much the same way as was done when deriving the decoupled real power-frequency interaction variable in the section above, it becomes obvious that
this portion of the power flow Jacobian is not structurally singular. The Jacobian submatrix

J_QV = ∂Q/∂V    (12.11)

is generally not structurally singular, and one cannot derive a transformation which would be an aggregate measure of "reactive power imbalance" analogous to the real power/energy imbalance fundamental for the existence of what could be the reactive power interaction variable. This means that when simulating the transient response of interconnected power systems modeled using conventional DAEs, even the effects of unstable voltage behind transient reactance will not be seen. This misleading conclusion is a result of not modeling the transients in transmission lines and loads and of assuming that the reactive power dynamics are negligible. It will be shown in the section concerning voltage problems how this quite relevant modeling issue can be overcome by using the unified energy dynamics, which accounts for these effects. It is important to assess the design of today's voltage controllers keeping in mind the above conclusions that voltage dynamics are highly localized and that it is, in particular, sufficient for the field exciter at the power plant to respond to local voltage only. Voltage stabilization using field excitation of conventional generators has been studied for quite some time. It is mainly intended to stabilize the terminal voltage of the generators back to close to nominal. This is expected to be done in response to relatively slow reactive power load fluctuations during normal conditions. If the disturbances are more sudden, the AVRs could experience too high a gain, which could create negative damping and destabilize the coupled frequency-voltage dynamics. Nevertheless, the seminal papers by Charles Concordia [59] made it clear long ago that such a controller design, if tuned so that the control gain is high, could destabilize even the simplest one-generator, one-load system interconnected via a weak transmission line.
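The structural contrast invoked in Eq. (12.11), between the real-power/angle block of the power flow Jacobian and J_QV, can be checked numerically on a single lossless line. The sketch below forms both blocks from the standard lossless line flow equations; the bus values are illustrative.

```python
import numpy as np

# Single lossless line (susceptance b) between buses 1 and 2:
#   P1 = b V1 V2 sin(th),            th = theta1 - theta2
#   Q1 = b V1^2 - b V1 V2 cos(th)    (and symmetrically for bus 2)
# The P-theta block is structurally singular (row sums vanish, identity
# vector in the null space); the Q-V block J_QV = dQ/dV is not.
b, V1, V2, th = 5.0, 1.0, 0.98, 0.1

c = b * V1 * V2 * np.cos(th)
J_Ptheta = np.array([[ c, -c],
                     [-c,  c]])
J_QV = np.array([
    [2 * b * V1 - b * V2 * np.cos(th), -b * V1 * np.cos(th)],
    [-b * V2 * np.cos(th),              2 * b * V2 - b * V1 * np.cos(th)],
])

ones = np.ones(2)
print(np.allclose(J_Ptheta @ ones, 0.0))   # True: structural singularity
print(np.allclose(J_QV @ ones, 0.0))       # False: no aggregate "Q imbalance"
```

This is exactly why the real-power interaction variable of Sect. 12.3 exists while a directly analogous reactive-power one does not under the conventional DAE modeling.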
This instability is attributed to the exciter creating negative damping in the electro-mechanical rotor dynamics. It becomes possible, using the unified energy dynamics model which relaxes the dynamics of the electro-mechanical rotor shaft given in Eq. (12.4), to study the effects of exciter control design on the terminal voltage of the generator, as done in [59]. More recently, nonlinear feedback linearizing controllers (FBLC) have been proposed for managing slow inter-area oscillations during large equipment failures [76]. These controllers hold major potential for stabilizing the coupled frequency-voltage dynamics and also for avoiding the electro-mechanical sub-synchronous resonance known to have led to major breaking of GTG shafts [30], as illustrated in Figs. 12.9 and 12.10. A hindsight view of these nonlinear controllers shows that they respond to the rate of change of real power produced by the GTG, and they can also be shown to be a particular case of the energy controllers introduced in the section concerning synchronism of AC power systems. Field excitation control by AVRs has been known to have contributed to blackouts at times because of over-excitation or under-excitation problems [83]. This problem occurs when AVRs respond to
Fig. 12.9 Interactions during typical blackout and their control
Fig. 12.10 Transient stabilization using FBLC [64]
large deviations in terminal voltages and reach the field excitation limits. When this happens, the GTG begins to lose control of its voltage and behaves effectively as a load whose voltage is uncontrolled. These problems have been known to contribute to the general problem of voltage collapse. Today's engineers are ready to design better nonlinear controllers, of FBLC or SMC types. Further research
is needed to show how these nonlinear controllers enable larger transient stability regions. Our conjecture is that, all else being the same, the fundamental reason is that the energy-shaping controllers minimize the inefficiencies created by the reactive power oscillations. As an example of the potential benefits that could be gained from deploying nonlinear energy dynamics controlling equipment, shown in Fig. 12.10 is a sketch of the major swings created by the dynamic interactions triggered by the loss of major equipment during the 2003 US blackout. I believe that similar benefits could have been shown for preventing islanding even during the early 1965 blackout. During these events the interactions were so pronounced that system protection devices disconnected the tie-lines between the subsystems. During these disturbances the generator controllers were too slow to control the interaction swings and the protection disconnected the system into subsystems. Notably, these potential problems could not be predicted using today's transient stability programs based on the DAE models described above. The plots in Fig. 12.10 [64] show the response of power plants in the NPCC system during the loss of the Oswego, New York nuclear power plant, with a conventional AVR and with the FBLC controller [76]. It can be seen that, all else being the same, the newly proposed controllers enable staying in synchronism and maintaining voltages close to nominal, without experiencing sudden voltage collapse.
12.4.1.2 IntVars-Based Models for Multi-Layered Voltage Regulation
The seminal work on using voltage measurements at several "pilot point" loads and feeding them back to adjust the set points of the AVRs participating in Automatic Voltage Control (AVC) demonstrated an effective counterpart to AGC frequency regulation. These are secondary-level controllers in today's hierarchical control. The AVC was first introduced by Electricité de France (EdF), and it has been implemented and working quite successfully in nineteen (19) regions of the EdF system [57]. More recently, China has implemented the AVC and has used it for regulating voltages during normal operations [58]. The EdF AVC has had as its main objective minimizing deviations of voltages from nominal, and it has not been concerned with the economics of coordinating system regions for minimizing the need for reactive power reserves [8, 57]. In collaboration with EdF in the 1990s, I introduced an improved AVC as well as a new tertiary-level coordinating voltage control using reactive power interaction variables [6, 7]. This work draws on the notion of a quasi-static intVar and not on the continuous time intVar since, as discussed above, the currently used DAE models do not show its existence. These quasi-stationary models were derived starting from decoupled reactive power balance equations in which the disturbance is the incremental reactive power consumed by loads and/or the incremental tie-line reactive power flow deviations from schedules. These models represent the dependence of the steady-state load voltage deviations ΔV_L from nominal at discrete times [kT_s] on the deviations in reactive power loads and net tie-line reactive power flows, and they are controlled by changing
the increments ΔV_G in the generator set points of the AVRs. The number of loads in a typical BPS is very high, so instead, only "pilot point" deviations in voltages ΔV_P are measured, and the secondary-level AVC responds to these deviations only. This means that both observability and controllability problems may occur. The existence of the secondary-level intVar is then a consequence of the lack of controllability in a quasi-static model defining the dependence of the load voltage increments on the voltage increments of the generators participating in AVC. The quasi-stationary model is key to understanding the locations and numbers of voltage measurements of loads and of controllable equipment participating in AVC for enabling voltage regulation. For a detailed treatment of these models and their use, see [6, 7, 72].3 These early concepts of improved AVC and the proposed tertiary-level reactive power management should be understood for their major relevance in enabling feedback control of the set points of voltage-controllable equipment so that the load voltage is regulated as reactive power demand varies. Here, again, it is important to differentiate between the seemingly localized transient response to fast disturbances and the slower intVar-based regulation of voltage deviations caused by small persistent deviations in reactive power consumption. It is indeed possible to manage voltages in a coordinated manner more efficiently than by thinking of the problem as a strictly local regulation problem [7].
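The controllability question raised above can be illustrated with a toy sensitivity matrix. The matrix S below is a made-up placeholder mapping AVR set-point increments ΔV_G to pilot-point voltage increments ΔV_P (ΔV_P = S ΔV_G in the quasi-static model); in practice S would come from the decoupled Q-V sensitivities of the actual grid.

```python
import numpy as np

# Toy quasi-static pilot-point sensitivity: rows = pilot points, columns =
# AVRs participating in AVC. Values are illustrative placeholders.
S = np.array([
    [0.80, 0.10],
    [0.15, 0.75],
    [0.40, 0.42],   # a third pilot point, not independently reachable
])

rank = np.linalg.matrix_rank(S)
n_pilots = S.shape[0]
print(rank, n_pilots)   # prints "2 3": with two AVRs, the three pilot-point
                        # voltages cannot all be regulated independently
```

A rank deficiency of this kind is exactly the lack of controllability from which, as the text explains, the existence of the secondary-level intVar follows.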
12.4.2 Voltage Dispatch for Maintaining Nominal Voltages

The typical approach in the control center of a BPS to operating the system within the acceptable limits currently relies on finding the worst-case scenarios without accounting for reactive power delivery problems, and then further simulating the contingencies which may cause thermal constraint violations in more detail by solving real-reactive power flow problems. If the nonlinear AC power flow does not solve, the approach is to rely on operators' experience and extensive if-then simulations for determining "proxy" line flow limits which schedule real power only within these limits [92]. This approach neither guarantees that voltage collapse will not occur [61] nor enables near-optimal real-reactive power dispatch. In the early 1980s the first major challenge to these methods used by the industry was the occurrence of widespread voltage "collapse" related blackouts. Systems began to exhibit network responses not previously seen by the engineers. In particular, OLTC actions attempting to maintain voltage at the receiving end connected to the load, by increasing tap positions to send more reactive power as voltage fell below nominal, resulted in the opposite effect, and voltage after a certain time "collapsed." No matter what the system operator tried, the network responded in just the opposite
3 We greatly appreciated one book reviewer's question concerning causes of rank deficiency in the controllability matrix [6]. As a result, a clarification appears in the final version of Chapter 4 of this book.
332
M. D. Ilic
direction from what the operator's expert knowledge of the grid response had predicted [91]. Distribution systems are often required to keep their power factor at the substation connection with the BPS close to one (1). This generally requires major use of reactive power compensating equipment, mainly capacitors and, at times, also shunt inductors to avoid excessively high voltages during light loading conditions.
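The OLTC "reverse action" described above can be reproduced on a textbook two-bus example. The sketch below is illustrative (all parameter values are made up): a source E behind reactance X feeds a resistive load through an ideal tap changer. Below a critical tap ratio, raising the tap raises the load-side voltage as expected; beyond it, the extra current drawn through the line makes further tap increases lower the load voltage, which is the mechanism behind the collapses observed.

```python
import numpy as np

# Two-bus illustration of OLTC reverse action (illustrative values):
# source E behind reactance X feeds a resistive load R through an ideal
# tap changer with ratio n (secondary voltage = n * primary voltage).
E, X, R = 1.0, 0.4, 1.0

def v_secondary(n):
    # Load referred to the primary side is R / n^2; a simple voltage
    # divider gives the primary voltage, and the tap scales it up.
    z_load = R / n**2
    v_primary = E * z_load / np.hypot(z_load, X)
    return n * v_primary

taps = np.arange(1.0, 2.5, 0.1)
v = np.array([v_secondary(n) for n in taps])
n_crit = (R / X) ** 0.5     # beyond this ratio, raising taps LOWERS voltage
print(round(n_crit, 3))
print(taps[v.argmax()])     # the peak occurs near n_crit, not at the max tap
```

The critical tap corresponds to the maximum power transfer condition (load impedance referred to the primary equal to the line reactance), the same boundary that defines the nose of a PV curve.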
12.4.2.1 Inconsistent Performance Objectives
The current practice of controlling reactive power equipment to maintain power factor close to one (1) at the substation level as system conditions vary is generally hard. Moreover, it is not optimal when viewed from the BPS level. In particular, many studies and simulations of delivery problems related to voltage collapse have shown that, depending on loading and grid design, it is necessary to optimize voltage-controlled equipment at the receiving end to support delivery into voltage-constrained load pockets [79]. Similarly, power delivery over large electrical distances can be enabled by optimizing set points of controllable OLTCs. We have done several studies of voltage problems in the US Northeast Power Coordinating Council (NPCC) system [77]. We have shown, using our home-grown AC Optimal Power Flow (AC OPF), that the currently observed East-West "flow gates" in the New York Control Area (NYCA) can be greatly relaxed by controlling set points of several AVRs on generators and of OLTCs placed between Niagara and New York City [86]. Over time, the power industry has deployed large amounts of shunt capacitors to overcome voltage problems. To the best of our knowledge, such new equipment does not always change the flow gate limits. We have had significant experience exploring the potential of optimal AC OPF-based voltage dispatch for overcoming these problems, in particular through the New Electricity Transmission Software Solutions (NETSS), Inc. startup we founded [86].
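The flavor of voltage dispatch can be conveyed with a toy example; this is emphatically not the NETSS AC OPF, just a brute-force scan on a hypothetical single line. The sending-end voltage magnitude is swept, a fixed-point power flow is solved for each set point, and the losses are compared among set points that keep the load voltage within limits.

```python
import numpy as np

# Toy voltage-dispatch scan (illustrative, not an actual AC OPF): a
# single line R + jX feeds a constant-power load S2; we scan the
# sending-end voltage magnitude and pick the set point that minimizes
# line losses while keeping the load voltage within limits.
Z = 0.02 + 0.10j                 # line impedance (p.u.)
S2 = 0.8 + 0.3j                  # load demand (p.u.)

def solve_v2(v1):
    v2 = v1 + 0j                 # fixed-point power-flow iteration
    for _ in range(100):
        v2 = v1 - Z * np.conj(S2 / v2)
    return v2

best = None
for v1 in np.arange(0.95, 1.11, 0.01):
    v2 = solve_v2(v1)
    if not (0.95 <= abs(v2) <= 1.05):
        continue                 # infeasible load voltage, skip
    i = (v1 - v2) / Z
    loss = abs(i) ** 2 * Z.real
    if best is None or loss < best[1]:
        best = (v1, loss)

print(best)                      # higher set points give lower losses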
However, the performance objectives during normal operations are not always coordinated between: (1) transmission owners of voltage-controllable equipment, such as on-load tap-changing transformers (OLTCs), shunt and series capacitors, phase angle regulators (PARs), and, more recently, power electronically controlled reactive power equipment, such as static var compensators (SVCs) and, more generally, Flexible AC Transmission Systems (FACTS) [72]; (2) distribution system owners responsible for providing high-quality service to the very large number of end users; and (3) BPS planners and operators. The grid owners are mainly concerned with maintaining voltages close to nominal during both normal and contingency conditions and, to the extent possible, with supporting efficient power delivery [87]. BPS operators, on the other hand, are concerned with economically and environmentally efficient generation utilization by dispatching real power within the operationally acceptable voltages. The higher the voltage at the generator terminals, the more efficient the power delivery. Maintaining AVR voltage settings high during normal operations, however, does not leave much headroom for ensuring power delivery during large equipment failures. The BPS operators also request distribution
12 Interaction Variables-Based Modeling and Control of Energy Dynamics
333
system operators to maintain power factor close to 1 per unit at the medium-voltage substations. These multiple sub-objectives are addressed at the grid planning level by adding reactive power compensation as necessary, and much less by on-line voltage dispatch. Closer to the end users, voltage-controlled equipment is, at best, preprogrammed to maintain voltages close to nominal for a given daily load consumption.
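The standard planning-level calculation behind the power-factor requirement mentioned above is shunt capacitor sizing; the numbers below are illustrative. The reactive compensation needed to move a feeder from one power factor to another is Qc = P (tan φ_before − tan φ_after).

```python
import math

# Shunt capacitor sizing to move a feeder's power factor toward one
# (illustrative numbers): Qc = P * (tan(phi_before) - tan(phi_after)).
P_mw = 50.0
pf_before, pf_after = 0.85, 0.99

phi1 = math.acos(pf_before)
phi2 = math.acos(pf_after)
qc_mvar = P_mw * (math.tan(phi1) - math.tan(phi2))
print(round(qc_mvar, 1))   # MVAr of shunt compensation required
```

This static sizing is exactly the kind of preprogrammed, planning-time decision the text contrasts with on-line voltage dispatch: the capacitor bank is fixed while the actual load, and hence the required Qc, varies through the day.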
12.4.2.2 The Most Recent Challenge to Power Transfer: Extreme Conditions
Flexible model predictive dispatch of both real and reactive power becomes essential for resilient electricity services during extreme weather events as well. Planning is usually done to ensure secure service during (N − 1) and (N − 2) contingencies, but (N − k) events, k ≫ 2, typical of hurricanes, floods, fires, and other disasters, cannot be managed strictly in a preventive way. We have shown that during hurricane Maria in Puerto Rico voltage dispatch became key to serving critical load in system pockets, "enclaves," after the loss of major transmission lines [85]. As part of our Puerto Rico study, we proposed a DyMonDS architecture as an extension to today's BPS SCADA in order to enable multi-layered interactive participation of end users and distributed energy resources (DERs) to help supply critical loads during such extreme events. To do this, it is necessary to support voltage in an adaptive way at the end users' side. Deciding what to build as the system and resources change in a significant way must be done in the context of how these resources will be utilized. Because of major uncertainties, including regulatory ones, it is becoming important to have computer applications for short-term flexible utilization; this is where flexible voltage and reactive power dispatch can play significant roles [27].
12.5 Dynamics in Systems with Intermittent Resources and Power Electronics Control
Deployment of intermittent resources at locations for which the power grid was not designed has begun to cause concerns regarding the stability of such systems. Persistent power fluctuations from these sources lead to a mix of problems not previously studied or formulated. For example, fast fluctuations of instantaneous power can excite dynamics of transmission lines previously assumed to have a stable response. The interactions between these resources and diverse, poorly modeled loads can lead to further dynamic oscillations, particularly pronounced in microgrids with dynamically varying induction-machine-type loads turning on and off and creating large sudden inrush currents [93].
Voltage dynamics and its control are becoming much more important in systems with high penetration of intermittent resources. Fundamentally, when the instantaneous power of the load fluctuates, this leads to fluctuations in nodal voltages. There have been many controllers, known as FACTS, such as static var compensators (SVCs), STATCOMs, and the like, which effectively control energy stored in reactive components such as inductors and capacitors by means of power electronic switching in order to stabilize these fast voltage fluctuations. In the past, FACTS have been placed in electrically distant parts of the BPS, and the common practice of tuning them against a static Thevenin equivalent representation of the rest of the system, without accounting for dynamic interactions with the other components, has worked relatively well. Also, in the past these controllers were deployed assuming relatively small disturbances, and using linearized dynamic models was adequate for tuning them so that the eigenvalues of the linearized system-level model could be placed as desired. More recently, the increased deployment of wind and solar power plants equipped with power electronically switched controllers has resulted in qualitatively new electromagnetic inter-area oscillations caused by uncoordinated tuning of FACTS controllers. For example, in Texas such oscillations were named control-induced system stability (CISS) problems [94]. The Texas system experienced CISS problems caused by wind-power-controlled FACTS resonating against the thyristor-controlled series capacitor (TCSC) placed on the long "weak" transmission line channeling large amounts of wind power toward the large load center of Houston. Notably, in my EECS Department talk at MIT in the late 1990s I predicted these CISS problems; an old video of this talk can be found in [95].
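The failure mode of tuning each controller against a static equivalent of the rest of the system can be shown in the smallest possible example; the matrices below are illustrative and not a model of the Texas event. Each controller is stable when the other is treated as frozen, yet the interconnected system is unstable once the dynamic coupling gain exceeds the local damping.

```python
import numpy as np

# Illustrative only: two controllers, each stable when tuned against a
# static equivalent of the rest of the grid, destabilize each other once
# their dynamic coupling through a weak line is strong enough.
a = 0.5          # local damping each controller provides
g = 0.8          # dynamic coupling gain through the weak line

A1 = np.array([[-a]])                 # controller 1 alone: eigenvalue -a
A2 = np.array([[-a]])                 # controller 2 alone: eigenvalue -a
A_coupled = np.array([[-a,  g],
                      [ g, -a]])      # interconnected system

eig_alone = np.linalg.eigvals(A1)
eig_coupled = np.linalg.eigvals(A_coupled)
print(eig_alone.real.max())           # -0.5: stable in isolation
print(eig_coupled.real.max())         #  0.3: unstable once coupled
```

The coupled eigenvalues are −a ± g, so stability of the interconnection requires g < a; no amount of separate eigenvalue placement on A1 and A2 alone can reveal this, which is the essence of the CISS phenomenon described above.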
Also, more recently a partial blackout occurred which was caused by a nuclear power plant tripping in response to voltage oscillations created by the inverter control of a large utility-scale solar power plant. It is our understanding that the electric power grid in China has experienced similar fast instabilities created by poor coordination of controllers placed on wind power plants [60]. These oscillations were detected in measurements but have been hard to reproduce using today's transient stability programs such as PSS/E or PSLF [31–33]. This is because modeling detailed dynamics of components and subjecting them to the algebraic real/reactive power balance constraints no longer holds. Simulating these instabilities requires detailed electromagnetic transient programs (EMTP), which simulate travelling waves in the transmission lines interconnecting these power electronically controlled components [96], and these are not scalable for simulating larger systems. This inability to model the dynamical problem as ODEs raises the question of the control design needed to suppress these oscillations.
12.5.1 Unified Energy Dynamics Modeling for Control of CISS
All of these emerging systems with fast-varying power electronics switching in response to persistent power fluctuations call for revisiting the models being used.
Fig. 12.11 Small bulk power system: Longer critical clearing time with FACTS energy control
Fig. 12.12 FACTS control of electromagnetic energy accumulation during fast evolving faults [97]
Their frequent inability to deliver all available renewable power indicates that it is no longer possible to make real-reactive power decoupling assumptions, nor to assume that fast responses stabilize in between commands given by AGC and/or AVC for adjusting output set points so that frequency and voltage are regulated close to their nominal values. A closer look into these problems shows that they can evolve at qualitatively different time scales. First, there has been an increasing presence of low-frequency inter-area oscillations. These can occur even during normal operations, as described above. They are probably triggered by relatively slow variations in power exchanged between system components and are fundamentally electro-mechanical in nature. The second, newer problem is that of very fast electromagnetic inter-area oscillations. Notably, instead of assuming the validity of time-varying phasors (TVPs), our unified modeling is based on representing instantaneous power and the rate of change of instantaneous reactive power. We conjecture here that many of these emerging electromagnetic problems can be modeled using such an approach instead of having to use time-consuming EMTP programs, which require integration of PDEs. Our unified modeling approach naturally lends itself to capturing the fast voltage fluctuations causing these problems. Our MIT group is actively pursuing research in this direction.
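The distinction between instantaneous power and its phasor summary can be checked numerically. The sketch below is an elementary illustration, not the chapter's unified model: for sinusoidal voltage and current, the instantaneous power p(t) = v(t)i(t) contains the average real power P plus a double-frequency component of amplitude 0.5·V·I, the information a TVP model summarizes, but computed without assuming the waveforms stay sinusoidal.

```python
import numpy as np

# Instantaneous power of a sinusoidal voltage/current pair over one
# fundamental period (illustrative magnitudes, 60 Hz).
w = 2 * np.pi * 60
V, I, phi = 1.0, 0.5, np.pi / 6       # peak magnitudes and power angle
n = 20000
t = np.arange(n) / (60 * n)           # one period, endpoint excluded

v = V * np.cos(w * t)
i = I * np.cos(w * t - phi)
p = v * i                             # instantaneous power p(t)

P = p.mean()                          # average real power, 0.5*V*I*cos(phi)
ripple = (p.max() - p.min()) / 2      # double-frequency amplitude, 0.5*V*I
print(round(P, 6), round(ripple, 6))
```

When the waveforms are distorted by fast switching, p(t) remains well defined sample by sample while the phasor quantities do not, which is why the unified modeling works with instantaneous power and its rates of change directly.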
12.5.2 Examples of Electromagnetic Energy Control for Frequency Stabilization
In our work we have shown the potential benefits of nonlinear energy control for increasing critical clearing time in a BPS with wind power plants with rapidly varying output. Figure 12.11 shows a sketch of a small power system that has both a large conventional plant and a wind power plant. When the load suddenly decreases, the wind power plant begins to accelerate as excess energy accumulates in the fast wind power plant. If the FACTS device does not observe this rate of change of power in the wind power plant, it does nothing to help stabilize the system. However, when the FACTS device responds to this rate of change seen in the power flow through the common transmission line, it quickly absorbs the excess energy and helps increase the critical clearing time of the wind power plant. Figure 12.12 shows a sketch of the electromagnetic energy dynamics accumulated in a FACTS device with and without the energy controller. It can be seen that the accumulated electromagnetic energy peaks quickly and reduces the acceleration of the wind power plant. These results are the first of their kind and show major potential for fast, power electronically switched shaping of electromagnetic dynamics in a BPS. Similar concepts are applicable and should be studied for assessing their potential for controlling CISS problems. We observe that fast frequency stabilization during large faults cannot be done using today's controllers, and, importantly, the
Fig. 12.13 Nonlinear control of sudden wind gusts [84]
fast electromagnetic energy control becomes essential. For more recent research on these ideas, see [52]. Similar examples of energy controllers can be found for ensuring stable service during large sudden wind gusts on islands [84]. Figure 12.13 shows a small real-world power system, Flores, one of the Azores Islands, Portugal, which we have studied. When exposed to a short, large wind gust, an SVC can stabilize frequency by controlling electromagnetic energy, much the same way as in a BPS with wind power during faults [84]. However, if wind gusts are both prolonged and large, it becomes necessary to use a flywheel and control its energy dynamics [84, 98], as shown in Fig. 12.13. It is shown that in both cases the conventional controllers of this equipment do not manage to stabilize system frequency during large wind gusts; nonlinear controllers are essential. Finally, most of our recent work on energy dynamics modeling and control design has been on small microgrids. We have studied three representative microgrids: one typical of tactical microgrids [99], a second resembling distribution feeders, and a third in power trains designed for hybrid aircraft [100]. We omit their descriptions here due to space limitations.
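The mechanism sketched in Figs. 12.11 and 12.12 can be caricatured in a single swing equation; all parameters below are illustrative and the control law is a simplified stand-in for the published nonlinear energy controllers. A damping term proportional to the observed rate of change of electrical power, the quantity the FACTS device is assumed to measure, absorbs the excess accelerating energy and reduces the peak speed deviation during a fault.

```python
import numpy as np

# Swing-equation caricature of rate-of-change-of-power FACTS control
# (illustrative parameters, per unit).
M, D, Pm, Pmax = 5.0, 0.5, 0.8, 2.0   # inertia, damping, mechanical power
dt, steps = 1e-3, 10000               # 10 s of simulated time

def simulate(k_facts):
    delta = np.arcsin(Pm / Pmax)      # start at the pre-fault equilibrium
    omega, peak = 0.0, 0.0
    for n in range(steps):
        pmax_t = 0.5 * Pmax if n * dt < 0.5 else Pmax   # 500 ms fault
        pe = pmax_t * np.sin(delta)
        dpe_dt = pmax_t * np.cos(delta) * omega
        # FACTS term: absorb energy in proportion to the observed dP/dt
        domega = (Pm - pe - D * omega - k_facts * dpe_dt) / M
        delta += omega * dt
        omega += domega * dt
        peak = max(peak, abs(omega))
    return peak

print(simulate(0.0), simulate(10.0))  # peak speed deviation drops with control
```

Without the dP/dt term the device is blind to the accumulating energy, matching the observation in the text that a FACTS device which does not observe the rate of change does nothing to help stabilization.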
12.6 Dynamics of Electricity Markets
An important aspect of operating and planning future electric energy systems is aligning economic and technical signals so that the right incentives are given for making the most out of what is available at the lowest societal cost. Recall our discussion earlier in this chapter concerning performance metrics issues in voltage dispatch. In the past several decades some parts of the electric power industry
Fig. 12.14 Emerging industry performance objectives [105]
have gone through restructuring from fully regulated to market-based electricity services. This process has been quite challenging, in part because the electricity market implementation targeted at economic efficiency has often been at odds with the main industry objective of providing uninterrupted electricity service. In addition to these dominant objectives, transmission owners have attempted to enhance the grid and its control so that physical efficiency is maximized. It is well known that achieving these multiple objectives is not always straightforward. For example, dispatch of controllable resources for economic efficiency is generally not the same as for delivering power at minimum thermal losses. The industry is currently a long way from resolving these conflicting objectives (see Fig. 12.14) [105]. Nevertheless, it is clear that it is becoming necessary to introduce new market derivatives in addition to energy and capacity products. We have shown in our previous work that one way of doing this would be to consider several basic functions necessary for providing electricity services. These are: balancing predicted real and reactive power supply and demand; compensating delivery losses; balancing hard-to-predict real and reactive power deviations during normal conditions at the right rates so that frequency and voltages remain within nominal operating limits; and ensuring uninterrupted service during large equipment outages [1]. One of the more difficult problems is valuing uncertainties when attempting to ensure these functions. We suggest that one of the basic steps can be an extension of today's markets to include bids which are differentiated according to their rate of response and, perhaps, according to the level
Fig. 12.15 Emergence of iBAs within the legacy BAs [24]
Fig. 12.16 Market bids using energy dynamics [101]
of confidence as to which of the bids can be physically implemented. Figure 12.15 shows a typical industry organization in which today's BAs have embedded non-utility entities; we refer to these as intelligent Balancing Authorities (iBAs). Based on a multi-layered modular representation of technical processes, it becomes necessary to characterize the modules by their rate of response (see Fig. 12.16). In the end, there will be trade-offs between the number of derivatives in the markets; their spatial and temporal granularity; confidence in how physically implementable they are; and their emissions levels if implemented. While this is all work in progress, it is clear that in order to align market incentives with the technical functionalities of diverse technologies a unified modeling is critical. Figure 12.16 shows modular characterization using energy dynamics. The market participants provide their bid functions, which relate the ranges of power they produce/consume to the price at which they are willing to participate. At the same time they need to provide ranges of rates of power change and the corresponding prices. This is the key idea which shows how energy dynamics can be used to establish more complete market derivatives, such as rate of change of power, so that the right economic incentives are given to the right technologies [40, 69].
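The two-product bid idea can be sketched as follows. The sketch is hypothetical (all names, prices, and quantities are made up, and the two products are cleared independently by merit order, whereas a real design would have to co-optimize them since a unit's ramp capability comes from its unloaded headroom); it only illustrates how pricing the rate of change of power separately lets a cheap-but-slow resource and a fast-but-expensive resource each find their role.

```python
# Hypothetical two-product clearing in the spirit of Fig. 12.16: each bid
# offers a power range and a rate-of-change-of-power range, each with its
# own price. All numbers are illustrative.
bids = [
    {"name": "coal",    "p_max": 300, "r_max": 3,  "p_price": 25, "r_price": 4},
    {"name": "gas_ct",  "p_max": 100, "r_max": 20, "p_price": 60, "r_price": 3},
    {"name": "battery", "p_max": 50,  "r_max": 50, "p_price": 80, "r_price": 1},
]

def clear(qty_key, price_key, need):
    """Merit-order clearing of one product (power in MW, or ramp in MW/min)."""
    award = {}
    for b in sorted(bids, key=lambda b: b[price_key]):
        take = min(b[qty_key], need)
        if take > 0:
            award[b["name"]] = take
            need -= take
    return award

energy = clear("p_max", "p_price", 380)   # bulk energy goes to cheap coal + gas
ramp = clear("r_max", "r_price", 40)      # ramping product goes to the battery
print(energy, ramp)
```

The battery wins none of the energy product yet clears the entire ramp product, which is precisely the kind of incentive alignment between technical capability and compensation argued for in the text.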
12.7 Unified Energy Dynamics as the Basis for Integration Protocols
I describe here briefly our multi-decade ongoing work toward generalizing today's SCADA in BPS into the DyMonDS cyber-physical framework for enabling integration of emerging technologies into legacy power systems at value [24]. Our staple DyMonDS sketch, shown in Fig. 12.17, indicates the very structure of the changing electric energy systems. The extra-high-voltage/high-voltage physical power grid comprises transmission and large conventional power plants. They have their centralized Energy Management System (EMS), which performs feed-forward scheduling subject to physical grid "congestion." Power is delivered to the MV substations and further distributed through distribution feeders to the LV electricity users. Many of these components, at all grid levels, are equipped with some sort of data-based decision tools, sensing, communications, and control. On-line information exchange is only enabled by the BPS SCADA; lower voltage levels do not exchange information on-line. To integrate many of these lower-voltage grid participants, recently referred to as the "grid edge" components, it is necessary to support minimal information exchange even from them to the BPS-level SCADA. This is a natural evolution from today's SCADA in which grid users have their own
Fig. 12.17 Next generation SCADA: Dynamic Monitoring and Decision Systems (DyMonDS) [24]
Fig. 12.18 Three principles for defining protocols in electric energy systems
embedded "DyMonDS" and communicate with different levels of other grid users and coordinators, shown as end-to-end blue dotted lines. While there is much ongoing work toward new architectures for the changing electric energy systems, there is very little theoretical foundation which considers the minimal information that must be exchanged in order for the sub-objectives of different grid users to become aligned with the societal objectives. I have shown, in the context of particular functionalities such as integration of adaptive load management (ALM) [102], transactive energy management (TEM), and enhanced automatic generation control (E-AGC) [103], that protocols for technology-agnostic candidate architectures require minimal information exchange for integrating diverse emerging technologies reliably and efficiently. This, in turn, supports distributed decisions and intVar specifications as the basis for choice and minimal coordination [27]. An extensive treatment of the resulting unified value-based feedback, optimization, and risk management in complex electric energy systems can be found in [104]. We are pursuing further work toward market designs for valuing uncertainties. In short, protocols defining rights, rules, and responsibilities (3Rs) are needed for moving forward [106]. A summary sketch of three basic principles that must be observed when introducing protocols for the changing industry is shown in Fig. 12.18.
12.8 Conclusions
It should probably be self-evident from this long chapter that the field of electric power systems is in a state of flux and imminent transition. Evolving power grids will need to deliver power in ways different from the legacy power grid, which was planned when conventional power plants were placed close to the load centers and local distribution grids were expected to uni-directionally deliver
power to the end users. This situation calls into question the current modeling and hierarchical control, which have evolved for this purpose but are now expected to accommodate fundamentally different types, sizes, and locations of power plants, as well as different end users. In particular, it is no longer possible to rely on historic demand patterns. In this chapter I take a closer look into these modeling and control assumptions and the emerging challenges. As one attempts to identify what needs to be enhanced by either new hardware technologies, storage in particular, or new data-enabled prediction and decision software, it becomes clear that current methods must be reconceptualized to avoid a technology-specific or system-specific approach to innovation. This is particularly necessary when attempting to integrate new technologies into legacy systems. I explain how this very quickly becomes hard to do with the existing models, because often hidden assumptions either prevent one from modeling and controlling new, previously unexperienced operating problems, or the existing methods are not scalable for timely decision making and control. As a way forward, we have proposed a unified modeling of energy dynamics as a common characterization of all modules in any electric power system. I believe that this becomes one possible way of assessing seemingly different technologies for their functions, such as ranges of power as well as rates of change of reactive power and energy over a specific time interval of interest. I have suggested, based on our research to date, that characterizing the dynamics of these three variables is sufficient to differentiate the roles of the modules comprising the interconnected system, and to further establish conditions for enabling feasible and stable dynamics of the interconnected grid. I have derived the energy dynamics in terms of the triplet (E, p, E_t) representing stored energy E, rate of change of stored energy p, and energy in tangent space E_t.
The modular dynamics of these variables are subject to the conservation of power P and the conservation of the rate of change of reactive power Q̇ with other modules. The structure of electric power systems is transparent and lends itself to cooperative information exchange in terms of the interaction variables [P, Q̇]. Notably, the energy dynamics of an interconnected system are linearly dependent on the energy variables and the rate of change of the interaction variables coming from the rest of the system. Once this is understood, it becomes possible to pursue distributed cooperative control of complex interconnected systems at provable performance, something badly needed but hard to do. The control of a module shapes its own interaction variables, and the information is exchanged so that the intVars of system-level modules are aligned. Notably, I show that, by starting from the proposed general unified model and identifying the assumptions made, stabilizing and regulating frequency and dispatching power during normal conditions become a particular case of the problem. Current models, however, assume relatively small perturbations from predictions and lend themselves to drawing on well-established knowledge of linear dynamical systems in the form of coupled ODEs. This simplification then makes it possible to model and control generators interconnected via transmission lines as systems of masses interconnected via springs, exhibiting what amounts to the well-studied kinetics of mechanical systems. This problem has taken on a new importance with the deployment of power electronically controlled intermittent resources, known as
inverter-based resources (IBRs). An important aspect of having systematic modeling and control of frequency dynamics is the ability to model these new technologies following the same modeling and control principles. The situation with maintaining voltage in the changing industry is even more challenging. In this chapter we do not attempt a full treatment of the emerging voltage control problems. Instead, we highlight a few important assumptions and the reasons for having to relax them to support both reliable and efficient utilization of clean resources as they replace old large power plants. Ultimately, systematic protocols must be put in place to help manage both past and emerging voltage-related problems. Most of the models at the secondary and primary levels in complex power grids are still derived by assuming decoupling of real and reactive power. The dynamics of reactive power must be studied both for avoiding new operating problems, such as control-induced sub-synchronous electromagnetic resonance, and for more efficient utilization of what is available even during normal operating conditions. Decarbonization will require that the system be operated so that the rate of potential to do real work is maximized by minimizing the cumulative effects of reactive-power-related inefficiencies at both the modular and system levels. There is no way around making the most out of what is available. This can only be achieved through flexible control of energy dynamics over multiple time horizons. This chapter offers a first step toward conceptualizing electric power systems for their ability to do so. Much work remains to be done.
References
1. Fink, Lester, New Control Paradigm for Deregulation, in Ilic, M., Galiana, F., & Fink, L. (Eds.). (2013). Power Systems Restructuring: Engineering and Economics. Springer Science & Business Media. Chapter 11, pp. 405–450.
2. Liu, X., "Structural Modeling and Hierarchical Control of Large-Scale Electric Power Systems," June 1994 (Mechanical Engineering Department, MIT)
3. Ilic, Marija, and Xiaojun Liu. "A simple structural approach to modeling and control of the inter-area dynamics of the large electric power systems: Part I linear models of frequency dynamics." In Proceedings of the North American Power Conference (NAPS) IEEE. 1994.
4. Ilic, Marija, and Xiaojun Liu. "A simple structural approach to modeling and control of the inter-area dynamics of the large electric power systems: Part II nonlinear models of frequency and voltage dynamics." In Proceedings of the North American Power Conference (NAPS) IEEE. 1994.
5. Ilic, M.D. and Liu, X.S., 1995. A modeling and control framework for operating large-scale electric power systems under present and newly evolving competitive industry structures. Mathematical Problems in Engineering, 1(4), pp. 317–340.
6. Ilic, Marija D., and Shell Liu. Hierarchical Power Systems Control: Its Value in a Changing Industry. London: Springer, 1996.
7. Ilic, Marija, X. Liu, B. Eidson, C. Vialas, and Michael Athans. "A structure-based modeling and control of electric power systems." Automatica 33, no. 4 (1997): 515–531.
8. X. Liu, M. Ilić, M. Athans, C. Vila, and B. Heilbronn, A new concept of an aggregate model for tertiary control coordination of regional voltages. In Proceedings of the 31st IEEE Conference on Decision and Control, pages 2934–2940, vol. 3, 1992
9. https://courses.ece.cmu.edu/18618
12 Interaction Variables-Based Modeling and Control of Energy Dynamics
Marija D.
Ilic I was born in the former Yugoslavia and was very fortunate to have had parents who unconditionally encouraged education and hard work. I studied electrical engineering at the University of Belgrade and obtained bachelor's and master's degrees with a specialization in automatic control. In the early 1980s I was sent to learn how to use automation to prevent the widespread loss of electricity service infamously known as "blackouts." My wonderful mentors at the University of Belgrade and at the Michael Pupin Institute contacted Professor John Zaborszky; I was given a research assistantship and was fortunate to earn my Master and Doctor of Science degrees from Washington University in St. Louis, Missouri. Dr. Z, as I ended up calling him even after I graduated, and I built a working relationship which culminated in writing a joint 900-page book in 2000. I learned so much from him! It would take too long to talk about my path after I finished my graduate studies. In short, it was juggling two-body family challenges of one sort or the other in academia; plowing the way as the first female tenure-track assistant professor at the School of Electrical Engineering at Cornell; getting tenured in four years at UIUC, and going through a divorce with two young children the same year; stepping down, giving up a tenured position, for personal reasons and moving to EECS at MIT; going through two evaluations for promotion to full professor and failing both times for not-well-understood reasons; being invited as a tenured full professor in two departments at CMU, the first tenured woman in ECE there; shuttling back and forth between Boston, home
with three children, and Pittsburgh, where I built the first power systems program in ECE and started my beloved Electric Energy Systems Group. Finally, tired of commuting and many lonely evenings without my family, I recently returned to MIT, again non-tenured. While this path sounds pretty rocky, often lonely, and without a big brother watching out for my professional growth, today I am incredibly grateful for all the people I have interacted with and learned from; for my big birthday my former students documented our interactions (https://eesg-dev.mit.edu/marija-ilic-workshop/) and made an academic tree as a present to me (https://eesg.mit.edu/). There is another tree whose fruits I have harvested for almost 40 years. It has not been drawn, yet it probably should be. At the root of this tree are many giants in our power systems field who shared their ideas and knowledge with me. At the risk of missing some, the academic people who influenced me tremendously are Milan Calović, Juraj Medanić, Petar Kokotović, Dragoslav Siljak, James Thorp, Robert Thomas, Sam Linke, M. A. Pai, Pete Sauer, Mac Van Valkenburg, Thomas Everhart, Fred Schweppe, Sanjoy Mitter, Elisabeth Drake, Felix Wu, Pravin Varaiya, Francisco Galiana, and David Hill. I also treasure what I learned from many industry people: Lester Fink, Charles Concordia, Dale Osborn, Narain Hingorani and, most recently, Raymond Beach. There are no strong enough words to express my thanks to them all. Needless to say, I never returned to my beloved Belgrade to share what I learned. As a matter of fact, given so many constraints and so many turning points in the road I took, I never managed to plan. I did the best I could wherever I was at the time. I am very glad to be in the midst of continuing teaching, research, and service. I continue to appreciate learning from people I work with and sharing what I know with those who reach out and ask me to do so.
There are so many young people with huge potential to lead in the field of the changing electric power industry, and these are truly exciting times, full of both challenges and opportunities. There are very few people in academia who have offered as extensive service and outreach as I have. I have served as an IPA at the NSF; organized over 50 national workshops and a series of 10 Annual Electricity Conferences at Carnegie Mellon University (CMU); and given close to 400 invited lectures. My own current work concerns modeling, simulation, and control methods for electric energy systems. The need for fundamental innovations is increasing because these systems are evolving into very-large-scale, complex, multi-temporal, multi-spatial, and multi-contextual social-ecological energy systems. I believe that I was the first to anticipate this evolution with its fundamental challenges and opportunities. I am personally very excited about the foundational conceptualization of emerging electric energy systems using intVars, which can be easily understood by engineers, economists, regulators, and the like. Based on this conceptualization, I have worked with Prof. Donald Lessard from the Sloan School of Management and proposed three principles for moving forward with enabling
future energy systems in support of decarbonization and resilient service even during extreme events, such as hurricanes in Puerto Rico. I served as the technical lead on a DHS-funded project concerning architectures that integrate end-to-end participation of different grid users in clean, resilient, and cost-effective service. Notably, the architecture proposed by the MIT LL team, and demonstrated using publicly available Puerto Rico data, is effectively an example of DyMonDS. In 2002, I founded New Electricity Transmission Software Solutions, which now deploys unique voltage-optimizing AC optimal power flow software in operations to enable more efficient power transfers and improve reliability. At MIT LL I led a project which proposed DyMonDS as an effective architecture in support of clean, resilient, and cost-effective electricity services in Puerto Rico, applicable to the entire continental United States. NASA now collaborates in the use of DyMonDS for next-generation aircraft electric systems. Last, but not least, I realize that teaching electric power systems has always been a major challenge. We must teach electric power systems very differently than we do now. It is critically important to establish technology-agnostic problem formulations, based on intVar concepts, using the language of disciplines that undergraduate students understand. I believe that my intVar-based modeling for control makes it possible to abstract problems of electric energy systems as complex dynamical physical and cyber network systems harmonized to serve the needs of socio-eco-environmental technological systems (SETS). Looking forward, now more than ever, there are major needs for university-led research to help the electric power industry build a new IT-enabled physical grid and replace the aging one. Having the necessary experience, my main objective is to help to the extent possible with such efforts by quietly building our growing Electric Energy Systems Group (eesg.mit.edu).
I am very excited about pursuing my "school of thinking" for teaching principles of modeling and control in the rapidly changing electric energy systems. Looking back, there are about 100 graduate students I have mentored, and a much greater number of young people to whom I have provided guidance. This makes me very fulfilled, since younger-generation leadership is so badly needed.
Chapter 13
Facilitating Interdisciplinary Research in Smart Grid
Yanli Liu
13.1 Introduction
Ecological civilization and sustainable social development require human society to pursue clean and electric substitutes for fossil fuels, so as to meet the requirements of climate change mitigation and environmental protection and to ease the tension between the shortage of fossil energy resources and growing energy demand. The structure of energy production will gradually transition to one in which renewable energy is the main resource and fossil energy is auxiliary. However, renewable energy is characterized by uncertainty, intermittency, and poor controllability and predictability. As the share of electric vehicles (EVs) in the energy consumption structure increases, the demand for user-friendly interaction will grow stronger. As an important supporting platform for energy production and consumption, the power system faces a significant challenge: great uncertainty on both the supply side and the demand side. Meanwhile, it also faces evolving requirements to provide a more secure, reliable, flexible, interactive, friendly, and open power supply. Therefore, the smart grid (SG) has become a strategic choice shared by countries around the world. The SG will establish a new scenario for energy utilization: the introduction of Internet technology will turn the grid into a venue where energy sharing is possible, allowing a large number of users to generate power from renewable energy at home, in offices, or in factories and to share the electricity with each other. In this way, EVs and local storage equipment will be used widely. The SG will change the industry-wide business model based on new concepts, and facilitate
Y. Liu () Tianjin University, Tianjin, China e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 J. S. Tietjen et al. (eds.), Women in Power, Women in Engineering and Science, https://doi.org/10.1007/978-3-031-29724-3_13
the organization, R&D, and implementation efforts needed to cope with disruptive changes. The SG will also nurture new technologies and business models, and through the deep fusion of the energy and power industry with advanced information and communication technology, smart control technology, and artificial intelligence technology, it will facilitate an industrial revolution on a larger scale than the development of the Internet [1]. Actually, the SG is an interdisciplinary product, born of the advances brought about by the Internet. Between 1925 and 1950, the idea of the interconnected power grid was established and greatly expanded throughout the world. In fact, the interconnected grid possesses four basic properties of the Internet: interconnection, openness, peer-to-peer architecture, and sharing. Novel technologies and ideas from the Internet domain should be incorporated to realize the vision of the SG. To further promote innovation in the SG, interdisciplinary communication and cooperation are critical. Close cooperation is required among scientists and engineers in the fields of energy, information, and communication. Advanced information technology also pushes forward innovative teaching, in order to cultivate new engineering talent for the development of the SG.
13.2 Integration of Distributed Renewable Energy
For renewable energy applications, there is a consensus that a local (peripheral) development-and-consumption mode is the best way to tackle the uncertainty of wind and solar power generation. The interconnected power system [2] proposed in the 1920s-1950s, which is still functional, is characterized by interconnection, openness, peer-to-peer architecture (peer-to-peer interconnection between energy-autonomous units), and sharing (distributed local optimization used to realize scheduling and optimization of global energy management). These characteristics are consistent with the basic features of the Internet. The application modes of wind power generation and solar photovoltaic (PV) power generation are analyzed from the perspective of the operating principles of the interconnected power system, considering the renewable generation ratio, system operation constraints, and available regulation capacity. When coal-fired thermal power provides the main regulation capacity, this analysis shows that a high penetration of wind and solar PV generation can be integrated only in the local development-and-consumption mode [3]. Meanwhile, from the perspective of whole-society cost, the per-unit cost of renewable energy supply decreases in this mode. Currently, international attention is focused on the point at which solar PV modules and battery energy storage systems can reach grid parity. Distributed energy resources (DERs) include power generators and/or power storage systems connected to the distribution system, including behind-the-meter devices installed at the user side. DERs span a range of power generation and storage technologies, including renewable energy, combined heat and power (CHP),
stationary battery energy storage, and EVs equipped with two-way converters. DERs can be used for local power generation/storage, participate in capacity markets and ancillary service markets, or be aggregated into virtual power plants. Advanced DERs, which are realized by smart inverters and can interact with the grid, are increasingly used to guarantee power quality and grid stability while simultaneously meeting the security requirements of the distribution system. Advanced functions of DERs will help facilitate new grid architectures, including the "microgrid" [4]. In case of a power outage, microgrids can separate from the grid and operate independently, forming a more adaptive and resilient power system. The power supply-demand balance benefits when users are encouraged to participate in demand response (DR) and protocol-based load control. Demand response is defined as changes in electric usage by end-use customers from their normal consumption patterns in response to changes in the price of electricity over time, or to incentive payments designed to induce lower electricity use at times of high wholesale market prices or when system reliability is jeopardized. Growing conservation awareness prompts electric power corporations to proactively seek different methods to satisfy the supply-demand balance. Since there is a large proportion of shiftable demand in residential loads, commercial loads, and high-energy-consumption industries, these customers can cooperate with grid operators to support demand reduction under peak-shaving and emergency conditions and assist the system in realizing instantaneous power balance. Modern transportation accounts for a large percentage of energy consumption worldwide, second only to the power industry. Electric power substitution and efficient power utilization can effectively ease tensions arising from the shortage of fossil energy resources.
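As a minimal, purely illustrative sketch of the price-based shifting of demand described above (the hourly loads, prices, and the `shift_load` helper are invented for this example and do not come from any real market or from this chapter):

```python
# Hypothetical illustration of price-based demand response:
# a fixed fraction of demand in the most expensive hours is
# shifted to the cheapest hours of the day.

def shift_load(load, price, shiftable_fraction=0.2):
    """Return a new hourly load profile with the shiftable part of
    high-price-hour demand moved to the cheapest hours."""
    assert len(load) == len(price)
    n = len(load)
    # Rank hours by price (ascending) and split into cheap/expensive halves.
    order = sorted(range(n), key=lambda h: price[h])
    cheap, expensive = order[: n // 2], order[n // 2:]
    new_load = list(load)
    # Energy removed from each expensive hour...
    moved = sum(load[h] * shiftable_fraction for h in expensive)
    for h in expensive:
        new_load[h] -= load[h] * shiftable_fraction
    # ...is spread evenly over the cheap hours, so total energy is unchanged.
    for h in cheap:
        new_load[h] += moved / len(cheap)
    return new_load

if __name__ == "__main__":
    load = [40, 35, 30, 50, 80, 90, 85, 60]   # MW per hour (illustrative)
    price = [20, 18, 15, 30, 60, 75, 70, 40]  # $/MWh (illustrative)
    shifted = shift_load(load, price)
    print(max(shifted) < max(load))  # peak demand is reduced
```

In this toy model the total daily energy is unchanged; only its timing moves, which is exactly the peak-shaving effect that shiftable residential, commercial, and industrial demand can provide.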
The Beautiful China policy envisions that the proportion of EVs will reach 80% or higher by 2050; the total consumption and storage of electric power needed for that conversion is extremely large. EVs can be charged during low-consumption periods (at low electricity prices) and feed electric power back to the grid during the day, when demand is highest (at high electricity prices). Therefore, EVs will become a major part of demand response, and they should and can be developed in coordination with distributed renewable energy generation deployed on a large scale. The uncertainties of the supply side and demand side constitute the biggest challenge confronting grid operation in the future. Connecting most distributed renewable energy and user-side energy management systems increases the uncertainty of supply and demand at the periphery of the power system, including the distribution network, microgrids, plants, buildings, and homes. The key to solving the problem is to handle these uncertainties locally. In the future, the grid must share this responsibility with the periphery. Most DERs are connected to the distribution network at all voltage levels (particularly at 110 kV and below). Thus, the distribution network serves as a power delivery system in which line power might flow bidirectionally rather than only from top to bottom. However, the existing distribution network is designed in a unidirectional manner, without the technical potential to effectively integrate most DERs, which means that the existing
grid can hardly accept a high proportion of distributed renewable energy. Therefore, "how to handle millions of distributed energy resources; solve the intermittency, variability, and uncertainty of renewable wind and solar generation; and meanwhile ensure the security, reliability, and personnel and equipment safety of the grid and stimulate the market" becomes the problem for the grid of the future. This task will be completed by the SG. From the perspective of the grid, the prime motivations for the SG include at least the following four aspects: improvement of the secure and stable operation level of the system (to resist disturbances and disruptions), including reduction of major power outage risk and enhanced capability for rapid restoration after catastrophic events; access to and full utilization of large amounts of DERs; advanced marketing and demand-side management; and the large requirements of a digital society for power supply reliability, power quality, and grid energy efficiency (EE). "Distributed energy resources + SG" focuses on local control and reduced external dependence, which is a disruptive change and exerts a huge influence on daily life and social and economic development.
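One way to see why a unidirectionally designed distribution network struggles with a high proportion of distributed renewables is a toy net-load calculation (the hourly profiles and function names below are illustrative assumptions, not measured data): whenever local PV output exceeds local demand, the feeder's net load turns negative and power flows back toward the substation.

```python
# Toy net-load calculation for one distribution feeder.
# Negative net load means reverse (bidirectional) power flow
# toward the substation, which a one-way design never anticipated.

def feeder_net_load(demand, pv_output):
    """Hourly net load = local demand minus local PV generation."""
    return [d - g for d, g in zip(demand, pv_output)]

def reverse_flow_hours(net_load):
    """Hours in which power flows back toward the substation."""
    return [h for h, p in enumerate(net_load) if p < 0]

if __name__ == "__main__":
    # Illustrative profiles (MW), hours 0..7 ~ morning to afternoon.
    demand = [3.0, 3.2, 3.5, 3.8, 4.0, 3.9, 3.6, 3.4]
    pv     = [0.0, 0.5, 2.0, 4.5, 5.5, 5.0, 2.5, 0.5]
    net = feeder_net_load(demand, pv)
    print(reverse_flow_hours(net))  # → [3, 4, 5]: midday hours with PV surplus
```

The same arithmetic, scaled to millions of feeders and devices, is what motivates the bidirectional monitoring and control that the SG is expected to provide.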
13.3 Essential Features and General Idea of SG
The essential technical characteristics of the SG are the two-way flow of electricity and information, in order to establish a highly automated and widely distributed energy exchange network, and the utilization of distributed computation, communication, and the Internet within power grids (Fig. 13.1), in order to realize the real-time exchange of information and a near-instantaneous balance between power demand and supply at the device level [5]. The general idea of the SG involves intelligence, high efficiency, inclusiveness, incentives, opportunity, emphasis on quality, strong anti-disaster ability (resilience), and environmental protection, not simply intelligence.
Fig. 13.1 Basic idea of SG
13 Facilitating Interdisciplinary Research in Smart Grid
The SG will strengthen all aspects of the power delivery system, including power generation, transmission, distribution, consumption, and so on. The SG will perform the following:

1. Provide large-scale situational awareness to facilitate the mitigation of grid congestion and bottlenecks and to reduce and prevent major power outages.
2. Provide “granular” system visualization for grid-operating personnel so that they can optimize power flow control and asset management and enable rapid post-accident restoration of the grid, that is, the self-healing ability.
3. Integrate and use a large number of DERs, especially renewable energy generation.
4. Enable power companies to advocate, encourage, and support consumer participation in the electricity market and demand response through two-way visibility, and support the growing use of EVs.
5. Provide opportunities for consumers to actively engage in energy consumption choices with unprecedented enthusiasm.
6. Solve the problem of cyber security.

The power infrastructure, created under centralized planning and control before the large-scale application of microprocessors, greatly limits the flexibility and efficiency of the power grid, resulting in risks in several key aspects such as security, reliability, and resilience. The power grid of the future will be connected to a large number of DERs whose output will become more and more difficult to predict accurately, making the traditional centralized control mode far less applicable. Therefore, the SG, and especially the smart distribution network, will comprise distributed intelligence. The topology of the distribution network of the future shall be flexible and reconfigurable. Furthermore, there will be reliable two-way communication wherever electricity is available. Starting from the underlying sensors and intelligent agents, the energy network and the information and communication network will be highly integrated.
Advanced metering infrastructure (AMI) solves the “last-kilometer” problem of power communication, providing electric power companies with unprecedented system-wide measurement and visibility. Its implementation can help electric power companies obtain unprecedented amounts of data. In addition to electricity bill measurement, these data can also be used to evaluate the equipment operation status, optimize asset utilization, prolong equipment life, reduce O&M costs, improve power grid planning, identify power quality problems, and detect and reduce electricity theft. Advanced distribution automation (ADA) using flexible AC/DC transmission and distribution devices and intelligent power electronic equipment will create the smart distribution network of the future. ADA is a revolution, not just an extension of traditional distribution automation. AMI and ADA are recognized as important basic function modules in the global implementation of SG. For a “smart” grid, the amount of information technology used in the distribution network will be the same as that used in the operation of the transmission network. In essence, the lifeblood of any SG is the data and information used to drive applications, which in turn makes it possible to develop new and improved operation
strategies. The data collected in any field of the power system, including power consumers, the electricity market, service providers, operation, generation, transmission, and distribution, may be relevant to improvements in other fields. Therefore, real-time data sharing with those participants who need to use, or have the right to know, the data in a timely manner is a basic element of the SG. The envisaged SG will change the way people live and work, as the Internet has, and stimulate similar changes. However, because of the complexity of the SG itself, it involves a wide range of stakeholders and requires a long transition, continuous R&D, and the long-term coexistence of multiple technologies. In the short term, we can focus on a smarter grid, using existing or upcoming technologies to make the current power grid more effective and create greater social benefits (such as improving the environment) while providing safe and reliable electricity. The relevant technologies of the SG can be divided into three categories, namely, SG technologies, technologies promoted by the SG, and technologies that create a platform for the SG. The applications are very extensive.

1. SG technologies. These include wide-area measurement (wide-area measurement system/phasor measurement units (WAMS/PMUs) and situational awareness) and control systems, the integration of information and communications technology (ICT), the integration of renewable and distributed generation, the application of transmission expansion, flexible network topology of the distribution grid, advanced distribution network management, AMI, consumer-side management systems, and so on.
2. Technologies promoted by the SG. These include wind turbines, PV power generation devices, plug-in EVs, green and energy-efficient buildings, and smart household appliances. EVs and green buildings may become the killer applications of the SG.
3. Technologies building platforms for the SG.
These include integrated communication technology, sensing and measurement technology, energy storage technology, power electronics, diagnosis technology, superconducting technology, and so on.

To accelerate the implementation of the SG, the best technologies and ideas, such as open architecture, Internet protocols, plug-and-play technology, common technical standards, non-specialization, and interoperability, will be applied. In fact, many of these technologies have already been applied in power grids; however, their potential can only be fully exploited with two-way digital communication and plug-and-play capabilities. A mature, healthy, and integrated electricity market must be established, including implementing time-of-use or real-time electricity prices to reasonably reflect the market value of electric energy as a commodity; formulating policies to encourage DERs to sell electricity back to the power grid, such as feed-in tariff policies for distributed clean energy; and formulating policies to ensure the recovery of SG investment costs in order to stimulate innovation in the SG.
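One AMI data use listed earlier in this chapter, detecting electricity theft, can be sketched as a simple screen for meters whose reported consumption deviates sharply from the local pattern. The meter IDs, readings, and z-score threshold below are all hypothetical; real screening would use richer features than a single daily total.

```python
# Illustrative AMI screening: flag meters reporting consumption far
# below the neighborhood average (a crude proxy for theft detection).

from statistics import mean, stdev

def flag_outliers(readings_kwh: dict[str, float], z: float = 1.5) -> list[str]:
    """Return meter IDs more than z standard deviations below the mean."""
    values = list(readings_kwh.values())
    mu, sigma = mean(values), stdev(values)
    return [m for m, v in readings_kwh.items() if v < mu - z * sigma]

# Made-up daily readings for five neighboring meters:
daily = {"m1": 12.1, "m2": 11.8, "m3": 12.4, "m4": 11.9, "m5": 2.0}
print(flag_outliers(daily))  # ['m5']
```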
13.4 A Grid as Smart as the Internet

The SG will become a more robust, autonomous, and self-adaptive fundamental infrastructure, capable of self-healing responses to disturbances such as terrorist attacks, military threats, component failures, and natural disasters, thereby greatly improving the security, resilience, and reliability of the power grid. From this perspective, a smart microgrid that is seamlessly integrated into the public grid is ideal. Numerous technical, cost, and social factors are converging to make microgrids almost certainly the biggest change in power infrastructure. Since microgrids can operate autonomously, there is an opportunity to create a very different future distribution system. Rifkin presents a vision of the energy Internet, in which hundreds of millions of people produce their own energy from renewables, store it locally in batteries or EVs in their homes, offices, and factories, and then share it with others through the grid. The application of Internet technology will transform the grid into an Internet where energy can be shared. The electric grid is considered the greatest invention of the last century, while the Internet is the greatest innovation of this century. The Internet is smart and can readily accommodate the fast-changing landscape of continuous disruptive information revolutions. In the new era of electricity, we would like the grid to be as smart as the Internet! [6]
13.4.1 Hierarchical and Layered Architecture of the Internet

The Internet is a network of sub-networks structured in a hierarchical manner with a number of tiers, as shown in Fig. 13.2a. At the top is the global Internet (NAP: network access point), followed by several tiers, including NSP (network service provider) backbones and ISP (local Internet service provider) backbones, with LANs (local area networks) or users at the bottom. The layered Internet architecture, known as the Transmission Control Protocol/Internet Protocol (TCP/IP) stack, is shown in Fig. 13.2b. The functions of the four layers are briefly described below:

1. Application layer. Users interact with the application layer. Electronic mail (Simple Mail Transfer Protocol, SMTP) is one Internet application; others include the World Wide Web (Hypertext Transfer Protocol, HTTP) and file transfer (File Transfer Protocol, FTP). The application program passes the message to the transport layer for delivery.
2. Transport layer. A message is usually divided into smaller packets, which are sent individually along with a destination address. The transport layer ensures that packets arrive without error and in sequence.
Fig. 13.2 Data flow path: (a) In the network structure. (b) In the protocol stack
3. Network layer. The network layer handles communications between machines. The packets are encapsulated in datagrams. A routing algorithm is used to determine whether the datagram should be delivered directly or sent to a router.
4. Physical layer. The physical layer takes care of turning packets containing text into electronic signals and transmitting them over the communication channel.

The message—in this case, the e-mail—starts at the top of the protocol stack on the sender’s computer and works its way downward. An upper layer uses the functions available in the next layer and instructs the next layer what to do. The
instruction is coded as a header added to the front of the message. Each layer adds a header on the way down. This process reverses on the receiving end: each layer reads and interprets the instruction from the header, strips the header intended for it, and moves the message up the stack. Figure 13.2b shows another example of the paths of an e-mail up and down the protocol stacks with two intermediate routers. Here, it is assumed that router D is the main server of the ISP and performs the store-and-forward function. The Internet is smart because the layered architecture provides a division of labor and the distributed control enables a sharing of responsibility. The responsibility for sending a message from A to B is shared by a number of routers along the path. The required intelligence of each router is simple and specific, i.e., forwarding the message to the next recipient correctly. Functional decomposition in a layered architecture makes it possible for new applications or functions to be added by utilizing and configuring existing lower-layer functions, so innovation becomes more readily achievable. Distributed control and the layered architecture also make the Internet resilient to disturbance and adaptable to technological advancement.
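The add-a-header-per-layer process described above can be sketched as follows. The layer names follow the four-layer stack in the text, but the bracketed header format is purely illustrative, not a real protocol encoding.

```python
# Minimal sketch of header encapsulation and stripping in a layered
# protocol stack (illustrative headers, not real TCP/IP framing).

LAYERS = ["application", "transport", "network", "physical"]

def send(message: str) -> str:
    """Each layer adds its header on the way down the stack,
    so the physical-layer header ends up outermost."""
    for layer in LAYERS:
        message = f"[{layer}]" + message
    return message

def receive(frame: str) -> str:
    """Each layer strips the header intended for it on the way up."""
    for layer in reversed(LAYERS):
        header = f"[{layer}]"
        assert frame.startswith(header), "malformed frame"
        frame = frame[len(header):]
    return frame

print(send("hello"))            # headers nested physical-outermost
print(receive(send("hello")))   # original message recovered
```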
13.4.1.1 The Hierarchical and Layered Architecture of Grids with Intelligent Periphery
The conventional operating paradigm leaves the fundamental responsibility of system operation, i.e., maintaining instantaneous power balance without overloads or abnormal conditions anywhere in the grid, to a single centralized decision-maker: the grid operator. The grids with intelligent periphery (GRIP) paradigm focuses on the periphery; it requires no scrapping of the successful energy management system (EMS) or of the well-functioning operating practices of the transmission system, but only simplifies them (by relieving them of responsibility for the periphery).
The Power System Structure Based on GRIP: A Hierarchy of Clusters

The future distribution systems, microgrids, and building units (buildings, factories, and homes) will all be like the transmission system today, with local generation and bidirectional power flows. Both the core (the transmission grid with real-time EMS) and the periphery (distribution grids, microgrids, building units) will be equipped with an EMS and maintain near-real-time power balance based on the concept of clusters. The interconnected transmission system cluster consists of several interconnected control areas, namely, the regional transmission system clusters. Each distribution system cluster consists of several interconnected microgrid clusters and/or building unit clusters, and a microgrid cluster may consist of several interconnected building unit clusters. The clusters are structured in a hierarchical manner, as shown in Fig. 13.3. The clusters will be organized in a nested manner if the graph in Fig. 13.3 is flattened.
Fig. 13.3 Layered and clustered (clusters nesting) architecture of grids
During the operation of the clusters, the exchange schedule between individual clusters is determined through central scheduling, and each cluster is required to maintain its internal net power balance and honor its external schedule, thereby maintaining the near-real-time power balance of all the clusters together. A cluster has three basic functions: (1) dispatch of generation/load to maintain net power balance; (2) local feedback control to smooth out fluctuations; and (3) mitigation of failures by cutting generation/load. The operating principles of clusters in GRIP are as follows:

1. Resource discovery and management. Each cluster autonomously reports its resource information to other clusters periodically and, at the same time, receives the updated resource information of its adjacent clusters. In this way, each cluster in the system can know of the existence of every other cluster.
2. Connectivity and responsibility sharing. Each cluster can flexibly operate in a connected or isolated mode. In the connected mode, clusters can achieve autonomous operation and share responsibilities (each cluster contributes its own capabilities to the entire power system as an energy consumer or energy supplier).
3. Efficient inter-cluster data sharing and interface protocol. Resource sharing is achieved by directly exchanging information between clusters (rather than through centralized scheduling). The interface protocol defines the formats that information must obey during communication and the meaning of those formats.

Figure 13.4a shows a part of the GRIP architecture. Power cannot be directed from one node to another node as data flows on the Internet. Nevertheless, it is logically possible to trace the power flowing from generation to consumption through sub-grids. The underlying theoretical basis for this logic is that power will be balanced (the difference between import and export power is constant) on the
Fig. 13.4 Power delivery from A to B in the hierarchy of GRIP clusters: (a) clusters nesting in the GRIP; (b) layered architecture of each cluster in the GRIP
whole grid without causing overload or other abnormal conditions if and only if the net power is balanced (the difference between the generation and load is constant) on any of the disjoint sub-grids of the grid whose union covers the whole grid without causing overload or other abnormal conditions. This is consistent with the definition of the cluster in the GRIP architecture, that is, a connected sub-grid of the grid, consisting of a cluster of generation, load, and prosumers with the intelligence to manage and control its net power balance. Two clusters cannot partially overlap, and the union of all clusters covers the whole grid.
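The cluster definition above (a connected sub-grid responsible for its own net power balance, with clusters nesting inside one another) can be sketched with a small recursive data structure. The cluster names and megawatt figures are hypothetical.

```python
# Hypothetical sketch of nested GRIP clusters: each cluster tracks
# its own generation and load, and its net power includes all
# nested sub-clusters.

from dataclasses import dataclass, field

@dataclass
class Cluster:
    name: str
    generation_mw: float = 0.0
    load_mw: float = 0.0
    children: list["Cluster"] = field(default_factory=list)

    def net_power(self) -> float:
        """Generation minus load, including all nested clusters."""
        own = self.generation_mw - self.load_mw
        return own + sum(c.net_power() for c in self.children)

    def is_balanced(self, tolerance_mw: float = 1e-6) -> bool:
        return abs(self.net_power()) <= tolerance_mw

# A microgrid cluster nested inside a distribution system cluster:
micro = Cluster("microgrid", generation_mw=2.0, load_mw=2.0)
dist = Cluster("distribution", generation_mw=5.0, load_mw=5.0,
               children=[micro])
print(dist.is_balanced())  # True
```

If every cluster in the hierarchy keeps its own `net_power()` at zero, the whole grid is balanced, which mirrors the "balanced iff each disjoint sub-grid is balanced" argument in the text.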
Layered Architecture of GRIP

The layered architecture of GRIP consists of three layers, namely, the market layer, the scheduling layer, and the balancing layer (Fig. 13.4b). Users of the grid—that is, prosumers—interact with the electricity market (sometimes called the power market) to share or trade electricity. The transaction must be scheduled and realized physically on the grid, and the net power of the cluster must be balanced at all times.

1. Market layer. The trading of electricity in the market must be realizable for implementation in the scheduling layer. Off-line analysis is used to translate cluster operating limits into constraints on the acceptable transactions the clusters can engage in.
2. Scheduling layer. A prosumer may participate in one or more of the day-ahead, hour-ahead, and real-time markets to maximize his/her benefit. Preparation work for scheduling must be done in advance to ensure, first of all, that the cluster has the ability to maintain net power balance at the time of execution of the transaction.
3. Balancing layer. The scheduling layer ensures that the net power of the cluster can be balanced within the time-step of the dispatch, which may be seconds or less.

Consider the generation/demand redispatch inside distribution system cluster D, as shown in Fig. 13.4b, to demonstrate the functional decomposition in the cluster.
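The market-layer rule above, in which off-line analysis translates cluster operating limits into constraints on acceptable transactions, can be sketched as a single feasibility check. The export limit and trade sizes are assumed numbers for illustration.

```python
# Illustrative market-layer acceptance test: a trade is acceptable
# only if the resulting cluster export stays within the limit that
# off-line analysis says the scheduling layer can honor.

CLUSTER_EXPORT_LIMIT_MW = 10.0   # assumed result of off-line analysis

def acceptable(scheduled_export_mw: float, new_trade_mw: float) -> bool:
    """True if the cluster can take on the new trade without
    exceeding its export operating limit."""
    return scheduled_export_mw + new_trade_mw <= CLUSTER_EXPORT_LIMIT_MW

print(acceptable(7.5, 2.0))  # True: 9.5 MW is within the 10 MW limit
print(acceptable(7.5, 3.0))  # False: 10.5 MW exceeds the limit
```

Because the check is a fixed interface contract, there are no back-and-forth negotiations: the market layer simply rejects any trade the scheduling layer could not realize.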
The Following Operating Principles Make the GRIP Resemble the Internet

1. The responsibility for maintaining power balance in a GRIP is shared among all clusters. Each cluster must maintain its net power balance, with all the responsibilities of scheduling, dispatching, balancing, and security. Each cluster must operate as an autonomous unit, sticking precisely to its announced schedules, even in the event of a disturbance such as a generation failure in the cluster. Each cluster must be able to prevent unscheduled power flows and acquire the necessary reserve from within or from the cluster one tier above. The reserve from the cluster above will be able to maintain that cluster’s net power balance and prevent the effect of the disturbance from traveling further.
2. In a layered architecture, the lower layer carries out tasks instructed by the upper layer, and the interface must be well defined. The market layer expects the scheduling layer to implement any transaction that is deemed acceptable by the scheduling layer; what constitutes an acceptable transaction must be well defined. The scheduling layer knows precisely the level of capability the balancing layer has in smoothing out fluctuations. There are no back-and-forth negotiations between the layers.
13.4.2 A GRIP Based on the Novel Paradigm Is as Smart as the Internet, Especially Suitable for Serving the Grid of the Future

1. Better utilization of variable DERs. The distributed operating paradigm will lead to maximal utilization of variable DER resources because the operation of variable DERs will rest in the hands of local stakeholders who have better knowledge to forecast, schedule, and control the resources.
2. Empowering prosumers. Prosumers will have complete control over the operation of their own generation and load and will have the incentive to install and operate the most efficient and effective facilities, such as solar PV, battery storage systems, EV charging systems, and ICT hardware and software.
3. Responsibility sharing with the periphery. In the new digital era, as hardware gets cheaper and software gets smarter, the periphery has a level of intelligence and capability similar to the grid operator’s in managing its own sub-grid.
4. Seamless integration of nano-, mini-, and microgrids. The responsibility-sharing feature of the new paradigm is compatible with the autonomous or semi-autonomous philosophy of today’s nano-, mini-, and microgrids and assists their seamless integration. Moreover, allowing semi-autonomous operation of clusters will prevent the total grid defection of prosumers.
5. Fast adaptation of technology innovations. The layered architecture of GRIP makes it easy to incorporate innovative new technologies.
13.5 Integration of Advanced Information Technology in SG

The SG is a new generation of power system built on the traditional power system, integrating renewables, new materials, new equipment, advanced sensing and monitoring control technology, information processing and communication technology, energy storage technology, and so on. It can realize all-round perception, digital management, intelligent decision-making, and interactive transactions across power generation, transmission, distribution, usage, and storage. Its goal is to fully meet users’ needs for electricity, optimize resource allocation, ensure the security and reliability of the power supply, guarantee power quality, and adapt to the development of the electricity market, so as to provide users with a reliable, economical, clean, and interactive power supply and corresponding value-added services. The most essential feature distinguishing the SG from the traditional power grid is that it spans all links of the power system (electricity generation, transmission, distribution, consumption, dispatching, the market, and DERs such as distributed generation, EVs, and energy storage); massive electrical equipment, data/information acquisition equipment, and computing equipment are interconnected through the power grid and communication networks; and physical networks and information networks are
highly integrated. The physical network covers the power physical network and the sensing and measuring devices integrated with it. The information network refers to the collection of interconnected computers, communication equipment, and other information and communication technologies that exchange information and share resources. Information will flow in and out of all links, and some information will also be integrated to support advanced applications of the SG. The latest information, communication, and computing technologies will be introduced and applied in the SG (Fig. 13.5). The SG will integrate the following:

• The physical power layer of the SG, which supports the intelligent sensing technology for state perception and interconnection of massive electrical equipment
• The communication network achieving data/information exchange and resource sharing in the SG, and its key technology—communication technology
• The information technology (big data, artificial intelligence, cloud computing, and so on) to process, analyze, and utilize the massive information of the SG to provide various services in the application layer
• SG Internet of Things technology integrating various technologies to improve wide-area interconnection and perception, and cyber-physical power system technology to promote the organic integration of the SG information network and physical network, SG cyber security technology, and so on

Its extension also includes information and communication hardware and software devices, information management, and the formulation of relevant standards, policies, and regulations.

1. SG Sensing Technology

Sensing technology is a modern science and engineering technology that obtains information from information sources and then identifies and processes that information. It is the sum of functions such as automatic detection and conversion of information.
Sensing technology is the key to and foundation of signal detection and data acquisition, involving sensing mechanisms, functional materials, manufacturing technology, packaging technology, processing technology, IoT communication technology, and so on. Sensing technology is the premise of realizing observability and controllability of the power grid and the basis of its digitization and intelligence. In the context of SG construction and development, it is necessary to accelerate breakthroughs in intelligent, reliable, and high-performance special sensing technology and to deeply strengthen the ubiquitous sensing ability of the SG.

2. SG Communication Technology

Communication technology refers to the methods and measures taken to transmit information from one place to another and is the sum of information transmission technology and signal processing technology in the communication process. The basic functional elements of communication technology include information transmission, information exchange, and terminal reception. SG communication
Fig. 13.5 Roadmap of critical information technology integrated into SG
technology needs to cover all links of the power grid’s physical network infrastructure and realize the real-time two-way high-speed flow of information to meet the functional requirements of the SG. It is the core and key technology supporting the construction and development of the SG. It is expected to establish an open and standardized
communication system with bidirectional, real-time, and reliable characteristics, which can integrate the large number of available resources in the existing grid communication system, accept and include various advanced communication technologies, and support the mixed use and coordination of multiple communication technologies. In terms of network architecture, SG communication comprises the power communications backbone network and the access network. It includes wired communications (such as optical fiber communication and power line carrier communication) and wireless communications (such as microwave communication, micro-power wireless communication, and 3G/4G/5G and NB-IoT communication). Among them, the power communications backbone network undertakes the transmission of large data flows, and the optical transport network (OTN), packet transport network (PTN), and automatically switched transport network (ASTN) based on the optical transport network have gradually replaced the traditional network. The most significant networking demands of the power communication access network are flexible access and plug and play. Power line carrier communication and wireless communication technology will be widely used in access networks.

3. SG Information Technology

The lifeblood of any SG is the data and information used to drive applications, which in turn make it possible to improve operation strategies. In order to achieve secure and efficient information interaction and sharing among the multiple entities in different links of the SG, effectively support the core functions of the SG, and maximize the value of power data, it is necessary to strengthen the research and application of next-generation information technologies such as big data, artificial intelligence, and cloud computing.

(i) Big Data Technology

Big data technology refers to the ability to quickly extract valuable information from various types of complex data.
With the advancement of power intelligence and informatization, a tremendous volume of data is being rapidly produced, and its application covers almost all fields of the power system. SG big data includes internal and external data. The internal data of the power grid includes massive data from the wide area measurement system (WAMS), supervisory control and data acquisition (SCADA) system, advanced metering infrastructure (AMI), production management information system (PMIS), energy management system (EMS), distribution management system, power equipment online monitoring system, customer service system, and financial management system. The external data of the grid includes data from weather information systems, geographic information systems, the Internet and public service sector, economic operation and development data, power demand data, and so on. The key supporting technologies for SG big data include data acquisition and preprocessing, data fusion, data storage, data processing, data analysis, data mining, data visualization, data privacy protection, and data security. At present, SG big data research and technology application are mainly divided into two aspects: one
is to use statistical and data mining methods to discover the physical essence and operating laws of the power system represented by the data, as well as the hidden relationships between the SG and its customers; the other is to apply emerging data-driven methods (such as machine learning, deep learning, and random matrix theory) to form more intelligent solutions.

(ii) Artificial Intelligence Technology

Artificial intelligence technology enables computers to imitate human logical thinking and advanced intelligence. It can be divided into three levels: computational intelligence, perceptual intelligence, and cognitive intelligence. Computational intelligence enables machines/computers to have high-performance computing power, even surpassing human computing prowess in processing massive data. Perceptual intelligence enables machines to perceive the surrounding environment as people do, including hearing, vision, touch, etc.; speech recognition and image recognition belong to this category. Cognitive intelligence enables machines to have human-like rational thinking ability and to make correct decisions and judgments. The integration of the three abilities will eventually enable machines to realize human-like intelligence to comprehensively assist or even replace human work. Applying artificial intelligence technology in the SG means deeply integrating artificial intelligence with the SG so as to support the development of the SG; realize the combination of intelligent sensing and physical state, data-driven methods and simulation models, and auxiliary decision-making and operation control; and effectively improve the ability to control complex systems. The key artificial intelligence technologies in the SG are divided into a basic layer, a technology layer, and an application layer. The basic layer involves the collection and operation of the basic data of artificial intelligence in the SG and other related technologies. The
The technology layer involves the development of AI algorithm and model construction of SG, including knowledge graph, swarm intelligence, and machine learning technology. The application layer involves the visual perception, language understanding, and cognition of artificial intelligence technology in SG as well as the application terminal control technology integrating the two. (iii) Cloud Computing Technology Cloud computing refers to the delivery and usage pattern of information infrastructure. It refers to obtaining the required resources (hardware, platform, and software) in an on-demand and easily extensible way through the network. The network providing the required resources is called cloud. The characteristics of cloud computing are that it can immediately increase storage and computing resources and enjoy timely and stable services without increasing hardware investment, purchasing and installing software, and managing cost. With the development of SG and the popularity of smart electricity meters and other devices, the amount of information that needs to be collected and processed in the grid increases significantly. It is necessary to make full use of the concurrency and distributivity of cloud computing to improve the data computing and processing efficiency and reduce the data storage and usage cost.
The application research of cloud computing in the SG covers all links in the operation of the grid. In power generation, cloud computing can provide corresponding solutions for storage-intensive and computing-intensive application systems such as wind power generation and solar power generation. In power transmission and distribution, cloud computing technology can provide a unified access service interface to accomplish data search, acquisition, and calculation, so as to help the grid data center improve equipment utilization, reduce the energy consumption of the data processing center, solve the problems of low server resource utilization and information barriers, and comprehensively improve the effectiveness, efficiency, and benefits of mass data processing in the SG environment. In power utilization, cloud computing can support profound changes in user management mechanisms and system operation modes, and provide comprehensive capabilities such as large-capacity elastic expansion, supercomputing, and service perception analysis for high-speed and real-time information flows, service flows, and energy flows.

4. Internet of Things in SG

The Internet of Things in the SG integrates advanced perception, communication, information, and other technologies: it uses perception technologies and intelligent devices to achieve ubiquitous perception of the power system; realizes equipment interconnection and data transfer through the communication network; then carries out computing, processing, and knowledge mining; and realizes interaction and seamless connection between people and equipment as well as between equipment and equipment, so as to achieve real-time control, accurate management, and scientific decision-making for the power system.
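The perception, communication, processing, and control chain just described can be sketched as a small pipeline. The sensor reading, transport label, per-unit base, and alert threshold below are all hypothetical.

```python
# Illustrative sketch of SG-IoT data flowing from field sensing
# through transfer and processing to an application-level decision.

def perception_layer() -> dict:
    """Collects a raw measurement from a field sensor (made-up)."""
    return {"sensor": "feeder-12", "voltage_kv": 10.4}

def network_layer(reading: dict) -> dict:
    """Transfers the data; here it just records the transport used."""
    return {**reading, "transport": "NB-IoT"}

def platform_layer(packet: dict) -> dict:
    """Manages and normalizes the data for applications."""
    packet["voltage_pu"] = packet["voltage_kv"] / 10.0  # assumed 10 kV base
    return packet

def application_layer(record: dict) -> str:
    """Creates value from the data, e.g. an over-voltage alert."""
    return "alert" if record["voltage_pu"] > 1.05 else "normal"

status = application_layer(platform_layer(network_layer(perception_layer())))
print(status)  # normal
```

Each stage answers one of the four problems named in the text: collection, transfer, management, and value creation.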
The Internet of Things in SG provides sufficient and effective technical support for grid planning and construction, production and operation, operation and management, comprehensive service, the development of new business models, the construction of an enterprise ecology, and so on. Architecturally, the Internet of Things in SG includes four layers: the perception layer, network layer, platform layer, and application layer. The perception layer solves the data collection problem, the network layer the data transfer problem, the platform layer the data management problem, and the application layer the data value-creation problem. Technically, it involves big data, cloud computing, the Internet of Things, the mobile Internet, artificial intelligence, blockchain, edge computing, and other information and intelligent technologies.
5. Cyber-Physical Power System Technology
A cyber-physical system is a complex system in which the mutual mapping, timely interaction, and efficient coordination of human, machine, object, environment, information, and other elements across physical space and information space are constructed by integrating advanced information and automatic control technologies such as perception, computing, communication, and control, so as to realize on-demand response, rapid iteration, and dynamic optimization of resource allocation and operation in the system. Wide-area sensing and measurement, high-speed information communication networks, advanced computing, flexible control, and other technologies are widely used in power networks. With more and more
13 Facilitating Interdisciplinary Research in Smart Grid
equipment interconnected through the grid and communication network, SG will continue to evolve into a cyber-physical power system. Through the feedback cycle between grid cyber space and physical space, the system realizes deep integration and real-time interaction to add or extend functions, and monitors or controls grid physical devices or systems in a secure, reliable, efficient, and real-time manner. A cyber-physical power system includes the power physics system, power communication system, and information decision system. The key technologies involved include architecture construction technology, unified modeling technology, security and reliability assessment technology, optimization and control technology, information support technology, cyber security technology, simulation technology, and so on.
6. Cyber Security Technology in SG
Cyber security technology is the sum of technologies that prevent accidental or unauthorized malicious disclosure, modification, and destruction of information. The meaning of cyber security is dynamic and changing, the cyber security situation keeps evolving, and the main cyber security technologies, such as firewall technology, vulnerability scanning technology, and intrusion detection technology, are constantly innovating. The operation of SG will depend significantly on two-way communication of information, with real-time information flowing in and out of all links. Information failure may seriously affect the secure and reliable operation of SG, so cyber security becomes particularly important. Therefore, there are higher requirements for key attributes such as confidentiality, integrity, and availability of information: almost "zero tolerance." Cyber security technology in SG includes cryptographic key technology, computer virus and virus defense technology, firewall technology, identity authentication technology, digital signature technology, cyber security standards, and so on.
To deal with the increasingly serious security threats in SG, comprehensive measures must be adopted from the perspectives of security technology, security architecture, and policies and regulations to improve the current cyber security level of the grid.
7. Other Technologies
Blockchain technology provides an open, transparent, and decentralized database. In terms of data generation, data resources are shared by all network nodes and updated by operators, while being supervised by all members of the system at the same time. In terms of data use, all participants can access and update the data and confirm its authenticity and reliability. The large number of intelligent generation, transmission, distribution, utilization, and energy storage devices in SG urgently needs blockchain technology to play an important role in measurement and certification, market transactions, collaborative organization, and energy finance. In the future, blockchain technology in SG will develop in the directions of improving the fault tolerance of asynchronous consensus networks, digital identity authentication, data security, and anti-attack capability. High-performance computing refers to parallel computing on a high-performance cluster composed of multiple processors. It can be divided into
high-throughput computing and distributed computing according to the coupling between the sub-processors. The nonlinear, real-time complexity of SG itself can lead to the curse of dimensionality in analysis and computation, which requires high-performance computing methods to provide effective solutions. Research on high-performance computing technology in SG is mainly divided into the hardware and software levels. The hardware level includes the optimal design of the network structure and the efficient configuration of node equipment in the SG environment. The software level includes the standardization of communication interfaces and the design of high-performance cluster management software.
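As an illustration of why the independence of such computations matters, the sketch below distributes a set of hypothetical contingency-screening cases across parallel workers. The case function, loading values, and the 1.0 per-unit limit are all invented for the example; a real study would call a power-flow solver per case, and an HPC cluster would use processes or nodes rather than the threads used here for brevity.

```python
from concurrent.futures import ThreadPoolExecutor

def screen_contingency(case_id: int):
    # Stand-in for one independent power-flow solution; cases do not
    # interact, which is what makes the problem easy to parallelize.
    post_contingency_loading = 0.6 + 0.05 * case_id  # invented numbers
    return case_id, post_contingency_loading

def run_screening(n_cases: int):
    # Distribute the independent cases across parallel workers
    with ThreadPoolExecutor(max_workers=4) as pool:
        results = dict(pool.map(screen_contingency, range(n_cases)))
    # Flag cases whose illustrative loading exceeds 1.0 per unit
    return [case for case, loading in sorted(results.items()) if loading > 1.0]

print(run_screening(12))  # cases 9, 10, and 11 exceed the limit here
```

Because each case is self-contained, the same pattern scales from one machine to a cluster by swapping the executor for a distributed one.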
13.6 Innovative Teaching Based on Advanced Information Technology
For professional courses in power systems, it is necessary to build an intelligent learning space combining virtuality and reality. Multimedia courseware is used to present real power system scenes, break through the limitations of time and space, introduce information technologies such as virtual reality, virtual simulation, and cloud platforms, build a twin network mapping the real power system, and support immersive experience and simulation analysis. The following steps are needed for innovative teaching.
1. Create virtual reality (VR) learning resources with empathy across time and space [7]. By wearing VR glasses, students can "walk" into the electricity museum, power system equipment and engineering sites, power system on-site operation scenes, and advanced SG demonstration projects from anywhere (the classroom, the dormitory, etc.), and travel back through history to immerse themselves in the spirit of scientists and engineers and in the reality of engineering, which greatly improves their learning enthusiasm and learning effect (Fig. 13.6).
2. Develop a virtual simulation platform and an international cloud sharing platform for microgrids. Select an autonomous small power system (microgrid) to reproduce the planning and design, energy management, and operation control
Fig. 13.6 Create virtual reality (VR) learning resources with empathy across time and space
Fig. 13.7 Virtual simulation platform schematic
Fig. 13.8 International cloud-based sharing platform
of an actual microgrid project in the form of virtual simulation, which supports equipment selection, parameter design, free transformation of operating conditions and scenes, etc. Research and develop the international microgrid cloud sharing platform and connect more than ten actual microgrid projects in China, the UK, and Japan, covering multiple scenes such as industrial parks, schools, and cities, so as to support virtual simulation based on the cloud platform microgrid and semi-physical hybrid simulation based on the local microgrid. Apply the two platforms in face-to-face teaching and students' practical operation, so as to deepen cognitive understanding and improve the teaching effect while giving students direct contact with the international frontier (Figs. 13.7 and 13.8).
3. Carry out process-oriented analysis of the learning situation, testing, and after-school feedback based on information tools, and conduct teaching and academic research on learner-centered big data.
13.7 Conclusion
Changes to research and teaching are happening at a time when the electrical energy system itself is experiencing its most dramatic transformation since its creation more than a century ago. SG will change the way people live and work, as the Internet has, and will stimulate similar changes. With Internet thinking employed to design SG, and advanced information technologies to promote SG and its educational innovation, the envisaged concept of the Internet of Energy can be expected to arrive soon.
References
1. Rifkin, J.: The Third Industrial Revolution: How Lateral Power Is Transforming Energy, the Economy, and the World. Palgrave Macmillan (2011)
2. Liu, Z., Li, Y.: Challenges for Developing Nationwide Interconnected Power Systems in China. In: IEEE Power Engineering Society Summer Meeting (2000)
3. Sun, B., Yu, Y.: Should China Focus on the Distributed Development of Wind and Solar Photovoltaic Power Generation? A Comparative Study. Applied Energy (2017). https://doi.org/10.1016/j.apenergy.2016.11.004
4. Zeng, Z., Zhao, R., Yang, H., Tang, S.: Policies and Demonstrations of Micro-grids in China: A Review. Renewable and Sustainable Energy Reviews (2013). https://doi.org/10.1016/j.rser.2013.09.015
5. Yu, Y., Liu, Y., Qin, C.: Basic Ideas of the Smart Grid. Engineering (2015). https://doi.org/10.15302/J-ENG-2015120
6. Liu, Y., Yu, Y., Gao, N., Wu, F.: A Grid as Smart as the Internet. Engineering (2020). https://doi.org/10.1016/j.eng.2019.11.015
7. Tokarev, A., Skobelin, I., Tolstov, M., Tsyganov, A., Pak, M.: Development of VR Educational Instruments for School Pre-professional Education in a Research University. Procedia Computer Science (2021). https://doi.org/10.1016/j.procs.2021.06.088
Yanli Liu entered Tianjin University in the fall of 2003. Studying electrical engineering as an undergraduate gave her a systematic body of professional knowledge and strengthened her determination to engage in power system research and practice. She received her master's and doctoral degrees in power systems in the field of electrical engineering from Tianjin University in 2009 and 2014, respectively. This experience gave her the ability to carry out scientific research independently and shaped her rigorous logical thinking. More importantly, after graduating as an undergraduate, under the guidance of her tutor, the initiator of smart grid in China, she was exposed to the frontier of the field: smart grid. In 2009, as the core person in charge, she organized and held China's first international academic forum on smart grid and chose the key smart grid technology of situation awareness as her doctoral subject. As the subject advanced, she gradually learned to track the frontier of the field with an international vision and to pursue breakthroughs in key technologies with an innovative spirit. After receiving her doctorate, Yanli Liu stayed at Tianjin University to teach and conduct research. Seeking truth from facts, daring to be the first, and striving for excellence are the principles guiding her in everything she does. Yanli Liu keeps enhancing her basic teaching skills and continues to promote teaching innovation and reform. She is now in charge of a national first-class course and has been named a national outstanding teacher. She has proposed a future grid system architecture, made breakthroughs in key technologies in the interdisciplinary field of "artificial intelligence + grid," and won the IEEE PES China Outstanding Women Engineering Award. Yanli Liu is committed to the reform of the talent training system and the internationalization of her discipline.
She has successfully organized the applications for two national-level platform bases and serves as their principal. Yanli Liu devotes her efforts to driving the growth and development of more young and female engineers in the energy sector while continuing to improve herself. She serves as the Chairman of Women in Applied Energy, the Secretary General of the IEEE PES China Chapter Committee of Young Experts, and the Deputy Secretary General of the First Committee of Working Women. She is not only the youngest but also the first female head of the Department of Electrical Engineering at Tianjin University. Owing to her excellence in leadership and comprehensive performance, Liu has won more than 30 important awards and honors, including National Young Post Expert (50 winners nationwide).
Part III
Operation, Automation and Control: Local Distribution Power Systems
Chapter 14
Substation Automation
Mini Shaji Thomas
14.1 SCADA Systems Supervisory control and data acquisition (SCADA) systems are used extensively in power systems, in all areas from generation, transmission, and distribution to customer services. The terminology "SCADA" is generally used when the process to be controlled is spread over a wide geographic area, as power systems are. New technologies and devices make it challenging for everyone to keep up with the developments in SCADA and automation in general, as with other emerging fields [1]. SCADA systems are defined as a collection of equipment that provides an operator at a remote location with sufficient information to determine the status of particular equipment or a process and to cause actions to take place regarding that equipment or process without being physically present. SCADA implementation involves monitoring (data acquisition) and remote control. Hence, an operator in a control room in a city should be able to observe the complete details of a system that may be located in a remote area or anywhere else. In power systems, monitoring fetches the values of voltage, current, power flow, switch positions, and many other parameters from the physical system to the control center, where they are displayed in real time in a manner desired by the operator. The control process can be automated so that a control command issued by the system operator is translated into the appropriate action in the field, an example being the command to trip a circuit breaker.
M. S. Thomas
Former Director, NIT Tiruchirappalli, Tiruchirappalli, India
Jamia Milia Islamia University, Delhi, India
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 J. S. Tietjen et al. (eds.), Women in Power, Women in Engineering and Science, https://doi.org/10.1007/978-3-031-29724-3_14
M. S. Thomas
Fig. 14.1 Components of SCADA systems (field equipment and RTU/IED in the remote field, master station and HMI in the control room, linked by the communication system)
SCADA is an integrated technology comprising the following four major components: (a) Remote terminal unit (RTU)/intelligent electronic device (IED), (b) Communication systems, (c) Master station, and (d) Human–machine interface (HMI). The RTU/IED is the "Eye, Ear and Hands" of a SCADA system, as it acquires all the field data from different field devices, processes the data, transmits the relevant data to the master station, and distributes the control signals received from the master station to the field devices. Communication systems are the channels deployed between the field equipment and the master station via different media such as hard wires, coaxial cables, fiber optic cables, or wireless technologies. The master station is a collection of computers, peripherals, and appropriate input/output (I/O) systems that enable the operators to monitor the state of the power system (or a process) and control it. HMI refers to the interface required for the interaction between the master station and the operators/users of the SCADA system. Figure 14.1 illustrates the components of a SCADA system.
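The acquisition-and-control cycle described above can be sketched in a few lines. Everything here is simulated in-process, and the point names, values, and command string are hypothetical; a real master station would exchange these messages with the RTU/IED over the communication channel using a SCADA protocol.

```python
class SimulatedRTU:
    """Stands in for the field device that answers master-station scans."""
    def __init__(self):
        self.points = {"bus_voltage_kV": 132.4,   # hypothetical point names
                       "line_current_A": 410.0,
                       "breaker_closed": True}

    def scan(self):
        # A real RTU/IED would read transducers and reply over the
        # communication channel; here we simply return the point table.
        return dict(self.points)

    def execute(self, command):
        # Supervisory control: a trip command opens the breaker.
        if command == "TRIP_BREAKER":
            self.points["breaker_closed"] = False

def master_station_cycle(rtu, overcurrent_limit_A=400.0):
    data = rtu.scan()                     # data acquisition
    if data["line_current_A"] > overcurrent_limit_A:
        rtu.execute("TRIP_BREAKER")       # remote control action
    return rtu.scan()

state = master_station_cycle(SimulatedRTU())
print(state["breaker_closed"])  # False: the trip command opened the breaker
```

The separation between `scan` (monitoring) and `execute` (control) mirrors the two halves of SCADA described in the text.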
14.2 Use of SCADA in Power Systems SCADA systems are used in all spheres of power system operations starting from generation, transmission, and distribution to the utilization of electrical energy. The SCADA functions can be classified as basic SCADA functions and advanced application functions. The basic SCADA functions include data acquisition, remote control, human–machine interface, historical data analysis, and report writing, which are common to generation, transmission, and distribution SCADA systems. The advanced application functions are specific to generation, transmission, and distribution, e.g., automatic generation control used in generation.
14 Substation Automation
Fig. 14.2 Use of SCADA in power systems (basic SCADA and substation automation (SA) feeding SCADA/AGC for generation, SCADA/EMS for transmission, and SCADA/DA/DMS, i.e., distribution automation and distribution management systems, for distribution)
Figure 14.2 demonstrates the use of SCADA in power systems, where the basic SCADA functions are represented by the initial block. Substation automation is the next block that is essential for all further applications, whether it is generation, transmission, or distribution. The top arm of the figure depicts the generation SCADA or SCADA/AGC (automatic generation control) used in all generation control centers, followed by the transmission SCADA or SCADA/EMS (energy management systems) implemented in all transmission control centers. The EMS software applications are the most expensive component of the SCADA/EMS, mainly due to the complexity of each application. The lower part of the figure shows the distribution functions superimposed on the basic SCADA functions, beginning with the SCADA/distribution automation (DA) systems and further expanding to the distribution management system (DMS) functions. As we move from left to right in the figure, the systems and application functions become more complex and expensive.
14.3 Substation Automation Power systems underwent a rapid change in the last few decades due to deregulation and competition in the market, where utilities have to compete with each other, selling electricity directly to customers. As expected, competition has also improved power quality and service reliability and lowered the cost of service. Since substations are the nodal centers of activity in an electric utility, especially for transmission and distribution, automating substations is absolutely essential, and many advances and huge investments have been made to automate them. The availability of various kinds of information from the system, which results in better and improved decision making, has also made utilities proactive toward substation automation. Substation automation involves the deployment of substation and feeder operating functions and applications ranging from supervisory control and data acquisition
(SCADA) and alarm processing to integrated volt/var control, in order to optimize the management of capital assets and enhance operation and maintenance (O&M) efficiency with minimal human intervention [2]. The development of IEDs such as protective relays, meters, and equipment condition monitoring IEDs has boosted substation automation deployment to a large extent. Utilities have realized the importance of the data captured by the IEDs and its enterprise-wide applications. The capabilities of multifunction IEDs and digital communication inside the substation have substantially reduced the space requirements when automating substations. More and more personnel have been trained, and many projects are being implemented all over the world.
14.4 Conventional Substations Conventional substation design historically relied on copper wires carrying signals from the field where the high-voltage switchgear was located. The substation control room contained a number of electro-mechanical relays and other devices for protection, SCADA, metering, etc. Figure 14.3 provides a schematic of a conventional substation with its associated wiring requirements. Conventional substations are labor intensive to implement and any modification is very expensive. Other issues include the resistance of the wires connecting the
Fig. 14.3 Conventional substation
Fig. 14.4 Islands of automation in a conventional substation
instrument transformers to the protection equipment, and current transformer (CT) saturation, which may affect protection relays operating under fault conditions. There were distinct islands of automation in a conventional substation, as illustrated in Fig. 14.4. As the protection functions have the highest priority, the CTs and potential transformers (PTs) in the field were directly connected to the relays in the control room with dedicated hardwires, and the relays to the trip coil of the circuit breaker. Digital fault recorders were installed separately to record the fault waveforms. Monitoring and control functions of SCADA were performed by a separate set of equipment, using hardwires. For each functionality, instrument transformers with iron cores were designed. As substations became automated, the merging of these islands of automation started. Elimination of duplication and hardwiring was the primary focus, and design teams started working together. This required combining the skills of RTU/IED support, protection, and communications into one [3].
14.5 The Integrated New Substation The new integrated substation creates a common platform for protection, metering monitoring, and many more functions, thus eliminating the islands of automation as illustrated in Fig. 14.5. The migration to the new substation was aided by the introduction of new smart devices in the substation and field, which are revolutionizing the automation of substations.
Fig. 14.5 Integration of functions in a substation
14.5.1 New Smart Devices for Substation Automation The introduction of new smart devices in the substation and field led to bringing “Intelligence” into the substations, thus reducing human intervention and improving operational efficiency. The prominent smart devices are relay intelligent electronic devices, instrument transformers with a digital interface, intelligent breakers, and merging units [4].
14.5.1.1 Relay Intelligent Electronic Devices (Relay IEDs)
Relay IEDs, which are replacing the conventional electro-mechanical relays and RTUs, are the major components for automating a substation. Substation integration involves integrating protection, control, and data acquisition functions into a minimal number of platforms to reduce capital and operating costs, reduce panel and control room space, and eliminate redundant equipment and databases. Relay IEDs can capture, process, and transmit operational data (SCADA data), which are the instantaneous values of volts, amps, MW, MVARs, and status data such as circuit breaker status and switch position. This data is time critical and is used to monitor and control the power system. IEDs are also capable of capturing nonoperational data (files and waveforms such as event summaries, oscillography event reports, or sequential events records) for post-event analysis and study. Figure 14.6 captures the details of an IED, which is time-synchronized from the GPS clock, and its functionalities.
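The distinction between operational and nonoperational data can be mirrored in a small data model. The field names, values, and file name below are hypothetical, chosen only to reflect the quantities listed in the text:

```python
from dataclasses import dataclass, field

@dataclass
class RelayIED:
    # Time-critical operational points: instantaneous values and statuses
    operational: dict = field(default_factory=lambda: {
        "volts_kV": 132.1, "amps_A": 350.0, "MW": 75.2,
        "MVAR": 12.4, "breaker_closed": True})
    # File-like nonoperational records kept for post-event analysis
    nonoperational: list = field(default_factory=list)

    def record_event(self, filename: str):
        # e.g. an oscillography capture or sequential-events record
        self.nonoperational.append(filename)

    def scada_scan(self):
        # Only the time-critical operational points go to the master
        return dict(self.operational)

ied = RelayIED()
ied.record_event("oscillography_event.dat")
print(len(ied.scada_scan()), len(ied.nonoperational))  # 5 1
```

The split matters because the two classes of data follow different paths out of the substation, as Sect. 14.5.3 describes.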
14.5.1.2 New Instrument Transformers with Digital Interface
New instrument transformers using capacitive, optical, and Rogowski techniques capture the voltage and current in the field, and show better accuracy and reliability in the hazardous environment of the power system field than
Fig. 14.6 Relay IED configuration and functionalities
using conventional iron core devices. The new instrument transformers have analog to digital converters and hence give digital output, eliminating hardwires.
14.5.1.3 Intelligent Breaker
The power of digital transformation is best evident in the intelligent circuit breaker, with a digital interface that can access digital data from a local area network and also transmit back information, like the status changes, through the same local area network. Intelligent breakers come with small controllers inside which can be programmed to take appropriate decisions as per the system conditions.
14.5.1.4 Merging Units (MU)
Merging unit is a new addition to the smart devices in the field. It is defined in IEC 61850-9-1 Communication Protocol as: “Merging unit: interface unit that accepts multiple analogue CT/VT and binary inputs and produces multiple time synchronized serial unidirectional multi-drop digital point to point outputs to provide data communication via the logical interfaces 4 and 5.” The MU functionality includes the signal processing of all sensors, synchronization of all voltage and current measurements, and providing analogue interface to signals and digital interface to switch positions, as shown in Fig. 14.7. MUs are synchronized with the GPS clock with 1 pulse per second (1pps) signals, and the group delay in the analog and digital signal processing is compensated.
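A rough sketch of the merging unit's output stream is given below. The 4000 samples per second rate (80 samples per 50 Hz cycle) follows a common IEC 61850-9-2 profile, but the frame layout here is a simplification for illustration, not the actual sampled-value encoding:

```python
import math

SAMPLES_PER_SECOND = 4000  # 80 samples per 50 Hz cycle

def sampled_value_stream(n_samples: int, amplitude_a: float = 1000.0):
    """Emit time-synchronized, sample-counted frames for one current channel."""
    frames = []
    for smp_cnt in range(n_samples):
        t = smp_cnt / SAMPLES_PER_SECOND      # aligned to the 1 pps tick
        current = amplitude_a * math.sin(2 * math.pi * 50 * t)
        frames.append({"smpCnt": smp_cnt,      # counter resets at each pulse
                       "i_a": round(current, 3)})
    return frames

stream = sampled_value_stream(80)  # one 50 Hz cycle of phase-A current
print(len(stream), stream[0]["smpCnt"])  # 80 0
```

A real MU would merge several CT/VT channels and binary inputs into each frame and compensate the group delay mentioned above before publishing.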
14.5.2 Levels of Automation in a Substation Substation automation and integration are generally implemented in three levels of activity, the first level being the IED implementation with data flowing from
Fig. 14.7 Merging unit (MU)

Table 14.1 Levels of substation automation

          Utility enterprise
Level III Substation automation functions
Level II  IED integration
Level I   IED implementation
          Power system equipment (transformers, breakers, etc., in the field)
the power system equipment (transformers, circuit breakers, intelligent instrument transformers, and sensors), as shown in Table 14.1. This level is time-consuming and labor intensive to implement, as the signals have to be brought in from the field to the IEDs. The second level is IED integration: IEDs, which may come from different vendors and have different functionalities, must perform their various functions in a cohesive manner. The third level comprises the application functions run by the substation automation system to effectively monitor and control the substation and the associated transmission/feeder and customer automation functions. These three levels feed information to the utility enterprise, where the different control centers in the utility can be integrated and utility-wide data sharing and applications can be run [5].
14.5.3 Architecture Functional Data Paths The substation transmits data from the field to the utility enterprise through three functional data paths: operational data to the SCADA master, nonoperational data to the enterprise data warehouse, and remote access to the IEDs. The most common data path conveys the operational data to the utility's SCADA system at the scan rate of the master (every 2–4 seconds). Other very important operational data in transmission substations is the phasor measurement unit (PMU) data, which is concentrated by the phasor data concentrator (PDC) and transmitted to the master at a much higher scan rate (every 10–100 milliseconds). This operational data path from the substation to the SCADA system is continuous. Transmitting the nonoperational data to the utility's enterprise data warehouse is challenging because the volume of data is large due to its nature (fault event logs, metering records, and oscillography). This data is also more difficult to extract from the IED relays, since the vendor's proprietary ASCII commands are required for nonoperational data file extraction; it is mostly pushed from the substation automation (SA) system to the data warehouse based on an event trigger or time. The remote access path to the substation utilizes a dial-in telephone or network connection; for example, data from a remote IED may be accessed by a substation field worker from the office. All three data paths, as shown in Fig. 14.8, are developed in the process of SA implementation.
Fig. 14.8 Functional data paths (operational, nonoperational, and remote access)
14.6 The Smart Digital Substation The new smart digital substation is an amalgam of the smart devices, smart communication systems, and HMI devices in the substation and will integrate the protection, metering, data retrieval, monitoring, and control capabilities. The digital substation is organized into three architectural levels: the process level, the protection and control level (bay level), and the station bus and station level. Figure 14.9 shows the picture of the modern digital substation with the equipment in the field (process level based on IEC 61850-9-2), the bay-level devices and communication and the station-level LAN and the associated devices and databases.
14.6.1 The Process Level The local bus that collects data from the primary equipment, which is embedded with smart sensors, is referred to as the process bus, as shown in Fig. 14.9. The process bus is a new introduction that came with the field deployment of substation equipment with digital interfaces, explained earlier. The data include current, voltage, phasor measurements, and other quantities (pressure/temperature from GIS, etc.) from the field devices (primary equipment), which travel to the SCADA control center via the process bus, while the control signals from the control station reach the field devices through the same bus. The protection relays, recorders, PMUs, bay controllers, and PDCs can readily subscribe to this data flow over the process bus as clients. In a fully digital architecture, control commands are also routed to the primary devices via the process bus for both protection and control.
14.6.2 The Protection and Control Level (Bay Level) The secondary equipment such as bay controllers, protection relays, Ethernet networks and switches, time synchronization units, measuring devices, and recording devices are included in the protection and control level. The bay level or the protection and control level includes these devices and the panels which host them.
14.6.3 The Station Bus and Station Level In a substation, the communication of data to a higher hierarchy originates from this layer. The IEDs perform the time-critical protection functions by directly interacting on the process bus. The point on wave switching is also done by the IEDs. The
Fig. 14.9 The new digital substation
station level will also host the substation HMI required to visualize the events in the substation so that personnel get real-time data of the operations. Coordination of multiple IEDs and the condition monitoring of substation equipment such as transformers, circuit breakers, and bushings are also managed by the station-level HMI via the station bus. The station bus supports peer-to-peer communication and allows multiple devices and clients to exchange data.
14.7 Substation Automation Architectures (Old and New Substation) Electric substations are the most critical infrastructure of the electric power systems as they monitor and control the widely spread transmission/distribution network. Automating a conventional substation will be different from the process of automating a new substation due to the availability of devices and communication channels [6]. Automating existing substations poses a serious challenge, as the existing equipment needs to be integrated/retrofitted with the new automation systems. Several alternatives are available for implementation depending upon the availability of equipment and software in the substation and the financial constraints. This automation can be done in stages so that the transition is smooth and the investment can be spread over a few years. In existing substations, there are several alternative approaches, depending on whether or not the substation has a conventional RTU installed. The utility has three choices for their existing conventional substation RTUs: integrate RTU with IEDs (assuming the RTU supports this capability); integrate RTU as another IED; and retire RTU and use IEDs and data concentrator as with a new substation. There is a need for trained staff well versed in old and new technologies for this transition. Automating a new substation is relatively easy. The design stage itself can start with a blank sheet of paper. IEDs will be implemented and integrated for performing different functions, and the majority of operational data for the SCADA system will come from these IEDs integrated with digital two-way communications. Conventional RTUs will be absent in the new substations. The RTU functionality is addressed using IEDs, PLCs, and an integration network using digital communications. Less than 5% of the points will be hard wired in a new substation. 
IEDs and data concentrators support both operational and nonoperational data, whereas RTUs were designed to support only operational data; hence, RTUs are now being replaced.
14 Substation Automation
14.8 Substation Automation Application Functions

Once substations are automated, a host of application functions can be implemented from the substation and at the enterprise level. The substation automation application functions include the following:

1. Integrated protection functions.
2. Automation functions performed on the substation bus bar and associated switchgear, including intelligent bus failover/automatic load restoration, adaptive relaying, and equipment condition monitoring.
3. Enterprise-level automation/application functions.

As discussed earlier, transmission/distribution SCADA provides the basic SCADA functions, such as monitoring and control, report generation, and historical data storage, plus a myriad of application functions specific to the substation automation scheme. The following sections elaborate on these application functions in detail [1].
14.8.1 Integrated Protection Functions: Traditional Approach and IED-Based Approach

In a traditional substation, the protection functions involving the relays, instrument transformers, and circuit breakers were implemented entirely with hardwired connections. In case of a breaker failure, the hardwiring carries the trip signal to the backup protection scheme, as shown in Fig. 14.10a. In a substation that has deployed IEDs, the information from the instrument transformers reaches the relay IEDs via LAN. The relays exchange information over the LAN (process bus), and the circuit breakers receive trip signals via Generic Object Oriented Substation Event (GOOSE) messages traveling on the process bus. In a modern protection scheme, backup protection after a breaker failure is initiated via the LAN, as shown in Fig. 14.10b, which reduces the wiring tremendously and provides alternate pathways. Other protection functions, such as automatic reclosing and bus differential schemes, can also be implemented without installing separate protection relays, improving both performance and reliability.
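The signaling pattern behind the LAN-based breaker-failure backup of Fig. 14.10b can be illustrated with a toy publish/subscribe model. This is only a sketch of the message flow: the class, topic names, and breaker labels are invented for illustration, and this is not an IEC 61850 GOOSE implementation.

```python
class ProcessBus:
    """Toy publish/subscribe bus standing in for the substation LAN."""
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, topic, handler):
        self.subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, payload):
        for handler in self.subscribers.get(topic, []):
            handler(payload)

opened = []                                   # record of breaker trip commands
bus = ProcessBus()
bus.subscribe("trip/CB1", lambda p: opened.append(("CB1", p)))
bus.subscribe("trip/CB_backup", lambda p: opened.append(("CB_backup", p)))

def clear_fault(bus, primary_opened):
    """Publish the primary trip; if the breaker fails to open, publish backup."""
    bus.publish("trip/CB1", "fault F1")
    if not primary_opened:                    # breaker-failure condition detected
        bus.publish("trip/CB_backup", "fault F1")
```

The same bus object carries both the primary and the backup trip, which is the point of the LAN-based scheme: backup does not require separate hardwiring.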
14.8.2 Automation Functions

The automation functions implemented from the substation vary with the type of substation (transmission and/or distribution). A few common distribution functions are discussed below, including intelligent bus failover/automatic load restoration, supply line sectionalizing, adaptive relaying, and equipment condition monitoring.

Fig. 14.10 (a, b) Protection via hard wiring and protection via GOOSE messaging using LANs
14.8.2.1 Intelligent Bus Failover/Automatic Load Restoration
In a distribution substation with two transformers and a normally open bus-tie breaker, the load automatically transfers to the healthy transformer if one transformer fails. Since this often overloads the remaining transformer and may lead to another failure, the scheme is generally disabled. In an intelligent bus failover scheme, however, the substation automation system can temporarily shed a few outgoing feeders to ensure that the healthy transformer is not overloaded. These feeders can then be supplied from an adjacent substation by closing a tie switch, minimizing the disruption in load. The primary benefit of this scheme is improved reliability, as the load is transferred as quickly as possible; outage duration can be reduced from 30 minutes to about a minute. Figure 14.11 shows a fault on transformer B. Circuit breakers 1 and 2 trip to isolate transformer B, and the bus sectionalizing breaker CB3 closes to transfer the load to transformer A. Because this may overload transformer A, some load connected to CB4 may have to be shed. In interconnected, automated systems, the load connected to CB4 can be transferred immediately to the adjacent substation by closing CB5 (components are labeled CB1 to CB5).
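The failover decision for Fig. 14.11 can be sketched as a small rule. This is an illustrative outline only, not a vendor algorithm; the function name, breaker labels, and MVA values are hypothetical.

```python
def bus_failover(load_a_mva, load_b_mva, rating_mva, tie_available=True):
    """Transformer B has failed; decide how to serve its load from A.

    Returns the switching actions and the load (MVA) that must be moved to
    the adjacent substation to keep transformer A within its rating.
    """
    actions = ["open CB1", "open CB2"]       # isolate faulted transformer B
    actions.append("close CB3")              # bus tie: A picks up B's load
    combined = load_a_mva + load_b_mva
    transfer = 0
    if combined > rating_mva:                # healthy transformer would overload
        transfer = combined - rating_mva     # shed this much via feeder CB4
        actions.append("open CB4")
        if tie_available:
            actions.append("close CB5")      # re-supply CB4 load from adjacent substation
    return actions, transfer
```

For example, with 20 MVA on bus A, 15 MVA on bus B, and a 30 MVA transformer, 5 MVA must be moved through the tie to the neighboring substation.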
Fig. 14.11 Intelligent bus failover demonstration
14.8.2.2 Supply Line Sectionalizing
In case of a fault, the high-side breaker in a distribution substation may disconnect all of the supply lines and the downstream substations, plunging many customers into darkness. If the faulted section is identified and isolated, however, supply can be restored to the rest of the substations and supply lines. Such actions improve reliability, since service to the de-energized substations/lines is restored as quickly as possible. Here, too, the outage duration is reduced from 30 minutes to 1 or 2 minutes.
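The isolate-and-restore sequence can be sketched for a radial supply line divided into numbered sections. This is a simplified illustration under assumed conditions (switch naming, an available normally open tie at the far end); it is not a general restoration algorithm.

```python
def sectionalize(n_sections, faulted):
    """Radial supply line with sections 0..n-1 fed from the section-0 side.

    Open the switches on both sides of the faulted section; upstream sections
    keep their normal supply, and downstream sections are restored through a
    normally open tie at the far end (assumed available). Returns the switching
    actions and the list of sections left energized.
    """
    isolate = ([f"open switch {faulted-1}-{faulted}"] if faulted > 0
               else ["open high-side breaker"])
    if faulted < n_sections - 1:
        isolate.append(f"open switch {faulted}-{faulted+1}")
    upstream = list(range(0, faulted))
    downstream = list(range(faulted + 1, n_sections))
    restore = ["close tie switch"] if downstream else []
    return isolate + restore, upstream + downstream
```

With four sections and a fault in section 1, only section 1 stays out; sections 0, 2, and 3 remain served.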
14.8.2.3 Adaptive Relaying
Generally, protection relay IEDs are set to pick up and trip at 20% overload. However, the settings can be altered for a brief period to help the power system get through a crisis. Adaptive relaying is the process of automatically altering the settings of protective relay IEDs based on system conditions; in the case of a main line or generator trip, this special protection function can help the operator save the system. In the event of the tripping of a critical generator, power may be diverted from other sources and a specific line may become overloaded. This could cause the line to trip, leading to the overloading of other lines and finally a complete system failure. The initial event can be reported by the master station to the corresponding SA system, which in turn can switch the appropriate relay settings to new values, allowing components to carry more power until the crisis is resolved. The relays are switched back to their original settings once the system returns to normal.
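The setting switch at the heart of adaptive relaying can be sketched as follows. The 20% normal pickup comes from the text above; the emergency setting, class name, and scaling are hypothetical values chosen only to illustrate the mechanism.

```python
NORMAL_PICKUP_PU = 1.20     # trip at 20% overload (per the text)
EMERGENCY_PICKUP_PU = 1.50  # hypothetical temporary crisis setting

class AdaptiveRelay:
    """Overcurrent relay whose pickup is switched by the SA system."""
    def __init__(self, rated_amps):
        self.rated = rated_amps
        self.pickup = NORMAL_PICKUP_PU * rated_amps

    def set_emergency(self, active):
        # SA system raises the pickup during a declared emergency,
        # then restores it when the system returns to normal
        factor = EMERGENCY_PICKUP_PU if active else NORMAL_PICKUP_PU
        self.pickup = factor * self.rated

    def should_trip(self, current_amps):
        return current_amps > self.pickup
```

A line carrying 130% of rated current would trip under the normal setting but ride through under the temporary emergency setting.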
14.8.2.4 Equipment Condition Monitoring
Some equipment in power systems, such as high-voltage transformers and circuit breaker bushings, is very expensive and can take months to replace after a breakdown or damage. Hence, field equipment is now maintained carefully, with operating parameters continuously monitored by the automation system. This practice, called equipment condition monitoring (ECM), detects abnormal operating conditions using specialized sensors and diagnostic tools, allowing electric utility personnel to take timely action to improve reliability and extend equipment life. Once ECM is operational, it can eliminate time-based routine testing, saving significant labor and material costs and reducing catastrophic failures. ECM monitoring devices include dissolved-gas-in-oil monitors/samplers, moisture detectors, load tap changer monitors, partial discharge/acoustic monitors, bushing monitors, circuit breaker monitors (GIS and OCB), battery monitors, and expert system analyzers.
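A minimal ECM alerting check over such sensor readings can be sketched as below. The quantities mirror the monitor types listed above, but the threshold values are illustrative placeholders, not standard limits from any loading or gas-analysis guide.

```python
# Illustrative alarm limits; real limits come from equipment-specific guides.
THRESHOLDS = {
    "dissolved_h2_ppm": 100,      # dissolved hydrogen in oil
    "moisture_ppm": 30,           # moisture in oil
    "hotspot_temp_c": 110,        # winding hot-spot temperature
    "partial_discharge_pc": 500,  # apparent partial-discharge charge
}

def ecm_alerts(readings):
    """Return the monitored quantities whose readings exceed their thresholds."""
    return [name for name, limit in THRESHOLDS.items()
            if readings.get(name, 0) > limit]
```

An expert-system layer would then interpret these alerts together (e.g., gas plus temperature trends) before recommending maintenance action.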
14.8.3 Enterprise-Level Application Functions

Many application functions can be implemented at the enterprise level once the substation is automated, as the final level of automation.
14.8.3.1 Disturbance Analysis
IEDs can record fault waveforms, and SCADA data can be time stamped by the IEDs. The time-stamped operational data can be used to recreate the "sequence of events" (SoE) after a disturbance, which provides a great deal of insight, since time stamps are generally recorded to the millisecond or better. This helps operators and planners assess the situation and take corrective action before or during the next disturbance. For example, when a section of the power system islands, faults, line tripping, overloading/underloading, and generator trips may occur in quick succession. After the islanding, utility personnel want to be able to assess the situation and establish the correct sequence of events. Time stamping of events and of analog or status changes immensely helps the utility recreate the event and thoroughly analyze the conditions that led to the islanding. The captured fault waveforms can be used by the maintenance and protection departments to assess the severity of the damage and take corrective action.
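At its core, SoE reconstruction merges time-stamped records from many IEDs and orders them by timestamp. The sketch below shows only that merging step, with hypothetical device names and millisecond timestamps.

```python
def sequence_of_events(*ied_logs):
    """Merge per-IED logs of (timestamp_ms, device, description) tuples
    into a single chronologically ordered sequence of events."""
    merged = [record for log in ied_logs for record in log]
    return sorted(merged, key=lambda record: record[0])

# Hypothetical logs from a feeder relay IED and a breaker IED
relay_log = [(1000, "relay-F3", "overcurrent pickup"),
             (1042, "relay-F3", "trip issued")]
breaker_log = [(1078, "CB-F3", "breaker open")]
soe = sequence_of_events(relay_log, breaker_log)
```

Accurate reconstruction depends on all IEDs sharing a common time reference (e.g., a GPS clock), since the events of interest are separated by tens of milliseconds.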
14.8.3.2 Intelligent Alarm Processing
Intelligent alarm processing is of the utmost importance in a control room, to ensure that the operator is not confused by the battery of alarms triggered by a single event. Alarm processing technology ensures that dispatchers receive only those alerts relating to events that must be addressed immediately, while the details of less critical secondary warnings are sent to databases, possibly for later review. With only the most important distribution system alarms presented in a prioritized fashion, dispatchers can assess problems more easily and make better decisions to prevent a bad situation from getting worse.

In the SCADA/EMS used in transmission, alarms are triggered less frequently and only during actual outage events. In distribution SCADA, however, faults are more prevalent because distribution lines are more exposed and thus more vulnerable to natural forces. Alarms are typically triggered by faults and the events surrounding them, which occur continuously during routine operations of the distribution system. As an example, up to seven alarms are triggered when a feeder breaker in a substation trips: one indicating that the breaker has tripped, plus three alarms each for the voltages and currents on all three phases dropping to zero. If the breaker is automatically reclosed after a transient fault, so that the situation resolves itself, the dispatcher needs only the breaker trip alarm.

With audible and visual alarms inundating the control room throughout the day, dispatchers wanted the SCADA system to suppress secondary alarms, since only the primary alarms require operator action. Hence, alarm filtering techniques were developed, some of which can be configured during SCADA implementation or activated on demand during a crisis. Various techniques are available for alarm processing and suppression, including knowledge-based suppression techniques.
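The breaker-trip example above can be expressed as a simple suppression rule: when a feeder's breaker-trip alarm is present, the six secondary zero-voltage/zero-current alarms for that feeder are logged rather than presented. This is a hedged, minimal sketch of one filtering rule, with invented alarm kinds, not a production alarm processor.

```python
SECONDARY = {"voltage_zero", "current_zero"}   # illustrative secondary kinds

def filter_alarms(alarms):
    """alarms: list of (feeder, kind, phase) tuples.
    Returns (presented, logged): what the dispatcher sees vs. what is
    stored in the database for later review."""
    tripped = {feeder for feeder, kind, _ in alarms if kind == "breaker_trip"}
    presented, logged = [], []
    for alarm in alarms:
        feeder, kind, _ = alarm
        if kind in SECONDARY and feeder in tripped:
            logged.append(alarm)       # suppressed: explained by the trip
        else:
            presented.append(alarm)    # primary, or unexplained secondary
    return presented, logged
```

Note that a zero-voltage alarm on a feeder with no trip alarm is still presented, since it is not explained by any primary event.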
14.8.3.3 Power Quality Monitoring
The influx of power electronic devices, which inject harmonics and ripples into the system, has deteriorated the quality of power. At the same time, quality power is absolutely essential for precision manufacturing and sensitive appliances. IEDs can capture the harmonics in the voltage waveforms, the total harmonic distortion, and related quantities, and report them to the substation control room. The substation automation system, with its IEDs, can capture and convey oscillogram information to the monitoring center for assessment, and suitable corrective actions can be initiated in time to maintain power quality.
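A standard power-quality index the IEDs report is total harmonic distortion (THD), the RMS of the harmonic components relative to the fundamental. The sketch below computes it from one full cycle of samples using a plain DFT; it is an illustrative calculation, not an IED's measurement chain (real meters follow specific windowing and grouping rules).

```python
import math

def thd(samples):
    """THD = sqrt(sum of V_h^2 for h >= 2) / V_1, from one cycle of samples."""
    n = len(samples)

    def mag(h):
        # magnitude of harmonic h from a direct DFT correlation
        re = sum(s * math.cos(2 * math.pi * h * k / n) for k, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * h * k / n) for k, s in enumerate(samples))
        return 2 * math.hypot(re, im) / n

    fundamental = mag(1)
    harmonics = math.sqrt(sum(mag(h) ** 2 for h in range(2, n // 2)))
    return harmonics / fundamental

# one cycle of a fundamental plus a 10% fifth harmonic
wave = [math.sin(2 * math.pi * k / 256) + 0.1 * math.sin(2 * math.pi * 5 * k / 256)
        for k in range(256)]
```

For this waveform the computed THD is approximately 0.10, matching the injected 10% fifth harmonic.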
14.8.3.4 Real-Time Equipment Monitoring/Dynamic Equipment Rating
Typically, system managers load power system equipment to its rated capacity under normal conditions, or even less. However, if the status of the equipment is continuously monitored (equipment condition monitoring, ECM), the loading can be based on actual conditions rather than on conservative assumptions. For instance, a transformer with a detected "hot spot" would normally always be loaded well below its rating, for fear of a catastrophic failure. With ECM in place, the transformer can be loaded to a higher value by monitoring the true winding hot-spot temperature. It is reported that 5-10% additional loading can be achieved in this way, as the actual loadability can be derived. Hence, the utility can "squeeze" more capacity out of existing equipment and delay investment, with large benefits and improved availability. The same concept is utilized in adaptive relaying, where relay settings are altered temporarily to tide over a crisis.

Substation automation thus provides the basis for better monitoring and control of power systems and implements many application functions that are critical for better system management, savings, and reliability. Communication within the substation is critical, especially in an all-digital substation; the International Electrotechnical Commission standard IEC 61850 is the communication protocol used at all levels, all over the world [7]. Research is underway to improve substation automation in multiple ways, and cybersecurity concerns are also being addressed [8, 9].
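The dynamic rating idea described above can be sketched as a rule that scales allowable loading by the margin between the measured hot-spot temperature and its limit. The linear scaling, the temperature limit, and the 10% cap (the upper end of the range reported in the text) are illustrative assumptions, not a thermal model from any loading guide.

```python
HOTSPOT_LIMIT_C = 110.0   # assumed design hot-spot temperature limit
GAIN_PER_DEG = 0.01       # assumed: 1% extra loading per degree of margin
MAX_BOOST = 0.10          # cap the benefit at 10% (text reports 5-10%)

def dynamic_rating(nameplate_mva, hotspot_c):
    """Allowable loading (MVA) given the measured hot-spot temperature."""
    margin = HOTSPOT_LIMIT_C - hotspot_c
    if margin <= 0:
        return nameplate_mva              # no boost at or above the limit
    boost = min(GAIN_PER_DEG * margin, MAX_BOOST)
    return nameplate_mva * (1 + boost)
```

A 60 MVA transformer running 5 degrees below its hot-spot limit could carry about 63 MVA under this rule, while one at or above the limit stays at nameplate.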
14.9 SA Practical Implementation: Substation Automation Laboratory

Although the automation of power systems is being carried out throughout the world, researchers' and students' access to such facilities is limited, and both literature to study and equipment for experiments were meager. With new IEDs, communication standards, and much more being introduced in power automation, there was a need to set up a SCADA laboratory to introduce students to the basics of power automation. This task was accomplished at Jamia Millia Islamia with the establishment of a first-of-its-kind, unique SCADA Laboratory in 2003. The facility is used extensively by students and by industry personnel to build a basic understanding of SCADA systems and to enhance students' practical knowledge and application of SCADA systems [10]. The need for a substation automation (SA) laboratory to equip students with additional knowledge about relay IEDs, GPS clocks, communication protocols, and the retrofitting of substation equipment was felt in 2008, and its implementation started then. The various modules of the SA laboratory were designed with the integration of the latest technology and the existing infrastructure in the substation automation area in mind. These modules help students understand IEC 61850, interoperability, and the substation migration process [11].
Fig. 14.12 The substation automation laboratory setup
A variety of relay IEDs have been deployed in the design of the lab to cover all aspects of system protection. The relay IEDs are of the differential, distance, and bay controller types, come from different vendors, and communicate over different protocols, as shown in Fig. 14.12. A universal test kit for testing the various devices, a GPS clock, and a protocol converter for converting legacy protocols to the IEC 61850 protocol are also incorporated. Later, in 2015, a complete wide-area monitoring and control system (WAMS) with two PMUs and a phasor data concentrator (PDC) was installed to incorporate developments in smart transmission [12].
14.10 Summary

This chapter discussed the transformation of substations from hardwired, RTU-centric systems to the new process-bus-based digital substation. Earlier substations were islands of automation for protection, metering, SCADA, etc.; the integration of these functions in the modern digital substation with multifunction IEDs was discussed in detail, and the substation automation application functions were presented with examples. The chapter concluded with a short description of the substation automation laboratory set up to demonstrate substation automation functionalities.
References

1. Mini S. Thomas and John D. McDonald, Power System SCADA and Smart Grids, CRC Press, Taylor & Francis, USA, 2015.
2. J. D. McDonald, "Substation automation, IED integration and availability of information," IEEE Power & Energy Magazine, Vol. 1, No. 2, pp. 22-31, March/April 2003.
3. Klaus-Peter Brand, Volker Lohmann, and Wolfgang Wimmer, Substation Automation Handbook, Utility Automation Consulting Lohmann, 2003 (http://www.uac.ch), ISBN 3-85758-951-5.
4. Richard Hunt, Byron Flynn, and Terry Smith, "The substation of the future: Moving toward a digital solution," IEEE Power and Energy Magazine, Vol. 17, No. 4, July/August 2019.
5. John D. McDonald, Electric Power Substation Engineering, Third edition, CRC Press, 2012.
6. James Northcote-Green and Robert Wilson, Control and Automation of Electrical Power Distribution Systems, CRC Press, 2007.
7. Klaus-Peter Brand, "The standard IEC 61850 as prerequisite for intelligent applications in substations," IEEE PES General Meeting panel, Denver, 2004.
8. Yongtian Jia and Liming Ying, "Multi-dimensional time series life cycle costs analysis of intelligent substation," IEEE Transactions on Cognitive and Developmental Systems (Early Access), 2022, https://doi.org/10.1109/TCDS.2022.3147253.
9. Muhammad M. Roomi, Wen Shei Ong, S. M. Suhail Hussain, and Daisuke Mashima, "IEC 61850 compatible open PLC for cyber attack case studies on smart substation systems," IEEE Access, Vol. 10, 2022, https://doi.org/10.1109/ACCESS.2022.3144027.
10. Mini S. Thomas, Parmod Kumar, and V. K. Chandna, "Design, development and commissioning of a supervisory control and data acquisition (SCADA) laboratory for research & training," IEEE Transactions on Power Systems, Vol. 20, pp. 1582-88, Aug 2004.
11. Mini S. Thomas and Anupama Prakash, "Design, development & commissioning of a substation automation laboratory to enhance learning," IEEE Transactions on Education, Vol. 54, No. 2, May 2011, pp. 286-293.
12. Mini S. Thomas, "Remote control," IEEE Power & Energy Magazine, Vol. 8, No. 4, July/August 2010, pp. 53-60.
Mini Shaji Thomas, Ph.D., started her scholastic journey in a small village in Kerala, at the southern tip of India, where she did her schooling. In those days, parents in Kerala, a state with 95%+ literacy, wanted their children to pursue higher studies in engineering or medicine, and Mini chose engineering, as she was not keen to become a doctor. She graduated in Electrical Engineering (as she loved physics, magnetism, and electricity) from the University of Kerala with a gold medal and went on to complete her Master's at the Indian Institute of Technology (IIT), Madras, where she won the gold medal and the Siemens Prize. She landed a coveted teaching job at the National Institute of Technology (NIT Calicut), Kerala, where she looked forward to a secure and fulfilling career as a teacher, her dream job. But destiny willed otherwise. Mini resigned from NIT Calicut a year later and moved to Delhi, the capital of India, 3000 km away, to join her husband, and IIT Delhi welcomed this brilliant student into its fold as a Ph.D. scholar in Electrical Engineering (power systems). She received her doctorate in 1991 at the age of 29. She joined the Delhi College of Engineering as a faculty member and then joined Jamia Millia Islamia (JMI) in 1995. In her two-decade-long stint as Professor, and then Head of the Department of Electrical Engineering at JMI, Mini established the first-of-its-kind Supervisory Control and Data Acquisition (SCADA) and Substation Automation (SA) Laboratories on campus. She designed and developed the curriculum and started a new Master's program in Electrical Power System Management in close coordination with major power industries. Mini has done extensive research in the areas of SCADA systems, substation and distribution automation, and the smart grid. She has published over 150 research papers in international journals and conferences of repute, supervised 16 Ph.D. students, and successfully completed many research projects worth over Rs. 4 crores. She authored the textbook Power System SCADA and Smart Grids, published by CRC Press, Taylor & Francis, USA. Mini was the Founder-Director of the Centre for Innovation and Entrepreneurship, and her path-breaking initiatives have put the Department and JMI on the map among premier institutes. In 2016, the National Institute of Technology, Tiruchirappalli (NIT Trichy), one of the top 12 engineering institutions in India, welcomed Mini as its first woman director in 50 years. She shifted base to the NIT campus, 2500 km to the south, relying on her strong support system of husband, parents, and in-laws to hold the fort for her in Delhi. Mini charted an ambitious growth path for NIT Trichy in her new capacity. Under her guidance and leadership, NIT Trichy, in collaboration with Siemens Industry Software, set up a first-of-its-kind Centre for Excellence in Manufacturing in 2018 with an investment of Rs. 190 crores. The institute grew exponentially in research publications, projects, consultancy, and patent filings, and rose to 9th among engineering institutions in India in 2021. She completed her 5-year term at NIT Trichy in November 2021 as a successful administrator and is back in Delhi, teaching at JMI.
Mini is currently on the Board of Directors of the IUSSTF (Indo-US Science and Technology Forum) and was President of the Shastri Indo-Canadian Institute (SICI), 2020-21, a binational organization supported by the Government of India that promotes understanding between India and Canada through academic activities and exchanges. Mini won the IEEE EAB Meritorious Achievement Award in Continuing Education for "Design and development of curriculum and laboratory facilities for professionals and students in the electric utility industry" from the IEEE Educational Activities Board, USA, in 2015, among many other awards and recognitions. She is a certified trainer for "Capacity building of women managers in higher education" and conducts regular training sessions on women's empowerment. Mini's eyes light up when she speaks of her foremost passion, teaching. She dreams of a day, not too far in the future, when teaching becomes the profession of choice for the youngsters of today. Mini credits her involvement with IEEE, and the mentorship she received from some of the most brilliant minds in the industry, for much of her professional success. She has traveled extensively, delivering lectures at top universities across the globe.
Chapter 15
Electric Power Distribution Systems: Time Window Selection and Feasible Control Sequence Methods for Advanced Distribution Automation

Karen Miu Miller and Nicole Segal
This chapter discusses a framework for moving from static analysis and control tools toward more frequent, data-enhanced, and system-need-driven tools. For example, identifying critical times and time windows for power distribution system analysis will enable computationally tractable methods to support distribution operations, automation, and control. In addition, when critical operating conditions are predicted or encountered, distribution automation and control algorithms must provide physically realizable solutions to improve system behavior. This chapter also discusses tools that provide feasible control sequences for distribution operations. Combining these two approaches, better analysis and control tools are expected to facilitate the operation of power distribution systems with large numbers of distributed energy resources.
15.1 Introduction

Electric power distribution systems serve as the direct connection between consumers of electricity and one or more sources of electrical energy. These systems include the devices that consume and produce electricity at the same or different locations, as well as all the devices that interconnect consumers and sources, including cables, switchgear, power conversion devices, and monitoring and control devices.
K. M. Miller () Drexel University, Philadelphia, PA, USA e-mail: [email protected] N. Segal United States Federal Government, Washington, DC, USA © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 J. S. Tietjen et al. (eds.), Women in Power, Women in Engineering and Science, https://doi.org/10.1007/978-3-031-29724-3_15
Different types of electric power distribution systems exist. Terrestrial power systems are electric utilities and cooperatives that distribute electricity to individual residences, commercial entities, and industrial sites. Stand-alone microgrids exist, typically in industrial and commercial developments as well as some university campuses, and provide electricity to select remote communities. Shipboard or maritime power systems involve electric power delivery within vessels such as civilian cruise ships and military platforms. Vehicular power distribution systems focus on cars, trucks, buses, etc., which are experiencing a steady shift from traditional fuel to hybrid and all-electric drive. The amount of power, the type of power, and the voltage levels at which these systems operate typically lead to slightly different modeling and analysis approaches. This chapter presents a general framework for electric power distribution system studies and focuses on terrestrial power systems.

Power distribution systems interconnect multiple consumers into an energy community. Historically, in terrestrial power systems, the first miles of power lines built were distribution systems. In the very early years, these systems reflected the debate between direct current (DC) systems (Edison) and alternating current (AC) systems (Westinghouse). As electrification of many processes occurred, both industrial and residential, the amount of power and the distance it had to travel between the sources and the physical locations of the consumers significantly increased. This led to the adoption of high-voltage, three-phase AC transmission systems to achieve energy-efficient bulk power delivery to local distribution systems.
Then, these distribution systems, dominantly multi-phase (single-, two-, and three-phase) AC systems, distributed the power to individual consumers (residential, commercial, and industrial customers).

Our power distribution energy community can be visualized as shown in Fig. 15.1. Since an electric power distribution system is an engineering infrastructure system, designers/planners and developers typically target system lifespans ranging from years to several decades. This is in contrast with distribution operators, who must monitor and control electric power systems within much shorter time spans of minutes to months. This time disparity between operators and planners has resulted in different types of analysis tools being deployed for power distribution systems.

Fig. 15.1 Typical time horizons of energy community members: e.g., time-frames T1 ~ minutes, T2 ≤ 1 hour, T3 ~ 24 hours to days, T4 ~ months to years

Fig. 15.2 A generic single-phase diagram for a power distribution system with nodal consumer interconnects: net injection at a node represents generation (flow into the system) minus demand (flow out of the system) – the resulting value is seen as a “net load” to the power distribution system

Individual consumers are now expanding their roles in the energy community. They are changing from typically passive users of electricity to planners as they install or purchase solar energy systems, smart thermostats, smart HVAC (heating, ventilation, and air conditioning) units, and electric vehicles. These types of component purchases are typically multi-year investments. Figure 15.2 reflects a distribution system where customers are both supplying electricity to and withdrawing electricity (load) from the system. The net power injection at a node is the total electricity generation minus the electricity demand there; since any node interconnects the net of those supplies and withdrawals, the injection can be a positive or negative number.

Analysis and control tools have evolved for electric power distribution systems. First, prior to remote monitoring of customer loads, rule-of-thumb methods and approximate load distributions within a circuit were utilized [1]. Then, as circuit and power theory grew in application, formal mathematical problem formulations specific to power distribution system analysis emerged, e.g., [2]. These techniques are different from bulk power system tools [3-6]. Electric power distribution tools utilize unbalanced system and component models. While this equates to a more computationally intensive process, it preserves individual consumer connections and/or limits the amount of customer and power network aggregation. This is especially critical as power systems expand to include electric vehicles and other distributed energy resources, such as residential solar energy systems, which are single phase. Thus, system levels of imbalance may significantly increase with consumer-driven technology adoptions. In the past, the combination of systemic design to interconnect components with historical load behaviors and utility-owned control devices could maintain parameters within necessary (standard measures of) imbalance margins.
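One common scalar measure of such imbalance is the deviation of the three phase-voltage magnitudes from their average, relative to that average (the NEMA-style definition). The sketch below computes it in percent; the voltage values in the example are hypothetical.

```python
def unbalance_percent(va, vb, vc):
    """Voltage unbalance (%) from three measured magnitudes:
    maximum deviation from the average, divided by the average."""
    avg = (va + vb + vc) / 3
    return 100 * max(abs(v - avg) for v in (va, vb, vc)) / avg
```

Phase magnitudes of 240, 230, and 220 V, for instance, give an unbalance of about 4.3%; a perfectly balanced set gives 0%.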
Now, our electric utility systems are undergoing significant expansions where the usage and injection of energy is less predictable (determined by human behavior and user adoption rates). This will challenge the assumptions of balanced transmission system loads (in particular, those representing aggregate distribution systems) and of voltage levels made in bulk transmission system studies. These assumptions on aggregate power distribution system conditions are required for the safe operation of large electric machines/generators. Consequently, distribution system analysis and control tools must adapt to capture the level of imbalance and, if necessary, control to correct it.

While the required unbalanced modeling adds to the complexity of analysis and control tools, some distribution system characteristics assist computation. First, distribution systems are often operated in a radial manner (i.e., with no loops), so graph theory is often utilized to determine net power flows and to decompose larger systems for distributed analysis and control, e.g., [7-14]. Second, many incoming technologies are inverter connected. These are fast-acting power electronic converters, typically operating within microseconds versus the hundreds-of-milliseconds response times of equivalently rated electro-mechanical components. Thus, individual controllers are fast-acting and, within larger system studies, are often viewed as discrete controllers and represented as controllable real power, P, and voltage magnitude, |V|, nodes, e.g., [15, 16]. As such, when control actions must be decided and selected, system constraints, rather than the dynamics of the controllers themselves, will dominate the resulting control sequences.

To simulate the impacts of solar energy systems and their intermittent power injections, quasi-static time-series (QSTS) analysis has been adopted. QSTS tools include planning tools that simulate the behavior of the steady-state voltages over forecasted loads and photovoltaic injections.
QSTS utilizes fixed windows of time, e.g., every minute [17] or every second [18, 19], across a given time horizon, e.g., 24 hours. These tools allow planners to identify problem areas and times of operation, subsequently allowing for the evaluation of existing control schemes and the installation of new controllers and/or control schemes. Similarly, the inclusion of electric vehicles within distribution power flow analysis has been addressed by several researchers, e.g., [20-22].

Distribution automation (DA) allows for remote control of network devices through a distribution energy management system. Three effective applications of DA were identified in [23]: network reconfiguration, capacitor placement, and service restoration. Thus, the remote operation of network switches and capacitor control settings has received much attention, beginning with [24-48]. The field remains active, with extensive research and literature devoted to DA applications; as such, a comprehensive literature survey would be elusive, and this chapter focuses on earlier works to capture the development of the field. Many different computational intelligence and machine learning-based techniques have been applied to solve DA applications. These tools provide device control settings for each specific net injection level under study.
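The QSTS windowing described above can be sketched as a loop that steps a fixed time window across the horizon, forms the net load snapshot at each step, solves a snapshot case, and records voltage-limit violations. The "solver" below is a linear placeholder standing in for a real unbalanced power-flow engine, and all names and numbers are illustrative.

```python
def qsts(load_series, pv_series, solve, v_min=0.95, v_max=1.05):
    """load_series/pv_series: per-step MW values across the horizon.
    solve: snapshot power-flow stand-in returning a voltage (per unit).
    Returns the (step, voltage) pairs that violate the voltage limits."""
    violations = []
    for t, (load, pv) in enumerate(zip(load_series, pv_series)):
        v = solve(load - pv)            # snapshot solve on the net load
        if not (v_min <= v <= v_max):
            violations.append((t, v))
    return violations

# placeholder "solver": voltage sags linearly with net load (illustrative only)
toy_solve = lambda net_mw: 1.0 - 0.01 * net_mw
```

Running this over a short horizon flags the steps a planner would examine for new controllers or revised control schemes; a real study would substitute a full three-phase power-flow solve per step.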
15 Electric Power Distribution Systems: Time Window Selection and Feasible. . .
Some of the methods present solutions in a step-by-step manner for operators, where the order of control setting operations is provided; examples for service restoration can be found in [24–30], network switching [31], and capacitor control [32, 33]. Many other optimization techniques, such as select mathematical programming (linear [34] and dynamic [35, 36]), genetic algorithms [37–40], evolutionary programming [41], particle swarm optimization [42], and other meta-heuristics [43], have also been adopted to determine final control settings that yield high-quality solutions. However, these search algorithms may not, and typically do not, provide a feasible control sequence to reach the end solution. Thus, in this chapter we advocate for algorithms that provide step-by-step control sequences that retain feasibility of the system throughout the control process, and we address post-processing methods to support more general optimization techniques [44].
Analysis tools for distribution operations are discussed in the next section. Since more data is available to distribution engineers and operators than ever before, how often do we need to coordinate and analyze the data, and what actions, if any, should we take? Then, Sect. 15.3 discusses tools for generating feasible control schemes from optimal control methods. Since a large body of research and preliminary tools exists to optimally control electric power distribution systems, which methods can be translated or adapted into practice?
15.2 Non-uniform Time Scales and Quasi-Static Time-Series Analysis in Distribution Systems

Advancements in communication and control technology in distribution systems have encouraged the adoption of online control of power distribution systems. Many utilities now have distributed energy resource management systems (DERMS) to actively monitor and control aspects of their power distribution systems. This advances previous techniques where a priori control (e.g., time-of-day) and local control, based on stand-alone local measurements, were utilized to maintain overall system performance.
With respect to the electric power system itself, technology shifts in individual consumer connections have both required and enabled time-varying analysis and online control. First, improved data: utility adoption of automated metering infrastructure (AMI) has provided more access to data on net power demand (electric load demand minus generation) at each distribution node than previously available. This change supplies distribution load forecasting and analysis tools with better, timely data. In turn, improved system automation and control decisions can be made considering expected changes in nodal power injections. Second, improved load control: consumers are adopting technology enabling remote and active control of their electricity usage. This includes electric power demand response via building energy management systems that are controlled by building owners or their energy suppliers and, for individual residential
K. M. Miller and N. Segal
and small commercial customers, internet-enabled, active thermal management via temperature setpoints (smart thermostats) and participation in direct load control programs with the utility. As consumers are interconnected within a larger system, the study of the impact of individual control actions on the larger distribution system is important.
Finally, increased distributed power generation and storage: solar-photovoltaic resources and other distributed energy resources (DER) have increased the volatility of net power injections/demands at each distribution system node. The adoption rates and installed locations of solar installations, storage, and electric vehicles are dictated by individual owners. Thus, (i) the power levels, (ii) the electrical phasing, and (iii) the system imbalance levels are experiencing unprecedented uncertainty for electric utility planners and operators. Both historical daytime and nighttime net power demands are changing due to uncertainty in the timing and amounts of electricity associated with solar generation and vehicle charging. Subsequently, historical control schemes for power distribution systems must adapt. Thus, distribution system analysis techniques must expand beyond traditional select loading level/time-of-day studies to better capture the time-varying power injection behaviors that we remotely monitor and can actively control. This has been seen with an increased number of works on quasi-static steady-state studies [17–19], non-uniform, time-varying distribution analysis studies [49, 50], and two-time scale simulation techniques [51, 52].
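The net power demand the utility observes at a node is the metered load minus behind-the-meter generation; when DER is single-phase, it also shifts the per-phase balance. A small sketch with assumed AMI-style numbers illustrates both effects (the deviation-from-mean percentage used here is one simple imbalance indicator, not a standards-exact unbalance definition):

```python
# Net power demand per phase = metered load minus behind-the-meter generation.
# Illustrative, assumed numbers (kW); single-phase rooftop PV on phase a only,
# showing how customer-owned DER can drive per-phase imbalance.

load = {"a": 120.0, "b": 115.0, "c": 118.0}   # aggregate feeder-section load (kW)
pv = {"a": 60.0, "b": 0.0, "c": 0.0}          # single-phase PV generation (kW)

net = {p: load[p] - pv[p] for p in load}      # net demand seen by the utility
mean = sum(net.values()) / 3
unbalance_pct = 100 * max(abs(net[p] - mean) for p in net) / mean

print(net, round(unbalance_pct, 1))
```

Even with nearly balanced metered loads, the single-phase injection leaves the net demand strongly unbalanced, which is precisely the condition that forces the multi-phase modeling discussed earlier.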
15.2.1 Non-uniform Time Windows for Distribution System Analysis and Predictive/Corrective Control

Given that we want to analyze distribution systems with their latest data and in an online setting, how frequently should this be done? From an electrical engineering viewpoint, (i) net injections change, (ii) this changes system conditions (analysis), and (iii) new actions may or may not be required (control). Then, for operations, analysis should be conducted both to find when control is required and to support the control decision process. Since DERMS are still relatively new, this chapter outlines how frequently control actions are needed based on engineering operating constraints.
Many operating constraints exist within power distribution systems, generically represented here by G(·). Some are dictated by practical engineering concerns for control frequency/dead-bands and device maintenance. Several are computed from detailed analysis, e.g., online power flow analysis or state estimation, and must hold for all buses k = 1, . . . , # buses and each phase p ∈ {a, b, c}, and/or for all lines and branches from bus i to k and phase p. They include the following:
$V_k^{\min} \le |V_k^p| \le V_k^{\max}$ (node voltage magnitude constraints) (15.1)

$I_{ik}^{\min} \le |I_{ik}^p| \le I_{ik}^{\max}$ (line current flow magnitude constraints) (15.2)

$S_{ik}^{\min} \le |S_{ik}^p| \le S_{ik}^{\max}$ (branch power flow magnitude constraints) (15.3)

$\theta_{VI}^{\min} \le \theta_{VI,k}^p \le \theta_{VI}^{\max}$ (node power factor constraints) [53] (15.4)

$\%VU \le \mathrm{Unbal}^{\max}$ (voltage unbalance (VU) constraints) [54] (15.5)

$Q_{sub}^{p,\min} \le Q_{sub}^p \le Q_{sub}^{p,\max}$ (substation reactive power limits) [32] (15.6)

$\left| V_k^p(u_{\mathrm{new}}) - V_k^p(u_{\mathrm{previous}}) \right| \le V_{\mathrm{rise}}^{\max}$ (voltage magnitude rise limits) [32, 33] (15.7)
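As a concrete illustration of checking a solved snapshot against G(·), the sketch below tests per-phase voltage magnitudes, in the spirit of constraint (15.1), and a voltage-rise limit between successive control settings, in the spirit of constraint (15.7). The limits and the solved voltages are assumed values, not data from the works cited:

```python
# Sketch of an operating-constraint check G(.) <= 0 for a solved snapshot:
# voltage magnitude limits (as in 15.1) and a voltage-rise limit between
# control settings (as in 15.7). Limits and voltages are assumed values.

V_MIN, V_MAX, V_RISE_MAX = 0.95, 1.05, 0.03   # per-unit limits (assumptions)

def constraint_violations(volts, volts_prev=None):
    """volts: {(bus, phase): |V| in p.u.}; returns the violated constraints."""
    out = []
    for (bus, ph), v in volts.items():
        if not V_MIN <= v <= V_MAX:
            out.append(("15.1 voltage", bus, ph))
        if volts_prev is not None and abs(v - volts_prev[(bus, ph)]) > V_RISE_MAX:
            out.append(("15.7 voltage rise", bus, ph))
    return out

before = {(1, "a"): 0.975, (1, "b"): 0.965, (2, "a"): 0.94}   # 0.94 breaks (15.1)
after = {(1, "a"): 0.99, (1, "b"): 0.98, (2, "a"): 0.98}      # 0.04 rise breaks (15.7)

print(constraint_violations(before))
print(constraint_violations(after, volts_prev=before))
```

Note how the second check only becomes meaningful between two control settings: a voltage that is perfectly legal in steady state can still violate the rise limit if a single control action moved it too far.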
These constraints must be satisfied at all net injection levels. It is noted that there exists a limit on the change in voltage magnitude between net injection levels caused by changes in control settings – represented by the last constraint, (15.7). This section will discuss a framework to focus analysis on times where the system may be near constraint violations.
An injection capability method to identify critical times and associated non-uniform time windows for online analysis tools was presented in [49]. This technique overlays and supplements quasi-steady-state techniques that utilize a fixed, uniform time window to analyze system behavior (e.g., every 1 minute [17], or every 10–15 minutes in practical DERMS applications) but may miss critical times for the distribution system within a window; please see the next subsection for an example. Combined, these techniques represent an example of two-time scale systems; other examples include [51, 52], and one based on energy availability risk management can be found in [50].
It is noted that a single distribution circuit often contains thousands of interconnected customers and, hence, thousands of electrical nodes. The computation for a single point in time, which corresponds to a set of net power injections at the nodes and a specific set of control device settings, takes measurable time. When faced with optimal control decisions for DA, multiple control options are typically considered for a single point in time. Thus, the process is computationally time consuming, and the time requirement increases with the number of control devices. Shortening the fixed time window of a quasi-static solver would therefore increase computation and could still miss a critical event.
In practice, during distribution operations, frequent changes in control settings have typically been avoided, both to limit maintenance costs of the physical control devices and to avoid electrical power quality disruptions and hunting between control devices (where one control action causes another control action, rather than the inherent system behavior dictating control). Hence, practical time windows in distribution systems often span periods of 10–15 minutes or longer. However, within these windows, we may now encounter variations in net injections that cause operating constraint violations. Thus, here we discuss need-based analysis, where critical times for analysis are identified based on forecasted net injections.
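The two practical safeguards just mentioned, dead-bands and limits on how frequently a device may act, can be sketched together: a controller acts only when the measurement leaves the dead-band and only if a minimum dwell time has passed since the last operation. The thresholds and the measurement trace below are assumptions:

```python
# Sketch of two anti-hunting safeguards: a voltage dead-band (no action while
# within the band) and a minimum dwell time between successive operations.
# The thresholds and the measurement trace are illustrative assumptions.

DEADBAND = (0.96, 1.04)     # act only outside this per-unit band
MIN_DWELL_MIN = 15          # minimum minutes between successive operations

def control_actions(trace):
    """trace: list of (minute, |V| p.u.); returns the minutes when action is taken."""
    actions, last = [], -MIN_DWELL_MIN
    for minute, v in trace:
        outside = not (DEADBAND[0] <= v <= DEADBAND[1])
        if outside and minute - last >= MIN_DWELL_MIN:
            actions.append(minute)
            last = minute
    return actions

trace = [(0, 0.95), (5, 0.95), (12, 0.95), (20, 1.05), (30, 1.00)]
print(control_actions(trace))
```

In the trace, the repeated low readings at minutes 5 and 12 are suppressed by the dwell time, which is exactly the behavior that prevents two devices from chasing each other's corrections.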
15.2.2 Problem Statement

The operating conditions depend on the net power injections. Thus, load capability and net injection capability metrics have been developed [53, 55, 56] that can estimate whether specific types of constraint violations will occur based on forecast net injections. Then, if critical net injections are incurred, the corresponding forecast times are identified as critical times for analysis and control. These times can supplement a priori times identified for analysis – for example, from quasi-static, fixed time-scale solvers. The method is referred to as the critical time identification problem using injection capability metrics.

Specifically, given an integer number K of time windows and a forecast defined by:

$S^{init} \in \mathbb{C}^n$ — the initial net complex power injection vector for each node $1, \ldots, n$
$\hat{S} = (\hat{S}_1, \hat{S}_2, \ldots, \hat{S}_K)$ — a sequence of K forecast injection variation vectors
$T = (\tau_1, \tau_2, \ldots, \tau_K)$ — the durations of each of the K time windows
$U = (u_1, u_2, \ldots, u_K)$ — the scheduled control setting vector for each time window

the problem statement is to return:

$\Lambda = (\lambda_{1,1}, \ldots, \lambda_{1,L_1}, \ldots, \lambda_{K,1}, \ldots, \lambda_{K,L_K})$
$U^{*} = (u^{*}_{1,1}, \ldots, u^{*}_{1,L_1}, \ldots, u^{*}_{K,1}, \ldots, u^{*}_{K,L_K})$

s.t.

$F(x, \lambda, u^{*}) = 0$
$G(x, \lambda, u^{*}) \le 0 \quad \forall \lambda_{k,l}, u^{*}_{k,l}$

where:
$\Lambda$ is the sequence of consecutive sub-window durations $\lambda_{k,l}$,
$U^{*}$ is the sequence of realized control settings $u^{*}_{k,l}$,
$L_k \ge 1$ is the number of sub-windows in time window $k$, which is constraint driven,
$F(\cdot)$ represents the unbalanced distribution power flow equations [49], or is interchanged with an obtained distribution state estimate $x$, and
$G(\cdot)$ represents the operating constraints.

Figure 15.3 shows an illustration where five uniform time windows are initially desired; initial apparent power net injections are shown, and scheduled controls for a battery (−1/0/1 → charge/off/discharge) and a capacitor (0/1 → off/on) are given. The dashed black line represents the net power injection before controls (i.e., all devices in the off state). Since the initial window has the battery charging, the green line is the resulting net power injection, which is higher than if the battery were off. At the end of the first time period, the battery is scheduled to turn off; please note the corresponding output $u^{*}_{k,l}$ at the bottom row of Fig. 15.3.
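The core window-splitting logic of the critical time identification process can be sketched as follows: for each forecast window, test whether the injection trajectory crosses a capability limit inside the window; if it does, split the window at the crossing and schedule a corrective control there. The scalar limit and the linear-per-window forecast below are simplifying assumptions (the actual method uses injection capability metrics over vector injections [49]):

```python
# Sketch of critical-time identification over forecast windows: if the forecast
# net injection crosses a capability limit inside a window, split the window at
# the crossing time and mark it for corrective control. The scalar limit and
# linear-per-window forecast are simplifying assumptions.

S_MAX = 1.0   # scalar stand-in for an injection capability limit

def split_windows(windows):
    """windows: list of (t_start, t_end, s_start, s_end), injection linear in t.
    Returns (sub_windows, critical_times)."""
    subs, critical = [], []
    for t0, t1, s0, s1 in windows:
        if (s0 - S_MAX) * (s1 - S_MAX) < 0:          # limit crossed inside window
            tc = t0 + (t1 - t0) * (S_MAX - s0) / (s1 - s0)
            subs += [(t0, tc), (tc, t1)]             # two sub-windows
            critical.append(round(tc, 2))            # corrective control here
        else:
            subs.append((t0, t1))
    return subs, critical

windows = [(0, 3, 0.4, 0.8), (3, 6, 0.8, 0.9), (6, 9, 0.9, 1.2)]  # hours, p.u.
print(split_windows(windows))
```

Only the third window is subdivided, so the extra analysis effort is spent exactly where the forecast approaches the limit, rather than uniformly across the horizon.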
Fig. 15.3 Example of implicit injection capability application to select critical times for power distribution system analysis and control [49]. Given inputs: an initial five time windows of 3 hours each, the initial apparent power level $S^{init}$, a scalar direction of net power injection changes, and scheduled control settings $u_k$ for a battery and a capacitor. The automated process outputs: two identified critical times, in windows k = 3 and k = 5, and the corresponding control required to mitigate constraint violations
Within time window k = 3, at hour 7.2, a constraint violation (undervoltage) is encountered, and two sub-windows are created, as indicated by the vertical dash-dot lines. Then, the process takes a control action – here the capacitor is turned on, which supplies reactive power and lowers the overall apparent power demand, as indicated by the green line. Of note, if a uniform time window for analysis were retained, the constraint violation would not be detected until the end of window 3, hour 9, and the system would be experiencing a violation for 1.8 hours. Finally, in time window k = 5, a violation (here overvoltage) is again incurred, the window is subdivided into two sub-windows, and the capacitor is turned off. The corrective control action returns the system to a feasible operating region.
It is noted that the default preference for the control actions in this example was to control the capacitor first, before the battery. The framework allows for alternative control methods to be adopted. Analytically identified critical times and their adjacent sub-windows can then be highlighted for more frequent monitoring and analysis and for preventative control. This contrasts with QSTS methods, which would require more frequent monitoring for all of the study time. For example, a 24-hour QSTS simulation with one-second time steps yields 86,400 power flow solves and does not adjust for
Fig. 15.4 (a) Time-varying total apparent power demand forecast [50]. (b) Corresponding time-varying risk forecast with respect to power availability [50]
constraint violations. For an actual 2556-node distribution system and the same 24-hour simulation, the injection capability method required 3569 power flow solves (less than 5% of the QSTS computational demand) while actively identifying and avoiding violations of constraints (15.1) through (15.4) [49].
Finally, while constraint-driven time identification techniques were discussed above, microgrids and isolated power systems, such as shipboard power systems, must also manage energy availability against energy demand. As such, non-uniform, energy-availability- and risk-based time window selections have also been determined from net injection forecasts [50]. An example selection is displayed in Fig. 15.4. Given critical time points and time windows, the determination of feasible control sequences is now discussed.
15.3 Feasible Control Schemes

In both entirely automated and partially automated systems, advanced DA applications require feasible control sequences to realize expected benefits. In other words, the process of trying to optimize system behavior should not worsen existing system conditions by creating additional constraint violations. In service restoration problems, where out-of-service customers are re-energized by the opening and closing of network switches to re-route power, sequential step-by-step algorithms are established and are components of widely adopted methods [24–28]. These methods include forecasted changes in net injections to accommodate phenomena such as cold-load pickup, where prolonged outages increase load demand above pre-outage levels [24, 25], and expected load variations [28].
In voltage control problems, many works have focused on the optimality of the solution with respect to a resulting system metric – such as real power loss reduction. In distribution systems, these problems are typically studied using a fixed number of discrete load levels to represent forecast net injections, e.g., hourly load settings for 24 hours [35, 36, 47, 48]. Historically, voltage control devices have included transformer tap settings and capacitor controls; however, as we see an increase in inverter-connected devices, the number of voltage control devices dispersed in the distribution system will increase. In certain regions, pilot testing of consumer-owned, photovoltaic inverter control schemes for voltage control is underway [57]. However, at this time, distribution system operators still only have direct control over utility-owned devices. As such, this chapter focuses on capacitor control; however, the concepts can translate to non-utility-owned devices. Again, many intelligent system applications produce high-quality solutions.
Global optimizers typically return the control status of each capacitor bank at each discrete load level but do not provide a realizable sequence. If the control status of each capacitor bank can be realized, the system will run in an optimal manner with no constraint violations. In this case, post-processing methods for distribution application functions can be developed to identify the following:
(i) Why control devices change status (for optimization purposes and/or to maintain feasibility).
(ii) When they must change to maintain feasibility.
In such a manner, it may be possible to find a realizable control sequence to the optimal control settings [44]; however, there may exist many, or no, sequences which maintain feasibility. Thus, constraint-driven methodologies are important for practical implementations of distribution automation and control.
As an example, distribution operations often focus on voltage spread reduction along a feeder. The voltage magnitude provided to each customer is a measure of power quality. Thus, the voltage constraints from above exist to ensure minimum and maximum voltage levels at individual
Fig. 15.5 A 12.6 kV, 948 bus, 1224 node, multi-phase unbalanced distribution system with 5 capacitors, total nominal peak load (8214.87 kW, 2980.95 kVAr)
nodes. From a system standpoint, it is also desirable to reduce the difference between voltage magnitudes along a feeder. Here, an example of voltage spread reduction via capacitor control is presented. An analytically driven technique considering constraints (15.1)–(15.3), (15.6), and (15.7) was adopted, where up to 20 discrete net injection levels were considered [32, 33]. A given-number-at-a-time control method was developed; this accommodates multiple control actions in each step of the sequence, which is possible for near-synchronous, inverter-connected devices. One-at-a-time control results are presented for an actual distribution circuit, shown in Fig. 15.5, which contains a mix of manual and remote-control capacitors.
While up to 20 load/net injection levels were considered, for ease of discussion, a focus is first placed on three load levels – a common planning practice: low net injection levels (where electricity demand is at its lowest level in a 24-hour period, here 26% of peak), medium net injection levels (here the 70% load profile), and peak load/injection profiles. Table 15.1 shows the resulting capacitor statuses for the given circuit. Table 15.2 shows the corresponding minimal voltage spread and other system metrics. Since the 70% load profile has the same settings as the peak load settings, no transitions occur between those load levels. Thus, our focus is on the range between the 26% and 70% loading/net injection levels; here, to assess the algorithm's ability to select feasible sequences, an exhaustive search of sequences was also performed. First, it is noted that simply turning on each off-state capacitor one at a time does not always yield a feasible sequence. Please see Table 15.3 for select infeasible switching sequences. These infeasible sequences result in 13 voltage rise violations, which would disrupt connected customer equipment.
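An exhaustive search over one-at-a-time switching orders can be sketched directly: toggle one device per step toward the optimizer's target state and keep only the orders whose every intermediate state passes a feasibility check. In the sketch below, the three-capacitor system and its toy feasibility rule (at most two capacitors on at once) are illustrative assumptions standing in for a full power flow plus G(·) evaluation; they are not the rule governing the circuit of Fig. 15.5:

```python
from itertools import permutations

# Post-processing sketch: find one-at-a-time switching orders from a start
# capacitor state to an optimizer's target state such that every intermediate
# state stays feasible. The toy system and feasibility rule are assumptions.

start = {"c1": 0, "c2": 0, "c3": 1}
target = {"c1": 1, "c2": 1, "c3": 0}

def feasible(state):
    return sum(state.values()) <= 2       # stand-in for a power flow + G(.) check

def feasible_sequences(start, target):
    moves = [c for c in start if start[c] != target[c]]  # devices to toggle
    found = []
    for order in permutations(moves):
        state, ok = dict(start), True
        for c in order:
            state[c] = target[c]          # toggle one device at a time
            if not feasible(state):
                ok = False
                break
        if ok:
            found.append(order)
    return found

print(feasible_sequences(start, target))
```

Even in this toy case, every surviving order must switch a device off before the last device switches on, echoing the circuit result above, where turning the bus-1937 capacitor off first is what makes feasible sequences possible.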
In fact, feasible sequences for this circuit and load levels require that the capacitor at bus 1937 first be turned off before other capacitors are turned on. This is somewhat non-intuitive but can be consistently discovered via analysis. Please see
Table 15.1 Resulting capacitor placement and control settings for optimal voltage spread reduction. The 26%/70%/100% columns give the control operations for each load level/profile

Cap bus #   Size (kVAr)   New/existing   Type (manual/switchable)   26%   70%   100%
1333        600           Existing       Manual                     on    on    on
1937        600           Existing       Switchable                 on    on    on
1292        600           Existing       Switchable                 off   on    on
1177        1200          New            Switchable                 off   on    on
1015        600           New            Switchable                 off   on    on
Table 15.2 Corresponding system metrics: Qout – reactive power out of the substation, total real power loss PLoss, and VS – per-unit voltage spread (minimization objective)

Metric        26%              70%              100%
Qout (kVAr)   426.68 leading   543.34 leading   612.23 leading
PLoss (kW)    10.34            66.00            141.94
VS (p.u.)     0.00255          0.00926          0.01207
Table 15.3 Select infeasible switching sequences, resulting in 13 voltage rise violations

Seq.   Operation 1   Operation 2   Operation 3
1      1292 on       1015 on       1177 on
2      1177 on       1292 on       1015 on
Table 15.4 for select feasible sequences between the 26% and 70% load/net injection levels. This is somewhat against intuition – to turn something off, then on again; however, it is required to realize the optimal voltage spread. Twenty distinct injection levels, mimicking load levels achieved throughout the 24-hour period for the selected circuit, were studied, representing ~2.5% increases between the 26% and peak levels. Again, to focus discussions, Table 15.5 displays only the subset of load levels between which on–off transitions occurred.
Table 15.4 Selected feasible switching sequences for capacitor transitions from 26% to 70% levels

Seq.   Operation 1   Operation 2   Operation 3   Operation 4   Operation 5
1      1937 off      1177 on       1292 on       1015 on       1937 on
2      1937 off      1177 on       1937 on       1015 on       1292 on
Table 15.5 Capacitor control settings for voltage spread reduction and corresponding net injection levels where on–off transitions occur. Load levels (%) are scaled proportionally from the peak profile

Cap. bus #   26        33.5      38.5      46        56        63.5      68.5      70
1333         on        on        on        on        on        on        on        on
1937         on        off       on        on        off       on        off       on
1292         off       off       on        off       on        off       off       on
1177         off       off       off       off       on        on        on        on
1015         off       on        off       on        off       off       on        on
VS (p.u.)    0.00254   0.00621   0.00283   0.00379   0.00593   0.00838   0.00714   0.00926
Here it is observed that capacitors fluctuate between on and off states in order to achieve minimal voltage spread and that, as expected, the optimal settings are highly dependent on net injections. As such, distribution operations must limit the number of times a control device is operated across the overall time horizon, regardless of the number of analysis windows. Thus, practical control device limits can also impact the number of effective control time windows, and care must then be taken in selecting these time windows, as discussed in Sect. 15.2.
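The device wear implied by such level-by-level optimal settings can be audited simply by counting status transitions between consecutive injection levels. The sketch below does so for the capacitor statuses transcribed from Table 15.5; a per-device operation limit over the horizon (its numeric value would be an operator's assumption) could then prune schedules such as that of the bus-1937 capacitor:

```python
# Counting switching operations per capacitor across consecutive injection
# levels; statuses are transcribed from Table 15.5. Useful when enforcing a
# per-device operation limit over the horizon.

settings = {   # statuses at the 26, 33.5, 38.5, 46, 56, 63.5, 68.5, 70 % levels
    1333: ["on", "on", "on", "on", "on", "on", "on", "on"],
    1937: ["on", "off", "on", "on", "off", "on", "off", "on"],
    1292: ["off", "off", "on", "off", "on", "off", "off", "on"],
    1177: ["off", "off", "off", "off", "on", "on", "on", "on"],
    1015: ["off", "on", "off", "on", "off", "off", "on", "on"],
}

ops = {bus: sum(a != b for a, b in zip(row, row[1:]))
       for bus, row in settings.items()}
print(ops)
```

The count makes the concern above concrete: one capacitor never operates at all, while another would switch six times over the same horizon if every level-optimal setting were realized.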
15.4 Conclusion

The adoption of consumer-driven devices will fundamentally alter power distribution systems. New analysis and control tools specific to emerging distribution systems are needed. These tools must consider the challenges facing distribution systems – where the infrastructure itself is not balanced, the data and software systems monitoring and forecasting consumer electrical behaviors are
separate from the distribution energy management systems of the power infrastructure, and electricity generation and demand are increasingly customer controlled. Since advanced distribution automation requires information from these multiple systems, the specific times and frequency of analysis are important for timely coordination of distribution data and analysis. These times can be selected with respect to electrical engineering considerations and constraints. Subsequently, control algorithms that produce feasible control sequences – sequences that do not violate system limits in the process of trying to achieve a better operating condition – are preferred.
Future power distribution engineering tools must be developed for the distribution grids themselves to maintain the same reliability levels and quality of service we have experienced in the past. In addition, to realize the hope of large amounts of renewable and sustainable energy technologies, the interconnecting systems must be re-designed and re-engineered to support system operations for communities at large – those with and without solar installations, electric vehicles, etc. This will require repeatable, analytically based metrics and controls to support distribution operations, regulations, energy policy, etc. Thus, again, the field of electric power distribution is very active, serves the community, and is, the authors firmly believe, an attractive area for new talent to join.
Acknowledgments Several students' works are represented within this chapter. Of special note is Dr. Nicholas Coleman for his contributions to time window selections.
References 1. T. Gonen, “Electric Power Distribution System Engineering,” McGraw Hill, New York, NY, 1986. 2. W. Kersting, “Distribution System Modeling and Analysis,” CRC Press, 2002. 3. Federal Power Act – Section 215, 16 U.S. Code § 824o – Electric reliability, https://www.govinfo.gov/app/details/USCODE-2010-title16/USCODE-2010-title16-chap12-subchapII-sec824e 4. A. Bergen and V. Vittal, “Power Systems Analysis,” Prentice Hall, Englewood Cliffs, NJ, 2000. 5. J. Grainger and W. Stevenson, “Power Systems Analysis,” McGraw Hill, New York, 1994. 6. J. Glover, M. Sarma, and T. Overbye, “Power System Analysis and Design,” 6th edition, Cengage, USA, 2016. 7. W. H. Kersting and D. L. Mendive, “An Application of Ladder Network Theory to the Solution of Three Phase Radial Load-Flow Problems,” Proceedings of the IEEE/PES 1976 Winter Meeting, New York, NY, January 1976. 8. W. H. Kersting and W. H. Phillips, “A Radial Three-Phase Power Flow Program for the PC,” Presented at the 1987 Frontiers Power Conference, Stillwater, OK, October 1987. 9. H. D. Chiang, “A Decoupled Load Flow Method for Distribution Power Networks: Algorithms, Analysis and Convergence Study,” Electrical Power & Energy Systems, Vol. 13, No. 3, pp. 130–138, June 1991. 10. C. S. Cheng and D. Shirmohammadi, “A three-phase power flow method for real-time distribution system analysis,” IEEE Transactions on Power Systems, Vol. 10, No. 2, pp. 671–679, May 1995.
11. R. D. Zimmerman and Hsiao-Dong Chiang, “Fast decoupled power flow for unbalanced radial distribution systems,” IEEE Transactions on Power Systems, Vol. 10, No. 4, pp. 2045–2052, Nov. 1995. 12. F. Zhang and C. Cheng, “A Modified Newton Method for Radial Distribution Power Flow Analysis,” IEEE Transactions on Power Systems, Vol. 12, No. 1, pp. 389–396, Feb. 1997. 13. P. A. N. Garcia, J. L. R. Pereira, S. Carneiro, Jr., V. M. da Costa, N. Martins, “Three-Phase Power Flow Calculations Using the Current Injection Method,” IEEE Transactions on Power Systems, Vol. 15, No. 2, pp. 508–514, May 2000. 14. M. Kleinberg, K. N. Miu, C. Nwankpa, “Distributed Multi-Phase Distribution Power Flow: Modeling, Solution Algorithm, and Simulation Results,” Transactions of the Society for Modeling and Simulation International, Vol. 4, No. 8–9, pp. 403–412, August–Sept. 2008. 15. P. A. N. Garcia, J. L. R. Pereira, S. Carneiro, Jr., M. P. Vinagre, F. V. Gomes, “Improvements in the Representation of PV Buses on Three-Phase Distribution Power Flow,” IEEE Transactions on Power Delivery, Vol. 19, Issue 2, pp. 894–896, April 2004. 16. H. Chiang, T. Zhao, J. Deng and K. Koyanagi, “Homotopy-Enhanced Power Flow Methods for General Distribution Networks With Distributed Generators,” IEEE Transactions on Power Systems, Vol. 29, No. 1, pp. 93–100, Jan. 2014. 17. B. A. Mather, “Quasi-static time-series test feeder for PV integration analysis on distribution systems,” Proceedings of the 2012 IEEE Power Energy Society (PES) General Meeting, San Diego, CA, USA, pp. 1–8, 2012. 18. D. Paradis, F. Katiraei, and B. Mather, “Comparative analysis of time-series studies and transient simulations for impact assessment of PV integration on reduced IEEE 8500 node feeder,” Proceedings of the 2013 IEEE PES General Meeting, Vancouver, BC, Canada, pp. 1–5, 2013. 19. R. J. Broderick, J. E. Quiroz, M. J. Reno, A. Ellis, J. Smith, and R.
Dugan, “Time series power flow analysis for distribution connected PV generation,” Sandia Nat. Lab., Albuquerque, NM, USA, Rep. SAND2013-0537, 2013. 20. A. Jiménez and N. García, “Power Flow Modeling and Analysis of Voltage Source Converter-Based Plug-in Electric Vehicle,” Proceedings of the 2011 PES General Meeting, Detroit, MI, 24–28 July 2011. 21. A. Jiménez and N. García, “Unbalanced three-phase power flow studies of distribution systems with plug-in electric vehicles,” Proceedings of the 2012 North American Power Symposium (NAPS), pp. 1–6, 2012. 22. U. Hanif Ramadhani, R. Fachrizal, M. Shepero, J. Munkhammar, J. Widen, “Probabilistic load flow analysis of electric vehicle smart charging in unbalanced LV distribution systems with residential photovoltaic generation,” Sustainable Cities and Society, Vol. 72, 2021. 23. J. B. Bunch, Jr., “Guidelines for Evaluating Distribution Automation,” EPRI EL-3728 Final Report, Nov. 1984. 24. C. Ucak, A. Pahwa, “An Analytical Approach for Step-By-Step Restoration of Distribution Systems Following Extended Outages,” IEEE Transactions on Power Delivery, Vol. 9, Issue 3, pp. 1717–1723, July 1994. 25. C. Ucak, A. Pahwa, “Optimal Step-by-Step Restoration of Distribution Systems During Excessive Loads Due to Cold Load Pickup,” Electric Power Systems Research, Vol. 32, pp. 121–128, 1995. 26. C. C. Liu, S. J. Lee, S. S. Venkata, “An Expert System Operational Aid for Restoration and Loss Reduction of Distribution Systems,” IEEE Transactions on Power Systems, Vol. 3, No. 2, pp. 619–626, May 1988. 27. K. N. Miu, H.-D. Chiang, B. Yuan, G. Darling, “Fast Service Restoration for Large-Scale Distribution Systems with Priority Customers and Constraints,” IEEE Transactions on Power Systems, Vol. 13, pp. 789–795, Aug. 1998. 28. K. N. Miu and H.-D. Chiang, “Service restoration for unbalanced radial distribution systems with varying loads: solution algorithm,” Proceedings of the 1999 IEEE Power Engineering Society Summer Meeting, pp. 254–258, 1999.
29. K. N. Miu, H.-D. Chiang, R. J. McNulty, “Multi-Tier Service Restoration through Network Reconfiguration and Capacitor Control for Large-Scale Radial Distribution Systems,” IEEE Transactions on Power Systems, Vol. 15, No. 3, pp. 1001–1007, Aug. 2000. 30. K. L. Butler, N. D. R. Sarma, V. R. Prasad, “Network Reconfiguration for Service Restoration in Shipboard Power Distribution Systems,” IEEE Transactions on Power Systems, Vol. 16, No. 4, pp. 653–661, Nov. 2001. 31. N. N. Schulz and B. F. Wollenberg, “Incorporation of an Advanced Evaluation Criteria in an Expert System for the Creation and Evaluation of Planned Switching Sequences,” IEEE Transactions on Power Systems, Vol. 12, No. 3, Aug. 1997. 32. N. Segal, M. Kleinberg, A. Madonna, K. Miu, H. Lehmann, T. Figura, “Analytically Driven Capacitor Control for Voltage Spread Reduction,” Proceedings of the IEEE Power Engineering Society Transmission and Distribution Conference and Exposition 2012, pp. 1–4, May 7–10, 2012. 33. N. Segal, “Realizable Constraint Driven Capacitor Placement and Control Sequences for Voltage Spread Reduction in Distribution Systems,” Ph.D. dissertation, Drexel University, Philadelphia, PA, USA, July 2015. 34. A. Abur, “A Modified Linear Programming Method for Distribution System Reconfiguration,” Electrical Power & Energy Systems, Vol. 18, No. 7, pp. 469–474, Oct. 1996. 35. Y. Y. Hsu, H. C. Kuo, “Dispatch of capacitors on distribution system using dynamic programming,” IEE Proceedings on Generation, Transmission and Distribution, Vol. 140, No. 6, pp. 433–438, Nov. 1993. 36. F. C. Lu, Y. Y. Hsu, “Reactive power/voltage control in a distribution substation using dynamic programming,” IEE Proceedings on Generation, Transmission and Distribution, Vol. 142, No. 6, pp. 639–645, Nov. 1995. 37. K. Iba, “Reactive Power Optimization by GA,” IEEE Transactions on Power Systems, Vol. 9, No. 2, pp. 1292–1298, May 1994. 38. S. Sundhararajan, A.
Pahwa, “Optimal Selection of Capacitors for Radial Distribution Systems Using a Genetic Algorithm,” IEEE Transactions Power Systems, Vol. 9, Issue 3, pp. 1499– 1507, Aug. 1994. 39. Y. Fukuyama, H.-D. Chiang and K. N. Miu, “A Parallel Genetic Algorithm for Service Restoration in Electric Power Distribution Systems,” Electric Power & Energy Systems, Vol. 18, No. 2, pp. 111–119, Feb. 1996. 40. K. N. Miu, H. D. Chiang, G. Darling, “Capacitor Placement, Replacement and Control in Large-Scale Distribution Systems by a GA-Based Two-Stage Algorithm,” IEEE Transactions on Power Systems, Vol. 12, No. 3, pp. 1160–1166, Aug. 1997. 41. K.Y. Lee and F.F. Yang, “Optimal reactive power planning using evolutionary algorithms: a comparative study for evolutionary programming, evolutionary strategy, genetic algorithm, and linear programming,” IEEE Transactions on Power Systems, Vo1. 13, No. 1, pp. 101–108, Feb. 1998. 42. A. A. Eajal and M. E. El-Hawary, “Optimal Capacitor Placement and Sizing in Unbalanced Distribution Systems With Harmonics Consideration Using Particle Swarm Optimization,” IEEE Transactions on Power Delivery, Vol. 25, No. 3, pp. 1734–1741, July 2010. 43. Y. C. Huang, H. T. Yang and C. L. Huang, “Solving the Capacitor Placement Problem in a Radial Distribution System Using Tabu Search Approach,” IEEE Transactions on Power Systems, Vol. 11, No. 4, pp. 1868–1873, Nov. 1996. 44. K. Miu, J. Wan, “A post-processing method for determining the control sequence of distribution application functions formulated using discrete load levels,” Proceedings of the IEEE Power Engineering Society Summer Meeting, Vol. 1, pp. 91–95, Seattle, WA, July, 2000. 45. M. Y. Chow, L. S. Taylor and M. S. Chow, “Time of Outage Restoration Analysis in Distribution Systems,”, IEEE Transactions on Power Delivery, Vol. 11, No. 3, pp. 1652–1658, July 1996. 46. R. P. Broadwater, A. H. Khan, H. E. Shaalan and R. E. 
Lee, “Time Varying Load Analysis To Reduce Distribution Losses Through Reconfiguration,” IEEE Transactions on Power Delivery, Vol. 8, No. 1, pp. 294–300, Jan. 1993.
416
K. M. Miller and N. Segal
47. Y. Deng, X. Ren, C. Zhao, D. Zhao, A heuristic and algorithmic combined approach for reactive power optimization with time-varying load demand in distribution systems,” IEEE Transactions on Power Systems, Vol. 17, No. 4, pp. 1068–1072, Nov. 2002. 48. Z. Hu, X. Wang, H. Chen, G.A. Taylor, “Volt/VAr control in distribution systems using a timeinterval based approach,” IEE Proceedings on Generation, Transmission and Distribution, Vol. 150, No. 5, pp. 548–554, Sept. 2003. 49. N. S. Coleman and K. N. Miu, “Identification of Critical Times for Distribution System Time Series Analysis,” IEEE Transactions on Power Systems, Vol. 33, No. 2, pp. 1746–1754, March 2018. 50. N. S. Coleman and K. N. Miu, “Time Window Selection via Online Risk Assessment for Power Distribution System Analysis,” Proceedings of the 2018 IEEE International Symposium on Circuits and Systems (ISCAS), pp. 1–5, 2018. 51. J. Su, H. -D. Chiang and L. F. C. Alberto, “Two-Time-Scale Approach to Characterize the Steady-State Security Region for the Electricity-Gas Integrated Energy System,” IEEE Transactions on Power Systems, Vol. 36, No. 6, pp. 5863–5873, Nov. 2021. 52. D. Jin, H. -D. Chiang and P. Li, “Two-Timescale Multi-Objective Coordinated Volt/Var Optimization for Active Distribution Networks,” IEEE Transactions on Power Systems, vol. 34, No. 6, pp. 4418–4428, Nov. 2019. 53. N. S. Coleman and K. N. Miu, “Distribution Load Capability With Nodal Power Factor Constraints,” in IEEE Transactions on Power Systems, Vol. 32, No. 4, pp. 3120–3126, July 2017. 54. Z. Minter, J. Hill, J. C. de Oliveira, K. N. Miu and S. M. Hughes, “A Study of Imbalance Levels Attributed to Photovoltaic Penetration in Distribution Systems,” Proceedings of the 2020 IEEE/PES Transmission and Distribution Conference and Exposition (T&D), 2020. 55. K. N. Miu and H. D. Chiang, “Electric Distribution System Load Capability: Problem Formulation, Solution Algorithm and Numerical Results,” IEEE Transactions on Power Delivery, Vol. 
15, No. 1, pp. 436–442, Jan. 2000. 56. H. Chiang and H. Sheng, “Available Delivery Capability of General Distribution Networks With Renewables: Formulations and Solutions,” IEEE Transactions on Power Delivery, Vol. 30, No. 2, pp. 898–905, April 2015. 57. Petition of PPL Electric Utilities Corporation for Approval of Tariff Modifications and Waivers of Regulations Necessary to Implement its Distributed Energy Resources Management Plan Docket No. P-2019-3010128, Pennsylvania Public Utilities Commission, Harrisburg, PA, granted Nov. 17, 2020.
Karen Miu Miller, Ph.D., is presently a Professor of Electrical and Computer Engineering at Drexel University in Philadelphia, PA. She graduated with a B.S. (1992), M.S. (1995), and Ph.D. (1998), all in electrical engineering, from Cornell University, Ithaca, NY. As an undergraduate, Karen was exposed to several fields of electrical engineering and then became an undergraduate research assistant in the field of power systems. This was a wonderful experience, and she noted that many fields, including signal processing, systems, mathematics, and computing, have direct applications to power systems. She was sold: power systems engineering would never be boring and would always adopt new technologies from other fields. Upon graduation, she joined Drexel University in 1998 as an Assistant Professor. There, she has been active in power distribution system research and education. She has worked with over 40 undergraduate and graduate research assistants, building hardware laboratories and software tools for analyzing terrestrial and shipboard power systems. Along the way, she has been an advisor to the student section of the Society of Women Engineers (SWE) and helped host an SWE Region E conference. In addition, she has advised several women to the completion of their M.S. and Ph.D. degrees, often engaging with students early on through undergraduate research assistantships.
Dr. Miu has served as an expert witness, advocating for increased controls within electric power distribution systems. She has been awarded a National Science Foundation (NSF) CAREER Award, an Office of Naval Research (ONR) Young Investigator Award, the HKN Outstanding Young Electrical Engineer Award, and the IEEE PES Walter Fee Young Engineer Award, and her teams' work has been continuously supported for over 24 years by grants from the NSF, ONR, US Department of Energy, PPL Electric Utilities, PECO Energy, Lockheed Martin, and PJM Interconnection, among others. Nicole Segal, Ph.D., is presently an electrical engineer employed by the United States government; the views expressed in this chapter do not necessarily represent the views of her employer or the United States. Before joining her present agency, Nicole was a Senior Fellow at the Department of Energy's Solar Energy Technologies Office. Prior to working as a Senior Fellow, she was a Senior Engineer at the North American Electric Reliability Corporation (NERC). At NERC, Nicole was the lead facilitator for two NERC-industry stakeholder groups: the Essential Reliability Services Working Group and the Distributed Energy Resources Task Force. Additionally, while at NERC, Nicole was the lead researcher and primary author of the white paper on the August 21, 2017, North American solar eclipse, which received international recognition from the Financial Times, Forbes, Bloomberg, and The Economist. Before joining NERC, Nicole completed her doctorate in Electrical Engineering at Drexel University. While pursuing her doctorate, Nicole performed advanced distribution automation on 34 in-service distribution circuits and developed and delivered automated capacitor placement software for PPL Electric Utilities under a Department of Energy grant (DE-OE0000305).
Prior to her doctorate, Nicole was with PJM Interconnection LLC in Norristown, Pennsylvania, where she was a Planning Engineer and performed short circuit analysis for the Generation Interconnection Department. Nicole earned her Bachelor of Science ('08), Master of Science ('08), and Doctor of Philosophy ('15) in Electrical Engineering from Drexel University. Nicole's research interests include analyzing grid transformation impacts across the transmission and distribution interface, distributed generation, and electric power distribution system optimization, automation, and control. Nicole is a member of the Institute of Electrical and Electronics Engineers and its Power and Energy Society.
Chapter 16
Intelligent and Self-Sufficient Control for Time Controllable Consumers in Low-Voltage Grids Stephanie Uhrig, Sonja Baumgartner, and Veronika Barta
16.1 Introduction The increasing number of decentralized power generation plants, such as wind turbines or photovoltaics (PV), causes more and more fluctuating feed-in, especially in the distribution grid. To balance power generation with electrical loads, a promising approach is to shift controllable consumers in time depending on the grid state, that is, on whether there is an energy deficit or surplus. This so-called demand-side management is one possible alternative to grid reinforcement, allowing peak loads to be minimized and thus increasing grid efficiency. The German Energy Industry Act (§14a EnWG) enables distribution system operators (DSOs) to control controllable consumers in exchange for reduced grid charges. However, the required communication is subject to high standards for data security; in particular, the data volume should be kept small to reduce effort and cost. In addition, especially in rural areas with high PV feed-in, the grid state changes within a few kilometers, so a general schedule for a larger grid section would not match the individual situation. To overcome these issues, a highly automated, self-sufficient control for controllable consumers was developed and investigated. The algorithm processes the locally measured voltages and loads and estimates the grid state. From this, it calculates a schedule of blocking times for different clusters of controllable consumers. No communication is necessary for this basic principle, but DSOs still have the possibility to send prioritized switching commands. This local control system works in compliance with existing regulations and within contractual frameworks. It is tested in a real laboratory of
S. Uhrig () · V. Barta HM University of Applied Sciences, Munich, Germany e-mail: [email protected] S. Baumgartner LEW Verteilnetz GmbH, Augsburg, Germany © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 J. S. Tietjen et al. (eds.), Women in Power, Women in Engineering and Science, https://doi.org/10.1007/978-3-031-29724-3_16
about 100 private households in rural and urban areas. The results show a high potential for this decentralized load management to serve the grid.
16.2 Demand-Side Management in Low-Voltage Grids Decades ago, the power grid was planned and built for a different usage pattern than today. Large, mostly conventional, power plants were built near load centers, such as big cities or industrial areas, to keep transportation distances short, and they were scheduled to balance the expected load. Nowadays, the transition in power generation has changed the situation dramatically. Renewable energy sources are sited where the resource, for example wind or solar radiation, is adequate, and their controllability is limited. The energy market helps to balance the feed-in with the load. Nevertheless, transportation distances at the high and extra-high voltage levels are increasing to balance the system, and further flexibility is needed on both the feed-in and the consumption side. Demand-side management is a promising approach to support grid stability through load control. It aims to shift high power consumption to times of energy surplus. Usually, controllable consumers in the distribution grid are utilized, but not all loads are flexible. For example, industrial processes or essential infrastructure require a continuous energy supply. However, several flexibilities exist which can be addressed. Looking at a feeder in the low-voltage grid, the voltage level strongly depends on energy consumption and generation. Assuming the voltage to be constant at the local transformer station, the power flow toward consumers will cause a voltage drop over the length of the feeder. The voltage drop increases with increasing load and feeder length (Fig. 16.1). The situation changes when power feed-in dominates within the feeder. Especially in rural areas with high photovoltaic penetration, the feed-in can exceed the consumption. This results in a power flow toward the local substation, causing a voltage increase along the line, as illustrated in Fig. 16.2.
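The two feeder situations described above can be illustrated with a small numerical sketch. This is not from the chapter: the model, line resistance, and node powers are invented example values, and a crude constant-voltage approximation is used for the segment currents.

```python
# Illustrative sketch: voltage profile along a radial low-voltage feeder.
# Positive node power = consumption, negative = PV feed-in. All values invented.

def feeder_voltages(v_station, r_segment_ohm, node_powers_w, v_nominal=400.0):
    """Approximate node voltages from cumulative downstream power flows."""
    voltages = []
    v = v_station
    for k in range(len(node_powers_w)):
        # current through segment k is driven by all downstream node powers
        downstream_w = sum(node_powers_w[k:])
        current_a = downstream_w / v_nominal  # crude constant-voltage approximation
        v -= current_a * r_segment_ohm        # drop (or rise, if current is negative)
        voltages.append(round(v, 2))
    return voltages

# Load-dominated feeder: voltage falls along the line (cf. Fig. 16.1).
load_case = feeder_voltages(400.0, 0.05, [4000, 4000, 4000])
# PV-dominated feeder: net feed-in pushes the voltage up (cf. Fig. 16.2).
pv_case = feeder_voltages(400.0, 0.05, [-6000, -6000, -6000])
```

With these example numbers, the load case yields monotonically falling voltages and the PV case monotonically rising ones, matching the qualitative trends of Figs. 16.1 and 16.2.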
In general, the deviation of the voltage from the average voltage level at one location is an indicator of either an energy surplus or a deficit in the low-voltage grid. In the past, energy consumption was dominant during the day and in the early evening. To avoid consumption peaks, controllable consumption devices, so-called controllable consumers, were shifted to nighttime. For example, the charging of storage heaters was triggered by ripple control technology to start in the late evening. Nowadays, the high penetration of photovoltaics (PV) installed in low-voltage grids leads to an energy surplus on sunny days. This is already common in rural areas in the south of Germany. For such days, it would be reasonable to shift energy consumption to midday, as illustrated in Fig. 16.3. However, general schedules are inappropriate: on cloudy or rainy days, the feed-in is limited and consumption will again be predominant. The conclusion is to base the control on the current grid state.
Fig. 16.1 Illustration of voltage trend in a feeder of a load-dominated low-voltage grid
Fig. 16.2 Illustration of voltage trend in a feeder of a low-voltage grid dominated by high PV feed-in
Fig. 16.3 Schematic of optimum time shift for controllable consumers
The average voltage in households in the low-voltage grid varies individually. It depends on the voltage regulation of the substation, the location in the feeder, and the installed power of feed-in and consumption. Even within one municipality, it can differ considerably [1]. Instead of general commands or schedules addressing larger areas, it would be beneficial to calculate locally adapted schedules for controllable consumers. In a central approach, this would imply a large amount of data traffic between the responsible distribution system operator (DSO) and the control unit. The data transfer is subject to high standards concerning consumer privacy and data security [2], resulting in a high effort for secure communication. A decentralized, self-sufficient solution could decrease this effort significantly. The innovative approach presented here addresses these issues. A self-sufficient control for controllable consumers in the distribution system was developed. The algorithm calculates individual blocking times for controllable consumers based on local voltage and load measurements. This provides schedules optimized for each household without communication effort. The following sections explain the regulatory and contractual framework and the algorithm and present results measured in a real laboratory where this solution is tested.
16.3 Regulatory and Contractual Framework 16.3.1 Legal Framework and Technical Guidelines The implementation of the decentralized load management concept is already enabled by law in Germany through §14a EnWG (German Energy Industry Act) [3]. This allows distribution grid operators to control controllable consumers in a way that serves the grid while providing reduced grid fees to the respective customers. The regulatory prerequisites for participation under §14a EnWG are, in addition to the controllable consumption device, a separate electricity meter and the responsibility of the grid operator concerned. The control actions made possible by §14a EnWG, in the form of blocking periods, depend not only on the type of controllable consumption device but also on the responsible grid operator. The length of the blocking periods is anchored in the existing customer contracts and must be considered. Figure 16.4 illustrates the concept of decentralized load management based on the VDE (FNN) [1]. The basis is the field level with the customer installations. Here, the controllable consumers are connected via a control unit and, if necessary, the intelligent metering system (IMSYS). The field level can be controlled from the business level via the operating level. Two communication paths are possible here: the paging network enables a unidirectional control command [5], while the intelligent metering system forms a bidirectional path. Both communication paths comply with applicable consumer privacy and data security regulations based on Art. 13 DSGVO [2]. However, these two communication paths
Fig. 16.4 Concept in accordance with VDE FNN of optimum time shift for controllable consumers [1]
are only used when necessary (e.g., in critical grid conditions) to decrease the communication effort. During normal operation, the control box works autonomously.
16.3.2 Contracts of German DSOs Regulating Decentralized Load Management The DSOs in Germany have developed very heterogeneous contracts to regulate, for example, daily blocking periods for controllable consumers. The duration of these blocking periods depends specifically on the type of controllable consumer. Figure 16.5 shows the blocking periods per day of different German DSOs according to the type of controllable consumer. Whereas electrical storage heaters are treated similarly, the regulations for electric vehicles and heating pumps are very heterogeneous. Electric vehicles, for example, are blocked: • For a maximum of six hours a day (DSO 1) • During certain periods of the day (DSO 2) • At fixed daily blocking periods (DSO 3) The existing contracts have to be considered for each DSO and type of controllable consumer. Throughout the real laboratory, the regulations of the responsible DSO were obeyed.
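A compliance check against such contractual limits could be sketched as follows. The function names and the limit table are hypothetical; the hour values simply echo the examples above (storage heaters up to 14 h, electric vehicles at most six hours for DSO 1, heating pumps similar), not any actual DSO contract.

```python
# Hypothetical contract-compliance check: verify that a day's blocking schedule
# does not exceed the maximum daily blocking hours for each consumer type.
# Limits are illustrative, taken from the examples in the text.

MAX_BLOCKING_HOURS = {
    "storage_heater": 14,
    "electric_vehicle": 6,  # e.g., DSO 1: at most six hours per day
    "heating_pump": 6,
}

def schedule_is_compliant(consumer_type, blocked_intervals):
    """blocked_intervals: list of (start_hour, end_hour) tuples within one day."""
    total = sum(end - start for start, end in blocked_intervals)
    return total <= MAX_BLOCKING_HOURS[consumer_type]

ev_ok = schedule_is_compliant("electric_vehicle", [(17, 20), (6, 8)])   # 5 h total
ev_bad = schedule_is_compliant("electric_vehicle", [(0, 4), (17, 21)])  # 8 h total
```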
Fig. 16.5 Blocking periods per day of several distribution system operators (DSOs) depending on type of controllable consumers [6–10]
16.4 Control Setup for the Real Laboratory The developed control unit with its intelligent algorithm is tested in a real laboratory for one year in a rural area in the south of Germany as well as in the urban city grid of Berlin. These very different grid types were chosen because one requirement of the developed algorithm was to be applicable in various grid types without adaptation. About 100 control units are installed in private households. After a short passive phase, in which only voltages and power consumption are measured, the active phase starts and controllable consumers are directly controlled. The implementation of this real laboratory complies with all existing contracts, guidelines, and applicable laws. Against the background of these guidelines and laws, it has to be mentioned that the number of installed intelligent metering systems (IMSYS) is not yet sufficient for the real laboratory to ensure a proper signal path to the grid control center. Therefore, the additional path using a secure paging network is used in the field test, as shown in Fig. 16.6; it can be replaced in the future by the IMSYS communication paths.
Fig. 16.6 Communication paths between the grid control center and the module – in the real laboratory and in the future according to EnWG §14a [11]
16.4.1 Components of the Control Unit The applicable guidelines in Germany [4] specify a control unit with four relay outputs, each providing a binary on/off signal. The outputs are used to define the blocking periods, not the times of operation. This means an "off" signal applies directly to the controllable consumer, whereas an "on" signal merely enables a subsequent controller to switch the consumer if necessary. The self-sufficient control unit illustrated in Fig. 16.4 consists of two components. The first component, an FNN-compliant control box [4], provides the switching hardware. The controllable consumers in the household are connected to the corresponding relays of the control box. The second component is the newly developed smart module (Fig. 16.7), which contains the measurement unit, records the measurement data, and processes it using a newly formulated algorithm to determine the current grid state. The smart module sends the switching command to the control box, which triggers the relays accordingly. During normal operation, the smart module sends switching signals for each relay to the control box and no further communication is necessary. Nonetheless, the existing communication paths from the DSO can be used, for example, to send prioritized switching commands in emergency cases. This approach allows a significant reduction in the amount of sensitive and personal data to be transmitted compared to a centralized control solution.
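The smart-module-to-control-box interaction can be sketched in a few lines. The interface below is invented for illustration; the chapter specifies only that the module computes one state per relay and that "off" blocks the consumer while "on" merely permits subsequent control.

```python
# Sketch of the module-to-control-box interaction (interface assumed):
# the smart module computes one on/off state per relay, the box applies it.

class ControlBox:
    def __init__(self, n_relays=4):
        # True = enabled ("on", subsequent control permitted), False = blocked ("off")
        self.relays = [True] * n_relays

    def apply(self, states):
        """Apply the switching signals computed by the smart module."""
        if len(states) != len(self.relays):
            raise ValueError("one state per relay required")
        self.relays = list(states)

box = ControlBox()
box.apply([True, False, False, True])  # e.g., block the consumers on relays 2 and 3
```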
Fig. 16.7 Smart module containing the measurement unit and data storage, and processing the data using the newly developed algorithm to determine the current grid state
16.4.2 Definition of Flex-Clusters The four output relays do not allow every individual consumer to be mapped to its own relay. Therefore, the controllable consumers have to be clustered in such a way that all contractual conditions are fulfilled and all clusters can be addressed in the real laboratory. Controllable consumers which can be blocked by the DSO are defined by a draft §3 No. 30a EnWG [3] in Germany. These are electrical (storage) heaters, heating pumps, charging points for electro-mobility, air conditioners, and storage batteries. Due to their relatively low numbers in Germany, air conditioners and storage batteries are not addressed in the real laboratory. Figure 16.8 illustrates the recommended classification, forming five clusters of controllable consumers (Flex-Clusters). Flex-Cluster 1 consists of controllable consumers which can be blocked for less than 2 hours per day. Flex-Cluster 2 is formed by electrical storage heaters; these can usually only be switched on or off, and blocking periods of up to 14 hours per day are possible. Direct heating systems such as heating pumps and charging points for electro-mobility are either switchable on/off or in steps of, for example, 0%, 30%, 60%, and 100% power consumption. Both types have similar blocking periods of, for example, 6 hours per day. Heating pumps and charging points which are switchable on/off therefore form Flex-Cluster 3; those switchable in steps of power consumption form Flex-Cluster 4. Some households already possess a home energy management system (HEMS), which controls all flexibilities within the household. HEMS-related systems belong to Flex-Cluster 5.
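The five Flex-Clusters just described can be encoded as a simple lookup table. This is a sketch: the field names are assumptions, the hour limits are taken from the text, and None marks the one value the text does not state.

```python
# Illustrative encoding of the Flex-Clusters (values from the text; the daily
# blocking limit for the HEMS cluster is not stated there, hence None).

FLEX_CLUSTERS = {
    1: {"loads": "consumers blockable for less than 2 h/day", "max_block_h": 2},
    2: {"loads": "electrical storage heating", "max_block_h": 14},
    3: {"loads": "heating pump / charging point, on-off switchable", "max_block_h": 6},
    4: {"loads": "heating pump / charging point, stepped (e.g., 0/30/60/100%)", "max_block_h": 6},
    5: {"loads": "home energy management system (HEMS)", "max_block_h": None},
}

def clusters_with_limit_at_most(hours):
    """Clusters whose stated daily blocking limit does not exceed the given hours."""
    return sorted(c for c, d in FLEX_CLUSTERS.items()
                  if d["max_block_h"] is not None and d["max_block_h"] <= hours)
```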
Fig. 16.8 Recommendation for classification of controllable consumers for real laboratory
16.4.3 Operation Modes The control box supports two modes of operation. In the first mode, one controllable consumer is connected to each relay, and each of the four output relays triggers a simple on/off signal; this mode is applicable to Flex-Clusters 1, 2, and 3 in Fig. 16.8. Each relay is activated by a color zone as described below, and each zone is therefore assigned to Flex-Clusters. The second mode addresses graduated power consumption, as is possible for Flex-Clusters 4 and 5. Here, too, each relay is activated by a specific color zone, but the relays then correspond to, for example, 0%, 30%, 60%, and 100% power consumption. In this case, one controllable consumer is connected to all four relays.
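The second mode can be sketched as follows. The one-hot relay encoding is an assumption for illustration; the chapter states only that the four relays correspond to power steps such as 0%, 30%, 60%, and 100%.

```python
# Sketch of mode 2 (encoding assumed): the four relays together select one
# power step for a single stepped consumer, e.g., a charging point.

POWER_STEPS = [0, 30, 60, 100]  # percent, as in the example in the text

def mode2_relay_states(step_percent):
    """One-hot relay pattern selecting the given power step."""
    if step_percent not in POWER_STEPS:
        raise ValueError("unsupported power step")
    return [step_percent == s for s in POWER_STEPS]

states_60 = mode2_relay_states(60)  # third relay active, others off
```

In mode 1, by contrast, the four relays carry four independent on/off signals for up to four separate consumers.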
16.5 Algorithm for Calculating Blocking Times 16.5.1 Matching of Color Zones to Flex-Clusters The algorithm divides the estimated grid condition into four zones (green, yellow, orange, and red), which are calculated independently for each module. The key advantage is the self-adjustment to the specific location and therefore to the individual voltage level in the low-voltage grid. Table 16.1 shows the assignment of the color zones, with their corresponding blocking times, to the Flex-Clusters illustrated in Fig. 16.8. In zone red, all clusters are blocked; in zone green, all clusters are enabled. Zone orange together with zone red has a maximum duration of 6 hours per day, so that the loads in Flex-Cluster 3 are not blocked for a longer period of time. In zone yellow, the loads of Flex-Cluster 2, which may be blocked the longest, are also blocked. Since these may be blocked for up to 14 hours, this conversely implies that the zone must be green for at least 10 hours a day.
Table 16.1 Overview of the defined flex clusters, blocking times, and proposed color zones
Cluster 1: blocking time up to 2 h; flexible loads with regulatory and contractual conditions; blocked (OFF) in zone red only.
Cluster 2: blocking time up to 14 h; night storage heating; blocked (OFF) in zones red, orange, and yellow.
Cluster 3: blocking time up to 6 h; heating pump, charging point (0% | 100%); blocked (OFF) in zones red and orange.
Cluster 4: blocking time up to 6 h; heating pump, charging point (stepped, e.g., 0% | 30% | 60% | 100%); blocked (OFF) in zones red and orange.
In zone green, all clusters are enabled.
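The zone-to-cluster relation of Table 16.1 reduces to a small lookup. This is a sketch of that mapping only; the function name is an invention for illustration.

```python
# Zone-to-cluster blocking relation from Table 16.1: which Flex-Clusters
# (1-4) are blocked (OFF) in each color zone.

BLOCKED_IN_ZONE = {
    "green": set(),
    "yellow": {2},
    "orange": {2, 3, 4},
    "red": {1, 2, 3, 4},
}

def cluster_enabled(cluster, zone):
    """True if the given Flex-Cluster may operate in the given zone."""
    return cluster not in BLOCKED_IN_ZONE[zone]
```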
16.5.2 Calculation of Color Zones The calculation of the color zones is based on a comparison between the current measured data and specific threshold values, which are calculated on a daily basis. The moving average of the voltage of the first phase over the last 15 minutes is used as the current measurement value; short-term outliers are smoothed out by this averaging. The daily threshold values are determined from the historical, local measurement data of the metering point. In addition to the voltage at the grid connection point, the power peaks of the household are taken into account as well [1]. Unusually high power consumption of the measured household leads to a down-rating of the color zones; for example, yellow zones become orange, and so on. Figure 16.9 shows an example of the allocation of the color zones based on the grid condition. As shown in Table 16.1, all Flex-Clusters are released in the green zone during the period of higher voltage between 8 am and 5 pm, caused by PV feed-in in this grid section. In this case, it would benefit the grid if large consumption were shifted to the daytime, which could reduce the voltage rise during the day. In this example, the charging of an electric vehicle, which typically takes place in the evening, would be briefly interrupted during the early evening hours (red and orange zones). Exactly one week earlier, the same metering point shows a very different voltage profile, as shown in Fig. 16.10. On a cloudy day, the feed-in from PV is strongly reduced, leading to low voltages between 2 pm and 5 pm, exactly the times of the highest voltages one week later. As this example illustrates, the grid situation keeps changing, and static schedules, even for a specific household, cannot follow the grid state.
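A simplified re-implementation of this zone logic might look as follows. The threshold values, their ordering, and the exact down-rating rule are assumptions; the chapter states only that the 15-minute moving average is compared with daily thresholds and that unusually high household consumption down-rates the zone.

```python
# Simplified sketch of the zone classification (thresholds and down-rating
# rule assumed). High voltage indicates surplus (green), low voltage deficit (red).

ZONES = ["green", "yellow", "orange", "red"]

def classify_zone(v_avg_15min, v_th, power_w, power_peak_w):
    """v_th: daily thresholds, e.g. {'yellow': 228.0, 'orange': 226.0, 'red': 224.0}."""
    if v_avg_15min >= v_th["yellow"]:
        zone = "green"
    elif v_avg_15min >= v_th["orange"]:
        zone = "yellow"
    elif v_avg_15min >= v_th["red"]:
        zone = "orange"
    else:
        zone = "red"
    # Unusually high consumption of the household down-rates the zone one step.
    if power_w > power_peak_w and zone != "red":
        zone = ZONES[ZONES.index(zone) + 1]
    return zone

th = {"yellow": 228.0, "orange": 226.0, "red": 224.0}
z_green = classify_zone(229.5, th, power_w=1500, power_peak_w=8000)   # surplus
z_rated = classify_zone(229.5, th, power_w=9000, power_peak_w=8000)   # down-rated
z_orange = classify_zone(225.0, th, power_w=1500, power_peak_w=8000)  # deficit
```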
Fig. 16.9 Zones calculated by comparing the moving average of the voltage with the voltage thresholds, measured in a household in a grid dominated by photovoltaic feed-in
Fig. 16.10 Zones calculated for the same households as in Fig. 16.9, but one week later
16.5.3 Self-Adjustment of Thresholds in the Urban and Rural Grid The load management concept is designed to follow the local grid condition individually for each household. At the beginning of each day, the threshold values
Fig. 16.11 Daily calculated threshold value between the green and yellow zones (Vth,g) for one household in the urban grid
applicable for that day are calculated per metering point. The individual threshold values lead to an optimized adaptation of the zones over the day. Figure 16.11 shows the calculated threshold Vth,g between the green and yellow zones for one household in the urban grid. The graphs show the moving average Vfloat of phase L1 calculated over several weeks. The start of the day is defined as 05:00 (blue vertical line). The zones determined by the algorithm (red, orange, yellow, and green) are colored in the background. The median voltage is relatively constant, with a small covered voltage band of 4–6 V, which applies to most households in the urban grid. Due to the load-dominated characteristic of this grid type, the high loads in the evening lead to a frequent voltage drop between 5 pm and 8 pm. As the diagram shows, the algorithm triggers the orange and red zones for this time, blocking controllable consumers and mitigating the power consumption. Figure 16.12 shows the same graphs for three households in different feeders, all connected to the same local transformer station in the rural grid. Quite significant differences in the voltage measurements can be noticed. Besides different median voltages, the covered voltage bands differ: example (a) covers up to 6 V per day, while (b) covers up to 9 V and (c) about 12 V per day. The threshold value changes slightly from day to day, with a delta of at most 1 V. This behavior was found to be typical for rural grids. This approach of local and individual calculation of blocking periods allows controllable consumers to be switched in a grid-serving way, individually for each connection point.
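One plausible way to derive such a daily threshold from the local measurement history is a quantile of recent voltage samples. This is purely a sketch: the chapter states only that thresholds are recalculated each day per metering point from historical data, so the quantile choice below is an assumption.

```python
# Sketch of a daily threshold update (quantile choice assumed): Vth,g is taken
# as a quantile of recent voltage samples at this metering point.

def daily_threshold_green_yellow(history_volts, quantile=0.5):
    """Return an assumed Vth,g as a quantile of the local voltage history."""
    s = sorted(history_volts)
    idx = min(int(quantile * len(s)), len(s) - 1)
    return s[idx]

history = [226.0, 227.5, 228.0, 229.0, 230.5, 231.0]  # example samples, invented
vth_g = daily_threshold_green_yellow(history)
```

Because the threshold is recomputed from each household's own history, it automatically tracks the different median voltages and voltage bands seen in Figs. 16.11 and 16.12.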
Fig. 16.12 Daily calculated threshold value between green and yellow zones (Vth,g ) for three different households in different feeders, but connected at the same local transformer station in the rural grid over the same period of time
16.6 Grid-Serving Behavior of the Approach The following section presents several results from the real laboratory. Due to the limited number of installed control units, the share of controlled households is still small; therefore, no significant change in voltage is expected here. The impact is to be estimated in the future by extrapolations used in network calculations. However, the results from the real laboratory already allow some qualitative conclusions on the effect of the local control approach.
16.6.1 Typical Behavior and Constraints for Individual Metering Points A major benefit of this approach is the estimation of the probable grid state, meaning either an energy surplus (green) or deficit (red) in the low-voltage grid. The algorithm's approximation will certainly not deliver a correct assessment in 100% of the cases, but it offers a very good indicator most of the time. Especially in the low-voltage grid, the grid state can vary significantly between single feeders and is often unknown. The superposition of generation and consumption is complicated by the growing difficulty of predicting renewable feed-in. Consequently, energy surplus or deficit in the low-voltage grid depends on a variety of factors, such as grid type and extent, the state of the superordinate grid, the number of associated consumers, the types of consumers, the rated power of connected feed-in, the weather, and others. Because it measures directly at the households, this algorithm delivers a very good indicator without knowledge of the factors mentioned above. It operates autonomously and is self-adjusting to the specific grid connection point. Moreover, it can provide highly valuable information about the grid. One example is illustrated in Fig. 16.13, showing the color zones for a household in the rural grid with a relatively high amount of PV generation. The calculated color zones show the expected tendency toward an energy deficit in the morning between 5 am and 7:30 am and especially in the evening between 5 pm and 8 pm. However, during midday, when another load peak would be expected, only green and yellow zones appear. This leads to the conclusion that the consumption is fully compensated by the high amount of PV feed-in installed in this grid section. Such characteristic patterns, shown in Fig. 16.13 for a household with 6.7 kW of installed PV generation, can be identified for each metering point, allowing further analysis.
DSOs can gain valuable information about critical grid sections and might use the results for conclusions regarding, for example, stability or grid extension.
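The zone calculation described above can be pictured with a minimal, self-contained sketch. The percentile cut-offs, the threshold derivation, and the zone boundaries below are illustrative assumptions, not the project's published values; the point is only that the thresholds are learned from each metering point's own voltage history:

```python
# Illustrative sketch of a local "color zone" classifier: a voltage sample is
# mapped to a grid-state indicator using thresholds learned from the metering
# point's own history. Percentile cut-offs and zone logic are assumptions,
# not the project's published values.
from statistics import quantiles

def fit_thresholds(voltage_history):
    """Derive per-household thresholds from past voltage measurements."""
    q = quantiles(voltage_history, n=100)  # 99 percentile cut points
    return {"red": q[19], "orange": q[39], "yellow": q[59]}

def color_zone(voltage, th):
    """Low local voltage suggests energy deficit, high voltage surplus."""
    if voltage < th["red"]:
        return "red"       # probable deficit: block flexible loads
    if voltage < th["orange"]:
        return "orange"
    if voltage < th["yellow"]:
        return "yellow"
    return "green"         # probable surplus: release flexible loads

history = [229.5, 230.1, 231.0, 230.4, 229.8, 232.2, 228.9, 231.5]
th = fit_thresholds(history)
print(color_zone(228.5, th), color_zone(232.0, th))  # red green
```

Because the thresholds come from the household's own history, such a classifier self-adjusts to the local voltage level, mirroring the autonomous behavior described above.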
16 Intelligent and Self-Sufficient Control for Time Controllable Consumers. . .
Fig. 16.13 Calculated color zones by the algorithm for one household in the rural grid with 6.7 kW installed photovoltaic generation
Fig. 16.14 Voltage and power measurement over three days of a household with night storage heat
16.6.2 Avoidance of Unintended Simultaneity in a Grid Section

Already today, unwanted high peak loads due to simultaneous power consumption cause issues in the distribution grid. Figure 16.14 shows the measured three-phase voltage and apparent power at the grid connection point of a household with electrical storage heating. Approximately 25 A per phase is consumed at peak. Another electrical storage heater is installed in the same feeder. A total of seven storage heaters are connected to the corresponding local transformer station. Historically, the storage heaters were released by a ripple control signal at 9:30 pm. As this example shows, after changing to a time switch, the timing seems to stay the same
S. Uhrig et al.
and the storage heaters connected to the local transformer station are released at 9:30 pm. This simultaneous high power consumption is reflected in a strong voltage drop of up to 12 V on all three phases at this measuring point each day. Such superposition of high load consumption is forecast to increase in the future due to the charging behavior of electric vehicles in households [12]. A local control can help to mitigate this effect. With the method of local control, calculating individual blocking times results in varying periods of power consumption. Figure 16.15 shows the voltage measured at two households of one feeder. The general voltage behavior is very similar, but due to different threshold values caused by slightly differing voltage levels, the color zones also vary in time. In this example, Flex-Cluster 3, which includes charging points for electric vehicles, is released either at 12:30 (a) or 11 am (b). This is illustrated by the color change from orange to yellow. It can therefore be concluded that simultaneities can be efficiently avoided by applying such a local control.
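The effect of staggering can be illustrated with a toy calculation comparing seven heaters released simultaneously versus at individually computed times. All numbers below (slot grid, heater power, stagger pattern) are illustrative assumptions, not measured project data:

```python
# Toy comparison of simultaneous vs. individually staggered release of seven
# storage heaters, on a grid of 96 fifteen-minute slots per day.
def aggregate_peak(release_slots, duration, power, horizon=96):
    """Sum heater power per slot and return the coincident peak in kW."""
    load = [0.0] * horizon
    for start in release_slots:
        for t in range(start, min(start + duration, horizon)):
            load[t] += power
    return max(load)

P_KW, DUR = 17.0, 8  # ~17 kW per heater, 2 h of charging (illustrative)
central = aggregate_peak([86] * 7, DUR, P_KW)                    # all at 21:30
local = aggregate_peak([78, 80, 82, 84, 86, 88, 90], DUR, P_KW)  # staggered
print(central, local)  # 119.0 68.0
```

Staggering the release times lowers the coincident peak, which is exactly the mechanism by which household-specific blocking times avoid the simultaneity seen in Fig. 16.14.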
16.6.3 Self-Optimization and Avoidance of Self-Induced Load Peaks

German law enables financial benefits for customers if they agree to be partly controlled by the responsible DSO. This is not necessarily the case in other countries. However, from a consumer perspective, self-optimization is becoming increasingly relevant through the installation of private charging points, storage, and PV systems [13]. Self-optimization is understood here as the adjustment of consumption to one's own feed-in, for example, from PV. The local and self-sufficient approach discussed here provides an inherent starting point for self-optimization. Figure 16.16 illustrates the example of a private household with two electric vehicles and a heat pump. The charging station with two charging points is connected in a single phase, leading to a high load on this phase. Without control, as shown in (b), the first vehicle starts charging with a 10 A charging current directly when it is connected to the charging point at 5:30 pm (gray). At 8:30 pm, the second vehicle is connected for charging with the same power (green). This doubles the power consumption, leading to a self-induced load peak. Furthermore, this power peak appears exactly at peak consumption time [14]. In terms of self-optimization, the charging periods have to be adapted to the feed-in situation. If a PV system is installed at the customer's premises, it would be most beneficial to shift the charging to times of high feed-in, which would be automatically promoted by the control algorithm. Even if no PV generation system exists, a load shift as stimulated by the algorithm is beneficial. One example is simulated and shown in Fig. 16.16c. The high power consumption would trigger the algorithm to change, for example, from the yellow to the orange color zone. If the electric vehicles provide the possibility of charging with reduced power, both charging power values would be lowered to decrease the overall consumption and thus prolong the charging time.
Fig. 16.15 Voltage measured on the same day at two households of the same feeder

In case charging with reduced power is not possible, the second electric vehicle would be connected to another relay, which blocks during the orange zone. In this way, the charging is shifted in time, as illustrated in Fig. 16.16c.
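The two flexibility options described above, reducing the charging current where the vehicle supports it and otherwise shifting the start via a blocking relay, can be sketched as a simple decision rule. Zone names and current levels (10 A / 6 A) are illustrative assumptions:

```python
# Decision sketch for one EV charging point when the local zone turns
# restrictive: reduce the current if the vehicle supports it, otherwise a
# blocking relay shifts the charging in time. Values are illustrative.
def plan_charging(zone, supports_reduction, full_amps=10, reduced_amps=6):
    if zone in ("green", "yellow"):
        return ("charge", full_amps)      # no restriction
    if zone == "orange" and supports_reduction:
        return ("charge", reduced_amps)   # longer session, lower peak
    return ("blocked", 0)                 # relay opens; charging is shifted

print(plan_charging("orange", True))   # ('charge', 6)
print(plan_charging("orange", False))  # ('blocked', 0)
```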
Fig. 16.16 Voltage (a) and power (b) measurement of a household with two electric vehicles and a heat pump and simulated load shift (c)
16.7 Sustainability of the Approach

The aim of the work described here was to find and investigate a future-proof concept for the grid-serving usage of controllable consumers. DSOs benefit from using the concept for area-wide operation if the recognized and estimated potential turns out to be high. On the other hand, the concept should also be beneficial to customers, to provide a strong motivation to install such controls. Here, self-optimization seems to be a good approach, although this scenario may not be applicable in all countries unless financial benefits can be granted by the DSO. Thus, for a sustainable and long-term solution, both interests need to be served. The sustainability concept illustrated in Fig. 16.17 serves the interests of both DSOs and customers. In the first step, the implementation of the real laboratory, the principal feasibility is proven. Hardware and software are developed. The reliability of the algorithm can be assessed and improved, which is advantageous for customer acceptance. Furthermore, the measurement data allow first conclusions regarding the grid-serving behavior. However, due to the small penetration of these controls in the low-voltage grid, no significant changes in voltage are expected. The next, not yet realized, step is the transferability of the approach. Extrapolations will allow an estimation of the resulting influence on the voltage in the low-voltage grid. This will be done by grid simulations projecting a higher penetration of such controls. The level necessary to achieve a significant influence in the grid is analyzed. Concrete predictions on the potential of grid-serving behavior should be possible with these results. For the customer side, the regulatory framework in
Fig. 16.17 Sustainability concept of the investigated approach
other regions and countries has to be investigated to allow a conclusion about the transferability. The final step is a future-proof setting for the usage of controllable consumers. The DSOs will need regulatory framework conditions that allow effective and efficient access to controllable consumers. As this framework is in flux in many countries, recommendations are required. The experience from the real laboratory delivers valuable information in this context. The implementation, however, will strongly depend on customer acceptance and profitability for a household. Self-optimization of self-consumption with regard to one's own feed-in will be a strong economic motivation.
16.8 Summary

An autonomous algorithm for controlling controllable consumers can help to adjust power consumption in the low-voltage grid to the grid state, meaning power surplus or deficit. Such an approach was realized and investigated in a real laboratory established in an urban and a rural grid involving up to 100 households. The real laboratory was set up in such a way that all existing contracts and regulatory framework conditions were considered and fulfilled. This shows that such a control is applicable already today. For this purpose, the types of controllable consumers were clustered, and blocking times were calculated individually for each cluster and location. The developed algorithm used voltage and power measured at the individual households. Hence, no communication is necessary for operation, but DSOs still retain the possibility of priority commands. It was found that the control algorithm reliably calculates blocking times based on an estimation of the grid state. The blocking times are strongly related to each household and its individual consumption and feed-in behavior, resulting in a specific schedule for each. The analysis of data from the real laboratory showed some obvious differences between the urban grid with relatively constant voltage and the rural grid with high
voltage fluctuations. The algorithm independently adapts to the network type. The calculated color zones can be interpreted as an indicator of the grid state and therefore deliver data about the typical behavior of single locations. This is valuable information for distribution system operators to identify critical grid sections and derive conclusions regarding stability or grid extension. Furthermore, the individual schedules help to avoid unintended simultaneities, which might be caused by central commands. From a customer perspective as well, this control provides inherent self-optimization by shifting high loads toward times of energy feed-in. Based on the results of the real laboratory, the sustainability of the approach has to be investigated in the future. A quantitative potential for grid-serving behavior and self-optimization will be derived from grid simulations using extrapolations. Furthermore, the transferability to other regions and countries will be analyzed, allowing recommendations for the future definition of the contractual and regulatory framework. All these results are expected to be of value for the ongoing transition of energy systems all over the world and to contribute to making our future life more energy-efficient and sustainable.

Acknowledgments The approach presented in this chapter is being developed within the FLAIR2 project. The authors sincerely thank our project partners LEW Verteilnetz GmbH, Stromnetz Berlin GmbH, and e*Message Wireless Information Services Deutschland GmbH for their excellent support, as well as Professor Rolf Witzmann from the Technical University of Munich (TUM) for the scientific cooperation.
Literature

1. V. Barta, S. Baumgartner and S. Uhrig, "Algorithmus zur autarken netzdienlichen Steuerung von zeitlich flexiblen Lasten," 17. Symposium Energieinnovation, Graz, Austria, 2022.
2. European Union, General Data Protection Regulation, Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016.
3. Energiewirtschaftsgesetz (EnWG) of 7 July 2005 (BGBl. I p. 1970, 3621), last amended by Art. 84 of the Act of 10 August 2021 (BGBl. I p. 3436).
4. VDE FNN, FNN-Konzept zum koordinierten Steuerzugriff in der Niederspannung über das intelligente Messsystem, Berlin: VDE FNN, 2018.
5. S. Baumgartner, V. Barta and S. Uhrig, "Regulatory Framework for the Real Laboratory of a Decentralized Load Management Concept," CIRED workshop, Porto, Portugal, 2022.
6. Lechwerke AG, "LEW Wärmestrom," Lechwerke AG, Augsburg, Germany, 2021.
7. Bayernwerk Netz GmbH, "Netzentgelte für steuerbare Verbrauchseinrichtungen gemäß § 14a EnWG in der Niederspannung - Elektromobilität," Bayernwerk Netz GmbH, Regensburg, Germany, 2021.
8. Bayernwerk Netz GmbH, "Stromnetz," Bayernwerk Netz GmbH, Regensburg, Germany, 2021.
9. Bayernwerk Netz GmbH, "Schwachlastregelung," Bayernwerk Netz GmbH, Regensburg, Germany, 2021.
10. E.DIS Netz GmbH, "Preisblätter Netzentgelte Strom der E.DIS Netz GmbH," E.DIS Netz GmbH, Fürstenwalde/Spree, Germany, 2019.
11. Federal Ministry for Economic Affairs and Climate Action, "Entwurf eines Gesetzes zur zügigen und sicheren Integration steuerbarer Verbrauchseinrichtungen in die Verteilernetze und zur Änderung weiterer energierechtlicher Vorschriften," Berlin, Germany, 2020.
12. D. Heinz, "Erstellung und Auswertung repräsentativer Mobilitäts- und Ladeprofile für Elektrofahrzeuge in Deutschland," Working Paper Series in Production and Energy, vol. 30, p. 104, 2018.
13. Renewable Energy Research Network, "Forschungsroadmap Systemdienstleistungen," Jülich, Germany, 2020.
14. S. Baumgartner, V. Barta and S. Uhrig, "Praktische Umsetzung eines Reallabors für ein dezentrales Lastmanagement-Konzept," 17. Symposium Energieinnovation, Graz, Austria, 2022.
Stephanie Uhrig (née Raetzke) started her studies in electrical engineering in 1998 with the aim of becoming a medical engineer, developing medical equipment and helping to cure people. However, during her first years, she found that a power engineering focus would fit her better. She liked the applicability of many technical aspects to everyday life. During her studies, one of her favorite topics was electromagnetic fields. Combining both aspects, she finally became a high-voltage engineer. She graduated from the Technical University of Munich (TUM), Germany, in 2003 and subsequently started her Ph.D. Her research work aimed at new insulation materials with improved resistance to the typical stresses in high-voltage equipment. She earned her Ph.D. degree from TUM in 2009. In 2010, she started her industrial career with OMICRON electronics, a manufacturer of measurement equipment for the condition assessment of high-voltage equipment. As a product manager, she was responsible for various test systems and was a specialist in diagnostic methods such as dielectric response measurement and frequency response analysis. In her duty to define the potentials and limits of methods as well as to identify new applications, Stephanie traveled a lot. She was fortunate to perform measurements on power equipment worldwide and to train users in different methods. A particularly interesting aspect was the discussions with operators of power equipment. She experienced that evaluations and decisions in many respects do not depend purely on technical quality but also on legal, social, or cultural circumstances. With two teachers as parents, it was a kind of natural step to become a professor at the University of Applied Sciences in Munich (Germany) in 2017. Giving lectures matches very well with Stephanie's preference for working with people and passing knowledge on to others. Furthermore, she gained the opportunity to engage in sustainability within her research work.
She became a founding member of the Institute of Sustainable Energy Systems at her university. This research institute focuses on education and investigation to support a sustainable power supply and the resource-efficient use of energy. The energy systems are accomplishing a complex transition: power generation is changing toward decentralized, fluctuating renewables; strategies and intelligent equipment are developed for smart grid operation; energy consumption is shifted and optimized; and circular economy aspects and resource efficiency are considered in asset management strategies. All these developments are unavoidable and rely upon wide support from the general
public, even though this means economic or practical disadvantages. In her work as a professor, Stephanie is impressed by young people's support: their strong dedication, ingenuity, and willingness to change. Since she was a child, Stephanie has always enjoyed solving logic puzzles, combining different pieces of information into a picture containing even more information than the sum of all its pieces. Being open to methods and knowledge from different disciplines, such as humanitarian thinking, technical excellence, or an understanding of sustainability, helps to identify promising solutions. She is firmly convinced that interdisciplinarity is the key to a large number of emerging questions.
Sonja Baumgartner started her studies in International Information Systems Management in 2013 to learn management tools in technical fields. During her studies, she focused on the management of renewable energies. Through her travels, she became fascinated by nature and landscapes all over the world and wants to preserve nature for future generations. Therefore, she decided to continue studying renewable energies and to combine this with research and development in her master's studies. Her Master of Applied Research in engineering sciences enabled her to gain practical and scientific experience in managing a research project. In order to expand her practical know-how, she started as an electrical engineer at a German distribution grid operator, LEW Verteilnetz, in Southern Germany. There, she learned about the assets of the electrical grid and how to optimize grid stability, including a growing number of decentralized energy resources such as photovoltaic systems, through asset management. To deepen her knowledge of how to optimize the integration of renewable energy resources into the existing grid, she decided in 2020 to become a part-time scientific employee of the University of Applied Sciences Munich. Now she is able to work scientifically in combination with practical tasks from a distribution grid operator. With a background in both subject areas, she is able to investigate new ways for a better integration of renewable energies that cause fewer CO2 emissions and thus enable future generations to admire the unique nature.
Veronika Barta started her studies in Regenerative Energies – Electrical Engineering at the University of Applied Sciences Munich in 2015. During the last Bachelor's semesters, she concentrated on electrical energy technology. In order to develop personally and thematically, she decided to take the research-oriented Master of Applied Research in Engineering Sciences directly after her Bachelor's degree. This gave her the opportunity to act in a structured and independent manner during her studies, and she learned how to work scientifically. Her work focused on power grid technology. Even at school, science subjects were among her favorites. Natural sciences run in her family; both her grandfather and her mother studied theoretical physics. A career in a scientific,
technical field was a natural choice. To get deeper into scientific practice, she started working as a research assistant at the Munich University of Applied Sciences at the end of 2020. As part of a cooperative doctorate at the Technical University of Munich, she is investigating decentralized, self-sufficient control systems for flexible loads. Through the concept of mitigating power peaks and maintaining grid stability in the low-voltage grid, she wants to contribute her part to the future, secure power supply of the community.
Chapter 17
Discrete-Time Sliding Mode Control for Electrical Drives and Power Converters

Č. Milosavljević, S. Huseinbegović, B. Veselić, B. Peruničić-Draženović, and M. Petronijević
17.1 Introduction

Variable structure systems (VSS), as a control approach, originated in Russia 65 years ago and gradually entered the control research community worldwide. Sliding mode control (SMC) in VSS is recognized as a powerful way to obtain robust performance. Researchers from the University of Sarajevo contributed significantly to the initial development of VSS. They worked on the problems of stability, sensitivity, and robustness of VSS, and their results had a significant impact on the further development of SMC. The first paper published in the field of SMC in the English language is Ref. [1]. There, for the first time, the matching conditions in multivariable VSS were formulated, as well as the averaged control, later renamed the equivalent control in Ref. [2]. Another significant result of the mentioned cooperation was the SM speed control of a three-phase induction machine (IM) [3], which drew the attention of more researchers to the field of power electronics and SMC of electrical drives (EDs). That paper showed that VSS with SMC are natural candidates for the control of EDs and power converters. The application of digital hardware in control systems has directed research toward a digital approach to SMC. The impact of time discretization on
B. Peruničić-Draženović · S. Huseinbegović
Faculty of Electrical Engineering, University of Sarajevo, Sarajevo, Bosnia and Herzegovina
e-mail: [email protected]
Č. Milosavljević
Faculty of Electrical Engineering, University of East Sarajevo, Sarajevo, Bosnia and Herzegovina
e-mail: [email protected]
B. Veselić · M. Petronijević
Faculty of Electronic Engineering, University of Niš, Niš, Serbia
e-mail: [email protected]; [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
J. S. Tietjen et al. (eds.), Women in Power, Women in Engineering and Science, https://doi.org/10.1007/978-3-031-29724-3_17
B. Peruničić-Draženović et al.
continuous-time (CT) SM was analyzed for the first time in Ref. [4], which is a pioneering work in the field of discrete-time (DT) SMC. It was shown that the implementation of a CT-SMC design in digital hardware, which has a finite sampling frequency, produces a zig-zag motion around the sliding surface, named the quasi-sliding mode (QSM). Necessary and sufficient conditions for the existence of the bounded QSM have also been studied in Refs. [5–9]. DT equivalent control was first introduced by Drakunov and Utkin [6]. These initial conclusions were followed by further research aimed at discovering other features of DT-SMC systems, identifying implementation problems, and proposing solutions to overcome or mitigate them. It has been shown that DT equivalent control, which is linear in the case of a linear control system, gives a deadbeat response, since it brings the system state onto the sliding hyperplane in a single sampling period and keeps it on the hyperplane at the forthcoming sampling instants. This motion was denoted as ideal DT-SM, and it is possible only in a perfectly known linear system without any uncertainties and disturbances, which is rare. However, this shows that it is theoretically possible to achieve DT-SM by linear control, unlike in CT-SMC systems, where the control must be discontinuous. The earliest DT control laws (e.g., the DT reaching law [10]) with a dominant switching component ensured that the system trajectories successively intersect the sliding surface in a QSM domain with a dimension of order O(T). The switching control component is the very cause of chattering, which is the main obstacle to the widespread use of these systems. Digital realizations allowed more complex control laws to be implemented. In Ref. [11], another DT reaching law was proposed, based on a nonstationary sliding hyperplane and an additional integral action, which resulted in a narrower QSM domain and less chattering compared to Ref. [10].
In an effort to reduce or even eliminate chattering, DT-SMC algorithms have emerged that apply linear control in the final phase of the system motion toward the sliding surface. Linear deadbeat (equivalent) control in Ref. [12] was applied as soon as the actuator limitations allowed it, while in Ref. [13] that control was applied in a predefined vicinity of the sliding surface. The application of smooth control in a sliding surface vicinity certainly avoids chattering, but robustness to parametric and external disturbances, which is the main advantage of SMC systems, is notably reduced in DT-SMC. It is important to emphasize here that DT-SMC cannot, even theoretically, provide invariance to either parameter changes or external disturbances. The main reason is that the control stays constant during the sampling period while disturbances impact the system all the time. Higher sampling frequencies usually produce large control values, which are not allowed due to system input limitations. To improve the robustness of DT-SMC systems, one approach is to estimate and compensate disturbances. One-step-delayed disturbance estimation was applied in Ref. [14], which narrowed the QSM domain to O(T^2), but the steady-state error for state regulation achieved an accuracy of O(T). A more accurate tracking performance was obtained in Ref. [15] by application of a DT integral sliding manifold, which resulted in both an O(T^2) QSM domain and O(T^2) state regulation accuracy. The robust
17 Discrete-Time Sliding Mode Control for Electrical Drives and Power Converters
DT-SMC design is a very attractive problem that is solved using a disturbance observer or estimator [16–25] or an adequate sliding manifold design for unmatched disturbances [26]. The authors of this chapter have also made significant contributions to the development of DT-SMC systems. They worked on the development of DT-SMC algorithms [4, 13, 16, 17], on disturbance estimation via the sliding variable [18–22], and on sliding hyperplane design methods [25, 26]. Having extensive experience in the application of DT-SMC to EDs [18, 19, 21, 27–31] and power converters [32–36], the authors had the opportunity to experimentally test the performance of various control algorithms. Having noticed the main problems in practical realizations, the authors arrived at a controller structure that gave good results. The goal was to provide a fast system response without overshoot and good robustness to parametric and external disturbances, all under the assumption that the control magnitude is saturated by the actuator limits. A description of a DT-SMC algorithm that meets these objectives is given in the next section. The main objective of this chapter is to present applications of DT-SMC in EDs and power converters. As examples, the authors present their most important control designs, published over many years of research work. To allow a full understanding of the presented control designs, step-by-step procedures with all the specifics are presented in detail. Section 17.2 gives an introduction to DT-SMC theory and then concisely describes an essentially chattering-free DT-SMC algorithm with a two-scale reaching law [13], together with its stability conditions. The ED dynamical model, the corresponding DT-SMC design, and its verification are presented in Sect. 17.3. The torque (current), speed, and position cascaded control of the induction machine (IM) are considered. This section ends with experimental verification.
Section 17.4 considers the DT-SMC of grid-connected LCL-type power converters and offers some solutions. The presented controller designs are intended for a grid-connected inverter in an energy storage and/or conversion system. This section ends with experimental results showing the performance and effectiveness of the proposed control design. Conclusions are drawn in Sect. 17.5.
17.2 Discrete-Time VSS with Sliding Mode

This section summarizes our DT-SMC experience presented in publications [13, 20, 26, 27, 37, 38] by describing a DT-SMC algorithm that, in our view, gives excellent results in the control of EDs and power converters. Consider a linear time-invariant CT dynamic system given by the state-space model

ẋ(t) = A x(t) + b (u(t) + d(t)),   (17.1)
where x ∈ R^n is the state vector, u ∈ R is the control signal, and A ∈ R^{n×n} and b ∈ R^{n×1} are the state matrix and the input vector, respectively. Let the system be
subjected to a bounded matched disturbance d, |d(t)| ≤ d_0 < ∞. The DT model of the considered system, obtained by zero-order hold, has the form

x_{k+1} = A_d x_k + b_d (u_k + d_k),   A_d = e^{AT},   b_d = ∫_0^T e^{At} b dt,   (17.2)
under the assumption that the sampling period T is sufficiently small and that d(t) is slowly varying. Then, the disturbance d(t) can be regarded as constant during the sampling period, which preserves the matching conditions [1] in the DT domain. To establish SM with desired dynamics, it is necessary first to construct an appropriate sliding surface and then to find a control that provides reaching of the sliding surface and sliding along it. Each of these two phases of the system motion has its corresponding control component. An approach that enables a clear distinction between these two control components uses the δ-transform. Hence, the DT model (17.2) can be represented in the so-called δ-domain in the following way:

δx_k = (x_{k+1} − x_k)/T = A_δ x_k + b_δ (u_k + d_k),   A_δ = (A_d − I_n)/T,   b_δ = b_d/T.   (17.3)
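The zero-order-hold discretization and the δ-form can be reproduced numerically. The plant (A, b) and the sampling period below are illustrative choices, not one of the drives or converters discussed later:

```python
# Numerical sketch of (17.2) and (17.3): zero-order-hold discretization
# and the delta-form, for an illustrative second-order plant.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [0.0, -2.0]])
b = np.array([[0.0], [1.0]])
T = 0.001  # sampling period in seconds
n = A.shape[0]

# Augmented-matrix identity: expm([[A, b], [0, 0]] * T) = [[A_d, b_d], [0, 1]],
# which yields b_d = integral_0^T e^{At} b dt without explicit quadrature.
M = expm(np.block([[A, b], [np.zeros((1, n)), np.zeros((1, 1))]]) * T)
A_d, b_d = M[:n, :n], M[:n, n:]

# delta-domain model (17.3)
A_delta = (A_d - np.eye(n)) / T
b_delta = b_d / T
```

For small T, A_δ and b_δ approach A and b, which is part of the appeal of the δ-representation.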
SM should be organized along the sliding hyperplane g_{δ,k} = 0 in the state space, where the sliding variable g in the δ-domain is defined as g_{δ,k} = c_δ x_k, with the common assumption c_δ b_δ = 1. The desired SM dynamics is defined by a chosen spectrum of the system eigenvalues in the CT domain, λ = [λ_1 λ_2 ⋯ λ_{n−1} 0]. The zero eigenvalue indicates that the SM dynamics is of reduced (n − 1) order. CT eigenvalues can be mapped into the δ-domain as λ_{δ,i} = (e^{λ_i T} − 1)/T, i = 1, ⋯, n [13], which gives λ_δ = [λ_{δ,1} ⋯ λ_{δ,n−1} 0]. The sliding hyperplane vector c_δ that provides the desired SM dynamics can be found using the formula [25]

c_δ = [k_{δe} 1] · [A_δ b_δ]^†,   (17.4)
where k_{δe} is the state feedback gain vector in the system (17.3) that provides the desired spectrum λ_δ. The operator † denotes the matrix pseudo-inverse. The sliding variable dynamics can be expressed using (17.3) as

δg_{δ,k} = (g_{δ,k+1} − g_{δ,k})/T = c_δ δx_k = c_δ A_δ x_k + u_k + d_k.   (17.5)
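A numerical sketch of the hyperplane design formula (17.4), including the eigenvalue mapping into the δ-domain, may be helpful. The plant and the desired CT spectrum below are illustrative choices; a standard pole-placement routine supplies k_δe:

```python
# Sketch of the hyperplane design (17.4) with the mapping
# lambda_delta = (e^{lambda T} - 1)/T. Plant and spectrum are illustrative.
import numpy as np
from scipy.linalg import expm
from scipy.signal import place_poles

A = np.array([[0.0, 1.0], [0.0, -2.0]])
b = np.array([[0.0], [1.0]])
T = 0.001
n = A.shape[0]
M = expm(np.block([[A, b], [np.zeros((1, n)), np.zeros((1, 1))]]) * T)
A_delta = (M[:n, :n] - np.eye(n)) / T
b_delta = M[:n, n:] / T

lam = np.array([-20.0, 0.0])             # desired CT eigenvalues, one at zero
lam_delta = (np.exp(lam * T) - 1.0) / T  # mapped delta-domain spectrum

k_de = place_poles(A_delta, b_delta, lam_delta).gain_matrix  # 1 x n gain
c_delta = np.hstack([k_de, [[1.0]]]) @ np.linalg.pinv(np.hstack([A_delta, b_delta]))
print(c_delta @ b_delta)  # close to [[1.]], confirming c_delta b_delta = 1
```

Because the placed spectrum contains a zero eigenvalue, the linear system c_δ [A_δ b_δ] = [k_δe 1] is consistent, and the pseudo-inverse recovers its exact solution.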
The equivalent control in the δ-domain is determined by solving g_{δ,k+1} = 0 (dead-beat response), which according to (17.5) gives u_{eq,k} = −c_δ A_δ x_k − g_{δ,k}/T − d_k. The term g_{δ,k}/T is the reaching control component that becomes zero on the sliding surface. The equivalent control requires information about the disturbance d_k, which is usually unavailable, so the feasible part of the equivalent control is

u_k = −c_δ A_δ x_k − g_{δ,k}/T = −(k_{δe} + (1/T) c_δ) x_k.   (17.6)
The control task is to provide a fast response without overshoot and with reduced chattering, while exhibiting strong robustness to matched disturbances. To meet these design requirements, the authors opted for the concept of linear control assisted by disturbance estimation and compensation [20]. The linear control must be like (17.6) to obtain a fast response, but (near-)dead-beat control generates large control efforts that are usually above actuators' limits. Therefore, system input saturation must be taken into consideration. The authors have proposed several similar variants of this DT-SMC strategy in [31, 37], applying nonlinear disturbance estimation. The essence of the estimator is the integration of a nonlinear function of the sliding variable, as in the super-twisting control approach [39]. Because of the DT integrator inside the estimator, disturbance compensation is activated in the vicinity of the sliding surface to avoid overshoot due to integrator windup, related to the inevitable control saturation arising during the action of the linear dead-beat control. A rigorous mathematical analysis was performed in Ref. [38] for the case when the signum function of the sliding variable is integrated within the estimator. Although such a discontinuous signal passes through the DT integrator, some chattering has been observed. In order to further reduce chattering, a continuous approximation of the discontinuous signum function is applied within the estimator in this chapter. The proposed DT-SM controller is described by the following equations:

u_k = U_0 sgn(u_{Σ,k}) if |u_{Σ,k}| > U_0,   u_k = u_{Σ,k} if |u_{Σ,k}| ≤ U_0,
u_{Σ,k} = u_{l,k} − p_{2,k} u_{c,k},
u_{l,k} = −c_δ A_δ x_k − T^{−1} (k_{s1} + (1 − p_{2,k}) k_{s2}) g_{δ,k},
u_{c,k} = u_{c,k−1} + k_{int} T g_{δ,k−1}/(β + |g_{δ,k−1}|),   k > 0,   β ≥ 0,
p_{2,k} = p_{1,k−1},   p_{1,k} = 0 if |u_{Σ,k}| > U_0,   p_{1,k} = 1 if |u_{Σ,k}| ≤ U_0,
k_{s1}, k_{s2} > 0,   k_{s1} + k_{s2} ≤ 1.   (17.7)
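A direct transcription of the controller (17.7) as a stateful routine may clarify the mode switching. This is a sketch: the plant data and gains are illustrative, u_sig stands for the computed control before saturation, and a guard for the case β = 0 with a zero previous sliding variable is an added assumption:

```python
# Sketch transcription of the DT-SM controller (17.7); values illustrative.
import numpy as np

class DTSMC:
    def __init__(self, c_delta, A_delta, T, U0, ks1, ks2, k_int, beta):
        assert ks1 > 0 and ks2 > 0 and ks1 + ks2 <= 1
        self.cA = c_delta @ A_delta       # row vector c_delta A_delta
        self.T, self.U0 = T, U0
        self.ks1, self.ks2, self.k_int, self.beta = ks1, ks2, k_int, beta
        self.u_c = 0.0    # compensational control u_{c,k}
        self.g_prev = 0.0 # sliding variable g_{delta,k-1}
        self.p1 = 0.0     # flag p_{1,k}; p_{2,k} = p_{1,k-1}

    def step(self, x, g):
        p2 = self.p1
        # Integrator driven by a continuous approximation of sgn(g_{delta,k-1})
        den = self.beta + abs(self.g_prev)
        if den > 0.0:
            self.u_c += self.k_int * self.T * self.g_prev / den
        u_l = -(self.cA @ x).item() - (self.ks1 + (1.0 - p2) * self.ks2) * g / self.T
        u_sig = u_l - p2 * self.u_c
        self.p1 = 1.0 if abs(u_sig) <= self.U0 else 0.0
        self.g_prev = g
        return float(np.clip(u_sig, -self.U0, self.U0))  # saturation to +/- U0

ctrl = DTSMC(np.array([[3.0, 1.0]]), np.array([[0.0, 1.0], [0.0, -2.0]]),
             T=0.001, U0=10.0, ks1=0.5, ks2=0.5, k_int=5.0, beta=0.1)
print(ctrl.step(np.array([[0.1], [0.0]]), 0.2))    # saturated: -10.0
print(ctrl.step(np.array([[0.0], [0.0]]), 0.001))  # out of saturation: -1.0
```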
The overall control $u_{\Sigma,k}$ has a linear component $u_{l,k}$ and a nonlinear (compensation) component $u_{c,k}$. When the system state is far away from the sliding surface, the controller output $u_k$ is saturated to $U_0$ whenever the calculated control $u_{\Sigma,k}$ exceeds the actuator limit, which is usually the case, since the control tries to bring the system state onto the sliding surface (for the nominal system) in a single step. When the controller output leaves the saturation, the linear control

$$u_{l,k} = -c_\delta A_\delta x_k - T^{-1}(k_{s1}+k_{s2})\,g_{\delta,k} \tag{17.8}$$
B. Peruničić-Draženović et al.
is applied during one sampling period. This mechanism is implemented using the auxiliary variables $p_{1,k}$ and $p_{2,k}$. In the limit case $k_{s1}+k_{s2}=1$, the control (17.8) becomes dead-beat control that yields $g_{\delta,k+1}=0$ for the nominal system ($d_k = 0$). In a real system, where $d_k \ne 0$ (including unmodeled dynamics), the selected gains $k_{s1}+k_{s2} \le 1$ bring the system state into a vicinity of the sliding surface. In the next sampling period, the linear control gain is reduced ($k_{s2}$ is switched off), and the nonlinear compensation control is activated. Now, the applied control is described by

$$u_k = -c_\delta A_\delta x_k - T^{-1}k_{s1}\,g_{\delta,k} - u_{c,k},\qquad u_{c,k} = u_{c,k-1} + k_{int}T\,\frac{g_{\delta,k-1}}{\beta+|g_{\delta,k-1}|}. \tag{17.9}$$
The control component $u_{c,k}$ is the output of a DT integrator with gain $k_{int}$, which tends to compensate for the disturbance action. The parameter $\beta$ affects the shape of the signum-function approximation: for $\beta = 0$, the compensation control becomes $u_{c,k} = u_{c,k-1} + k_{int}T\,\mathrm{sgn}(g_{\delta,k-1})$. System stability implies convergence of the system trajectory toward the sliding surface in both control modes, in saturation and out of saturation. In saturation, the control signal is constant ($|u_k| = U_0$) and does not depend on the applied DT-SMC algorithm. Convergence in saturation was investigated in Ref. [38]. The obtained condition on the value $U_0$ that forces the system to move toward the sliding surface and to leave the saturation is given by the following proposition.

Proposition 1 [31, 38] The stable DT system (17.3) with the controller (17.7) operating in the saturation will leave it in a finite number of sampling periods if

$$U_0 > |c_\delta A_\delta x_k| + d_0,\qquad \forall k \ge 0. \tag{17.10}$$
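As an illustration, the controller (17.7) can be written as a compact per-sample update. The sketch below is ours, not the authors' implementation; the default gains are illustrative values in the spirit of Table 17.2, and `c_delta_A_delta` stands for the precomputed row vector $c_\delta A_\delta$.

```python
import numpy as np

def dtsm_controller_step(x_k, g_k, g_prev, uc_prev, p2_k,
                         c_delta_A_delta, T, U0,
                         ks1=0.25, ks2=0.15, kint=1000.0, beta=0.001):
    """One update of the DT-SM controller (17.7).

    x_k: state vector; g_k/g_prev: sliding variable at steps k and k-1;
    uc_prev: previous compensator output; p2_k: 0 if the previous step
    saturated, 1 otherwise. Returns (u_k, uc_k, p1_k); p1_k becomes
    p2 at the next step.
    """
    # DT integrator of the smoothed signum of the sliding variable
    uc_k = uc_prev + kint * T * g_prev / (beta + abs(g_prev))
    # Linear (near dead-beat) component; ks2 is switched off after saturation
    ul_k = -float(c_delta_A_delta @ x_k) - (ks1 + (1 - p2_k) * ks2) * g_k / T
    u_sigma = ul_k - p2_k * uc_k       # compensation active out of saturation
    if abs(u_sigma) > U0:              # actuator limit
        return U0 * float(np.sign(u_sigma)), uc_k, 0
    return u_sigma, uc_k, 1
```

Far from the surface the dead-beat term dominates and the output clips at ±U0; once the output leaves saturation, the reduced-gain linear term together with the compensator (17.9) takes over.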
After leaving the saturation, the control (17.8) is applied for one sampling period, which brings the system state into an $O(T)$ vicinity of the sliding surface, depending on the disturbance magnitude. In the next sampling period, the gain is reduced and the disturbance compensation is activated, so the applied control is described by (17.9). Taking into account (17.5), the sliding variable dynamics can be described by

$$g_{\delta,k+1} = (1-k_{s1})\,g_{\delta,k} + T\,(d_k - u_{c,k}),\qquad u_{c,k+1} = u_{c,k} + k_{int}T\,\frac{g_{\delta,k}}{\beta+|g_{\delta,k}|}. \tag{17.11}$$
It is obvious that the compensation control $u_{c,k}$ actually represents a disturbance estimate [20]. Denoting the estimation error by $z_k = d_k - u_{c,k}$, the system dynamics (17.11) can be represented as

$$g_{\delta,k+1} = (1-k_{s1})\,g_{\delta,k} + T z_k,\qquad z_{k+1} = z_k - k_{int}T\,\frac{g_{\delta,k}}{\beta+|g_{\delta,k}|} + \Delta_k,\qquad \Delta_k = d_{k+1}-d_k. \tag{17.12}$$
17 Discrete-Time Sliding Mode Control for Electrical Drives and Power Converters
The stability of the nonlinear system (17.12) can be analyzed using the pseudo-linear form [40], which in this case gives the following model:

$$\sigma_{k+1} = G_{\delta,k}\,\sigma_k + p_k,\qquad \sigma_k = \begin{bmatrix} g_{\delta,k}\\ z_k \end{bmatrix},\quad G_{\delta,k} = \begin{bmatrix} 1-k_{s1} & T\\[4pt] -\dfrac{k_{int}T}{\beta+|g_{\delta,k}|} & 1 \end{bmatrix},\quad p_k = \begin{bmatrix} 0\\ \Delta_k \end{bmatrix},\quad \Delta_k = d_{k+1}-d_k. \tag{17.13}$$
The following proposition gives the stability condition.

Proposition 2 Convergence of the DT system (17.13) governed by the DT-SM controller (17.7) is guaranteed within the area defined by

$$|g_{\delta,k}| > k_{int}T^2/k_{s1} - \beta. \tag{17.14}$$
Proof The proof is analogous to the one in Ref. [38].

Condition (17.14) shows that the introduction of the continuous approximation of the signum function, characterized by $\beta$, not only reduces chattering but can also expand the convergence area, i.e., it shrinks the vicinity of the sliding surface where convergence is not guaranteed. It is interesting to note that for $\beta = k_{int}T^2/k_{s1}$ the condition (17.14) becomes $|g_{\delta,k}| > 0$, which indicates that convergence holds in the whole state space; in that case, DT-SM occurs. It should be noted that these conclusions are valid for constant and slowly varying disturbances. For other types of perturbations, their amplitude and frequency will affect the width of the QSM domain, which will certainly be larger than the theoretically obtained width determined by the convergence region (17.14). The continuous approximation results in a certain loss of robustness, and thus of accuracy, due to the smooth compensation component. Therefore, a compromise must be found between the acceptable level of chattering and the required accuracy.
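The convergence mechanism of (17.12) is easy to reproduce numerically. The sketch below simulates the sliding-variable/estimation-error pair for a constant disturbance; the numbers (T, gains, d) are our illustrative choices in the range of the speed-controller values from Table 17.2, not values taken from the chapter.

```python
# Error dynamics (17.12) with a constant disturbance d (so Delta_k = 0):
# the estimation error z_k = d - uc_k and the sliding variable g both decay.
T, ks1, kint, beta = 0.5e-3, 0.12, 100.0, 0.5   # illustrative gains
d = 2.0
g, z = 1.0, d                                   # uc_0 = 0, hence z_0 = d
for _ in range(50000):
    g, z = (1 - ks1) * g + T * z, z - kint * T * g / (beta + abs(g))
uc = d - z                                      # compensator output -> estimate of d
```

After the transient, g settles near zero and uc approaches d, i.e., the DT integrator has reconstructed the disturbance.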
17.3 DTSM Control for Electrical Drives

SMC of EDs is widely described in [41]. Control of particular variables of EDs using the DT-SMC principle has been published in many papers, including ours, e.g., [18, 19, 21, 27–29]. Complete control of an ED comprises control of torque (current), speed, and position. Control of these variables is usually organized as a cascade with three control loops. Each inner control loop obtains its reference from the outer one via a limiter, which constrains the maximal permissible value of the inner-loop control variable. This approach is very popular in conventional ED control practice using proportional-integral (PI) controllers. The application of cascaded SM controllers has been rarely reported in the literature. One of the first publications in which a cascade control structure employed higher-order SMC of a permanent magnet DC machine is [42], where the controllers were designed in the CT domain with adaptation for DT realization. In Ref. [43], control of position, speed, and torque of an IM is given. In both papers [42, 43], acceleration was used in the speed control loop. In this section we present, as an example, cascade control of a three-phase IM [31] using the DT-SMC algorithm proposed in Sect. 17.2, which does not require information about acceleration. The same approach can be used for other machine types. The explanation below assumes that the reader has basic knowledge of the field-oriented control (FOC) principle for alternating current (AC) machines, using the Clarke and Park coordinate transformations.

Fig. 17.1 Block diagram of the IM position control

A block diagram of the considered IM cascade control is presented in Fig. 17.1. The controlled plant occupies the shaded area (on the right-hand side of the figure) and contains the three-phase IM with current and position sensors, and the controlled DC/AC voltage converter (PWM block with three-phase voltage source inverter (VSI)). The central part of the block diagram displays the FOC transformations. Blocks 3/2 and 2/3 realize the Clarke/Park 3-phase to 2-phase transformation and vice versa, respectively. Block $T(\theta) = \begin{bmatrix} \cos\theta & \sin\theta\\ -\sin\theta & \cos\theta \end{bmatrix}$ is the projection matrix from the stationary αβ frame into the synchronously rotating dq frame aligned with the rotor flux, whose angle is determined by the slip calculator. Besides the mentioned blocks, there is a flux-producing current $i_{ds}$ controller (C1f), which serves to keep the machine flux constant. The rotor angular speed ω is obtained by differentiating the position angle θ measured by the encoder. To control the IM variables $i_{qs}$, ω, and θ, the controllers C1, C2, and C3 are used, respectively. As already explained, the outputs of these controllers are saturated to predefined values. The controllers C1, C2, and C3, designed using the DT-SMC approach of Sect. 17.2, will be explained after a brief introduction of the IM mathematical model.
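For readers less familiar with the FOC transformations, the αβ→dq projection performed by the block T(θ) is simply a plane rotation. A minimal sketch (our illustration, not the chapter's code):

```python
import numpy as np

def to_dq(theta, x_alpha, x_beta):
    """Rotate stationary alpha-beta quantities into the rotating dq frame
    via T(theta) = [[cos t, sin t], [-sin t, cos t]] from Fig. 17.1."""
    c, s = np.cos(theta), np.sin(theta)
    return c * x_alpha + s * x_beta, -s * x_alpha + c * x_beta
```

For theta = 0 an alpha component maps onto the d axis unchanged; at theta = π/2 a pure beta component maps onto the d axis, as expected for a frame rotating with the flux.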
The three-phase squirrel-cage IM is described by an eighth-order mathematical model: three differential equations for the stator, three for the rotor, and two for the mechanical part. To simplify this three-phase model, it is transformed into the following two-phase model [19]:
$$\dot x = (A + \Delta A)\,x + b\,u_{qs} + b_l T_l \tag{17.15}$$

$$x = \begin{bmatrix} \theta_m & \omega_m & i_{qs} \end{bmatrix}^T,\qquad b = \begin{bmatrix} 0 & 0 & \dfrac{1}{\sigma L_s} \end{bmatrix}^T,\qquad b_l = \begin{bmatrix} 0 & -\dfrac{1}{J} & 0 \end{bmatrix}^T,$$

$$A = \begin{bmatrix} 0 & 1 & 0\\[4pt] 0 & -\dfrac{B}{J} & \dfrac{k_t}{J}\\[4pt] 0 & 0 & -\dfrac{1}{T_{el}} \end{bmatrix},\qquad \Delta A = \begin{bmatrix} 0 & 0 & 0\\ 0 & 0 & 0\\[2pt] 0 & -\dfrac{2k_t L_r}{3\sigma L_m^2} & -\dfrac{R_r}{\sigma L_r} \end{bmatrix},$$

$$\sigma = 1 - \frac{L_m^2}{L_s L_r},\qquad k_t = \frac{3p_p}{2}\,\frac{L_m^2}{L_r}\,i_{ds}^{*},\qquad T_{el} = \sigma L_s/R_s,$$
where $R_r$, $R_s$, $L_r$, and $L_s$ are the resistances [Ω] and inductances [H] of the rotor and stator windings, $L_m$ is the mutual inductance [H], $T_l$ is the load torque [Nm], $p_p$ is the number of pole pairs of the IM, $\omega_m$ and $\theta_m$ are the rotor mechanical speed [rad/s] and position [rad], and $i_{ds}^{*}$ is the reference of the d-axis current controller. The values of the concrete IM parameters are given in Ref. [31]. The current $i_{qs}$ controller should be designed first, and then the $\omega_m$ speed controller. It is obvious from (17.15) that the $i_{qs}$ current dynamics are described by the first-order differential equation

$$\frac{di_{qs}}{dt} = a_i i_{qs} + b_i u_{qs} + d_i,\qquad a_i = -T_{el}^{-1},\quad b_i = \frac{1}{\sigma L_s},\quad d_i = -\frac{2k_t L_r}{3\sigma L_m^2}\,\omega_m - \frac{R_r}{\sigma L_r}\,i_{qs}. \tag{17.16}$$
The matched disturbance $d_i$ should be compensated by the DT-SMC and the disturbance compensator described in Sect. 17.2. By introducing the new variable $x = i_{qs}^{ref} - i_{qs}$, we obtain the current error dynamics equation

$$\dot x = a_i x - b_i u_{qs} - \Big(d_i + a_i i_{qs}^{ref} - \frac{d}{dt} i_{qs}^{ref}\Big), \tag{17.17}$$

in which an additional disturbance caused by the reference occurs. By applying the δ-transform to (17.17), the first-order DT δ-model is obtained in the form of (17.3). Assuming $c_\delta b_\delta = 1$ (as in Sect. 17.2) and $k_{\delta e} = c_\delta A_\delta$, in this case $c_{\delta,i} = 1/b_{\delta,i}$ and $k_{\delta e,i} = c_{\delta,i}a_{\delta,i} = a_{\delta,i}/b_{\delta,i}$. The design procedure of Sect. 17.2 for the current controller in the form of (17.7) can then be applied directly.
After establishing SM in the current control loop, its dynamics are reduced to zero order. The rotor speed equivalent dynamics, according to (17.15), then become

$$\dot\omega_m = a_\omega\,\omega_m + b_\omega\,(u_\omega + d_\omega),\qquad u_\omega = i_{qs},\qquad a_\omega = -\frac{B}{J},\quad b_\omega = \frac{k_t}{J},\quad d_\omega = -\frac{T_l}{k_t}. \tag{17.18}$$
The speed controller is designed in a similar manner as the current controller, by introducing the speed error $x_\omega = \omega_{ref} - \omega_m$. The following design relations are obtained: $c_{\delta,\omega} = 1/b_{\delta,\omega}$, $k_{\delta e,\omega} = a_{\delta,\omega}/b_{\delta,\omega}$. Note that $a_\delta$ and $b_\delta$ are easily obtained from the MATLAB command [ad, bd] = c2d(a, b, T) as $a_\delta = (a_d - 1)/T$ and $b_\delta = b_d/T$. The remaining current and speed controller parameters are chosen according to the recommendation $k_{int} \le T^{-1}$ [18, 20]. The gains $k_{s1}$, $k_{s2}$, and $k_{int}$ are adjustable parameters that can be tuned in the final adjustment of the controllers.

Position controller design. Since the speed controller is designed for a first-order dynamical system, the equivalent SM dynamics describing the closed speed control loop theoretically become zero-order dynamics. In practice, however, due to inevitable unmodeled dynamics, such as computation and sensor delays, the equivalent speed control loop should be treated as a first-order system whose time constant should be experimentally identified. Therefore, the equivalent motor position dynamics in terms of the position error $x_\theta = \theta_{ref} - \theta_m$ should be modeled as (17.1) with

$$A_\theta = \begin{bmatrix} 0 & 1\\ 0 & -a_\theta \end{bmatrix},\qquad b_\theta = \begin{bmatrix} 0\\ b_\theta \end{bmatrix}, \tag{17.19}$$
where $a_\theta = b_\theta = 1/T_{es}$, $T_{es}$ is the estimated time constant of the closed-loop speed dynamics, $u = u_\theta$ is the control for the position loop generated by the position controller, and $d = d_\theta$ is the overall disturbance, including the disturbance from the reference. By choosing the desired dynamics $\lambda_\delta = [\lambda_1\ \ 0]$ in the CT domain, according to the instructions in Sect. 17.2 and using MATLAB commands, the sliding surface vector $c_{\delta,\theta}$ can be determined as

$$k_{\delta e,\theta} = \mathrm{acker}\big(A_{\delta,\theta},\, b_{\delta,\theta},\, \lambda_\delta\big),\qquad c_{\delta,\theta} = [\,k_{\delta e,\theta}\ \ 1\,]\cdot \mathrm{pinv}\big([\,A_{\delta,\theta}\ \ b_{\delta,\theta}\,]\big). \tag{17.20}$$
The other controller parameters are chosen as proposed in Sect. 17.2 and finally tuned; again, $k_{int} \le T^{-1}$ [18, 20].

Design example and experimental results. Rated parameters of the controlled IM are given in Table 17.1. The obtained parameters of the current, speed, and position controllers are given in Table 17.2. Note that the $i_{ds}$ (flux) current controller is identical to the $i_{qs}$ controller.
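The δ-model and surface-design steps of (17.19)–(17.20) can be reproduced outside MATLAB. The sketch below is our Python/SciPy rendering: `cont2discrete` replaces c2d, and `place_poles` stands in for acker (it requires distinct poles, so the poles below are illustrative and distinct); $a_\theta = b_\theta = 500$ corresponds to $T_{es} = 2$ ms, as in Table 17.2.

```python
import numpy as np
from scipy.signal import cont2discrete, place_poles

T = 1e-3                                   # position-loop sampling period (Tpos)
a_th = b_th = 500.0                        # 1/Tes, as in Table 17.2
A = np.array([[0.0, 1.0], [0.0, -a_th]])
b = np.array([[0.0], [b_th]])

# ZOH shift-operator model, then the delta-model: A_delta=(Ad-I)/T, b_delta=bd/T
Ad, bd, *_ = cont2discrete((A, b, np.eye(2), np.zeros((2, 1))), T)
A_delta, b_delta = (Ad - np.eye(2)) / T, bd / T

# Desired dynamics: distinct illustrative poles instead of acker's choice
lam = np.array([-50.0, -60.0])
k_de = place_poles(A_delta, b_delta, lam).gain_matrix

# Sliding-surface vector as in (17.20): c = [k_de 1] * pinv([A_delta b_delta])
c_delta = np.hstack([k_de, [[1.0]]]) @ np.linalg.pinv(np.hstack([A_delta, b_delta]))
```

The pseudo-inverse step gives, in the least-squares sense, $c_\delta A_\delta \approx k_{\delta e}$ and $c_\delta b_\delta \approx 1$, which is exactly the normalization assumed in Sect. 17.2.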
Table 17.1 Rated parameters of the IM [31]

Nominal power [W]: 1500              Stator resistance [Ω]: 5.45
Phase voltage [V]: 230               Rotor resistance [Ω]: 3.1852
Frequency [Hz]: 50                   Total stator inductance [H]: 0.4531
Phase current [A]: 3.25              Magnetizing inductance [H]: 0.4413
Nominal speed [rpm]: 2860            Total rotor inductance [H]: 0.4531
Total inertia [kgm²]: 0.0035         Friction coefficient [Nms/rad]: 0.0022
Table 17.2 Controller parameters

q-axis current controller: ai = −233.8936, bi = 42.8992, kδe = −5.4521, cδ = 0.0236, ks1 = 0.25, ks2 = 0.15, kint = 1000, U0 = √2·230, β = 0.001
Speed controller: aω = −0.6286, bω = 340.7439, kδe = −0.0015, cδ = 0.0025, ks1 = 0.12, ks2 = 0.8, kint = 100, U0 = 5.02, β = 0.5
Position controller: Aθ = [0, 1; 0, −500], bθ = [0; 500], kδe = [0, −0.9373], cδ = [0.0627, 0.0025], ks1 = 0.9, ks2 = 0.1, kint = 40, U0 = 30, β = 0.001
Full experimental verification of the proposed IM control approach was conducted on the laboratory platform of [31]. The platform contains two mechanically directly coupled IMs (IM1 and IM2), where IM1 serves as the controlled ED, whereas IM2 produces the desired load torque $T_l$. Both IMs are controlled via separate power converters, FC1 and FC2. The control algorithms were implemented on a DS1103 dSPACE control board programmed from a personal computer (PC). The sample time for the FOC subsystem, as well as for the current controllers in the innermost control loops, is chosen as $T_s = 0.1$ ms. The magnetizing flux current reference is set to $i_{ds}^{*} = 1.85$ A. The sample time for the speed controller is $T = 0.5$ ms and for the position controller $T_{pos} = 1$ ms. An optical incremental encoder with quadrature decoder gives an angle resolution of 0.00019175 rad. The Euler backward difference method is applied for speed estimation from the position measurements, resulting in an angular speed resolution of 0.3835 rad/s. Experimental investigation and fine tuning of the controller parameters were carried out using the described laboratory ED prototype with IM, whose block scheme is presented in Fig. 17.1. The switching frequency of the PWM inverter is set to 8 kHz. A complex test position profile is chosen as the reference (17.21), while the applied mechanical load is defined by (17.22):

$$\theta_{ref}(t) = 10\,\big[h(t-1) - 2h(t-5) + h(t-7) - t\,h(t-11) + 2t\,h(t-12) - 2t\,h(t-14) + 2t\,h(t-16) - t\,h(t-17) + h(t-18)\sin(0.5\pi t)\big], \tag{17.21}$$

$$T_l(t) = 5h(t-2) - h(t-4) + h(t-8)\sin(4\pi t), \tag{17.22}$$
where h(t) is the Heaviside step function. Results of the experiments are presented in Fig. 17.2. Figure 17.2a shows the position reference generated by the computer (black line) and the measured IM position (green line), obtained from the encoder. It is evident that the proposed control system provides excellent reference tracking, except at the time instants where the Lipschitz condition (bounded reference time-derivative) is not satisfied. Figure 17.2b gives the reference signal for the speed control loop (black line), generated by the position controller, and the measured IM speed (green line). It can be seen that the position controller operates in the saturation mode when the position reference time-derivative is unbounded and, consequently, the position tracking error is large; after leaving the saturation, the position error returns to zero. The current reference for the current control loop (black line), generated by the speed controller, and the measured IM current (green line) are shown in Fig. 17.2c. The current dominantly tracks its reference signal; the peaks of the current error are due to sudden changes in the speed signal. The same phenomena can be noticed in the speed and current control loops. Note that in the speed and current control loops the error signals are also the sliding variables, which exhibit very small chattering amplitude.
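The test profiles can be generated directly from their definitions. The sketch below encodes (17.21)–(17.22) as we read them from the printed formulas; the grouping of the ramp terms is our reading of the source and should be checked against [31] before reuse.

```python
import numpy as np

h = lambda t: np.heaviside(t, 1.0)     # Heaviside step with h(0) = 1

def theta_ref(t):
    """Test position profile (17.21) [rad]."""
    return 10.0 * (h(t - 1) - 2 * h(t - 5) + h(t - 7)
                   - t * h(t - 11) + 2 * t * h(t - 12)
                   - 2 * t * h(t - 14) + 2 * t * h(t - 16) - t * h(t - 17)
                   + h(t - 18) * np.sin(0.5 * np.pi * t))

def T_load(t):
    """Mechanical load torque profile (17.22) [Nm]."""
    return 5 * h(t - 2) - h(t - 4) + h(t - 8) * np.sin(4 * np.pi * t)
```

Both functions accept scalars or NumPy arrays, so the full profiles can be evaluated on a time grid in one call.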
Fig. 17.2 Experimental results in control of IM. (a) Response of position tracking (above) and position tracking error (below). (b) Response of speed tracking (above) and speed tracking error (below). (c) Response of current tracking (above) and current tracking error (below)

17.4 DTSM Control for Power Converters

In the last few years, the use of renewable energy sources and energy storage systems has increased dramatically [44]. Three primary motivators stimulate the growth of these technologies: energy security, economic impact, and carbon dioxide emission reduction. Power electronic converters play a crucial role in the integration of renewable energy sources and energy storage systems into the power grid [45–47]. Among them, three-phase voltage source converters (VSCs) are widely used for grid-connected applications. VSCs are designed to perform voltage magnitude control and frequency conversion using semiconductor devices and control circuits. They are regularly used to transfer power from a DC system to an AC system or to make back-to-back connections between AC systems with different frequencies. Their key feature is the ability to transfer active and/or reactive power in both directions. A variety of VSC topologies have been reported in the literature [48–50]. Three control approaches for the grid-connected VSC are popular: current control, voltage-oriented control, and direct power control [44, 48, 49]. These approaches are obtained using various control theories: hysteresis control, linear control, SMC, optimal control, predictive control, adaptive control, and intelligent control. With respect to the switching frequency, all published control approaches can be classified into two groups: constant switching frequency and variable switching frequency [45, 47, 48]. In the variable switching frequency approaches,
the control signals are generated directly from a look-up table; their switching frequency is not constant, and undesired current harmonics result. In the constant switching frequency approaches, the control signals are generated by modulation techniques, such as pulse-width modulation (PWM) or space-vector modulation (SVM), which reduce the undesired current harmonics. To suppress the undesired harmonics, an L-type or LC/LCL-type filter is usually used. However, the LCL filter suffers from a resonance problem, which can lead to grid current oscillation or even system instability. For this reason, the control of LCL-type grid-connected VSCs has been attracting more researchers [51–54]. In an LCL-type grid-connected VSC, the main task of the LCL filter is to attenuate the high-frequency ripple of the current injected into the power grid. The grid-connected LCL-type VSC is a third-order dynamic system, and it requires a more complex current control strategy to maintain stability. The possible instability is caused by the zero impedance at the LCL filter resonance frequency. To overcome this problem, various current control approaches have been proposed. Multi-loop current control approaches give good performance, but they require a number of sensors. Single-loop current control approaches are simpler, but their robustness is poor. The dynamics equations of the grid-connected VSC are highly nonlinear, with mandatory switching, so they may be considered as a VSS [2]. The first SMC application for a power electronic converter was proposed in [3]. This approach selects the switching sequence for each switch of the power converter so that the motion of the controlled system satisfies the desired requirements. Various SMC-based control approaches have been presented in the literature. In theory, the switching frequency of a CT-SMC system is infinite.
If digital hardware is used for algorithm implementation, its sampling frequency is high but finite. Moreover, this frequency must be reduced if a complex algorithm is implemented or the number of A/D channels is high. It follows that the digital implementation of CT-SMC algorithms inevitably limits the switching frequency, which produces QSM [4]. Also, DT implementation of a CT-SMC algorithm does not guarantee its stability. Hence, discretization of the SMC algorithm is important, and DT-SMC is a good approach when digital hardware is used in a control system. Some examples of DT-SMC for grid-connected VSCs have been published in [32–36].

This section presents DT-SMC-based current control of a grid-connected LCL-type VSC. The electrical scheme of the considered grid-connected LCL-type VSC with the proposed control system is shown in Fig. 17.3. A robust single-loop current control design is proposed in order to regulate the converter-side inductor current. The proposed control approach is the chattering-free DT-SMC of [13] with a two-rate reaching law, taking into account the plant input saturation. The three-phase circuit can be represented as

$$\frac{d}{dt}\begin{bmatrix} i_1^{abc}\\[2pt] v_c^{abc}\\[2pt] i_2^{abc} \end{bmatrix} = \begin{bmatrix} -\dfrac{R_1}{L_1} & -\dfrac{1}{L_1} & 0\\[8pt] \dfrac{1}{C_f} & 0 & -\dfrac{1}{C_f}\\[8pt] 0 & \dfrac{1}{L_2} & -\dfrac{R_2}{L_2} \end{bmatrix} \begin{bmatrix} i_1^{abc}\\[2pt] v_c^{abc}\\[2pt] i_2^{abc} \end{bmatrix} + \begin{bmatrix} \dfrac{1}{L_1}\\[6pt] 0\\ 0 \end{bmatrix} v_1^{abc} + \begin{bmatrix} 0\\ 0\\ -\dfrac{1}{L_2} \end{bmatrix} v_2^{abc}, \tag{17.23}$$
Fig. 17.3 Electrical scheme of grid-connected LCL-type VSC with DT-SMC used in the inverter current control
where $i_1^{abc}$ is the converter-side inductor current vector, $i_2^{abc}$ is the grid-side inductor current vector, $v_c^{abc}$ is the capacitor voltage vector, $v_2^{abc}$ is the grid voltage vector, $v_1^{abc}$ is the output converter voltage vector, $L_1$ and $L_2$ are the inductances of the converter-side and grid-side filtering inductors, $R_1$ and $R_2$ are the resistances of the converter-side and grid-side filtering inductors, and $C_f$ is the filter capacitance. In three-phase applications, the LCL filter plant would usually be controlled by a proportional-resonant controller in the stationary αβ reference frame or by a PI controller in the rotating dq reference frame [51]. In the inductor current controller design, the influence of the capacitor in the LCL filter can be neglected, because its purpose is primarily to reduce the high-frequency current ripple and its value is low. Taking this into account, the dynamic model (17.23) can be transformed into a dq rotating reference frame model suitable for control design:

$$\frac{d}{dt}\begin{bmatrix} i_{1d}\\ i_{1q} \end{bmatrix} = \begin{bmatrix} a_d & \omega_g\\ -\omega_g & a_q \end{bmatrix}\begin{bmatrix} i_{1d}\\ i_{1q} \end{bmatrix} + \begin{bmatrix} b_d & 0\\ 0 & b_q \end{bmatrix}\begin{bmatrix} v_{1d}\\ v_{1q} \end{bmatrix} - \begin{bmatrix} c_d & 0\\ 0 & c_q \end{bmatrix}\begin{bmatrix} v_{2d}\\ v_{2q} \end{bmatrix}, \tag{17.24}$$
where $a_d = a_q = -\frac{R_1+R_2}{L_1+L_2}$, $b_d = b_q = \frac{1}{L_1+L_2}$, $c_d = c_q = \frac{1}{L_1+L_2}$, and $\omega_g$ is the distribution grid angular frequency. In (17.24), the currents $i_{1d}$ and $i_{1q}$ are measurable, and they are chosen as the system states. The voltages $v_{2d}$ and $v_{2q}$ can be considered as measurable disturbances, which can be compensated. In the d-axis dynamic model, the influence of the current $i_{1q}$, expressed through $\omega_g i_{1q}$, is matched by the output converter voltage $v_{1d}$, and $i_{1q}$ can be considered as an unknown disturbance. Similarly, the influence of the current
$i_{1d}$ in the q-axis dynamic model, expressed through $-\omega_g i_{1d}$, is matched by the output converter voltage $v_{1q}$, and $i_{1d}$ can be considered as an unknown disturbance. By introducing the new variables $e_d = i_{1d}^{*} - i_{1d}$ and $e_q = i_{1q}^{*} - i_{1q}$, the current error dynamics equations can be obtained as

$$\frac{d}{dt}\begin{bmatrix} e_d\\ e_q \end{bmatrix} = \begin{bmatrix} a_d & 0\\ 0 & a_q \end{bmatrix}\begin{bmatrix} e_d\\ e_q \end{bmatrix} - \begin{bmatrix} b_d & 0\\ 0 & b_q \end{bmatrix}\begin{bmatrix} v_{1d}\\ v_{1q} \end{bmatrix} + \begin{bmatrix} c_d & 0\\ 0 & c_q \end{bmatrix}\begin{bmatrix} v_{2d}\\ v_{2q} \end{bmatrix} + \begin{bmatrix} d_d\\ d_q \end{bmatrix}, \tag{17.25}$$

where the disturbances are $d_d = \frac{d i_{1d}^{*}}{dt} + \frac{R_1+R_2}{L_1+L_2}\,i_{1d}^{*} - \omega_g i_{1q}$ and $d_q = \frac{d i_{1q}^{*}}{dt} + \frac{R_1+R_2}{L_1+L_2}\,i_{1q}^{*} + \omega_g i_{1d}$. The last equation is suitable for applying the DT-SMC algorithm and the disturbance compensator described in Sect. 17.2. Note that system (17.25) is decoupled, so the controllers for the two channels can be designed independently. Since the channel parameters are identical, the design of these controllers is the same as described in Sects. 17.2 and 17.3 for the current and speed controllers.

Design example and experimental results. The performance of the designed controllers was verified on an experimental setup of the grid-connected LCL-type VSC. It was supplied by a bidirectional programmable DC power supply (ITECH model IT6000C). A DC/AC converter (Danfoss FC302 frequency converter with IPC card) was used as the VSC grid converter. For safety reasons, the system was connected to the power grid via a three-phase isolation transformer, 400/230 V, rated power 6 kVA. The grid voltages $v_2^{abc}$ were measured through signal conditioners (Electronic design CA-30), while the grid currents were measured by Hall-effect current sensors (LEM LA55-P). The control algorithm was developed and tested using a dSPACE DS1103 rapid prototyping system with the MATLAB/Simulink environment and ControlDesk software for data recording. The switching frequency is set to 5 kHz, and the sampling period is 0.1 ms. All parameters of the experimental setup and the implemented controllers are given in Table 17.3. Figure 17.4 shows the dynamic performance of the proposed control design when a step change of the reference converter-side current is applied. The objective of this experiment is to verify the transient performance and tracking ability of the inductor currents under the designed controllers. The reference q-axis inductor current
Table 17.3 Experimental setup and current controller parameters

Experimental setup: L1 = 26.5 mH, R1 = 0.176 Ω, L2 = 1.84 mH, R2 = 0.1017 Ω, Cf = 4.7 μF, I1nom = 7.2 A, VDC = 500 V
Current controller: kδe = −0.277, cδ = 0.0284, ks1 = 0.25, ks2 = 0.10, kint = 1476, U0 = 288, T = 100 μs
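The current-controller design constants in Table 17.3 can be reproduced from the plant values via the δ-model route of Sect. 17.2. The sketch below is our cross-check; the scalar ZOH discretization is exact here because each channel of (17.25) is first order.

```python
import numpy as np

# Plant values from Table 17.3 (d-axis channel of the LCL-type VSC)
L1, L2, R1, R2, T = 26.5e-3, 1.84e-3, 0.176, 0.1017, 100e-6
a = -(R1 + R2) / (L1 + L2)        # a_d in (17.24)
b = 1.0 / (L1 + L2)               # b_d in (17.24)

# ZOH discretization of the scalar channel, then the delta-model parameters
ad = np.exp(a * T)
bd = (ad - 1.0) / a * b
a_delta, b_delta = (ad - 1.0) / T, bd / T

# Design relations of Sect. 17.2: c_delta*b_delta = 1, k_de = a_delta/b_delta
c_delta = 1.0 / b_delta
k_de = a_delta / b_delta
# c_delta and k_de land close to the Table 17.3 values 0.0284 and -0.277
```

The agreement with the tabulated cδ and kδe confirms that the current controller was designed exactly along the δ-model route of Sect. 17.2.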
Fig. 17.4 Dynamic performance of the proposed controller for a step reference converter-side inductor current: (a) d-axis reference and measured converter-side inductor currents, (b) zoomed details of the converter-side inductor currents
is set to zero, while the reference of the d-axis inductor current $i_{1d,ref}$ is stepped to 5 A, 10 A, −10 A, and 0 A at time instants 2 s, 6 s, 10 s, and 14 s, respectively. It can be observed that the presented waveforms exhibit fast transients, no steady-state errors, and negligible overshoot for the various reference inductor currents. Higher-resolution recordings of the grid voltage $v_2^{a}$ and currents $i_2^{abc}$ under the step change at t = 6 s, recorded with a digital storage oscilloscope (Tektronix model DSO 3034), are shown in Fig. 17.5. From the zoomed details in Fig. 17.4b, it can be seen that the harmonic distortion of the converter-side currents $i_1^{abc}$ is low. Comparing the inductor current waveforms in Figs. 17.4 and 17.5, it can be concluded that the harmonic content is quite similar.
Fig. 17.5 The converter-side inductor current waveforms (traces 1–3) under step reference change from 5A to 10A
Fig. 17.6 The converter-side inductor current waveforms (left) and harmonic spectrum (right) under full reference converter-side inductor current
A detailed analysis of the harmonic distortion was carried out using steady-state waveforms with the reference inductor current at 10 A. In this experiment, the laboratory grid voltage total harmonic distortion (THD) is 2.4%, with the fifth, seventh, and 11th harmonics dominant. Using a Fluke 435 power analyzer, the power quality compliance of the grid-side currents $i_2^{abc}$ with the IEEE 1547 standard was verified. Figure 17.6 shows screenshots of the waveforms and frequency spectrum of the inductor current measured by the Fluke analyzer. It can be seen that the THD of the grid currents is below 1.9%, well under the limit value of 5%. Mainly odd harmonics are present, especially the fifth and seventh, but this is due to the grid voltage harmonic distortion. The presented grid-side current harmonic spectrum confirms full compliance with the mentioned standard.
17.5 Conclusion

This chapter presents the authors' contributions and experience in DT-SMC of EDs and power converters. After a short, concise historical survey of SMC development and of the published works relevant to the chapter context, a description of the originally developed DT-SMC algorithm is given. The proposed algorithm uses the DT δ-model, which, unlike the DT shift model, displays the realization of DT-SM with explicit separation of the reaching control and the equivalent control. Moreover, a two-rate reaching law is introduced, with compensation of the matched disturbance via an integral of the sliding variable. Since this type of compensation can cause overshoot in the case of control saturation (windup), in the proposed algorithm this problem is solved in combination with the two-rate reaching law intended for chattering minimization. By introducing the signum element at the disturbance compensator input, the proposed controller becomes a super-twisting-like DT controller with high robustness to disturbances but with some chattering, which can be avoided by replacing the signum element with its continuous approximation. The effectiveness of the controller is illustrated in the control of torque, speed, and position of an IM via the cascade control principle, widely used for high-quality servo systems, as well as in single-loop current control of a grid-connected VSC with an LCL filter. In both cases, a short design procedure and numerous experimental results are given, which indicate the high performance of the proposed control method. In our further research, the proposed control principle will be used in more complex control of grid-connected voltage source inverters via cascade control of the relevant state variables.
References

1. Draženović, B. (1969) The invariance conditions in variable structure systems. Automatica, 5(3), pp. 287–295.
2. Utkin, V. (1977) Variable structure systems with sliding modes. IEEE Transactions on Automatic Control, 22(2), pp. 212–222.
3. Sabanovic, A. and Izosimov, D.B. (1981) Application of sliding modes to induction motor control. IEEE Transactions on Industry Applications, 17(1), pp. 41–49.
4. Milosavljević, Č. (1985) General conditions for the existence of a quasi-sliding mode on the switching hyperplane in discrete variable structure systems. Automation and Remote Control, 46, pp. 307–314.
5. Sarpturk, S.Z., Istefanopulos, Y. and Kaynak, O. (1987) On the stability of discrete-time sliding mode control systems. IEEE Transactions on Automatic Control, 32(10), pp. 930–932.
6. Drakunov, S.V. and Utkin, V.I. (1990) On discrete-time sliding modes. In Nonlinear Control Systems Design 1989, pp. 273–278.
7. Furuta, K. (1990) Sliding mode control of a discrete system. Systems & Control Letters, 14(2), pp. 145–152.
8. Sira-Ramirez, H. (1991) Non-linear discrete variable structure systems in quasi-sliding mode. International Journal of Control, 54(5), pp. 1171–1187.
9. Chan, C.Y. (1991) Servo-systems with discrete-variable structure control. Systems & Control Letters, 17(4), pp. 321–325.
17 Discrete-Time Sliding Mode Control for Electrical Drives and Power Converters
B. Peruničić-Draženović graduated from the Faculty of Electrical Engineering in Belgrade in 1960. In the same year, she moved to Bosnia and Herzegovina, where she was employed by the Energoinvest Company as a Project Leader. In parallel with her work at Energoinvest, she continued her work in science and education. She passed all the examinations of the postgraduate study at the Faculty of Electrical Engineering in Belgrade in 1964, and in 1965 went for specialization to the Institute of Automatics and Telemechanics of the USSR Academy of Sciences. There she entered a whole new area that then existed only in the USSR, called variable structure systems. She defended her doctoral dissertation, "Managing Multiple-Input Variable-Structure Systems," equivalent to a PhD thesis, in 1968 before the Scientific Council of the Institute of Automatics and Telemechanics, which was then one of the leading institutes in the world in the field of control. The article "The invariance conditions in variable structure systems," published in 1969 in the journal Automatica and based on some of the results of that work, has more than 1200 citations and established the invariance condition, or matching condition, named after her. In 1961, the Faculty of Electrical Engineering was founded in Sarajevo, where Branislava Peruničić worked from its foundation until 2008, with interruptions during which she worked abroad as a professor or visiting professor. Her teaching career progressed through the positions of teaching assistant, lecturer, assistant professor, and associate professor until she became a full professor in 1976. During her professorial career, she mentored more than 20 Ph.D. students. The main areas of science she dealt with are sliding modes in variable-structure systems, the use of discrete samples of system states in power systems, and the application of graph theory.
Branislava Peruničić was a visiting Fulbright professor at Grand Valley State University in Michigan, twice a visiting professor at the University of Illinois at Urbana-Champaign, and a visiting professor at Lamar University in Texas and the University of Kentucky. She also taught one semester at Monterey University in Mexico. She also worked as a
full professor at Lamar University for 5 years. She was admitted to the Academy of Sciences of B&H in 1986. Among other roles, she was the Secretary of the Department of Technical Sciences, a member of the Presidency of the Academy, and the Vice-President of the Academy. It should be noted that in 1999 she initiated and later founded the IEEE Sarajevo Student Branch. Her involvement in this field subsequently resulted in the founding of the IEEE BH Section in 2005, and she was its first and founding Chair from 2005 to 2010.
Č. Milosavljević was born in the former Yugoslavia in 1940. He received the Dipl. Ing. degree from the Faculty of Automatic and Computer Science, Moscow Power Institute, Moscow, Russia, in 1966, the M.Sc. degree from the Faculty of Electronic Engineering, University of Niš, Serbia, in 1975, and the Ph.D. degree from the Faculty of Electrical Engineering, University of Sarajevo, Sarajevo, Bosnia and Herzegovina, in 1982. From 1967 to 1977, he was with the Electronic Industry Corporation Niš and also worked part-time as a teaching assistant at the Faculty of Electronic Engineering. From 1977 to 1978, he was a professor at the High Technical School in Niš. From 1979 to 2005, he was with the Faculty of Electronic Engineering, University of Niš, where he organized the graduate and postgraduate studies in the field of automatic control and founded the Laboratory of Automatic Control. Since 1997, he has been a Visiting Professor with the Faculty of Electrical Engineering, University of East Sarajevo, Bosnia and Herzegovina. He has published over 230 papers, mainly on variable-structure systems with discrete-time sliding modes, four chapters in monographs published by Springer/IEEE, and eight textbooks. He has designed over 50 devices in the areas of power supply, motion control, and industrial electronics. He is a pioneer in investigations of discrete-time sliding-mode control. His research interests include sliding modes, motion control systems, and industrial electronics.
S. Huseinbegović is an Associate Professor with the Department of Automatic Control and Electronics, Faculty of Electrical Engineering, University of Sarajevo, Bosnia and Herzegovina, and since 2019 he has been Head of that Department. He received his B.S., M.S., and Ph.D. degrees in electrical engineering from the University of Sarajevo, Bosnia and Herzegovina, in 2006, 2009, and 2015, respectively. As a researcher, he visited Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany, and the University of Maribor, Slovenia. His research interests include sliding mode control for power electronic converters, microgrids, electrical machines, and drives. He served as general secretary of the 2018 IEEE ISGT Europe conference and general chair of the 2019 IEEE ICAT conference. He serves as a reviewer for several journals in electrical engineering and power electronics. Senad Huseinbegović is also a Member of IEEE and a Member of the CIGRE Committee.
B. Veselić graduated from the Faculty of Electronic Engineering, Department of Automatic Control, University of Niš in 1994. He defended his M.Sc. thesis and doctoral dissertation in automatic control at the Faculty of Electronic Engineering, University of Niš, in 2000 and 2006, respectively. Since 1995, he has been with the Department of Automatic Control at the Faculty of Electronic Engineering Niš, where he is currently an Associate Professor. His major field of study is automatic control, with special expertise in sliding mode control systems, on which he has published over 130 scientific papers. Prof. Boban Veselić is Head of the Laboratory for Automatic Control at the Faculty of Electronic Engineering Niš and has participated in the realization of several national and international scientific projects. He is a Senior Member of IEEE and serves as a reviewer for several prestigious journals in electrical engineering and automatic control. In 2013, he was an invited lecturer in Advanced Digital Control at the Malta College of Arts, Science and Technology.
M. Petronijević graduated from the Faculty of Electronic Engineering, Department of Power Engineering, University of Niš in 1993. He received his M.Sc. and Ph.D. degrees in electric power engineering from the University of Niš, Serbia, in 1999 and 2012, respectively. Since 1994, he has been with the Power Engineering Department at the Faculty of Electronic Engineering Niš, where he is currently an Associate Professor. His research interests are in the control of power electronic converters and electric drives, including the development of real-time applications of sliding mode control in servo drives and in power electronic converters in microgrids. Prof. Petronijević is Head of the Laboratory for Electric Machines and Drives at the Faculty of Electronic Engineering Niš. He has participated in the realization of many industrial and scientific projects and serves as a reviewer for several journals in electrical engineering and power electronics.
Chapter 18
Self-Healing Shipboard Power Systems

Karen Butler-Purry, Sarma (NDR) Nuthalapati, and Sanjeev K. Srivastava
18.1 Introduction

With the US Navy's demands for reduced manning and increased system survivability, new techniques are needed that efficiently and automatically restore service under catastrophic situations. The Navy's goals are to increase survivability, eliminate human mistakes, make intelligent reconfiguration decisions more quickly, reduce the manpower required to perform these functions, and provide optimal electric power service through the surviving system. Shipboard power systems are very similar to isolated utility systems in that the available generators are the only source of supply for the system loads. There are, however, several differences between utility and shipboard power systems: ships have large dynamic loads relative to generator size and a larger portion of nonlinear loads relative to power generation capacity, and, because of their short lengths, shipboard transmission lines are far less significant than those of utilities. The typical AC radial shipboard electric power system (SPS) found on surface combatant ships consists of AC generators linked to ring-connected switchboards with radial distribution to loads below the generator switchboards. SPSs supply power to the sophisticated weapons, communication, navigation, and operations systems of warships. During battle, weapons can attack a ship and cause severe
K. Butler-Purry (✉)
Electrical and Computer Engineering Department, Texas A&M University, College Station, TX, USA
e-mail: [email protected]

S. (NDR) Nuthalapati
NDRS SEMS Consultancy LLC, Round Rock, TX, USA

S. K. Srivastava
Amazon, Arlington, VA, USA

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
J. S. Tietjen et al. (eds.), Women in Power, Women in Engineering and Science, https://doi.org/10.1007/978-3-031-29724-3_18
damage to the electrical system on the ship. When faults occur as a result of battle damage or material casualty, isolation by protective devices can de-energize critical loads and eventually decrease the ship's ability to survive the attack. It is very important, therefore, to maintain the availability of energy to the loads that keep the power system operational. In AC radial shipboard power systems, most power restoration operations for these de-energized loads are performed manually. However, continuous monitoring of a shipboard electric power system and automated reconfiguration of the SPS before or after battle damage or material casualty can help maintain the mission while reducing the number of crew required to perform reconfiguration and improving fight-through survivability.
18.2 Self-Healing Shipboard Power Systems

Self-healing systems activate control actions to steer power systems to a more secure, less vulnerable operating condition. Self-healing strategies have been developed to enable the national power grid to self-heal in response to threats, material failures, and other destabilizers. These strategies include control options that are typically initiated in a preventive self-healing mode or a corrective/restorative self-healing mode. In the preventive self-healing mode, operating conditions are determined for which the power system is highly vulnerable to a specific event, and a control solution is provided to arm the system. If the event occurs, this control solution is activated to eliminate or localize the effect of the event and to minimize the possibility of cascading outages. In the corrective or restorative self-healing mode, some portion of the power system is not operating at an optimal level due to protective device operations or control actions activated during the preventive self-healing mode. Restorative self-healing performs control actions that restore the power system to an optimum operating condition. Researchers in the Power System Automation Laboratory (PSAL) at Texas A&M University developed a self-healing methodology for AC radial shipboard electric power systems [1] that includes two preventive methods and one restorative method. One of the preventive methods operates according to the traditional preventive self-healing definition by determining control solutions to arm the SPS for critical contingencies. The second preventive method, which we termed "predictive," performs control actions before an incoming weapon hit, based on predicted damage to the SPS, to reduce the risk of cascading failures when the weapon hits [2]. This self-healing reconfiguration methodology provides a framework for determining control actions during the preventive and restorative modes of an SPS.
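The three modes can be pictured as a simple dispatch rule. The sketch below is illustrative only; the boolean flags and the Python structure are assumptions standing in for the on-line assessment outputs described in the text, not the authors' actual implementation.

```python
from enum import Enum, auto
from typing import Optional

class Mode(Enum):
    PREDICTIVE = auto()   # act on a warning of an incoming weapon hit
    PREVENTIVE = auto()   # arm a pre-computed solution for a critical contingency
    RESTORATIVE = auto()  # restore after protective devices have operated

def select_mode(threat_warning: bool,
                critical_contingency: bool,
                abnormal_condition: bool) -> Optional[Mode]:
    """Toy dispatch mirroring the mode selection described in the text."""
    if threat_warning:
        return Mode.PREDICTIVE
    if critical_contingency:
        return Mode.PREVENTIVE
    if abnormal_condition:
        return Mode.RESTORATIVE
    return None  # system healthy: keep monitoring
```

The ordering encodes a plausible urgency ranking (incoming threat first, then armed contingencies, then restoration); the chapter does not specify how ties between simultaneous conditions are broken.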
The control actions represent the control commands for circuit breakers, bus transfers, and low-voltage protection devices. Figure 18.1 shows a block diagram of the self-healing SPS reconfiguration methodology. Voltages at load and critical bus nodes, currents at critical branches,
[Figure 18.1 depicts the methodology as a flowchart: measured data, topology information, and weapon information, together with the GIS, historical data, and operating limits databases, feed an on-line assessment of threats and events that selects the mode of operation (predictive, preventive, or restorative self-healing); decision blocks test whether the system is optimal, or optimal after a weapon hit, and the output is the set of control actions and a visual representation of the new system state.]
Fig. 18.1 Block diagram of self-healing SPS reconfiguration methodology
and protective device statuses are continuously monitored. Also, static data such as topology information and component ratings are input to the system. The data are managed through several databases. An on-line system assessment technique continuously monitors the SPS and, based on the detection of a threat or abnormal condition, selects one of the self-healing modes of operation.
18.2.1 Data Management and On-Line System Assessment

Shipboard power systems (SPSs) consist of generators that are connected in a ring configuration through generator switchboards. Bus-tie circuit breakers interconnect the generator switchboards and allow the transfer of power from one switchboard to another. Load centers and some loads are supplied from generator switchboards.
Fig. 18.2 Line diagram of notional AC radial SPS
Load centers in turn supply power to loads directly and to power panels to which some loads are connected. Feeders supplying power to load centers, power panels, and loads are radial in nature. Hence, the system below the generator switchboards, referred to by the authors as the shipboard distribution system, is radial. Loads are categorized as either vital or non-vital. For vital loads, two sources of power (normal and alternate supply) are provided from separate sources via automatic bus transfers (ABTs) or manual bus transfers (MBTs). Circuit breakers (CBs) and fuses are provided at different locations to isolate faulted loads, generators, or distribution system sections from the unfaulted portions of the system. The single-line diagram of a notional SPS and the details of the system are given in Fig. 18.2.
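Because everything below the generator switchboards must stay radial, each non-source node may be fed by exactly one closed edge at a time. That check can be sketched in a few lines; the node names and edge list below are invented for a tiny notional feeder, not taken from Fig. 18.2.

```python
# Hypothetical closed edges (from_node, to_node) below two generator
# switchboards: switchboard -> load center -> power panel -> load.
closed_edges = [
    ("SWBD1", "LC1"), ("LC1", "PP1"), ("PP1", "L1"),
    ("SWBD2", "LC2"), ("LC2", "L2"),
]

def is_radial(edges, source_nodes):
    """True if every non-source node is fed by exactly one closed edge."""
    feeds = {}
    for _, to_node in edges:
        feeds[to_node] = feeds.get(to_node, 0) + 1
    nodes = {n for edge in edges for n in edge}
    return all(feeds.get(n, 0) == 1
               for n in nodes if n not in source_nodes)
```

Closing a vital load's alternate path while its normal path is still closed would give that node two feeds and violate the check, which is exactly the situation the ABT/MBT interlocking prevents.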
A Geographic Information System (GIS) is used to model the shipboard electric power system based on the geographical three-dimensional layout profile of a typical surface combatant ship [3]. A GIS is a computerized system designed to capture, store, process, analyze, and manipulate characteristic and spatial data. MicroStation [4] was used to develop a 3D diagram of the SPS. The GIS consists of two parts, a digital map and a database; it integrates digital diagrams such as computer-aided design and drafting (CADD) diagrams with information systems such as relational database management systems (RDBMSs). A three-dimensional CADD map of the SPS was constructed with an information database containing the electrical parameters of the SPS. The 3D map of the electrical layout contains the various SPS components located according to their spatial positions within the ship. The components in the CADD map are linked to an attribute database that stores connectivity, real-time data, and ratings data for each component. In addition to the GIS database, other databases, such as the Historical database and the Constraint database, are used to store historical measurement data and system operating condition limit values. Automated queries extract data from the databases as required by the self-healing methods. An on-line failure assessment system (FAST) was developed to continuously assess the state of the SPS. If it is warned of an incoming threat, it activates the predictive self-healing method; if it detects an abnormal condition in the SPS, it activates the restorative self-healing method. The preventive self-healing method uses FAST off-line to assess the damage that would result in the SPS for each critical contingency; then, during on-line operation, when FAST detects the occurrence of one of the critical contingencies, it initiates the stored control solution. The following two sections discuss the restorative and predictive self-healing methods.
18.2.2 Restorative Self-Healing Method

As shown in the flowchart in Fig. 18.1, if FAST detects an abnormality, it assesses the state of the system after the protection system responds to the abnormality. The restorative self-healing method then determines the control actions to restore de-energized loads. FAST detects the presence of an abnormal system condition by determining whether the system is operating within tolerable limits. After an abnormality is detected, FAST performs faulted-section location using expert system technology to identify the faulted sections and the de-energized loads and components in the faulted sections. The restorative self-healing method was implemented using two approaches, an expert system-based approach [5] and an optimization-based approach [6, 7]. The optimization-based approach is discussed in this section. The optimization-based restorative self-healing approach directly determines the control solution that restores the maximum load while satisfying the system operating constraints and ensuring the radial condition [6, 7]. While doing so, it considers the priority of the loads and also considers a vital load's preferred path to have
a higher priority than its other path. An innovative approach was developed to formulate the problem as a mixed-integer linear optimization problem for which an optimal solution can be easily obtained. The approach does not require circuit analysis to verify the current, generator-capacity, and voltage constraints; it directly suggests the reconfigured network that restores the maximum load while satisfying the constraints and ensuring the radial condition. In AC radial SPSs, the generators are connected in a ring configuration through generator switchboards, bus-tie breakers, and cables connecting the generator switchboards. All components below the generator switchboards are operated in a radial configuration, and faults on any of these components may interrupt the power supply to some loads. For the restorative optimization self-healing approach, if the fault is not on a component in the ring, the network is modified by merging the generator switchboards, bus-tie breakers, and cables connecting these switchboards, and this reduced network is used to reconfigure the system and restore service. On the other hand, if the fault is on one of the components connected in the ring, then that component is assumed to be isolated from the system and the remaining network is reconfigured to restore service. It may be noted that, when a component in a ring is removed, the total network becomes radial. The problem is formulated as a variation of the fixed-charge network flow problem [8]. The mathematical formulation of the problem, with its objective function and constraints, is shown below.
18.2.2.1 Objective Function

Maximize

$$\sum_{i=1}^{M}\left(W_{i_1}L_{i_1}+W_{i_2}L_{i_2}+W_{i_3}L_{i_3}\right)+\sum_{j=1}^{TT}H_{nj} \qquad (18.1)$$

The weighted factors $W_{i_1}$, $W_{i_2}$, and $W_{i_3}$ correspond to the load currents in phases a, b, and c, respectively, at the ith node and are calculated as follows:

For a high-priority load:
$$W_i = \frac{W}{l_i} + 1 \qquad (18.2)$$

For a low-priority load:
$$W_i = 1 \qquad (18.3)$$

where $l_i$ is the maximum possible value (current rating) of load $L_i$ and $W$ is equal to the maximum value (current rating) of the largest low-priority load in the system.
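The weight computation of Eqs. (18.2) and (18.3) can be sketched as follows; the load names and current ratings are illustrative, loosely patterned on the example system, and are not the authors' data.

```python
def weight_factors(loads):
    """Per-load weights following Eqs. (18.2)-(18.3).

    `loads` maps a load name to (current_rating_amps, is_high_priority).
    W is the current rating of the largest low-priority load in the system.
    """
    low_ratings = [r for r, high in loads.values() if not high]
    W = max(low_ratings) if low_ratings else 0.0
    return {name: (W / rating + 1.0) if high else 1.0
            for name, (rating, high) in loads.items()}

# Hypothetical loads: L1 vital (25 A), L2 and L4 non-vital (70 A, 80 A).
weights = weight_factors({"L1": (25.0, True),
                          "L2": (70.0, False),
                          "L4": (80.0, False)})
```

Under this weight rule, a fully restored high-priority load contributes $W_i\,l_i = W + l_i > W$ to the objective, so it always outweighs any single fully restored low-priority load, which is how the objective enforces priority.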
These weighted factors ensure that high-priority loads are given priority when restoring loads. Equation (18.1) represents the maximization of the sum of the weighted loads in the system. The second term in (18.1) represents the sum of the normal-path switch statuses of the ABT/MBT of each vital load, thus giving priority to the normal paths. The notation is as follows:

$H_a$: status of edge "a"; $H_a = 1$ if edge "a" is closed, and $H_a = 0$ otherwise.
$H_{nj}$: status of the normal-path switch at the jth ABT/MBT.
$H_{aj}$: status of the alternate-path switch at the jth ABT/MBT.
$TT$: the total number of vital loads with ABT/MBTs in the system.
18.2.2.2 Constraints
(a) Source capacity constraints: At any source node "i," the sum of the flows going out of the source node should not exceed the total capacity of that source node.
(b) Node constraints: At any node "i" (except a source node), the sum of the flows into the node should equal the sum of the flows out of the node.
(c) Load constraints: Two types of loads are modeled, variable and fixed. A variable-type load can be restored at any value up to its maximum current rating ($l_i^{max}$). Such loads represent a lump load in a panel consisting of several groups of loads that can be independently controlled by opening/closing their respective switches, similar to the circuit breaker panels used in our houses. A fixed-type load, in contrast, can be restored either at its maximum current rating or not at all; such a load, when connected, draws its rated current. The load constraints are formulated as follows. At a variable-type load node $D_i$:

$$L_{i_j} - l_{i_j}^{max} \le 0 \qquad (18.4)$$

where j = a, b, and c phases. At a fixed-type load node $D_i$:

$$L_{i_j} - f_{i_j}\, l_{i_j}^{max} = 0 \qquad (18.5)$$

where $f_{i_j}$ is a 0–1 variable and j = a, b, and c phases. This ensures that the solution $L_{i_j}$ is either $l_{i_j}^{max}$ or 0.
(d) Flow constraints: The flow in an edge must be zero if the edge is open and must not exceed the capacity of the edge if the edge is closed.
(e) Radiality constraint: The system should be radial; that is, at any node "i," there should be only one edge feeding that node.
(f) Voltage value constraints: The voltage at any node "i" can be written in terms of the voltage of the node that feeds node "i" through an edge. If node "i" feeds node "k" through edge a = (i, k), then the voltage in phase "j" at node "k" is

$$V_{k_j} = V_{i_j} - z_{a_j}\, I_{a_j} \qquad (18.6)$$
where $z_{a_j}$ is the impedance in phase "j" of edge "a" and j = a, b, and c phases.
(g) Voltage limit constraints: Voltages at all nodes should be within tolerable limits.

To illustrate the restorative self-healing method, the simplified AC radial SPS shown in Fig. 18.3 was used. It was assumed that the impedance of every edge was 0.01 ohms and that the transformer primary- and secondary-side impedances were 0.432 and 0.282 ohms, respectively. In this work, DC models of data and electrical behavior were used. Even though the DC model yields approximate results, the optimization algorithm will still tend to determine the optimal configuration among the various candidate configurations based on voltage drop and other costs [9]. The transformation ratio of the transformer was assumed to be 450/120 V. The voltage limits were assumed to be 438 V (min) and 450 V (max) at all nodes on the high-voltage side and 113 V (min) and 120 V (max) at all nodes on the low-voltage side of the network. It was also assumed that the capacity of each edge was 300 amps, that the current capacity of each generator was 175 amps, and that generators 1 and 2 were in operation. At a given time, only two generators are usually in operation; the third serves as an emergency generator. In the example system shown in Fig. 18.3, each load $L_i$ was modeled in three phases with values $L_{i_1}$, $L_{i_2}$, and $L_{i_3}$. For the illustrative case, the CPLEX program [10] was used to solve each resulting optimization problem. For this example, which illustrates the restorative self-healing method's handling of load priority, faults on components were simulated and the proposed optimization formulation was then solved to restore the maximum load while satisfying the constraints and considering the priority of loads and normal paths. A fault was considered on the cable (edge 15) connecting load L4 (at node 17).
After it was isolated, there would be no power to load L4 at node 17. For this case, it was assumed that loads L1, L2, L3, and L4 were all fixed-type balanced loads of 25 amps in each phase. This case illustrates that the method restores high-priority loads first. To ensure that only the higher-priority load (among L1 and L4) could be restored, load L1 was initially designated a vital load and the other loads non-vital. Further, the total available generation capacity was 25 amps, so that both could not be restored.
18 Self-Healing Shipboard Power Systems
Fig. 18.3 Example AC radial shipboard power system
Since there were no faults on the components connected in the ring, Fig. 18.4 was modified to the system shown in Fig. 18.5 by merging the nodes corresponding to the generator switchboards connected in a ring. In Fig. 18.5, node 30 represents the new node after merging the generator switchboards and bus-tie breakers (nodes 2, 22, 23, 11, 27, 26, 25, 28, 29). Node 31 represents the new source, whose capacity was equal to the sum of the capacities of the generators supplying power at nodes 1, 10, and 24. Accordingly, the capacity of edge 32 was equal to the capacity of source node 30. It was assumed that load L1 was three single-phase unbalanced loads supplied through a transformer; L1 could vary from zero amps up to 15, 20, and 25 amps in phase a, phase b, and phase c, respectively. Loads L2 and L3 were three-phase balanced loads that varied from zero amps up to 70 and 100 amps (in all phases), respectively. Load L4 was assumed to be a fixed balanced load of 80 amps. Loads L1 and L3 were vital loads, and L2 and L4 were non-vital loads. Based on these load values, the weighting factors were calculated for all the loads using Eqs. (18.2) and (18.3). For the two ABT/MBTs, switches 7 and 16 were assumed to correspond to the normal paths, respectively; accordingly, H7 and H16 were included in the objective function. The objective function for this example was as follows:
Maximize 2L1_1 + L2_1 + L3_1 + L4_1 + 2L1_2 + L2_2 + L3_2 + L4_2 + 2L1_3 + L2_3 + L3_3 + L4_3 + H7 + H16    (18.7)
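As a toy illustration of how a priority-weighted objective drives restoration order, the sketch below replaces the CPLEX mixed-integer program with a simple greedy pass in Python: vital loads carry a larger weight and are restored first until generation capacity runs out. The greedy rule and the single-phase simplification are assumptions for illustration, not the chapter's actual optimization formulation.

```python
# Greedy analogue of the weighted restoration objective (cf. Eq. 18.7):
# restore the highest-weight (vital) loads first until generation capacity
# is exhausted. This stands in for the MILP solved with CPLEX in the text.

def restore_by_priority(loads, capacity):
    """loads: list of (name, demand_amps, weight); higher weight = higher
    priority. Returns restored amps per load."""
    restored = {}
    for name, demand, weight in sorted(loads, key=lambda l: -l[2]):
        amount = min(demand, capacity)   # restore as much as capacity allows
        restored[name] = amount
        capacity -= amount
    return restored

# The illustrative case: four 25-A loads, 25 A of generation, L1 vital
loads = [("L1", 25, 2), ("L2", 25, 1), ("L3", 25, 1), ("L4", 25, 1)]
restore_by_priority(loads, 25)  # only the vital load L1 is restored
```

With only 25 amps of generation available, the weight-2 load absorbs all capacity, mirroring the CPLEX result reported below for the vital load L1.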
K. Butler-Purry et al.
Fig. 18.4 Graphical representation of the example system
Since the fault was on component 15, the following new constraints were added to the data shown for case 1.1, indicating that these components are not available. The affected load (the load that lost supply) due to this fault was L4.

I15_1 = 0; I15_2 = 0; I15_3 = 0; H15 = 0    (18.8)
Formulating the problem as explained with these modifications, CPLEX generated the load values (in amps) shown below. This solution indicates that only L1 was restored and the other loads could not be restored.
Fig. 18.5 Graphical representation of example system after merging components in ring configuration

Load     L1   L2   L3   L4
Phase a  25    0    0    0
Phase b  25    0    0    0
Phase c  25    0    0    0
Then load L4 was made the high-priority load and load L1 the low-priority load, with all other conditions unchanged. The objective function for this condition is as follows:

Maximize L1_1 + L2_1 + L3_1 + 2L4_1 + L1_2 + L2_2 + L3_2 + 2L4_2 + L1_3 + L2_3 + L3_3 + 2L4_3 + H7 + H16    (18.9)

CPLEX generated the load values (in amps) shown below.
This solution indicates that L4 (modeled as the high-priority load) was restored and the other loads could not be restored.

Load     L1   L2   L3   L4
Phase a   0    0    0   25
Phase b   0    0    0   25
Phase c   0    0    0   25
18.2.3 Predictive Self-Healing Method

In the event of battle, various weapons might attack the ship and cause severe damage to its electrical system. This damage can lead to faults, including cascading faults, in the electrical system and interruption of the power supply to the loads. Technology exists that enables the detection of an incoming weapon and prediction of the geographic area where it will hit the ship. This information can be used to determine changes in the electric system connections (reconfiguration control actions) before the actual hit, to reduce the damage to the electrical system and the possibility of cascading failures. A probability-based predictive self-healing reconfiguration method [2] was developed to perform such functions. When FAST is alerted of an incoming weapon, this probabilistic approach predicts the damage to the electrical system based on the weapon information. It calculates the expected probability of damage (EPOD) for each electrical component on the ship based on the prediction of the geographic area of the incoming weapon hit. Further, a heuristic method uses the EPOD to determine control actions to reconfigure the ship's electrical network to reduce the damage to the electrical system (pre-hit reconfiguration). The prediction of future events, in this case the prediction of damaged components, does not guarantee that the event will take place exactly as predicted. Thus, a non-deterministic, or probabilistic, methodology was developed to predict the weapon damage and determine steps to perform reconfiguration before the weapon hits the ship, to reduce the actual damage caused to the SPS by the weapon hit. After the weapon hit, restorative self-healing is performed. First, FAST is initiated to evaluate the protection system responses to the resulting fault, to assess the damage to the SPS and identify the de-energized loads.
Then reconfiguration for restoration is activated to perform the appropriate control actions to restore as many de-energized loads as possible (post-hit reconfiguration). The overall methodology for the Predictive Reconfiguration self-healing method is shown in Fig. 18.6. The method consists of two methodologies: Pre-hit Reconfiguration and Post-hit Reconfiguration. The Pre-hit Reconfiguration determines control
Fig. 18.6 Block diagram of overall predictive reconfiguration methodology
actions to reduce the damage that will be caused by the incoming weapon hit, before the actual weapon hit takes place. The Post-hit Restoration method determines control actions to restore loads, which are de-energized due to the damage caused by the weapon hit. Both of these methods interact with databases to obtain data required to execute the methodology. The Reconfiguration database, used in the Pre-hit Reconfiguration method, is a temporary local database that is created using the GIS (Geographic Information System) and Constraints database. The Reconfiguration database has information regarding the status of various switches in the system that defines the current (pre-hit) configuration of the electrical network. The Restoration database, used by the Reconfiguration for Restoration module in the Post-hit Restoration method, is also a temporary local database that is created using the GIS and Constraints databases. The Restoration database contains connectivity information and static information of all electric components. As soon as the incoming weapon has been detected and identified and the hit location has been predicted, the Predictive Reconfiguration method is initiated
which first calls the Weapon Damage Assessment module. This module uses information regarding the type of weapon, the predicted hit location, and the geographic information about the various electrical components on the ship from the Reconfiguration database to calculate the expected probability of damage (EPOD) for each electrical component in the system. Two assumptions are used to calculate the EPOD. The first assumption is that the predicted location of the weapon hit is modeled as a normal probability density function (PDF), p(x, y, z), as shown in (18.10), where σ is the standard deviation and μx, μy, and μz are the means in the x, y, and z directions.
p(x, y, z) = (1 / ((2π)^(3/2) σ³)) exp(−[(x − μx)² + (y − μy)² + (z − μz)²] / (2σ²))    (18.10)
The second assumption is the modeling of a damage function, h(x, y, z), that describes the probability of kill/damage for a point at a given distance from the point of actual hit. The probability of the predicted hit is greatest at the location represented by the coordinates (μx, μy, μz) with respect to the ship's coordinate axes; therefore, the means (μx, μy, μz) represent the predicted hit location. If the coordinate axes on the ship are moved to the mean (μx, μy, μz), then (18.10) simplifies to a zero-mean distribution with respect to the shifted coordinate axes, and the damage function takes the form (18.11), where β is the effectiveness factor. The effectiveness factor, β, represents a weapon's effectiveness in destroying the target and causing widespread damage.
h(x, y, z) = exp(−(x² + y² + z²) / β²)    (18.11)
The EPOD values obtained by (18.12) are specific to the normal density function assumed in (18.10) and (18.11). If the predicted-location and probability-of-kill functions p(x, y, z) and h(x, y, z) are not normal, then the appropriate functions can be substituted in (18.10) to obtain an equation, similar to (18.12), for computing the EPOD values. The details of the derivation of the equations are given in [11]. The main goal of the Weapon Damage Assessment method is to compute an EPOD value for each electrical component in an SPS. The EPOD given by (18.12) computes the damage probability for each component in the electrical system on the ship. During the computation, each electrical component, C, is divided into n very small cuboids, C1, C2, . . . , Cn. The points where the diagonals of the cuboids meet are called PC1, PC2, . . . , PCn. Then, using (18.12), the EPOD for each PCi is calculated. These EPODs are represented by EPOD_PC1, EPOD_PC2, . . . , EPOD_PCn. The EPOD for the component C, defined as EPOD_C, is computed as the maximum value among the EPOD values computed for all points on component C. This is represented by (18.13).
EPOD = (1 / (1 + 2σ²/β²)^(3/2)) exp(−(x0² + y0² + z0²) / (β² + 2σ²))    (18.12)

EPOD_C = max(EPOD_PC1, EPOD_PC2, . . . , EPOD_PCn)    (18.13)

where (x0, y0, z0) is the location of the point at which the EPOD is evaluated, with respect to the shifted coordinate axes.
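The closed-form EPOD of (18.12) and the max rule of (18.13) translate directly into code; the sample points, σ, and β values below are assumptions chosen only for illustration.

```python
import math

# Closed-form EPOD of Eq. (18.12) for a point (x0, y0, z0) relative to the
# predicted hit location, and the component EPOD of Eq. (18.13) as the
# maximum over the component's sampled points. sigma (hit-location spread)
# and beta (weapon effectiveness factor) are illustrative assumptions.

def epod_point(x0, y0, z0, sigma, beta):
    scale = (1.0 + 2.0 * sigma ** 2 / beta ** 2) ** -1.5
    return scale * math.exp(-(x0 ** 2 + y0 ** 2 + z0 ** 2)
                            / (beta ** 2 + 2.0 * sigma ** 2))

def epod_component(points, sigma, beta):
    """EPOD of a component = max EPOD over its sampled points (Eq. 18.13)."""
    return max(epod_point(x, y, z, sigma, beta) for x, y, z in points)

# A component sampled at three points along one axis, with an assumed
# 2-m hit-location spread and a 5-m effectiveness factor
points = [(1.0, 0.0, 0.0), (3.0, 0.0, 0.0), (5.0, 0.0, 0.0)]
epod_component(points, sigma=2.0, beta=5.0)
```

As expected from (18.12), the EPOD is largest at the sample point closest to the predicted hit location, and the component inherits that worst-case value.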
The main objective of the electrical network on a ship is to supply electrical energy to various loads. This energy is supplied via various electrical components, which form a radial path from the electrical energy source to the load. Some loads (vital loads) in the system have more than one possible radial path, i.e., the electrical energy to these loads can be provided through more than one path, but at any given time a load receives power via only one path. The damage assessment output provides information about the probabilities of damage of the electrical components in these radial paths. A simplified electrical network of an SPS is shown in Fig. 18.7. This network consists of 2 generators (Gen1, Gen2), 2 switchboards (SB1, SB2), 2 load centers
Fig. 18.7 Two possible paths for a load L2
(LC1, LC2), 5 loads (L1, L2, . . . , L5), 13 circuit breakers (CB1, CB2, . . . , CB13), 14 cables (CL1, CL2, . . . , CL14), and 1 manual bus transfer (BT1). The normal path for the BTs is shown by continuous lines, and the alternate path is shown by dotted lines. In this figure, two possible radial paths, R1 and R2, for load L2 are shown. Each radial path includes various electrical components. Load L2 is assumed to receive its electrical supply via R2. Paths R1 and R2 comprise the electrical components given by (18.14) and (18.15).

R1: {L2, CL4, BT1, CL3, CB4, LC1, CL1, CB1, SB1}    (18.14)

R2: {L2, CL4, BT1, CL10, CB8, LC2, CL7, CB6, SB2}    (18.15)
Assuming that EPOD_C represents the expected probability of damage for a component C, the Availability Probability (AP)_C for component C is defined as the probability that electrical energy will be able to flow through component C when a weapon hit occurs. In an AC radial SPS, electrical components can be divided into two categories: switch-controlled devices and non-switch-controlled devices. A device, Ci, which is not a switch-controlled device, can transfer electrical energy if it is not damaged. Since EPOD_Ci represents the expected probability of damage for Ci, the AP for Ci is given by (18.16).

(AP)_Ci = 1 − (EPOD)_Ci    (18.16)
Next, consider a component Cj, which is a switch-controlled device, such as a circuit breaker. In this case, electrical energy will be able to be transferred through Cj if it is not damaged and it is in the closed position. If the expected probability of damage for Cj is (EPOD)_Cj and its status (closed or open) is represented by S_Cj, then the AP for Cj is given by (18.17). If the status of Cj is closed, then S_Cj = 1; otherwise, if Cj is in the open position, then S_Cj = 0.

(AP)_Cj = (1 − (EPOD)_Cj) ∗ S_Cj    (18.17)
Then, consider path R1, shown in Fig. 18.7. If any electrical component in path R1 is damaged, then path R1 will not be able to supply electrical energy to load L2. Also, if circuit breaker CB3 is damaged such that the damage leads to a short circuit fault at CB3, then, because of the coordination between protective devices, CB1 will open, interrupting the supply in path R1. If the damage to CB3 leads to an open fault condition at CB3, it will not cause CB1 to open. Thus, to supply power over path R1, none of the circuit breakers at the load center level should be in a short circuit fault condition. In effect, all electrical components in path R1, together with any load center circuit breaker that could suffer a short circuit fault, are in series, as damage to any of these components will result in interruption of the electrical supply to load L2.
It is extremely difficult to tell beforehand whether damage to a circuit breaker will lead to a short circuit fault or an open circuit fault. If we assume that the probability that damage caused by a weapon hit will lead to an open circuit fault on a circuit breaker is p1, then the probability that the circuit breaker will have a short circuit fault due to a weapon hit is (1 − p1). Given that the weapon hit has occurred, there are three possible states of a circuit breaker C with expected probability of damage EPOD_C. State 1 represents the situation in which the weapon hit does not damage the circuit breaker. State 2 represents the situation in which the circuit breaker is damaged and has an open circuit fault. State 3 represents the situation in which the circuit breaker is damaged and has a short circuit fault. The probability that the circuit breaker will not be damaged is given by (18.18).

P_C,not damaged = 1 − EPOD_C    (18.18)
The probabilities of an open circuit or a short circuit fault on the circuit breaker, in the event of a weapon hit, are given by (18.19) and (18.20).

P_C,damaged,open circuit = p1 ∗ EPOD_C    (18.19)

P_C,damaged,short circuit = (1 − p1) ∗ EPOD_C    (18.20)
The probability that circuit breaker C will not be in the short circuit state after a weapon hit is given by (18.21).

P_C,not short circuit = 1 − P_C,damaged,short circuit = 1 − (1 − p1) ∗ EPOD_C    (18.21)
Then the Path Availability Probability (PAP) was defined as the probability that a path can transfer electrical energy through it in the case of a weapon hit. The PAP for path R1 is given by (18.22).

(PAP)_R1 = (AP)_L2 ∗ (AP)_CL4 ∗ (AP)_BT1 ∗ (AP)_CL3 ∗ (AP)_CB4 ∗ (AP)_CL1 ∗ (AP)_CB1 ∗ P(CB3 not short circuited)    (18.22)

From (18.21), (18.22) can be rewritten as (18.23).

(PAP)_R1 = (AP)_L2 ∗ (AP)_CL4 ∗ (AP)_BT1 ∗ (AP)_CL3 ∗ (AP)_CB4 ∗ (AP)_CL1 ∗ (AP)_CB1 ∗ [1 − (1 − p1_CB3) ∗ EPOD_CB3]    (18.23)
Assuming that damage to a circuit breaker will always lead to an open circuit condition, then p1 = 1, and (18.23) reduces to (18.24).
(PAP)_R1 = (AP)_L2 ∗ (AP)_CL4 ∗ (AP)_BT1 ∗ (AP)_CL3 ∗ (AP)_CB4 ∗ (AP)_CL1 ∗ (AP)_CB1    (18.24)
In other words, if a path "Rk" has "n" components whose APs are [AP_C1, AP_C2, . . . , AP_Cn], respectively, then the Path Availability Probability for that path is given by (18.25).

(PAP)_Rk = ∏(i = 1 to n) (AP)_Ci    (18.25)
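Equations (18.16), (18.17), (18.21), and (18.25) compose as sketched below; the EPOD values and the choice p1 = 1 are illustrative assumptions, not data from the chapter.

```python
# Sketch of the availability probabilities and their product along a path.
# EPOD values here are assumed (0.1 for every component) for illustration.

def ap_device(epod):                 # non-switch device, Eq. (18.16)
    return 1.0 - epod

def ap_breaker(epod, closed):        # switch-controlled device, Eq. (18.17)
    return (1.0 - epod) * (1.0 if closed else 0.0)

def ap_not_short(epod, p1):          # Eq. (18.21); p1 = P(open-circuit fault)
    return 1.0 - (1.0 - p1) * epod

def pap(aps):                        # Eq. (18.25): product of component APs
    result = 1.0
    for ap in aps:
        result *= ap
    return result

# A path like R1 of Fig. 18.7: five non-switch components, two closed
# breakers, and one load-center breaker assumed to always fail open (p1 = 1)
aps = [ap_device(0.1)] * 5 + [ap_breaker(0.1, True)] * 2 + [ap_not_short(0.1, 1.0)]
pap(aps)
```

With p1 = 1 the final factor is 1, so the path's PAP reduces to the plain product of APs, matching the simplification from (18.23) to (18.24).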
The PAP for an electrical path gives a measure of the success the path will have in delivering electrical energy to the load in that path: the higher the PAP, the greater the chance that the path will provide electrical energy to the load. The control actions determined by the Reconfiguration for Reduction of Supply Interruption (RRSI) module always involve a change of the electrical supply path of bus transfers. The bus transfers onboard an SPS can be of two types: automatic and manual. To change the supply path of a manual bus transfer (MBT), a manual action can be taken to "transfer BT." For an automatic bus transfer (ABT), an automatic action to change the supply path happens when a certain condition is met; hence, for ABTs, the control action would be to "Open CB." Implementing any control action involves a certain cost. If the difference between the (PAP)_R1 and (PAP)_R2 values is small, then a good decision may be to disregard the suggested control action. It is also possible that the difference in PAP values is substantial but the individual PAP values are small; in that case, too, a good decision may be to disregard the suggested control action. Hence, the absolute value of the difference in PAP between R1 and R2 and the PAP value of the selected path are provided as output, and it is then up to the operator to determine whether the suggested action should be implemented. The heuristic discussed above is applied to loads that have more than one supply path. The RRSI is applicable only to vital loads. Since the objective of this module is to reconfigure vital loads such that the probability of supply interruption to those loads is reduced, the method is referred to as Reconfiguration for Reduction of Supply Interruption for vital loads. When a weapon hits a ship, it causes damage to the electrical components of the ship, leading to electrical faults and cascading faults in the electrical system.
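The operator-facing decision described above can be sketched as follows; the two thresholds (minimum PAP gain and minimum acceptable PAP) are assumed values, since the chapter leaves the final judgment to the operator rather than fixing numeric cutoffs.

```python
def rrsi_recommendation(pap_normal, pap_alternate, min_gain=0.05, min_pap=0.2):
    """Heuristic sketch of the RRSI decision for a vital load with two paths:
    suggest switching the bus transfer to the alternate path only when the
    PAP gain is substantial and the better path's PAP is itself non-
    negligible. Both thresholds are assumed for illustration."""
    gain = pap_alternate - pap_normal
    best = max(pap_normal, pap_alternate)
    if gain > min_gain and best >= min_pap:
        return "transfer to alternate path", abs(gain), best
    return "keep current path", abs(gain), best

rrsi_recommendation(0.45, 0.80)  # large gain, healthy PAP -> suggest transfer
```

Returning the absolute PAP difference and the selected path's PAP alongside the suggestion mirrors the module's output, which the operator uses to accept or disregard the action.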
If the components that will be damaged could be determined exactly before the weapon hit takes place, then these components could be isolated, and the electrical faults and cascading faults could be completely prevented. Since the weapon hit is a future event, however, it is impossible to predict exactly which components will be damaged. The probability of damage for components provided by the Weapon-hit Damage Assessment methodology, discussed earlier, can be used to identify components that have a very high probability of being damaged by a weapon hit.
Components can be identified that have an EPOD higher than a threshold value, in other words, a high probability of being damaged. If these identified components are isolated before the weapon hit takes place, then the chances of electrical faults and cascading faults are reduced. However, after the weapon hits, a flagged component may turn out not to be damaged; in that case, because of the isolation of that component, the electrical supply to some loads downstream of it might be unnecessarily interrupted. If such a load is a vital load, the interruption of its electrical supply might reduce the ship's chances of surviving the attack. Therefore, it is best not to isolate components upstream of a vital load even when their damage probability exceeds the threshold. Hence, the methodology to prevent electrical faults and cascading faults is implemented only for non-vital loads, meaning only the components lying in the radial paths of non-vital loads are considered for isolation. This module is therefore referred to as Reconfiguration for Component Isolation for non-vital loads. A case study is presented to illustrate this functionality. The case is based on a simplified electrical network of an AC radial SPS, as shown in Fig. 18.8. For this case, non-vital loads L1 and L4 are considered. This network
Fig. 18.8 Simplified electrical network for an SPS
consists of 2 generators (Gen1, Gen2), 2 switchboards (SB1, SB2), 2 load centers (LC1, LC2), 5 loads (L1, L2, . . . , L5), 13 circuit breakers (CB1, CB2, . . . , CB13), 14 cables (CL1, CL2, . . . , CL14), and 2 manual bus transfers (BT1, BT2). Continuous lines show the normal path for the BTs, and the alternate path is shown by dotted lines. The components having an EPOD greater than the threshold and lying in a path supplying power to a non-vital load are referred to as non-critical components. For this case, it was assumed that a missile was detected. Therefore, to assess the damage that would be caused by the actual weapon hit, the Weapon-hit Damage Assessment module was executed and the EPOD was computed for each component. A threshold EPOD limit, EPOD_threshold, was assumed. It was further assumed that, among all the components in the electrical network, only cable CL2 had an EPOD, EPOD_CL2, greater than the threshold; in other words, EPOD_CL2 > EPOD_threshold. To isolate CL2 before the hit, the first circuit breaker upstream of CL2, CB3, would be opened. This action, if implemented, will cause de-energization of load L1. But in the event of an actual hit in which cable CL2 was damaged, load L1 would be de-energized anyway. If CL2 was isolated before the weapon hit took place, then electrical faults or cascading faults that could happen because of damage to CL2 have been prevented. If CL2 was isolated before the weapon hit but was not damaged by it, then the de-energization of L1 was not needed, and L1 should be restored by a Restoration program. Since load L1 is a non-vital load, however, it can be assumed that its temporary de-energization will not hamper the ship's chances of surviving the attack. After the determination of control actions by the Pre-hit Reconfiguration methodology, the Post-hit Restoration methodology is called. Once the weapon hits the ship, it will cause damage to the SPS, and the damage will result in faults in the SPS.
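The pre-hit isolation step for non-vital loads can be sketched as follows, using the CL2/CB3 example; the path encoding, EPOD values, and threshold are assumptions made only for this sketch.

```python
# Sketch of pre-hit component isolation for non-vital loads: among the
# components feeding a non-vital load, flag those whose EPOD exceeds the
# threshold and open the first circuit breaker upstream of each. The path
# encoding and numeric values below are assumed for illustration.

def components_to_isolate(paths, epod, threshold, vital_loads):
    """paths: load name -> ordered component list from source to load.
    Returns (component, control action) pairs for non-vital loads only."""
    actions = []
    for load, path in paths.items():
        if load in vital_loads:
            continue  # never pre-emptively isolate a vital load's path
        for comp in path:
            if epod.get(comp, 0.0) > threshold:
                # first breaker upstream of the flagged component
                upstream_cb = next(c for c in reversed(path[:path.index(comp)])
                                   if c.startswith("CB"))
                actions.append((comp, f"open {upstream_cb}"))
    return actions

# Path of the non-vital load L1 in Fig. 18.8, with CL2 above the threshold
paths = {"L1": ["SB1", "CB1", "CL1", "LC1", "CB3", "CL2", "L1"]}
epod = {"CL2": 0.85}
components_to_isolate(paths, epod, threshold=0.7, vital_loads=set())
```

With CL2 flagged, the action produced is to open CB3, the breaker immediately upstream, matching the case study; marking L1 vital instead would suppress the action.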
The output of the Failure Assessment method is a list of the components in the faulted sections and a list of the loads that are de-energized due to these faults. This information is then passed to the Reconfiguration for Restoration module, which is based on an expert system method that first prioritizes the de-energized loads and then tries to determine control actions to restore each load. For each restorable load, it also determines whether the implementation of the control actions that would restore the load would cause any system constraint violation(s). If a system constraint violation is identified, the control actions for the load are discarded and the load is marked unrestorable. Otherwise, for loads whose control actions do not result in system constraint violation(s), the loads are marked restorable and the control actions are stored. In the case of a generation capacity constraint violation, an expert system-based load shedding method is called, which determines control actions to shed load in increasing order of priority until the generation capacity constraint violation is removed. Finally, the list of control actions for restorable loads, the control actions for load shedding (if it was performed), and the list of unrestorable loads are given as output.
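The restoration loop just described can be sketched as below, with the constraint check reduced to a simple generation-capacity test; the load names, demands, and priorities are assumed, and the expert system's load-shedding branch is omitted from this sketch.

```python
# High-level sketch of post-hit restoration: prioritize de-energized loads,
# attempt each in turn, and discard any restoration whose control actions
# would violate a constraint (here, only generation capacity is checked).

def restore_loads(deenergized, capacity_left):
    """deenergized: list of (name, demand_amps, priority); higher priority
    loads are attempted first. Returns restorable, unrestorable, actions."""
    restorable, unrestorable, actions = [], [], []
    for name, demand, priority in sorted(deenergized, key=lambda l: -l[2]):
        if demand <= capacity_left:          # constraint check (capacity only)
            capacity_left -= demand
            restorable.append(name)
            actions.append(f"close alternate-path switch for {name}")
        else:
            unrestorable.append(name)        # control actions discarded
    return restorable, unrestorable, actions

# Three de-energized loads and 70 A of spare generation (assumed values)
restore_loads([("L2", 40, 2), ("L5", 30, 1), ("L4", 25, 2)], capacity_left=70)
```

The two priority-2 loads are restored first, exhausting the spare capacity, and the remaining load is reported unrestorable, echoing the module's prioritized output.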
18.3 Conclusion

This chapter presented concepts of self-healing developed by the authors to enhance the survivability of AC radial shipboard power systems during battle damage or material casualties. The proposed methods were illustrated with examples.
References

1. K. L. Butler and N. D. R. Sarma, "Self-Healing Reconfiguration for Naval Shipboard Power Systems," IEEE Transactions on Power Systems, vol. 19, no. 2, May 2004, pp. 754–762.
2. S. K. Srivastava and K. L. Butler-Purry, "Probability-based predictive self-healing reconfiguration for shipboard power systems," IET Generation, Transmission & Distribution, vol. 1, issue 3, May 2007, pp. 405–413.
3. U. P. Rajbhandari, "Development of a Geographical Information System-based Model of Shipboard Power System," M.S. Thesis, Texas A&M University, May 2001.
4. MicroStation. https://www.bentley.com/en/products/brands/microstation
5. S. Srivastava and K. L. Butler-Purry, "Expert-system method for automatic reconfiguration for restoration of shipboard power systems," IEE Proceedings - Generation, Transmission and Distribution, vol. 153, issue 3, May 2006, pp. 253–260.
6. K. L. Butler, N. D. R. Sarma, and V. R. Prasad, "Network Reconfiguration for Service Restoration in Shipboard Power Systems," IEEE Transactions on Power Systems, vol. 16, no. 4, Nov. 2001, pp. 653–661.
7. K. L. Butler-Purry, N. D. R. Sarma, and I. V. Hicks, "Service restoration in naval shipboard power systems," IEE Proceedings - Generation, Transmission and Distribution, vol. 151, Jan. 2004, pp. 95–102.
8. G. L. Nemhauser and L. A. Wolsey, Integer and Combinatorial Optimization, Wiley-Interscience, 1988, pp. 8, 495–513.
9. H. L. Willis, Power Distribution Planning Reference Book, New York: Marcel Dekker, Inc., 1997, p. 709.
10. ILOG CPLEX. https://www.ibm.com/products/ilog-cplex-optimization-studio
11. S. Srivastava, "Multi-agent System for Predictive Reconfiguration of Shipboard Power Systems," PhD dissertation, Texas A&M University, Dec. 2003.
Karen Butler-Purry, PhD, PE, FIEEE, began her undergraduate studies in Electrical Engineering at Southern University, Baton Rouge, Louisiana, in 1981, with the intention of completing a bachelor's degree, entering industry, and living happily ever after. Karen's early career aspiration was to become a math teacher like her dad. She grew up in a close-knit community of black teachers and observed the daily joy, fulfillment, and community respect that her dad and his colleagues experienced from teaching. In high school, a family friend introduced her to the notion of a career in engineering. After participating in a summer pre-college engineering program at Southern University in 1979, she set her goal of becoming an electrical engineer. The summer before Karen began college at Southern University, she was introduced to the university's Vice-Chancellor. His
vision that Karen pursue a PhD and his mentorship and coaching guided Karen through a host of experiences that would enhance her ability to pursue graduate studies at a top university. From her participation in those experiences, she decided to pursue a PhD and teach college students. She had come full circle, sort of, to her earliest career aspiration. It was also the presence of her Calculus III professor, the only female math, science, or engineering professor she had at Southern University, that made Karen believe she could be a professor in a male-dominated field. Later, Karen received a Master's in Electrical Engineering from the University of Texas at Austin and a PhD in Electrical Engineering from Howard University. For a year and a half between the completion of her master's studies and the beginning of her PhD studies, Karen worked as a software developer in industry. In 1994, Karen began her academic career at Texas A&M University. Over more than 25 years, she navigated the tenure process and was promoted through the professorial ranks to professor, served as Assistant Dean for Graduate Programs in the College of Engineering for 3 years, as Associate Department Head for two and a half years, and as Associate Provost and Dean of the Graduate and Professional School for 12 years. Throughout her time at Texas A&M, Karen has benefitted from the mentorship of a more senior female ECE professor and administrator at Texas A&M University. Further, Karen has been a licensed professional engineer since 1995. Karen's passion is attracting the next generation of students from historically underrepresented populations and female students into Science, Technology, Engineering, and Math (STEM) fields. She currently leads a project studying the use of video games to transform student learning and impact the attitudes of college and high school students toward electrical and computer engineering.
Karen has also directed several US federally funded fellowship and education programs that target recruitment, retention, and advancement of pre-college, college, and graduate students in STEM fields. Karen has received numerous teaching and service awards, including the 2005 American Association for the Advancement of Science (AAAS) Mentor Award for her efforts to mentor students from underrepresented groups and for leadership in promoting PhD careers for them in electrical engineering and computer sciences. She was elevated to Institute of Electrical and Electronics Engineers (IEEE) Fellow status in 2018. Further, in 2021, Karen received the Council of Graduate Schools Debra Stewart Award for Outstanding Leadership in Graduate Education.
Sarma (NDR) Nuthalapati went to a small school in a mid-size town in India and was greatly inspired by his science teacher, who had him build small projects and participate in local- and region-level science fairs. Since no one in the family in those days had pursued higher studies, it was this teacher's guidance and inspiration that made engineering a natural choice for Sarma. He joined Regional Engineering College at Warangal (RECW), now called the National Institute of Technology Warangal (NITW), in India, obtained his Bachelor of Technology (BTech) in Electrical Engineering in 1983, and continued there to obtain his Master of Technology (MTech) in Power Systems Engineering. As part of his MTech program, he had the opportunity to do a semester-long project at the R&D Center of CMC Limited, a reputed technology development company set up by the Government of India. At CMC, he worked in the area of Energy Management Systems (EMS) and grid operations, and that interest continued throughout his career. After working a couple of years as a faculty member at RECW, his interest in higher studies continued and motivated him to join the Indian Institute of Technology at Delhi (IIT Delhi) for his PhD and to work in the area of Distribution Automation Systems (DAS). CMC became interested in this work and invited Sarma for an internship to adapt his research work as part of their DAS product. This provided an opportunity for Sarma to join CMC in 1991 and continue to work in the areas of EMS and DAS, which became the most important areas of interest in his career. While working at CMC, Sarma became interested in exposure to international research work and came to Texas A&M University (TAMU) in 1997 as a Post-Doctoral Research Associate. At TAMU, he worked in the area of shipboard power systems in the Department of Electrical Engineering.
After working for about 9 years at CMC, he returned to TAMU during 2000–2003 and 2006–2007. Due to his interest in grid operations, he worked in the area of power system grid operations for over 15 years at ERCOT, Peak Reliability, LCRA, and Dominion Energy in the United States, providing technical support to operations engineers and control center operators. Inspired by his high school teacher and other teachers in his life, Sarma has always been interested in sharing industrial experience with young engineers, students, and academicians. This interest gave him the opportunity to serve as an adjunct professor at TAMU beginning in 2016. He is also actively involved in IEEE and has initiated panel sessions and task forces. He is the editor of the book Power System Grid Operation Using Synchrophasor Technology published by Springer (June 2018) and of the book Use of Voltage Stability Assessment and Transient Stability Assessment Tools for Grid Operations published by Springer (June 2021).
He is very active in the North American Synchrophasor Initiative (NASPI) and received the Control Room Solutions Task Team (CRSTT) Most Valuable Player (MVP) Award in October 2015 and the CRSTT Volunteer of the Year Award in April 2018 from the U.S. Department of Energy. He currently provides consulting services in the areas of EMS and synchrophasor technology. As an Adjunct Professor at TAMU, he advises students and faculty, bringing an industry perspective to their research. He is a senior member of IEEE and the IEEE Power Engineering Society.
Sanjeev K. Srivastava entered Madan Mohan Malviya Engineering College, Gorakhpur, India, in 1993, intending to become an electrical engineer. During this time, he discovered his passion for understanding the complexity of power networks and finding solutions to power system problems. After completing his Bachelor's degree, he joined the Master's program at the Indian Institute of Technology Delhi, India, specializing in power systems. During his Master's, he was selected for the DAAD scholarship program and performed his Master's thesis work at the Institut für Elektrische Anlagen und Energiewirtschaft in Aachen, Germany, where he applied Machine Learning methods to large electrical network problems. After finishing his Master's, he joined the electronic metering company Secure Meters Limited, where he developed algorithms and software solutions for power system analysis and metering. While working at Secure Meters Limited, he realized the need to develop a deeper understanding of power networks and their complexities. He then joined Texas A&M University, College Station, TX, in 2000 as a PhD candidate under Dr. Karen Butler-Purry. During his PhD, he worked on shipboard power system analysis and reconfiguration issues and developed an in-depth understanding of the complexities of ship systems. After finishing his PhD in 2003, he joined the Center for Advanced Power Systems at Florida State University, Tallahassee, FL, where he conducted basic and applied research to advance power systems technology for electric utility, defense, and transportation applications. He realized that what interests him most is applying advanced techniques in Machine Learning, Natural Language Processing, and Knowledge Graphs to develop innovative solutions for complex problems.
So, in 2013 he joined Siemens, where he had the opportunity to research and develop Machine Learning–driven solutions for problems in a wide range of domains, including power systems, building management systems, computer-aided design and engineering software, gas turbines, transformers, robotics, and manufacturing technologies such as 3D printing. In 2020, he joined Deloitte, where he led teams developing applications to automate and simplify various aspects of financial audit.
Sanjeev has numerous publications and patents. For his patent on a control architecture for microgrid resiliency, he received the Thomas Alva Edison Patent Award from the R&D Council of New Jersey in 2019. He has won hackathons and excellence awards for his work and continues to apply Machine Learning and Natural Language Processing–based solutions to complex problems in different domains. Currently, he is at Amazon, where he leads a data science team applying these advanced methods in the advertising domain.
Index
A
Advanced information technology, 352, 363–370
Asset management (AM), 87–147, 215, 355
B
Boiling water reactors (BWRs), 157, 158, 167, 179, 181
Bulk Electric System (BES), 42, 48, 49, 54, 55, 57, 59–61
C
Careers in energy, 24, 30
Climate-resilient power grids, 209, 224, 230
Condition monitoring, 90, 98, 103, 113–115, 122, 131–147, 200, 388–390, 392, 394
Controllable consumers, 419–438
Controllable consumption devices, 420, 422
Control sequences, 399–413
Critical Infrastructure, 39, 40, 48–51, 60–61, 194, 209, 224, 237, 388
Cyberattack, 49, 61, 94, 195
Cyber security, 49, 60, 199, 355, 364, 369
D
Decarbonization, 87, 90, 310, 343
Decision trees (DTs), 249, 257–259
Demand-side management, 74, 76, 354, 420–422
Digital substation, 386–388, 394, 395
Discrete-time sliding mode, 443–461
Distributed control, 236, 312, 359
Distributed energy resources (DERs), 68, 71–74, 234, 266, 271, 275, 276, 313, 333, 352–356, 363, 401, 404
Distribution automation (DA), 210, 231, 355, 379, 399–413
Distribution grid, 209, 216, 225, 226, 233, 236, 275, 341, 359, 419, 420, 422, 433, 457
Dynamic monitoring and decision systems (DyMonDS), 308, 333, 340, 341
E
Electrical drives, 443–461
Electricity generation, 88, 89, 109, 117, 131, 157–159, 363, 401, 413
Electric power systems modeling and control, 310
Electric Reliability Organization (ERO), 48, 49, 54–57, 60, 61
Electric utility regulation, 40, 42–46, 62
Encapsulated Nuclear Heat Source Reactor (ENHS), 167–170
Energy careers, 24, 30
Energy dynamics, 307–343
Energy equity, 73, 190
Energy justice, ix, 67–79
Energy management, 272, 275–277, 341, 352, 353, 370, 402, 403, 413, 426
Energy Policy Act, 47, 53
Energy transition, 32, 68, 70–75, 79, 87–88, 93, 94, 201
Energy workforce, 21–27
F
Failure assessment, 486
Failure mode, effects, and criticality analysis (FMECA), 104, 117
Fault detection, 90, 131–146
Faults, 88, 97, 106, 114, 131–146, 211, 212, 224, 226, 232, 236, 248–251, 287, 289–292, 294–303, 337, 381, 385, 390–393, 407, 468, 470–472, 474, 475, 478, 482–486
Federal Energy Regulatory Commission (FERC), 41–43, 45–49, 51, 53–55, 61
Flexibility, 27, 73, 75, 90, 189, 202–204, 214, 225, 236, 355, 420, 426
Frequency, 68, 88, 102, 106, 110–113, 117, 118, 120, 121, 126–129, 131, 135, 191, 195, 209, 211, 213, 271–274, 281, 302, 303, 308, 310–312, 317, 318, 322–325, 327–330, 336–338, 342, 413, 444, 449, 453–458, 460
Frequency control, 274, 308, 320
G
Grid-connected power converters, 445
Grid resilience metric, 216–224
H
High Impact Low Probability (HILP), 194, 197, 200, 203, 213–217, 219–222, 224, 225, 228, 230, 231
Hybrid methods, 27, 259
I
Innovative teaching, 352, 370–372
Intelligent electronic devices (IEDs), 378, 380, 382, 384, 385, 388, 389, 391–395
Interaction variable, 307–343
Interdisciplinary, 351–372
Internet, 89, 194, 351, 352, 354, 357–363, 366, 368, 372, 404
L
Layered architecture, 233–236, 357–362
Light water reactors (LWRs), 157, 164–167, 169, 172
Lyapunov stability, 322
M
Machine learning (ML), 76, 99, 101, 132, 247, 249, 256–259, 266, 272, 367, 402
Maintenance optimization, 95, 101, 106–111, 113, 130
Merging units (MU), 383
Microgrids, 71, 215, 225, 231–234, 275, 313, 317, 333, 337, 353, 357, 359, 363, 371, 400, 408
Modeling and control, 307–343
N
Natural disaster, 202, 214, 237, 357
Net injection and load capability, 406
Networked microgrids, 231, 235
Neutron transport, 170–183
North American Electric Reliability Corporation (NERC), 14, 43, 49, 54–57, 61, 212, 298, 303
Nuclear fission, 160–162, 165, 169, 184
Nuclear fuel cycle, 184
Nuclear power, 14, 89, 100, 101, 157–184, 326, 330, 334
Nuclear reactor analysis, 170–183
O
Operational flexibility, 214, 236
P
Pioneering women in energy, 15
Pioneering women in the electric utility industry, 39–46, 48, 50, 56
Power distribution, 209, 212, 213, 215–216, 219, 223, 225, 227, 231–236, 288, 399–413
Power electronics, 90, 118, 128–129, 266, 308, 311, 333–337, 342, 355, 356, 393, 443, 454, 456
Power engineers, 4, 69, 70, 79, 287
Power system stability, 247–253, 259
Predictive maintenance, 94–99, 101
Predictive self-healing, 471, 478–486
Process bus, 386, 387, 389, 395
Protection, ix, 15, 43, 45, 49, 121, 159, 194, 200, 287–304, 312, 330, 351, 354, 366, 380–382, 386, 389–392, 395, 468, 471, 478
R
Reinforcement learning, 265–282
Relaying, 298, 389–391, 394
Relays, 11, 288–304, 310, 380–382, 385, 386, 389–391, 394, 395, 425–427, 435
Reliability centered asset management (RCAM), 87–146
Reliability-centered maintenance (RCM), 95, 100–108, 115–131
Reliability standards, 14, 196
Resilience, 28, 88, 197–201, 204, 209–237, 354, 355
Risk, 49–51, 55–58, 60–61, 93–96, 108, 112, 123, 125, 132, 135, 138, 139, 145–147, 157, 158, 161, 191, 195, 197–204, 217, 219, 221, 223–225, 227–230, 237, 341, 354, 355, 405, 408, 470
Risk-averse planning, 225, 228, 229
S
SA application functions, 389–394
Security of electricity supply, 189–204
Self-healing, 355, 357, 467–487
Service restoration, 210, 222, 231, 402, 409
Shipboard power systems, 400, 408, 467–487
Single machine equivalent (SIME), 252–254, 256, 259
Small modular reactors (SMRs), 158–160, 166–170, 184
Smart grid, 189, 202, 224, 225, 232, 274, 351–372
STEM careers, 26, 33
Substation automation (SA), ix, 377–395
Substation automation laboratory, 394–395
Sub-synchronous resonance (SSR), 308, 312, 343
Supervisory control and data acquisition (SCADA), 88, 131–134, 139, 142, 143, 146, 147, 275, 308, 333, 340, 366, 377–382, 385, 386, 388, 389, 392–395
Sustainability, 28, 59–60, 89, 157, 159, 189, 190, 201, 202, 265, 436–438
System protection, ix, 287–304, 330, 395

T
Time-series analysis, 403–408
Transient stability (TS), ix, 247–253, 258, 259, 311, 327, 330, 334

U
US Federal Sentencing Guidelines (UFSGO), 51

V
Variable structure systems (VSS), 443, 445–449, 456
Virtual reality (VR), 370
Vulnerability, 197–201, 204, 369

W
Wind energy, 88, 118, 130, 354
Wind turbine, 30, 87–147, 356
Women in energy, 23, 30
Women in solar energy, 6, 8, 9
Women in the electric utility industry, 39–46, 48, 50, 56
Women in the power industry, 100, 332, 337, 352, 353
Workforce development in energy, 21–27