Safety Science and Technology
SAFETY SCIENCE AND TECHNOLOGY
Edited by: Zoran Gacovski
Arcler Press
www.arclerpress.com
Safety Science and Technology Zoran Gacovski
Arcler Press
224 Shoreacres Road
Burlington, ON L7L 2H2
Canada
www.arclerpress.com
Email: [email protected]

e-book Edition 2023
ISBN: 978-1-77469-658-3 (e-book)

This book contains information obtained from highly regarded resources. Reprinted material sources are indicated. Copyright for individual articles remains with the authors as indicated and is published under a Creative Commons License. A wide variety of references are listed. Reasonable efforts have been made to publish reliable data; the views articulated in the chapters are those of the individual contributors and not necessarily those of the editors or publishers. The editors and publishers are not responsible for the accuracy of the information in the published chapters or for the consequences of their use. The publisher assumes no responsibility for any damage or grievance to persons or property arising out of the use of any materials, instructions, methods, or thoughts in the book. The editors and the publisher have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission has not been obtained. If any copyright holder has not been acknowledged, please write to us so that we may rectify the omission.

Notice: Registered trademarks of products or corporate names are used only for explanation and identification, without intent to infringe.

© 2023 Arcler Press
ISBN: 978-1-77469-528-9 (Hardcover)
Arcler Press publishes a wide variety of books and eBooks. For more information about Arcler Press and its products, visit our website at www.arclerpress.com.
DECLARATION Some chapters in this book are open-access, copyright-free published research works, published under a Creative Commons License and indicated with their citations. We are thankful to the publishers and authors of this content, as without them this book would not have been possible.
ABOUT THE EDITOR
Dr. Zoran Gacovski is currently a full professor at the Faculty of Technical Sciences, “Mother Tereza” University, Skopje, Macedonia. His teaching subjects include software engineering and intelligent systems, and his areas of research are information systems, intelligent control, machine learning, graphical models (Petri, neural, and Bayesian networks), and human-computer interaction. Prof. Gacovski earned his PhD at the Faculty of Electrical Engineering, UKIM, Skopje. In his career he was awarded a Fulbright postdoctoral fellowship (2002) for a research stay at Rutgers University, USA. He has also earned the best-paper award at the Baltic Olympiad for Automation Control (2002), a US NSF grant for research in the field of human-computer interaction at Rutgers University, USA (2003), and DAAD grants for research stays at the University of Bremen, Germany (2008 and 2012). The projects in which he took an active part include: “A multimodal human-computer interaction and modelling of the user behaviour” (for Rutgers University, 2002-2003), sponsored by the US Army and Ford; “Development and implementation of algorithms for guidance, navigation and control of mobile objects” (for the Military Academy, Skopje, 1999-2002); and “Analytical and non-analytical intelligent systems for deciding and control of uncertain complex processes” (for the Macedonian Ministry of Science, 1995-1998). He is the author of 3 books (including the international edition “Mobile Robots”), 20 journal papers, and over 40 conference papers, and he is also a reviewer and editor for IEEE journals and conferences.
TABLE OF CONTENTS
List of Contributors........................................................................................xv
List of Abbreviations..................................................................................... xxi
Preface....................................................................................................xxv
Section 1: Safety-Critical Systems
Chapter 1
Architecture Level Safety Analyses for Safety-Critical Systems.................. 3 Abstract...................................................................................................... 3 Introduction................................................................................................ 4 Error Model Annex in Architecture Analysis & Design Language (AADL).... 5 Implementation of Proposed Research........................................................ 8 Safety Analyses......................................................................................... 12 Conclusion............................................................................................... 16 Acknowledgments.................................................................................... 17 References................................................................................................ 18
Chapter 2
Safety Assessment of Nuclear Power Plants for Liquefaction Consequences...................................................................... 21 Abstract.................................................................................................... 21 Introduction.............................................................................................. 22 Selection of Safety Analysis Methodology................................................. 25 Characterisation Liquefaction Hazard....................................................... 31 Conclusion............................................................................................... 39 References................................................................................................ 40
Chapter 3
The Fukushima Nuclear Accident: Insights on the Safety Aspects............ 45 Abstract.................................................................................................... 45 Introduction.............................................................................................. 46 A Brief Description About the Fukushima Nuclear Accident..................... 47
Nuclear Safety Culture.............................................................................. 48 Barrier Protection Against Tsunami........................................................... 49 Diversity and Redundant Safety Systems................................................... 51 Training.................................................................................................... 53 Vent System Failures................................................................................. 54 Filtered Venting......................................................................................... 56 Structures, Systems and Components Shared at Nuclear Power Plants...... 57 Impact Evaluation of the Ssc Sharing in the Fukushima Accident.............. 58 Hydrogen Explosions................................................................................ 63 Lessons Learned........................................................................................ 65 Conclusions.............................................................................................. 67 Acknowledgements.................................................................................. 68 References................................................................................................ 69 Chapter 4
An Augmented Framework for Formal Analysis of Safety Critical Systems............................................................................. 73 Abstract.................................................................................................... 73 Introduction.............................................................................................. 74 Methodology and Components................................................................. 74 Formal Analysis of Safety Critical Systems (SCSS)...................................... 75 Simulation Results.................................................................................... 78 Conclusions.............................................................................................. 82 References................................................................................................ 85
Chapter 5
Concepts of Safety Critical Systems Unification Approach & Security Assurance Process.................................................................................... 87 Abstract.................................................................................................... 87 Introduction.............................................................................................. 88 Research Method...................................................................................... 89 Background of Research........................................................................... 90 Existing Research Review.......................................................................... 91 Proposed Concept of Security Assurance Safety-Critical Systems............... 93 Designed Defensive Strategy as a Solution to Deal Business Logic Layer Concerns...................................................................... 97 Conclusion............................................................................................... 99 References.............................................................................................. 100
Section 2: Safety Simulation Techniques
Chapter 6
The Marine Safety Simulation based Electronic Chart Display and Information System................................................................................ 105 Abstract.................................................................................................. 105 Introduction............................................................................................ 106 Automatic Feasibility Identification in Route Plan................................... 107 Simulation Model of Route Plan............................................................. 109 Route Monitoring Simulation Model....................................................... 110 Early-Warning Model of Own Ship Dynamic Information....................... 111 Experiment and Analysis......................................................................... 112 Conclusions............................................................................................ 113 Acknowledgments.................................................................................. 114 References.............................................................................................. 115
Chapter 7
Improved Modelling and Assessment of the Performance of Firefighting Means in the Frame of a Fire PSA................................... 117 Abstract.................................................................................................. 117 Introduction............................................................................................ 118 Methods Implemented in Mcdet............................................................. 119 Fire Event, Firefighting Means, and Modelling Assumptions.................... 124 Analysis Steps and Results...................................................................... 129 Conclusions............................................................................................ 136 Acknowledgments.................................................................................. 138 References.............................................................................................. 139
Chapter 8
Scenario Grouping and Classification Methodology for Postprocessing of Data Generated by Integrated Deterministic-Probabilistic Safety Analysis............................................ 143 Abstract.................................................................................................. 143 Introduction............................................................................................ 144 Classification Approach.......................................................................... 146 Application............................................................................................. 153 Results.................................................................................................... 155 Discussion.............................................................................................. 164 Acknowledgments.................................................................................. 165 References.............................................................................................. 166
Chapter 9
Demonstration of Emulator-Based Bayesian Calibration of Safety Analysis Codes: Theory and Formulation..................................... 169 Abstract.................................................................................................. 169 Introduction............................................................................................ 170 Overview of Emulator-Based Bayesian Calibration.................................. 172 Gaussian Process-Based Emulators......................................................... 175 Calibration Demonstration: Friction Factor Model................................... 194 Conclusions............................................................................................ 206 References.............................................................................................. 208
Chapter 10 Microscopic Simulation-Based High Occupancy Vehicle Lane Safety and Operation Assessment: A Case Study.................................... 211 Abstract.................................................................................................. 211 Introduction............................................................................................ 212 Literature Review.................................................................................... 214 Methodology.......................................................................................... 215 Case Study.............................................................................................. 223 Concluding Remarks............................................................................... 233 Acknowledgments.................................................................................. 234 References.............................................................................................. 235 Section 3: Safety in Transport and Vehicles Chapter 11 Safety of Autonomous Vehicles.............................................................. 241 Abstract.................................................................................................. 241 Introduction............................................................................................ 242 Levels of Automation.............................................................................. 245 Types of Errors for Autonomous Vehicles................................................. 248 On-Road Testing and Reported Accidents............................................... 252 Opportunities and Challenges................................................................. 257 Summary and Concluding Remarks........................................................ 260 References.............................................................................................. 262 Chapter 12 Studying the Safety Impact of Autonomous Vehicles Using Simulation-Based Surrogate Safety Measures......................................... 273 Abstract.................................................................................................. 273 Introduction............................................................................................ 274
Methodology.......................................................................................... 277 Results and Discussion........................................................................... 282 Conclusion............................................................................................. 289 References.............................................................................................. 291 Chapter 13 Advanced Modeling and Simulation of Vehicle Active Aerodynamic Safety............................................................................... 295 Abstract.................................................................................................. 295 Introduction............................................................................................ 296 System of Data Acquisition and Active Control of Movable Aerodynamic Elements................................................... 298 Models And Simulations......................................................................... 300 Road Tests............................................................................................... 317 Conclusions............................................................................................ 321 Acknowledgments.................................................................................. 321 References.............................................................................................. 322 Chapter 14 Analyzing Driving Safety Using Vehicle-Water-Filled Rutting Dynamics Model and Simulation............................................................ 325 Abstract.................................................................................................. 325 Introduction............................................................................................ 326 “Vehicle-Water-Filled Rutting” Open-Loop Dynamics Model.................. 332 Conclusions and Recommendations....................................................... 349 Acknowledgments.................................................................................. 351 References.............................................................................................. 352 Section 4: Safety Analysis of Medicinal Units (Hospitals) Chapter 15 Establishing Patient Safety in Intensive Care— A Grounded Theory...... 357 Abstract.................................................................................................. 357 Introduction............................................................................................ 358 Method................................................................................................... 359 Findings.................................................................................................. 362 Conclusion............................................................................................. 372 References.............................................................................................. 373
Chapter 16 Analysis of Critical Incidents during Anesthesia in a Tertiary Hospital.. 377 Abstract.................................................................................................. 377 Introduction............................................................................................ 378 Methods................................................................................................. 379 Results.................................................................................................... 379 Discussion.............................................................................................. 388 References.............................................................................................. 394 Chapter 17 Healthcare Professional’s Perception of Patient Safety Measured by the Hospital Survey on Patient Safety Culture: A Systematic Review and Meta-Analysis.................................................... 399 Abstract.................................................................................................. 399 Introduction............................................................................................ 400 Methods................................................................................................. 401 Results.................................................................................................... 404 Discussion.............................................................................................. 411 Conclusions............................................................................................ 413 Authors’ Contributions............................................................................ 413 Acknowledgments.................................................................................. 413 References.............................................................................................. 414 Chapter 18 Uncertainty of Clinical Thinking and Patient Safety............................... 425 Abstract.................................................................................................. 425 Introduction............................................................................................ 426 The Uncertainty of Clinical Thinking....................................................... 426 How to Deal With the Uncertainty of Clinical Thinking.......................... 428 Summary................................................................................................ 432 Authors’ Contributions............................................................................ 433 References.............................................................................................. 434 Index...................................................................................................... 437
LIST OF CONTRIBUTORS
K. S. Kushal: Aerospace Electronics & Systems Division, CSIR-National Aerospace Laboratories, Bangalore, Karnataka, India
Manju Nanda: Aerospace Electronics & Systems Division, CSIR-National Aerospace Laboratories, Bangalore, Karnataka, India
J. Jayanthi: Aerospace Electronics & Systems Division, CSIR-National Aerospace Laboratories, Bangalore, Karnataka, India
Tamás János Katona: University of Pécs, Boszorkány Utca 2, Pécs 7624, Hungary
Zoltán Bán: Budapest University of Technology and Economics, Budapest 1521, Hungary
Erzsébet Győri: Seismological Observatory, MTA CSFK GGI, Meredek Utca 18, Budapest 1112, Hungary
László Tóth: Seismological Observatory, MTA CSFK GGI, Meredek Utca 18, Budapest 1112, Hungary
András Mahler: Budapest University of Technology and Economics, Budapest 1521, Hungary
Zieli Dutra Thomé: Department of Nuclear Engineering, Military Institute of Engineering, Rio de Janeiro, Brazil
Rogério dos Santos Gomes: Directorate of Radiation Protection and Nuclear Safety, Brazilian National Commission for Nuclear Energy, Rio de Janeiro, Brazil
Fernando Carvalho da Silva: Department of Nuclear Engineering, COPPE/UFRJ, Rio de Janeiro, Brazil
Sergio de Oliveira Vellozo: Department of Nuclear Engineering, Military Institute of Engineering, Rio de Janeiro, Brazil
Monika Singh: College of Engineering & Technology (FET), Mody University of Science & Technology, Laxmangarh, India
V. K. Jain: College of Engineering & Technology (FET), Mody University of Science & Technology, Laxmangarh, India
Faisal Nabi: School of Management and Enterprise, University of Southern Queensland, Toowoomba, Australia
Jianming Yong: School of Management and Enterprise, University of Southern Queensland, Toowoomba, Australia
Xiaohui Tao: School of Management and Enterprise, University of Southern Queensland, Toowoomba, Australia
Muhammad Saqib Malhi: Melbourne Institute of Technology, Melbourne, Australia
Umar Mahmood: Melbourne Institute of Technology, Melbourne, Australia
Usman Iqbal: Melbourne Institute of Technology, Melbourne, Australia
Xin Yu Zhang: Marine Dynamic Simulation and Control Laboratory, Dalian Maritime University, Dalian 116026, China
Yong Yin: Marine Dynamic Simulation and Control Laboratory, Dalian Maritime University, Dalian 116026, China
Jin YiCheng: Marine Dynamic Simulation and Control Laboratory, Dalian Maritime University, Dalian 116026, China
XiaoFeng Sun: Marine Dynamic Simulation and Control Laboratory, Dalian Maritime University, Dalian 116026, China
Ren HongXiang: Marine Dynamic Simulation and Control Laboratory, Dalian Maritime University, Dalian 116026, China
Martina Kloos: GRS gGmbH, Boltzmannstraße 14, 85748 Garching, Germany
Joerg Peschke: GRS gGmbH, Boltzmannstraße 14, 85748 Garching, Germany
Sergey Galushin: KTH, Division of Nuclear Power Safety, AlbaNova University Center, 106 91 Stockholm, Sweden
Pavel Kudinov: KTH, Division of Nuclear Power Safety, AlbaNova University Center, 106 91 Stockholm, Sweden
Joseph P. Yurko: MIT, 77 Massachusetts Avenue, Cambridge, MA 02139, USA; FPoliSolutions, LLC, 4618 Old William Penn Highway, Murrysville, PA 15668, USA
Jacopo Buongiorno: MIT, 77 Massachusetts Avenue, Cambridge, MA 02139, USA
Robert Youngblood: INL, Idaho Falls, ID 83415-3870, USA
Chao Li: Department of Building, Civil and Environmental Engineering, Concordia University, Montréal, QC, Canada
Mohammad Karimi: Department of Building, Civil and Environmental Engineering, Concordia University, Montréal, QC, Canada
Ciprian Alecsandru: Department of Building, Civil and Environmental Engineering, Concordia University, Montréal, QC, Canada
Jun Wang: Department of Civil and Environmental Engineering, Mississippi State University, Starkville, MS 39762, USA
Li Zhang: Department of Civil and Environmental Engineering, Mississippi State University, Starkville, MS 39762, USA
Yanjun Huang: Department of Mechanical and Mechatronics Engineering, University of Waterloo, 200 University Avenue West, Waterloo, ON N2L 3G1, Canada
Jian Zhao: Department of Mechanical and Mechatronics Engineering, University of Waterloo, 200 University Avenue West, Waterloo, ON N2L 3G1, Canada
Mark Mario Morando: Monash Institute of Transport Studies, Department of Civil Engineering, Monash University, Melbourne, VIC, Australia
Qingyun Tian: School of Civil and Environmental Engineering, Nanyang Technological University, Singapore
Long T. Truong: Monash Institute of Transport Studies, Department of Civil Engineering, Monash University, Melbourne, VIC, Australia
Hai L. Vu: Monash Institute of Transport Studies, Department of Civil Engineering, Monash University, Melbourne, VIC, Australia
Krzysztof Kurec: Warsaw University of Technology, Institute of Aeronautics and Applied Mechanics, Warsaw 00-665, Poland
Michał Remer: Warsaw University of Technology, Institute of Aeronautics and Applied Mechanics, Warsaw 00-665, Poland
Jakub Broniszewski: Warsaw University of Technology, Institute of Aeronautics and Applied Mechanics, Warsaw 00-665, Poland
Przemysław Bibik: Warsaw University of Technology, Institute of Aeronautics and Applied Mechanics, Warsaw 00-665, Poland
Sylwester Tudruj: Warsaw University of Technology, Institute of Micromechanics and Photonics, Warsaw 02-525, Poland
Janusz Piechna: Warsaw University of Technology, Institute of Aeronautics and Applied Mechanics, Warsaw 00-665, Poland
Yandi Zhang: School of Highway, Chang’an University, Xi’an, Shaanxi 710064, China
Bobo Yuan: School of Highway, Chang’an University, Xi’an, Shaanxi 710064, China
Yukun Chou: School of Highway, Chang’an University, Xi’an, Shaanxi 710064, China
Marie Häggström: Department of Nursing Sciences, Mid Sweden University, Sundsvall, Sweden
Malin Rising Holmström: Department of Nursing Sciences, Mid Sweden University, Sundsvall, Sweden
Mats Jong: Department of Nursing Sciences, Mid Sweden University, Sundsvall, Sweden
Ling Antonia Zeng: Department of Anaesthesia, Singapore General Hospital, Singapore City, Singapore
Shin Yi Ng: Department of Anaesthesia, Singapore General Hospital, Singapore City, Singapore
Sze Ying Thong: Department of Anaesthesia, Singapore General Hospital, Singapore City, Singapore
Julia Hiromi Hori Okuyama: Universidade de Sorocaba, Graduate Program of Pharmaceutical Science, Sorocaba, Brazil
Tais Freire Galvao: Universidade Estadual de Campinas, Faculty of Pharmaceutical Sciences, Campinas, Brazil
Marcus Tolentino Silva: Universidade de Sorocaba, Graduate Program of Pharmaceutical Science, Sorocaba, Brazil
Qian Zhao: Emergency Department, Hebei General Hospital, Shijiazhuang, China
Zhangshun Shen: Emergency Department, Hebei General Hospital, Shijiazhuang, China
Hui Guo: Emergency Department, Hebei General Hospital, Shijiazhuang, China
Jianguo Li: Emergency Department, Hebei General Hospital, Shijiazhuang, China
LIST OF ABBREVIATIONS

AADL - Architecture Analysis & Design Language
ACC - Adaptive Cruise Control
ACI - Aggregated Crash Index
ADMAS - Automatic Dynamic Analysis of Mechanical Systems
AHRS - Attitude and Heading Reference System
AM - Adaptive Metropolis
AMR - Adaptive Mesh Refinement
API - Application Programming Interface
APSD - Additive Part of Safety Distance
ARD - Automatic Relevance Determination
ASA - American Society of Anesthesiologists
ASSD - Average Standstill Distance
AV - Autonomous Vehicle
BV - Boundary Value Analysis
C&E - Cause & Effect
CART - Classification and Regression Tree
CCA - Common Cause Analysis
CCF - Common Cause Failure
CCNs - Critical Care Nurses
CCS - Control and Management Systems
CFD - Computational Fluid Dynamics
CHRS - Containment Heat Removal System
CMA - Common Mode Assessment
CPT - Cone Penetration Tests
CSR - Cyclic Stress Ratio
DARPA - Defense Advanced Research Projects Agency
DC - Decision Coverage
DCQA - Distance Close Quarters Situation of Approach
DDT - Dynamic Driving Task
DET - Dynamic Event Tree
DG - Diesel Generators
DM - Damage Measures
DR - Deceleration Rate
DSA - Deterministic Safety Analysis
DTVS - Direct Torus Venting System
DV - Decision Variables
ECDIS - Electronic Chart Display and Information System
EDP - Engineering Demand Parameter
EPC - Equivalence Partition Class
ERO - Emergency Response Organization
FDS - Fire Dynamics Simulator
FFGP - Function Factorization with Gaussian Process
FHA - Functional Hazard Analysis
FHA - Functional Hazard Assessment
FHWA - Federal Highway Administration
FMEA - Failure Mode and Effect Analysis
FOM - Figure of Merit
FTA - Fault Tree Analysis
FWD - Front-Wheel Drive
GP - Gaussian Process
GP - General Purpose
GPFA - Gaussian Process Factor Analysis
GPS - Global Positioning System
GT - Gap Time
GT - Grounded Theory
HMC - Hamiltonian Monte Carlo
HOV - High Occupancy Vehicle
HSOPS - Hospital Survey on Patient Safety Culture
HVs - Human-driven Vehicles
IAEA - International Atomic Energy Agency
IC - Isolation Condenser
ICU - Intensive Care Unit
IDPSA - Integrated Deterministic and Probabilistic Safety Analysis
IDPSA - Integrated Deterministic-Probabilistic Safety Assessment
IID - Independent and Identically Distributed
IIME - Institute for International Medical Education
IM - Intensity Measure
IN - Integrative Nursing
INSAG - International Nuclear Safety Advisory Group
LHS - Latin Hypercube Sampling
LIDAR - Light Detection and Ranging
LOCA - Loss of Coolant Accident
LOSP - Loss of Offsite Power
MBE - Model-Based Engineering
MC - Monte Carlo
MCDET - Monte Carlo Dynamic Event Tree
MCMC - Markov Chain Monte Carlo
MPSD - Multiplicative Part of Safety Distance
MRSA - Methicillin Resistant Staphylococcus Aureus
MVN - Multivariate Normal
NHTSA - National Highway Traffic Safety Administration
NMF - Nonnegative Matrix Factorization
NPP - Nuclear Power Plant
PACT - Pilot Authorisation and Control of Tasks
PBA - Power-Boat Autopilot
PBEE - Performance-Based Earthquake Engineering
PC - Passive Component
PC - Path Coverage
PCA - Principal Component Analysis
PET - Postencroachment Time
PGA - Peak Ground Acceleration
PIRTs - Phenomena Identification and Ranking Tables
PMT - Probable Maximum Tsunami
PSA - Probabilistic Safety Analysis
PSA - Probabilistic Safety Assessment
PSD - Proportion of Stopping Distance
PSHA - Probabilistic Seismic Hazard Assessment
PWM - Pulse Width Modulation
PWR - Pressurized Water Reactor
RCS - Reactor Coolant System
RDVS - Reference Distribution Variable Selection
RTMS - Road Traffic Management System
RWM - Random Walk Metropolis
SAE - Society of Automotive Engineers
SAQ - Safety Attitudes Questionnaire
SBO - Station Blackout
SC - Statement Coverage
SCPT - Seismic Cone Penetration Tests
SCSs - Safety Critical Systems
SE - Squared-Exponential
seismic PSA - Seismic Probabilistic Safety Assessment
SIS - Safety Injection System
SoS - System-of-Systems
SPT - Standard Penetration Tests
SQL - Structured Query Language
SRS - Software Requirement Specification
SRV - Safety Relief Valves
SSAM - Surrogate Safety Assessment Model
SSC - Structures, Systems and Components
TCQA - Time Close Quarters Situation of Approach
THIEF - Thermally Induced Electrical Failure
TTC - Time to Collision
UML - Unified Modeling Language
UQ - Uncertainty Quantification
USNRC - United States Nuclear Regulatory Commission
VAP - Vehicle Actuated Programming
WHO - World Health Organization
WOA - Whale Optimization Algorithm
XTE - Cross-Track Error
PREFACE
The importance of safety and wellbeing at work should be considered from a humane, a social, and an economic point of view. Working in safe conditions is important for every individual, but it is also a success and a source of pride for the organization, the employer, and society as a whole.

The social significance of safety is mostly expressed through the large number of employees who are injured or lose their lives at the workplace, or who suffer from occupational and other work-related diseases, which often burden their families as well as the society that must care for them. The third, economic dimension of safety is seen through the consequences of injuries at work and of occupational and other diseases, and it is expressed in financial indicators that depend on the number and severity of such cases. Injuries at work and occupational diseases are often accompanied by accidents and absence from work, which create costs because the worker is not working: production is delayed, and significant funds are allocated for the treatment of workers, compensation, wages, and other costs and expenses borne by the employer and the social security funds. This means that safety and health at work affect the productivity and economy of a company's business, as well as the quality and competitiveness of its products on the market. The employer therefore has a direct interest in making work as safe and efficient as possible, and any investment in safety measures is a useful investment for the employer.

The level of safety determines the health and working ability of employees and affects productivity within the company, which is reflected at the national level in the amount of national income. At the same time, better protection of health at work and a reduction in the number of injuries at work and of occupational and other diseases will reduce the burden on the social security funds, which is directly reflected in the amount of these funds based on contributions from the national budget. Therefore, the health and working ability of employees depend on the level of safety, and so do the productivity of the company and the working ability of the population at the national level, which ultimately affect the national income and the standard of living of all citizens.

This edition covers different topics from safety science and technology, including: safety-critical systems, safety simulation techniques, safety in transport and vehicles, and safety analysis in medicine (hospitals). Section 1 focuses on safety-critical systems, describing architecture level safety analyses for safety-critical systems, safety assessment of nuclear power plants for liquefaction
consequences, the Fukushima nuclear accident: insights on the safety aspects, an augmented framework for formal analysis of safety critical systems, and concepts of safety critical systems unification approach & security assurance process. Section 2 focuses on safety simulation techniques, describing the marine safety simulation based electronic chart display and information system, improved modelling and assessment of the performance of firefighting means in the frame of a fire PSA, scenario grouping and classification methodology for postprocessing of data generated by integrated deterministic-probabilistic safety analysis, demonstration of emulator-based Bayesian calibration of safety analysis codes: theory and formulation, and microscopic simulation-based high occupancy vehicle lane safety and operation assessment: a case study. Section 3 focuses on safety in transport and vehicles, describing safety of autonomous vehicles, studying the safety impact of autonomous vehicles using simulation-based surrogate safety measures, advanced modeling and simulation of vehicle active aerodynamic safety, and analyzing driving safety using vehicle-water-filled rutting dynamics model and simulation. Section 4 focuses on safety analysis in medicine (hospitals), describing establishing patient safety in intensive care - a grounded theory, analysis of critical incidents during anesthesia in a tertiary hospital, healthcare professional's perception of patient safety measured by the hospital survey on patient safety culture: a systematic review and meta-analysis, and uncertainty of clinical thinking and patient safety.
SECTION 1: SAFETY-CRITICAL SYSTEMS
Chapter 1
Architecture Level Safety Analyses for Safety-Critical Systems
K. S. Kushal, Manju Nanda, and J. Jayanthi
Aerospace Electronics & Systems Division, CSIR-National Aerospace Laboratories, Bangalore, Karnataka, India
ABSTRACT
The dependency of complex embedded Safety-Critical Systems across the avionics and aerospace domains on their underlying software and hardware components has gradually increased with progression in time. Such application domain systems are developed based on a complex integrated architecture, which is modular in nature. Engineering practices assured with system safety standards to manage failure, faulty, and unsafe operational conditions are very much necessary. System safety analyses involve the analysis of the complex software architecture of the system, a major aspect in leading to fatal consequences in the behaviour of Safety-Critical Systems, and provide high reliability and dependability factors during their development. In this paper, we propose an architecture fault modeling and safety analyses approach that will aid in identifying and eliminating design flaws. The formal foundations of the SAE Architecture Analysis & Design Language (AADL) augmented with the Error Model Annex (EMV) are discussed. The fault propagation, failure behaviour, and composite behaviour of the design flaws/failures are considered for the architecture safety analysis. The proposed approach is illustrated and validated by implementing the Speed Control Unit of a Power-Boat Autopilot (PBA) system. The Error Model Annex (EMV) is guided by the pattern of consideration and inclusion of probable failure scenarios and the propagation of fault conditions in the Speed Control Unit of the Power-Boat Autopilot (PBA). This helps in validating the system architecture with the detection of the error event in the model and its impact in the operational environment. This also provides insight into the certification impact that these exceptional conditions pose at various criticality levels and design assurance levels, and its implications in verifying and validating the designs.

Citation: K. S. Kushal, Manju Nanda, J. Jayanthi, “Architecture Level Safety Analyses for Safety-Critical Systems”, International Journal of Aerospace Engineering, vol. 2017, Article ID 6143727, 9 pages, 2017. https://doi.org/10.1155/2017/6143727.
Copyright: © 2017 by Authors. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
INTRODUCTION
Systematic analysis of architectural models built using Model-Based Engineering (MBE) [1] practices, early and at every abstraction level, instills greater confidence in the integration of the system. The creation and analysis of architectural models of a system support prediction and understanding of the system's capabilities and its operational quality attributes. These attributes include performance, reliability, reusability, safety, and security. All along the development lifecycle, faults, their failure modes, and their system-level propagation effects can be predicted. Such issues otherwise remain unnoticed until system integration and testing, which proves to be a costly rework resulting in unaccounted project time, cost, and maintenance. For safety-critical advanced complex embedded systems, the system design and development are in compliance with the safety standards and engineered with practices as specified by MIL-STD882 [2], SAE ARP-4761 [3], and DO-178B/C [4]. The process of developing, managing, and controlling these systems in conformance with the safety practices has an impact on the system requirements, post-system integration, and test. As the system evolves, the availability and reliability of these models must remain consistent, and this poses a great challenge.

These safety practices include various availability and reliability prognoses carried out with the help of system architectural models. Model-Based Engineering approaches for safety analyses address these issues and provide consolidated information about the informal requirements and the architecture model of the system. The safety analyses performed on a system also take into consideration the physical environment of its deployment and functioning. Because of insufficient support from general-purpose formal languages, the trend is to make use of architecture description languages such as the Architecture Analysis & Design Language (AADL), a Society of Automotive Engineers (SAE) standard. AADL, a high-level architecture description language, provides a platform for the overall integration of the recommended system components via formal semantics and syntax. This component-based modeling language is extended with the introduction of sublanguages as Annexes. AADL is packaged with multiple Annex sublanguages, such as the Error Model Annex (EAnnex) and the Behaviour Annex (BAnnex), as standards. The EAnnex standard is suitably augmented with safety semantics and an ontology of fault propagation, supporting error annotations on the architectural models [5]. This enables the component error models and their interactions to be considered in the context of the system architecture modeled using AADL. This paper presents our contributions as a case study implementation (the Speed Control Unit of a Power-Boat Autopilot) of the standard approach, illustrating its application. The paper is organized as follows. Firstly, we summarize the concepts of the Architecture Analysis & Design Language (SAE AADL) and the Error Model Annex (EAnnex/EMV2). Next we provide an illustration of the architecture fault model specification for the Speed Control Unit of a Power-Boat Autopilot (PBA). We also discuss the various safety analysis methods involved in the MIL-STD882 safety practice. Finally, we conclude the paper with the assessment of these safety analyses based on the architecture fault models.

ERROR MODEL ANNEX IN ARCHITECTURE ANALYSIS & DESIGN LANGUAGE (AADL)
The Architecture Analysis & Design Language (AADL), an SAE International standard, is a unified framework providing extensive formal foundations for Model-Based Engineering (MBE) practices. These practices extend throughout system design, integration, and assurance with safety standards. AADL distinctly represents a system's hardware and software components and their interactions via interfaces. Critical real-time computational factors such as performance, dependability, safety, security, and data integrity can be rigorously analysed with AADL. AADL also integrates custom analyses and specification techniques into the engineering process. This allows the development and analysis of a single, unified system architectural model. AADL can be extended using specialized language constructs, referred to as Annex languages, that are attached to the components of the architectural model and reinforce them with additional characteristics and requirements. The architectural model components are annotated with these properties and Annex language clauses for functional and nonfunctional analyses. The Error Model Annex (EMV), an extension of AADL, aids in describing failure conditions and fault propagations as error events, propagations, occurrences, and their distribution properties. With the integration of these constructs into the AADL model(s), as shown in Figure 1 [6], the existing components are extended into models amenable to safety evaluation and analyses. This can be done with the help of the algorithms in OSATE or by using other third-party tools.
Figure 1: AADL ecosystem.
(i) Error Annex. The Error Model Annex (EAnnex) is a sublanguage of AADL. This sublanguage extension supports the analyses of runtime architectures. The EAnnex [7, 8] annotates the hardware and software component architectures with error states, error events, error transitions, and error propagations that may affect the components interacting with each other. In an Error Model Annex subclause, conditions can be specified under which errors are propagated through designated component ports. The Error Model Annex basically helps in defining the fault models, hazards, fault propagation, failure modes and effects, as well as in specifying compositional fault behaviour. The AADL Error Model Annex supports architectural fault modeling at three levels of abstraction [9]:
(1) Modeling of faults in systems and their implications on other dependent components of the physical environment of operation, through propagation of these faults (including hazard identification and fault impact analysis)
(2) Modeling of faults occurring in a component of the system and analysing its behaviour across the various modes termed failure modes, their effects on other components, and the related propagations, including the recovery strategies involved
(3) Compositional abstraction of the system error behaviour in terms of its subsystems

The Error Model Annex (EMV2) places its major focus on a standard set of error types and error propagations, defined by AADL as standard syntactic constructs through the introduction of Annex libraries. These Annex libraries provide an overview of the formally specified error propagation behaviours [10, 11]. Some of the common error types are as follows [9]:
(1) Commission and Omission Errors. They represent loss of a message/command and failure to provide readings from a component.
(2) Timing Errors. They represent arrival rate, service too early or late, and unsynchronized rate.
(3) Value Errors. They represent individual service item errors or errors in a sequence of values.
(4) Replication Errors. They represent replicates of states or services being communicated.
(5) Concurrency Errors. They represent accessing shared logical or physical resources.
Along with these, the error model types can be referenced in the Error Model Annex subclause. The constructs of EMV2 follow a syntax and style similar to those defined for AADL. An exception is that any set of textual language constructs can be included within an Annex, including the Object Constraint Language (OCL) [12] or a temporal logic notation [13].
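The Box listings referenced later in this chapter are reproduced in the original article as images and are not available in this text. To keep the discussion concrete, the sketches inserted below illustrate what such EMV2 declarations typically look like; they are illustrative reconstructions, not the authors' actual model. This first sketch is a small, hypothetical EMV2 library (here called PBA_Errors) declaring the two error types NoValue and NoService used in the case study, together with a simple two-state error behaviour using the states Operational and Failed:

	package PBA_Errors
	public
		annex EMV2 {**
			-- user-defined error types used by the PBA case study
			error types
				NoValue : type;    -- an expected value is missing on a port
				NoService : type;  -- the component delivers no service at all
			end types;

			-- reusable two-state error behaviour referenced by the component models
			error behavior FailRecover
				events
					fail : error event;       -- component stops delivering correct service
					recover : recover event;  -- component returns to nominal operation
				states
					Operational : initial state;
					Failed : state;
				transitions
					t_fail : Operational -[ fail ]-> Failed;
					t_recover : Failed -[ recover ]-> Operational;
			end behavior;
		**};
	end PBA_Errors;

Declaring the types and the state machine once in a library and reusing them across components keeps the error annotations of the individual devices consistent, which is what the consistency checks discussed later rely on.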
IMPLEMENTATION OF PROPOSED RESEARCH
In this section we exhibit the architecture fault modeling in AADL, extended with EMV2, at three levels of abstraction with a suitable case study, the Speed Control Unit of a Power-Boat Autopilot (PBA). This unit is a simplified speed control model, including a pilot interface unit for input of relevant Power-Boat Autopilot information, a speed sensor that sends speed data to the PBA, the PBA controller, a throttle actuator that responds to the commands specified by the PBA controller, and a display unit. The type definitions defining the components, the component names, their runtime category, and their interfaces are identified and defined. The speed sensor, pilot interface, throttle actuator, and display unit are modeled as devices, while the PBA control functions are represented as a process, as shown in Figure 2. With all these we perform the safety analyses with the specification of the source of error and its propagation across the system and its components. This is carried out by defining the error states and their corresponding compositional fault behaviour, followed by the expansion of the fault logic with respect to the error behaviour of each component of the system and its response to failures.
Figure 2: PBA Speed Control Unit without error specification.
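Figure 2 itself is not reproduced here. As an orientation aid only, the textual AADL counterpart of such a composition, with the sensor, actuator, interface, and display modeled as devices and the PBA control function as a process, might look roughly like the following sketch; all identifiers are illustrative and not taken from the original model:

	package PBA_Speed_Control
	public
		device speed_sensor
			features
				speed_out : out data port;    -- measured speed to the controller
		end speed_sensor;

		device throttle_actuator
			features
				cmd_in : in data port;        -- throttle command from the controller
		end throttle_actuator;

		device display_unit
			features
				status : in data port;        -- status information shown to the pilot
		end display_unit;

		device interface_unit
			features
				set_speed : out data port;    -- pilot speed setting
		end interface_unit;

		process speed_control
			features
				speed_in : in data port;
				set_speed_in : in data port;
				cmd_out : out data port;
				status_out : out data port;
		end speed_control;

		system PBA
		end PBA;

		system implementation PBA.impl
			subcomponents
				sensor : device speed_sensor;
				throttle : device throttle_actuator;
				display_unit_inter : device display_unit;
				pilot_interface : device interface_unit;
				control : process speed_control;
			connections
				c1 : port sensor.speed_out -> control.speed_in;
				c2 : port pilot_interface.set_speed -> control.set_speed_in;
				c3 : port control.cmd_out -> throttle.cmd_in;
				c4 : port control.status_out -> display_unit_inter.status;
		end PBA.impl;
	end PBA_Speed_Control;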
(ii) Specification of Error Source and Propagation. The source of errors and their propagation with respect to each component of the PBA Speed Control Unit are defined, as shown in Box 1. In the case study, the flow on_flow_src declared for the pilot interface unit device is the source of the fault. The component error propagations are also defined with the errors NoValue and NoService.
Box 1: Error source and propagation.
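Box 1 is reproduced in the source as an image and its exact contents are not recoverable here. The following hedged sketch shows the general shape of such an error-source declaration, attaching an EMV2 subclause to the interface_unit device type from the earlier sketch; only the flow name on_flow_src and the error types NoValue and NoService come from the chapter, while the port name and the PBA_Errors library are illustrative assumptions:

	device interface_unit
		features
			set_speed : out data port;
		annex EMV2 {**
			use types PBA_Errors;  -- NoValue and NoService as declared in the illustrative library
			error propagations
				set_speed : out propagation {NoValue, NoService};
			flows
				-- the pilot interface unit is the source of the fault
				on_flow_src : error source set_speed {NoValue, NoService};
			end propagations;
		**};
	end interface_unit;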
The component error behaviour is also defined for the system components and correlates with the faults that can occur. In this system, the NoValue error caused by a failure passes from the pilot interface unit to the throttle actuator, and the same is conveyed to the status feature of the display unit. In addition to this fault, another error propagation, NoService, occurs; this fault results in the Failed state of the system. Here we can observe that the specification is automatically inherited by the instances of each component and their interacting neighbours. The error propagation paths inherent in such AADL system architecture models form the basis needed for the representation of Failure Mode and Effect Analysis (FMEA) and Common Cause Analysis (CCA).

(iii) Composite Error Behaviour. The Error Model Annex library is associated with the state machine defined for the system component model using the declaration use behaviour, as shown in Box 2. This maps the error state behaviour of the subcomponents (both hardware and software components) onto the error states of the system itself. In this case study of the Speed Control Unit of the PBA, we have two error states defined for each component, that is, Failure and Failed. But here we have considered only the Failed state as the subcomponent error state and the state Operational as the recovery state. We can see in this example that the system error behaviour is mapped from the subcomponent behaviours defined as [throttle.Failed and display_unit_inter.Failed] -> Failed.
Box 2: Composite error behaviour.
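Box 2 is likewise reproduced as an image in the source. A minimal sketch of a composite error behaviour declaration of the kind described above, attached to the system implementation and reusing the illustrative PBA_Errors library, is shown next; only the two composite state expressions are taken from the chapter:

	system implementation PBA.impl
		subcomponents
			throttle : device throttle_actuator;
			display_unit_inter : device display_unit;
			-- remaining subcomponents as in the earlier sketch
		annex EMV2 {**
			use behavior PBA_Errors::FailRecover;
			composite error behavior
				states
					-- the system is Failed only when both the actuator and the display have failed
					[throttle.Failed and display_unit_inter.Failed]-> Failed;
					-- loss of the display unit alone is tolerated
					[display_unit_inter.Failed]-> Operational;
			end composite;
		**};
	end PBA.impl;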
Following the composite expression above, we assume that the system fails only when both devices, that is, the throttle actuator and the display unit, are in the Failed state, while it recovers from the Failed state and remains Operational even if the display unit alone fails, since the speed control unit mainly depends on the throttle command for maintaining and controlling the speed of the PBA: [display_unit_inter.Failed] -> Operational. This provides scope for redundancy management as a fault management capability of the system, as well as for extensive reliability and availability analyses through the various hierarchical levels of the system architecture. This methodology is not advisable for Markov chains, as such models tend to grow quickly with the dependencies among the various components of a system as the number of components increases.

(iv) Component Error Behaviour. The modeler has the flexibility of analysing the possible error behaviour that corresponds to the individual components of a system. This also provides insight into component internal failures and the divergent factors that may result in a failure mode, in turn having an impact on other components. The case study in this paper specifies that there might be multiple failure modes, such as Failure and Failed. In the Failed mode the entire component is assumed to be out of service, while in the Failure mode the component is working but produces erroneous outputs/output states, as shown in Box 3.
Box 3: Component error behaviour.
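Box 3 is also an image in the source. A hedged sketch of a component error behaviour subclause in the spirit described above is given below for the pilot interface device; it covers only the Failed state, whereas the chapter's model additionally distinguishes the intermediate Failure state, and all identifiers other than Failed, Operational, NoValue, and NoService are illustrative:

	device implementation interface_unit.impl
		annex EMV2 {**
			use types PBA_Errors;
			use behavior PBA_Errors::FailRecover;
			component error behavior
				propagations
					-- while the device is in the Failed state it emits NoService downstream;
					-- in an intermediate erroneous state it would emit NoValue instead
					p_failed : Failed -[ ]-> set_speed{NoService};
			end component;
		**};
	end interface_unit.impl;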
The failure modes are represented using error states, together with the more closely coupled error behaviour of the subsystem/component. The consistency checker associated with the Error Model Annex abstracts the propagation specification to introduce unique and distinctive error types, and the modeling tool associated with the Error Model Annex validates the organization of the component error behaviour along with the propagation specification specific to each component in the system architecture. The actual system architecture must include the Safety System component(s) that regulate fault management and aid in the safety analyses.
SAFETY ANALYSES
Safety analyses involve various analytical processes such as consistency checks, Fault Tree Analysis (FTA), Failure Modes and Effect Analysis (FMEA), Functional Hazard Assessment (FHA), and Common Mode Assessment (CMA) of the architectural model. The architecture model and its associated fault model are designed and developed in the Open Source AADL Tool Environment (OSATE) [14], an Eclipse-based AADL modeling framework. A safety analysis tool such as OpenFTA [15], an open-source tool for FTA integrated into the Eclipse environment, is also needed to assist in the generation of the FTA and its relevant documents, while the CMA, FMEA, and FHA reports are generated as a built-in feature of OSATE.

(i) Consistency Checks. The consistency checks at the system integration level scan for consistency in the functionality of, and the interfaces between, the various models/components, as shown in the "Consistency Report" below. This strengthens the virtual integration and analysis of the architecture model of the system. The consistency of the various models deals with their integration feasibility, while the consistency of the internal components in a model concentrates on the propagation capabilities, redundancies, and so on. With the Error Model Annex, consistency across the specified error models checks the component error behaviour against the composite error behaviour of the system. It helps in confirming the correctness of the error states of the components specified in the architectural model. This may be further supported by the inclusion of the Behaviour Annex (BAnnex) [16] along with the Error Model Annex. The consistency report generated by the OSATE plugin for the case study is as follows.

Consistency Report
Warning! Complete_PBA_speed_control_ab_Instance: C13: component Complete_PBA_speed_control_ab_Instance does not define occurrence for and state Failed
Complete_PBA_speed_control_ab_Instance: C13: component Complete_PBA_speed_control_ab_Instance has consistent probability values for state Operational
Warning! Complete_PBA_speed_control_ab_Instance: C13: component Complete_PBA_speed_control_ab_Instance does not define occurrence for and state Failed
Warning! Complete_PBA_speed_control_ab_Instance: C13: component Complete_PBA_speed_control_ab_Instance does not define occurrence for and state Failed
Warning! Complete_PBA_speed_control_ab_Instance: C13: component Complete_PBA_speed_control_ab_Instance does not define occurrence for and state Failed
Complete_PBA_speed_control_ab_Instance: C13: component Complete_PBA_speed_control_ab_Instance has consistent probability values for state Failed

(ii) Fault Tree Analysis (FTA). FTA is a widely used safety and reliability analysis [17] in the aerospace, medical electronics, and industrial automation industries [18]. In this analysis the major focus is on a top-level event (minimal cut set) formed from a set of combinations of basic events (faults). It provides a hierarchical representation of the errors of the system (the top-level event) built from the basic events, which relate to the components as specified in the component error behaviour, in the form of a tree. OSATE depicts this composite error behaviour of the system, derived from the underlying component error behaviours, as a fault tree that represents a specific error state of the system. The fault tree is produced by OSATE as two files: the database of primary events (.ped) causing the top-level error event, shown in Figure 4, and the Fault Tree Analysis file (.fta). These files are viewed using OpenFTA, as shown in Figure 3.
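The warnings in the consistency report above point to missing occurrence values. In EMV2 these are attached to error events or states through the EMV2::OccurrenceDistribution property; a hedged fragment of the kind that would supply such values is shown below, with a purely illustrative failure rate and the hypothetical PBA_Errors library introduced earlier:

	device implementation speed_sensor.impl
		annex EMV2 {**
			use behavior PBA_Errors::FailRecover;
			properties
				-- illustrative rate only; real values come from component reliability data
				EMV2::OccurrenceDistribution => [ ProbabilityValue => 1.0e-5 ; Distribution => Poisson; ] applies to fail;
		**};
	end speed_sensor.impl;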
Figure 3: FTA view for PBA Speed Control Unit in OpenFTA.
Figure 4: PED view in OpenFTA.
The FTA is in conformance with the MIL-STD-882 standard, and the generated fault tree is validated, as shown in Figure 5.
Figure 5: FTA validation report.
The artifacts related to FTA as specified by MIL-STD-882 deal with error composites and error events. FTA is a top-down approach to analysis. The minimal cut set is evaluated in the OpenFTA tool, as shown in Figure 6.
Figure 6: Minimal cut set analysis report from OpenFTA.
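To make the quantitative step behind Figures 3-6 concrete, the following minimal Python sketch evaluates a top-level event probability from minimal cut sets using the rare-event approximation. The basic events, their probabilities, and the cut sets are hypothetical placeholders; in the actual workflow they are derived by OSATE/OpenFTA from the AADL error model of the Speed Control Unit.

```python
# Minimal sketch: quantifying a fault tree top event from its minimal cut sets.
# Basic events and probabilities below are hypothetical placeholders; OSATE and
# OpenFTA derive the real cut sets from the AADL error model of the system.

basic_events = {                      # probability of each basic event
    "speed_sensor_failed": 1.0e-4,
    "throttle_actuator_failed": 5.0e-5,
    "interface_unit_failed": 2.0e-5,
    "display_unit_failed": 1.0e-3,
}

minimal_cut_sets = [                  # smallest event combinations causing the top event
    {"speed_sensor_failed"},
    {"throttle_actuator_failed"},
    {"interface_unit_failed", "display_unit_failed"},
]

def cut_set_probability(cut_set):
    """Probability that every basic event in one minimal cut set occurs."""
    p = 1.0
    for event in cut_set:
        p *= basic_events[event]
    return p

# Rare-event (first-order) approximation: sum of the cut-set probabilities.
top_event_probability = sum(cut_set_probability(cs) for cs in minimal_cut_sets)

for cs in minimal_cut_sets:
    print(sorted(cs), cut_set_probability(cs))
print("Top event probability (rare-event approximation):", top_event_probability)
```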
(iii) Failure Modes and Effects Analysis (FMEA) and Functional Hazard Assessment (FHA). FMEA is the systematic, bottom-up analysis of the failure modes associated with the system and the determination of their effects as they propagate up the hierarchy. With respect to the errors of the system, FMEA provides information about the deficient components/models and their related effects. It also provides a sufficient overview of the failing component, such as its phase of failure, severity/impact, and so on. FMEA is based on artifacts that include the error propagation paths (error source, error path, and error sink). FHA provides the list of possible errors upon synthesis of the architectural model of the system. The major artifacts from FHA comprise the source of the error and the error events, as shown in Table 1. The FHA details are produced by the OSATE tool after the model is instantiated and the relevant error information is extracted from the architecture models. The report is generated as an Excel spreadsheet specifying the error event details.

Table 1: FHA report

Component | Error | Hazard description | Functional failure | Operational phase | Severity | Likelihood | Comment
speed_sensor | "Failure on Failure" | "Faulty speed values" | "Loss of sensor readings" | "Acquire" | Critical | Probable | "Speed values are read as faulty"
speed_sensor | "Failed on Failed" | "Failure of sensor" | "Sensor failed" | "Acquire" | Catastrophic | Frequent | "Is a major hazard. Pilot cannot estimate the speed due to sensor failure"
throttle | "Failure on Failure" | "No command inputs due to actuator failure" | "Faulty or no commands" | "Output" | Critical | Remote | "Becomes a major hazard if there are command inputs to the actuator"
throttle | "Failed on Failed" | "Faulty actuator" | "Actuator in failure state" | "Output" | Catastrophic | Frequent | "Is a major hazard. Pilot cannot control the PowerBoat with proper throttle"
interface_unit | "Failure on Failure" | "Faulty or no input values and commands" | "Loss of actuator input values" | "Input" | Critical | Probable | "Becomes a major hazard if there happens to be faulty input values"
interface_unit | "Failed on Failed" | "Failure of actuator" | "Actuator in failure state" | "Input" | Catastrophic | Frequent | "Is a major hazard. Pilot cannot set proper speed value or input commands"
display_unit_inter | "Failure on Failure" | "Faulty values or commands on display" | "Improper display due to faulty values or commands" | "Output" | Marginal | Remote | "Remote possibility with display showing faulty values or commands"
display_unit_inter | "Failed on Failed" | "Display unit not working properly" | "Faulty display unit" | "Output" | Marginal | Remote | "Not a major hazard"
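To illustrate how such an FHA report can be consumed downstream, the short Python sketch below ranks FHA-style entries by a severity-likelihood index; the category ordering follows the usual MIL-STD-882 convention, two of the records are copied from Table 1, and the screening threshold is purely illustrative rather than a feature of the OSATE tooling.

```python
# Minimal sketch: screening FHA entries for the most safety-significant hazards.
# Severity/likelihood ordering follows the usual MIL-STD-882 convention; the
# numeric threshold is illustrative, not a prescribed acceptance criterion.

SEVERITY_RANK = {"Catastrophic": 4, "Critical": 3, "Marginal": 2, "Negligible": 1}
LIKELIHOOD_RANK = {"Frequent": 5, "Probable": 4, "Occasional": 3, "Remote": 2, "Improbable": 1}

fha_entries = [  # two rows reproduced from Table 1
    {"component": "speed_sensor", "error": "Failed on Failed",
     "severity": "Catastrophic", "likelihood": "Frequent"},
    {"component": "display_unit_inter", "error": "Failed on Failed",
     "severity": "Marginal", "likelihood": "Remote"},
]

def risk_index(entry):
    """Combined index: higher means more safety-significant."""
    return SEVERITY_RANK[entry["severity"]] * LIKELIHOOD_RANK[entry["likelihood"]]

# Flag entries above the illustrative threshold for mandatory mitigation.
for entry in sorted(fha_entries, key=risk_index, reverse=True):
    flag = "MITIGATE" if risk_index(entry) >= 12 else "monitor"
    print(f'{entry["component"]:20s} {entry["error"]:16s} index={risk_index(entry):2d} -> {flag}')
```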
CONCLUSION
In this paper, we have proposed a novel approach to the safety analysis of safety-critical systems using AADL and the related Error Model Annex. Despite the comprehensive activities that safety analyses involve, such approaches have proved to be very necessary. This is demonstrated through the implementation of a suitable case study, the Speed Control Unit of a Power-Boat Autopilot. Analysis techniques such as Fault Tree Analysis (FTA), Functional Hazard Assessment (FHA), and consistency checking of the model, together with the qualitative and quantitative reliability analyses performed as part of these techniques, can assess the system hazards and faults. The assessment covers the generation of suitable reports justifying the analyses. These techniques support the early identification of potential problems and the estimation of their probability of occurrence. They also provide a perspective from which to explore additional architectural properties. Reuse and analysis of the evolved models, extended where needed with limited effort, can be achieved with this approach. The overall effect is greater confidence in the abstracted stages of development and in the safety analyses of these architectural models of the system. Furthermore, analysing the system against its safety-critical requirements, with exceptional conditions anticipated, expedites the identification of hazards during the development of the safety system architecture models, which
will have an impact on certifying them. This also avoids unnecessary certification costs by clarifying, during system engineering, the impact of changes and of exceptional causes.
ACKNOWLEDGMENTS The authors thank the Director of CSIR-NAL, Bengaluru, for supporting this work.
REFERENCES
1. P. H. Feiler and D. P. Gluch, Model-Based Engineering with AADL: An Introduction to the SAE Architecture Analysis & Design Language, Pearson Education-Addison Wesley, Upper Saddle River, NJ, USA, 2012. 2. MIL-STD882(E): Department of Defence Standard Practice, System Safety, May 2012. 3. SAE International, “Guidelines and methods for conducting the safety assessment process on civil airborne systems and equipments,” Tech. Rep. ARP-4761, 1996, http://standards.sae.org/arp4761/. 4. RTCA, “Software Considerations in Airborne Systems and Equipment Certification,” December 2011, http://www.rtca.org/. 5. A. Joshi, S. Vestal, and P. Binns, “Automatic generation of static fault trees from AADL models,” in Proceedings of the IEEE/IFIP Conference on Dependable Systems and Networks’ Workshop on Dependable Systems, Edinburgh, UK, 2007. 6. J. Delange, Safety Evaluation with AADLv2, Software Engineering Institute, Carnegie Mellon University, 2013. 7. B. Hall, K. R. Driscoll, and G. Madl, Investigating System Dependability Modeling Using AADL, NASA/CR-2013-217961, Honeywell International, Golden Valley, Minn, USA, 2013. 8. Q. Li, Z. Gao, and X. Luo, “Error modeling and reliability analysis of airborne distributed software based on AADL,” Advanced Science Letters, vol. 7, pp. 421–425, 2012. 9. J. Delange and P. Feiler, “Architecture fault modeling with the AADL error-model annex,” in Proceedings of the 40th Euromicro Conference on Software Engineering and Advanced Applications (SEAA ‘14), pp. 361–368, Verona, Italy, August 2014. 10. D. Powell, “Failure mode assumptions and assumption coverage,” in Proceedings of the 22nd International Symposium on Fault-Tolerant Computing, FTCS 22, pp. 386–395, IEEEXplore, Boston, Mass, USA, July 1992. 11. C. J. Walter and N. Suri, “The customizable fault/error model for dependable distributed systems,” Theoretical Computer Science, vol. 290, no. 2, pp. 1223–1251, 2003. 12. J. Cabot and M. Gogolla, Object Constraint Language (OCL): A Definitive Guide, Springer, Berlin, Germany, 2010.
13. M. Benammar and F. Belala, “How to make AADL specification more precise,” International Journal of Computer Applications, vol. 8, no. 10, pp. 16–23, 2010. 14. OSATE, https://wiki.sei.cmu.edu/aadl/index.php/Osate_2. 15. OpenFTA, http://www.openfta.com/. 16. SAE International, Annex Behavior Language Compliance & Application Program Interface, SAE International, Warrendale, Pa, USA, 2007. 17. C. Li, H. Yang, and H. Liu, “An approach to modelling and analysing reliability of breeze/ADL-based software architecture,” International Journal of Automation and Computing, In press. 18. J. Xiang, K. Yanoo, Y. Maeno, and K. Tadano, “Automatic synthesis of static fault trees from system models,” in Proceedings of the 5th International Conference on Secure Software Integration and Reliability Improvement (SSIRI ‘11), pp. 127–136, June 2011.
Chapter 2
Safety Assessment of Nuclear Power Plants for Liquefaction Consequences
Tamás János Katona1, Zoltán Bán2, Erzsébet Győri3, László Tóth3, and András Mahler2 University of Pécs, Boszorkány Utca 2, Pécs 7624, Hungary
1
Budapest University of Technology and Economics, P.O. Box 91, Budapest 1521, Hungary
2
Seismological Observatory, MTA CSFK GGI, Meredek Utca 18, Budapest 1112, Hungary
3
ABSTRACT
In the case of some nuclear power plants constructed at soft soil sites, liquefaction should be analysed as a beyond design basis hazard. The aims of the analysis are to define the post-event condition of the plant, to identify the plant vulnerabilities, and to identify the necessary measures for accident management. In the paper, the methodology of the analysis of liquefaction effects for nuclear power plants is outlined. The procedure includes identification of the scope of the safety analysis and the acceptable
Citation: Tamás János Katona, Zoltán Bán, Erzsébet Győri, László Tóth, András Mahler, “Safety Assessment of Nuclear Power Plants for Liquefaction Consequences”, Science and Technology of Nuclear Installations, vol. 2015, Article ID 727291, 11 pages, 2015. https://doi.org/10.1155/2015/727291. Copyright: © 2015 by Authors. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
limit cases for plant structures having different roles from the accident management point of view. Considerations are made for identification of the dominating effects of liquefaction. The possibility of decoupling the analysis of liquefaction effects from the analysis of vibratory ground motion is discussed. It is shown in the paper that the practicable empirical methods for defining liquefaction susceptibility provide rather contradictory results. Selection of the method for assessing the soil behaviour that affects the integrity of structures therefore requires specific considerations. The case of the nuclear power plant at Paks, Hungary, is used as an example to demonstrate the practical importance of the presented results and considerations.
INTRODUCTION Proper understanding and assessment of safety of nuclear power plants (NPPs) with respect to external hazards became very important after 11 March 2011. Experience of Niigata-ken Chuetsu-oki earthquake of 16 July 2007, the Mineral (Virginia) earthquake of 2011, and also the response of Japan nuclear power plants to the Great Tohoku earthquake demonstrated that the design practice ensures the safety of nuclear power plants with respect to the vibratory ground motion. However, secondary effects of earthquakes that have not been properly considered in the design can heavily damage the plants, as in the case of the Fukushima Dai-ichi plant, where the tsunami led to fatal consequences after the plant survived the beyond design base ground vibratory motions. Soil liquefaction can also be one of those secondary effects of earthquakes that should be accounted for at soft soil sites. Usually the liquefaction is not considered as a design base hazard. If the soil at the site is susceptible to the liquefaction, soil improvement and appropriate foundation design have to be applied for excluding the potential hazard. However, at some NPP sites, soil liquefaction has to be considered as a beyond design basis event, especially if the safety factor to liquefaction is rather low in case of design base earthquake. Paks NPP is the only nuclear power plant in Hungary providing more than 40% of domestic electricity production. Originally, the plant was not designed for earthquake since the site seismic hazard was underestimated and the former Soviet design requirements did not require specific design measures for this case. In the early nineties, the site seismic hazard has been reevaluated using comprehensive probabilistic seismic hazard assessment
(PSHA) methodology [1]; a new seismic design basis has been defined with a peak ground acceleration of 0.25 g at the 10^−4/a nonexceedance level. Extensive safety upgrading measures have been implemented to comply with the new design basis requirements [2]. The Paks site soil conditions are shown in Table 1. The groundwater level is about 8.5 m below grade and varies with the seasonal variation of the Danube water level. Probabilistic liquefaction hazard analysis performed in the early nineties [3] provided an annual probability of liquefaction of less than 10^−4/a. Consequently, liquefaction was not considered as a design basis hazard, since the 10^−4/a annual frequency is the criterion for accounting for an external hazard in the design basis.

Table 1: Soil description

Depth, m | Stratum
0–2 | Fill, variable loose sand and silt and soft clay
2–8 | Quaternary (Holocene) fluvial-aeolian strata with lenses from floods, very fine silty sand; the average thickness of lenses 1.0–1.5 m
8–15 | Quaternary fluvial sand and gravel: medium to dense silty sand becoming gravelly sand at depth
15–27 | Quaternary fluvial gravel
27–53 | Pannonian, greenish grey to 45 m then becoming yellowish brown, weakly bedded, and very silty fine sand with bands of sandy silt
53–57 | Pannonian, ochreous colored laminated and ripple-bedded micaceous sandy silt becoming very silty clay between 55 and 56 m
57–67 | Pannonian, yellowish brown weakly bedded silty fine sand
67–86 | Pannonian, ochreous colored alternating bands of laminated and ripple bedded silty fine sand, sandy silt, and very silty clay
86–100 | Pannonian, yellowish brown laminated micaceous silty fine sand with some sandy silt, below 94 m cross bedded silty fine to medium micaceous sand
After implementing the seismic upgrading measures, a comprehensive seismic probabilistic safety assessment (seismic PSA) has also been performed. The seismic PSA demonstrates significant margins with respect to the earthquake vibratory effect. The seismic PSA also accounted for the possibility of liquefaction in a rather simplified way, practically assuming a “cliff-edge effect” when the soil liquefies. The seismic PSA has shown that liquefaction could be one of the essential contributors to core damage. This finding motivated the investigation of the liquefaction hazard and of the safety consequences of liquefaction. These efforts received high attention after the Fukushima accident. Recently, the liquefaction hazard as well as the plant response to liquefaction has been extensively investigated for the definition of safety margins and the development of severe accident management procedures and measures. These activities are part of the national programme developed in the frame of the focused safety assessment (stress test) initiated by the European Union [4]. Conclusiveness of the beyond design basis safety analysis for liquefaction depends on proper consideration of the epistemic and aleatory uncertainties related to the assessment of the ultimate behaviour of plant structures under low probability, complex effects. Therefore, the beyond design basis safety assessment for liquefaction is a complex procedure that integrates deterministic as well as probabilistic elements. The procedure consists of the following tasks:
(i) selection/development of the method for analysis of the plant response to liquefaction, which includes identification of accident scenarios and selection of methods for analysis of safety relevant systems, structures, and components;
(ii) characterisation of the liquefaction hazard, which includes (a) probabilistic seismic hazard assessment, (b) investigation of the soil properties, (c) selection of the appropriate methods for characterisation of the liquefaction hazard, and (d) calculation of the liquefaction effects relevant for evaluation of the plant response;
(iii) performing the safety analysis, identification of the plant vulnerabilities, and definition of accident mitigation measures.
Although the methods for assessment of the liquefaction hazard as well as the deterministic and probabilistic methodologies for evaluation of the liquefaction consequences have been widely studied, there is no experience or precedent for performing a full scope safety analysis of an operating nuclear power plant for liquefaction. In the paper, the issues of the selection of the method for the beyond design basis safety analysis, the practical problems of the selection of the method for assessment of the liquefaction hazard, and the calculation of the relevant liquefaction effects are presented and discussed.
SELECTION OF SAFETY ANALYSIS METHODOLOGY
Recently, several methodical documents have been published on the development of severe accident management procedures, for example, [5]. According to these, plant vulnerabilities in the case of accidents beyond the design basis should be identified; knowledge on the behaviour of the plant during a beyond design basis accident should be obtained; and the phenomena that may occur, together with their expected timing and severity, should be identified. However, a unified methodology does not exist for the analysis of beyond design basis accidents caused by external events in combination with secondary effects, for example, earthquake and earthquake-induced soil liquefaction. Essential progress has, in contrast, been achieved in the analysis of earthquake plus tsunami in the frame of the research programme of the International Atomic Energy Agency International Seismic Safety Centre [6]. The difficulties relate to the assessment of the superposed effects of the ground vibratory motion and the liquefaction; that is, the liquefaction affects the plant twofold:
(i) due to liquefaction, the site response becomes strongly nonlinear; that is, the liquefaction promptly affects the ground vibratory motion;
(ii) liquefaction causes soil settlement, lateral spread, and so forth, which can damage the plant structures.
These consequences of liquefaction are caused by several mechanisms of soil deformation that depend on the soil conditions, the earthquake parameters, and the parameters of the structure in a very complex manner. This is shown in Table 2 (see also [7]).

Table 2: Relation between the mechanisms of structural displacement and the earthquake parameters as well as the parameters of the structure

Increase in parameter | Localized volumetric strains due to partial drainage | Sedimentation due to excess pore pressure dissipation | Consolidation after liquefaction | Partial bearing failure due to strength loss in foundation soil | SSI-induced building ratcheting due to cyclic foundation loading
Peak ground acceleration (PGA) | ↑↑ | ↑↑ | ↑↑ | ↑↑ | ↑↑
Liquefiable layer rel. density (Dr) | ↓↓ | ↓↓ | ↓ | ↓↓ | ↑↓
Liquefiable layer thickness | ↑ | ↑ | ↑↑ | ↑ | ↑↓
Foundation width | ↓ | ↑↓ | ↑ | ↓ | ↓↓
Static shear stress ratio | ↓ | ↓ | ↓ | ↑↓ | —
Height/width ratio of structure | ↑ | ↑ | ↑ | — | ↑↑
Building weight | ↑↓ | ↑↓ | ↑↓ | ↑↓ | ↑↑
3D drainage | ↑↑ | ↓ | ↑ | ↓ | ↑↓
In Table 2, an up arrow indicates an increasing effect with increasing parameter value, while a down arrow indicates a decreasing effect. Doubled arrows correspond to a strong effect. Arrows in both directions indicate that increasing the parameter can have opposing effects depending on the conditions. For the illustration of the plant response, let us take the simplified plant event tree of the earthquake and subsequent liquefaction event shown in Figure 1. Loss of offsite power (LOSP) is assumed to be the initiating event, caused by the earthquake vibratory motion. The reactor shutdown system (denoted by A) shall ensure subcriticality. The emergency power system (B) and the emergency core cooling system (C) are needed to avoid core damage. The success path after the earthquake is as follows: the reactor is subcritical, and the emergency power supply and the emergency core cooling are ensured.
Figure 1: Simplified plant event-tree for earthquake and liquefaction.
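The event tree of Figure 1 can be read quantitatively as a simple product of branch probabilities. The Python sketch below shows this mechanics with purely hypothetical numbers for the initiating-event frequency and the conditional failure probabilities of systems A, B, and C; a real seismic PSA derives these from plant-specific hazard curves and fragilities.

```python
# Minimal sketch: quantifying the simplified event tree of Figure 1.
# All frequencies and failure probabilities are hypothetical placeholders; a real
# seismic/liquefaction PSA derives them from hazard curves and fragility functions.

losp_frequency = 1.0e-4          # /a, initiating event: loss of offsite power (assumed)

p_fail = {
    "A_reactor_shutdown": 1.0e-5,        # conditional failure probabilities (assumed)
    "B_emergency_power": 2.0e-3,         # may be degraded by settlement-damaged cables
    "C_emergency_core_cooling": 1.0e-3,  # may be degraded by damaged piping to heat sink
}

# The success path requires A and B and C to all work; core damage otherwise.
p_success_path = 1.0
for system, p in p_fail.items():
    p_success_path *= (1.0 - p)

core_damage_frequency = losp_frequency * (1.0 - p_success_path)
print(f"Conditional core damage probability: {1.0 - p_success_path:.3e}")
print(f"Core damage frequency: {core_damage_frequency:.3e} /a")
```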
In the case of Paks NPP, analysis of the site soil conditions and the features of the critical plant structures led to the conclusion that differential ground settlement due to liquefaction is the dominating effect that can damage the critical plant structures. Differential settlement can result in tilting and in relative displacement between adjacent buildings, leading to loss of integrity. The relative displacement between buildings and underground piping and cables can damage these communication lines; this affects mainly the power cables of the emergency power supply and the piping to the ultimate heat sink. It means that the emergency power supply (B) and the systems for heat removal (C) could be affected by the soil settlement with a time delay after the strong motion starts. Once it has actuated, system A will continue to ensure subcriticality, though the reactor might be tilted together with the reactor building due to liquefaction. Although it is not indicated in Figure 1, the differential settlement can also cause loss of integrity of the containment, which should also be evaluated. Assuming the
above scenario, the liquefaction can be considered as a separate load case subsequent to the vibratory motion. The analysis of the safety consequences of the above scenario can be performed by either a deterministic or a probabilistic method. A probabilistic safety analysis of liquefaction (liquefaction PSA) requires the characterisation of the hazard, the development of the plant event trees and fault trees, and knowledge of the fragility of the systems, structures, and components (SSCs) relevant to safety. The fragility has to be defined as a function of an engineering demand parameter that should be correlated with an appropriate intensity measure characterising the earthquake hazard. A performance-based earthquake engineering (PBEE) probabilistic framework for evaluation of the risk associated with liquefaction has been developed in [8–10]. In the PBEE framework, the earthquake is characterized by an intensity measure (IM), for example, peak ground acceleration. An engineering demand parameter (EDP) has to be identified, and the EDP should be correlated with damage measures (DM) that quantify the physical effect of the EDP. The risk associated with the DM has to be expressed in decision variables (DV) applicable for the risk characterization (some measure of loss). The mean annual rate of exceedance of a given DV level can be calculated if the annual rate of the IM and the conditional probabilities connecting the IM to the EDP, the EDP to the DM, and the DM to the DV are known. The mean annual rate of exceedance of a given DV level, $\lambda_{DV}$, can be expressed as

$$\lambda_{DV} = \sum_{k=1}^{N_{DM}} \sum_{j=1}^{N_{EDP}} \sum_{i=1}^{N_{IM}} P\left[DV \mid DM_k\right] P\left[DM_k \mid EDP_j\right] P\left[EDP_j \mid IM_i\right] \Delta\lambda_{IM_i} \qquad (1)$$

where functions of the type $P[a = a' \mid b = b']$ describe the conditional probability of the random variable $a$ given $b = b'$. The $\Delta\lambda_{IM_i}$ is the $i$th increment of the mean annual rate of exceedance of the IM. The $N_{DM}$, $N_{EDP}$, and $N_{IM}$ are the numbers of increments corresponding to DM, EDP, and IM, respectively. In the case of a nuclear power plant safety analysis, the DV can be associated with the loss of a safety function if the measure of damage exceeds the level $DM_k$. The aggregate mean annual frequency of exceeding a particular value of DV is then determined by summing up the contributions from all combinations of possible intensity measures, engineering demand parameters, and damage measures.
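Once the hazard increments and the conditional probability tables are available, the triple sum in (1) is mechanical to evaluate. The Python sketch below illustrates it on coarse, entirely hypothetical discretisations of IM, EDP, and DM; none of the numbers represent fragility or hazard data for any plant.

```python
import numpy as np

# Minimal sketch of equation (1): lambda(DV) as a triple sum over IM, EDP and DM bins.
# All arrays below are hypothetical placeholders chosen only to make the code run.

d_lambda_im = np.array([1e-3, 1e-4, 1e-5])   # annual rate increments for 3 IM bins (e.g. PGA)
p_edp_given_im = np.array([                  # P[EDP_j | IM_i], rows = IM bins
    [0.90, 0.09, 0.01],
    [0.50, 0.40, 0.10],
    [0.20, 0.50, 0.30],
])
p_dm_given_edp = np.array([                  # P[DM_k | EDP_j], rows = EDP bins
    [0.95, 0.05],
    [0.60, 0.40],
    [0.20, 0.80],
])
p_dv_given_dm = np.array([0.01, 0.50])       # P[DV exceeded | DM_k]

lambda_dv = 0.0
for i, d_lam in enumerate(d_lambda_im):
    for j in range(p_edp_given_im.shape[1]):
        for k in range(p_dm_given_edp.shape[1]):
            lambda_dv += (p_dv_given_dm[k]
                          * p_dm_given_edp[j, k]
                          * p_edp_given_im[i, j]
                          * d_lam)

print(f"Mean annual rate of exceeding the DV level: {lambda_dv:.3e} /a")
```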
In spite of the comprehensiveness of the PBEE method, it is rather difficult to apply, since the identification of the parameters DM, EDP, and IM and the definition of the conditional probabilities are not trivial. The calculation of the aggregate mean annual frequency for DV requires knowledge of the multivariate distribution of the intensity measures, since engineering demand parameters like soil settlement depend on the peak ground acceleration and on the magnitude of the earthquake. The calculation can be simplified if the marginal distributions of the intensity measures are used, which implies weak correlation between the intensity measures. Further difficulties are related to the definition of the conditional probabilities (fragility functions) of the plant SSCs. The available information on the fragility of buildings, underground structures, and lifelines of nuclear power plants is rather scarce. Nevertheless, a liquefaction PSA has certain advantages, since it quantifies the core damage and early large release frequencies and identifies the plant vulnerabilities. However, the liquefaction PSA would not provide input information for the design of upgrading or mitigating measures. Considering the abovementioned difficulties, a deterministic approach has been adopted for Paks NPP for the beyond design basis safety analysis of the liquefaction. Deterministic safety analysis assesses the integrity and function of the plant SSCs by calculating the loads due to liquefaction and the stresses and strains caused by these loads, and comparing these to ultimate values. Thus, the procedure for analysis of the plant response to liquefaction as a beyond design basis event consists of the following steps.
(1) The first is probabilistic seismic hazard assessment, which provides the peak ground acceleration and the deaggregation matrices that are used in the computation of the magnitude for the liquefaction hazard analysis.
(2) The second is calculation of the soil settlements due to liquefaction.
(3) The third is identification of the SSCs within the scope of the liquefaction safety analysis. These are the SSCs needed for ensuring heat removal from the reactor and the spent fuel pool as well as the SSCs that are important for accident management. The identification of the SSCs has been done in the frame of the Targeted Safety Review (see [4]). Thus, the SSCs within the scope of the liquefaction safety analysis are as follows.
(a) SSCs that have to be functional or preserve their integrity as required for emergency heat removal (see [4, 5]). In the case of Paks NPP, these are first of all the essential service water system and the emergency power supply system; the essential service water system consists of piping, water intake structures, and the water intake control building. The underground pipelines
connect the pumps located in the water intake building to the main reactor building and the diesel building, crossing the lower level of the turbine hall. There are also back-up systems (e.g., the fire water system) that can be used as ultimate heat sinks in case of severe accident, as well as back-up power supply systems; these should also be included in the scope of analysis.
(b) The containment function has to be ensured for limiting the radioactive releases.
(c) Structures and systems with limited radioactive inventory, for example, the auxiliary building, should preserve a certain level of structural integrity for limiting the site releases.
(d) Control rooms and the structures along the escape routes: the integrity and habitability of the barrack of the fire brigade and of the Protected Command Centre have to be analysed and ensured.
(e) The Laboratory and Service Building, which is connected to the controlled area of the plant, has to be checked as to whether life safety is ensured and the escape routes are safe.
(f) Buildings that may collapse should not damage the essential service water system and emergency power lines or hinder the implementation of emergency measures.
(4) The fourth is definition of the criteria for assessing whether the SSCs identified above comply with the above requirements, as well as of the methods for structural analysis. Definition of the assumptions applicable for material properties and load combinations is also included in this step of the process. Examples are as follows.
(a) Permanent deformation of the pipelines of the essential service water system can be accepted provided that the overall integrity and leak-tightness are ensured.
(b) According to IAEA Safety Guide NS-G-1.10 [11], the following conditions can be accepted for the containment regarding structural integrity.
(i) Level II: local permanent deformations are possible. Structural integrity is ensured, though with margins smaller than those for the design basis.
(ii) Level III: significant permanent deformations are possible, and some local damage is also expected. Normally, this level is not considered in case of severe accidents.
For leak-tightness, the following levels could be considered.
(i) Level II: the leak rate may exceed the design value, but the leak-tightness can be adequately estimated and considered in the design.
(ii) Level III: leak-tightness cannot be ensured owing to large deformations of the containment structure. Structural integrity may still be ensured.
Considering the design of Paks NPP, large permanent deformations of the containment walls and floors are allowed when the deformations are within the strain limits allowable for the liner that ensures the necessary leak-tightness of the containment. The relative displacement between the containment and the structures connected to it has to be assessed from the point of view of the integrity of the essential service water pipelines crossing these locations.
(c) In the case of the Laboratory and Service Building, near-collapse conditions (according to EUROCODE 8 Part 3 or FEMA-356 2000) are allowed, provided that evacuation is ensured via safe escape routes. A near-collapse condition is also acceptable in the case of the auxiliary building.
(d) Specific attention has to be paid to the water intake structure, namely whether the functioning of the pumps and a free cross section for the intake are ensured.
Best estimate models and mean values of loads and material properties can be used in the analysis of the liquefaction effects. In best estimate models, the contribution of nonstructural elements to the resistance can be accounted for. The calculation can be linear or nonlinear static. In the case of the containment (main reactor building), a coupled soil-structure model is applicable. The structures within the scope of the safety analysis are rather different: the main reactor building foundation is at a depth of 8.5 m, below the groundwater table, while the piping of the essential service water system is located near the surface in dry sand. The analysis methods selected for each structure within the scope have to fit the specific design and soil conditions.
(5) The fifth is performing the analysis, drawing conclusions on the plant response to liquefaction, identifying the safety upgrading measures needed for effective accident management, and developing the accident management guidance.
CHARACTERISATION OF LIQUEFACTION HAZARD
Geotechnical Investigations
A comprehensive geotechnical survey has been made for better understanding of the site conditions and for updating the database obtained prior to the construction of the plant. Altogether, at the site (an area of 500 m × 1000 m), there are nearly 500 boreholes and other test points and more than 100 groundwater-monitoring wells. The site geotechnical survey includes mapping of the soil stratigraphy, in situ definition of soil properties, full scope laboratory testing of samples, cyclic triaxial and resonant column tests, Standard Penetration Tests (SPT), Cone Penetration Tests (CPT), Piezometric Cone Penetration Tests (CPTu), and Seismic Cone Penetration Tests (SCPT). The data are stored and presented in a GeoDin database. The geotechnical investigations were performed in compliance with the standards ISO 22475, ISO 22476, ISO/TS 17892, ASTM D3999-11, and ASTM 4015-07. It has to be noted that the performance of geotechnical investigations at the plant site was rather difficult because of underground structures and lifelines. On the other hand, the soil conditions close to the buildings were
disturbed due to foundation excavations. Therefore, a control area has been selected north to the plant where undisturbed soil conditions could be studied. Of course the soil conditions at the control area are not completely identical to those below critical plant structures. These differences have to be accounted for properly. The classical method for determining liquefaction potential is based on SPT measurements, which had been the most widely used procedure. CPT, however, has approached the same level, and newly developed CPT based correlations now represent coequal or even better status with regard to accuracy and reliability. Compared to SPT, CPT offers advantages with regard to cost, efficiency, repeatability, and consistency. The accuracy of SPT measurements is operator-dependent and their usefulness depends on the soil type: they give the most useful results in case of fine-grained sands, while in case of clays and gravelly soils they provided results, which may very poorly represent the true soil conditions. However, the most important aspect is the continuity of data over depth. SPT can only be performed at vertical spacing of about 75 cm or more, so it can completely miss thin (but potentially important) liquefiable strata. CPT, in contrast, is fully continuous and so “misses” nothing.
Analysis of the Liquefaction Potential
In practice, empirical methods based on in situ geotechnical tests are the most frequently used for liquefaction potential evaluation. For the Paks site, preliminary calculations using well-known empirical correlations for liquefaction potential provided rather contradictory results; see, for example, [12] and more recently [13]. In these calculations, a thorough comparison has been made for selection of the most appropriate method for the analysis. Nine of the newest and most commonly used cyclic stress based empirical correlations and two promising energy-related methods were considered initially; see Table 3.

Table 3: Liquefaction potential evaluation methods compared in the analysis

Method | Intensity measure | Empirical basis
Youd and Idriss (2001) [17] | Peak ground acceleration and magnitude | SPT
Robertson and Wride (1998) [26] | Peak ground acceleration and magnitude | CPT
Andrus and Stokoe (2000) [28] | Peak ground acceleration and magnitude | Vs
Cetin et al. (2004) [15] | Peak ground acceleration and magnitude | SPT
Moss et al. (2006) [27] | Peak ground acceleration and magnitude | CPT
Kayen et al. (2013) [29] | Peak ground acceleration and magnitude | Vs
Idriss and Boulanger (2008, 2012, 2014) [20, 30, 31] | Peak ground acceleration and magnitude | SPT, CPT
Juang et al. (2006) [32] | Peak ground acceleration and magnitude | CPT
Kayen and Mitchell (1997) [33] | Arias intensity | SPT
Kramer and Mitchell (2006) [34] | CAV05 | SPT
Shear wave velocity based and, especially, energy-related methods have seen little use in practice; there is not much experience with their application. For this reason, our focus was narrowed to the traditionally used CPT and SPT based stress methods. The final goal of our investigation was to obtain a seismically induced settlement map for the area of the critical buildings and underground structures. Since the vicinity of the reactor building was explored mainly by CPTs, and also because of the advantages of the CPT test described above, the mapping was carried out using the CPT based methods. Since most of the empirical methods for settlement calculation rely on SPT blow counts, at those locations where CPT and SPT were performed in close proximity to each other, the settlement values obtained by the SPT and CPT based methods were compared. One of these locations (labelled B3), located near the reactor building, was chosen in this paper for illustrative purposes. The soil conditions accounted for in the calculations are shown in Figure 2. The groundwater level was assumed to be 8 m below the surface.
Figure 2: Soil profile, SPT blow counts (N), shear wave velocities (Vs), CPT penetration resistances (qc), and friction ratios (FR) measured at one of the studied points, labelled B3.
The SPT and CPT based liquefaction assessment methods calculate a factor of safety against liquefaction, that is, the ratio of the cyclic resistance ratio (CRR) to the seismic demand, namely, the cyclic stress ratio (CSR). There are two approaches for calculating the CSR: it can be determined by site response analysis, or it can be approximated by the simplified equations provided for each liquefaction evaluation method. Site response analysis has to be performed by a nonlinear total stress method without taking into account the pore pressure increase. In many cases, nonlinear behaviour is modelled by the equivalent linear method, which takes into account the degradation of the shear modulus with shear strain by an iterative procedure. Although a real nonlinear time history analysis can be regarded as more accurate, the applicability of the equivalent linear method has also been studied, because, owing to the size and complexity of the model, the dynamic behaviour of the building complex is studied using the equivalent linear method. The peak ground acceleration (PGA or amax) was computed for the design basis earthquake with a 10^−4/a annual frequency. Nonlinear computations have resulted in amax = 0.25 g, while equivalent linear methods have given 0.29 g for the mean surface peak ground acceleration. From the point of view of the liquefaction hazard, the moment magnitude of the controlling earthquake has been determined to be 6.0 using the method of Marrone et al. [14]. In the simplified equations for the CSR, the surface acceleration given by the nonlinear method was applied. The stress reduction factors obtained using the equivalent linear and nonlinear approaches are shown in Figure 3(a) in comparison with the results of the simplified equations of Cetin [15, 16], Youd and Idriss [17], and Idriss [18]. This shows that the difference between the approaches can be significant, which can strongly influence both the factor of safety against liquefaction and the resulting settlement. For this reason, it is highly recommended to use site response analysis for the evaluation of the stress reduction with depth for high-risk facilities. Among the simplified equations, the formula of Cetin [16] provided the best estimation of the actual behaviour of the soil column. It can be noted that some difference can be observed between the results of the equivalent linear and nonlinear approaches; the stress reduction factor computed by the equivalent linear method decreases faster. However, in the corresponding cyclic stress ratio (CSR) values, a negligible difference can be observed in the depth of interest (Figure 3(b)), because the differences in the stress reduction factor and in the PGA compensate each other. Therefore, the CSR computed by the equivalent linear method has been used
hereinafter to be consistent with the selected method for the soil-structure interaction analysis.
Figure 3: Stress reduction factors computed by different simplified equations, in addition to nonlinear and equivalent linear method (a) and cyclic stress ratio determined by nonlinear and equivalent linear method (b).
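For orientation, the seismic demand side of the stress-based methods can be written in a few lines. The Python sketch below uses the classical Seed-Idriss simplified form of the CSR together with a commonly used magnitude scaling factor; the amax and magnitude values repeat those quoted above, while the stress profile, rd, and CRR inputs are illustrative placeholders, and in the site-specific assessment the simplified rd is replaced by the value from site response analysis.

```python
import math

# Minimal sketch: simplified cyclic stress ratio (CSR) and factor of safety against
# liquefaction.  Stress, r_d and CRR values are illustrative; the Paks assessment
# uses r_d from site response analysis and CRR from a CPT-based correlation.

def cyclic_stress_ratio(a_max_g, sigma_v, sigma_v_eff, r_d):
    """Seed-Idriss simplified form: CSR = 0.65 * (a_max/g) * (sigma_v/sigma_v') * r_d."""
    return 0.65 * a_max_g * (sigma_v / sigma_v_eff) * r_d

def magnitude_scaling_factor(magnitude):
    """A commonly used MSF form (Idriss, 1999): 6.9*exp(-M/4) - 0.058, capped at 1.8."""
    return min(6.9 * math.exp(-magnitude / 4.0) - 0.058, 1.8)

a_max_g = 0.25       # surface PGA in g (nonlinear site response result quoted above)
magnitude = 6.0      # moment magnitude of the controlling earthquake (quoted above)
sigma_v = 230.0      # total vertical stress at the layer of interest, kPa (illustrative)
sigma_v_eff = 170.0  # effective vertical stress, kPa (illustrative)
r_d = 0.85           # stress reduction factor at that depth (illustrative)
crr_m75 = 0.18       # cyclic resistance ratio for M = 7.5 (illustrative)

csr = cyclic_stress_ratio(a_max_g, sigma_v, sigma_v_eff, r_d)
msf = magnitude_scaling_factor(magnitude)
fs_liq = crr_m75 * msf / csr
print(f"CSR = {csr:.3f}, MSF = {msf:.2f}, factor of safety = {fs_liq:.2f}")
```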
The analysis of the in situ test records and of the resistance against liquefaction shows that the subsoil conditions around the reactor buildings differ slightly from those at the control site. From the liquefaction-induced settlement point of view, the most vulnerable layer around the reactors is located mostly at depths between 10 and 16 m, but at some locations settlement was also predicted at depths of 18–20 m. This finding slightly differs from the earlier results of the authors [12], where the layer most susceptible to liquefaction was computed to be at depths between 16 and 22 m at the control site. This critical depth deserves attention for two main reasons.
(i) The depth of 15–20 m is around the limit for which the simplified procedures have been verified, and the uncertainty in the results can be significant.
(ii) The relatively large depth of the critical layers unfavourably influences pore pressure dissipation, but on the other hand the layers are underlain by a gravelly deposit, which facilitates the dissipation of excess pore pressure.
Even at those locations where the CPT and SPT tests were performed in close proximity and it is reasonable to assume that they represent the same soil conditions, the factors of safety based on the two tests differ considerably (Figure 4(a)). Moreover, significant variation can be observed in the factors of safety provided by methods using the same in situ index
record. The high uncertainty in the SPT based methods is mainly the result of differences in the methods' CRR-normalized blow-count correlations, which can be traced back to the misinterpretation of a few field cases during their development [19]. The CRR-normalized CPT tip resistance correlations agree quite well with each other for relatively low seismic loading conditions, so the uncertainty of the CPT based methods largely arises from the tip resistance normalization, especially from the fines content correction. The effect of two available fines content corrections on the factor of safety is illustrated by an example in Figure 4(b).
Figure 4: Factors of safety against liquefaction computed by different SPT and CPT based methods (a) and by the CPT based method of Idriss and Boulanger using different empirical correlations to determine the fines content (b).
After thorough consideration, the method of Boulanger and Idriss [20] was selected from the CPT based methods for mapping the liquefaction potential of the reactor area. Our choice fell on this method for the following reasons.
(i) Besides being deterministic, it also allows a probabilistic approach to the problem, and the probability of liquefaction occurrence can be incorporated into the performance-based framework of building safety
evaluation.
(ii) The authors have developed both CPT and SPT based procedures, and the results given by their SPT based method are the closest to the results of the CPT based methods.
(iii) The probabilistic method of Moss et al. was regressed from a liquefaction case history database, and for this reason its applicability at depths greater than 12 m is uncertain. Boulanger and Idriss have used critical state soil mechanics to extend their formula to greater depths, which is regarded as an improvement in liquefaction potential evaluation [20].
(iv) The authors of the method have been continuously revising and updating their method with their latest results for approximately 10 years. They published revised corrections and updates in 2008, 2010, and 2012, and recently in 2014, including liquefaction case histories from the most recent earthquakes, the 2010-2011 Canterbury earthquakes and the 2011 Great Tohoku earthquake. Thus this method can be considered the most up-to-date correlation.
(v) As the task was not a design problem but the assessment of an existing building, in the selection of methods we aspired to limit the conservatism involved in the calculation.
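For orientation only, a CPT-based triggering relation of the Boulanger-Idriss type maps the clean-sand-equivalent normalized tip resistance qc1Ncs to a cyclic resistance ratio at the reference magnitude of 7.5. The Python sketch below uses the commonly cited form of the 2014 correlation; the coefficients and the input values are reproduced from memory and should be verified against [20] before any actual use.

```python
import math

# Sketch of a CPT-based cyclic resistance ratio (CRR) correlation of the Boulanger-
# Idriss type.  qc1Ncs is the clean-sand-equivalent, overburden-normalized cone tip
# resistance.  Coefficients follow the commonly cited form of the 2014 correlation and
# should be checked against [20] before use; the input values are illustrative.

def crr_m75_from_qc1ncs(qc1ncs):
    """CRR at M = 7.5 and 1 atm effective overburden stress (CPT-based form)."""
    return math.exp(qc1ncs / 113.0
                    + (qc1ncs / 1000.0) ** 2
                    - (qc1ncs / 140.0) ** 3
                    + (qc1ncs / 137.0) ** 4
                    - 2.80)

for qc1ncs in (80.0, 120.0, 160.0):   # illustrative tip resistance values
    print(f"qc1Ncs = {qc1ncs:6.1f}  ->  CRR(M=7.5) = {crr_m75_from_qc1ncs(qc1ncs):.3f}")
```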
Settlement due to Liquefaction
The main goal of the evaluation is to determine the anticipated post-liquefaction displacements. As most of the empirical settlement calculation methods are based on SPT blow-count numbers, mainly these methods had been used in earlier studies. Because many CPT tests were carried out around the main building complex of Paks NPP and CPT has many advantages over SPT, a CPT based liquefaction potential and settlement evaluation method was finally used. In the frame of the preparatory studies for the Paks NPP site, several methods for settlement computation have been compared: the SPT based methods of Ishihara and Yoshimine [21], Tokimatsu and Seed [22], Wu and Seed [23], and Cetin et al. [24], as well as the CPT based method of Zhang et al. [25]. Most of these methods rely on the factor of safety against liquefaction and/or the normalized penetration resistance, so they have to be used in conjunction with liquefaction potential evaluation methods. The first two methods are compatible with the methods of Youd et al. and of Idriss and Boulanger, while the Wu and Seed procedure can be used with the method of Cetin et al. Cetin et al. [24] used a new approach to develop their correlation for empirical settlement analysis. Instead of using laboratory results, a high-quality database of cyclically induced ground settlement case histories formed the base of
their method, which allowed probabilistic assessment of the database. Their procedure is based on the CSR and on SPT blow-counts normalized to energy, overburden pressure, and clean sand. The method proposes the use of a depth weighting factor, which takes into account the observation that deeper layers play a less important role in the surface settlement. Their statistical assessment showed that the optimum value of this threshold depth is 18 m. Because of these features, this method can be considered the most appropriate among the investigated ones. From the CPT based settlement calculation methods, only one option, the procedure of Zhang et al. [25], was available for the analysis. This method computes the volumetric strain from the factor of safety against liquefaction and the CPT tip resistance normalized to clean sand. The authors proposed the use of the Robertson and Wride [26] method to compute the factor of safety, which was at that time the state-of-the-art CPT method. Following Cetin et al. [24], we have limited the depth of the settlement calculations, but the threshold depth was taken more conservatively as 20 m. For the B3 test point, the procedure of Zhang in conjunction with the methods of Robertson and Wride [26], Moss et al. [27], and Boulanger and Idriss [20] resulted in the following settlements: 0.8 cm, 9.2 cm, and 1.6 cm, respectively. We used the method of Boulanger and Idriss for the mapping because of the reasons presented above in Section 3.2. Comparison of these values with the results of the SPT based calculations showed that, in general, all of the SPT based settlements were significantly larger than the CPT based values. The largest settlements were predicted by the Ishihara and Yoshimine and the Tokimatsu and Seed methods, while Cetin et al. gave the lowest values. After thorough revision of the results, methods, and tests, it was noted that probably some kind of error distorts the SPT records at a few test points. At those places where CPT, SPT, and Vs were all available, the CPT and Vs records showed a very similar sequence of stiffer and softer layers, but the SPT blow-count numbers contradicted that stratigraphy. The extent of the geotechnical survey allowed the assessment of the lateral variability of the soil conditions. The mapping of the seismically induced settlement was based on altogether 29 CPT records around the reactor buildings, and the values varied between 0.1 cm and 5.1 cm. However, this maximum value is quite an outlier, because at most of the test points less than 1 cm settlement was predicted, and the settlement exceeded 2 cm at only four test points.
The free surface settlement was computed by applying the effective stress method [19] to the average soil profile. It gave an average settlement of 0.67 cm, which is consistent with the results given by the combination of the Zhang et al. and Boulanger and Idriss methods.
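The settlement figures quoted above are obtained by integrating the computed post-liquefaction volumetric strains over depth. The Python sketch below shows that final integration step with a linear depth-weighting factor of the kind proposed by Cetin et al. [24] and the 20 m threshold depth adopted above; the layer strains themselves are placeholders, since in the assessment they come from the CPT-based correlation of Zhang et al. [25].

```python
# Minimal sketch: liquefaction-induced free-field settlement as the depth-weighted
# integral of post-liquefaction volumetric strains.  Layer strains are placeholders;
# in the assessment they come from CPT-based correlations (e.g. Zhang et al. [25]).

z_threshold = 20.0   # m, depth below which contributions are neglected (as adopted above)

# (top depth [m], bottom depth [m], volumetric strain [-]) for liquefiable layers
layers = [
    (10.0, 12.0, 0.002),
    (12.0, 16.0, 0.001),
    (18.0, 20.0, 0.0005),
]

def depth_weight(z_mid, z_threshold):
    """Linear depth weighting: 1 at the surface, 0 at the threshold depth."""
    return max(0.0, 1.0 - z_mid / z_threshold)

settlement = 0.0
for z_top, z_bot, eps_v in layers:
    thickness = z_bot - z_top
    z_mid = 0.5 * (z_top + z_bot)
    settlement += depth_weight(z_mid, z_threshold) * eps_v * thickness

print(f"Estimated free-field settlement: {settlement * 100:.2f} cm")
```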
CONCLUSION
In the paper, an important application of liquefaction assessment is discussed: the beyond design basis analysis of liquefaction consequences for nuclear power plants. A detailed framework for performing the safety analysis for liquefaction consequences is outlined. Deterministic safety analysis of a nuclear power plant for earthquake-induced liquefaction is a complex task that requires adequate modelling of the plant response, characterization of the hazard and of the engineering demand parameter of the liquefaction, and assessment of the integrity and function of the plant systems, structures, and components. Preparatory analyses and considerations show that settlement could be the dominating engineering demand parameter for the case of the Paks NPP site. The adequacy of the safety analyses and the conclusiveness of the results are mainly limited by the epistemic uncertainty of the methods of hazard definition and of the engineering parameters characterising the consequences of liquefaction and controlling the plant response. In the paper, a detailed comparison of the available methodologies has been made for adequate selection of the methods for calculation of the settlement.
REFERENCES 1.
L. Tóth, E. Győri, and T. J. Katona, “Current Hungarian practice of seismic hazard assessment,” in Recent Findings and Developments in Probabilistic Seismic Hazards Analysis (PSHA) Methodologies and Applications, Proceedings of the OECD NEA Workshop (NEA/ CSNI/R(2009)1), 2009. 2. T. J. Katona, “Seismic safety analysis and upgrading of operating nuclear power plants,” in Nuclear Power—Practical Aspects, W. Ahmed, Ed., chapter 4, pp. 77–124, InTech, New York, NY, USA, 2012. 3. O. Arup and Partner, “Seismic hazard reevaluation tasks 1, 2, 3, 4, and 7,” Final Report Project no.: 4.2.1, PHARE Regional Programme Nuclear Safety, 1995. 4. HAEA, National Report of Hungary on the Targeted Safety ReAssessment of Paks Nuclear Power Plant, HAEA, Budapest, Hungary, 2011. 5. IAEA, “Severe accident management programmes for nuclear power plants,” Safety Guide NS-G-2.15, International Atomic Energy Agency, Vienna, Austria, 2008. 6. http://www-ns.iaea.org/tech-areas/seismic-safety/default. asp?s=2&l=65. 7. S. Dashti, J. D. Bray, J. M. Pestana, M. Riemer, and D. Wilson, “Mechanisms of seismically induced settlement of buildings with shallow foundations on liquefiable soil,” Journal of Geotechnical and Geoenvironmental Engineering, vol. 136, no. 1, pp. 151–164, 2010. 8. C. A. Cornell and H. Krawinkler, “Progress and challenges in seismic performance assessment,” PEER News, pp. 1–3, 2000. 9. G. G. Deierlein, H. Krawinkler, and C. A. Cornell, “A framework for performance-based earthquake engineering,” in Proceedings of the Pacific Conference on Earthquake Engineering, Christchurch, New Zealand, February 2003. 10. F. Zareian and F. Krawinkler, “Simplified performance based earthquake engineering,” Tech. Rep. 169, The John A. Blume Earthquake Engineering Center, Stanford University, Stanford, Calif, USA, 2009. 11. IAEA, Design of Reactor Containment Systems for Nuclear Power Plants, International Atomic Energy Agency, Vienna, Austria, 2004.
12. E. Győri, L. Tóth, Z. Gráczer, and T. Katona, “Liquefaction and postliquefaction settlement assessment—a probabilistic approach,” Acta Geodaetica et Geophysica Hungarica, vol. 46, no. 3, pp. 347–369, 2011. 13. E. Győri, T. J. Katona, Z. Bán, and L. Tóth, “Methods and uncertainties in liquefaction hazard assessment for nuclear power plants,” in Proceedings of the 2nd European Conference on Earthquake Engineering and Seismology, Istanbul, Turkey, August 2014. 14. J. Marrone, F. Ostadan, R. Youngs, and J. Litehiser, “Probabilistic liquefaction hazard evaluation: method and application,” in Proceedings of the 17th International Conference on Structural Mechanics in Reactor Technology (SMiRT-17 ‘03), vol. 17 22, Prague, Czech Republic, August 2003. 15. K. O. Cetin, R. B. Seed, A. der Kiureghian et al., “Standard penetration test-based probabilistic and deterministic assessment of seismic soil liquefaction potential,” Journal of Geotechnical and Geoenvironmental Engineering, vol. 130, no. 12, pp. 1314–1340, 2004. 16. K. O. Cetin, Reliability-based assessment of of seismic soil liquefaction initiation hazard [Ph.D. dissertation], University of California, Berkeley, Calif, USA, 2000. 17. T. L. Youd and I. M. Idriss, “Liquefaction resistance of soils: Summary report from the 1996 NCEER and 1998 NCEER/NSF workshops on evaluation of liquefaction resistance of soils,” Journal of Geotechnical and Geoenvironmental Engineering, vol. 127, no. 4, pp. 297–313, 2001. 18. I. M. Idriss, “An update of the Seed-Idriss simplified procedure for evaluating liquefaction potential,” Proceedings: TRB Workshop on New Approaches to Liquefaction Analysis FHWA-RD-99-165, Federal Highway Administration, Washington, DC, USA, 1999. 19. I. M. Idriss and R. W. Boulanger, “SPT-based liquefaction triggering procedures,” Tech. Rep. UCD/CGM-10/02, University of California, Davis, Calif, USA, 2010. 20. R. W. Boulanger and I. M. Idriss, “CPT and SPT based liquefaction triggering procedures,” Tech. Rep. UCD/CGM-14/01, University of California, Davis, Calif, USA, 2014. 21. K. Ishihara and M. Yoshimine, “Evaluation of settlements in sand deposits following liquefaction during earthquakes,” Soils and Foundations, vol. 32, no. 1, pp. 178–188, 1992.
22. K. Tokimatsu and B. H. Seed, “Evaluation of settlements in sands due to earthquake shaking,” Journal of Geotechnical Engineering, vol. 113, no. 8, pp. 861–878, 1987. 23. J. Wu and R. B. Seed, “Estimating of liquefaction-induced ground settlement case studies,” in Proceedings of the 5th International Conference on Case Histories in Geotechnical Engineering, Paper 3.09, New York, NY, USA, April 2004. 24. K. O. Cetin, H. T. Bilge, J. Wu, A. M. Kammerer, and R. B. Seed, “Probabilistic model for the assessment of cyclically induced reconsolidation (Volumetric) settlements,” Journal of Geotechnical and Geoenvironmental Engineering, vol. 135, no. 3, pp. 387–398, 2009. 25. G. Zhang, P. K. Robertson, and R. W. I. Brachman, “Estimating liquefaction-induced ground settlements from CPT for level ground,” Canadian Geotechnical Journal, vol. 39, no. 5, pp. 1168– 1180, 2002. 26. P. K. Robertson and C. E. Wride, “Evaluating cyclic liquefaction potential using the cone penetration test,” Canadian Geotechnical Journal, vol. 35, no. 3, pp. 442–459, 1998. 27. R. E. S. Moss, R. B. Seed, R. E. Kayen, J. P. Stewart, A. Der Kiureghian, and K. O. Cetin, “CPT-based probabilistic and deterministic assessment of in situ seismic soil liquefaction potential,” Journal of Geotechnical and Geoenvironmental Engineering, vol. 132, no. 8, pp. 1032–1051, 2006. 28. R. D. Andrus and K. H. Stokoe II, “Liquefaction resistance of soils from shear-wave velocity,” Journal of Geotechnical and Geoenvironmental Engineering, vol. 126, no. 11, pp. 1015–1025, 2000. 29. R. Kayen, R. E. S. Moss, E. M. Thompson et al., “Shear-Wave velocitybased probabilistic and deterministic assessment of seismic soil liquefaction potential,” Journal of Geotechnical and Geoenvironmental Engineering, vol. 139, no. 3, pp. 407–419, 2013. 30. I. M. Idriss and R. W. Boulanger, “Soil liquefaction during earthquakes,” in Monograph MNO-12, p. 261, Earthquake Engineering Research Institute, Oakland, Calif, USA, 2008. 31. I. M. Idriss and R. W. Boulanger, “Examination of SPT-based liquefaction triggering correlations,” Earthquake Spectra, vol. 28, no. 3, pp. 989–1018, 2012.
32. C. H. Juang, S. Y. Fang, and E. H. Khor, “First-order reliability method for probabilistic liquefaction triggering analysis using CPT,” Journal of Geotechnical and Geoenvironmental Engineering, vol. 132, no. 3, pp. 337–350, 2006. 33. R. E. Kayen and J. K. Mitchell, “Assessment of liquefaction potential during earthquakes by arias intensity,” Journal of Geotechnical Engineering, vol. 123, no. 12, pp. 1162–1174, 1997. 34. S. L. Kramer and R. A. Mitchell, “Ground motion intensity measures for liquefaction hazard evaluation,” Earthquake Spectra, vol. 22, no. 2, pp. 413–438, 2006.
Chapter 3
The Fukushima Nuclear Accident: Insights on the Safety Aspects
Zieli Dutra Thomé1, Rogério dos Santos Gomes2, Fernando Carvalho da Silva3, Sergio de Oliveira Vellozo1 Department of Nuclear Engineering, Military Institute of Engineering, Rio de Janeiro, Brazil 1
Directorate of Radiation Protection and Nuclear Safety, Brazilian National Commission for Nuclear Energy, Rio de Janeiro, Brazil 2
Department of Nuclear Engineering, COPPE/UFRJ, Rio de Janeiro, Brazil
3
ABSTRACT
The Fukushima nuclear accident has generated doubts and questions which need to be properly understood and addressed. This scientific attitude became necessary to allow the continued use of nuclear technology for electricity generation around the world. The nuclear stakeholders are working to obtain technical answers to the Fukushima questions. We believe that such
Citation: Thomé, Z. , Gomes, R. , Silva, F. and Vellozo, S. (2015), “The Fukushima Nuclear Accident: Insights on the Safety Aspects”. World Journal of Nuclear Science and Technology, 5, 169-182. doi: 10.4236/wjnst.2015.53017. Copyright: © 2015 by authors and Scientific Research Publishing Inc. This work is licensed under the Creative Commons Attribution International License (CC BY). http:// creativecommons.org/licenses/by/4.0
challenges will certainly be implemented in the next generation of reactors as the technology evolves. The purpose of this work is to perform a critical analysis of the Fukushima nuclear accident, focusing on the common cause failures produced by the tsunami and on the behaviour of the main redundant systems. The work also assesses the mitigative procedures adopted during the accident and their consequences, which fell short of what was needed to halt the progression of the accident. It discusses the sharing of structures, systems and components at multi-unit nuclear power plants, the inappropriate use of such sharing in safety-related devices, which can compromise nuclear safety, and its impact on the Fukushima accident scenario. The lessons from Fukushima must be learned thoroughly, leading to new procedures and new safety systems, so that nuclear technology can reach a higher level of safety. This knowledge will establish a conceptual milestone in safety system design and makes a review of the current acceptance criteria for safety-related systems necessary.

Keywords: Fukushima Nuclear Accident, Nuclear Safety, Safety Culture
INTRODUCTION
In the past, the TMI and Chernobyl reactor accidents were a source of operational experience under accident conditions, pointing out new directions for all stakeholders and bringing improvements to existing power plants. The recent accident at the Fukushima Daiichi Nuclear Power Plant has, as expected, prompted reflection on the safety of nuclear power plant operation. Improvements to safety systems and operating procedures therefore become necessary, representing a technological challenge for existing reactors as well as for the next generation of nuclear power plants.

The Fukushima nuclear accident has produced doubts and questions that need to be properly understood and addressed in order to enable the continued use of nuclear power technology. The lessons that will come from Fukushima will establish a conceptual milestone in safety system design, making it urgent to review and re-evaluate the adequacy of the current acceptance criteria for safety-related systems at multi-unit stations, mainly the criteria concerning the sharing of structures, systems and components (SSC).
The accident was characterized by progressive damage due to the loss of core cooling capability, which led to core melting. The progression of the accident consisted, successively, of: a) loss of off-site power supply due to the earthquake; b) loss of on-site power supply caused by flooding from an exceptional tsunami, producing a station blackout (SBO); c) loss of the residual heat removal systems; d) heating of the reactor core and its subsequent melting; and e) generation of hydrogen by cladding oxidation at high temperatures, followed by hydrogen detonations that damaged the reactor buildings [1].

Several mitigation actions produced unsatisfactory results, which drastically worsened the accident, because of the sharing of structures, systems and components present in the designs of the Fukushima reactors. It is worth mentioning that such sharing is a rather common practice at several nuclear power stations.

This paper is organized as follows: Section 2 describes the accident; Section 3 discusses the safety culture concept; Section 4 discusses the design basis for flooding; Section 5 presents an approach to improve the reliability of safety systems; Section 6 points out that plant-specific training is an essential safety requirement; Sections 7 and 8 address the challenge of venting system design; Sections 9 and 10 present the concept of sharing structures, systems and components at nuclear power plants (NPP) and its impact on the Fukushima accident; Section 11 presents the hydrogen explosion problem; and Section 12 summarizes some lessons learned from the Fukushima accident.
A BRIEF DESCRIPTION OF THE FUKUSHIMA NUCLEAR ACCIDENT
On March 11, 2011, a severe earthquake occurred off the northeastern coast of Japan, with the epicenter about 180 km from the Fukushima Daiichi NPP, generating a tsunami that caused devastating damage over the whole nuclear site, which comprises six BWR reactors. The electric output of these reactors is 460 MW for Unit 1, 784 MW for Units 2, 3, 4 and 5, and 1100 MW for Unit 6. Reactors 1, 2 and 3 were operating at rated power before the event, while reactors 4, 5 and 6 were in outage. Considering the effects of the earthquake itself, one can conclude that the automatic protection systems of the power plants worked quite
well, scramming all operating units promptly. It has been suggested, however, that a small-scale loss of coolant accident may have occurred at Unit 1 [2]; this possibility is being examined further by the Japanese Government.

Since the external electric power sources had been lost, the emergency diesel generators started automatically, supplying power for residual heat removal until the tsunami arrived. The tsunami impact, however, caused severe operational and structural damage, rendering the diesel generators and batteries inoperative. Without AC or DC power to operate the decay heat removal systems, the fuel of reactors 1, 2 and 3 suffered melting, total in some cases and partial in others. A large amount of hydrogen was released as a consequence of the reaction between zirconium and high-temperature steam, producing strong explosions that damaged the buildings of Units 1, 3 and 4.

It should be noted that Unit 4 was shut down and in a refueling outage; all of its nuclear fuel had been removed from the reactor and placed in the spent fuel pool. Because of the hydrogen explosion at Unit 4, it was initially supposed that the spent fuel had become uncovered and was therefore producing hydrogen by oxidation. Subsequent assessments, however, concluded that the spent fuel stored in the pool was not damaged by the accident. In November 2011, the detection of Xenon-135 at Unit 2 raised doubts about possible recriticality, but it was concluded that spontaneous fission of actinides in the damaged fuel was solely responsible for the xenon production [3].
NUCLEAR SAFETY CULTURE
The International Nuclear Safety Advisory Group (INSAG), in its report on the Chernobyl accident, coined the term safety culture to describe the safety regime that should prevail in the operation of a nuclear plant; the concept was introduced globally to explain how the lack of knowledge and understanding of risk and safety by the employees and the organization contributed to the Chernobyl disaster [4]. Later, INSAG, under the auspices of the International Atomic Energy Agency (IAEA), broadened the safety culture concept, considering that the development of a safety culture must be embedded in the national legislative
and regulatory framework, establishing the proper chain of responsibility and authority for the required level of safety. In both operating and regulatory regimes, safety culture must be instilled in organizations through proper management attitudes and practices [5]. A review of nuclear incidents indicates that safety culture problems affect both highly developed and developing countries [6]. Safety culture issues can arise at all stages of organizational life, even in organizations previously recognized for their safety performance. Currently, most of the effort to improve safety culture has focused on nuclear power plants.

The Fukushima accident makes evident the need for improvements in the Japanese safety culture, taking into account the lessons learned from the Three Mile Island and Chernobyl accidents, as well as the development of tools to monitor its adequate implementation. It is therefore highly recommended to perform international audit programs (peer reviews) covering both the operational and regulatory regimes, in order to strengthen and enhance the effectiveness of the safety culture, taking into account the IAEA safety standards and good international practices.
BARRIER PROTECTION AGAINST TSUNAMI
Hydrological phenomena, such as flooding due to tsunamis, can create hazards that affect the safety of nuclear power plants, leading to the risk of common cause failure of essential safety systems, such as the emergency power supply systems or the electric switchyard, with the associated possibility of losing off-site power, the decay heat removal system and other vital systems [7] [8]. The IAEA states that a conservative analysis of tsunami effects should be made for inclusion in the nuclear power plant (NPP) design basis, taking into account the estimation of the probable maximum tsunami (PMT), in order to protect NPPs against all potential effects of external events, determined using historical data and geological, tectonic and seismic investigations [7].

The licensing design basis for tsunami flooding at Fukushima Daiichi was initially estimated from the effects in Japan of the tsunami generated by the magnitude 9.5 Chile earthquake of 1960. On this basis a PMT producing waves of up to 3.1 meters above mean sea level was defined [9]. Later, in 2002, a new methodology for estimating tsunamis was developed by the Japan Society of Civil Engineers, based on observations from the
Shioyazaky-oki earthquake (magnitude 7.9) of 1938, which resulted in a maximum water level of 5.7 meters [9] [10]. This value was not reviewed or validated by the Japanese nuclear regulatory body (NISA). The assessment was undertaken voluntarily by the plant operator (TEPCO), without any instruction from NISA, and was therefore not officially recognized in the licensing documents [9]. This estimate corresponds to the tsunami height at the shoreline, at the entrance to the intake structures. The run-up, i.e. the water height reached at the maximum inundation point, was not indicated in any presentation from TEPCO, and it seems that the run-up calculation did not consider the specific and detailed arrangement of the plant layout [9].

At the beginning of this century, geological evidence revealed the occurrence of the Jogan earthquake of 869 AD, with an estimated magnitude of 8.6. This earthquake caused a giant tsunami in the region of Sendai, in the province of Fukushima. According to estimates based on geological sediment analysis, the tsunami waves of 869 AD, like those of 2011, penetrated up to 4 km inland, causing a great flood [11]. The design-basis flood, however, was not updated to take this new historical data into account.

It must be emphasized that, since 2006, both NISA and TEPCO were aware of the possibility of an SBO at the Fukushima site if it were reached by a tsunami, and that NISA knew that TEPCO had not prepared any measures to lessen or eliminate the risk, yet failed to provide specific instructions to remedy the situation [2]. There is practically a consensus that the Japanese nuclear regulatory body and the plant operator did not follow international best practices and standards concerning defenses against extreme external events; thus, the resistance of the Fukushima Daiichi NPP to tsunamis was underestimated [2] [11]. It must also be noted that, during the assessment for the life extension of Unit 1, NISA did not impose, as a requirement, an update of the flood studies related to tsunami protection to include the Jogan earthquake in the design basis.

During a meeting in London in July 2011, the Vice-Chairman of the Japan Atomic Energy Commission noted that in recent years several warnings had been issued by the Japanese academic community about the vulnerability of nuclear power plants to earthquakes and tsunamis. He
also noted that, in 2009, some members of NISA had questioned the non-inclusion of the Jogan earthquake in the studies for updating the design basis of Japanese nuclear power plants [12].
DIVERSITY AND REDUNDANT SAFETY SYSTEMS
Redundancy is a common approach to improve the reliability and availability of a system, enabling safety systems to work adequately even if an individual item fails to perform on demand. The use of redundancy increases the cost and complexity of a system design; nevertheless, it is possible to postulate an event producing a common cause failure (CCF) that induces failures in two or more channels of a redundant system, which can render it unable to function as designed. The proper application of the concept of diversity is the way to protect redundant systems against CCF. Diversity includes differences between the system components in design, manufacturer, installation, software, operation and maintenance procedures, as well as differences in their environment and location, in order to prevent the loss of their redundant functions [13].

The off-site power was lost when the earthquake occurred and the emergency diesel generators started up throughout the power station, as expected; however, the redundant on-site power supply systems (emergency diesel generators and batteries) were lost due to the flooding caused by the tsunami. With the loss of DC power, the instrumentation became unavailable to determine the main operational parameters of the reactors or to remotely actuate DC-powered valves, except at Unit 3 [1].

Concerning location diversity for redundancies, it is generally agreed that installing the diesel generators and batteries at different elevations at the Fukushima site would have reduced the vulnerability of this system to flooding, ensuring the survival of the power generation system and, consequently, making the operation of the residual heat removal system feasible. It must be noted that recovering the power supply was a priority, connecting batteries in series to the terminals of the control panels in order to open valves and restore the measuring instruments; but there were no batteries stored at the Fukushima Daiichi site, and this had to be improvised by removing the batteries from employees' private cars and TEPCO's service vehicles [14].
Concerning design diversity for redundancies, the Fukushima Daiichi NPP had nine water-cooled and three air-cooled diesel generators. It is important to note that the sea water pumps used to cool the diesel generators were installed four meters above sea level and therefore became inoperative due to the flooding. Some air-cooled diesel generators (DG) at the Fukushima Daiichi NPP were not damaged by the tsunami because they had been installed on an upper floor; however, their metal-clad switchgear was flooded, because it was installed on the lower floor, preventing the use of these DGs.

One of the key lessons is that, in case of a CCF, equipment reserves and essential resources, such as batteries, small generators and air compressors, should be available in real time for the operation of safety systems. If these components had been readily available, the Fukushima accident could have been mitigated effectively. The possibility of replacing such equipment should therefore be incorporated into the redundant system. This suggests recommending the immediate availability of a DC power supply, through independent battery banks placed near the control room, allowing the opening and closing of valves of essential safety systems, such as venting and safety relief valves. An independent battery bank should also supply power to other instrumentation and control equipment. Additionally, small diesel generators should be installed to recharge these battery banks, extending the battery lifetime.

It must be noted that the absence of an electrical power supply triggered a series of human failures (including in decision-making) and failures of equipment that proved unreliable when activated. Equipment qualification for aggressive environments, especially of the instrumentation and control, is an essential prerequisite of nuclear reactor design and would provide greater support for decision-making during the management of severe accidents.

The concept of redundant systems in nuclear power plants should be revised and expanded to incorporate the new lessons learned from this accident. New safety system technologies are under study, and it is important that an international effort by different manufacturers be encouraged in order to apply passive and active redundancies to the next generations of reactors under licensing [15].
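To make the interplay between redundancy and common cause failure concrete, the short sketch below uses the simple beta-factor model often employed in reliability analysis. It is an illustrative calculation only: the per-channel failure probability and the beta values are assumed for the example and are not taken from the Fukushima data or from this paper.

```python
# Illustrative reliability sketch (assumed numbers, not from the Fukushima data):
# beta-factor model for a redundant safety system exposed to a common cause failure.

def system_failure_probability(q_channel: float, n_channels: int, beta: float) -> float:
    """Probability that all n redundant channels fail on demand.

    q_channel : failure probability of a single channel on demand
    beta      : fraction of that probability attributed to a cause common to all channels
    """
    q_common = beta * q_channel                  # common-cause part, shared by every channel
    q_independent = (1.0 - beta) * q_channel     # independent part, defeated by redundancy
    return q_common + (1.0 - q_common) * q_independent ** n_channels

if __name__ == "__main__":
    q = 1e-2  # assumed per-channel failure probability on demand
    for beta in (0.0, 0.05, 0.10):
        for n in (1, 2, 4):
            p = system_failure_probability(q, n, beta)
            print(f"beta={beta:.2f}  channels={n}  P(system fails on demand) ~ {p:.2e}")
```

With any appreciable common-cause fraction, adding further identical channels in the same location yields almost no improvement, which is precisely why the diversity and physical separation discussed above (different designs, manufacturers and elevations) aim to drive the common-cause contribution down.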
TRAINING
Nuclear safety is also closely linked to the capacity of the technical and operational staff and their ability to convert experience into operational procedures. For proper severe accident management, the personnel who perform the various actions under accident conditions must be adequately prepared to take effective on-site actions to prevent or mitigate the consequences of such an accident. These personnel must be acquainted with the expected plant behavior beyond design basis conditions and with its consequences; the computational simulator is an important tool for this operational training. It must be emphasized, however, that timely decision-making is closely related to familiarity with the operational characteristics of each system. Management decisions, in turn, are critical in the sense that implementing a decision can also have adverse effects; in such cases the potential negative consequences have to be properly assessed before an action is performed.

The Investigation Committee on the Accident at Fukushima (hereinafter the "Investigation Committee") reported that the shift staff had not realized that all isolation condenser (IC) valves were designed to be closed by the fail-safe function when all power supplies were lost. According to this reference, the shift team could not identify the operating status of the IC immediately after the tsunami [14]. Before the earthquake, no one on the shift staff in charge of Unit 1 knew the operating characteristics of the IC, and they were unaware that a visual inspection of the steam blow-out (and the operating noise) could indicate its operational status [14].

It thus becomes clear that the operators' weak familiarity with the main characteristics of the safety systems in emergency scenarios induced evaluation errors and mistaken decision-making. For example, water injection into the reactor core of Unit 1 was withheld during the early moments after the tsunami, causing the core to melt in an unexpectedly short time, because of the mistaken assessment of the operational status of the IC. Plant-specific training is therefore an essential requirement, since it allows a deeper knowledge of the behavior of the main safety subsystems. This accident showed the need for continuous and effective plant-specific training in preventive and mitigative actions on simulators,
which should take into account the identification and assessment of severe accident scenarios and their progression. It must be emphasized, once again, that timely decision-making is closely related to familiarity with the characteristics of each system.
VENT SYSTEM FAILURES
The free volume of the BWR Mark I containment is quite small relative to other containment types. Therefore, during an accident, the containment can be overpressurized in a relatively short time unless steps are taken to mitigate the pressure rise [16]. Under severe accident conditions, such as core meltdown, the containment pressure must be released to the atmosphere to maintain containment integrity. Although venting decreases the pressure in the containment, however, its implementation releases radioactive material into the environment.

One of the most comprehensive studies of containment venting was performed for the Peach Bottom Atomic Power Station, a BWR with a Mark I containment. A main conclusion of that study was that containment venting had limited potential for reducing the risk associated with the dominant severe accident sequences [17]. In 1989, the United States Nuclear Regulatory Commission (USNRC) recommended the installation of a hardened wetwell vent to reduce the vulnerability of BWR Mark I containments in case of severe accidents, improving the venting capability [18]. In general, the USNRC proposed a vent modification through the installation of a direct vent path from the torus to the main stack (Direct Torus Venting System, DTVS), bypassing the SGTS. Following the basic design of the USNRC recommendation, each individual plant owner designed and installed the hardened vent to meet its own plant-specific design criteria. Operators in Japan, including TEPCO, did the same [19], but there is insufficient information about the venting system actually installed at Fukushima.

In the basic design, the DTVS must be isolated from the SGTS by the installation of air-operated valves and a rupture disk. The valves of this system are normally closed and, in case of loss of electrical and pneumatic power, they close automatically. Additional filters were not considered necessary by the USNRC, because it was concluded that scrubbing in the
suppression pool during venting was effective in reducing the source term released to the environment.

As a consequence of the SBO, the venting line had to be configured manually, by opening the vent valves inside the reactor building, which required installing a DC power supply and a portable air compressor. This manual opening is complicated by the high radiation level that would be present following a reactor vessel failure, which would prevent access to these valves [16]. Indeed, the venting operations at Fukushima were interrupted several times by high radiation levels inside the reactor buildings [14].

As observed in the Fukushima accident, the venting actions did not work properly: i) in Unit 1, the venting operation produced a hydrogen explosion in the reactor building; ii) in Unit 2, the recovery staff was unable to configure the venting line, so the core was depressurized through the safety relief valves (SRV), damaging the primary containment by over-pressure. The most probable over-pressure failure location in Mark I containments is the upper half of the pressure suppression pool torus; on March 15 a strong sound was heard coming from Unit 2 and its suppression chamber pressure equalized with atmospheric pressure, indicating a suppression pool rupture, as expected [20]; iii) in Unit 3, the venting operation produced hydrogen explosions in Units 3 and 4 due to backflow into Unit 4 through the SGTS piping, since the exhaust pipes of Units 3 and 4 share the same main stack [21].

After venting began at Units 1 and 3, strong hydrogen explosions occurred at the top of these reactor buildings. How the hydrogen was transported and accumulated under the roof is not clear. One possibility is a leakage of hydrogen through the drywell head, as the containment pressure exceeded its design basis; another possibility is gas migration into the reactor building through the SGTS during the depressurization. It should be noted that, with the occurrence of the SBO, the SGTS valves should have closed automatically. The Investigation Committee reported that the venting configuration performed by the recovery staff consisted of manually opening a motor-operated valve near the stack and opening two air-operated valves (a larger and a smaller one) in parallel pipes [14]. As recommended by the USNRC, the DTVS design provides a direct vent path from the torus to the stack, bypassing the SGTS, through parallel pipes [18].
Based on the above, we suppose that the simultaneous opening of the parallel air-operated valves allowed venting through two paths, via the DTVS and via the SGTS. The ventilation pipes and filters of the SGTS were not designed to withstand large internal pressures, so these pipes would likely fail after the vent valves were opened, allowing radioactive steam, fission products and hydrogen to escape into the reactor building, with the hydrogen accumulating under the roof and likely producing an explosion [16].

Investigating the hydrogen explosion that occurred in Unit 4, TEPCO concluded that the containment venting from Unit 3 flowed into Unit 4 through the SGTS pipes, since the exhaust pipes of Unit 4 join those of Unit 3 at the main stack. It must be emphasized, however, that Unit 4 was in outage at the moment of the earthquake [21]. It is unclear why the isolation valves of the SGTS at Unit 4 were open, allowing the gas backflow into the reactor building and the subsequent hydrogen explosion, since these valves should have closed automatically with the loss of electric power, which would have ruled out the migration of hydrogen from Unit 3 to Unit 4. This fact probably points to a failure in the design of the DTVS installed at Fukushima or to a mistake in the manual alignment of the vent valves, with the SGTS valves having been opened.
FILTERED VENTING
As mentioned before, under severe accident conditions the SGTS does not have the operational capability to filter the gases during venting [16]. So, when venting was performed through the suppression pool (torus), pool scrubbing was the only filtration mechanism. There is another venting path, in which the gases in the containment vessel are released directly to the main stack, bypassing the beneficial scrubbing effect of the water in the torus; the first path is therefore preferable because it minimizes radioactive releases [14].

The installation of a filtered vented containment system in an existing nuclear power plant has been suggested as one approach to mitigating the effects of a severe accident, since it considerably reduces the radioactive releases to the environment during venting [21]. Several manufacturers have presented filtered venting system designs as an update to NPP safety systems. Among its main requirements, a filtered venting system should be able to operate without electric power, should provide monitoring of adequate performance parameters in the control
room to indicate the status of the system during operation, and should avoid a potential hydrogen detonation when the system is initiated following a severe accident [21]. Additionally, it has been reported that the Japanese were interested, in the 1990s, in installing filtered venting systems in their nuclear plants, but the implementation was never carried out [22]. The OECD has described the filtered venting systems selected for implementation in Germany, France and Sweden [23].

The unfavorable cost/benefit ratio has discouraged the installation of filtered venting systems during the last thirty years at almost all NPPs. However, in view of the consequences of Fukushima, we believe that mandatory implementation of filtered venting will probably become a tendency among the main international nuclear regulatory bodies. The desired characteristics of the filtered venting system should be studied in depth by each regulatory body and widely discussed internationally, in order to produce a consensus in the nuclear sector.
STRUCTURES, SYSTEMS AND COMPONENTS SHARED AT NUCLEAR POWER PLANTS
SSC sharing aims mainly to reduce construction, operation and maintenance costs, and is widely used by the nuclear industry and by other technologies. SSC sharing between units at nuclear power plants translates into a reduction of the overall construction time and of the respective costs for equipment, materials and structures [24]. The reduction of these costs may have implications for sensitive items related to nuclear safety, involving risks that must be quantified. Regardless of the material benefits, however, safety cannot be compromised; that is, SSC sharing cannot impair the ability of the systems to perform their safety functions. Thus, an important objective of sharing systems is to reduce cost without disturbing nuclear safety.

For reasons of safety, operability and licensability, the choice of SSCs to be shared cannot be based exclusively on economic assessments. A probabilistic risk assessment must be performed taking into account the safety impact of SSC sharing and its consequences at a multi-unit station, compared with the results obtained for a single-reactor site. The existence of sharing increases the likelihood of scenarios that could impact a single unit independently, and creates the potential for scenarios involving several units at the site, as observed at Fukushima.
According to Oak Ridge National Laboratory, there are three different sharing strategies. In the first, a single SSC supports both units simultaneously. In the second, there are independent SSCs for each unit, interconnected so as to serve both units; this configuration increases the availability of the SSCs [24], but these SSCs should be designed to support the demand when one unit is lost, avoiding a system shutdown due to overload. In the third, there are independent SSCs for each unit, but standby or spare equipment is shared. It should be noted that, before Fukushima, the assumption of simultaneous events at multi-unit stations was not considered; in such a scenario, the availability of standby equipment and resources could therefore be incorrectly estimated, as illustrated by the sketch at the end of this section.

The USNRC determined that, because of the low probability of a severe reactor accident, a suitable design basis for multi-unit nuclear power plants was the assumption that an accident occurs in only one of the units at a time, with all remaining units proceeding to an orderly shutdown and a maintained cooldown condition [25]. Such a scenario, however, is not what happened in the Fukushima accident, where several units were impacted simultaneously. The USNRC had already limited the opportunities for sharing of on-site power systems at multi-unit sites, because sharing generally reduces the number and capacity of the on-site power sources to levels below those required for the same number of units located at separate sites [25].

Thus, one must question whether SSC sharing weakens the functionality of the installed redundant devices. This could affect the availability and proper use of these shared devices during accident mitigation actions. As is well known, redundancy is a common approach to improve the reliability and availability of a system, enabling safety systems to perform their functions satisfactorily even if individual systems fail when demanded, at the price of increased cost and complexity of the system design. At this point, we question whether SSC sharing could be considered to conflict with the redundancy concept.
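The sketch below is a deliberately simple illustration, with assumed unit counts and pool sizes that do not come from this paper or from the ORNL report, of why a shared pool of standby equipment sized under the "one affected unit at a time" assumption can be inadequate when an external event affects every unit on the site at once.

```python
# Illustrative sketch with assumed numbers: coverage provided by a shared pool of
# standby equipment when an external event affects several units simultaneously.

def uncovered_units(n_units: int, shared_spares: int, units_affected: int) -> int:
    """Number of affected units that cannot be served from the shared spare pool."""
    demand = min(units_affected, n_units)   # each affected unit is assumed to need one spare
    return max(0, demand - shared_spares)   # shortfall once the shared pool is exhausted

if __name__ == "__main__":
    n_units = 6        # assumed six-unit site
    shared_spares = 1  # pool sized for the single-unit-accident design basis (assumed)
    for affected in (1, 2, 6):
        short = uncovered_units(n_units, shared_spares, affected)
        print(f"{affected} affected unit(s): {short} unit(s) left without standby equipment")
```

Under the single-unit assumption the pool is sufficient; in a site-wide event the same pool leaves most units uncovered, which is essentially the resource situation described for batteries, compressors and fire engines in the sections that follow.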
IMPACT EVALUATION OF THE SSC SHARING IN THE FUKUSHIMA ACCIDENT The Fukushima accident produced many issues that need to be answered. This knowledge needs to be incorporated in the new safety systems to be
implemented in the designs of the next reactor generation. The current safety designs must also be updated to consider the new requirements emerging from Fukushima. This represents a challenge for all stakeholders, including the nuclear industry and the nuclear regulatory bodies.

As explained previously, before Fukushima it had been determined that, because of the low probability of a severe reactor accident, a suitable design basis for multi-unit nuclear power plants was the assumption that an accident would occur in only one of the units at a time. In a technical analysis, however, relatively improbable events cannot be ruled out, otherwise there is a risk of not having the resources for their mitigation. During the mitigation of the Fukushima accident, the influence of SSC sharing on the final outcome can be observed. Thus, in such a technically complex scenario some points should be analyzed separately, such as the current SSC sharing methodology at multi-unit stations. In the following subsections we evaluate some of the SSCs shared at Fukushima that impaired the ability to carry out actions to mitigate the consequences of the nuclear accident.
Control Room Sharing
At the Fukushima Daiichi NPP, adjacent reactors shared a main control room: one for Units 1 and 2, one for Units 3 and 4, and another for Units 5 and 6. For each control room there was only one shift supervisor, responsible for making decisions during the course of the accident concerning the control and operation of both units and for reporting all basic information needed by the emergency response organization [14]. This arrangement is not unusual: there are several multi-unit US nuclear power plants where two complete control rooms share the same shift supervisor, and in the United Kingdom the AGR sites have two reactors with shared control rooms supervised by the same shift supervisor. The structural arrangement of the Fukushima Daiichi control rooms is therefore not uncommon.

According to the Investigation Committee, communication failures between the shift staff and the on-site Emergency Response Organization (ERO) caused a misunderstanding concerning water injection into Unit 1. The ERO worked from a mistaken evaluation of the IC status and therefore believed that the situation in Unit 2 was more dangerous than that in Unit 1. Efforts were thus focused on Unit 2 instead of Unit 1,
delaying the decision on alternative water injection into Unit 1. As a consequence of the delay in implementing mitigative actions in Unit 1, the accident at that reactor rapidly progressed to total fuel melting, about six hours after the tsunami arrival [14].

The sharing of control rooms between different units at Fukushima Daiichi thus affected the accident mitigation actions: the mitigation actions for Unit 2 had priority over those for Unit 1, impairing the cooling of that reactor and leading to a fast meltdown. It must be emphasized that, according to the USNRC acceptance criteria [26], the control room should be considered a structure not subject to sharing. The situation is aggravated when the units sharing a control room have different safety systems, as was the case for Fukushima Units 1 and 2.

Due to the progression of the accident in Unit 1, high radiation levels occurred inside the main control room shared by Units 1 and 2, and it became necessary to avoid certain areas of that control room [14]. Staff access to, and permanence in, the Unit 2 control area could thus have become prohibitive because of the high radiation levels. It is therefore evident that an accident in one unit can impair the other unit through the shared control room.

It is important to remember that, before the Fukushima accident, the simultaneous occurrence of a severe accident at several units was not considered, so control room sharing was supposed not to disturb the ability of each team to perform its operational functions individually. This was not confirmed in practice. We understand that the previous USNRC sharing recommendation must be revised in light of the lessons of the Fukushima accident. This subject must be analyzed in depth because of its operational consequences for human factors engineering and human performance, in order to evaluate and improve the safety, efficiency and robustness of these work systems.
Main Stack Sharing
Fukushima Units 3 and 4 shared the same main stack, which caused undesirable interactions between these units because of the physical interconnection between them; similarly, Units 1 and 2 shared another main stack. As the accident progressed, the containment pressure at Unit 3 reached values above its structural design limits, necessitating venting to the
main stack in order to depressurize the containment gradually [27]. Although venting reduces the containment pressure, maintaining its integrity, its implementation releases radioactive materials to the environment.

Since the venting pipes of Units 3 and 4 shared the same main stack, this interconnection (a consequence of the sharing configuration) made hydrogen transport into Unit 4 possible. The hydrogen accumulated and caused an explosion in Unit 4, damaging its reactor building, even though the unit was in outage at the moment of the earthquake [21]. It is unclear why the Unit 4 isolation vent valves were open, contrary to their operational logic; in effect there was an operational error, and the physical status of the valves (open) allowed hydrogen to flow into the reactor building, mainly due to the pressure difference between the shared venting pipelines.

The inappropriate main stack sharing in the reactor design thus allowed a single failure of the venting valves to damage Unit 4, increasing the consequences of the accident at the Fukushima Daiichi site. Weighing the economic benefit obtained from main stack sharing against the observed damage leads to the conclusion that this sharing design was inadequate. Individual vent pipes, including individual main stacks, without shared devices, seem to be the best design choice for multi-unit stations.
Emergency Staff Sharing
According to the IAEA report [28], the emergency staff is composed of evaluators, decision makers and implementers, each with specific responsibilities. At Fukushima, the ERO staff was undersized for the occurrence of simultaneous events at all units. This inadequacy of the technical support staff led to misunderstandings, failures and delays in decision-making, mainly as a consequence of personal stress. The situation becomes more severe when the same staff is requested to perform actions at plants with different safety configurations, for example the different water injection systems installed in Units 1 and 2. Considering the technological differences between the Fukushima reactors, unit-specific training for accident conditions, covering all possible scenarios, is necessary. The staff should be familiar with the expected plant behavior beyond design basis conditions and with its consequences, which can be achieved using simulators for operational training. This makes even more evident the need for specific training of the
emergency staff for each reactor technology present at multi-unit stations. Emergency staff sharing compromised the implementation of the actions needed to minimize the consequences of the accident, and failures in accident management occurred in all the damaged Fukushima units. For this reason, it is suggested that there be a dedicated emergency staff for each unit at multi-unit stations, in order to avoid loss of focus in case of emergency.
Resources for Emergency Actions
At Fukushima, essential equipment and resources required for emergency actions, such as batteries, small AC generators and air compressors, which should have been promptly available for mitigative actions, were lacking. The non-availability of these resources stemmed from the very low estimated probability of a single event affecting several units at the site simultaneously. It was believed that the interconnection among the various shared systems would provide robust redundancy, so these additional resources would not be needed. It should be emphasized that accident scenarios, even improbable ones, cannot be discarded in a probabilistic risk assessment; otherwise, the conditions and resources required for mitigation will not be available if the event occurs.

Additionally, according to the IAEA report [28], water is one of the resources that should be available for severe accident mitigation after the loss of water injection capability, when core damage has become inevitable. Alternative water injection is then necessary in order to flood the drywell in an attempt to preclude melt-through of the reactor vessel [29]. As a result of the rapid accident progression, the molten fuel started to damage the reactor pressure vessel of Unit 1 five hours after the tsunami arrival [11], so mitigative actions to keep the fuel inside the vessel became impossible. The only alternative means of water injection at Fukushima were the pumps installed on fire engines; of the fire engines at the site, one was destroyed by the tsunami and another remained stranded between Units 5 and 6 because of the destruction of the internal roads, leaving only one fire engine to serve all the damaged plants of the site [14].
The existence of only three fire engines to serve the six nuclear power units at Fukushima can be understood as a form of resource sharing that proved inadequate for a multi-unit accident scenario. ORNL describes the use of portable pumps provided by fire engines as a possible tool for alternative water injection into the core during mitigation actions, keeping the damaged core inside the reactor vessel and thereby avoiding its subsequent failure and the consequent release of radioactivity to the environment; such pumps, however, must have adequate capacity and enough power to serve properly as an alternative safety system [29]. The non-availability of these resources, in adequate quantity and capacity, disturbed timely decision-making and compromised the actions that were performed.
HYDROGEN EXPLOSIONS
It should be emphasized that, until the explosion at Unit 1, no one at the Fukushima Daiichi NPP, at the TEPCO Head Office or in the Japanese Government had considered the possibility of a hydrogen gas explosion occurring in the reactor building [14]. Since this possibility had not been considered, no mitigation action related to hydrogen concentration was taken before the explosion at Unit 1, and further hydrogen explosions later occurred at Units 3 and 4, as mentioned before. There were thus failures in identifying the accident progression sequence, even though it was already described in the literature. It should be noted, however, that the available mitigation strategies against hydrogen explosions were very limited.

During the progression of a severe accident, the reactor core temperature increases because of the inoperability of the residual heat removal system, resulting in the generation of large amounts of hydrogen by the oxidation of the zircaloy cladding in high-temperature steam. It therefore becomes evident that an important scientific challenge for the safe use of nuclear energy for electricity generation is the minimization of hydrogen generation in the event of a severe accident. Hydrogen combustion can cause explosions that may damage the containment building; since it represents a hazard to containment integrity, hydrogen mitigation strategies are an essential part of any accident management program. Controlled venting at an early stage of the accident is assumed to prevent high hydrogen concentrations in the containment [30].
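For reference, the cladding oxidation mentioned above is the zirconium-steam reaction; the figures below are standard textbook values quoted for orientation, not data reported in this paper.

\[
\mathrm{Zr} + 2\,\mathrm{H_2O} \;\longrightarrow\; \mathrm{ZrO_2} + 2\,\mathrm{H_2} + \text{heat} \qquad (\approx 6\ \mathrm{MJ\ per\ kg\ of\ Zr\ oxidized})
\]

With molar masses of roughly 91 g/mol for zirconium and 2 g/mol for hydrogen, each kilogram of oxidized zirconium yields about 22 mol, i.e. roughly 44 g, of hydrogen gas. Oxidation of even a modest fraction of the tens of tonnes of zircaloy typically present in a large BWR core can therefore produce hundreds of kilograms of hydrogen, while the reaction heat itself accelerates the core degradation.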
In most countries there are no strict regulatory requirements on the implementation of hydrogen mitigation strategies for existing plants; for new reactors that are planned or under construction, however, these considerations must be included in the design [30]. Some solutions are being considered, such as passive autocatalytic recombiners (PAR) for mitigation of the hydrogen generated during a severe accident by zircaloy oxidation at high temperature. These devices recombine hydrogen with oxygen, producing steam and heat; this exothermic reaction may overheat the catalyst elements and consequently cause an unintended ignition of the hydrogen/air mixture, and research is ongoing to create PARs with a reduced probability of hydrogen ignition [31]. Another mitigation strategy is a deliberate ignition system that initiates combustion wherever and whenever flammable mixtures arise, removing the hydrogen by slow deflagration. Some works have recommended a dual concept, using an integrated recombiner-igniter system for gas control, in order to cover a broader spectrum of accident sequences and to provide adequate diversity [32] [33].

The search for alternative materials to replace zirconium in the fuel cladding is in progress, as a way to prevent or minimize hydrogen production. Although all metals react with high-temperature steam, stainless steel would be a better option than zirconium alloys with respect to oxidation resistance. Another option under consideration is the use of ceramic materials, such as silicon carbide, to minimize hydrogen production [34]. The material eventually chosen must also have adequate neutronic properties for use in nuclear reactors.

In the Fukushima accident, early venting could not be implemented because of the loss of electric power and of the pneumatic supply, and there were no other procedures or equipment available as a mitigation strategy. The later completion of the venting, after manual configuration of its path, was followed by strong hydrogen explosions in Units 1, 3 and 4. Additionally, it is important that the hydrogen concentration in the containment be permanently monitored, since some accident management procedures and decisions may depend directly on the hydrogen concentration value; there were no instruments for measuring hydrogen concentration at the Fukushima NPP [14].
As already mentioned, the hydrogen problem must be rethought; it remains a challenge for nuclear manufacturers and for the scientific community.
LESSONS LEARNED
The SSC sharing at Fukushima showed that mitigation actions did not produce the results expected to prevent the evolution of the accident, which was worsened by the emergency staff's low familiarity with the safety systems. It is therefore important that the periodic training program include beyond design basis scenarios, so that the emergency staff has full knowledge of the safety-related systems. It is also evident that the possibilities for sharing at multi-unit sites should be carefully re-analyzed, since their applicability has become more restricted. The main events that occurred during the Fukushima accident were considered, indicating the need to improve the techniques and procedures for mitigating their consequences. The lessons learned from this accident are summarized below.

Lesson learned 1―A complex failure scenario occurred at Fukushima, including omissions by the nuclear regulatory body due to a lack of compliance with IAEA safety standards and good international practices, which allowed the probability of external events to be underestimated. It is therefore highly recommended to perform international technical audit programs, in order to strengthen and enhance the effectiveness of the safety culture.

Lesson learned 2―The electrical interconnection between all the units of a power station, as existed at Fukushima, cannot be regarded as a robust redundancy against common cause failure events; a more adequate use of the diversity concept is the effective way to protect redundant systems and minimize the undesirable occurrence of an SBO. Beyond this, the concept of redundant systems in nuclear power plants should be revised and expanded to incorporate the lessons from this accident; the new redundant systems will necessarily be more complex and embedded in the concept of defense in depth.

Lesson learned 3―The immediate availability of a DC power supply is recommended, through independent battery banks placed near the control room, allowing the opening and closing of the valves of essential safety systems, such as venting and safety relief valves, and also supplying power to other instrumentation and control equipment. Additionally, it would be important to install
small diesel generators to recharge these battery banks, extending their lifetime.

Lesson learned 4―The absence of an electrical power supply triggered a series of human failures (including in decision-making) and failures of equipment that proved unreliable when activated. Equipment qualification for aggressive environments, particularly of the instrumentation and control, is an essential prerequisite of nuclear reactor design and would provide greater support for decision-making during the management of severe accidents.

Lesson learned 5―The severity of the Fukushima accident points to the need for an international consensus among all stakeholders on raising the level of nuclear safety. After Fukushima it becomes necessary to define new safety assessment criteria, including acceptance criteria for the use of SSC sharing, taking into account the possibility of simultaneous accidents at several reactors of a multi-unit station. The use of sharing to reduce costs may impair nuclear safety, so the risks involved in sharing must be quantified. Regardless of the material benefits, safety must not be compromised, and for nuclear safety reasons the SSC sharing design must not be based exclusively on economic assessments.

Lesson learned 6―At Fukushima there was one main control room for each pair of adjacent units. In the first moments of the accident, efforts were focused on Unit 2 instead of Unit 1, delaying the decision to begin alternative water injection into Unit 1. This work therefore concludes that each control room should be considered a structure not subject to sharing, avoiding competition for decision-making priority between plants during mitigative actions.

Lesson learned 7―Training for emergency situations was found to be inadequate, resulting in delayed decision-making and deterioration of the situation as the time windows for applying the mitigative procedures were lost. Training in preventive and mitigative actions can be performed on simulators, which should take into account the identification and assessment of severe accident scenarios and their progression.

Lesson learned 8―The lack of support staff, resources and equipment did not allow the mitigative actions to be performed adequately in a scenario of simultaneous accidents at several units of the Fukushima Daiichi NPP. As observed in this study, there was a high degree of improvisation in the actions taken to prevent the progression of the
accident, since the AMP documentation contained no procedures to be followed in accident conditions caused by external events.

Lesson learned 9―The venting actions did not work properly at Fukushima, so the venting design must be rethought. In view of the accident consequences, we believe that the mandatory implementation of filtered venting will become a tendency among the main international nuclear regulatory bodies.

Lesson learned 10―The hydrogen problem must also be rethought; it remains a challenge for nuclear manufacturers and for the scientific community. The search for alternative materials to replace zirconium in the fuel cladding, as a way to minimize hydrogen production, is in progress. Meanwhile, some solutions already available, such as autocatalytic recombiners and igniter systems, could be used to control the hydrogen concentration in the primary containment.

Lesson learned 11―Because of the main stack sharing, hydrogen migration from Unit 3 to Unit 4 caused an explosion that destroyed the Unit 4 reactor building, despite the unit being in outage. In light of this, the sharing of venting pipelines, including the main stack, should be avoided in multi-unit station designs.

Lesson learned 12―Independently of improvements to the safety systems, it is essential to improve the personnel's knowledge of how to act during a severe accident. These efforts should focus on the development of training programs, with extensive use of simulators as the main tool for operational training in severe accident scenarios.
CONCLUSIONS
This work performed a critical analysis of the Fukushima nuclear accident, focusing on the common cause failures produced by the tsunami and on the behaviour of the main redundant systems. It also assessed the mitigative procedures implemented during the accident and the consequences of those actions. Our analysis shows that the accident management fell short of expectations and did not prevent the progression of the accident. We showed that the inappropriate use of the SSC sharing concept in safety-related devices at multi-unit nuclear power plants had a negative effect on nuclear safety and contributed to the Fukushima accident scenario.

The NPPs of the world are waiting for the technical answers to the Fukushima lessons. Such improvements will certainly be implemented in the
next generation of reactors as the technology evolves. Technical audits could be an important mechanism for identifying the main procedures to be implemented in each operating NPP and their respective priorities. It is becoming clear that the continued use of nuclear energy depends on evolving technologies and improved procedures being incorporated into the emergency systems, with appropriate redundancies, ensuring adequate safety for the public, the workers and the environment. The effort to achieve such a condition must come from all stakeholders, including the international organizations.
ACKNOWLEDGEMENTS
We thank CNEN and FAPERJ for their financial support and IME for providing the facilities.
REFERENCES
1. Institute of Nuclear Power Operations (2011) Special Report on the Nuclear Accident at the Fukushima Daiichi Nuclear Power Station, INPO 11-005. INPO, Atlanta.
2. The National Diet of Japan (2012) The Official Report of the Fukushima Nuclear Accident Independent Investigation Commission. The National Diet of Japan, Tokyo.
3. Thomé, Z.D., Gomes, R.S., Silva, F.C. and Gomes, J.D.R.L. (2012) An Attempt to Confirm the Origin of 135Xe Detected in the Fukushima Daiichi II Nuclear Power Plant. Nuclear Engineering and Design, 247, 123-127. http://dx.doi.org/10.1016/j.nucengdes.2012.03.015
4. International Nuclear Safety Advisory Group (1992) The Chernobyl Accident: Updating of INSAG-1, INSAG-7. IAEA, Vienna.
5. International Nuclear Safety Advisory Group (1991) Safety Culture, INSAG-4. IAEA, Vienna.
6. International Atomic Energy Agency (2002) Self-Assessment of Safety Culture in Nuclear Installations―Highlights and Good Practices, TECDOC-1321. IAEA, Vienna.
7. International Atomic Energy Agency (2003) Flood Hazard for Nuclear Power Plants on Coastal and River Sites, Safety Standards Series NS-G-3.5. IAEA, Vienna.
8. International Atomic Energy Agency (2011) Meteorological and Hydrological Hazards in Site Evaluation for Nuclear Installations, Specific Safety Guide SSG-18. IAEA, Vienna.
9. International Atomic Energy Agency (2011) International Fact Finding Expert Mission of the Fukushima Dai-ichi NPP Accident Following the Great East Japan Earthquake and Tsunami, Mission Report. IAEA, Vienna.
10. Japan Society of Civil Engineers (2002) Tsunami Assessment Method for Nuclear Power Plants in Japan. JSCE, Tokyo.
11. Acton, J.M. and Hibbs, M. (2011) Why Fukushima Was Preventable. Carnegie Endowment for International Peace, Washington DC.
12. Suzuki, T. (2011) The Fukushima Nuclear Accident: Lessons Learned (So Far) and Possible Implications. Special Seminar on the Fukushima Dai-ichi Incident: Implications for the UK, France, Japan and the International Community, London, 6 July 2011.
70
Safety Science and Technology
13. Preckshot, G.G. (1994) Method for Performing Diversity and Defensein-Depth Analyses of Reactor Protection Systems. U.S. Nuclear Regulatory Commission, Washington DC. 14. Hatamura, Y., Oike, K., et al. (2012) Investigation Committee on the Accident at Fukushima Nuclear Power Stations of Tokyo Electric Power Company. http://www.cas.go.jp/jp/seisaku/icanps/eng/finalreport.html 15. Kaufmann, A., Grouchko, D. and Groun, R. (1977) Mathematical Models for the Study of the Reliability of Systems. Academic Press, New York. 16. Kelly, D.L. (1991) Overview of Containment Venting as an Accident Mitigation Strategy in US Light Water Reactors. Nuclear Engineering and Design, 131, 253-261. http://dx.doi.org/10.1016/00295493(91)90283-N 17. Dallman, R.J. and Galyean, W.J. (1988) Containment Venting as an Accident Management Strategy for BWRs with Mark I Containments. Nuclear Engineering and Design, 121, 421-429. http://dx.doi. org/10.1016/0029-5493(90)90022-P 18. United States Nuclear Regulatory (1989) Installation of a Hardened Wetwell Vent (Generic Letter 89-16). http://www.nrc.gov/reading-rm/ doc-collections/gen-comm/gen-letters/1989/gl89016.html 19. Davies, L. (2011) Beyond Fukushima: Disasters, Nuclear Energy, and Energy Law. Brigham Young University Law Review, 2011, 19371989. 20. Greene, S.R. (1990) The Role of BWR Secondary Containments in Severe Accident Mitigation: Issues and Insights from Recent Analyses. Nuclear Engineering and Design, 120, 75-86. http://dx.doi. org/10.1016/0029-5493(90)90286-7 21. Tokyo Electric Power Company (2012) Investigation of the Cause of Hydrogen Explosion at the Unit 4 Reactor Building. http://www.nsr. go.jp/data/000059292.pdf 22. Schlueter, R.O. and Schmitz, R.P. (1990) Filtered Vented Containments. Nuclear Engineering and Design, 120, 93- 103. http://dx.doi. org/10.1016/0029-5493(90)90288-9 23. Organization for Economic Co-Operation and Development (1988) Filtered Containment Venting Systems. Note on the Outcome of the May 1988 Specialists Meeting on Filtered Containment Venting Systems, Paris, 17-18 May 1988.
The Fukushima Nuclear Accident: Insights on the Safety Aspects
71
24. Muhleim, M.D. and Wood, R.T. (2007) Design Strategies and Evaluation for Sharing Systems at Multi-Unit Plants, ORNL/LTR/ INERI-BRAZIL/06-01. Oak Ridge National Laboratory, Oak Ridge. 25. US Nuclear Regulatory Commission (1975) Shared Emergency and Shutdown Electric System for Multi-Unit Nuclear Power Plants, Regulatory Guide 1.81. U.S. Nuclear Regulatory Commission, Washington DC. 26. US Nuclear Regulatory Commission (2003) Control Room Habitability at Light-Water Nuclear Power Reactors, Regulatory Guide 1.196. U.S. Nuclear Regulatory Commission, Washington DC. 27. Hirano, M., et al. (2012) Insights from Review and Analysis of the Fukushima Dai-Ichi Accident. Journal of Nuclear Science and Technology, 49, 1-17. 28. International Atomic Energy Agency (2009) Severe Accident Management Programmes for Nuclear Power Plants, IAEA Safety Guide NS-G-2.15. IAEA, Vienna. 29. Cook, D.H., Greene, S.R., Harrington, R.M., Hodge, S.A. and Yue, D.D. (1981) Station Blackout at Browns Ferry Unit One―Accident Sequence Analysis, NUREG/CR-2182. Oak Ridge National Laboratory, Oak Ridge. 30. International Atomic Energy Agency (2011) Mitigation of Hydrogen Hazards in Severe Accidents in Nuclear Power Plants, IAEATECDOC-1661. IAEA, Vienna. 31. Reineck, E., Tragsdorf, I.M. and Gierling, K. (2004) Studies on Innovative Hydrogen Recombiners as Safety Devices in the Containments of Light Water Reactors. Nuclear Engineering and Design, 230, 49-59. http:// dx.doi.org/10.1016/j.nucengdes.2003.10.009 32. Heck, R., Kelber, G., Schmidt, K. and Zimmer, H.J. (1995) Hydrogen Reduction Following Severe Accidents Using the Dual RecombinerIgniter Concept. Nuclear Engineering and Design, 157, 311-319. http:// dx.doi.org/10.1016/0029-5493(95)01009-7 33. Bröckerhoff, P., von Lensa, W. and Reinecke, E. (2000) Innovative Devices for Hydrogen Removal. Nuclear Engineering and Design, 196, 307-314. http://dx.doi.org/10.1016/S0029-5493(99)00310-6 34. Electric Power Research Institute (2011) Silicon Carbide Provides Opportunity to Enhance Nuclear Fuel Safety. http://mydocs.epri.com/ docs/CorporateDocuments/Newsletters/NUC/2011-09/09d.html
Chapter 4
An Augmented Framework for Formal Analysis of Safety Critical Systems
Monika Singh, V. K. Jain
College of Engineering & Technology (FET), Mody University of Science & Technology, Laxmangarh, India
ABSTRACT
This paper presents an augmented framework for analyzing Safety Critical Systems (SCSs) formally. Due to the high risk of failure, the development process of SCSs requires more attention. Model-driven approaches are one way to develop SCSs so that they accomplish the critical and complex functions SCSs are supposed to perform. Two model-driven approaches, the Unified Modeling Language (UML) and Formal Methods, are combined in the proposed framework, which enables the analysis, design and testing of the safety properties of SCSs more rigorously, in order to reduce ambiguities and enhance the correctness and completeness of SCSs. A real-time case study is discussed in order to validate the proposed framework.
Citation: Singh, M. and Jain, V. (2017), “An Augmented Framework for Formal Analysis of Safety Critical Systems”. Journal of Software Engineering and Applications, 10, 721-733. doi: 10.4236/jsea.2017.108039. Copyright: © 2017 by authors and Scientific Research Publishing Inc. This work is licensed under the Creative Commons Attribution International License (CC BY). http://creativecommons.org/licenses/by/4.0
Keywords: Unified Modeling Language, Formal Methods, Z Notation, Safety Critical System
INTRODUCTION
The rest of the paper is organized as follows: Section 2 presents the methodology and components of the augmented framework.
Figure 1: Formal model for developing safety critical systems.
Section 3 presents the formal model of SCSs, Section 4 presents the simulation, and Section 5 presents the conclusion.
METHODOLOGY AND COMPONENTS
The components of this holistic approach are the Unified Modeling Language and Z Notation [5], a formal method; they are described briefly as follows.
Unified Modeling Language (UML)
UML [6] is a modeling technique that combines object-oriented methods and concepts. It enhances the analysis and design of a software system by allowing more cohesive relationships between objects. It has been observed that a graphical representation of a model is easily accessible and understandable to the user, and the primary gap between the developer and the user is readily bridged by the graphical description. UML is composed of nine diagrams: use case diagram, class diagram, sequence diagram, state diagram, activity diagram, interaction diagram, component diagram, deployment diagram and package diagram. A graphical representation always gives a better understanding of the proposed system.
Z Notation
The proposed framework is presented in Figure 1, which helps in: 1) capturing and improving the readability of the requirements of Safety Critical Systems; 2) modeling and verifying the correctness of Safety Critical Systems; and 3) constructing and testing the safety-critical cases.
Methodology
The requirements are captured by using a graphical modeling language, i.e., the Unified Modeling Language (UML). Once the requirements are captured, stereotypes are used in order to improve their readability. Formal models are developed for the use case, class and sequence diagrams of UML in order to depict various system-level properties. For example, the formal model of the use case diagram ensures that the functional requirements are complete, consistent and unambiguous. The formalization of the class diagram provides the correct design specification, and the formal transformation of the testing criteria helps to assure that the test specifications are complete and consistent. Figure 1 presents an overview of the proposed approach. The following steps are associated with this approach: 1) capturing and improving the readability of the informally defined requirements of safety-critical systems, 2) formalization of various aspects of critical systems and formal verification of correctness by construction, as well as 3) validating that the required safety properties are met.
FORMAL ANALYSIS OF SAFETY CRITICAL SYSTEMS (SCSs)
This section is divided into three segments.
Capturing and Improving the Readability of Requirements of Safety Critical Systems
The requirements are usually written in natural language. To get a better understanding of the system functionality, graphical languages such as the Unified Modeling Language (UML) are used. A graphical representation always gives a better understanding of the proposed system. The UML use case diagram defines the behaviour of a system, i.e., the functionality of the system. Therefore, one can get a better understanding of the system behaviour by making a use case diagram of the system, which further forms the root of the Software Requirement Specification (SRS). Although UML has numerous good attributes, it is not accepted for designing safety-critical systems alone. One of the reasons is the lack of precision in the semantics used in the graphical model. To improve the readability of requirements, Extension Mechanisms [7] such as stereotypes, tagged values and constraints are used. The UML Extension Mechanisms are used to extend UML by: 1) adding new model elements, 2) creating new properties, and 3) specifying new semantics.
Extension Mechanisms:
• Stereotype
• Tagged Value
• Constraint
In the context of this paper, the stereotype is used to serve this purpose. A stereotype is a model element that denotes additional values, additional constraints and, optionally, a new graphical representation. Moreover, a stereotype allows us to attach a new semantic meaning to a model element. Two types of stereotypes are used in this paper:
• “Include”―Include is used to extract use case fragments that are duplicated in multiple use cases. The included use case cannot stand alone, and the original use case is not complete without the included one.
• “Extend”―Extend is used when a use case adds steps to another first-class use case.
Model and Verify the Correctness of Safety Critical Systems
The class diagram is the basis for the system structure and helps in designing the modules of a system but, being a member of the UML family, it lacks precision in its semantics. To address this problem, we construct formal models for the class diagram and the sequence diagram in order to capture the static and dynamic aspects of the system, respectively. This guides the developer from the informal representation of the class and sequence diagrams (in UML) to building the corresponding parts of the system module via formal modeling and verification in Z Notation and the accompanying toolsets. Within the scope of this approach, we propose using an automated theorem prover to demonstrate that the formal system models are themselves well defined. Moreover, in order to assure that there is no logical inconsistency and that no model element contains an infeasible mathematical definition, simulation of the Z specification has been done with the Z/EVES [8] tool.
Construct and Test the Safety Critical Cases
Testing [9] plays an important role in checking the correctness of system implementations. To test a system, test cases are formed and the system behavior is observed during execution. Based on the test execution, a decision is made about whether the system functions correctly. The criterion for the correctness of test cases is specified in the system specification. A specification prescribes the “what” of the system, i.e., the functions that a system is supposed to perform, and accordingly forms the foundation for the testing criteria. As system specifications are documented in natural language (informally), which is generally incomplete and ambiguous in nature, many problems may occur in the testing process, such as incompleteness, ambiguity and inconsistency in the test specifications. With an unclear specification, it is next to impossible to predict how the implemented system will behave; consequently, testing will be difficult, as it is not clear what to test. This becomes even more severe in the case of safety-critical systems. We propose an approach for the rigorous construction of structured test cases by formalization of the test specification. In other words, the formal transformation of testing criteria [10], such as white box and black box, is done by refinement patterns. A formal model of each testing criterion, such as statement coverage (SC), path coverage (PC), decision coverage (DC), boundary value analysis (BV), equivalence partition class (EPC) and cause & effect, is formed. This approach helps the tester to ensure the completeness and correctness of the test specification in an automated environment with the theorem-prover toolset. Z/EVES serves the purpose of simulation in this automated environment. Moreover, formal methods are rich in mathematical axioms, supporting the argument that all the model element definitions are consistent and feasible.
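As a concrete, informal companion to the formal transformation just described, the sketch below shows how boundary-value and equivalence-partition test cases could be derived for a single bounded integer input. It is a minimal Python illustration under an assumed input range; the criteria in the paper are formalized in Z and verified with Z/EVES, not generated this way.

```python
# Minimal sketch: deriving black-box test cases for one bounded integer input.
# The input range [lo, hi] is a hypothetical example; the paper derives such
# criteria formally in Z, whereas this sketch only mirrors the informal ideas.

def boundary_value_cases(lo: int, hi: int) -> list[int]:
    """Boundary Value Analysis (BV): values at and adjacent to the range limits."""
    return [lo - 1, lo, lo + 1, (lo + hi) // 2, hi - 1, hi, hi + 1]

def equivalence_partition_cases(lo: int, hi: int) -> list[int]:
    """Equivalence Partition Class (EPC): one representative per partition
    (below the range, inside the range, above the range)."""
    return [lo - 10, (lo + hi) // 2, hi + 10]

if __name__ == "__main__":
    # Example: a speed-limit field accepting 0..120 km/h (illustrative only).
    lo, hi = 0, 120
    print("BV cases: ", boundary_value_cases(lo, hi))
    print("EPC cases:", equivalence_partition_cases(lo, hi))
```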
SIMULATION RESULTS
This section is divided into two parts: Section 4.1 presents the formal model for safety-critical systems and Section 4.2 presents the verification of the results with the automated theorem prover.
Formal Model
Figure 2 shows the formal model for the requirements and design of SCSs, obtained by formally transforming the use case, class and sequence diagrams into the Z notation element Schema―a notion for structuring a formal specification written in Z notation. Moreover, the formal model for the testing specification is depicted in Figure 3. The testing strategies are broadly divided into two categories: white-box testing and black-box testing. White-box testing focuses on the internal structure of the software artifact.
Figure 2: Formal model for system design.
Figure 3: Formal aspects of the testing criteria.
One of the ways to test the internal structure is to use one of the following criteria: statement coverage (SC), decision coverage (DC) or path coverage (PC). Black-box testing focuses on the inputs and outputs irrespective of the internal structure. The well-known black-box testing criteria are: equivalence partition class (EPC), boundary value analysis (BV), and cause & effect (C&E). The formal models for all these testing criteria are formed with Z notation. To validate the proposed framework, a case study of a Road Traffic Management System is discussed, and its formal model is shown in Figure 4. The RTMS makes use of real-time data acquired from the road network in order to reduce traffic congestion and accidents, and to save energy and preserve the environment. The Road Traffic Management System (RTMS) has three active actors, i.e., Admin, Vehicle Owner, and Traffic Police. The Traffic Police maintain the information which is provided by the users (Admin, vehicle owners). The UML model of the RTMS consists of the use case diagram, class diagram and sequence diagram shown in Figure 5.
Figure 4: Formal model for RTMS elements.
Figure 5: UML model for RTMS.
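The Z schemas behind Figure 4 are not reproduced in this excerpt. To give a flavour of what such a formalization looks like, the following is a minimal sketch in LaTeX, assuming the zed-csp package; the given sets, state schema and RegisterVehicle operation are hypothetical illustrations and are not the schemas actually constructed and verified by the authors.

```latex
\documentclass{article}
\usepackage{zed-csp}  % Z notation environments and symbols (assumed available)
\begin{document}

% Given sets: vehicles and their owners (illustrative names).
\begin{zed}
  [VEHICLE, OWNER]
\end{zed}

% State schema: the registration records maintained by the traffic police.
\begin{schema}{RTMS}
  registered : \power VEHICLE \\
  owner : VEHICLE \pfun OWNER
\where
  \dom owner = registered
\end{schema}

% Operation schema: registering a new vehicle; the precondition forbids duplicates.
\begin{schema}{RegisterVehicle}
  \Delta RTMS \\
  v? : VEHICLE \\
  o? : OWNER
\where
  v? \notin registered \\
  registered' = registered \cup \{v?\} \\
  owner' = owner \oplus \{v? \mapsto o?\}
\end{schema}

\end{document}
```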
Simulation
To ensure the correctness and completeness of the specification, design draft and test specification, the formal model is further verified on the automated theorem prover, i.e., Z/EVES, in Figure 6. The testing criteria for the construction of safety test cases are formally verified in Figure 7 and Figure 8. The RTMS elements are further verified on Z/EVES in order to enrich the validation of the proposed framework, as shown in Figure 9. Table 1 presents the overall model analysis results on the automated theorem prover; the parameters are: syntax & type checking, domain checking and proof by reduction.
CONCLUSIONS
The proposed framework is a complete, formal framework which can be used to develop safety-critical systems in combination with a graphical modeling language. This framework has the following characteristics:
• Complete―The proposed method covers all the phases of the software development process, i.e., starting from specification to design, followed by verification and validation.
• Formal―This method uses rigorous mathematics for specification, verification and validation. Consequently, it provides a high level of confidence compared to conventional manual and informal methods.
Figure 6: Verification of Z specification on Z/EVES.
Figure 7: Z/EVES result for Statement Coverage (SC).
Figure 8: Z/EVES result for Equivalence Partition Class (EPC).
Figure 9: Z/EVES model analysis for RTMS elements.
Table 1: Overall model analysis results on automated theorem prover

| SCSs Elements | Verified Via | Syntax & Type Checking | Domain Checking | Proof by Reduction |
|---|---|---|---|---|
| Requirements Capturing | Use case Diagram | Y | Y | Y |
| Design Specification | Class Diagram | Y | Y | Y |
| Construct and Test safety cases | Sequence Diagram | Y | Y | Y |
• Framework―The framework presents a 2-step verification and validation approach, which helps in detecting errors in the early phases of the development process.
• Scalable―The approach is capable of verifying systems (safety properties) of industrial size by allowing inductive reasoning and model checking together.
• Facilitates the safety assurance process of formally developed safety-critical systems through an established link between formal specification and safety cases.
• Enhances the processes of requirements elicitation, formal modeling and verification.
• Facilitates system certification by showing how to incorporate safety requirements into the formal model.
• Allows using the verification results of formal models as evidence for the construction of safety cases.
REFERENCES
1. Gowen, L.D. (1994) Specifying and Verifying Safety-Critical Software Systems. IEEE Seventh Symposium on Computer-Based Medical Systems, 235-240. https://doi.org/10.1109/cbms.1994.316018
2. Dunn, W.R. (2002) Practical Design of Safety-Critical Computer Systems. Reliability Press, Solvang, USA.
3. Hebig, R. (2014) On the Need to Study the Impact of Model Driven Engineering on Software Processes. In Proceedings of the 2014 International Conference on Software and System Process, Nanjing, 26-28 May 2014, 164-168. https://doi.org/10.1145/2600821.2600846
4. Booch, G., Rumbaugh, J. and Jacobson, I. (1999) The Unified Modeling Language User Guide. Addison-Wesley, Boston.
5. Monin, J.-F. (2003) Understanding Formal Methods. Springer, Berlin. https://doi.org/10.1007/978-1-4471-0043-0
6. Spivey, J.M. (2001) The Z Notation: A Reference Manual. 2nd Edition, Prentice Hall, Upper Saddle River.
7. Rosenblum, D.S. (2005) Lightweight Extension Mechanisms for UML. Lecture Notes, Advanced Analysis and Design (GS02/4022), London. http://www0.cs.ucl.ac.uk/teaching/syllabi/2006-07/ug/4022.htm
8. Saaltink, M. (1999) The Z/EVES 2.0 User’s Guide, Technical Report TR-99-5493-06a. ORA Canada, Ottawa, Ontario.
9. Myers, G.J. (2004) The Art of Software Testing. 2nd Edition, John Wiley & Sons, New York.
10. Jorgensen, P.C. (2013) Software Testing: A Craftsman’s Approach. 4th Edition, Auerbach Publications, Boca Raton.
Chapter 5
Concepts of Safety Critical Systems Unification Approach & Security Assurance Process
Faisal Nabi¹, Jianming Yong¹, Xiaohui Tao¹, Muhammad Saqib Malhi², Umar Mahmood², Usman Iqbal²
¹ School of Management and Enterprise, University of Southern Queensland, Toowoomba, Australia
² Melbourne Institute of Technology, Melbourne, Australia
ABSTRACT
The security assurance of computer-based systems that rely on safety and security properties, such as consistency, durability, efficiency and accessibility, requires resources. This targets System-of-Systems (SoS) problems, with the exception of difficulties and concerns that apply similarly to subsystem interactions on a single system and to system-as-component interactions on a large information system. This research addresses security and information assurance for safety-critical systems, where security and safety are addressed before going to the actual implementation/development phase for component-based systems. For this purpose, a conceptual idea or strategy is required that deals with the application logic security assurance issues. This may explore the vulnerability in a single component or the reuse of a specification of existing logic in a component-based system. Keeping this situation in view, we have defined seven concepts of security assurance and a security assurance design strategy for safety-critical systems.

Citation: Nabi, F., Yong, J., Tao, X., Malhi, M., Mahmood, U. and Iqbal, U. (2020), “Concepts of Safety Critical Systems Unification Approach & Security Assurance Process”. Journal of Information Security, 11, 292-303. doi: 10.4236/jis.2020.11401. Copyright: © 2020 by authors and Scientific Research Publishing Inc. This work is licensed under the Creative Commons Attribution International License (CC BY). http://creativecommons.org/licenses/by/4.0

Keywords: System Security, Assurance of Component Function, Safety-Critical Software, Software Assurance
INTRODUCTION
The integration of commercial off-the-shelf (COTS) hardware and software components into industrial control systems, such as railway control and management systems (CCS), is ongoing. However, the use of COTS components in a pre-existing security framework results in new security risks. The interplay of safety and security is an important field of study in which several questions still need to be addressed. To mitigate risk and ensure that the programme is dependable and secure, security assurance is an essential part of the safety-critical software development process. Deficiencies in infrastructure can also lead to software bugs and to abuse by hackers and offenders seeking to exploit flaws in the technology industry. Testing, accreditation and evaluation are carried out to justify the level of assurance of the safety of a logical function during the inter-communication interaction process. This strategy is applied at the design stage and follows the traditional way of increasing trust in the programme during the programme validation process [1].
Software assurance during the engineering/development process has been an integral aspect of the overall innovation of contemporary safety-critical systems, ranging from weapons, avionics and automotive control systems to industrial control systems and medical equipment. Software is used for tracking and regulating physical processes in these systems, and a failure may lead to loss of life or other catastrophic malfunction. Therefore, software assurance for safety-critical systems acts as the backbone of commercial off-the-shelf component-based systems [2].
Ever more software, including embedded systems, is no longer purpose-built for security systems. Instead, COTS, GOTS (government off-the-shelf) software and hardware, open source, and other non-developmental applications are used (or reused), often without alteration or advanced setup changes. Much of this non-developmental software—especially COTS and open-source software—is componentized: stand-alone software pieces which can be used as building blocks for creating larger and more complicated software systems. The smallest independent decomposition unit in a software-based system may or may not be a component [3]. In certain cases, components are assembled from smaller modules. To be usable as a component of a broader framework, an autonomous programme must provide interface(s), typically standardised to allow the integration or mounting of other components. In this case, the degree of component assurance and system safety is the foremost priority in information assurance for safety-critical component-based software systems in organizations [4].
The most important aspect in the security assurance of computer-based systems is the inter-component specification. Interactions between systems may be separated into the provision of a function by one component and its consumption by another [5]. The service (function or calculation) provided by one component to another can be specified as a contract between the consumer component and the supplier component, together with the services requested from the other component and the details of the interface(s) by which these provisions and requests are made [6]. The expectations one component has regarding the contractual commitments other components may meet are clearly specified as the preconditions or constraints the component sets on the other components with which it may communicate.
In this research, we address the effects of security and information assurance on safety-critical component-based software systems, discussing security and safety in the implementation and development process of systems. This requires a strategy to deal with the application logic security concerns that may explore the vulnerability in a single component or the reuse of a specification of existing logic in a component-based system.
RESEARCH METHOD
We have used the applied research method for our research work. The method has a subclass called research evaluation. In this method, assessment analysis is a kind of analytical study that evaluates current research knowledge subject to empirical study results or to informed decision-making [7]; it is a scientific method of investigation because it applies existing scientific knowledge to consolidate situations and perform an appraisal analysis in order to decide the research problem and the proposed theory. Therefore, keeping this research method in view, we have proposed seven concepts for information assurance for safety-critical component-based software.
BACKGROUND OF RESEARCH
The speeding up of attacks, as well as the obvious shift towards further vulnerability, appears to mean that our ability to resolve attacks diminishes and the divide between attacks and information defence broadens. Most modern information security is based upon concepts defined by Saltzer and Schroeder in the 1974 Communications of the ACM article entitled “The Protection of Information in Computer Systems”. Protection was characterised as “techniques to monitor who may access or change the device or information stored therein”, and the three key categories of concern were described: secrecy, credibility and availability [8].
We define security assurance in cyber security and information assurance for safety-critical component-based software systems as: “Software assurance, considered as security, is the degree of trust that protection is provided against software bugs—introduced purposefully or unintentionally at any point during the life cycle—so that the software works as intended.”
With vulnerability breaches expanding through ransomware, bugs, structured query language (SQL) injections, cross-site scripting, etc., these challenges have altered the structure and functionality of the programme. Relying solely on identity security has proven to be wholly inadequate. In addition, the importance of software in networks has evolved such that software now manages the majority of functionality, which increases the effect of a security failure [9].
The convergence and interoperation of security and safety-critical systems is becoming more and more apparent. It makes sense, therefore, to create an overall concept of software assurance covering safety and protection. The various methods proposed by the current concepts emerge in several cases from threats associated with complex structures [10]. Furthermore, the acceptance of commercial off-the-shelf (COTS) and open-source software as modules within a framework creates additional challenges for successful operational protection. The resulting operating systems combine applications from a wide variety of sources and assemble each piece in a distinct manner [11].
Systems cannot be built to eliminate all safety risks, but they can have the ability to recognise, resist and recover from attacks. The system should be prepared for implementation and maintenance from the initial acquisition and design. In order to ensure successful organisational protection over time, assurance must be scheduled over the whole life cycle [12]. We therefore use the following concept of component-based software life-cycle assurance: technologies and procedures are implemented to obtain the required degree of trust that applications and services work as expected, are free of unintentional or deliberate flaws, and provide threat-resilient protection functionality as well as recovery from intrusions and failures.
EXISTING RESEARCH REVIEW
Not much research work has been done in the domain of the security assurance unification process for safety-critical component-based software systems. However, we have considered some important work to underpin the research design. Faisal Nabi (2017) proposed a security assurance unification process in which the author describes the architecture at two stages of abstraction of an information system: 1) the design level of the method, which explains the form for the architectural levels to be implemented at the highest level of abstraction, and 2) the architecture definition of a logical part. To ensure safe deployment, protection needs to be applied using a design approach, rather than by implementing a layer in the framework, in cooperation with the above-mentioned core elements of the security assurance process. The architecture can therefore be derived by means of the protection assurance protocol course [1].
Tim Kelly (2019) explained an alternative solution by establishing a structure for safety and security assurance (SSAF) focused on a fundamental set of security standards. Instead of a popular co-assurance, which has identified major disadvantages, protection and safety should be individually co-assured. This permits different processes and practitioners’ skills in each area. With this arrangement, attention is transferred from simple convergence to integration through the correct knowledge exchange and synchronisation activities at the right time [3].
According to Marsha Chechik, Rick Salay, Torin Viger, Sahar Kokaly, and Mona Rahimi (2019), test cases, test data, human judgement, or a mixture of these provide the evidence for software assurance. This means that experts strive to construct (safety-critical) systems with caution and to express that reasoning, according to a well-founded methodology, in a safety case that is eventually assessed by a person. However, such technology has deeper roots in uncertainty: the most complicated open-world features (for example, a self-driving vehicle’s understanding of the state of the world) often are not entirely predictable or not cost-effective to predict; computing applications are also placed in dangerous conditions, and there can be inconsistencies [2].
Impact of Safety & Security Risk
Risk factors are security threats that add safety hazards to a system when discovered. Security impact risks are direct risks to the system’s integrity and availability characteristics. Integrity is a security feature directly connected to durability and trustworthiness: it depends on the system not being modified inadvertently, by mistake, by an unwanted entity, or through illegal means, either accidentally or purposely, and on the trust in the device not being compromised such that bugs or deceptive logic can be introduced [13]. The integrity of the system is therefore critical, because unauthorised and unintended changes can not only impair the system’s ability to run effectively, but may also expose the system to unnecessary compromises and/or incorporate unauthorised functions. In all cases, such modifications can invalidate conclusions based on the careful examination and review of the system before it is implemented. Assuring the consistency of the system guarantees that the system is intact and that the system decisions are valid.
Another security feature, availability, is closely associated with system reliability. To be available, the system must, as defined in its specifications (e.g. 99 percent of the time, 95 percent of the time, etc.), be active and open to its intended users. Availability is similar to “required uptime” and “quality of service”, except that it covers not only the system’s operating consistency but also the consistency of system connectivity for those who use it.
PROPOSED CONCEPT OF SECURITY ASSURANCE FOR SAFETY-CRITICAL SYSTEMS
Components are engineered primarily to be combined into systems, and they eventually require security. Composing security properties into broader systems is not only a non-trivial task, but also one of the tough unanswered information security issues. In order to deal with this issue in the business logic of a composite (logical, component-ware, interface-oriented design) e-commerce application, and to keep pace with rapid logical component-based advances and the ever-increasing business process logic in e-commerce systems, we need the convergence of security process resources, as shown in Figure 1. In order to reach the desired degree of trust for software security assurance, we recommend the following seven concepts, aimed at solving the problems associated with information assurance for safety-critical component-based software systems and with the construction, deployment and maintenance of such systems.
Figure 1: Security assurance process and properties of unification.
1) Risk guides decision-making in assurance. A risk-based perception guides decision-making. Organizations that do not obtain reliable security assurance face danger from effective attacks on their infrastructure and systems. If this threat is understood, they can choose assurance options—such as strategies, procedures, methods and limitations—as a function of their perception of the threat of such an attack and its anticipated effect. Organizations that struggle to grasp their challenges and impacts may falsely interpret risks. Effective security allows businesses to share risk awareness with both partners and participants of the project.
2) Risk issues shall be associated with all stakeholders and the intertwined technical aspects. Highly linked networks such as the Internet require coordination of risk between all players involved and all technical elements linked to them; otherwise, important risks are overlooked or ignored at various points in the relationships. When everything is deeply intertwined, it is not enough to consider just the most essential elements. Interactions are carried out at different levels of technology (e.g., network, security, infrastructure, and applications) and are supported by a number of functions. Security may be applied at any of these levels and, if not well planned, the measures may clash. Effective assurance requires clear identification of, and response to, risk at all levels and in the roles related to interactions.
3) Dependencies are not to be trusted until proved trustworthy. Because of the extensive use of digital supply chains, the guarantee of an automated commodity relies on the judgements of others in terms of commitment and the degree of faith placed in them. All the assurance shortcomings of each communicating component are inherited by the composed application. In addition, any operating function, including utilities, security software and other programmes, is subject to the guarantee of every other function unless specific constraints and controls are in effect. A company thus relies on the assurance decisions of others, and there is a risk in doing so. Organizations have to determine, however, how much confidence they place in their dependencies on the basis of a practical appraisal of risks, consequences and opportunities across diverse experiences. Dependencies are not static, and businesses have to revisit trust relationships on a regular basis to assess whether adjustments are needed. The following examples describe assurance damage arising from weaknesses:
• Centralized technology vulnerabilities (e.g. operating systems, programming environments, firewalls, and routers) can act as publicly accessible software vulnerability entry points.
• The use of several common technology construction and development tools affects the assurance of the resulting digital product: the tool manufacturers may introduce vulnerabilities into software products.
4) Attacks are expected. The secrecy, credibility and availability of technical resources are targeted by a wide group of attackers with increasing technological capabilities. No protection from attacks is flawless, and the profile of the attacker continues to evolve. Attackers exploit technologies, procedures, norms and practice together (the socio-technical dimension): some threats use the technology itself, while others create unique conditions that exploit the protections and the way we use technology.
5) Software assurance needs good teamwork. Organizations must extend security throughout their employees, procedures and technologies, while attackers seek all potential access points. In addition, organisations must specifically define, at an adequate level, the policy authority and responsibility for ensuring that corporate participants engage efficiently in cyber security. This concept presupposes that everybody is competent, but generally that is not the case; therefore, organisations need to prepare staff to maintain the technology.
6) Assurance is adaptive and well planned. Assurance bridges software and network administration, design and service, and it is extremely susceptible to changes in any of these fields. To preserve this equilibrium, it is important to respond to frequent shifts in interconnections, organisational use and application risks. This is not a one-time event, because change is constant; it needs to continue through organisational monitoring after the initial deployment. Assurance must be incorporated into the appropriate promises that companies make; it cannot be added later, and nobody ever has the money to overhaul structures.
7) An overall assurance assessment and evaluation process should be implemented. Organizations cannot manage what they cannot measure, and the consumers of technology will not take responsibility for policies until they are made accountable for them. Unless outcomes are tracked and measured, assurance cannot compete effectively with other business needs. To determine organisational assurance, all socio-technical elements—policies, processes and procedures—need to be connected together. A more efficient assurance process responds and rebounds more quickly: organisations learn from their own and others’ reactions, and anticipate and identify threats more carefully. For example, code faults are a standard implementation metric and can be useful for code quality, but they are not acceptable evidence for overall assurance, since they give little insight into how the code functions in an operating environment. Concentrated and systematic steps must be taken by organisations to ensure that sound protection is established for the components and efficient assurance of the relationships between components.
Evaluation Security Assurance Level Analysis Chart (Table 1)
Security assurance is achieved through the validity of the empirical analysis of the proposed concepts and the process of system assurance, as explained in the model given below. It depicts the seven stages of the security assurance level for information assurance, concluded on the basis of the proposed seven concepts for safety-critical component-based software systems.

Table 1: Evaluation of security assurance in safety-critical systems

| Evaluation assurance level | What is tested | Description |
|---|---|---|
| 1 | Functionality | Evaluation provides independent testing against a specification and an examination of the guidance documentation. Used when confidence in correct operation is required but the threats to security are not viewed as serious. |
| 2 | Structure | Evaluation provides a low to moderate level of independently assured security, as required by vendors or users. |
| 3 | Methodology | Evaluation provides an analysis supported by testing, selective independent confirmation of the vendor test results, and evidence of a vendor search for obvious vulnerabilities. |
| 4 | Methodology and Design | Evaluation provides a moderate to high level of independently assured security in conventional commodity products. Testing is supported by an independent search for obvious vulnerabilities. |
| 5 | Semiformal Design | Evaluation provides a high level of independently assured security in a planned development, with a rigorous development approach. The search for vulnerabilities must ensure resistance to penetration attackers with a moderate attack potential. |
| 6 | Semiformal Verified Design | Used for the development of specialized security products, for application in high risk situations. The independent search for vulnerabilities must ensure resistance to penetration attackers with a high attack potential. |
| 7 | Formal Design | Used in the development of security products for application in extremely high risk situations. Evidence of vendor testing and complete independent confirmation of vendor test results are required. |
DESIGNED DEFENSIVE STRATEGY AS A SOLUTION TO DEAL WITH BUSINESS LOGIC LAYER CONCERNS
This part of the strategy provides a strong risk management control plan focused on rigorous component-ware assurance for the rapid development of CBSD business application logic for safety-critical component-based software systems and their applications in the e-commerce domain. The key elements of the problem solution follow: 1) a strong risk management plan; 2) solution artefacts; 3) security characteristics of component-ware components.
1) Strong risk management plan: Ensure that every aspect of the application’s design is clearly and sufficiently detailed so that every assumption and designed function logic within the application is understood by the designer. Mandate that all CBSD should be clearly commented to include the following information throughout:
a) The purpose and intended use of each component (if the component code is available, information about the code; if not, its functional business logic within the component through a usage contract description).
b) The assumptions and logic made by each component about anything that is outside of its direct control.
c) A reference to every client component which makes use of the component; clear documentation to this effect could have prevented the logic flaw within the online registration functionality. (Note: “client” here does not refer to the user end of the client-server relationship but to other components (code) for which the component being considered is an immediate dependency.)
2) Solution artifacts: There is no unique signature by which logic flaws in component-based, rapidly developed web software applications can be identified, because no silver bullet has so far been developed which could protect against them. Good practice: good practice can nevertheless be applied to significantly reduce the risk of logical flaws appearing within component-based development and its logic.
3) Security characteristics of component-ware components: Since a software component can be regarded as an IT product or system, it is natural to use the Common Criteria in assessing its security properties. The Common Criteria provide a framework for evaluating IT systems and enumerate the specific security requirements for such systems. The security requirements are divided into two categories:
– Security functional requirements
– Security assurance requirements
The security functional requirements describe the desired security behaviour or functions expected of an IT system to counter threats in the system’s operating environment. These requirements are classified according to the security issues they address, and with varied levels of security strength. They include requirements in the following classes: security audit, communication, cryptographic support, user data protection, identification and authentication, security management, privacy, protection of system security functions (security meta-data), resource utilization, system access, and trusted path/channels.
The security assurance requirements mainly concern the development and operational process of the IT system, with the view that a more defined and rigorous process delivers higher confidence in the system’s security behaviour and operation. These requirements are classified according to the process issues they address, and with varied levels of security strength. The process issues include life cycle support, configuration management, development, tests, vulnerability assessment, guidance documents, delivery and operation, and assurance maintenance.
Figure 2 presents the idea of a security assurance process based on a layer of security assurance of component-based software application logic for e-commerce systems. This process is also helpful for developers of safety-critical component-based software systems when reusing the specification of existing logic for the current system.
Figure 2: Design strategy process for security assurance business application logic.
Therefore, it is important that safety-critical component-based systems―which are in almost daily human use, from simple systems to complex ones―undergo assurance before passing through the development phase, guaranteeing the safety of the system in various environments.
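The documentation items a)–c) above amount to a usage contract between a component and its clients. The sketch below illustrates one way such a contract could be written down and enforced at the interface in Python; the component, its clients and its preconditions are invented for illustration and are not taken from the paper.

```python
# Minimal sketch of the "usage contract" idea from items a)-c): each component
# documents its purpose, its assumptions about things outside its control, and
# enforces its preconditions at the interface. Names (PaymentComponent, etc.)
# are hypothetical and not taken from the paper.

class ContractViolation(Exception):
    """Raised when a client component breaks the supplier's precondition."""

class PaymentComponent:
    """
    Purpose      : compute the amount charged for a customer order (item a).
    Assumptions  : prices are non-negative and in one currency; discount codes
                   are validated by the calling component (item b).
    Known clients: CheckoutComponent, AdminRefundComponent (item c).
    """

    def charge(self, unit_price: float, quantity: int, discount: float) -> float:
        # Preconditions: constraints this component places on its callers.
        if unit_price < 0 or quantity <= 0:
            raise ContractViolation("price must be >= 0 and quantity > 0")
        if not 0.0 <= discount <= 1.0:
            raise ContractViolation("discount must be a fraction in [0, 1]")
        total = unit_price * quantity * (1.0 - discount)
        # Postcondition: the charged amount is never negative.
        assert total >= 0.0
        return total

if __name__ == "__main__":
    print(PaymentComponent().charge(unit_price=19.99, quantity=3, discount=0.10))
```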
CONCLUSION
This paper addressed some of the key problems and information gaps in defence and protection in large, complex systems. These flaws are due to gaps between protection and safety systems, to how threats are portrayed and clarified, and to how claims should be viewed as templates. The seven concepts were described as a safety assurance and security assurance design strategy mechanism—a solution, for an independent system or component, to the difficulties of developing a mechanism that synchronises separate security and safety assurances and provides a more sophisticated form of evaluating impacts. The seven concepts are a blueprint capable of modifying the relationship between the safety and security disciplines, and the security design strategy modelling process helps developers to ensure system security assurance at the SDLC stage, serving as a guideline for safety-critical component-based software systems.
REFERENCES
1. Nabi, F. and Nabi, M.M. (2017) A Process of Security Assurance Properties Unification for Application Logic. International Journal of Electronics and Information Engineering, 6, 40-48.
2. Chechik, M., Salay, R., Viger, T., Kokaly, S. and Rahimi, M. (2019) Software Assurance in an Uncertain World. In: Hähnle, R. and van der Aalst, W., Eds., FASE 2019, LNCS 11424, 3-21. https://doi.org/10.1007/978-3-030-16722-6_1
3. Kelly, T. (2019) An Assurance Framework for Independent Co-Assurance of Safety and Security. New York University Press, New York.
4. Czarnecki, K. and Salay, R. (2018) Towards a Framework to Manage Perceptual Uncertainty for Safe Automated Driving. In: Gallina, B., Skavhaug, A., Schoitsch, E. and Bitsch, F., Eds., SAFECOMP 2018, LNCS, Vol. 11094, Springer, Cham, 439-445. https://doi.org/10.1007/978-3-319-99229-7_37
5. Carlan, C., Gallina, B., Kacianka, S. and Breu, R. (2017) Arguing on Software-Level Verification Techniques Appropriateness. In: Tonetta, S., Schoitsch, E. and Bitsch, F., Eds., SAFECOMP 2017, LNCS, Vol. 10488, Springer, Cham, 39-54. https://doi.org/10.1007/978-3-319-66266-4_3
6. Carlan, C., Ratiu, D. and Schätz, B. (2016) On Using Results of Code-Level Bounded Model Checking in Assurance Cases. In: Skavhaug, A., Guiochet, J., Schoitsch, E. and Bitsch, F., Eds., SAFECOMP 2016, LNCS, Vol. 9923, Springer, Cham, 30-42. https://doi.org/10.1007/978-3-319-45480-1_3
7. Kriaa, S., Pietre-Cambacedes, L., Bouissou, M. and Halgand, Y. (2015) A Survey of Approaches Combining Safety and Security for Industrial Control Systems. Reliability Engineering & System Safety, 139, 156-178. https://doi.org/10.1016/j.ress.2015.02.008
8. Symantec (2018, March) 2018 Security Threat Report. ISTR Internet Security Threat Report, Vol. 23.
9. Bird, J. (2017, October) 2017 State of Application Security: Balancing Speed and Risk.
10. Ullrich, J. (2016, April) 2016 State of Application Security: Skills, Configurations and Components. SANS Institute Survey.
11. Zakaszewska, A. (2016) Proportionality Approach Model for the Application of ASEMS. BMT Isis Limited (2016, March) (Issue 1).
12. Finnegan, A. and McCaffery, F. (2014) Towards an International Security Case Framework for Networked Medical Devices. International Conference on Computer Safety, Reliability, and Security, September 2014, Springer, Cham, 197-209. https://doi.org/10.1007/978-3-319-24255-2_15
13. Gehr, T., Milman, M., Drachsler-Cohen, D., Tsankov, P., Chaudhuri, S. and Vechev, M. (2018) AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation. 2018 IEEE Symposium on Security and Privacy (SP), San Francisco, 20-24 May 2018. https://doi.org/10.1109/SP.2018.00058
SECTION 2: SAFETY SIMULATION TECHNIQUES
Chapter 6
The Marine Safety Simulation based Electronic Chart Display and Information System
Xin Yu Zhang, Yong Yin, Jin YiCheng, XiaoFeng Sun, and Ren HongXiang
Marine Dynamic Simulation and Control Laboratory, Dalian Maritime University, Dalian 116026, China
ABSTRACT
Navigation safety has a huge impact on the world economy and our everyday lives. A navigation safety simulation model in ECDIS based on the international standard format (S-57) is put forward, which mainly involves route planning and route monitoring. Universal kriging interpolation is used in route planning to compute the water depth at any position on the sea bottom. A man-machine conversation method is used to amend the planned route so that its feasibility is decided automatically according to the ECDIS information, and the route monitoring algorithm is improved by eliminating the loss of precision caused by screen coordinate conversion. The DCQA (distance close quarters situation of approach) model and the TCQA (time close quarters situation of approach) model are adopted to judge whether a close quarters situation or a risk of collision between own ship and target ship is emerging. All these methods are proven to be reliable through the navigation simulator made by Dalian Maritime University, which is certified by DNV to class A.

Citation: Xin Yu Zhang, Yong Yin, Jin YiCheng, XiaoFeng Sun, Ren HongXiang, “The Marine Safety Simulation based Electronic Chart Display and Information System”, Abstract and Applied Analysis, vol. 2011, Article ID 586038, 8 pages, 2011. https://doi.org/10.1155/2011/586038. Copyright: © 2011 by Authors. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
INTRODUCTION
Nowadays, the marine safety problem becomes more and more important with the increase in shipping and pleasure boating. A cargo ship that carries hazardous loads poses serious threats to marine safety, as well as to the lives inhabiting coastal zones. The disaster and damage caused by a major sea collision are difficult to deal with [1]. The navigation safety simulation in ECDIS is important to marine traffic, so route planning and route monitoring are two key navigational functions [2]. The role of route planning is to select a safe and economical route, avoiding sudden events as far as possible and ensuring the ship’s navigation safety, while the role of route monitoring is to supervise ships and raise a danger alarm. Based on the close quarters situation, how to reduce the risk of collision between own ship and target ship is also important. Currently, route planning and route monitoring are required in bridge equipment and navigation simulators [3–5], and have been investigated by many scholars. For example, Christiansen studied a route planning method based on an optimization decision support system [6], and Gunnarsson et al. used an integrated macroeconomic mathematical model to conduct route planning [7]. At the same time, Xu put forward a route planning and route monitoring method and implemented them in the electronic chart display and information system (ECDIS) [8], while Zhang et al. proposed a preliminary approach to the automatic optimization of ocean routes [9]. All the above theories promoted the rapid development of route planning and route monitoring theory. In this article, based on the international standard format (S-57), a navigation safety simulation model in ECDIS is put forward, which mainly addresses route planning and route monitoring. Universal kriging interpolation is used in route planning to compute the water depth at any position on the sea bottom. A man-machine conversation method is used to amend the planned route so that its feasibility is decided automatically according to the ECDIS information. The route monitoring algorithm is improved by eliminating the loss of precision caused by screen coordinate conversion. The DCQA (distance close quarters situation of approach) and TCQA (time close quarters situation of approach) models are adopted to judge whether a close quarters situation or a risk of collision between own ship and target ship exists.
AUTOMATIC FEASIBILITY IDENTIFICATION IN ROUTE PLAN Identification Basis of the Feasibility As the international standard format S-57 is used in ECDIS which includes all the information about the chart, the feasibility in route plan can be identified automatically according to the chart information and ship navigation state. The key of route planning is to identify if the route is realtimely and accurately navigable, which is based on the ship navigation state and the ECDIS information for the ship navigational areas.(i)Ship navigation state: it mainly includes the maximum draft, tonnage, maximum speed, and gyration radius Among them, tonnage, maximum speed, ship modelled breadth, and yarage determine the cross-track error which takes planned sea route as a medial axis.(ii)ECDIS information in ship navigation areas: it mainly includes different obstructions information identification in the navigation area for the planned sea route, such as open reef, sunken reef, intertidal zone, wreck, and prohibited area, and acquisition of the depth value in any position and any scale level in ECDIS. The ship navigation status parameters can be acquired from ship motion model database, and sea chart information can be obtained from S-57 ECDIS database. However, water depth data known from sea chart are a discrete and fixed position, so computing the water depth in any position is a key problem in feasibility evaluation of planned sea route, as the ship position cannot always have the known depth value. Li et al. [10] computes water depth through improving Hardy quadric surface method while Wang guo-fu uses large-scale interpolation of discrete data based upon nodal point interpolation to estimate water depth and so on. Here, the universal kriging interpolation is used to compute water depth in any position.
Universal Kriging for Computing Water Depth at Any Position

The basic premise of kriging interpolation is that every unknown point can be estimated by a weighted sum of the known points:

$$Z^{*}(x_{0}) = \sum_{i=1}^{n} \lambda_{i} Z_{i} \qquad (2.1)$$

where $Z^{*}(x_{0})$ represents the unknown point, $Z_{i}$ refers to each known point, and $\lambda_{i}$ is the weight given to it. The core of the kriging algorithm is the selection of appropriate weights. For details of kriging interpolation theory, readers may refer to [11, 12]. Universal kriging assumes a general linear trend model: it includes drift functions to calculate $m(x)$, the expectation of $Z(x)$. Considering

$$m(x) = a_{0} + a_{1}u + a_{2}v \qquad (2.2)$$

where $u$, $v$ are the coordinates of point $x$, the unbiasedness of the estimator requires

$$E\bigl[Z^{*}(x_{0}) - Z(x_{0})\bigr] = 0 \qquad (2.3)$$

In order to satisfy (2.3), the following conditions must hold:

$$\sum_{i=1}^{n}\lambda_{i} = 1,\qquad \sum_{i=1}^{n}\lambda_{i}u_{i} = u_{0},\qquad \sum_{i=1}^{n}\lambda_{i}v_{i} = v_{0} \qquad (2.4)$$

Set the estimation variance

$$\sigma_{E}^{2} = \operatorname{Var}\bigl[Z^{*}(x_{0}) - Z(x_{0})\bigr] \qquad (2.5)$$

in which the weights $\lambda_{i}$ are to be chosen so that $\sigma_{E}^{2}$ is minimal subject to (2.4). By the Lagrange multiplier rule, with multipliers $\mu_{0}$, $\mu_{1}$, $\mu_{2}$, we have

$$L = \sigma_{E}^{2} - 2\mu_{0}\Bigl(\sum_{i}\lambda_{i} - 1\Bigr) - 2\mu_{1}\Bigl(\sum_{i}\lambda_{i}u_{i} - u_{0}\Bigr) - 2\mu_{2}\Bigl(\sum_{i}\lambda_{i}v_{i} - v_{0}\Bigr) \qquad (2.6)$$

and setting the partial derivatives of $L$ with respect to the $\lambda_{i}$ and the multipliers to zero yields the universal kriging system

$$\sum_{j=1}^{n}\lambda_{j}\,\gamma(x_{i},x_{j}) + \mu_{0} + \mu_{1}u_{i} + \mu_{2}v_{i} = \gamma(x_{i},x_{0}),\quad i = 1,2,\dots,n \qquad (2.7)$$

where $\gamma(\cdot,\cdot)$ is the variogram. Together with (2.4), this system can be rewritten in matrix form $Ax = b$ to calculate the values of $\lambda_{i}$ ($i = 1, 2, \dots, n$); from (2.1) we finally get the estimation of the unknown points.

In order to enhance its efficiency, an improved universal kriging is adopted. Firstly, the contour rectangle of the planned sea route is computed as the grid range of the universal kriging calculation. Secondly, universal kriging is used to calculate the water depth within the grid range based on the original water depth values acquired from the navigation database. Finally, if all the water depth values are larger than the ship draft, the planned sea route is feasible; if one or more water depth values are smaller than the ship draft, the planned sea route is unfeasible.
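To make the computation above concrete, the following minimal Python sketch assembles and solves a universal-kriging system of the form (2.4)–(2.7) for a single target position and then applies the draft comparison over a small grid. The linear variogram model, the sounding coordinates and depths, the grid points, the safety margin, and all function names are illustrative assumptions, not data or code from the original study.

```python
import numpy as np

def variogram(h, slope=0.05, nugget=0.0):
    """Linear variogram model gamma(h); slope and nugget values are illustrative only."""
    return nugget + slope * h

def universal_kriging_depth(xy_known, z_known, xy_target):
    """Estimate the water depth at xy_target from known soundings,
    assuming a linear drift m(x) = a0 + a1*u + a2*v (universal kriging)."""
    n = len(z_known)
    d = np.linalg.norm(xy_known[:, None, :] - xy_known[None, :, :], axis=-1)
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = variogram(d)           # gamma(x_i, x_j)
    A[:n, n] = 1.0                     # Lagrange multiplier for sum(lambda) = 1
    A[:n, n + 1:] = xy_known           # drift terms u_i, v_i
    A[n, :n] = 1.0
    A[n + 1:, :n] = xy_known.T
    b = np.zeros(n + 3)
    b[:n] = variogram(np.linalg.norm(xy_known - xy_target, axis=-1))
    b[n] = 1.0
    b[n + 1:] = xy_target
    w = np.linalg.solve(A, b)
    return float(w[:n] @ z_known)      # weighted sum of known depths, cf. (2.1)

def route_is_feasible(grid_points, soundings_xy, soundings_z, draft, safety_margin=0.5):
    """The route is feasible only if every interpolated grid depth exceeds draft + margin."""
    for p in grid_points:
        depth = universal_kriging_depth(soundings_xy, soundings_z, np.asarray(p, float))
        if depth < draft + safety_margin:
            return False
    return True

# Hypothetical soundings and a short grid along the planned route (all values made up)
soundings_xy = np.array([[0.0, 0.0], [50.0, 10.0], [100.0, 0.0], [60.0, 60.0], [10.0, 80.0]])
soundings_z = np.array([12.0, 11.5, 13.0, 9.0, 8.5])      # charted depths in metres
route_grid = [(20.0, 20.0), (40.0, 30.0), (70.0, 40.0)]   # grid points inside the XTE band
print(route_is_feasible(route_grid, soundings_xy, soundings_z, draft=7.0))
```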
SIMULATION MODEL OF ROUTE PLANNING

The planned sea routes can be acquired by referring to the "recommended route" stored in the navigation database. The XTE (cross-track error) band is designed according to the ship route information, and the routes are generated by automatically linking the turning points set by man-machine conversation. At the same time, the feasibility of the turning points and of the routes selected with the mouse is judged intelligently, as shown in Figure 1.
Figure 1: The simulation flow of route planning.
ROUTE MONITORING SIMULATION MODEL

The position of the own ship is calculated in real time from the information that the operator inputs every 0.5 seconds, so that the route monitoring module can acquire the real-time ship position, display the figures of the own ship and the target ship, and send own-ship information to the instructor station, the visual system, and the radar display. The purpose of route monitoring is to warn and alarm: when the ship deviates from the route, dead reckoning is used to calculate the yaw time and distance so that the operator receives advisory information. The simulation flow is shown in Figure 2.
Figure 2: The simulation flow of route monitoring.
The key algorithm of route monitoring is to judge whether the own ship is within the planned sea route. Xu judges whether the ship position is within the route range by using the Windows API (application programming interface), which produces errors because it relies on screen coordinates. Here, a series of key coordinate transformations is used to enhance the safety and accuracy of the algorithm, as shown in Figure 3.
Figure 3: The conversion of coordinates.
Then, the point-in-polygon test used in route monitoring is improved with reference to the Z1-1 algorithm put forward by Zhou Depei. First, coordinate transformation models are built so that tablet coordinates can be transformed into geographic coordinates according to Figure 3. Second, the geographic coordinates of the own ship position are calculated in real time from the current parameters such as speed, rudder angle, and course. To increase the algorithm efficiency, the contour rectangle of the route area is computed first, in order to judge whether the own ship is within that rectangle. Then the point (u) is tested against the polygon (u1 : np), where u1 is the structure array storing the points of the route area and np is the number of polygon points. The concrete method is as follows: a ray T(u, v) is cast from the test point, and the number of intersections between the polygon edges stored in u1 and the ray T is counted; according to the rule that an odd count means inside and an even count means outside, the point (u) is judged to be inside or outside the polygon u1. Because geographic coordinates are used in this algorithm and the ship position is calculated every 0.5 seconds, the accuracy and real-time requirements of route monitoring can be satisfied.
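The following short Python sketch illustrates the odd-in/even-out ray test together with the contour-rectangle prefilter described above. The polygon corner coordinates, the bounding box, and the function names are hypothetical values chosen only for illustration.

```python
def point_in_route_area(u, polygon):
    """Ray-casting (even-odd) test: cast a horizontal ray from point u and count edge crossings.
    An odd count means the point is inside the route polygon, an even count means outside."""
    x, y = u
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does the edge (x1,y1)-(x2,y2) straddle the horizontal line through y?
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses that horizontal line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:          # crossing lies on the ray, to the right of the point
                inside = not inside
    return inside

def ship_within_route(ship_geo, route_area, bounding_box):
    """Cheap contour-rectangle rejection first, then the exact even-odd test."""
    (xmin, ymin), (xmax, ymax) = bounding_box
    if not (xmin <= ship_geo[0] <= xmax and ymin <= ship_geo[1] <= ymax):
        return False
    return point_in_route_area(ship_geo, route_area)

# Hypothetical route corridor (geographic coordinates already converted from tablet coordinates)
route_area = [(0.0, 0.0), (10.0, 0.5), (10.0, 2.5), (0.0, 2.0)]
bbox = ((0.0, 0.0), (10.0, 2.5))
print(ship_within_route((5.0, 1.2), route_area, bbox))   # True: inside the corridor
print(ship_within_route((5.0, 4.0), route_area, bbox))   # False: outside
```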
Figure 4: Relative motion diagram of own ship and target ship.
EARLY-WARNING MODEL OF OWN SHIP DYNAMIC INFORMATION

According to the ship size, ship velocity, ship manoeuvrability, radar observation error, navigation environment, and so on, the ship driver should know the DCQA. In order to supervise the collision risk while the ship is sailing, the DCQA should be calculated from the above factors, as shown in Figure 4. Let the own ship be O with heading C0 and velocity V0, let the target ship be T with heading CT and velocity VT, let VR be the relative velocity, and let DIS be the distance between the own ship and the target;
θ is the intersection angle between the own ship and the route. From these quantities the relative velocity VR is obtained (5.1); the DCQA is then derived from DIS, VR, and θ (5.2); and, since the relative approach speed is known, the corresponding TCQA follows (5.3).
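Since the exact forms of (5.1)–(5.3) are not reproduced above, the sketch below shows the standard relative-motion (closest-point-of-approach) computation on which DCQA/TCQA-type indicators are built; it is not the paper's own formulation. The encounter geometry, speeds, and the function name are assumed purely for illustration.

```python
import math

def relative_motion(c0_deg, v0, ct_deg, vt, bearing_deg, dis):
    """Standard closest-point-of-approach relations between own ship O and target T.
    c0/ct: headings (deg), v0/vt: speeds, bearing: true bearing of T from O (deg), dis: range."""
    c0, ct, brg = map(math.radians, (c0_deg, ct_deg, bearing_deg))
    # Velocity components (x east, y north); relative velocity of the target w.r.t. own ship
    vrx = vt * math.sin(ct) - v0 * math.sin(c0)
    vry = vt * math.cos(ct) - v0 * math.cos(c0)
    vr = math.hypot(vrx, vry)
    # Position of the target relative to own ship
    rx, ry = dis * math.sin(brg), dis * math.cos(brg)
    if vr < 1e-9:
        return dis, float('inf'), 0.0             # no relative motion: range stays constant
    # theta: angle between the line of sight and the relative motion line
    dot = -(rx * vrx + ry * vry) / (dis * vr)
    theta = math.acos(max(-1.0, min(1.0, dot)))
    dcpa = dis * math.sin(theta)                   # closest approach distance
    tcpa = dis * math.cos(theta) / vr              # time to closest approach
    return dcpa, tcpa, vr

# Hypothetical encounter: own ship heading 000 at 12 kn, target at 045/6 NM, heading 270 at 10 kn
dcpa, tcpa, vr = relative_motion(0.0, 12.0, 270.0, 10.0, 45.0, 6.0)
print(f"relative speed {vr:.1f} kn, DCPA {dcpa:.2f} NM, TCPA {tcpa:.2f} h")
```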
EXPERIMENT AND ANALYSIS

Taking the international standard format S-57 chart of Zhanjiang port and its adjacent sea area (chart number CN581201, scale 1 : 35,000), the ship position is set randomly to simulate route planning and route monitoring. As the coastal route is complicated, the "recommended route" is retrieved from the navigation database, and human-machine interaction is used to amend the planned sea route so that its feasibility can be decided automatically according to the ECDIS information. The navigable area is the yellow range (XTE), and the own ship is the black circle, as shown in Figure 5. Meanwhile, universal kriging interpolation is used to calculate the water depth within the navigable area. To judge whether the route is feasible, the steps are as follows.
Figure 5: Simulation of the own ship outside the planned route.
(1) 3156 original water depth values are acquired as sample data for the calculation. (2) The experimental variogram conditions of universal kriging are: lag distance 2 m, angle tolerance 30°, lag tolerance 1 m. The parameters fitted by the experimental variogram computation are shown in Table 1.

Table 1: The parameters of the variogram
(3) The water depth calculation range is defined for the navigable area: the whole area is regarded as a grid system with a grid size of 10.0 m in the X direction and 10.0 m in the Y direction and a grid number of 1056 in the X direction and 885 in the Y direction. (4) Water depth calculation with universal kriging: the water depth of every grid point is calculated according to the grid system definition, and the improved judgment method comparing water depth with ship draft is applied.

The simulation experiment is conducted in the Dalian Maritime University navigation simulator (certified class A by DNV) together with the visual system; the operator can manipulate the own ship from start to destination and amend the planned sea route in ECDIS in real time. The whole process involves no bank contact, grounding, or collision. The improved route monitoring calculation can supervise whether the ship deviates from the route; when the ship is beyond the boundaries of the XTE band, an automatic alarm appears at the top left of the screen, as shown in Figure 5.
CONCLUSIONS

A navigation safety simulation model in ECDIS based on the international standard format (S-57) is put forward, which mainly studies route planning and route monitoring. Universal kriging interpolation is used in the route planning research to compute the water depth at any position. Man-machine conversation is used to amend the planned route and to decide its feasibility automatically according to the ECDIS information. The route monitoring algorithm is improved, its precision being enhanced by converting screen coordinates to geographic coordinates. The models of DCQA and TCQA are adopted to judge the close quarters situation or the collision risk between the own ship and the target ship. Thus, the model can not only enhance the efficiency and accuracy of simulator training but also support decision-making in fields such as port planning and navigation safety. In addition, the model can be used in an IBS (integrated bridge system).
ACKNOWLEDGMENTS

The paper was supported by the National Basic Research Program of China (973 Program, no. 2009CB320805) and the Fundamental Research Funds for the Central Universities (no. 2009QN012 and no. 2009QN007).
REFERENCES
1. R. Ward, C. Roberts, and R. Furness, "Electronic chart display and information systems (ECDIS): state-of-the-art," in Nautical Charting. Marine and Coastal Geographical Information Systems, D. J. Wright and D. Bartlett, Eds., pp. 149–161, Taylor & Francis, London, UK, 1999.
2. R. Szlapczynski, "A new method of ship routing on raster grids, with turn penalties and collision avoidance," Journal of Navigation, vol. 59, no. 1, pp. 27–42, 2006.
3. http://www.km.kongsberg.com/ks/web/NOKBG0237.nsf.
4. "Design and display of 3D geology model," Chinese Journal of Progress in Mathematics Geology, vol. 31, no. 7, pp. 189–195, 1995. http://www.transas.com/company/news.
5. M. Christiansen, K. Fagerholt, and D. Ronen, "Ship routing and scheduling: status and perspectives," Transportation Science, vol. 38, no. 1, pp. 1–18, 2004.
6. H. Gunnarsson, M. Rönnqvist, and D. Carlsson, "A combined terminal location and ship routing problem," Journal of the Operational Research Society, vol. 57, no. 8, pp. 928–938, 2006.
7. K. Xu, "The completion of ship's track monitoring and route design," Journal of Shanghai Maritime University, vol. 21, no. 4, pp. 108–112, 2000.
8. L. H. Zhang, Q. Zhu, Y. C. Liu, and S. J. Li, "A method for automatic routing based on ECDIS," Journal of Dalian Maritime University, vol. 33, no. 3, pp. 109–112, 2007.
9. Y.-H. Li, S.-P. Sun, and W.-H. Yu, "Auto evaluation of the feasibility of planned sea route in ECDIS," Journal of Dalian Maritime University, vol. 26, no. 2, pp. 40–44, 2000.
10. Y. Han, On the Mathematical Models of Various Krige Methods, JinLin University, Chang Chun, China, 2003.
11. J. R. Carr, "On visualization for assessing kriging outcomes," Mathematical Geology, vol. 34, no. 4, pp. 421–433, 2002.
Chapter 7
Improved Modelling and Assessment of the Performance of Firefighting Means in the Frame of a Fire PSA
Martina Kloos and Joerg Peschke GRS gGmbH, Boltzmannstraße 14, 85748 Garching, Germany
ABSTRACT

An integrated deterministic and probabilistic safety analysis (IDPSA) was carried out to assess the performance of the firefighting means to be applied in a nuclear power plant. The tools used in the analysis are the code FDS (Fire Dynamics Simulator) for fire simulation and the tool MCDET (Monte Carlo Dynamic Event Tree) for handling epistemic and aleatory uncertainties. The combination of both tools allowed for an improved modelling of a fire interacting with firefighting means, while epistemic uncertainties due to lack of knowledge and aleatory uncertainties due to the stochastic aspects of the performance of the firefighting means are simultaneously taken into account. The MCDET-FDS simulations provided a huge spectrum of fire sequences, each associated with a conditional occurrence probability at each point in time. These results were used to derive probabilities of damage states based on failure criteria considering high temperatures of safety related targets and critical exposure times. The influence of epistemic uncertainties on the resulting probabilities was quantified. The paper describes the steps of the IDPSA and presents a selection of results. Focus is laid on the consideration of epistemic and aleatory uncertainties. Insights and lessons learned from the analysis are discussed.

Citation: Martina Kloos, Joerg Peschke, "Improved Modelling and Assessment of the Performance of Firefighting Means in the Frame of a Fire PSA", Science and Technology of Nuclear Installations, vol. 2015, Article ID 238723, 10 pages, 2015. https://doi.org/10.1155/2015/238723.

Copyright: © 2015 by Authors. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
INTRODUCTION

IDPSA—frequently also called dynamic PSA—can be regarded as a complementary analysis to the classical deterministic (DSA) and probabilistic (PSA) safety analyses [1, 2]. It makes extensive use of a deterministic dynamics code and applies advanced methods for an improved modelling and probabilistic assessment of complex systems with significant interactions between a process, hardware, software, firmware, and human actions [3]. An IDPSA is particularly suitable in the frame of a fire PSA, since sequences of a fire interacting with the means to be applied for firefighting can be realistically modelled while aleatory uncertainties due to the stochastic aspects of the performance of the firefighting means are simultaneously taken into account. Besides aleatory uncertainties, epistemic uncertainties can be considered as well. They may refer to parameters of the applied deterministic dynamics code and to the reliability parameters used to quantify the stochastic performance of the firefighting means.

An appropriate tool to conduct an IDPSA is MCDET (Monte Carlo Dynamic Event Tree), which allows for performing Monte Carlo (MC) simulation, the Dynamic Event Tree (DET) approach, or a combination of both [4, 5]. Since MCDET can in principle be coupled to any deterministic dynamics code, the open source and freely available code FDS (Fire Dynamics Simulator) from NIST [6] was selected for fire simulation. What makes MCDET particularly useful for a fire safety analysis is its Crew Module, which allows human actions such as those applied for firefighting to be considered as a time-dependent process [7, 8] that can interact with the process modelled by the dynamics code combined with MCDET, in this case FDS.

In the past, MCDET was already applied to analyse and assess the plant behaviour during a station black-out scenario with power supply recovery [4]. In that application, MCDET was combined with the code MELCOR (version 1.8.5, [9]) for integrated severe accident simulation. In another
application, MCDET was coupled to the thermal-hydraulics code ATHLET (mod 2.0, [10]) to assess the emergency operating procedure “Secondary Side Bleed and Feed” [7]. This procedure is to be employed in a pressurized water reactor (PWR) to achieve the protection goal of steam generator injection after the loss of feed-water supply. The fire event selected to be analysed was assumed to occur in a compartment of a German reference nuclear power plant (NPP). The main question to be answered by the IDPSA was whether the plant specific firefighting means to be applied in case of a fire are able to protect those structures, systems, and components (SSC) in the compartment which are important to nuclear safety. Therefore, the most important analysis result was the probability of safety related SSC to be damaged by the fire. The influence of epistemic uncertainties on the probability was quantified. Section 2 of this paper gives an overview on the methods implemented in MCDET. It is explained how these methods can be used to treat the aleatory and epistemic uncertainties of an IDPSA and how the influences of both types of uncertainties can be quantified. Details on the considered fire event, the plant specific firefighting means and on the modelling assumptions can be found in Section 3. The steps of the analysis and a selection of results are described in Section 4. Conclusions and lessons learned are presented in Section 5.
METHODS IMPLEMENTED IN MCDET The tool MCDET allows for performing Monte Carlo (MC) simulation, the Dynamic Event Tree (DET) approach, or a combination of both. How these methods can be used to consider aleatory uncertainties and to quantify their influence on the results of a deterministic dynamics code is described in Section 2.1. The method to handle epistemic uncertainties in addition to aleatory uncertainties and to get a quantification of their influence is topic of Section 2.2.
Consideration of Aleatory Uncertainties Coupled with a deterministic dynamics code such as the FDS code, the tool MCDET can perform Monte Carlo (MC) simulation, the Dynamic Event Tree (DET) approach, or a combination of both [4, 5]. The DET approach is quite useful, if rare events like, for instance, the failures of safety systems which generally occur with small probabilities have to be considered. The first tool presented in literature which applied
the DET approach is DYLAM [11, 12]. Other tools using the DET approach are, for instance, ADS-IDAC [13, 14], SCAIS [15, 16], ADAPT [17], and RAVEN [18]. The simulation of a DET starts with the calculation of a sequence running from the initial event until the occurrence of the first event for which aleatory uncertainties are to be taken into account (e.g., success/failure of a safety system). When this happens, a branching point is generated meaning that the calculations of all branches (alternative situations) which may arise at the corresponding point in time are launched, even those of low probabilities. For instance, at the point in time, when a safety system is demanded, both successful and failed operations of the system are considered and the corresponding simulation processes are launched. Each time when another event subjected to aleatory uncertainty occurs during the calculation of a branch, another branching point is generated and the simulations of the new branches are launched. With MCDET, a conditional occurrence probability is assigned to each branch constructed in the course of a DET simulation. Multiplication of the conditional probabilities of all branches which made up a whole sequence finally gives the sequence probability. The probabilities of all sequences of a DET in general sum up to 1. If a probabilistic cut-off criterion was applied, the sum is smaller than 1, because all sequences with a conditional probability less than a given threshold value are ignored. The DET approach avoids repeated calculations of dynamic situations shared by different sequences. Except for the first (root) sequence, any other sequence is calculated only from the time on where a corresponding branching occurs. The past history of a sequence is given by the parent sequence from which the sequence branches off, then, by the parent sequence of the parent sequence and so on. One drawback of the DET approach is that a continuous variable like the timing of an event (e.g., the failure of a passive component) has to be discretized, if it is subjected to aleatory uncertainty. A coarse discretization would provide less accurate results. A detailed time discretization would lead to an exponential explosion of the number of branches. The accuracy of results derived from a more or less detailed discretization is difficult to quantify. To overcome this difficulty, MCDET allows for applying a combination of MC simulation and the DET approach which can adequately handle the aleatory uncertainty of any discrete or continuous variables and provide output data appropriate for quantifying the accuracy of the results, for instance, in terms of confidence intervals.
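A toy Python sketch of the branching logic described above is given below; it is not the MCDET implementation. It expands a small event tree breadth-first, multiplies the conditional branch probabilities along each path, and applies a probabilistic cut-off, so the retained sequence probabilities may sum to slightly less than 1. The systems, outcomes, and probabilities are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Sequence:
    events: list = field(default_factory=list)   # ordered event labels, e.g. "S1=ok"
    prob: float = 1.0                             # product of conditional branch probabilities

def expand_det(branchings, cutoff=1e-6):
    """Expand a dynamic event tree breadth-first.
    branchings: list of (event_name, {outcome: conditional_probability}) in order of occurrence.
    Branches whose cumulative probability falls below 'cutoff' are truncated."""
    sequences = [Sequence()]
    for event, outcomes in branchings:
        new_sequences = []
        for seq in sequences:
            for outcome, p in outcomes.items():
                child = Sequence(seq.events + [f"{event}={outcome}"], seq.prob * p)
                if child.prob >= cutoff:
                    new_sequences.append(child)
        sequences = new_sequences
    return sequences

# Hypothetical branchings: demands of systems S1, S2 and a human action HA1
branchings = [
    ("S1", {"ok": 0.99, "failed": 0.01}),
    ("HA1", {"correct": 0.95, "error": 0.05}),
    ("S2", {"ok": 0.995, "failed": 0.005}),
]
dets = expand_det(branchings, cutoff=1e-5)
for s in sorted(dets, key=lambda s: -s.prob):
    print(f"{s.prob:.5f}  {' -> '.join(s.events)}")
print("sum of retained probabilities:", sum(s.prob for s in dets))
```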
With MCDET coupled to a dynamics code, each DET is constructed conditional on a set of values randomly sampled for the continuous aleatory variables. Each new set of values for the continuous aleatory variables contributes to the generation of another DET. The result of this method is a sample of individual DETs, each constructed from a distinct set of values sampled for the continuous aleatory variables. The sampling of values for the continuous aleatory variables is not performed a priori, that is, before the calculation of a DET is launched; it is performed when needed in the course of the calculation. In this way, it is possible to treat not only the influence of aleatory uncertainties on the dynamics as calculated by the code but also the influence of the dynamics on the aleatory uncertainties and to consider, for instance, a higher failure rate of a component if a high temperature seriously aggravates the condition of that component.

From the conditional probabilities assigned to each sequence and the corresponding curves of safety related output quantities calculated by the dynamics code, the post-processing modules of MCDET can calculate the conditional DET-specific and the unconditional scenario-specific distributions of safety related quantities. The scenario-specific distributions are the means over the corresponding DET-specific distributions. The accuracy of the resulting mean distributions and probabilities can be quantified in terms of 90% or 95% confidence intervals.

Figure 1 comprises two schematic illustrations of the sample of DETs generated by MCDET. In Figure 1(a), each DET of the sample is represented in the time-event space with focus on the events subjected to aleatory uncertainty (e.g., failure on demand of the systems S1, S2, and S3, an error in human action HA1, or the failure of a passive component PC). Timing and order of events may differ from DET to DET due to the influence of the different values sampled for the MC simulation. Associated with each sequence of events are the process state at each point in time as calculated by the applied dynamics code and the corresponding conditional probability. In Figure 1(a), the state of a process variable P and the corresponding probability are exemplarily considered at the end of problem time. The probabilities over the range of P (e.g., from 0 to 10) obtained from all sequences of a DET constitute a distribution at each point in time (e.g., at the end of problem time as shown in Figure 1(a)). Figure 1(b) shows each DET in the time-state space, where the focus is laid on the temporal evolution of the process variable P for each sequence of events.
Figure 1: Sample of DETs represented in the time-event space (a) and in the time-state space (b).
MCDET also allows for performing pure MC simulation to consider aleatory uncertainties of discrete or continuous variables. Regardless of whether MC simulation, the DET approach, or a combination of both is applied, the probabilities of damage states (e.g., the probability of safety related SSC being damaged by a fire) can be directly related to those process quantities of the dynamics code which are used to define failure criteria (e.g., high target temperatures and exposure times).
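As an illustration of how a failure criterion of this kind can be evaluated on a calculated temperature history, the sketch below checks whether a cable temperature curve stays above a threshold for longer than an associated critical period (the thresholds and durations mirror the reference values later listed in Table 1). The temperature history, the time step, and the function name are assumptions; this is not the MCDET post-processing code.

```python
import numpy as np

def cable_damaged(times, temps, criteria):
    """A failure criterion of the kind described above: the cable is considered damaged if its
    temperature stays at or above a threshold for longer than the associated critical period.
    criteria: list of (threshold_degC, critical_duration_s)."""
    dt = np.diff(times)
    for threshold, critical_duration in criteria:
        above = temps[:-1] >= threshold           # piecewise-constant exposure accounting
        longest, current = 0.0, 0.0
        for step, hot in zip(dt, above):
            current = current + step if hot else 0.0
            longest = max(longest, current)
        if longest >= critical_duration:
            return True
    return False

# Hypothetical temperature history of an I&C cable (one point every 10 s)
times = np.arange(0, 1200, 10.0)
temps = 20 + 160 * np.clip(times / 600.0, 0, 1)          # ramps to 180 degC and stays there
criteria = [(145.0, 420.0), (150.0, 300.0), (160.0, 215.0), (170.0, 160.0), (180.0, 80.0)]
print(cable_damaged(times, temps, criteria))
```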
Consideration of Epistemic Uncertainties Like with continuous aleatory uncertainties, the influence of epistemic uncertainties is considered by Monte Carlo (MC) simulation. In a first step, the values of the parameters subjected to epistemic uncertainty (epistemic variables) are sampled. Then, for each element of that epistemic sample, a sample of individual DETs is generated. Each DET is constructed from the values of the epistemic sample element combined with respective values sampled for the continuous aleatory variables.
The approach applied to quantify the influence of epistemic uncertainties needs at least two distinct DETs to be simulated per vector of the epistemic sample. If this condition is fulfilled, the simulation results can be used to quantify the overall influence of the epistemic uncertainties on a representative value R of the resulting scenario-specific probability distribution. A useful representative value is the expected value of the probability distribution, especially if probabilities such as the probability of a damage state are to be provided as IDPSA results. These probabilities can be represented as expected values of appropriately chosen Bernoulli distributions. For instance, the probability P(X > x) of a variable X exceeding the value x is the expected value E(B) of the Bernoulli variable B with B = 1 if X > x and B = 0 if X ≤ x.
The expected value R_Ep of the scenario-specific distribution of a variable Y (Section 2.1) per epistemic vector is the mean over the expected values R_Ep,Al of the DET-specific probability distributions of the respective epistemic vector (Formula (1)). R_Ep varies as a function of the epistemic variables Ep, while R_Ep,Al varies as a function of both the epistemic variables Ep and the continuous aleatory variables Al:

$$R_{\text{Ep}} = E(Y \mid \text{Ep}) = E(R_{\text{Ep,Al}} \mid \text{Ep}) \qquad (1)$$

where E(· | Ep) denotes the conditional expectation of a variable (Y or R_Ep,Al) as a function of the epistemic variables Ep. Formula (1) is true due to the following relationship:

$$E(Y \mid \text{Ep}) = E\bigl(E(Y \mid \text{Ep},\text{Al}) \mid \text{Ep}\bigr) = E(R_{\text{Ep,Al}} \mid \text{Ep}) \qquad (2)$$

where E(Y | Ep, Al) denotes the conditional expectation of variable Y as a function of the epistemic (Ep) and continuous aleatory (Al) variables.

Formula (2) derives from the known equation for conditional expectations:

$$E(Y) = E\bigl(E(Y \mid X)\bigr) \qquad (3)$$

where X and Y denote two variables, E(Y) is the expectation of Y, and E(Y | X) the conditional expectation of Y as a function of X.

A quantification of the epistemic uncertainty of the expected value R_Ep of the scenario-specific distribution can be obtained by estimating the expectation E(R_Ep) and the variance Var(R_Ep) of R_Ep and by using these estimators, for instance, to derive the parameters of a distribution supposed to be appropriate for R_Ep. If R_Ep represents a probability, the Beta distribution might be an adequate distribution assumption.
The expectation E(R_Ep) can be estimated as the arithmetic mean over the expected values R_Ep,Al of the DET-specific probability distributions. This is based on the equation for conditional expectations (Formulae (1) and (3)):

$$\hat{E}(R_{\text{Ep}}) = \frac{1}{n}\sum_{i=1}^{n} R_{\text{Ep,Al},i} \qquad (4)$$

The variance Var(R_Ep) can be calculated from the following known equation:

$$\operatorname{Var}(R_{\text{Ep}}) = \operatorname{Var}(R_{\text{Ep,Al}}) - E\bigl(\operatorname{Var}(R_{\text{Ep,Al}} \mid \text{Ep})\bigr) \qquad (5)$$

where Var(·) denotes the variance of a variable (R_Ep,Al or R_Ep = E(R_Ep,Al | Ep)) and E(Var(R_Ep,Al | Ep)) is the expectation of the conditional variance of R_Ep,Al given Ep.

The estimators of the mean E(R_Ep) and variance Var(R_Ep) can also be used in well-known inequalities from statistics such as those of Chebyshev (Formula (6)) or Cantelli (Formula (7)). These inequalities can then be used to quantify the epistemic uncertainty of R_Ep in terms of conservative estimations, for instance, of a 95% interval or of the 5%- or 95%-quantiles:

$$P\bigl(\lvert R_{\text{Ep}} - E(R_{\text{Ep}})\rvert \ge k\,\sigma(R_{\text{Ep}})\bigr) \le \frac{1}{k^{2}} \qquad (6)$$

$$P\bigl(R_{\text{Ep}} - E(R_{\text{Ep}}) \ge k\,\sigma(R_{\text{Ep}})\bigr) \le \frac{1}{1 + k^{2}} \qquad (7)$$

Another alternative to quantify the epistemic uncertainty of R_Ep is the calculation of two- or one-sided (95%; 95%) tolerance limits [19]. The only requirement of this alternative is a minimum number of runs which account for the variations due to epistemic uncertainties [20]. For instance, at least 59 values for R_Ep must be available to quantify the upper one-sided (95%; 95%) tolerance limit.
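The following minimal Python sketch illustrates the nested treatment of epistemic and aleatory uncertainties and the estimators discussed above: an outer loop samples an epistemic variable, an inner Monte Carlo loop estimates the conditional damage probability R_Ep, and the sample of R_Ep values is summarized by its mean, variance, and a Chebyshev-type interval. The toy failure model (a normally distributed peak temperature compared with a uniformly distributed failure temperature) is an assumption made only for illustration, and the aleatory sampling noise that Formula (5) removes is neglected for simplicity.

```python
import numpy as np

rng = np.random.default_rng(42)

def damage_indicator(fail_temp, peak_temp):
    """Bernoulli variable B: 1 if the (aleatory) peak cable temperature exceeds the
    (epistemic) failure temperature, 0 otherwise."""
    return 1.0 if peak_temp > fail_temp else 0.0

n_epistemic, n_aleatory = 100, 200
r_ep = np.empty(n_epistemic)          # R_Ep: damage probability per epistemic vector

for i in range(n_epistemic):
    # Epistemic sample, e.g. cable failure temperature ~ U(145, 195) degC (Table 1 style)
    fail_temp = rng.uniform(145.0, 195.0)
    # Aleatory sample, e.g. peak temperature at the cable driven by the stochastic fire evolution
    peak_temps = rng.normal(loc=165.0, scale=15.0, size=n_aleatory)
    r_ep[i] = np.mean([damage_indicator(fail_temp, t) for t in peak_temps])

mean_r = r_ep.mean()                                  # estimate of E(R_Ep), cf. Formula (4)
var_r = r_ep.var(ddof=1)                              # spread of R_Ep over the epistemic sample
# (for simplicity, the finite aleatory sample noise that Formula (5) subtracts is neglected)
k = np.sqrt(20.0)                                     # Chebyshev: at least 95% within k sigma
print(f"E(R_Ep) ~ {mean_r:.3f}, Var(R_Ep) ~ {var_r:.4f}")
print(f"Chebyshev 95% band: [{mean_r - k*np.sqrt(var_r):.3f}, {mean_r + k*np.sqrt(var_r):.3f}]")
```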
FIRE EVENT, FIREFIGHTING MEANS, AND MODELLING ASSUMPTIONS

The fire event considered in the analysis and the assumptions of the corresponding FDS model are described in Section 3.1. An overview of the plant specific firefighting means with emphasis on human
actions and information on how these means were modelled are given in Section 3.2.
Fire Event and Modelling Assumptions

The fire was assumed to occur in a compartment of a NPP containing cooling and filtering equipment for pump lubrication oil as well as electrical cables routed below the ceiling. Since these cables carry out safety related functions, one aim of the analysis was to find out whether these cables can be sufficiently protected against the fire by the plant specific firefighting means. It was assumed that a malfunction of the oil-heating system designated to heat up the pump lubrication oil in the start-up phase of the NPP causes an ignition of the oil.

The dimensions of the compartment where the fire was supposed to start are about w × l × h = 8 m × 6.2 m × 6 m. The compartment walls are made of concrete. The compartment is divided into a lower and an upper level by a steel platform at 2.4 m height. This is where the electrical oil heater is located and where the fire was assumed to start (Figure 2). The steel platform can be reached by steel stairs. The three compartment doors lead to the lower level of the compartment. It was assumed that one of these doors might (randomly) be left in the open position; the corresponding probability was considered as an epistemic uncertainty (Table 1).

The mechanical air exchange by an air intake and an exhaust vent was considered to be 800 m3/h. The air inlet duct (violet in Figure 2) has one diffusor above the fire and one leading to the lower level. The outlet duct (yellow in Figure 2) sucks air from the upper layer through two diffusors which can be closed by the fire damper. The fire damper at the exhaust vent was assumed to close after melting of a fusible link at 72°C. The probability of this mechanism failing was considered as an epistemic uncertainty (Table 1). If the outlet damper is closed, the mechanical air supply into the room was considered to be reduced to 400 m3/h. This value was chosen to account for increased pressure losses if the inlet air leaves the room via other leakages.
Table 1: Epistemic parameters and specified probability distributions

Epistemic parameter | Reference value | Distribution | Distribution parameters
Value of optical density at which emerging smoke is visible under fire compartment door [1/m] | 0.3 | uniform | Min = 0.2, Max = 0.4
Response time index for activation temperature of fire dampers | 125 | uniform | Min = 50, Max = 200
Threshold of optical density D below which fire compartment can be entered [1/m] | 0.4 | uniform | Min = 0.3, Max = 0.5
Fraction of fuel mass (oil) converted into smoke | 0.097 | uniform | Min = 0.095, Max = 0.099
Time to reach 1 MW heat release rate [s] | 425 | uniform | Min = 250, Max = 700
Conductivity of cable [W/m*K] | 0.275 | uniform | Min = 0.15, Max = 0.4
Specific heat of cable [kJ/kg*K] | 1.225 | uniform | Min = 0.95, Max = 1.5
Depth of cable isolation [m] | 0.0016 | uniform | Min = 0.0012, Max = 0.002
Cable density [kg/m3] | 1131 | uniform | Min = 833, Max = 1430
Specific heat of concrete [kJ/kg*K] | 0.65 | uniform | Min = 0.5, Max = 0.8
Conductivity of concrete [W/m*K] | 1.75 | uniform | Min = 1.4, Max = 2.1
Thickness of concrete walls in the fire compartment [m] | 0.37 | uniform | Min = 0.32, Max = 0.42
Probability that a fire door falsely stays open | 0.005 | beta | α = 1.5, β = 236.5
Probability that fire damper fails to close | 0.01 | beta | α = 1.5, β = 117.5
Failure temperature of I&C cables [°C] | 170 | uniform | Min = 145, Max = 195
Critical time period [s] with I&C cable temperature ≥145°C | 420 | uniform | Min = 360, Max = 480
Critical time period [s] with I&C cable temperature ≥150°C | 300 | uniform | Min = 240, Max = 360
Critical time period [s] with I&C cable temperature ≥160°C | 215 | uniform | Min = 180, Max = 250
Critical time period [s] with I&C cable temperature ≥170°C | 160 | uniform | Min = 120, Max = 200
Critical time period [s] with I&C cable temperature ≥180°C | 80 | uniform | Min = 40, Max = 120
Figure 2: Snapshot of the compartment layout by FDS.
The fire simulation was performed with the Fire Dynamics Simulator (FDS) 6.0 [6]. FDS is a large-eddy simulation code for low-speed flows with emphasis on smoke and heat transport from fires. As input to FDS, the fire compartment was discretized in one mesh with a grid resolution of 0.2 m in all three directions. The evolution of the fire depends on the leakage rate of the oil and was considered to be linear over time. The characteristic time to reach a 1 MW heat release rate was varied from 250 s to 700 s (Table 1).

Due to the assumed fire, the electrical cables below the ceiling are exposed to hot smoke and radiation. The thermal penetration of the cable material was described by the model for thermally induced electrical failure (THIEF) implemented in FDS. The THIEF model predicts the temperature of the inner cable jacket under the assumption that the cable is a homogeneous cylinder with one-dimensional heat transfer. The thermal properties—conductivity, specific heat, and density—of the assumed cable are treated as independent of temperature. In reality, both the thermal conductivity and the specific heat of polymers are temperature-dependent. In the analysis, the conductivity, specific heat, density, and depth of the cable insulation were considered as uncertain parameters with relevant influence (Table 1).
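To give a feel for the kind of calculation the THIEF model performs, the sketch below solves one-dimensional heat conduction through the cable insulation treated as a plane slab (a simplification of the cylindrical THIEF geometry) using the reference cable properties from Table 1. The exposure (gas) temperature history, the grid, and the function name are assumptions for illustration only; this is not the FDS/THIEF implementation.

```python
import numpy as np

# Reference cable properties from Table 1 (conductivity W/m*K, specific heat kJ/kg*K -> J/kg*K,
# density kg/m3, insulation depth m); the exposure temperature history is an assumed input.
k, cp, rho, depth = 0.275, 1225.0, 1131.0, 0.0016
alpha = k / (rho * cp)                       # thermal diffusivity

nx = 11
dx = depth / (nx - 1)
dt = 0.05
assert alpha * dt / dx**2 <= 0.5, "explicit scheme stability limit violated"

def inner_jacket_temperature(t_end, gas_temperature):
    """Simplified THIEF-like model: 1-D conduction through the insulation treated as a slab.
    The exposed surface follows the local gas temperature; the inner surface is adiabatic.
    Returns the inner-jacket temperature history."""
    T = np.full(nx, 20.0)                    # initial temperature, degC
    history = []
    for step in range(int(t_end / dt)):
        t = step * dt
        T[0] = gas_temperature(t)            # exposed-surface boundary condition
        T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
        T[-1] = T[-2]                        # adiabatic inner boundary
        history.append((t, T[-1]))
    return history

# Hypothetical hot-gas-layer temperature: linear rise to 400 degC over 425 s, then constant
gas = lambda t: 20.0 + (400.0 - 20.0) * min(t / 425.0, 1.0)
hist = inner_jacket_temperature(600.0, gas)
print(f"inner jacket temperature after 600 s: {hist[-1][1]:.1f} degC")
```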
Firefighting Means If equipment and procedures work as intended, firefighting is a rather short process, because the compartment where the fire is assumed to occur
is equipped with a fixed fire extinguishing system which suppresses the fire with a sufficiently large amount of water after actuation by the fire detection and alarm system. However, if the automatic actuation of the fixed fire extinguishing system fails, the firefighting process is complex and essentially depends on the manual firefighting means performed by the plant personnel in charge.

There are three states of the fire detection and alarm system which can be assumed as decisive for the manual firefighting means, namely, at least two detectors, only one detector, or none of the detectors indicating an alarm signal to the control room. If at least two fire detectors send an alarm signal, the control room operator (shift leader) immediately instructs the shift fire patrol and the on-site fire brigade to inspect the compartment and to perform the necessary steps for fire suppression. If there is a signal from only one detector, the signal might be a faulty or spurious one (e.g., due to dust, steam, etc.). This is why the fire patrol, trained for fighting incipient fires, is instructed to inspect the fire compartment and to verify the fire. If the fire patrol verifies the fire, the shift leader, who is immediately informed, calls the on-site fire brigade. In the meantime, the fire patrol tries to suppress the fire either with a portable fire extinguisher or by manually actuating the fixed fire extinguishing system from outside the fire compartment. If none of the fire detectors sends an alarm, the detection of the fire depends on the shift patrol inspecting the compartment at a random time once during a shift.

The fire patrol usually is the first person who arrives at the fire compartment. His/her success in suppressing the fire with a portable fire extinguisher was assumed to depend on the local optical density D of the smoke at 3.20 m height (0.80 m above the level of the platform). For optical densities below D = 0.1 m−1, it was assumed that the fire patrol can detect the fire and start to suppress it by means of a portable fire extinguisher after a delay of 10 s. For 0.1 m−1 2955 sec) in new variables can be also interpreted in the original coordinate system.

Figures 5 and 6 illustrate the results of the clustering analysis for the sequence [INI SIS IGNI] with a uniform grid. The cells that contain failure scenarios are grouped into a cluster representing the failure domain. For each cell in the cluster the algorithm calculates the corresponding probability of failure (Figure 7).
Figure 5: Cluster representation of the failure domain (red) and safety domain (green) for the sequence [INI SIS IGNI], axes scaled between 0 and 1.
Figure 6: Cluster representation of the failure domain (red) and safety domain (green) for the sequence [INI SIS IGNI] (SIS∗, IGNI∗: in coordinate system defined by principal components of the dataset), axes scaled between 0 and 1.
Figure 7: Containment failure probability distribution for sequence [INI SIS IGNI].
Different probability values in different parts of the failure domain correspond to different H2 concentrations and the respective probability distributions for the time delay of the ignition event [8]. For instance, in Figure 8, the H2 concentration is below the ignition limit and above the inflammability limit; therefore the time delay before the first combustion is uniformly distributed between 0 and ΔTmax(H2) (see (7)). In Figure 9, the H2 concentration is above both its inflammability and ignition limits; therefore, according to [8], the time delay before combustion is uniformly distributed between 0 and 20 min.
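A small sketch of how such regime-dependent ignition delays can be sampled is given below. The inflammability and ignition limits, the value of ΔTmax(H2), and the function name are placeholders, not the values specified in [8].

```python
import numpy as np

rng = np.random.default_rng(7)

def sample_ignition_delay(h2_fraction, inflammability_limit=0.04, ignition_limit=0.10,
                          dt_max_of_h2=1800.0):
    """Sample the time delay before the first H2 combustion, following the regimes described
    above: below the inflammability limit no combustion occurs; between the inflammability and
    ignition limits the delay is uniform on [0, dT_max(H2)]; above the ignition limit it is
    uniform on [0, 20 min]. Limits and dT_max here are placeholders only."""
    if h2_fraction < inflammability_limit:
        return np.inf                                    # no combustion possible
    if h2_fraction < ignition_limit:
        return rng.uniform(0.0, dt_max_of_h2)            # flammable but not auto-igniting regime
    return rng.uniform(0.0, 20.0 * 60.0)                 # above the ignition limit, within 20 min

delays = [sample_ignition_delay(x) for x in (0.02, 0.07, 0.12)]
print([f"{d:.0f} s" if np.isfinite(d) else "no ignition" for d in delays])
```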
Figure 8: H2 molar fraction (%) and H2 inflammability and ignition limits (%).
Figure 9: H2 molar fraction (%) and H2 inflammability and ignition limits (%).
The failure domain structure can be represented using the clustering data and a decision tree. To illustrate the approach and to provide a possibility to compare the failure domains presented in Figures 5 and 6, the results are visualized with decision trees. In this work we use a limited number of uncertain parameters for the sake of visual comparison of the data representation; however, the main advantage of the decision tree approach is the ability to represent complex failure domains with four or more uncertain parameters, when it is difficult to visualize results using other methods.

The complexity of a decision tree depends on the shape of the failure domain and on the level of detail (initial grid and refinement step). However, it is possible to prune the decision tree so that complexity and precision are kept at acceptable levels. Pruning is the process of reducing a tree by turning some branch nodes into leaf nodes and removing the leaf nodes under the original branch [24]. Trees are pruned based on an optimal pruning scheme that first prunes the branches giving the least improvement in error cost. After computing an exhaustive tree, the algorithm eliminates nodes that do not contribute to the overall prediction, as decided by another essential ingredient, the cost of complexity. This measure is similar to other cost statistics, such as Mallows' Cp [25], which adds a penalty for increasing the number of parameters in a model [24].

The decision tree results for the sequence [INI SIS IGNI] indicate that containment failure is possible if the IGNI∗ event occurs in the time window between 1230.55 and 4444.07 sec (in the coordinate system defined by the principal components of the dataset). Depending on the timing of the occurrence of the events, H2 combustion within this time window can challenge containment integrity. The pruning (cutting) of the decision trees is done at the point where further refinement will not improve the results but would, on the other hand, increase the complexity of the decision tree. The decision trees (Figures 10 and 11) are built with the data set in both the original coordinate system and the coordinate system defined by its principal components (Figures 5 and 6).
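The sketch below mimics this workflow on synthetic data: a made-up failure rule stands in for the containment-failure criterion, a decision tree with minimal cost-complexity pruning (scikit-learn's ccp_alpha, playing the role of the pruning scheme described above) is fitted in the original event-time coordinates, and a second tree is fitted in the coordinate system defined by the principal components of the data. The data, the failure rule, and the pruning parameter are assumptions for illustration only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Hypothetical post-processed IDPSA data: event timings (s) for SIS actuation and H2 ignition,
# with an invented failure rule standing in for the containment-failure criterion.
n = 2000
sis = rng.uniform(0.0, 7000.0, n)
igni = rng.uniform(0.0, 7000.0, n)
failure = ((igni > 1200.0) & (igni < 4500.0) & (sis > 500.0)).astype(int)
X = np.column_stack([sis, igni])

# Decision tree in the original coordinates, with cost-complexity pruning (ccp_alpha)
tree = DecisionTreeClassifier(max_depth=4, ccp_alpha=0.001).fit(X, failure)
print(export_text(tree, feature_names=["SIS", "IGNI"]))

# The same kind of tree fitted in the coordinate system of the principal components of the data
X_pc = PCA(n_components=2).fit_transform(X)
tree_pc = DecisionTreeClassifier(max_depth=4, ccp_alpha=0.001).fit(X_pc, failure)
print(export_text(tree_pc, feature_names=["SIS*", "IGNI*"]))
```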
Figure 10: Decision tree fitted into clustering results data for the sequence [INI SIS IGNI] (sec) with pruning.
Figure 11: Decision tree fitted into clustering results data for the sequence [INI SIS IGNI] (sec) with pruning (SIS∗, IGNI∗: in coordinate system defined by principal components of the dataset).
Decision Support Model

Let us consider as an example the sequence [INI SIS CHRS IGNI]. Figure 12 shows the cluster representation of the failure domain for this sequence.
Figure 12: Cluster representation of the failure domain (red) and safety domain (green) for the sequence [INI SIS CHRS IGNI]. Axes scaled between 0 and 1.
When it comes to decision support, the H2 ignition event (IGNI) in this sequence is an entirely stochastic event; that is, the operator has no control over it. On the contrary, the water injection (SIS) and containment spray (CHRS) systems can be actuated by the operator at a specified moment in time and are therefore controllable. Decision trees can be used to build a decision support model based on the controllable events; that is, decision trees can help us to find an answer to the question "what can be done in case of a LOCA initiating event to avoid containment failure?". Figure 13 illustrates the failure domain for the sequence [INI SIS CHRS IGNI] in terms of the controllable events SIS and CHRS. Based on the clustering results we build a decision tree in the variables representing the time delays for actuation of the safety systems (SIS and CHRS) and the corresponding outcome (Figures 14 and 15). The obtained results indicate that for the sequence [INI SIS CHRS IGNI] containment failure can be avoided in case of early actuation of the water injection and containment spray systems (within roughly 492 seconds) or in case of late activation of the containment spray (beyond roughly 4000–6944 sec, depending on the actuation time of the water injection).
Figure 13: Cluster representation of the failure domain (red) and safety domain (green) for the sequence [INI SIS CHRS IGNI] in terms of controllable events, axes scaled between 0 and 1.
Figure 14: Decision tree fitted into clustering results data for the sequence [INI SIS CHRS IGNI] (sec) for controllable variables.
Figure 15: Decision tree fitted into clustering results data for the sequence [INI SIS CHRS IGNI] (sec) for controllable variables (SIS∗, CHRS∗: in coordinate system defined by principal components of the dataset).
DISCUSSION

In this work we present an approach for grouping and classification of typical "failure/safe" scenarios identified using IDPSA methods. This approach allows the classification of scenarios that are directly amenable to classical PSA as well as scenarios where the order of events, timing, and parameter uncertainty affect the system evolution and determine the violation of safety criteria. We use grid based clustering with AMR and decision trees for the characterization of the failure domain. Clustering analysis is used to represent the failure domain as a finite set of representative scenarios. Decision trees are used to visualize the structure of the failure domain. Decision trees can be applied to cases where four or more uncertain parameters are included in the analysis and it is difficult to visualize the results in three-dimensional space. The proposed approach helps to present the results of the IDPSA analysis in a transparent and comprehensible form, amenable to consideration in the decision-making process. Useful insights into the complex accident progression logic can be obtained and used for developing understanding and mitigation strategies for plant accidents, including severe accidents.

The insights can be employed to reduce unnecessary conservatism and to point out areas with insufficient conservatism in deterministic analysis. The results of the analysis can also be used to facilitate the connection between classical PSA and IDPSA analysis.
ACKNOWLEDGMENTS This study was supported by the Swedish Radiation Safety Authority (SSM). The authors are grateful to Dr. Wiktor Frid (SSM) for very useful discussions.
REFERENCES
1. Y. Adolfsson, J.-E. Holmberg, G. Hultqvist, P. Kudinov, and I. Männistö, "Proceedings of the deterministic/probabilistic safety analysis workshop October 2011," Research Report VTT-R-07266-11, VTT, Espoo, Finland, 2011.
2. T. Aldemir, "A survey of dynamic methodologies for probabilistic safety assessment of nuclear power plants," Annals of Nuclear Energy, vol. 52, pp. 113–124, 2013.
3. S. Hess, "Framework for risk-informed safety margin characterization," EPRI Report 1019206, EPRI, Palo Alto, Calif, USA, 2009.
4. P. E. Labeau, C. Smidts, and S. Swaminathan, "Dynamic reliability: towards an integrated platform for probabilistic risk assessment," Reliability Engineering and System Safety, vol. 68, no. 3, pp. 219–254, 2000.
5. E. Zio and P. Baraldi, "Identification of nuclear transients via optimized fuzzy clustering," Annals of Nuclear Energy, vol. 32, no. 10, pp. 1068–1080, 2005.
6. D. Mercurio, L. Podofillini, E. Zio, and V. N. Dang, "Identification and classification of dynamic event tree scenarios via possibilistic clustering: application to a steam generator tube rupture event," Accident Analysis and Prevention, vol. 41, no. 6, pp. 1180–1191, 2009.
7. D. Mandelli, Scenario Clustering and Dynamic PRA, Nuclear Engineering Department, The Ohio State University, 2011.
8. E. Raimond, "SARNET workpackage 5.3—level 2 PSA specification of a benchmark exercise relative to hydrogen combustion for application of dynamic reliability methods, IRSN/DSR/SAGR/FT.2005-154," in Network of Excellence for a Sustainable Integration of European Research on Severe Accident Phenomenology, IRSN, 2005.
9. S. Galushin and P. Kudinov, "An approach to grouping and classification of scenarios in integrated deterministic-probabilistic safety analysis," in Proceedings of the Probabilistic Safety Assessment and Management (PSAM '12), Honolulu, HI, USA, June 2014.
10. S. Tuffery, Data Mining and Statistics for Decision Making, Wiley Series in Computational Statistics, John Wiley & Sons, Chichester, UK, 2011.
11. I. T. Jolliffe, Principal Component Analysis, Springer Series in Statistics, Springer, New York, NY, USA, 2nd edition, 2002.
12. Ilango and V. Mohan, "A survey of grid based clustering algorithms," International Journal of Engineering Science and Technology, vol. 2, no. 8, 2010.
13. T. M. Mitchell, Machine Learning, McGraw-Hill Series in Computer Science, McGraw-Hill, New York, NY, USA, 1997.
14. J. Han, H. Cheng, D. Xin, and X. Yan, "Frequent pattern mining: current status and future directions," Data Mining and Knowledge Discovery, vol. 15, no. 1, pp. 55–86, 2007.
15. O. Z. Maimon and L. Rokach, Data Mining and Knowledge Discovery Handbook, Springer, New York, NY, USA, 2005.
16. M. Ester, H.-P. Kriegel, J. Sander, and X. Xu, "A density-based algorithm for discovering clusters in large spatial databases with noise," in Proceedings of the 2nd International Conference on Knowledge Discovery and Data Mining (KDD '96), pp. 226–231, AAAI Press, 1996.
17. L. Fausett, Fundamentals of Neural Networks: Architectures, Algorithms, and Applications, vol. 16 of Prentice Hall International Editions, Prentice Hall, Englewood Cliffs, NJ, USA, 1994.
18. Y. Vorobyov and T. N. Dinh, "A genetic algorithm-based approach to dynamic PRA simulation," in Proceedings of the ANS PSA Topical Meeting—Challenges to PSA During the Nuclear Renaissance, American Nuclear Society, Knoxville, Tenn, USA, 2008.
19. W.-K. Liao, Y. Liu, and A. Choudhary, "A grid-based clustering algorithm using adaptive mesh refinement," in Proceedings of the 7th Workshop on Mining Scientific and Engineering Datasets, Lake Buena Vista, Fla, USA, 2004.
20. O. Z. Maimon and L. Rokach, Data Mining and Knowledge Discovery Handbook, Springer, New York, NY, USA, 2nd edition, 2005.
21. J. Han and M. Kamber, Data Mining: Concepts and Techniques, Morgan Kaufmann, San Francisco, Calif, USA, 2001.
22. L. Rokach and O. Maimon, "Top-down induction of decision trees classifiers—a survey," IEEE Transactions on Systems, Man and Cybernetics, Part C: Applications and Reviews, vol. 35, no. 4, pp. 476–487, 2005.
23. K. P. Soman, S. Diwakar, and V. Ajay, Data Mining Theory and Practice, PHI Learning Private Limited, New Delhi, India, 2006.
24. L. Breiman, J. Friedman, R. Olshen, and C. Stone, Classification and Regression Trees, CRC Press, Boca Raton, Fla, USA, 1984.
25. J. Neter, M. H. Kutner, C. J. Nachtsheim, and W. Wasserman, Applied Linear Statistical Models, Irwin, McGraw-Hill, Chicago, Ill, USA, 4th edition, 1996.
Chapter 9
Demonstration of Emulator-Based Bayesian Calibration of Safety Analysis Codes: Theory and Formulation
Joseph P. Yurko1,2, Jacopo Buongiorno1, and Robert Youngblood3

1 MIT, 77 Massachusetts Avenue, Cambridge, MA 02139, USA
2 FPoliSolutions, LLC, 4618 Old William Penn Highway, Murrysville, PA 15668, USA
3 INL, Idaho Falls, ID 83415-3870, USA
ABSTRACT System codes for simulation of safety performance of nuclear plants may contain parameters whose values are not known very accurately. New information from tests or operating experience is incorporated into safety codes by a process known as calibration, which reduces uncertainty in the output of the code and thereby improves its support for decision-making. The work reported here implements several improvements on classic calibration techniques afforded by modern analysis techniques. The key Citation: Joseph P. Yurko, Jacopo Buongiorno, Robert Youngblood, “Demonstration of Emulator-Based Bayesian Calibration of Safety Analysis Codes: Theory and Formulation”, Science and Technology of Nuclear Installations, vol. 2015, Article ID 839249, 17 pages, 2015. https://doi.org/10.1155/2015/839249. Copyright: © 2015 by Authors. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
innovation has come from development of code surrogate model (or code emulator) construction and prediction algorithms. Use of a fast emulator makes the calibration processes used here with Markov Chain Monte Carlo (MCMC) sampling feasible. This work uses Gaussian Process (GP) based emulators, which have been used previously to emulate computer codes in the nuclear field. The present work describes the formulation of an emulator that incorporates GPs into a factor analysis-type or pattern recognitiontype model. This “function factorization” Gaussian Process (FFGP) model allows overcoming limitations present in standard GP emulators, thereby improving both accuracy and speed of the emulator-based calibration process. Calibration of a friction-factor example using a Method of Manufactured Solution is performed to illustrate key properties of the FFGP based process.
INTRODUCTION

Propagating input parameter uncertainty through a nuclear reactor system code is a challenging problem due to the often nonlinear system response to the numerous parameters involved and to lengthy computational times, issues that compound when a statistical sampling procedure is adopted, since the code must be run many times. Additionally, the parameters are sampled from distributions that are themselves uncertain. Current industry approaches rely heavily on expert opinion for setting the assumed parameter distributions. Observational data are typically used to judge whether the code predictions follow the expected trends within reasonable accuracy. Altogether, these shortcomings lead to current uncertainty quantification (UQ) efforts relying on overly conservative assumptions, which ultimately hurt the economic performance of nuclear energy.

This work adopts a Bayesian framework that allows reducing the predictive uncertainty of the computer code by calibrating parameters directly to observational data; this process is also known as solving the inverse problem. Unlike the current heuristic calibration approach, Bayesian calibration is systematic and statistically rigorous, as it calibrates the parameter distributions to the data rather than simply tuning point values. With enough data, any biases from expert opinion in the starting parameter distributions can be greatly reduced. Multiple levels of data are easier to handle as well, since Integral and Separate Effect Test (IET and SET) data can be used simultaneously in the calibration process. However, implementing Bayesian calibration for safety analysis codes is very challenging. Because the posterior distribution cannot be obtained analytically, approximate Bayesian inference with sampling is
required. Markov Chain Monte Carlo (MCMC) sampling algorithms are very powerful and have become increasingly widespread over the last decade [1]. However, for even relatively fast computer models practical implementation of Bayesian inference with MCMC would simply take too long because MCMC samples must be drawn in series. As an example, a computer model that takes 1 minute to run but needs 105 MCMC samples would take about 70 days to complete. A very fast approximation to the system code is thus required to use the Bayesian approach. Surrogate models (or emulators) that emulate the behavior of the input/output relationship of the computer model but are computationally inexpensive allow MCMC sampling to be possible. An emulator that is 1000x faster than the computer model would need less than two hours to perform the same number of MCMC samples. As the computer model run time increases, the surrogate model becomes even more attractive because MCMC sampling would become impractically lengthy. Gaussian Process- (GP-) based emulators have been used to calibrate computer code for a variety of applications. Please consult [2–5] for specific cases as well as reviews of other sources. This work applies a relatively new class of statistical model, the function factorization with Gaussian Process (FFGP) priors model, to emulate the behavior of the safety analysis code. The FFGP model builds on the more commonly used GP emulator but overcomes certain limiting assumptions inherent in the GP emulator, as will be explained later. The FFGP model is therefore better suited to emulate the complex time series output produced by the system code. The surrogate is used in place of the system code to perform the parameter calibration, thereby allowing the observational data to directly improve the current state of knowledge. The rest of this paper is organized as follows. An overview of the entire emulator-based Bayesian calibration process is described in Section 2. Section 3 discusses the emulators in detail. The first half of Section 3 summarizes the important expressions related to GP emulators. Most of these expressions can be found in numerous other texts and references on GP models, including [6, 7]. They are repeated in this paper for completeness as well as providing comparison to the FFGP expressions in the latter half of Section 3. Section 4 presents a method of manufactured solutions-type demonstration problem that highlights the benefits of the FFGP model over the standard GP model.
OVERVIEW OF EMULATOR-BASED BAYESIAN CALIBRATION As already stated, the emulator-based approach replaces the potentially very computationally expensive safety analysis code (also known as a simulator, computer code, system code, or simply the code) with a computationally inexpensive surrogate. Surrogate models are used extensively in a wide range of engineering disciplines, most commonly in the form of response surfaces and look-up tables. Reference [4] provides a thorough review of many different types of surrogate models. The present work refers to the surrogates as emulators to denote that they provide an estimate of their own uncertainty when making a prediction [5]. An emulator is therefore a probabilistic response surface which is a very convenient approach because the emulator’s contribution to the total uncertainty can be included in the Bayesian calibration process. An uncertain (noisy) emulator would therefore limit the parameter posterior precision, relative to calibrating the parameters using the long-running computer code itself. Obviously, it is desirable to create an emulator that is as accurate as possible relative to the computer code, which limits the influence of error and uncertainty on the results. The emulator-based approach begins with choosing the input parameters and their corresponding prior distributions. If the emulator was not used in place of the system code, the Bayesian calibration process would start in the exact same manner. The priors encode the current state of knowledge (or lack thereof) about each of the uncertain input parameters. Choice of prior for epistemically uncertain variables is controversial and relies heavily on expert opinion. Justification for the priors used in the applications of this work is given later on, but choice of the priors is not the focus of this work. Additionally, the choice of the specific input parameters to be used for calibration may be controversial. Dimensionality reduction techniques might be used to help screen out unimportant input parameters [4]. Some screening algorithms such as the Reference Distribution Variable Selection (RDVS) algorithm use GPs to identify statistically significant input parameters [8]. In the nuclear industry specifically, expert opinion-based Phenomena Identification and Ranking Tables (PIRTs) are commonly used to down-select the most important physical processes that influence a Figure of Merit (FOM) [9]. More recently, Quantitative PIRTs, or QPIRTs, have been used in place of the traditional expert opinion PIRTs to try to remove bias and to capture relevant physical processes as viewed by the computer code [10, 11]. No matter the approach, the set of input parameters and their corresponding prior distribution must be specified.
In the emulator-based approach, the prior has the additional role of aiding in choosing the training set on which the emulator is based. As the phrase implies, the training set is the sequence of computer code evaluations used to build or train the emulator. Once trained on selected inputs and outputs, the emulator reflects the complex input/output relationship, so training is clearly an essential piece of the emulator-based approach. There are numerous methods and decision criteria for the selection of the training set; see [4, 5] for more details. Reference [12] provides an excellent counterpoint on the dangers of not using enough points in generating the training set. This work does not focus on choosing the "optimal" or "best" training set, which is an active area of research. The input parameter prior is used to set bounds on the input parameter values; Latin Hypercube Sampling (LHS) is then used to create a "space filling" design within those bounds. Although not guaranteed to produce the best possible training set, this method adequately covers the prior range of possible input parameter values. An active area of research is how to enhance the training set during the calibration process itself, in order to focus more on the posterior range of possible values. With the training input values chosen, the computer code is run the desired number of times to generate the training output. The complete training set is then the training input values with their corresponding training output. The emulator is then built by learning specific characteristics that allow the emulator to represent the input/output relationship encoded in the training set. The specific characteristics that must be learned depend on the type of emulator being used. Training algorithms for the standard GP emulator and FFGP emulator are described in Section 3. Once trained, the emulator is used in place of the computer code in the MCMC sampling via an emulator-modified likelihood function. The modified likelihood functions are presented in Section 3 for each of the emulators used in this work. Regardless of the chosen type of emulator, the emulator-based calibration process results in uncertain input parameter posterior distributions and posterior-approximated predictions, conditioned on observational data. A flow chart describing the key steps in the emulator-based Bayesian calibration process is shown in Figure 1.
Figure 1: Emulator-based Bayesian calibration flow chart.
The emulator-based Bayesian calibration process presented in this work fixes the emulator once it is trained. Alternatively, the emulator could be constructed simultaneously with the calibration of the uncertain input parameters [2, 3]. The key difference between the two approaches is that the emulator-modified likelihood function in [2, 3] is not fixed since the emulator is not fixed. Formally, the alternative approach bases the emulator-modified likelihood function around the emulator prior predictive distribution whereas the work presented in this paper bases the emulator-modified likelihood function around the emulator posterior predictive distribution. The difference between the posterior and prior predictive distributions is described in detail in Section 3.2.4. The alternative approach therefore makes emulator predictions conditioned on both the training data and the observational data simultaneously. In some sense, the alternative approach is more of a data or information “fusion” method rather than a calibration focused approach. The drawback of the alternative “data fusion” approach is that the emulator is not built until after the entire Bayesian calibration process is complete. Thus, if multiple levels of data such as from IETs and SETs are present, the emulators for all of the IETs and SETs must be calibrated simultaneously, which considerably complicates and slows the MCMC sampling. For those reasons, this work does not use the “data fusion” approach but fixes the emulator before starting the calibration of the uncertain input parameters.
GAUSSIAN PROCESS-BASED EMULATORS

Overview

The emulators used in this work are based on Gaussian Process (GP) models and are considered Bayesian nonparametric statistical models. Nonparametric models offer considerably more flexibility than parametric models because the input/output functional relationship does not have to be assumed a priori by the user. The training data dictates the input/output relationship, just as a look-up table does. As stated earlier, the emulator is a probabilistic model; therefore, the emulators are essentially probabilistic look-up tables. Nonparametric models are however considerably more computationally intensive than parametric models, because the training data is never discarded. If a large number of training runs are required to accurately capture the input/output trends, a nonparametric model might be considerably slower to run than a parametric model of the same data (e.g., a curve that fits the data). The underlying principles of the GP model were developed in the 1960s in the geostatistics field, where it was known as Kriging [4]. Since then Kriging has been widely used for optimization, but starting in the late 1980s and early 1990s, [13–15] popularized the approach as Bayesian approximations to deterministic computer codes. In the early 2000s, Kennedy and O'Hagan used the GP model to facilitate Bayesian calibration of computer codes [16]. Their work served as the foundation for this paper and many of the references cited in the previous section. The machine learning community has also extensively used GP models for both regression and classification (regression is used for continuous functions while classification is used for discrete data) [6, 7]. Even with all of their flexibility, GP models are still somewhat limited by certain underlying assumptions to be discussed later, as well as the limitation in handling very large datasets (just as with any nonparametric model). In order to overcome these limitations and handle more complicated input/output relationships, many different approaches have been developed [6]. One such approach is based on combining GP models with factor analysis techniques; this is referred to as Gaussian Process Factor Analysis (GPFA) models [17, 18]. The work presented here uses the factor analysis based approach in order to handle very large datasets, following the formulation of Schmidt [17].
Standard Gaussian Process (GP) Emulators

Formulation

Within the Bayesian framework, a Gaussian Process (GP) prior is placed on the computer code's unknown output. The computer code, such as RELAP, is actually deterministic, meaning that the same output will result if the same input parameters and settings are used over and over. The output, however, is in some sense unknown until the computer code is run, and it will therefore be treated as a random variable. A GP is a collection of random variables, any finite number of which have a jointly Gaussian distribution [6]. A Gaussian Process is simply a multivariate normal (MVN) distribution and is used presently as a prior distribution on the computer code input/output functional relationship. The input x consists of all D inputs to the computer code that the GP is trying to emulate: x = [x_1, x_2, ..., x_D]ᵀ. The superscript T denotes the transpose of the vector. The output, f(x), as stated above, is considered a random variable. A GP is completely specified by its mean function and covariance function. The mean function m(x) and covariance function k(x, x′) are defined as [6]
m(x) = E[f(x)],
k(x, x′) = E[(f(x) − m(x))(f(x′) − m(x′))].    (1)
The GP is then defined as

f(x) ∼ GP(m(x), k(x, x′)).    (2)

An important aspect of (2) is that the covariance between the outputs is written only as a function of the inputs. This is a key assumption behind the simplicity of standard GP models, since all the covariance between two outputs depends only on the values of the inputs that produced those two outputs. Following [6], as well as many other sources, the mean function is usually taken to be zero. Besides being the simplest approach to use, a zero mean function gives no prior bias to the trend in the data, since no mean trend is assumed. Covariance functions themselves depend on a set of hyperparameters; therefore, even though the GP is a nonparametric model, these hyperparameters specify the covariance function and must be learned from the training data. However, the GP model is still considered a nonparametric model, because a prediction still requires regressing the
training dataset. Numerous covariance functions exist, ranging from very simple forms to very complex neural net-like functions [6]. Different forms have various advantages/disadvantages for different datasets, but the most common type used in the literature is the squared-exponential (SE) covariance function. The SE covariance function is usually parameterized as
k(x_p, x_q) = σ_f² exp(−½ (x_p − x_q)ᵀ M⁻¹ (x_p − x_q)),    (3)
where the subscripts p and q denote (potentially) two different values for the D-dimensional input vector x. The hyperparameters in (3) are the signal variance, σ_f², and the matrix M, which is a symmetric matrix that is usually parameterized as a diagonal matrix:
M = diag(l_1², l_2², ..., l_D²).    (4)
Each diagonal element of M is a separate hyperparameter, l_d, which serves as the characteristic length scale for the dth input. Loosely speaking, the length scale represents how far the input value must move along a particular axis in input space for the function values to become uncorrelated [6]. Since each input parameter has its own unique length scale, this formulation implements what is known as automatic relevance determination (ARD), since the inverse of the length scale determines how relevant that input is. If the length scale has a very large value, the covariance will become almost independent of that input. Linkletter et al. [8] used ARD to screen out unimportant inputs using GP models. Strictly speaking, the GP model can interpolate the training data exactly if no noise is allowed between the training data and the GP prior. However, implementation of an interpolating GP model might be difficult due to ill-conditioning issues [5, 6], which will be discussed later on. Allowing some, hopefully very small, noise between the GP prior and training data removes the numerical issues and turns the model into a GP regression (GPR) model. The GP prior is therefore actually placed on a latent (hidden) function, f(x), that must be inferred from the noisy data y [6]. This viewpoint brings to light the signal processing nature of the GPR framework, since the latent function is the true signal that must be inferred from the noisy data. In emulating computer codes, the training output is not noisy, but this setup provides a useful mathematical framework. The computer model output of interest, y, is then related to the GP latent function f(x) as
y = f(x) + ε,    (5)
where ε is the error or noise. The error can take a variety of forms, but if a Gaussian likelihood model is used with independent and identically distributed (IID) noise, with zero mean and variance σ_n², the remaining calculations are all analytically tractable. More complicated likelihood models can be used and are often required to handle very complex datasets, but the remaining calculations would no longer have analytical expressions. At this point, some important notation needs to be defined. If there are a total of N training points, the inputs are stacked into an N×D matrix of all training input values:
X = [x_1, x_2, ..., x_N]ᵀ.    (6)

Each row of X contains the D input parameters for that particular training case run. The training outputs, y, are stacked into a vector of size N×1: y = [y_1, y_2, ..., y_N]ᵀ. Since f(x) has a GP prior and the likelihood function is also Gaussian, the latent variables can be integrated yielding a Gaussian distribution on the training output [6]:

y | X ∼ N(0, K(X, X) + σ_n² I).    (7)

In (7), K(X, X) is the training set covariance matrix and I is the N×N identity matrix. The training set covariance matrix is built by applying the covariance function between each pair of input parameter values [6]:
[K(X, X)]_{ij} = k(x_i, x_j),  i, j = 1, ..., N.    (8)
The training set covariance matrix is therefore a full matrix. If the SE covariance function in (3) is used, each diagonal element of K(X, X) is equal to the signal variance, σ_f². Evaluating the covariance function, however, requires the hyperparameter values to be known, which is accomplished by training the GP emulator.
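To make the covariance construction concrete, the following is a minimal NumPy sketch of an SE (ARD) covariance function in the spirit of (3) and of the noisy training covariance matrix of (7)-(8). The function names, the jitter value, and the exact length-scale parameterization are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def se_cov(Xp, Xq, signal_var, length_scales):
    """Squared-exponential (ARD) covariance function, k(x_p, x_q) as in (3).

    Xp: (Np, D) array, Xq: (Nq, D) array,
    signal_var: sigma_f**2, length_scales: (D,) array of l_d values.
    """
    diff = (Xp[:, None, :] - Xq[None, :, :]) / length_scales
    return signal_var * np.exp(-0.5 * np.sum(diff ** 2, axis=-1))

def training_cov(X, signal_var, length_scales, noise_var, jitter=1e-8):
    """Training covariance K(X, X) + sigma_n**2 I of (7)-(8), plus a small nugget."""
    K = se_cov(X, X, signal_var, length_scales)
    return K + (noise_var + jitter) * np.eye(X.shape[0])
```

The nugget added on the diagonal mirrors the "jitter" term discussed in the training section below: it keeps the matrix invertible even when the sampled likelihood noise is very small.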
Training

Training or building the emulator consists of learning the hyperparameters that define the covariance and likelihood functions. As discussed earlier, there are two types of hyperparameters in the SE covariance function: the signal variance, σ_f², and the length scales, l. The Gaussian likelihood function used presently consists of one hyperparameter, the likelihood noise (variance), σ_n². The complete set of hyperparameters is denoted by φ = {σ_f², l, σ_n²}.
Two ways to learn the hyperparameters will be discussed here: the empirical Bayes approach and the full Bayesian approach. “Full Bayesian” refers to inferring the hyperparameter posterior distribution given the training data. Due to the complexity of the relationship between the hyperparameters and the training output, sampling based Markov Chain Monte Carlo (MCMC) inference is required to perform the full Bayesian approach. The “empirical Bayes” method summarizes the hyperparameters with point estimates. The hyperparameter contribution to the output uncertainty is therefore neglected, but, as discussed by many authors, this is an acceptable approximation [4, 5]. The entire GP model is still considered Bayesian because the GP itself is a statement of the probability of the latent function, which can then make a statement of the probability of the output. The point estimates can be found either from sampling-based approaches or by optimization methods. With optimization procedures, the empirical Bayes approach would be much faster than the full Bayesian training approach. However, cross-validation is very important to ensure the optimizer did not get “stuck” at a local optimum [6]. However, this work used a hybrid approach to training. MCMC sampling was used to draw samples of the hyperparameter posterior distribution just as in the full Bayesian approach. The hyperparameters were then summarized as point estimates at the posterior mean values. Using point estimates greatly reduced the computer memory required to make predictions (which are described later). The sampling based approach removed having to perform cross-validation since the point estimates correspond to the values that on average maximize the posterior density. The prior distribution on the set of hyperparameters, known as the hyperprior, must be specified as part of the MCMC sampling procedure. The simplest hyperprior would be the “flat” improper hyperprior, (𝜙) ∝
1; however, for GP models the input and output can be scaled to facilitate meaningful hyperprior specification. Following [2, 3, 5], the inputs used in this work are all scaled between 0 and 1, where 0 and 1 correspond to the minimum and maximum training set value, respectively. Additionally, the training output data are scaled to a standard normal, with mean 0 and variance 1. Since the signal variance, σ_f², defines the diagonal elements of the covariance matrix, it is biased to be near 1. The likelihood noise, σ_n², is biased to be a small value using a Gaussian distribution with prior mean of 10⁻⁶. This hyperprior format biases the sampling procedure to try to find length scale values that match the training output within this noise tolerance. The length scale hyperpriors are more difficult to set, but the formulation from Higdon was used [2, 3, 8], which a priori biases the length scales to yield smooth input/output relationships. Only the training data can reduce the length scales; therefore only the training data can dictate whether an input strongly influences the output variability. Additionally, a small "nugget" or "jitter" term was added to the diagonal elements of K(X, X). The nugget term is rarely mentioned outside of footnotes in most references in the literature [6], but it is a very important part of practical implementations of GP models. The nugget adds a small amount of additional noise, preventing a GP model from interpolating the training set exactly. This additional noise may be very useful at preventing the training set covariance matrix from being ill-conditioned. There have been some detailed investigations into the nugget's influence on the training algorithm results [5], but for practical purposes the nugget is a simple way to make sure the covariance matrix is always invertible. The hyperparameter posterior, up to a normalizing constant, can be written as
p(φ | y) ∝ p(y | φ) p(φ).    (9)
In (9), p(φ) is the hyperprior described previously and the likelihood function, p(y | φ), is (7) rewritten to explicitly depend on the hyperparameters. Hyperparameter posterior samples were drawn using the Adaptive Metropolis (AM) MCMC algorithm [19]. The AM-MCMC algorithm improves the efficiency of the basic Random Walk Metropolis (RWM) sampling algorithm because the MCMC proposal distribution covariance matrix is empirically computed using the previous samples. Regardless of the type of MCMC algorithm used, the likelihood function must be evaluated for each MCMC sample. The log-likelihood, written up to a normalizing constant, is [6]
log p(y | φ) = −½ yᵀ [K(X, X) + σ_n² I]⁻¹ y − ½ log |K(X, X) + σ_n² I|.    (10)
Equation (10) clearly shows that the training set covariance matrix must be inverted at each MCMC sample. This highlights why the nugget term is useful if, for a particular sample, the likelihood noise, σ_n², does not provide enough noise to allow the matrix to be inverted. The inversion of the training set covariance matrix is the most computationally expensive part of the training algorithm.
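For illustration, a hedged sketch of evaluating the log-likelihood in (10) for a single MCMC sample is shown below; it reuses the `training_cov` helper sketched earlier and factors the covariance matrix with a Cholesky decomposition so the explicit inverse is never formed. The names are hypothetical, not the authors' code.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def gp_log_marginal_likelihood(X, y, signal_var, length_scales, noise_var):
    """Log p(y | phi) of (10), up to an additive constant."""
    Ky = training_cov(X, signal_var, length_scales, noise_var)  # K(X,X) + sigma_n^2 I (+ nugget)
    L = cho_factor(Ky, lower=True)                 # Cholesky factor instead of an explicit inverse
    alpha = cho_solve(L, y)                        # [K + sigma_n^2 I]^{-1} y
    log_det = 2.0 * np.sum(np.log(np.diag(L[0])))  # log|K + sigma_n^2 I| from the factor
    return -0.5 * y @ alpha - 0.5 * log_det
```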
Predictions

Once the emulator is trained, predictions can be made at input values that were not part of the training set. If there are N∗ new test or prediction points, the test input matrix, X∗, is size N∗ × D. Under the GP model framework, the latent function at those new test points has the same GP prior as the training points:

f∗ | X∗ ∼ N(0, K(X∗, X∗)).    (11)

Comparing (11) with the training output GP prior in (7), the key difference is that the covariance matrix is evaluated at the test input values rather than the training input values. As written, the test prior provides very little useful information, since it has no information regarding the structure of the training dataset. The test latent output must therefore be conditioned on the training output. The joint prior is a multivariate normal distribution [6]:

[y; f∗] ∼ N(0, [K(X, X) + σ_n² I, K(X, X∗); K(X∗, X), K(X∗, X∗)]),    (12)

where K(X, X∗) is the cross-covariance matrix between the training and test input values. The cross-covariance matrix is size N×N∗ and K(X∗, X) is its transpose. Standard multivariate normal theory easily allows computing the conditional distribution p(f∗ | y), which gives the key predictive equations for the GPR model [6]:

f∗ | X, y, X∗ ∼ N(f̄∗, cov(f∗)),    (13)

with the posterior predictive mean given as
f̄∗ = K(X∗, X) [K(X, X) + σ_n² I]⁻¹ y,    (14)
and the posterior predictive covariance is
cov(f∗) = K(X∗, X∗) − K(X∗, X) [K(X, X) + σ_n² I]⁻¹ K(X, X∗).    (15)

The posterior predictive distribution of the test targets, y∗, is the same as the latent posterior predictive distribution except the additional likelihood noise is added:

y∗ | X, y, X∗ ∼ N(f̄∗, cov(f∗) + σ_n² I).    (16)

Equations (14) and (15) reveal the important features of the GP emulator. First, the posterior predictive covariance shrinks the prior test covariance, as witnessed by the subtraction between the first and second terms on the right-hand side of (15). Second, when making predictions at the training points (X∗ = X), the predictive uncertainty shrinks to the allowable error tolerance.
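A minimal sketch of (14)-(16), again assuming the helper functions introduced above; it returns the posterior predictive mean and the test-target covariance with the likelihood noise added.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def gp_predict(X, y, Xstar, signal_var, length_scales, noise_var):
    """Posterior predictive mean (14) and test-target covariance (15)-(16)."""
    Ky = training_cov(X, signal_var, length_scales, noise_var)
    Ks = se_cov(X, Xstar, signal_var, length_scales)       # K(X, X*)
    Kss = se_cov(Xstar, Xstar, signal_var, length_scales)  # K(X*, X*)
    L = cho_factor(Ky, lower=True)
    mean = Ks.T @ cho_solve(L, y)                           # eq. (14)
    cov_f = Kss - Ks.T @ cho_solve(L, Ks)                   # eq. (15), latent covariance
    cov_y = cov_f + noise_var * np.eye(Xstar.shape[0])      # eq. (16), add likelihood noise
    return mean, cov_y
```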
Gaussian Process-Based Calibration
Once the GP emulator is constructed, it can be used to calibrate the uncertain input parameters in place of the computer code. Before going into the details of the emulator-based calibration calculations, Bayesian calibration of the computer code itself is reviewed. The computer code functional relationship is denoted as y(x_cv, θ), where x_cv is the set of control variables that are not uncertain and θ are the uncertain input parameters. Control variables are conditions controlled by the experimenters and, in a transient, could also include time. If the computer code could be used as part of the MCMC sampling, the uncertain parameter posterior distribution (up to a normalizing constant) could be written as
p(θ | y_o) ∝ p(y_o | y(x_cv,o, θ)) p(θ).    (17)
In (17), y_o refers to the observational (experimental) data and x_cv,o are the control variables' locations for the observational data. The computer code therefore acts as a (potentially very nonlinear) mapping function between the uncertain inputs and the observational data. As discussed previously, the computer code is computationally too expensive and the emulator is used in place of the computer code for Bayesian calibration. To facilitate the
emulator-based calibration, the likelihood function between the computer prediction and the observational data is split in a hierarchical-like fashion. The total likelihood consists of two parts. The first component is the likelihood between the observational data and the prediction, p(y_o | y). The second part is the likelihood between the computer prediction and the uncertain inputs, p(y | x_cv, θ). The posterior distribution is now the joint posterior distribution between the uncertain inputs and the computer predictions, both conditioned on the observational data:
p(y, θ | y_o) ∝ p(y_o | y) p(y | x_cv, θ) p(θ).    (18)
The likelihood between the observational data and the computer predictions, p(y_o | y), is the assumed likelihood model for the experiment. This work uses a Gaussian likelihood with known independent measurement errors at each of the observational data points. Assuming N_o independent data points, the likelihood function factorizes as
p(y_o | y) = ∏_{l=1}^{N_o} N(y_{o,l} | y_l, σ_{ε,l}²),    (19)
where σ_{ε,l}² is the measurement error variance for the lth observational data point. The likelihood between the computer prediction and the inputs, p(y | x_cv, θ), is almost impossible to write analytically because of the very complex nature of the computer code. However, p(y | x_cv, θ) can be approximated using the emulator, which leads to the emulator-modified likelihood function. As discussed in Section 2, there are two ways to accomplish this. The alternate "data fusion" approach of [2, 3] uses the GP prior distribution to approximate p(y | x_cv, θ). This work however uses the GP posterior predictive distribution, of the already built emulator, to approximate p(y | x_cv, θ). The training set is denoted as a whole as D = {X, y}, and the hyperparameters, φ̂, are assumed to be already determined as part of the training algorithm. The joint posterior between the emulator estimated predictions y∗ and the uncertain inputs is
p(y∗, θ | y_o, D, φ̂) ∝ p(y_o | y∗) p(y∗ | x_cv, θ, D, φ̂) p(θ).    (20)
In (20), p(y∗ | x_cv, θ, D, φ̂) is exactly the same as (16), except that it is explicitly written to depend on the training set and hyperparameters. Since the GP posterior predictive distribution is Gaussian
and the likelihood between the observational data and computer prediction is also Gaussian, the emulator predictions can be integrated out of (20). The (integrated) posterior distribution on the uncertain inputs conditioned on the observational data is then
p(θ | y_o, D, φ̂) ∝ p(y_o | θ, D, φ̂) p(θ).    (21)
The likelihood between the uncertain inputs and the observational data is the GP emulator-modified likelihood function equal to the GP posterior predictive distribution with the measurement error added to the predictive variance:
p(y_o | θ, D, φ̂) = N(y_o | f̄∗, cov(f∗) + σ_n² I + Σ_ε).    (22)
In (22), Σ𝜖 is the measurement error covariance matrix which is assumed to be diagonal. If more complicated likelihood functions between the observational data and computer prediction were assumed, (21) and (22) would potentially be very different and even require approximations. Equation (22) also provides the direct comparison with the “data fusion” approach described in Section 2. The emulator-modified likelihood function given by equation 4 in [2] uses the GP prior mean and covariance matrix, while this work uses the GP posterior predictive mean and covariance matrix.
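As one possible illustration of (22), the sketch below evaluates the GP emulator-modified log-likelihood for a proposed value of the uncertain inputs; how the control-variable locations and θ are assembled into the emulator's input matrix is an assumption made purely for the example, and the helpers are those sketched earlier.

```python
import numpy as np
from scipy.stats import multivariate_normal

def gp_modified_log_likelihood(theta, x_cv_obs, y_obs, meas_var, gp_data):
    """Emulator-modified log-likelihood of (22): Gaussian in the observations, with the
    emulator's predictive covariance plus the (diagonal) measurement error covariance."""
    X, y, signal_var, length_scales, noise_var = gp_data
    # Assumed input layout: control-variable locations of the data followed by theta.
    Xstar = np.column_stack([x_cv_obs, np.tile(theta, (x_cv_obs.shape[0], 1))])
    mean, cov = gp_predict(X, y, Xstar, signal_var, length_scales, noise_var)
    return multivariate_normal.logpdf(y_obs, mean=mean, cov=cov + np.diag(meas_var))
```

In an MCMC calibration loop, this function would simply replace the (unaffordable) computer code evaluation inside the likelihood.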
Function Factorization with Gaussian Process (FFGP) Priors Emulators

For very large datasets, the inverse of the training set covariance matrix might be too expensive to compute. Typically, "very large" corresponds to training sets with over 10,000 points. These situations can occur for several reasons, the obvious one being that a large number of computer code evaluations are required. Training sets become very large when the goal is to emulate multiple outputs, especially for time series predictions. If 100 points in time are taken from a single computer code evaluation (referred to as a case run) and 100 cases are required to cover the ranges of the uncertain variables, the total training set consists of 10,000 points. As stated previously, there are various solutions to this issue, most of which involve some form of a dimensionality reduction technique. The function factorization approach used in this work embeds the dimensionality reduction as part of the emulator through factor analysis techniques. The following sections describe the formulation and implementation of the function factorization model.
Formulation

The main idea of function factorization (FF) is to approximate a complicated function, y(x), on a high-dimensional input space by the sum of products of a number of simpler functions, f_{i,k}(x_i), defined on lower-dimensional subspaces. The FF-model is [17]
y(x) ≈ ∑_{k=1}^{K} ∏_{i=1}^{I} f_{i,k}(x_i).    (23)
In (23), I is the number of different factors and K is the number of different components within each factor. The function f_{i,k}(x_i) is therefore the latent (hidden) function of the kth component within the ith factor. These hidden patterns are not observed directly, but rather must be inferred from the training dataset. The patterns represent a hidden underlying trend within the training data that characterizes the input/output relationship. In the context of emulating safety analysis codes, the patterns correspond to trends between the inputs and the code output of interest, a temperature, for example. With two factors, factor 1 could be the time factor which captures the temperature response through time and factor 2 could be the trend due to an uncertain input or the interaction of several uncertain inputs. These hidden patterns are not observed directly but interact together to produce the observed temperature response. As will be discussed later, constructing the FF-model requires learning these hidden patterns from the observed training data. The difference between a factor and a component is more distinguishable when (23) is rewritten in matrix form. The training output data will now be denoted as a matrix Y of size M×N. In the GP emulator discussion, N was the number of training points. In the FF-model framework, N refers to the number of computer code case runs and M is the number of points taken per case run. If one data point is taken per case run, M = 1, then the number of case runs equals the number of training points. With two factors, there are two sets of training inputs, x_1 and x_2. The inputs do not need to be the same size. If factor 1 corresponds to the number of points taken per case run, then x_1 is size M×D_1. Factor 2 would then correspond to the number of different case runs; thus x_2 is size N×D_2. The entire set of training input values will be denoted as X = {x_1, x_2} and the entire training set will be denoted, as for the GP emulator, as D = {X, Y}. With 1 component for each factor the FF-model becomes a matrix product of two vectors f_1 and f_2:
Y ≈ f_1 f_2ᵀ.    (24)

For more than one component, each factor is represented as a matrix. The columns within each factor's matrix correspond to the individual components within that factor. For the 2-factor 2-component FF-model the factor matrices are F_1 = [f_{1,1}, f_{1,2}] and F_2 = [f_{2,1}, f_{2,2}]. The FF-model is then [17]

Y ≈ F_1 F_2ᵀ = ∑_{k=1}^{2} f_{1,k} f_{2,k}ᵀ.    (25)

The elements within each of the factor matrices are the latent variables which represent that factor's hidden pattern and must be learned from the training dataset. Performing Bayesian inference on the FF-model requires specification of a likelihood function between the training output data and the FF-model as well as the prior specification on each factor matrix. In general, any desired likelihood function could be used, but this work focused on a simple Gaussian likelihood with a likelihood noise and mean equal to the FF-model predictive mean. The likelihood function is therefore the same as the likelihood function between the latent GP variables and the training output, just with the FF-model replacing the GP latent variable. The prior on each component within each factor is specified as a GP prior. Because the FF-model uses a GP, the emulator is known as the FFGP model. As described in detail by Schmidt, this FFGP approach is a generalization of the nonnegative matrix factorization (NMF) technique [17]. Each GP prior is assumed to be a zero-mean GP with a SE covariance function, though in general different covariance functions could be used. The GP priors on the kth component for each of the two factors are written as
f_{1,k}(x_1) ∼ GP(0, k_{1,k}(x_1, x_1′; φ_{1,k})),
f_{2,k}(x_2) ∼ GP(0, k_{2,k}(x_2, x_2′; φ_{2,k})).    (26)
The semicolon notation within each of the GP priors denotes that both priors depend on their respective sets of hyperparameters. Each covariance function consists of a similar set of hyperparameters as those shown in (3), namely, the signal variance and the length scales. An additional nugget hyperparameter was included to prevent ill-conditioning issues, but rather than fixing its value it was considered unknown. The hyperparameters for the (i, k)th covariance function are denoted as φ_{i,k} in (26). Writing the GP priors in vector notation requires applying each of the covariance functions to their respective number of input pairs. Using notation consistent with Section 3.2.1, the GP priors on the kth component for both factors are
f_{1,k} ∼ N(0, K_{1,k}(x_1, x_1)),
f_{2,k} ∼ N(0, K_{2,k}(x_2, x_2)).    (27)

Comparing (27) to the GP emulator formulation immediately highlights the key differences between the two emulator types. First, the GP emulator was able to specify a prior distribution on the output data itself, as given by (7), while the FFGP emulator specifies prior distributions on the latent patterns. As described in Section 3.2.1, (7) was actually derived by integrating the GP latent variables. The FFGP latent variables cannot be integrated however, and so the FFGP model requires learning the latent variables as well as the hyperparameters as part of the training algorithm. This adds significant complexity compared to the training of the standard GP emulator. However, this added complexity may enable an important computational benefit. The standard GP emulator covariance matrix consists of the covariance function applied to every input pair in the entire training set. For the present scenario there are a total of NM training points, which means the covariance matrix is size NM×NM. In the FFGP framework, each factor's covariance matrix is constructed by evaluating the factor's covariance function only at each of that particular factor's input pairs. The factor 1 covariance matrix is therefore size M×M and the factor 2 covariance matrix is size N×N. By decomposing the data into various patterns, the FFGP emulator is a dimensionality reduction technique that works with multiple smaller covariance matrices.
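The size reduction can be illustrated with a short, self-contained sketch: for assumed sizes M = 25 and N = 50, a 2-component FF-model reconstructs an M × N output matrix from two small factor matrices, and the GP priors only ever require M × M and N × N covariance matrices.

```python
import numpy as np

# Assumed sizes: M control-variable points per case run, N case runs, K components.
M, N, K = 25, 50, 2

# Latent factor matrices (random placeholders purely for illustration);
# each column holds one component's hidden pattern for that factor.
F1 = np.random.randn(M, K)   # factor 1, e.g. the time / Reynolds-number pattern
F2 = np.random.randn(N, K)   # factor 2, e.g. the uncertain-input pattern

# FF-model approximation of the M x N training output matrix, as in (25).
Y_approx = F1 @ F2.T

# The FFGP priors only ever need an M x M and an N x N covariance matrix,
# instead of the (M*N) x (M*N) matrix a standard GP on all NM points would need.
print(Y_approx.shape, (M, M), (N, N), (M * N, M * N))
```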
Training

Training the FFGP emulator requires learning all of the latent variables and hyperparameters. For notational simplicity, the following set of expressions will assume a 2-factor 1-component FFGP model. The joint posterior for FFGP models with more components is straightforward to write out. Denoting the set of all hyperparameters as Ξ, the joint posterior distribution (up to a normalizing constant) between all latent variables and hyperparameters for a 2-factor 1-component FFGP model is
p(f_1, f_2, Ξ | Y) ∝ p(Y | f_1, f_2, Ξ) p(f_1 | Ξ) p(f_2 | Ξ) p(Ξ).    (28)
The log-likelihood function (up to a normalizing constant) between the training output data and the FF-model is [17]
log p(Y | f_1, f_2, Ξ) = −(1 / (2σ_n²)) ‖Y − f_1 f_2ᵀ‖_F² − (MN / 2) log σ_n².    (29)
In (29), ‖⋅‖_F denotes the Frobenius norm. The log prior for each of the factors is
log p(f_i | Ξ) = −½ f_iᵀ K_i(x_i, x_i)⁻¹ f_i − ½ log |K_i(x_i, x_i)|,  i = 1, 2.    (30)
The two factors are assumed to be independent a priori in (28). With more components, the setup is the same if all components within each factor are also assumed independent a priori. Any correlation between any of the components as well as across the factors is induced by the training data through the likelihood function. Drawing samples from the joint posterior with MCMC does not require any assumptions about the posterior correlation structure. Therefore any data induced posterior correlation can be completely captured by the MCMC inference procedure. Following Schmidt in [7], the Hamiltonian Monte Carlo (HMC) MCMC scheme was used to build the FFGP emulator. The HMC is a very powerful MCMC algorithm that accounts for gradient information to suppress the randomness of a proposal. See [7, 9, 16] for detailed discussions on HMC. The HMC algorithm is ideal for situations with a very large number of highly correlated variables, as is the case with sampling the latent variables presently. This work has several key differences from Schmidt’s training algorithm in [17], to simplify the implementation and increase the execution speed. Following [20], the latent variables and hyperparameter sampling were split into a “Gibbs-like” procedure. A single iteration of the MCMC scheme first samples the latent variables given the hyperparameters and then samples the hyperparameters given the latent variables. The latent variables were sampled with HMC, but the hyperparameters can now be sampled from a
simpler MCMC algorithm such as the RWM sampler. Although less efficient compared to the HMC scheme, the RWM performed adequately for this work. The next key difference relative to Schmidt's training algorithm was to use an empirical Bayes approach and fix the hyperparameters as point estimates, similar to the hybrid style training algorithm of Section 3.2.2. The hyperparameter point estimates are denoted as Ξ̂. Once the hyperparameters are fixed, the HMC algorithm is restarted, but now the hyperparameters are considered known. The end result of the HMC algorithm is a potentially very large number of samples of all of the latent variables. One last simplification relative to Schmidt's setup was to summarize the latent variable posteriors as Gaussians. Their posterior means and covariance matrices were empirically estimated from the posterior samples. All of the latent variables are denoted in stacked vector notation as f = [f_1ᵀ, f_2ᵀ]ᵀ, the empirically estimated means of the latent variables are f̄, and the empirically estimated covariance matrix of all the latent variables is Σ_f. As will be shown in the next section, this assumption greatly simplified making predictions with the FFGP emulator and ultimately provided a very useful approximation that aided the overall goal of emulator-based Bayesian model calibration.
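A minimal sketch of this final summarization step, assuming the HMC draws of the stacked latent variables are available as a two-dimensional array; the empirical mean and covariance computed here play the roles of f̄ and Σ_f in the prediction equations that follow.

```python
import numpy as np

def summarize_latent_posterior(samples):
    """Summarize MCMC draws of the stacked latent variables as a Gaussian.

    samples: (num_samples, num_latent) array of stacked [f1, f2] draws from HMC.
    Returns the empirical posterior mean and covariance used in the prediction step.
    """
    f_bar = samples.mean(axis=0)             # empirical posterior mean (plays the role of f-bar)
    sigma_f = np.cov(samples, rowvar=False)  # empirical posterior covariance (Sigma_f)
    return f_bar, sigma_f
```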
Predictions

The expressions required to make predictions with the FFGP emulator were summarized briefly in [21], but they will be described in detail here. Prediction with the FFGP emulator consists of two steps: first, make a prediction in the latent factor space and then combine the factor predictions together to make a prediction on the output directly. A latent space posterior prediction is very straightforward following MVN theory and is identical in procedure to posterior predictions with the GP emulator. The joint prior between the training latent variables and the test latent variables is written out similar to (12). Before writing the joint prior, the two factors are stacked together into a single stacked vector, f = [f_1ᵀ, f_2ᵀ]ᵀ. Because the factors are independent a priori, the stacked covariance matrix is a block diagonal matrix:
K_f = blockdiag(K_1(x_1, x_1), K_2(x_2, x_2)).    (31)

If more components are used, the individual factor covariance matrices are themselves block diagonal matrices. The training latent variables prior in the stacked notation is

f ∼ N(0, K_f).    (32)

The subscript f is used on the stacked covariance matrix to denote that it is the stacked training covariance matrix. The test latent variables prior in stacked notation is similar to (32):

f∗ ∼ N(0, K∗∗).    (33)

The subscript ∗∗ is used on the stacked covariance matrix in (33) to denote that it is the stacked test covariance matrix. The cross-covariance matrix between the training and test points in stacked notation is defined as K_f∗, which requires evaluating the covariance function between the training and test inputs within each factor. The stacked joint prior is now easily written as
[f; f∗] ∼ N(0, [K_f, K_f∗; K_f∗ᵀ, K∗∗]).    (34)

Equation (34) is identical in format to (12), except for one key difference. The joint prior is defined between the training and test latent variables, not between the training output and the test latent variables. Conditioning on the training latent variables, the test latent variable posterior predictive (conditional) distribution is
f∗ | f ∼ N(μ_{∗|f}, Σ_{∗|f}).    (35)
The posterior predictive (conditional) mean is

μ_{∗|f} = K_f∗ᵀ K_f⁻¹ f,    (36)

and the posterior predictive (conditional) covariance matrix is

Σ_{∗|f} = K∗∗ − K_f∗ᵀ K_f⁻¹ K_f∗.    (37)
The goal is to make a prediction conditioned on the training dataset, not on particular values of the training latent variables. Therefore the training latent variables must be integrated out using their own posterior distribution computed during the training algorithm. The resulting predictive distribution will be approximated as a Gaussian with the mean estimated using the Law of Total Expectation [7]:
E[f∗ | D] = ∫ E[f∗ | f] p(f | D) df.    (38)
Substituting in (36) gives

E[f∗ | D] = K_f∗ᵀ K_f⁻¹ ∫ f p(f | D) df.    (39)

The expression within the integral of (39) is simply the mean of the (stacked) training latent variables, which was empirically estimated from the posterior MCMC samples from the training algorithm. Thus, the posterior predictive test latent variable means are

μ∗ = K_f∗ᵀ K_f⁻¹ f̄.    (40)

The Law of Total Covariance is used to estimate the posterior predictive covariance of the test latent variables. In words, the Law of Total Covariance sums the mean of the predictive conditional covariance with the covariance of the predictive conditional means, which is given as
Cov[f∗ | D] = E[Σ_{∗|f}] + Cov[μ_{∗|f}].    (41)
Substituting in (36) and (37) as well as rearranging yields
Σ∗ = K∗∗ − K_f∗ᵀ K_f⁻¹ K_f∗ + K_f∗ᵀ K_f⁻¹ Σ_f K_f⁻¹ K_f∗.    (42)
Equations (40) and (42) are the approximate posterior predictive test latent variable mean and covariance matrix. They are referred to as being approximate because the training latent variable posterior distribution was approximated as a Gaussian with empirically estimated means and covariance matrix from the training algorithm.
The FF-model predictions can now be estimated. The FF-model predictive distribution is approximated as a Gaussian, with the estimated FF-model predictive means stored in an M∗ × N∗ matrix denoted as H∗. M∗ is the number of predictive "locations" to be made per case, and N∗ is the number of cases to predict. If the FFGP model is emulating a transient, M∗ is the number of predictions per case and N∗ is the number of prediction cases. In general, the FFGP emulator can therefore make predictions at a large number of case runs all at once, something a computer code cannot do unless multiple instances are run simultaneously. Within the present framework of Bayesian calibration of the uncertain inputs, a single MCMC iteration requires only one case to be predicted at a time. However, the following expressions are presented for multiple case predictions at once. The following expressions change notation back to using the matrix form of the latent variables, which requires splitting the stacked latent variables into their respective factors:

f∗ = [f_{∗,1}ᵀ, f_{∗,2}ᵀ]ᵀ.    (43)

Then the stacked-factor vectors are reshaped into matrices:
F_1∗ = reshape(f_{∗,1}), size M∗×K,  F_2∗ = reshape(f_{∗,2}), size N∗×K.    (44)

Additionally, the expressions will focus on the predictive FF-model distribution at a single point rather than in vector notation. This simplifies the notation considerably. The FF-model approximate predictive mean requires computing the expectation of the product of two latent variable factors. At the (m∗, n∗)th predictive point the FF-model approximate predictive mean is
H∗(m∗, n∗) = ∑_{k=1}^{K} E[F_1∗(m∗, k) F_2∗(n∗, k)].    (45)
The kth component in the summation in (45) is the standard result for the product of two correlated random variables:
E[F_1∗(m∗, k) F_2∗(n∗, k)] = E[F_1∗(m∗, k)] E[F_2∗(n∗, k)] + Cov[F_1∗(m∗, k), F_2∗(n∗, k)].    (46)
The FF-model approximate predictive variance is the variance of the summation of products of random variables plus the FF-model likelihood noise:
Var[H∗(m∗, n∗)] = Var[∑_{k=1}^{K} F_1∗(m∗, k) F_2∗(n∗, k)] + σ_n².    (47)
Writing out the expression completely gives
(48) Both (46) and (48) reveal the FF-model approximate prediction depends on the covariance between all components and all factors. This covariance structure of the posterior test latent variables is induced by the training dataset through the posterior training latent variable covariance structure.
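The component-wise building blocks of (45)-(48) are the first two moments of a product of two jointly Gaussian variables; a small sketch of those standard results is given below (the full variance in (48) additionally sums the cross-component covariance terms, which are omitted here).

```python
def product_moments(mu1, var1, mu2, var2, cov12):
    """First two moments of the product f1*f2 of two jointly Gaussian variables.

    The mean is the result used in (46): E[f1*f2] = mu1*mu2 + Cov(f1, f2).
    The variance is the standard single-product term appearing in the expansion of (48).
    """
    mean = mu1 * mu2 + cov12
    var = (var1 * var2 + cov12 ** 2
           + mu1 ** 2 * var2 + mu2 ** 2 * var1
           + 2.0 * mu1 * mu2 * cov12)
    return mean, var
```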
FFGP-Based Calibration

With the FFGP emulator posterior predictive distribution approximated as a Gaussian, a modified likelihood can be formulated in much the same way as the GP emulator-modified likelihood function described in Section 3.2.4. As stated earlier, a single case run is being emulated at each MCMC iteration when calibrating the uncertain inputs; therefore N∗ = 1. Any number of points in time (or in general any number of control variable locations, or predictions per case) can be predicted, but for notational convenience it is assumed that the number of predictions per case equals the number of observational locations, M∗ = N_o. At each MCMC iteration the FFGP emulator predictions are therefore size N_o × 1. The joint posterior between the FFGP emulator predictions and the uncertain input parameters is
p(y∗, θ | y_o, D, Ξ̂) ∝ p(y_o | y∗) p(y∗ | θ, D, Ξ̂) p(θ).    (49)
The likelihood function between the observational data and the predictions is assumed to be Gaussian with a known observational error
matrix, just as in Section 3.2.4. Integrating out the FFGP predictions gives the uncertain input posterior distribution which looks very similar to the expression in (21):
p(θ | y_o, D, Ξ̂) ∝ p(y_o | θ, D, Ξ̂) p(θ).    (50)
Assuming the observational error matrix Σ𝜖 is diagonal, the FFGP modified likelihood function factorizes as
p(y_o | θ, D, Ξ̂) = ∏_{l=1}^{N_o} p(y_{o,l} | θ, D, Ξ̂).    (51)

The FFGP-modified likelihood function for each observational data point is then
p(y_{o,l} | θ, D, Ξ̂) = N(y_{o,l} | H∗(l), Var[H∗(l)] + σ_{ε,l}²).    (52)
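To show how an emulator-modified likelihood such as (50)-(52) is consumed, here is a hedged sketch of a basic Random Walk Metropolis loop over the uncertain inputs. The demonstrations below use RWM and AM-MCMC samplers; this simplified version with a fixed isotropic proposal and hypothetical function names is only illustrative.

```python
import numpy as np

def rwm_calibrate(log_prior, emulator_log_like, theta0, n_samples=100_000, step=0.05, seed=0):
    """Random Walk Metropolis sketch for sampling p(theta | y_o) as in (50).

    emulator_log_like(theta) should return the emulator-modified log-likelihood,
    i.e. the sum over observational points of the log of the Gaussian terms in (52).
    """
    rng = np.random.default_rng(seed)
    theta = np.atleast_1d(np.asarray(theta0, dtype=float))
    log_post = log_prior(theta) + emulator_log_like(theta)
    chain = np.empty((n_samples, theta.size))
    for i in range(n_samples):
        proposal = theta + step * rng.standard_normal(theta.size)
        log_post_prop = log_prior(proposal) + emulator_log_like(proposal)
        # Metropolis accept/reject on the log scale
        if np.log(rng.uniform()) < log_post_prop - log_post:
            theta, log_post = proposal, log_post_prop
        chain[i] = theta
    return chain
```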
CALIBRATION DEMONSTRATION: FRICTION FACTOR MODEL

Problem Statement

A method of manufactured solutions-type approach is used to verify that the emulator-based calibration process is working as expected. The metric of success is that calibration based on the emulator replicates the calibration results if the computer code itself is used in the same MCMC procedure. The "computer code" in this context is a simple expression that would not actually require an emulator and can therefore be easily used to calibrate any of its inputs. Synthetic "observational" data are generated by setting the uncertain inputs at true values and computing the corresponding output. If the calibration process works as intended, the true values of the uncertain inputs will be learned from the synthetic observational data, within the assumed measurement error tolerance.
A simple friction factor expression is used as the computer code:

f = exp(b) Re^(−exp(c)).    (53)

Note that f in (53) is the friction factor and not related to any of the emulator latent variables. The first demonstration below assumes that only b is uncertain, while the second demonstration assumes that both b and c are uncertain. Note that the friction factor expression in (53) is written in the above form to facilitate specifying Gaussian priors on the uncertain inputs. The typical friction factor expression (f = B/Re^C) can be recovered by substituting B = exp(b) and C = exp(c) into (53). Gaussian priors on b and c are therefore equivalent to specifying log-normal priors on B and C. The prior means on b and c equal the McAdams friction factor correlation values: log(0.184) and log(0.2), respectively. The prior variances on each are set so that 95% of the prior probability covers ±50% around the prior mean. Each demonstration follows the emulator-based calibration steps outlined in Figure 1.
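A short sketch of the demonstration setup under stated assumptions: the friction factor "computer code" in the form of (53), Gaussian priors on b and c centered at the McAdams values, and synthetic observations with 10% measurement error. The true values and the exact mapping of "±50% coverage" to a prior standard deviation are illustrative assumptions, not values quoted from the paper.

```python
import numpy as np

# "Computer code": friction factor in the log-parameterized form of (53).
def friction_factor(Re, b, c):
    return np.exp(b) * Re ** (-np.exp(c))

# Gaussian priors on b and c centered at the McAdams correlation values.
b_mean, c_mean = np.log(0.184), np.log(0.2)
# Assumed mapping of "95% of the prior covers +/-50%": two prior standard
# deviations span a factor of 1.5 on B = exp(b) and C = exp(c).
b_sd = c_sd = np.log(1.5) / 2.0

# Synthetic "observational" data at hypothetical true values with 10% measurement error.
rng = np.random.default_rng(1)
Re_obs = np.linspace(5_000.0, 45_000.0, 10)
b_true, c_true = np.log(0.20), np.log(0.21)      # illustrative true values only
f_true = friction_factor(Re_obs, b_true, c_true)
meas_sd = 0.10 * f_true.mean()
y_obs = f_true + meas_sd * rng.standard_normal(Re_obs.size)
```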
Demonstration for the Case of One Uncertain Parameter

With only b uncertain, the friction factor expression can be decomposed into the product of two separate functions. The first is a function of the Reynolds number and the second is a function of b:

f = g_1(Re) g_2(b) = Re^(−exp(c)) exp(b).    (54)

The 1-component FFGP emulator should be able to exactly model this expression, within the desired noise level, because the 1-component FFGP emulator is, by assumption, the product of two functions. The control variable is the Reynolds number; therefore the two factors in the FFGP model are the Reynolds number factor (factor 1) and the uncertain input factor (factor 2). The training data was generated assuming 15 case runs, N = 15, and 10 control variable locations per case, M = 10. These numbers were chosen based on the "rule of thumb" for GP emulators that requires at least 10 training points per input [5]. The b training values were selected at 15 equally spaced points within ±2σ of the prior mean. The Re training inputs were selected at 10 equally spaced points over an assumed Reynolds number range, between 5,000 and 45,000. The training data is shown in blue in Figure 2 along with the synthetic observational data in red. The measurement error is assumed
to be 10% of the mean value of the friction factor. The Reynolds number is shown in scaled terms where 0 and 1 correspond to the minimum and maximum training value, respectively. Figure 2 clearly shows that the true value of 𝑏 falls between two of the training case runs.
Figure 2: One uncertain parameter demonstration training set.
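The one-parameter training design described above can be reproduced with a few lines; the friction factor form and prior values repeat the assumptions of the earlier sketch.

```python
import numpy as np

def friction_factor(Re, b, c):
    # same assumed form of (53) as in the previous sketch
    return np.exp(b) * Re ** (-np.exp(c))

b_mean, c_mean = np.log(0.184), np.log(0.2)
b_sd = np.log(1.5) / 2.0     # assumed prior standard deviation, as before

# N = 15 case runs at equally spaced b values within +/- 2 sigma of the prior mean,
# each evaluated at M = 10 equally spaced Reynolds numbers between 5,000 and 45,000.
b_train = np.linspace(b_mean - 2.0 * b_sd, b_mean + 2.0 * b_sd, 15)
Re_train = np.linspace(5_000.0, 45_000.0, 10)

# M x N training output matrix consumed by the 2-factor 1-component FFGP emulator.
Y_train = friction_factor(Re_train[:, None], b_train[None, :], c_mean)
```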
Even with only one uncertain parameter, there are actually two inputs to the computer code: Re and b. If the standard GP emulator was built, a space filling design would need to be used, such as Latin Hypercube Sampling (LHS), to generate input values that sufficiently cover the input space. The FFGP training set is simpler to generate for this demonstration because each factor’s training input values can be generated independent of the other factor. This illustrates how the FFGP emulator decomposes, or literally factorizes, the training set into simpler, smaller subsets. The 2-factor 1-component FFGP emulator is built following the training algorithm outlined in Section 3.3.2. The posterior results of the observation space training points are shown in Figure 3. The red dots are the training output data and although difficult to see, the blue lines are the posterior quantiles on the FFGP training output predictions corresponding to the 5th, 25th, 50th, 75th, and 95th quantiles. The quantiles are tightly packed together representing that the FFGP emulator has very little uncertainty. This meets expectations since by assumption the 2-factor 1-component FFGP emulator should be able to exactly model the product of two functions.
Figure 3: Posterior observational training predictions.
The uncertain b input is calibrated using the FFGP-modified likelihood function. The input is scaled between 0 and 1, which corresponds to a prior scaled mean of 0.5 and prior scaled variance of 0.25. Posterior samples were drawn using the RWM algorithm with the FFGP emulator-modified likelihood function. A total of 2×10⁴ samples were drawn with the first half discarded as burn-in. Figure 4 shows the scaled posterior samples in blue with the true value displayed as the horizontal red line. The mixing rate is very high and the posterior samples are tightly packed around the true value. Figure 5 shows the estimated posterior distribution in blue relative to the relatively uncertain prior in black. The red line is the true value. Figure 5 illustrates how precise the posterior b distribution is, confirming that the 2-factor 1-component FFGP emulator is working as expected.
Figure 4: Scaled b posterior samples.
Figure 5: Estimated scaled b posterior and prior densities.
Demonstration for the Case of Two Uncertain Parameters

With both b and c uncertain, the uncertain input function g(b, c) cannot be written explicitly. It is expected that the 2-factor 1-component FFGP model will no longer be able to exactly model this relationship since the friction factor is no longer a product of two simple functions. A 3-factor model could be used, but this work focused on 2-factor models for convenience. The 2-factor FFGP model requires additional components to gain the necessary flexibility to handle this. The downside of using only 2 factors is that the uncertain parameter factor (factor 2) must be trained with space-filling designs, such as LHS. This work did not focus on finding the absolute "best" training set, which is an active area of research. The LHS-generated training dataset is shown in Figure 6. Fifty case runs were made with 25 points taken per case, N = 50 and M = 25. Using more training points helped guarantee the training dataset would "surround" or cover the observational data. For comparison purposes a standard GP emulator was built for this dataset. The GP emulator training points are shown as circles in Figure 6 and correspond to one point taken per case. The Reynolds numbers selected for the GP emulator training set were chosen as part of the LHS process. The FFGP emulator uses a total of NM = 1250 training points but the two factor covariance matrices are sizes (M × M) =
(25 × 25) for factor 1 and (𝑁 × 𝑁) = (50 × 50) for factor 2. The GP emulator covariance matrix is size (𝑁 × 𝑁) = (50 × 50) because only 50 points were used by assumption. If all of the training points were used, the GP emulator covariance matrix would be size (𝑁𝑀 × 𝑁𝑀) = (1250 × 1250). The FFGP emulator setup can therefore drastically reduce the computational burden and facilitate using as many training points as possible.
Figure 6: Two uncertain input demonstration training sets.
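A sketch of generating the factor 2 training inputs with Latin Hypercube Sampling, assuming SciPy's qmc module and ±2σ prior bounds on the (b, c) ranges; the actual bounds used for the design are not stated in this excerpt and are illustrative assumptions.

```python
import numpy as np
from scipy.stats import qmc

# Latin Hypercube design over assumed +/- 2 sigma prior ranges of (b, c) for N = 50 case runs.
b_mean, c_mean = np.log(0.184), np.log(0.2)
b_sd = c_sd = np.log(1.5) / 2.0                      # assumed prior standard deviations

sampler = qmc.LatinHypercube(d=2, seed=2)
unit_design = sampler.random(n=50)                   # space-filling points in [0, 1]^2
lower = [b_mean - 2.0 * b_sd, c_mean - 2.0 * c_sd]
upper = [b_mean + 2.0 * b_sd, c_mean + 2.0 * c_sd]
bc_design = qmc.scale(unit_design, lower, upper)     # 50 x 2 matrix of (b, c) training inputs

# Each case run is evaluated at M = 25 Reynolds numbers to build the 25 x 50 training output.
Re_train = np.linspace(5_000.0, 45_000.0, 25)
```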
Examining Figure 6 also shows that both the FFGP and GP emulator training sets are quite poor compared to the training set used in the one uncertain input demonstration. Only a few case runs lie within the error bars of the data and there are very few GP emulator training points near the observational data. It would be relatively easy to keep adding new training points manually for this demonstration to yield a training set that is closer to the observational data. However, in a real problem with many uncertain inputs it may be very difficult to do that manually. This demonstration problem was set up this way to show that the FFGP emulator would outperform the GP emulator due to the pattern recognition capabilities. A total of 3 emulators were constructed, the standard GP and a 2-factor 1-component and 2-factor 2-component FFGP emulators. Due to different output scaling it is difficult to compare the training results between the GP and FFGP emulators, but a simple approach to compare FFGP performance at the end of the training algorithm is to compare the likelihood noise
hyperparameter. The σ_n² hyperparameter was reparameterized during the RWM sampling as σ_n² = exp(2φ_n). The more negative φ_n is, the smaller the likelihood noise will be for that particular emulator. Figures 7 and 8 show the sample histories for φ_n for the 1-component and 2-component FFGP models, respectively. In each figure, the gray line is the initial guess, the blue line shows the samples, and the red line shows the point estimate. It is very clear that the 2-component FFGP emulator is far more accurate relative to the training set. The φ_n point estimate for the 1-component model gives a likelihood standard deviation (σ_n) that is over 45× that of the 2-component emulator. This illustrates the point that the 1-component FFGP emulator is no longer an exact representation of the computer code. The additional component within the 2-component FFGP emulator provides the extra flexibility needed to match the training set more accurately.
Figure 7: FFGP 2-factor 1-component likelihood noise hyperparameter samples.
Figure 8: FFGP 2-factor 2-component likelihood noise hyperparameter samples.
With the FFGP emulators built, they were used to calibrate the uncertain b and c inputs using the FFGP-modified likelihood function within the AM-MCMC routine. A total of 10⁵ samples were drawn with the first half discarded as burn-in. The calibrated posterior predictions for the 1- and 2-component FFGP emulators are shown in Figures 9 and 10, respectively. In both figures, the plot on the left shows the posterior calibrated predictions along with all of the training data. The plot on the right zooms in on the posterior calibrated predictions and the observational data. The gray lines are the training data. In both figures, the blue lines are the posterior quantiles (the 5th, 25th, 50th, 75th, and 95th quantiles) of the predictive means and, although difficult to see, the black line is the mean of the predictive means. The blue lines therefore represent what the emulator thinks the computer code's posterior predictive quantiles would be if the computer code had been used. The green band is the total predictive uncertainty band of the emulator, spanning 95% of the emulator prediction probability, and is ±2 standard deviations around the mean of the predictive means. Thus, the green band represents the emulator's confidence. If the edge of the green band falls directly on top of the outer blue lines, the emulator is essentially perfect and contributes
no additional uncertainty to the posterior predictions. A gap between the outer blue lines and the edge of the green band, however, illustrates that the emulator has some associated uncertainty when it makes predictions. The emulator is not perfect, as described in the previous sections, and therefore some spacing between the green band’s edge and the outer blue lines is expected. However, if the gap width is large, the emulator’s own predictive uncertainty starts to dominate the total predictive uncertainty. Considering these conventions, the 1- and 2-component FFGP emulators can be visually compared quite easily. As shown in Figure 10 the green band is very close to the spread in the blue lines; thus the 2-component FFGP emulator adds very little additional uncertainty in the predictions. The 2-component FFGP emulator’s higher posterior predictive precision relative to the 1-component FFGP emulator is in line with the training results shown in Figures 7 and 8. The 1-component FFGP emulator required more noise to match the training data, which was always propagated through onto the predictions, yielding more uncertain predictions.
Figure 9: FFGP 2-factor 1-component calibrated posterior predictions ((a) covers entire training set; (b) zoom in on the observational data).
Figure 10: FFGP 2-factor 2-component calibrated posterior predictions ((a) covers entire training set; (b) zoom in on the observational data).
Reducing the emulator predictive uncertainty allowed the 2-component FFGP emulator to be more accurate relative to the observational data. As shown in Figure 9, the 1-component FFGP emulator predictions seem to regress the observational data, within the total predictive uncertainty. The 2-component FFGP emulator's reduced total predictive uncertainty allows the data trend to be captured more accurately. The b and c inputs were also calibrated using the GP emulator. Once constructed, the GP-modified likelihood function was used within the AM-MCMC scheme. The same number of samples was drawn as was done for the FFGP case, to provide a direct comparison between the GP-modified and FFGP-modified likelihood functions. The GP-based calibrated posterior predictions are shown in Figure 11 using the same format as the FFGP predictions. The GP emulator adds less uncertainty to the predictions than the 1-component FFGP emulator but is more uncertain and less accurate relative to the data than the 2-component FFGP emulator. The predictions over the first half of the (scaled) Reynolds numbers are very accurate and are similar to the 2-component FFGP emulator predictions. The latter half of the Reynolds number predictions, however, are worse relative to the 2-component FFGP emulator predictions. The reasons for the difference are best explained by examining the posterior distributions on the b and c parameters.
Figure 11: GP calibrated posterior predictions ((a) covers entire training set; (b) zoom in on the observational data).
The posterior distributions from each of the three emulator-based calibration processes are shown in Figures 12, 13, and 14. In all three figures, the black line is the estimated prior, blue is the emulator-based estimated posterior, red is the true value, and green is the estimated posterior when the computer code (the friction factor expression) is used in the AM-MCMC scheme instead of the emulators. Each of the figures is shown over the scaled input ranges, so 0.5 is the scaled prior mean. The computer code-based calibration results find the true values very well, with the posterior mode lining up nearly exactly with the true values. The posterior variance is limited by the assumed measurement error. Although not shown, the posterior variance decreases as the assumed measurement error is decreased.
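The last point can be illustrated with a simplified conjugate-Gaussian analogue; this linear-Gaussian model is only an illustration of the general behaviour, not the friction factor calibration itself. As the assumed measurement standard deviation shrinks, the posterior standard deviation shrinks with it.

```python
import numpy as np

# Gaussian prior with standard deviation sigma_prior and n independent
# observations with measurement standard deviation sigma_meas: the posterior
# precision is the sum of the prior precision and the data precision.
sigma_prior, n = 0.25, 20
for sigma_meas in (0.10, 0.05, 0.01):
    post_var = 1.0 / (1.0 / sigma_prior**2 + n / sigma_meas**2)
    print(f"sigma_meas = {sigma_meas:.2f} -> posterior s.d. = {np.sqrt(post_var):.4f}")
```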
Figure 12: GP-based uncertain input posterior distributions (black: prior, blue: emulator-based posterior, green: computer code-based posterior, and red: true value).
Figure 13: 1-component FFGP-based uncertain input posterior distributions (black: prior, blue: emulator-based posterior, green: computer code-based posterior, and red: true value).
Figure 14: 2-component FFGP-based uncertain input posterior distributions (black: prior, blue: emulator-based posterior, green: computer code-based posterior, and red: true value).
Although the GP emulator is capable of finding the correct posterior modes, the (marginal) posterior distributions do not match the computer code-based posterior distributions. Smaller second modes are present
in both input parameters. As described in detail in [12], the relatively sparse GP training set is impacting the posterior results. The GP is only able to resolve the overall trend, as illustrated by the GP-based posterior mode roughly corresponding to the computer code-based posterior mode. The posterior tails, however, cannot be resolved, since the emulator’s own predictive variance starts to impact predictions far from the overall trend. The variation in the output data can therefore be explained by the additional noise from the emulator, rather than by variation in either of the inputs. The inputs can therefore take on values they would not normally have, since from the emulator’s point of view the prediction overlaps the data’s own error. The 1-component FFGP emulator-based results also support this concept, since the posterior distributions in Figure 13 are still quite broad. The emulator is capable of shifting the (marginal) posterior distributions in the correct directions, but the additional emulator uncertainty prevents the MCMC sampling from resolving any additional information about the input values. The 2-component FFGP emulator, however, is so accurate relative to the actual friction factor “computer code” that its uncertain input (marginal) posterior distributions, as shown in Figure 14, are almost identical to the computer code-based results. In more complex problems, it is not expected that the FFGP-based results will always be as accurate as in this simple demonstration. However, the FFGP emulator-based calibration is capable of matching the computer code-based calibration results, as shown here. In more complex and realistic situations, the computer code-based results will not be available for comparison, so it was important to verify, through this manufactured-solution problem, that the emulator-based process works as expected.
CONCLUSIONS
The emulator-based calibration approach with the FFGP model was shown above to be capable of reproducing the calibration results obtained when the actual computer code is used in the MCMC sampling. As explored in [11, 17], the efficacy of the FFGP in this application can depend on how the model is structured but, in the cases explored, the FFGP emulator was shown to outperform the standard GP emulator on the given friction factor demonstration problem, because it is capable of efficiently using more training data. This is an important feature because safety analysis problems produce time series predictions which could prove to be computationally expensive for standard GP emulators. Reducing the computational burden
would require choosing a limited subset of all of the training runs, which might negatively impact the GP emulator-based results, as described in [12]. The GP-based friction factor calibration results presented in this work confirmed those issues. The FFGP emulator, however, uses pattern recognition techniques to efficiently decompose the training data. The latent or hidden patterns allow more training data to be used, which can drastically improve the predictive accuracy of the emulator. This paper specifically covered the theory and formulation of the FFGP-based calibration approach. Work to be presented in a subsequent paper applies the FFGP-based calibration approach to a more realistic safety analysis scenario, an EBR-II loss of flow transient modeled with RELAP5. As will be shown in that paper, the FFGP-based calibration approach is over 600 times faster than if the RELAP5 model were used directly. Moreover, the FFGP approach is needed because the standard GP emulator does not provide the necessary flexibility to emulate the RELAP5 time series predictions.
REFERENCES
1. D. L. Kelly and C. L. Smith, “Bayesian inference in probabilistic risk assessment: the current state of the art,” Reliability Engineering and System Safety, vol. 94, no. 2, pp. 628–643, 2009.
2. D. Higdon, M. Kennedy, J. C. Cavendish, J. A. Cafeo, and R. D. Ryne, “Combining field data and computer simulations for calibration and prediction,” SIAM Journal on Scientific Computing, vol. 26, no. 2, pp. 448–466, 2004.
3. D. Higdon, J. Gattiker, B. Williams, and M. Rightley, “Computer model calibration using high-dimensional output,” Journal of the American Statistical Association, vol. 103, no. 482, pp. 570–583, 2008.
4. A. Keane and P. Nair, Computational Approaches for Aerospace Design: The Pursuit of Excellence, John Wiley & Sons, 2005.
5. MUCM, Managing Uncertainty in Complex Models Project, http://www.mucm.ac.uk.
6. C. E. Rasmussen and C. Williams, Gaussian Processes in Machine Learning, Springer, New York, NY, USA, 2004.
7. K. Murphy, Machine Learning: A Probabilistic Perspective, MIT Press, Cambridge, Mass, USA, 2012.
8. C. Linkletter, D. Bingham, N. Hengartner, D. Higdon, and K. Q. Ye, “Variable selection for Gaussian process models in computer experiments,” Technometrics, vol. 48, no. 4, pp. 478–490, 2006.
9. N. Zuber, G. E. Wilson, B. E. Boyack et al., “Quantifying reactor safety margins Part 5: evaluation of scale-up capabilities of best estimate codes,” Nuclear Engineering and Design, vol. 119, no. 1, pp. 97–107, 1990.
10. N. Zuber, U. S. Rohatgi, W. Wulff, and I. Catton, “Application of fractional scaling analysis (FSA) to loss of coolant accidents (LOCA): methodology development,” Nuclear Engineering and Design, vol. 237, no. 15–17, pp. 1593–1607, 2007.
11. J. Yurko, Uncertainty quantification in safety codes using a Bayesian approach with data from separate and integral effect tests [Ph.D. thesis], MIT, Cambridge, Mass, USA, 2014.
12. F. M. Hemez and S. Atamturktur, “The dangers of sparse sampling for the quantification of margin and uncertainty,” Reliability Engineering and System Safety, vol. 96, no. 9, pp. 1220–1231, 2011.
13. J. Sacks, W. J. Welch, T. J. Mitchell, and H. P. Wynn, “Design and analysis of computer experiments,” Statistical Science, vol. 4, no. 4, pp. 409–435, 1989.
14. W. J. Welch, R. J. Buck, J. Sacks, H. P. Wynn, T. J. Mitchell, and M. D. Morris, “Screening, predicting, and computer experiments,” Technometrics, vol. 34, no. 1, pp. 15–25, 1992.
15. N. A. Cressie, Statistics for Spatial Data, John Wiley & Sons, New York, NY, USA, 1993.
16. M. C. Kennedy and A. O’Hagan, “Bayesian calibration of computer models,” Journal of the Royal Statistical Society, Series B: Statistical Methodology, vol. 63, no. 3, pp. 425–464, 2001.
17. M. N. Schmidt, “Function factorization using warped Gaussian processes,” in Proceedings of the 26th International Conference on Machine Learning, pp. 921–928, New York, NY, USA, June 2009.
18. J. Luttinen and A. Ihler, “Variational Gaussian process factor analysis for modeling spatio-temporal data,” in Advances in Neural Information Processing Systems, vol. 22, pp. 1177–1185, 2009.
19. H. Haario, E. Saksman, and J. Tamminen, “An adaptive Metropolis algorithm,” Bernoulli, vol. 7, no. 2, pp. 223–242, 2001.
20. R. Neal, “MCMC using Hamiltonian dynamics,” in Handbook of Markov Chain Monte Carlo, pp. 113–162, Chapman & Hall/CRC Press, 2011.
21. J. Yurko, J. Buongiorno, and R. Youngblood, “Bayesian calibration of safety codes using data from separate and integral effects tests,” in Proceedings of the International Topical Meeting on Probabilistic Safety Assessment and Analysis, Sun Valley, Idaho, USA, 2015.
Chapter 10
Microscopic Simulation-Based High Occupancy Vehicle Lane Safety and Operation Assessment: A Case Study
Chao Li, Mohammad Karimi, and Ciprian Alecsandru Department of Building, Civil and Environmental Engineering, Concordia University, Montréal, QC, Canada
ABSTRACT
This study proposes two general alternative designs to enhance the operation and safety of High Occupancy Vehicle (HOV) lanes at junctions with bus terminals or parking lots. A series of analysis tools, including microscopic simulation, video-based vehicle tracking technique, and Surrogate Safety Assessment Model (SSAM), are applied to model and test the safety and operational efficiency of an HOV road segment near a bus terminal in Québec as a case study. A metaheuristic optimization algorithm (i.e., Whale Optimization Algorithm) is employed to calibrate the microscopic model
Citation: Chao Li, Mohammad Karimi, Ciprian Alecsandru, “Microscopic Simulation-Based High Occupancy Vehicle Lane Safety and Operation Assessment: A Case Study”, Journal of Advanced Transportation, vol. 2018, Article ID 5262514, 12 pages, 2018. https://doi.org/10.1155/2018/5262514. Copyright: © 2018 by Authors. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
while deviation from the observed headway distribution is considered as a cost function. The results indicate that this type of HOV configuration exhibits significant safety problems (a high number of crossing conflicts) and operational issues (a high value of total delay) due to the terminal-bound buses that frequently need to travel across the main road. It is shown that the proposed alternative geometry design efficiently alleviates the traffic conflict issues. In addition, the alternative control design scheme significantly reduces the public transit delay. It is expected that this methodology can be applied to other reserved lane configurations similar to the investigated case study.
INTRODUCTION
The HOV lane represents a restricted usage traffic lane reserved for vehicles carrying a predetermined number of occupants. The implementation of an HOV lane system targets mobility improvement of both current and future roadway networks. Considering over forty years of deployment of HOV lanes, it has been proven that reserved lanes contribute to mitigating traffic congestion in urban areas and effectively reduce the person-hour delay [1, 2]. However, many problems related to various implementations of HOV lanes have been identified. These problems can be roughly classified into two categories: the reduction of capacity (for the non-HOV users) and potential traffic safety issues. The former category may include increased congestion on the adjacent General Purpose (GP) lanes and/or reduction of vehicle speeds due to the merging maneuvers of High Occupancy Vehicles into the GP lanes. The latter category is mainly related to lane changes at prohibited locations, especially in the proximity of junctions with other road facilities, such as bus terminals or parking lots [3]. Currently, efforts are continually being made to explore new ways to improve the operation and safety of HOV facilities. However, there is no universally accepted method to evaluate the safety effectiveness of HOV facilities [4]. Some studies focused on HOV safety evaluation based on the statistical analysis of accident data over long periods [5]. Several studies examined the safety of HOV facilities with respect to different types of geometric design based on collision and driving behavior (i.e., lane-changing) data [6, 7]. Nevertheless, obtaining reliable accident data over a long enough period is not always possible, especially for recently deployed facilities. A reliable accident-based analysis takes a long time to establish and thus is not suitable for current urban traffic system
development. In addition, many characteristics of the urban traffic system may change over time (e.g., traffic demand volumes, road alignments, traffic mix, etc.), and this might require an expedited method to assess the existing traffic conditions. Accordingly, using conflict analysis as a method of safety assessment is preferable, as it makes it possible to analyze safety improvements before implementing any treatment in the real world. However, the geometric configuration of an HOV facility has significant impacts on its safety performance [7]. For instance, a before-after study of converting HOV lanes from continuous access to limited access for lane changes showed a significant decrease in conflict occurrence. Therefore, HOV facilities with limited access are expected to be safer than those with continuous access. To validate this conclusion, more studies must be conducted. However, there is limited opportunity for researchers to conduct before-after studies of road facilities with respect to geometric modifications, because such modifications are infrequent. Therefore, utilizing simulation tools may be an effective remedial measure to overcome the limitation of data availability and to evaluate the impacts of potential geometric alignment changes of existing facilities. Several studies have introduced the evaluation of safety or capacity of HOV facilities utilizing microsimulation [3, 8, 9]. However, these studies mainly focused on the analysis results of the study areas. Therefore, it is necessary to develop a systematic assessment method for HOV lanes. In particular, HOV deployments on arterials in the proximity of terminals and parking lots can be assessed using real-world data to calibrate a microscopic simulation model. In this study, a VISSIM microsimulation model is developed to test the safety and operational efficiency of an urban HOV facility near a bus terminal in Québec, Canada. This model is calibrated by employing a metaheuristic optimization algorithm, the Whale Optimization Algorithm (WOA), to minimize the deviation of simulation results from the observed data. Two general alternative network designs are proposed for comparison analysis (i.e., one modifies the existing road geometric alignment; the other proposes a change in the existing traffic control strategy). To assess the road safety impact of the proposed alternative designs, the Surrogate Safety Assessment Model (SSAM) is applied to compare the simulated vehicle conflicts between the existing network and the alternative solutions. The results indicate that the status quo of the study area exhibits a safety problem due to high interactions between buses and passenger cars. The proposed alternative geometry design efficiently eliminates the traffic conflicts. In
addition, the alternative control design scheme significantly reduces the public transit delay.
LITERATURE REVIEW
Traditionally, most traffic safety studies employed statistical analysis of accident records within a given study area [10–13]. Several studies pointed out the drawbacks of using authority-reported crash data for safety evaluation, for example, the lack of ability to evaluate the safety of traffic facilities yet to be built or to assess traffic remediation solutions yet to be applied in the field. In addition, the rare and random occurrence of traffic accidents makes such analyses slow to establish [14] and limits the ability to deduce the crash process [15, 16]. On account of these drawbacks, an alternative safety evaluation approach based on computer microsimulation modeling of vehicle interactions was developed over the past couple of decades. This approach was possible mainly due to the advancements in computing technologies that allowed the development of enhanced traffic simulation models able to replicate vehicle interactions through modeling complex driving behaviors [16, 17]. A significant advantage of simulation-based safety analysis is that microsimulation models can easily generate and measure various safety performance indicators [18, 19]. The typical safety performance indicator is the vehicular conflict, given that conflicts can be observed more frequently than crashes and that their frequency is expected to be correlated with crash occurrence [14, 20–22]. Various studies have validated the statistical significance of the correlation between conflicts and accidents [23–26]. A dedicated tool, namely SSAM, was developed by the Federal Highway Administration (FHWA) to automatically identify, classify, and evaluate the severity of simulated traffic conflicts [14]. Several studies showed that combining VISSIM and SSAM provides a reliable tool for traffic safety evaluation, provided that consistency between field-observed and simulated conflicts is established [27, 28]. Another study proposed a two-step calibration procedure of VISSIM (Wiedemann model) to enhance the correlation between simulated and field-measured conflicts [29]. Therefore, if the simulation model is properly calibrated, it can be used to reliably represent the real-world traffic network in terms of both operation and safety parameters.
METHODOLOGY
Modeling of Geometry and Flow
Typically, more detailed information contained in the simulation model contributes to capturing more reliably the traffic conditions at a given study area. This is especially important for a traffic safety simulation model, which requires good accuracy of both simulated capacity and vehicle performance. The basic input to this model is represented by the road characteristics (i.e., the number of lanes in each direction, the lane separation type, and the position of access points). In this study, the links and connectors of the study area were built in VISSIM by means of an aerial photo from Google Maps®. Some details of the geometry, for example, the access position of the public transit terminal, were measured in the field and were compared with the field-recorded videos to ensure accuracy. Similarly, the position of the reserved lane was collected in the field and included in the simulation model. Traffic flow is another important input parameter as it relates to the road capacity, one of the potential calibration variables. Traffic flows were measured using the videos recorded at the study area; the following data was collected: the vehicle counts of each lane, vehicle routes within the study area, and the vehicle types (e.g., bus, truck, and passenger cars). In this study, in order to smooth out random variations in flows while maintaining good precision, the vehicle flows were recorded and input into the model in five-minute increments. An additional five-minute period without vehicle demand was included at the end of each simulation scenario to avoid truncating the analysis period observed in the field. To model the observed vehicle composition, road users were identified and classified into three categories: passenger cars, buses, and trucks. The basic vehicle characteristics, for example, the acceleration rate, vehicle length, and vehicle weight of each vehicle type, can be modeled separately in VISSIM so as to reflect the traffic more realistically. To determine individual vehicle routes, vehicles were tracked in the videos generated by the three cameras that were used to cover the whole study area. The route of each vehicle in the simulation was assigned in strict accordance with the path observed on the video recordings to ensure a realistic representation of the study area.
Modeling of Traffic Signal
The peak hour traffic signal cycle length and the red, amber, and green time intervals in each direction were collected in the field and modeled in
VISSIM. In this study, a fixed-cycle signal program was built and set at the intersection to replicate the traffic light at the study area. Additional signal control strategies were used in this study to improve the network performance, for example, a fixed signal cycle containing a protected left-turn phase at the intersection and a pulse-triggered signal at the public transit terminal. To improve the efficiency of public transit, a pulse-triggered signal control was implemented by adding a detector at the exit of the terminal and signal heads linked with the detector near the terminal. An add-on signal design module, namely Vehicle Actuated Programming (VAP), was used to program this actuated signal. When no buses are detected, the signal holds a phase of permanent green on the main street and permanent red on the minor road. When exiting buses are detected by the sensor, the signal is programmed to switch to the complementary phase (i.e., green signal on the minor road and red on the main road), thus protecting the movements of buses crossing through multiple lanes.
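The pulse-triggered logic described above can be summarized with the minimal sketch below. It is written as plain Python pseudologic rather than actual VAP code, and the phase names and the ten-second hold (taken from the second alternative design described later) are illustrative assumptions.

```python
# Plain-Python sketch of the bus-actuated signal logic (not actual VAP code).
MAIN_GREEN = {"main_road": "green", "terminal_exit": "red"}
EXIT_GREEN = {"main_road": "red", "terminal_exit": "green"}

def signal_state(bus_detected, seconds_since_last_detection, hold_time_s=10):
    """Keep the main road green by default; switch to the complementary phase
    while a bus is present at the exit detector and hold it for a short time
    after the last detection (hold time is illustrative)."""
    if bus_detected or seconds_since_last_detection < hold_time_s:
        return EXIT_GREEN
    return MAIN_GREEN
```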
Modeling of Right of Way without Signal Control
In VISSIM, priority rules are defined to capture the conflicting traffic flows that are not controlled by signals. In this study, the priority rules were set at the entry and the exit zones of the bus terminal, in order to realistically model the access and egress movements of buses as they were observed in the video recordings. Typically, the buses travel to and from the terminal, yielding to the vehicles traveling along the main arterial, and stop near the access or exit until acceptable gaps occur in both directions on the main road. Two thresholds are set for the priority rules to constrain the crossing of the yielding vehicles: the minimum headway and the minimum gap time. A yielding vehicle will stop before the stop line until both predetermined thresholds are met. The values of the thresholds are determined by reviewing, from the video, all of the gaps and headways accepted by the crossing buses. The conflict areas are automatically generated in VISSIM where links or connectors overlap. In this study, the priority rules at the conflict areas were set so that vehicles approaching the conflict area from the minor road yield to those from the main road, as typically observed in the field. The gap time needed for crossing at the conflict area was determined similarly by reviewing the video recordings.
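A minimal sketch of the two-threshold gap-acceptance check described above is given below; the 6-second gap time and 20-meter headway are the values reported later in the calibration section, and the function name is an illustrative assumption.

```python
MIN_GAP_TIME_S = 6.0      # minimum time gap accepted by a crossing bus
MIN_HEADWAY_M = 20.0      # minimum distance headway on the conflicting lanes

def may_cross(time_gap_s, distance_headway_m):
    """A yielding bus proceeds only if both priority-rule thresholds are
    satisfied; otherwise it keeps waiting at the stop line."""
    return time_gap_s >= MIN_GAP_TIME_S and distance_headway_m >= MIN_HEADWAY_M
```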
Another important VISSIM calibration parameter is the avoid blocking value, which defines the ratio of vehicles that do not stop in the middle of a junction. This value defaults to 100% in VISSIM; in other words, all vehicles will follow the rule not to block the junction if there is stopped traffic ahead. However, the video recorded at the study area showed that no vehicle obeyed this rule. Therefore, to reflect the real conditions, this value was set to 0% for all the conflict areas in the simulation models used in this study.
Modeling of Driving Behavior
Properly modeling the field-observed driving behavior is critical for road safety evaluation, since it directly influences the vehicle interactions at the microscopic level. The microsimulation tool VISSIM adopts the Wiedemann car following model as the main component for modeling vehicle longitudinal movement, and rule-based laws for modeling vehicle lateral movement and lane change behavior. In this study, the Wiedemann 74 model is selected to simulate the urban motorized traffic, as suggested by the VISSIM user’s manual [30]. This model contains three adjustable parameters: the average standstill distance, the additive part of the safety distance, and the multiplicative part of the safety distance. The average standstill distance defines the average desired distance between two cars. The additive part and the multiplicative part of the safety distance represent the values used for the computation of the desired safety distance. For the initial simulation, the values of these three parameters are usually left at their defaults. However, they must be calibrated later to suit the real driving behaviors of the study site. The lane change behaviors are defined by a rule-based model in VISSIM. In this model, the critical parameter that decides whether a lane change is executed is the minimum headway. A vehicle can only change lane when a distance gap larger than the predetermined minimum headway is available in the adjacent lane. Otherwise, it has to either continue traveling or stop and wait until a sufficient gap occurs so that it can merge according to its predefined route. In this study, the value of the minimum headway was determined by reviewing the videos. Another noticeable parameter defined in the lane change model is advanced merging; this option is selected in this study so that more vehicles can change lanes earlier when following their routes, as encountered in the videos.
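For reference, the three Wiedemann 74 parameters enter the desired following distance through the form commonly documented for VISSIM, d = ax + (bx_add + bx_mult * z) * sqrt(v). The sketch below states that relationship explicitly; the formula, the default values, and the distribution of the random term z are assumptions based on the publicly documented model, not details given in this chapter.

```python
import math
import random

def wiedemann74_following_distance(speed_mps, assd=2.0, apsd=2.0, mpsd=3.0, z=None):
    """Desired following distance d = ax + (bx_add + bx_mult * z) * sqrt(v).
    assd -> ax (average standstill distance, m)
    apsd -> bx_add (additive part of the safety distance)
    mpsd -> bx_mult (multiplicative part of the safety distance)
    z is a driver-specific random term in [0, 1]; the defaults shown here
    are assumed typical values, not the study's calibrated ones."""
    if z is None:
        z = min(max(random.gauss(0.5, 0.15), 0.0), 1.0)
    return assd + (apsd + mpsd * z) * math.sqrt(speed_mps)

# Example: desired following distance at 50 km/h for an average driver (z = 0.5)
print(round(wiedemann74_following_distance(50 / 3.6, z=0.5), 2))
```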
Measurement of Vehicle Speed Distribution by Feature-Based Tracking
Vehicle speed distribution is an important input parameter for safety simulation. While potentially more accurate, radar measurements of individual vehicle speeds on multiple lanes are usually difficult to carry out in the field simultaneously. Therefore, an alternative method, video-based feature tracking, was applied in this study to measure the vehicle speeds. An open-source software project, namely Traffic Intelligence, was used to automatically track and measure the speed of the vehicles captured on video at the study site [31]. Traffic Intelligence consists of a set of tools that work cooperatively for traffic data processing and analysis, including camera image calibration, feature tracking, and trajectory data analysis. The feature-based tracking algorithm utilizes a homography file that projects the camera image space to the real-world ground plane. The homography file was created by utilizing a video frame and a corresponding aerial photo with known scale (pixels per meter). In this study, an aerial photo of the study site from Google Maps with a known scale of 0.21 pixels per meter was adopted. In total, ten noncollinear visible points on the video frame were positioned on the aerial photo; thus, the video image was projected onto the aerial photo, and the vehicles tracked in the video could be tracked in the real-world plane together with their speeds. Figure 1 shows the points projected to the aerial photo from the video frame.
Figure 1: Points selected on the video frame to compute homography file.
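The idea behind the homography step can be illustrated with a few lines of OpenCV; this is only a sketch of the underlying computation under assumed point coordinates, not the actual Traffic Intelligence calibration script.

```python
import cv2
import numpy as np

# Hypothetical point correspondences: pixel positions of the same ground
# points in a video frame and in the aerial photo (the study used ten
# noncollinear points; four are shown here purely for illustration).
frame_pts = np.array([[102, 388], [640, 355], [910, 470], [330, 620]], dtype=np.float32)
aerial_pts = np.array([[55, 120], [230, 118], [300, 180], [110, 240]], dtype=np.float32)

# Homography mapping the camera image plane onto the aerial (ground) plane
H, _ = cv2.findHomography(frame_pts, aerial_pts)

# Project a tracked feature from the video frame onto the aerial photo and
# convert to metres using the scale quoted in the text (pixels per metre)
feature_px = np.array([[[500.0, 420.0]]], dtype=np.float32)
feature_on_aerial = cv2.perspectiveTransform(feature_px, H)[0, 0]
metres_per_pixel = 1.0 / 0.21
print(feature_on_aerial * metres_per_pixel)
```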
Based on the computed homography file, the feature tracking program can be run. The predetermined number of features of each vehicle in the video was detected and tracked frame by frame until the vehicle left the video capture area. In order to suppress the interference of shadows, a mask image was created and overlaid on the video image; therefore, only the features within the white region of the mask image could be detected, and the shadows were filtered out. The features that move consistently were then grouped together to generate the trajectory file of each vehicle, and all the trajectories generated from the video were written into a database. The average speed of each vehicle can then be easily read by processing these trajectories. Figure 2 shows the feature tracking process by Traffic Intelligence.
Figure 2: Feature tracking process by Traffic Intelligence.
Model Calibration
In order to determine the optimum values for the calibration parameters, an objective function should be defined based on the error between observed data and simulated data. The objective function is the deviation of the simulated gap distribution from the observed gap distribution. In order to test this goodness of fit (the objective), the Chi-square test was employed. In this study, the westbound vehicle gap distribution on the GP lane near the bus terminal was taken as the criterion to calibrate the model, because the vehicle time gap directly reflects the car following behavior. The real vehicle gaps were observed manually from the video using the MPC player, which provides millisecond accuracy. Because the vehicles traveling westbound pass through a signalized intersection before they enter the camera's field of
view, time gaps greater than 5 seconds were ignored to eliminate the impact of the red time at the intersection. The distribution of all the observed gaps smaller than or equal to 5 seconds was recorded in a histogram with a bin width of 0.3 seconds. Figure 3 shows the observed vehicle gap distribution.
Figure 3: Observed vehicle gap distribution.
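A minimal sketch of this goodness-of-fit objective is given below, assuming the 5-second cutoff and the 0.3-second bins described above; the exact scaling of expected frequencies used in the study may differ.

```python
import numpy as np

def gap_histogram(gaps_s, bin_width=0.3, max_gap=5.0):
    """Histogram of time gaps up to 5 s in 0.3 s bins, as described above."""
    gaps = np.asarray(gaps_s, dtype=float)
    edges = np.arange(0.0, max_gap + bin_width, bin_width)
    counts, _ = np.histogram(gaps[gaps <= max_gap], bins=edges)
    return counts.astype(float)

def calibration_cost(observed_gaps, simulated_gaps):
    """Chi-square statistic comparing the simulated gap histogram against
    the observed one; lower values indicate a better match."""
    obs = gap_histogram(observed_gaps)
    sim = gap_histogram(simulated_gaps)
    sim *= obs.sum() / sim.sum()          # rescale to the same total count
    mask = sim > 0                        # skip empty expected bins
    return float(np.sum((obs[mask] - sim[mask]) ** 2 / sim[mask]))
```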
In this paper, the Whale Optimization Algorithm (WOA), a metaheuristic nature-inspired algorithm, is applied to calibrate the model. The deviation of the simulated headway distribution from the observed distribution is considered as the objective function to be minimized during the calibration process. WOA is inspired by the hunting behavior of humpback whales. It is defined as “the simulated hunting behavior with random or the best search agent to chase the prey and the use of a spiral to simulate bubble-net attacking mechanism of humpback whales” [32]. The hunting behavior of whales is representative of the procedure of this algorithm. The three parameters of the Wiedemann 74 model in VISSIM (i.e., average standstill distance (ASSD), additive part of safety distance (APSD), and multiplicative part of safety distance (MPSD)), which have the highest impact on the model, were selected to be calibrated. The calibration process was carried out in MATLAB, using an optimization toolbox connected to the COM interface of VISSIM through M-file programming. After 190 simulation runs, the optimal values of the parameters were determined to be as follows: ASSD = 1.156, APSD = 0.637, and MPSD
= 8.079. For diverse random seeds, the simulation results showed that these optimal parameters lead to simulated headway distributions that statistically match the observed ones at the 90% confidence level. It is worth mentioning that the simplest way to optimize the cost function is to explore the whole feasible region of the parameters to find the global minimum, which is extremely time-consuming. For this case study, these optimization parameters took values within the following intervals: ASSD between 0 and 2, APSD between 0 and 1, and MPSD between 0 and 10. By exploring these intervals exhaustively, the optimal values were found only after nearly 1000 simulation runs. The lateral movement of buses that merge into the main traffic from the HOV lane, or travel across the road when an acceptable gap is identified, was also calibrated by adjusting the parameters of the priority rules. The minimum gap time and distance headway were set to 6 seconds and 20 meters, respectively, similar to the values observed in the recorded videos. It is worth noting that some of the terminal-bound buses changed lanes between the reserved HOV lane and the adjacent GP lane before the intersection; this behavior is reflected in the simulation model.
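For readers unfamiliar with WOA, the sketch below shows the core update rules of the algorithm [32] applied to the three Wiedemann 74 parameters within the intervals stated above. It is a minimal, generic implementation: in the study the cost function would wrap a full VISSIM run (via the COM interface) and return the headway-distribution deviation, whereas here a placeholder cost is used, and the population size and iteration count are illustrative assumptions.

```python
import numpy as np

def woa_minimize(cost, bounds, n_whales=10, n_iter=20, seed=0):
    """Whale Optimization Algorithm (minimal sketch). `cost` maps a parameter
    vector to the objective value; `bounds` is a list of (low, high) pairs."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(lo)
    X = lo + rng.random((n_whales, dim)) * (hi - lo)      # initial population
    fitness = np.array([cost(x) for x in X])
    best = X[np.argmin(fitness)].copy()
    best_f = fitness.min()
    for t in range(n_iter):
        a = 2.0 - 2.0 * t / n_iter                        # decreases from 2 to 0
        for i in range(n_whales):
            r1, r2, p = rng.random(3)
            A, C = 2 * a * r1 - a, 2 * r2
            l = rng.uniform(-1, 1)
            if p < 0.5:
                if abs(A) < 1:                            # encircle the best whale
                    D = np.abs(C * best - X[i])
                    X[i] = best - A * D
                else:                                     # explore around a random whale
                    Xr = X[rng.integers(n_whales)]
                    D = np.abs(C * Xr - X[i])
                    X[i] = Xr - A * D
            else:                                         # spiral bubble-net move
                D = np.abs(best - X[i])
                X[i] = D * np.exp(l) * np.cos(2 * np.pi * l) + best
            X[i] = np.clip(X[i], lo, hi)
            f = cost(X[i])
            if f < best_f:
                best, best_f = X[i].copy(), f
    return best, best_f

# Usage with the intervals stated above (placeholder cost instead of a VISSIM run)
bounds = [(0, 2), (0, 1), (0, 10)]                        # ASSD, APSD, MPSD
best_params, best_cost = woa_minimize(lambda x: np.sum((x - 1) ** 2), bounds)
```

In the study, roughly 190 such VISSIM evaluations were enough for the algorithm to converge, compared with about 1000 runs for the exhaustive interval search mentioned above.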
Simulation Output
VISSIM provides direct output of various kinds of simulation results. In this study, the vehicle delays and trajectories were analyzed to evaluate the operational efficiency and safety of the study area. Vehicle delay data can be generated by measuring vehicle travel times on defined vehicle routes, each specified by a Starting Point and an End Point. For the vehicles that pass through the Starting Point and then the End Point, the travel time delays are calculated automatically. The vehicle delays of the movements of interest were then analyzed to evaluate the operational efficiency of the network. The trajectories of all the simulated vehicles can be generated by VISSIM, and the recorded trajectory data was then analyzed using SSAM to evaluate the vehicle conflicts within the network. For each simulation scenario, several runs with different random seeds were executed, and the output results were averaged for analysis purposes. This setup accounts for the stochastic properties of the simulation model, thus reflecting real-world traffic behavior more realistically.
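The delay measure used throughout the comparisons can be thought of as the average extra travel time between the two measurement points; the toy helper below makes that definition explicit (a generic illustration with assumed inputs; VISSIM reports these quantities directly).

```python
def average_delay(travel_times_s, free_flow_time_s):
    """Average delay per vehicle for one measured movement: observed travel
    time between the Starting Point and the End Point minus the free-flow
    travel time over the same route."""
    delays = [t - free_flow_time_s for t in travel_times_s]
    return sum(delays) / len(delays)

# Example: three vehicles taking 52 s, 61 s, and 48 s on a route whose
# free-flow travel time is 40 s give an average delay of about 13.67 s.
print(round(average_delay([52, 61, 48], 40), 2))
```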
Analyzing Vehicle Conflicts Using SSAM
The vehicle trajectory data collected from VISSIM was used in SSAM to assess the vehicle conflicts detected in the study area. Most studies evaluate traffic safety through two surrogate measures, Time to Collision (TTC) and Postencroachment Time (PET). A value below a commonly accepted threshold of either TTC or PET indicates a higher probability of collision. SSAM is able to automatically estimate the TTC and PET values of each vehicle interaction and thus to record all potential conflicts. In this study, the TTC and PET thresholds were set to 1.5 seconds and 5 seconds, respectively, the values frequently established by previous research studies [20, 33]. The detected conflicts were classified into three types based on predetermined conflict angles, namely, crossing, lane changing, and rear ending. The thresholds of the conflict angles were set to 2 degrees and 45 degrees, as suggested by previous studies [8]. Basically, a detected conflict with a conflict angle of 2 degrees or less is defined as a rear ending conflict; if the conflict angle is between 2 and 45 degrees, it is classified as lane changing; and if the conflict angle is larger than 45 degrees, it is recorded as the crossing type. However, due to the peculiarities of the geometry of each study area, the link information of all the output conflicts, which is also reported by SSAM, was manually checked to properly determine their type. The three types of conflicts were recorded for subsequent comparative safety analysis. A built-in filter of SSAM can be applied to screen out the conflicts caused by each measured movement by reading the corresponding link information. The spots where conflicts were detected can be plotted automatically on the toggled network image by positioning the VISSIM network coordinates. The conflicts of different types can be shown in different shapes or colors on the toggled map to give a visual estimate of the hotspot areas (i.e., conflict frequency and density).
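A simplified reading of these thresholds and angle classes is sketched below; SSAM's internal logic for combining the TTC and PET filters is more detailed, so treat the conflict test here as an assumption.

```python
def classify_conflict(conflict_angle_deg, ttc_s, pet_s, ttc_max=1.5, pet_max=5.0):
    """Classify a simulated interaction using the thresholds described above.
    Returns None when neither surrogate measure falls below its threshold."""
    if ttc_s > ttc_max and pet_s > pet_max:
        return None                      # not counted as a conflict
    if conflict_angle_deg <= 2:
        return "rear ending"
    if conflict_angle_deg <= 45:
        return "lane changing"
    return "crossing"

# Example: a 1.2 s TTC event at an 80-degree angle counts as a crossing conflict
print(classify_conflict(80, ttc_s=1.2, pet_s=3.0))
```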
Summary
The methodology presented in this study introduces a simulation-based approach to evaluate road network safety and efficiency. To apply this methodology, the field traffic conditions are collected, and the detailed information including the field geometry, control strategy, flow, and driving behavior is reviewed. This basic information is then integrated into a VISSIM simulation model. An important model parameter, the vehicle speed distribution, is obtained using a feature tracking program, namely Traffic
Intelligence. The model is calibrated until the output vehicle time gap distribution compares well with the field-observed vehicle gap distribution according to the Chi-square test. The model output vehicle delays are reviewed for network operational efficiency analysis, and the model output vehicle trajectory files are analyzed by SSAM to determine the conflicts within the study area, thus giving the safety level of the site. Figure 4 shows the flow chart of the methodology used in this study for traffic safety and operational efficiency evaluation.
Figure 4: Framework of evaluation procedure.
CASE STUDY
Study Area Description
The study area is a segment of Rte-116, a suburban highway in Lévis, Québec. Evaluations of traffic safety and operations were made at a specific location along the four-lane east-west arterial segment that includes one GP lane and one HOV lane in each direction. The eastbound reserved
lane allows buses and passenger cars with three or more passengers, while the westbound direction has a bus-only lane. The current design of this facility is such that the westbound buses arriving at or departing from the terminal have to travel across the four-lane undivided road. Figure 5 shows the current paths of the buses using the terminal.
Figure 5: Paths of the terminal-bound buses.
The traffic video feeds of vehicles accessing the terminal, the commuter parking lot, and traveling along Rte-116 were collected via GoPro HD video cameras installed on top of extendable masts along the roadway. Cameras 1 and 2 were both installed at the same location with views opposing each other. The orientations of these two cameras were adjusted to capture east-west traffic that interacts with both access points into and out of the bus terminal. Camera 3 was installed in the proximity of the commuter parking lot entry/exit gate, to capture interactions between main road traffic and vehicles traveling to and from the parking lot. The positions of the cameras are shown in Figure 5. The video traffic data of the PM peak hour (4:30 pm~5:30 pm) was used in the final analysis of this study. A probe vehicle was driven several times along the study segments at an arbitrarily selected constant speed. The known speed values were used to calibrate the postprocessing speed-measurement software, Traffic Intelligence. A fixed 88-second cycle of the traffic signal along Rte-116 at the adjacent intersection (i.e., 40 seconds red, 40 seconds green, and 4 seconds yellow) was measured in the field and used in the simulation model of the study area.
The video files from each camera were processed in 5-minute increments to manually determine the distribution of traffic flows during the analysis period. Vehicles were distinguished into four types: passenger cars (on the GP lane), buses, trucks, and reserved lane users, respectively. This definition of the traffic mix was necessary to capture more reliably the vehicle interactions in the traffic simulation model (different vehicle types exhibit different driving behaviors in terms of acceleration, minimum headway, etc.). Tables 1 and 2 show a classification of westbound and eastbound traffic flows along the highway, as well as access/egress of the buses using the terminal during the afternoon peak period. Table 1: Observed traffic flow during the peak hour (4:30 pm~5:30 pm) Time
Average vehicle flows (vehicles/hour) Westbound
4:30 pm~5:30 pm
Eastbound
Car
Bus Truck
HOV
Car Bus Truck
HOV
663
16
7
338 5
36
3
4
Table 2: Access and egress vehicles during the peak hour (4:30 pm~5:30 pm)
Time: 4:30 pm~5:30 pm. Average vehicle flows (vehicles/hour):
Westbound: Bus access 10, Bus egress 14, Parking car access 37, Parking car egress 7
Eastbound: Bus access 0, Bus egress 3, Parking car access 4, Parking car egress 18
Traffic Intelligence was utilized to measure the vehicle speed. Calibration of the video analysis software was performed using various mask pictures to filter the shadows of the moving vehicles until the measured speeds of the probe vehicle were identical to the observed values. The vehicle speed distributions of both westbound and eastbound vehicles were recorded every five minutes and used as simulation input parameters.
Modeling Existing Configuration and Traffic Conditions (Status Quo)
The peak hour traffic was modeled in VISSIM to evaluate traffic safety and operations of the observed arterial segment. Vehicle models used in the simulation are selected by VISSIM automatically. The vehicle characteristics
of the case study are shown in Table 3. To account for the effects of stochastic variation of the model’s parameters, ten different simulations with different random seeds were run, and the average values were used in the analysis.
Table 3: Vehicle characteristics
The average vehicle delay (excluding signal waiting time at the intersection) was measured for three types of movements, using the vehicle travel time measurement tool. Movement 1 identifies the westbound traffic on the GP lane. Movement 2 is associated with westbound buses entering the terminal (i.e., buses merging from the HOV lane into the GP lane and then crossing the two eastbound lanes). Movement 3 represents westbound buses leaving the terminal (i.e., buses that cross all four lanes to enter the highway). Vehicle trajectory files were also generated for conflict analysis. In addition, to evaluate the impact of the expected increase in traffic flow on traffic operations (i.e., average vehicle delay) and safety (i.e., conflict frequency), the same simulation model was used to evaluate similar scenarios, assuming the traffic volume increases in the future by 10%, 20%, and 30% from the current values. Figure 6 represents a snapshot of the VISSIM simulation model using the existing geometric alignment and traffic operations conditions.
Figure 6: The status quo network modeled in VISSIM.
Simulations of Alternative Geometry/Control Designs
The main concern related to traffic safety at the investigated study area pertains to the placement of the reserved lanes as the outside lanes. This configuration requires crossing multiple lanes when left turns are needed, and a high occurrence of vehicle interactions was observed, especially during congested traffic conditions. Two alternative designs have been tested to evaluate their potential to mitigate traffic safety and operations issues. Figure 7 shows the VISSIM network layout of the first alternative design. In this model, westbound buses were prohibited from entering or exiting the terminal by crossing the highway directly. Instead, an adjacent roadway segment was inserted along the south side of the bus terminal, directly connected to the minor road. To serve the terminal-bound buses, a ten-second left-turn signal phase was provided at the intersection on the main road. Similarly, for each traffic demand alternative (i.e., current status, 10%, 20%, and 30% increments of vehicular traffic volume), the collected peak hour vehicle flows and speed distributions were used to model the network using ten simulation random seeds. The individual vehicle trajectories and delay measurements of the same movements evaluated for the status quo configuration were collected and used for comparison analysis. Figure 8 shows the VISSIM network layout of the second alternative design. In this model, a loop detector that controls a signal set was added to the existing network. This system was used to control the egress of westbound buses as they leave the terminal. The add-on signal control module VAP was used to program the signal timing. The detector was placed near the exit of the bus terminal. As long as buses are not in the proximity of the sensor, the signal indicates green for the main road to allow east-west traffic and red for the bus exit to prevent the egressing buses from traveling across the road directly. When buses are detected at the terminal, the exit signal turns green for them and red for traffic on the main road, which allows for protected turns. The red signal on the main road lasts for 10 seconds from the last bus detected and then turns back to green until the next detection. The same vehicle hourly flows previously processed were used in this simulation scenario, and the same ten simulation random seeds were applied. The delay measurements of the same types of movements and the trajectory data were collected for comparative analysis.
Figure 7: VISSIM network of alternative road geometry design.
Figure 8: VISSIM network of alternative control design.
Surrogate Safety Measures of Vehicle Conflicts
SSAM was applied to assess the vehicle conflicts detected in the study area for safety analysis. A built-in filter of SSAM was applied to screen out the conflicts caused by each measured movement by reading the corresponding link information. The spots where conflicts were detected were plotted automatically on the toggled network image by utilizing the VISSIM network coordinates. The conflicts of different types were shown in different shapes on the toggled map. Figure 9 shows the spatial distribution of conflicts caused by the measured movements near the bus terminal, plotted on the original network.
Figure 9: Conflicts near the bus terminal plotted on the original network.
Comparison Analysis of Safety and Operation
Figures 10 and 11 represent the impact of different traffic volumes on traffic operations (delay) and safety (conflicts).
Figure 10: Effects of increasing traffic flow on average delay per vehicle.
Figure 11: Sensitivity analysis of conflicts distribution (current configuration).
As intuitively expected, more traffic demand leads to increased average delay. The results also show that, of the three types of vehicle movements analyzed, movements 2 and 3 (i.e., those associated with buses entering and leaving the terminal) are affected by significantly higher delay than the vehicles moving along the east-west roadway. This is explained by the fact that buses have to make left turns from/into the arterial, and consequently they do not have the default right of way. In addition, the traffic safety analysis (i.e., evaluation of vehicular interactions through the SSAM tool) shows that, for all levels of traffic demand, the majority (more than 85%) of vehicular
conflicts were crossing conflicts associated with the same movements of buses that enter or leave the terminal facility. Moreover, lane-changing conflicts were observed between buses moving from the reserved lane into the GP lane to engage in left-turning maneuvers towards the terminal. Figure 12 shows the effects of different traffic volumes on traffic operations (magnitude of delay) and safety (frequency of conflicts) when the first alternative scenario was used. As expected, by including a separation median between the two directions of traffic, all vehicular conflicts associated with left-turn movements into and out of the terminal are eliminated. The sensitivity analysis demonstrates that traffic operations are not negatively impacted by this design. It can be seen that there is a minor positive effect on the average vehicular delay for movement 1 (vehicles traveling westbound on Rte-116), but there is a significant positive effect on the average delay of buses accessing the terminal (i.e., a reduction in delay of about 85%). However, this alternative scenario brings a trade-off: the movements of buses exiting the terminal are hindered for most traffic flow levels. The additional delay encountered by buses leaving the terminal is due to the fact that, for this design, the westbound egress buses must use the nearby intersection, and the traffic signal timing was not optimized to accommodate left-turning buses from the minor street.
Figure 12: Effects of first alternative design on the average delay (separation median).
The results for the second alternative design (i.e., controlling the access/egress of buses for Movements 2 and 3 via a bus-triggered traffic control signal, in order to reduce the vehicle interactions with the buses) are shown in Figure 13. It can be seen that this alternative design considerably reduces the delay of buses in and out of the terminal (Movements 2 and 3), while it
increases by less than 17% the delay of vehicles traveling westbound along the arterial (Movement 1).
Figure 13: Effects of second alternative design on the average delay (traffic control).
More importantly, the vehicular conflict analysis of these results shows the elimination of the crossing conflicts (Movements 2 and 3) related to buses accessing/leaving the terminal by turning left across the HOV and GP lanes. In addition, this design has no impact on the low conflict occurrence of Movement 1 (vehicles moving westbound on the arterial). Several aspects of the proposed alternative designs are discussed at the end of this section. The delay of the traffic flow moving westbound on the arterial during the peak period was compared across all three simulation scenarios (i.e., current design, separation barrier, and traffic control alternative). It was found that the traffic control alternative leads to the most negative impact on the vehicular delay. In addition, the conflict occurrence between the current design and the proposed traffic control design is not significantly different; however, because of braking at the red light, the rear ending conflicts are expected to be more severe. On the other hand, rerouting buses through the intersection via the minor street seems to be the best option, because it completely eliminates all conflicts of left-turning vehicles, while its impact on traffic operations might not be significant, since it can be mitigated by optimizing the traffic signal timing plan at the intersection. To conclude, the existing geometric and traffic signal configurations show that there is a high occurrence of vehicular conflicts for left-turning buses approaching the terminal. It can be seen from the results above that, by using
the alternative designs, these types of conflicts are eliminated. In addition, the proposed alterations to the existing alignment provide benefits for traffic operations because they significantly reduce the average vehicular delay. However, when traffic signals are used to provide protected left turns for buses rerouted through the adjacent intersection, an additional analysis of signal delay and optimization is necessary. Similarly, the analysis of the measured Movement 3 (i.e., westbound buses leaving the terminal) identifies a large number of crossing conflicts with the east-west traffic on the main arterial. Elimination of these conflicts can be achieved if this movement is protected either through a traffic signal sensitive to the buses present at the terminal exit or by using the barrier-separated geometry that reroutes the buses via the adjacent intersection. The results indicate that the network with the alternative control design is the best for departing buses (i.e., the delay is the smallest). As expected, the sensitivity analysis shows that an increased main arterial traffic volume leads to negative effects on the conflict frequency and the average vehicular delay regardless of the design used, while the alternative designs provide elimination of, or a significant reduction in, conflicts.
CONCLUDING REMARKS
This study proposed two general alternative geometry and control designs to improve the operation and safety of High Occupancy Vehicle (HOV) lanes near bus terminals and parking lots. A VISSIM simulation model was created using the observed field geometry, control strategy, and vehicle flows, and the vehicle priority rules and driving behaviors were then calibrated to reflect the corresponding parameters observed in the field. An important model parameter, the vehicle speed distribution, was measured by a feature-based tracking technique using an open-source program, namely Traffic Intelligence. The model was calibrated using a metaheuristic optimization algorithm (i.e., the Whale Optimization Algorithm) with respect to the field-measured vehicle headway distributions. The results showed that this algorithm converged to the optimal parameters faster than an exhaustive search over the whole parameter intervals. The output delay data was used for operational efficiency analysis, and the output trajectory data was analyzed by SSAM to determine the number of vehicle conflicts within the study area. This procedure was applied to test the safety and operational efficiency of an HOV road segment in Lévis, Québec. The peak hour safety and operational traffic conditions of the status quo and of two alternative designs (i.e.,
geometry and control designs) were analyzed. The results indicate that the existing network configuration exhibits significant safety issues due to the crossing conflicts along the path of buses approaching the terminal across the four-lane arterial road. It was shown that one of the investigated alternative designs may enable the terminal-bound buses to travel on a different path, efficiently eliminating the critical vehicular conflicts. In addition, it was shown that the alternative control design can be used to reduce the bus delay by giving priority to public transit. It is expected that this methodology can be successfully applied to other similar reserved lane facilities in the vicinity of bus stations and parking lots.
ACKNOWLEDGMENTS This study was funded by Ministère des Transports du Québec (MTQ) through research Project R706.1. The authors would like to thank Matin Giahi Foomani and Gia Hung Lieu from Concordia University for the help in data collection.
REFERENCES
1. C. Fuhs and J. Obenberger, “Development of high-occupancy vehicle facilities: Review of national trends,” Transportation Research Record, no. 1781, pp. 1–9, 2002.
2. M. Menendez and C. F. Daganzo, “Effects of HOV lanes on freeway bottlenecks,” Transportation Research Part B: Methodological, vol. 41, no. 8, pp. 809–822, 2007.
3. A. Guin, M. Hunter, and R. Guensler, “Analysis of reduction in effective capacities of high-occupancy vehicle lanes related to traffic behavior,” Transportation Research Record, no. 2065, pp. 47–53, 2008.
4. J. I. Bauer, C. A. McKellar, J. M. Bunker, and J. Wikman, “High occupancy vehicle lanes: an overall evaluation including Brisbane case studies,” in Proceedings of the 2005 AITPM National Conference, J. Douglas, Ed., pp. 229–244.
5. T. F. Golob, W. W. Recker, and D. W. Levine, “Safety of high-occupancy vehicle lanes without physical separation,” Journal of Transportation Engineering, vol. 115, no. 6, pp. 591–607, 1989.
6. K. Jang, K. Chung, D. R. Ragland, and C.-Y. Chan, “Safety performance of high-occupancy-vehicle facilities,” Transportation Research Record, no. 2099, pp. 132–140, 2009.
7. X. Qi, G. Wu, K. Boriboonsomsin, and M. J. Barth, “Empirical study of lane-changing characteristics on high-occupancy-vehicle facilities with different types of access control based on aerial survey data,” Journal of Transportation Engineering, vol. 142, no. 1, Article ID 04015034, 2016.
8. H. Tao, M. G. Foomani, and C. Alecsandru, “A two-step microscopic traffic safety evaluation model of reserved lanes facilities: an arterial case study,” in Transportation Research Board 94th Annual Meeting, No. 15-3635, 2015.
9. V. Thamizh Arasan and P. Vedagiri, “Microsimulation study of the effect of exclusive bus lanes on heterogeneous traffic flow,” Journal of Urban Planning and Development, vol. 136, no. 1, Article ID 009001QUP, pp. 50–58, 2010.
10. B. N. Persaud, R. A. Retting, P. E. Garder, and D. Lord, Observational Before-After Study of the Safety Effect of US, Transportation Research Board, National Research Council, 2001.
236
Safety Science and Technology
11. D. Gettman and L. Head, “Surrogate safety measures from traffic simulation models,” Transportation Research Record, no. 1840, pp. 104–115, 2003. 12. S. Srinivasan, P. Haas, P. Alluri, A. Gan, and J. Bonneson, “Crash prediction method for freeway segments 2 with high occupancy vehicle (HOV) lanes 3,” in Transportation Research Board 95th Annual Meeting (No. 16-6333), 2016. 13. R. Elvik, “The predictive validity of empirical Bayes estimates of road safety,” Accident Analysis & Prevention, vol. 40, no. 6, pp. 1964–1969, 2008. 14. D. Gettman, L. Pu, T. Sayed, and S. G. Shelby, Surrogate Safety Assessment Model and Validation: Final Report, 2008, No. FHWAHRT-08-051. 15. A. Laureshyn, Å. Svensson, and C. Hydén, “Evaluation of traffic safety, based on micro-level behavioural data: theoretical framework and first implementation,” Accident Analysis & Prevention, vol. 42, no. 6, pp. 1637–1646, 2010. 16. W. Young, A. Sobhani, M. G. Lenné, and M. Sarvi, “Simulation of safety: A review of the state of the art in road safety simulation modelling,” Accident Analysis & Prevention, vol. 66, pp. 89–103, 2014. 17. J. Archer, “Developing the potential of micro-simulation modelling for traffic safety assessment,” in Proceedings of the 13th ICTCT Workshop, vol. 44, 2000. 18. J. Archer, Methods for the Assessment and Prediction of Traffic Safety at Urban Intersections and Their Application in Micro-Simulation Modelling, Royal Institute of Technology, 2004. 19. A. Sobhani, W. Young, and M. Sarvi, “A simulation based approach to assess the safety performance of road locations,” Transportation Research Part C: Emerging Technologies, vol. 32, pp. 144–158, 2013. 20. C. Hydén, The Development of a Method for Traffic Safety Evaluation: The Swedish Traffic Conflicts Technique, Bulletin Lund Institute of Technology, 1987. 21. S. J. Older and B. R. Spicer, “Traffic conflicts—a development in accident research,” Human Factors: The Journal of Human Factors and Ergonomics Society, vol. 18, no. 4, pp. 335–350, 1976.
Microscopic Simulation-Based High Occupancy Vehicle Lane Safety ...
237
22. F. H. Amundsen and C. Hyden, “Proceedings of first workshop on traffic conflicts,” in Proceedings of Workshop on Traffic Conflicts, TTI, Oslo, Norway, 1977. 23. S. R. Perkins and J. L. Harris, “Traffic conflict characteristics-accident potential at intersections,” in Proceedings of the Traffic Safety and presented at the 47th Annual Meeting, pp. 35–43, Highway Research Board, 1968. 24. M. R. Parker and C. V. Zegeer, Traffic Conflict Techniques for Safety and Operations: Engineers Guide, 1989. 25. E. Hauer and P. Garder, “Research into the validity of the traffic conflicts technique,” Accident Analysis & Prevention, vol. 18, no. 6, pp. 471–481, 1986. 26. K. El-Basyouny and T. Sayed, “Safety performance functions using traffic conflicts,” Safety Science, vol. 51, no. 1, pp. 160–164, 2013. 27. R. Fan, H. Yu, P. Liu, and W. Wang, “Using VISSIM simulation model and surrogate safety assessment model for estimating field measured traffic conflicts at freeway merge areas,” IET Intelligent Transport Systems, vol. 7, no. 1, pp. 68–77, 2013. 28. F. Huang, P. Liu, H. Yu, and W. Wang, “Identifying if VISSIM simulation model and SSAM provide reasonable estimates for field measured traffic conflicts at signalized intersections,” Accident Analysis & Prevention, vol. 50, pp. 1014–1024, 2013. 29. M. Essa and T. Sayed, “Simulated traffic conflicts: do they accurately represent field-measured conflicts?” Transportation Research Record, vol. 2514, pp. 48–57, 2015. 30. PTV VISSIM 6 User Manual, Karlsrule, Germany, 2013. 31. S. Jackson, L. Miranda-Moreno, P. St-Aubin, and N. Saunier, “Flexible, mobile video camera system and open source video analysis software for road safety and behavioral analysis,” Transportation Research Record, no. 2365, pp. 90–98, 2013. 32. S. Mirjalili and A. Lewis, “The whale optimization algorithm,” Advances in Engineering Software, vol. 95, pp. 51–67, 2016. 33. G. R. Brown, “Traffic conflicts for road user safety studies,” Canadian Journal of Civil Engineering, vol. 21, no. 1, pp. 1–15, 1994.
SECTION 3: SAFETY IN TRANSPORT AND VEHICLES
Chapter 11
Safety of Autonomous Vehicles
Jun Wang1, Li Zhang1, Yanjun Huang2, and Jian Zhao2
1 Department of Civil and Environmental Engineering, Mississippi State University, Starkville, MS 39762, USA
2 Department of Mechanical and Mechatronics Engineering, University of Waterloo, 200 University Avenue West, Waterloo, ON N2L 3G1, Canada
ABSTRACT
Autonomous vehicles (AVs) are regarded as the ultimate solution to future automotive engineering; however, safety still remains the key challenge for their development and commercialization. Therefore, a comprehensive understanding of the development status of AVs and of reported accidents is becoming urgent. In this article, the levels of automation are reviewed according to the role of the automated system in the autonomous driving process, which will affect the frequency of disengagements and accidents when driving in autonomous modes. Additionally, the public on-road AV accident reports are statistically analyzed. The results show that over 3.7 million miles have been tested for AVs by various manufacturers from 2014 to 2018. The AVs are frequently taken over by drivers whenever they deem it necessary, and the disengagement frequency varies significantly, from 2 × 10−4 to 3 disengagements per mile, for different manufacturers. In addition, 128 accidents in 2014–2018 are studied, and about 63% of the total accidents occur in autonomous mode. A small fraction of the total accidents (∼6%) is directly related to the AVs, while 94% of the accidents are passively initiated by the other parties, including pedestrians, cyclists, motorcycles, and conventional vehicles. These safety risks identified during on-road testing, represented by disengagements and actual accidents, indicate that passive accidents caused by other road users are the majority. The capability of AVs to alert to and avoid safety risks caused by the other parties and to make safe decisions to prevent possible fatal accidents would significantly improve the safety of AVs. Practical applications: this literature review summarizes the safety-related issues for AVs through a theoretical analysis of the AV systems and a statistical investigation of the disengagement and accident reports from on-road testing, and the findings will help inform future research efforts for AV development.
Citation: Jun Wang, Li Zhang, Yanjun Huang, Jian Zhao, “Safety of Autonomous Vehicles”, Journal of Advanced Transportation, vol. 2020, Article ID 8867757, 13 pages, 2020. https://doi.org/10.1155/2020/8867757.
Copyright: © 2020 by Authors. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
INTRODUCTION
With the demand for reducing traffic accidents, congestion, energy consumption, and emissions, autonomous driving technology has been recognized as one of the promising solutions to these critical social and environmental issues. An autonomous vehicle (AV, i.e., an automated or self-driving vehicle) is equipped with advanced technologies that assist a human driver or control the vehicle independently, so that human intervention may not be required [1, 2]. The control decisions, such as accelerating, decelerating, changing lanes, and parking, can be made by a human driver or an autonomous system, depending on the automation level of the vehicle and the perception results of the surrounding environment (e.g., pedestrians, cyclists, other vehicles, traffic signals, and school zones) [2–5]. Vehicle automation can be divided into several levels, e.g., no automation, partial automation, high automation, or full automation, according to the involvement of the human driver or the automated system in monitoring the surrounding environment and controlling the vehicle.
The autonomous technology employed in transportation systems brings opportunities to mitigate or even solve transportation-related economic and environmental issues, and therefore, autonomous vehicles have been actively studied recently [6]. AV techniques are capable of changing the traditional means of transportation by (i) improving road safety, where human errors account for 94% of the total accidents [7], (ii) enhancing the commute experience, by working or being entertained instead of driving and by shortening the commute time when the traffic path is planned [8, 9] or the parking task is conducted autonomously [10, 11], and (iii) improving mobility for everyone, which enables differently abled people to access transportation and improve their independence [2, 12]. In 2011, more than 5.3 million vehicle crashes were reported in the United States, leading to approximately 2.2 million injuries, 32 thousand fatalities, and billions of dollars in losses [1]. According to [13], crashes caused by human factors, including speeding, distracted driving, alcohol, and other behaviors, account for 93% of the total crashes. By minimizing the involvement of human operations, AVs have the potential to significantly reduce car crashes. According to the Insurance Institute for Highway Safety [1], partially autonomous technology such as forward collision and lane departure warning systems, side view assist, and adaptive headlights will potentially prevent or mitigate crashes, and the reduction in injuries and fatalities can be up to 33%. When a human operator is not required to operate a vehicle, it would enable the blind, the disabled, and those too young to drive to travel, enhancing their independence, social connection, and life experience [14, 15]. AVs would also reduce the need for mass transit or paratransit agencies, which saves costs borne by the taxpayer and improves social welfare. The owner and the community will also benefit from the development of autonomous technology through (i) the potential for fuel savings in terms of better fleet management [16–19] to avoid congestion [9, 20] and more manageable parking arrangements [10], and (ii) the potential for relieving people from the stress of commute driving, perhaps even taking a nap on the way to work [21]. The substantial potential to reduce congestion would benefit not only AV drivers but also other drivers. Even though a significantly increased number of AV users may potentially increase congestion [13], traffic conditions may also be improved by optimized vehicle operation and reduced crashes and delays [22, 23]. With an improved transportation system, AV techniques have a significant potential to save energy and reduce emissions [17, 24]. The energy-saving benefits may result from smoother accelerating and decelerating in comparison to a human driver, better fleet management by lowering peak
speeds and higher effective speeds, reduced travel time, and lighter vehicle designs because of fewer crashes [1]. If lighter vehicles can be enabled by autonomous technology, the use of electric vehicles can be promoted due to the improved drivable range [1]. Accordingly, emissions in the whole transportation ecosystem can be reduced. Studies also indicate that advanced lateral control schemes for AVs can improve pavement sustainability [25, 26]. The significant potential benefits brought by autonomous technology have driven the development of AVs over the past four decades. From the 1980s to 2003, AV studies were mainly led by universities, with a focus on two technical pathways: infrastructure-centered and vehicle-centered technology developments. The former requires advanced highway infrastructure systems to guide the vehicles, whereas the latter does not. Between 2003 and 2007, the U.S. Defense Advanced Research Projects Agency (DARPA) led the development of AV techniques in both rural and urban areas. After 2007, private companies, such as Google, Audi, Toyota, and Nissan, continued this technology development because of the increasing demand for AV technologies [1]. Recently, the road testing of such technologies has been booming [27]. In fact, various features have been widely used in modern vehicles to assist human drivers, including lane-keeping, collision avoidance, automatic braking, adaptive cruise control (ACC), and onboard navigation [28]. In recent years, many manufacturers as well as high-tech companies have joined this AV competition. Audi, BMW, Mercedes-Benz, Volkswagen, Volvo, Ford, General Motors, Toyota, Nissan, as well as Google, Baidu, and other research institutes, have begun testing on- and off-road AVs [13, 28]. Even though AVs have been substantially improved, fully autonomous vehicles are still not ready for commercialization. The obstacles mainly come from safety concerns. Moody et al.'s studies indicated that young, highly educated, high-income males are the most optimistic about AV safety [29], while western European countries are more pessimistic about the safety in comparison with Asian countries [30]. They claim that optimism about autonomous technology among risk-taking people in developing countries may promote the global development of AVs. Lee et al.'s studies indicated that safety risks can affect customers' intention to use autonomous vehicles [31]. In addition, a 'safe' AV should be able to obey traffic laws and to avoid road hazards automatically and effectively [21]. It should be noted that for a fully automated vehicle, human factors in the vehicle-human interface are one of the most significant concerns [2]. Regulations, in which the role of
the human driver is defined, can be changed depending on the progress of AV technology development. In turn, the levels of automation and their maturity can also affect regulation-making [32], e.g., whether a human driver should be responsible for monitoring the surrounding environment throughout the autonomous driving mode or should immediately take over control when an AV failure occurs [13]. In other words, AV safety can be affected by various social and technical factors, including automation level definitions, regulation-making, the nature of the vehicles, road and traffic conditions, and even weather conditions. Therefore, a comprehensive understanding of the definition of automation levels for vehicles, the types of potential and reported accidents, and the current status of on-road testing will be beneficial for AV technology development. It is thus urgent to conduct a careful investigation of the available data on AV-related accidents and to anticipate potential accidents as AV technology moves forward to higher automation levels. Great efforts have been devoted to AV technology development; however, an updated statistical perspective on the safety issues is missing from the literature. The safety issues for AVs have been reported individually in the literature, and a critical analysis of their status and causes would be beneficial for the further design and development of AVs. This is of great significance for the related personnel to understand system failures and their possible causes. The objectives of the study are, therefore, to systematically analyze the safety issues related to autonomous technology in vehicular applications. The levels of automation in vehicles are reviewed in the next section, and the types of accidents and their potential causes are then comprehensively analyzed. The current status of on-road testing and accidents is subsequently investigated, and finally, the opportunities and challenges for AV safety studies are discussed.
LEVELS OF AUTOMATION
The definition of AVs is crucial for regulation makers to minimize the impact of this technology on traditional road users, such as other vehicles, pedestrians, bicyclists, and even construction workers. As aforementioned, the automation level of a vehicle depends on the complexity of the autonomous technology applied, the perception range of the environment, and the degree to which a human driver or the vehicle system is involved in driving decisions, which is closely related to AV safety. The definition
of automation levels from various organizations is thus summarized and compared in this section. The traditional definition of automation levels was reported by Sheridan and Verplank [33] as early as 1987 and later modified by Parasuraman et al. [34] in 2000. Ten levels of automation are defined based on the roles of the human operator and the vehicle system in the driving process. Level 1 means no automation is involved, and the human makes all decisions and takes all actions. In Levels 2 to 4, the system can suggest a complete set of alternative decisions or action plans, but the human supervisor decides whether to execute the suggested actions. From Level 5, the system becomes capable of executing a decision with the human operator's approval. At Level 6, the system allows the human driver to react within a certain time span before the automatic action. At Level 7, after an automatic action, the system informs the human supervisor, whereas at Level 8, the system does not inform the human supervisor unless asked. At Level 9, the system decides whether the human supervisor will be informed after an automatic action. Level 10 means full automation, completely ignoring human factors. The details of the ten levels of automation can be found elsewhere [33, 34]. In aerospace engineering, the levels of automation vary; generally, six levels are defined, known as the Pilot Authorisation and Control of Tasks (PACT) framework [35]. This automation system is labeled from Level 0 to 5. Level 0 denotes no computer autonomy, and Level 5 means the system can be fully automatic but can still be interrupted by a human pilot. In addition to human-commanded and automatic modes, the PACT framework also recommends four assisted modes depending on the operational relations between human pilots and the systems. The details of the six levels can be found in [35]. In automotive engineering, the U.S. National Highway Traffic Safety Administration (NHTSA) defined five levels of automation [36]. In this system, the automation levels are divided into five categories, numbered from 0 to 4. Level 0 represents no automation, where the drivers completely control the vehicles. The highest level, Level 4, represents fully self-driving automation, where the vehicle is able to monitor external conditions and perform all driving tasks. It can be seen that most of the current autonomous vehicle development activities can be classified into Level 3, limited self-driving automation, where the drivers are able to take over the driving in some instances. Recently, NHTSA has adopted a more widely used definition of AVs based on the Society of Automotive
Engineers (SAE) [2], which is regularly updated [37]. SAE defines six levels of automation for vehicles, from 0 (no automation) to 5 (full driving automation), based on the extent to which the human factor is required by the automation system. This definition is widely adopted by automobile manufacturers, regulators, and policymakers [2, 37–39]. These automation levels are distinguished by the role of the human driver and the automation system in the following driving tasks: (i) execution of steering and throttle control, (ii) monitoring of the driving environment, (iii) fallback of the dynamic driving task (DDT), and (iv) system capability across various autonomous driving modes. According to the role of a human driver in the DDT, Levels 0–2 rely on the human driver to perform part of or all of the DDT, while Levels 3–5 represent conditional, high, and full driving automation, respectively, meaning that the system can perform all of the DDT while engaged. This detailed definition of the levels of vehicle automation is widely used for current AV development activities. The six levels of driving automation defined by SAE are as follows [2, 37, 40]:
(i) Level 0 (No Automation). All driving tasks are accomplished by the human operator.
(ii) Level 1 (Driver Assistance). The human operator controls the vehicle, but driving is assisted by the automation system.
(iii) Level 2 (Partial Driving Automation). Combined automated functions are applied in the vehicle, but the human operator still monitors the environment and controls the driving process.
(iv) Level 3 (Conditional Driving Automation). The human operator must be prepared to operate the vehicle anytime when necessary.
(v) Level 4 (High Driving Automation). The automation system is capable of driving automatically under given conditions, and the human driver may be able to operate the vehicle.
(vi) Level 5 (Full Driving Automation). The automation system is capable of driving automatically under all conditions, and the human driver may be able to control the vehicle.
It can be seen from the various definitions of automation levels by different organizations that human operators and vehicle systems can be involved in the driving process to different degrees. This implies that the safety concerns for partially, highly, and fully autonomous vehicles can vary significantly. When AVs are operated in no automation, partial automation, or high automation modes, the interaction between human operators and machines can be a significant challenge for AV safety; when AVs are operated in fully automated modes, the reliability of the software and hardware becomes a vital issue. In other words, as more autonomous technology is applied in vehicles, the complexity of the autonomous system
grows, which brings challenges for system stability, reliability, and safety. Therefore, a theoretical analysis of the potential AV errors is needed to understand the current AV safety status and to predict future safety levels.
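As a compact, hypothetical illustration of the SAE taxonomy summarized above, the following Python sketch encodes which party performs the dynamic driving task and which party serves as the fallback at each level; the field names and wording are assumptions made for illustration, not the SAE standard text.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SaeLevel:
    level: int
    name: str
    performs_ddt: str   # who executes the dynamic driving task (DDT)
    fallback: str       # who is expected to respond when the system reaches its limits

SAE_LEVELS = [
    SaeLevel(0, "No Automation",                  "driver",                    "driver"),
    SaeLevel(1, "Driver Assistance",              "driver with system assist", "driver"),
    SaeLevel(2, "Partial Driving Automation",     "system (driver monitors)",  "driver"),
    SaeLevel(3, "Conditional Driving Automation", "system",                    "driver (on request)"),
    SaeLevel(4, "High Driving Automation",        "system",                    "system"),
    SaeLevel(5, "Full Driving Automation",        "system",                    "system"),
]

def system_handles_fallback(level: int) -> bool:
    """Levels 4-5 do not rely on the human driver as the fallback."""
    return SAE_LEVELS[level].fallback == "system"

print([lvl.name for lvl in SAE_LEVELS if system_handles_fallback(lvl.level)])
```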
TYPES OF ERRORS FOR AUTONOMOUS VEHICLES
As more autonomous techniques are employed, different types of errors may be generated. If such errors are not properly handled, they may lead to critical safety issues. A systematic analysis of the different types of errors or accidents for AV technology will be helpful for understanding the current status of AV safety. It should be noted that the accidents reported in the literature for AVs are far fewer than those for traditional vehicles. However, this does not necessarily mean that current AVs are safer than human-controlled vehicles. Since AV technology is still at an early stage of commercialization and far from fully autonomous driving, more road tests should be done, and the accident database may then show a different trend. AV safety is determined by the reliability of the AV architecture and its associated hardware and software. However, the AV architecture is highly dependent on the level of automation, such that AV safety may show different patterns at different stages. Even at the same automation level, the architecture of AVs may also vary between studies. Figure 1 shows the general architecture and major components of AVs. A typical AV is composed of a sensor-based perception system, an algorithm-based decision system, and an actuator-based actuation system, as well as the interconnections between these systems [41, 42]. Ideally, all components of the AV should function well such that AV safety can be ensured.
Figure 1: A typical autonomous vehicle system architecture.
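To make the three-layer architecture of Figure 1 concrete, the following minimal Python sketch wires a perception, decision, and action layer into a single control step; all class names, thresholds, and sensor fields are illustrative assumptions, not components of an actual AV stack.

```python
from dataclasses import dataclass

@dataclass
class Observation:          # output of the perception layer
    obstacle_distance_m: float
    ego_speed_mps: float

@dataclass
class Decision:             # output of the decision layer
    target_speed_mps: float

class PerceptionLayer:
    def sense(self, raw_sensor_data: dict) -> Observation:
        # A real AV would fuse LIDAR, camera, radar, GPS, etc.; here the
        # raw readings are simply passed through.
        return Observation(
            obstacle_distance_m=raw_sensor_data["lidar_range"],
            ego_speed_mps=raw_sensor_data["wheel_speed"],
        )

class DecisionLayer:
    def plan(self, obs: Observation) -> Decision:
        # Toy rule: slow down when an obstacle is close, otherwise cruise.
        if obs.obstacle_distance_m < 20.0:
            return Decision(target_speed_mps=max(0.0, obs.ego_speed_mps - 3.0))
        return Decision(target_speed_mps=13.9)   # roughly 50 km/h

class ActionLayer:
    def actuate(self, decision: Decision, obs: Observation) -> float:
        # Proportional throttle/brake command; the feedback path comes from
        # the perception layer's speed estimate.
        return 0.5 * (decision.target_speed_mps - obs.ego_speed_mps)

# One control step of the pipeline
perception, decision, action = PerceptionLayer(), DecisionLayer(), ActionLayer()
obs = perception.sense({"lidar_range": 15.0, "wheel_speed": 12.0})
cmd = action.actuate(decision.plan(obs), obs)
print(f"throttle/brake command: {cmd:+.2f}")
```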
Accidents Caused by the Autonomous Vehicle
Safety issues or accidents of AVs are highly related to the errors committed by AVs at various automation levels. Generally, such errors can be categorized according to the abovementioned architecture.
Perception Error
The perception layer is responsible for acquiring data from multiple sensing devices to perceive environmental conditions for real-time decision making [41, 43]. The development of AVs is primarily determined by the complexity, reliability, suitability, and maturity of sensing technology [43]. The sensors for environment perception include, but are not limited to, light detection and ranging (LIDAR) sensors, cameras, radars, ultrasonic sensors, contact sensors, and the global positioning system (GPS). The function and ability of various sensing technologies can be found elsewhere [44]. It should be noted that any errors in the perception of the status, location, and movement of other road users, traffic signals, and other hazards may raise safety concerns for AVs. Figure 2 summarizes the past and potential future evolution of AV technology based on the specific sensing technologies applied to vehicle systems; the information is obtained from [43, 45–55]. At the end of the 20th century, proprioceptive sensors, including wheel sensors, inertial sensors, and odometry, were widely employed in vehicle systems for better vehicle dynamics stabilization, to achieve the functions of the traction control system, antilock braking system, electronic stability control, antiskid control, and electronic stability program. In the first decade of the 21st century, many efforts were devoted to information, warning, and comfort during the driving process with the help of exteroceptive sensors such as sonar, radar, lidar, vision sensors, infrared sensors, and the global navigation satellite system. These vehicles enable the functions of navigation, parking assistance, adaptive cruise control, lane departure warning, and night vision [56]. In the past decade, sensor networks installed in both vehicle and road systems have been adopted in the modern transportation system for the purpose of automated and cooperative driving [46]. Advanced autonomous functions will be enabled, including collision avoidance and mitigation and automated driving, such that drivers will eventually be released from the driving process. Depending on the level of vehicle automation, the perceived data may also come from the communication between the AVs
and the corresponding infrastructure [57, 58], other vehicles [44, 59], the Internet [60], and the cloud [60].
Figure 2: Past and potential future evolution of autonomous vehicle technology.
Hardware, software, and communication are the three major sources of perception errors. The perception system heavily relies on sensing technology; therefore, perception errors may come from the hardware, including the sensors. For example, the degradation and failure of sensors may cause severe perception errors, confuse the decision system, and lead to dangerous driving behaviors. Therefore, reliable and fault-tolerant sensing technology will be a potential solution to such issues. In addition, perception errors may also result from the malfunction of software, and this type of error would mislead the decision and action layers, which may either fail the mission tasks or cause safety problems [57]. Communication errors will become important when AVs approach full automation levels. Such errors may arise from the communication between the AVs and the corresponding infrastructure [57], other road users [44], and the Internet [60]. Interpersonal communication is a vital component of the modern transportation system [61]. Road users, including drivers, pedestrians, bicyclists, and construction workers, communicate with each other to coordinate movements and ensure road safety, which is also a basic requirement for AVs [62]. The communication methods include gestures, facial expressions, and vehicular devices, and the comprehension of these messages can be affected by a variety of factors including culture, context, and experience; these factors are also key challenges for AV technology [61].
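As a toy illustration of the fault-tolerant sensing mentioned above, the following sketch cross-checks redundant range estimates and flags a sensor whose reading deviates from the consensus; the sensor names, values, and tolerance are assumed for illustration only.

```python
from statistics import median

def cross_check_ranges(readings, tolerance_m=2.0):
    """Flag sensors whose range estimate deviates too far from the
    consensus (median) of the redundant sensors."""
    consensus = median(readings.values())
    suspects = {name: r for name, r in readings.items()
                if abs(r - consensus) > tolerance_m}
    return consensus, suspects

# Example: the camera-based estimate disagrees with LIDAR and radar.
readings = {"lidar": 24.8, "radar": 25.3, "camera": 41.0}
consensus, suspects = cross_check_ranges(readings)
print(f"consensus range: {consensus:.1f} m, suspect sensors: {list(suspects)}")
```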
Decision Error
The decision layer interprets all processed data from the perception layer, makes decisions, and generates the information required by the action layer [41, 63]. Situational awareness serves as the input to the decision-making system for short-term and long-term planning. Short-term planning involves trajectory generation, obstacle avoidance, and event and maneuver management, while long-term planning involves mission planning and route planning [57, 64–66]. Decision errors mainly come from the system or from human factors. An efficient AV system will only take over the driving or warn the driver when necessary, with a minimized false alarm rate but acceptable positive performance (e.g., safety level) [67]. As AV technology improves over time, the false alarm rate can be reduced significantly with sufficient accuracy to meet the safety requirements [68]. However, if the algorithm is not able to detect all hazards effectively and efficiently, the safety of AVs will be threatened. It should be pointed out that it may take a few seconds for drivers occupied by secondary tasks to respond and take over control from the automated vehicle [69–71], which brings uncertainty to safe AV control. Unfortunately, AV technology is not yet completely reliable; therefore, the human driver has to take over the driving process, supervising and monitoring the driving tasks, when the AV system fails or is limited by its performance capability [69, 72]. In turn, the shifting role of the human driver in AV driving may lead to inattention, reduced situational awareness, and manual skill degradation [73]. Therefore, how to safely and effectively reengage the driver when the autonomous systems fail should be considered in designing AVs from a human-centered perspective.
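The trade-off between false alarms and missed hazards described above can be illustrated with a small sketch; the detector scores and thresholds below are synthetic assumptions, intended only to show how lowering the false alarm rate raises the miss rate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical detector scores: hazards tend to score higher than benign events.
hazard_scores = rng.normal(loc=0.7, scale=0.15, size=1000)
benign_scores = rng.normal(loc=0.4, scale=0.15, size=9000)

for threshold in (0.45, 0.55, 0.65):
    false_alarm_rate = float(np.mean(benign_scores >= threshold))
    miss_rate = float(np.mean(hazard_scores < threshold))
    print(f"threshold={threshold:.2f}  "
          f"false alarms={false_alarm_rate:.1%}  missed hazards={miss_rate:.1%}")
```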
Action Error
After receiving a command from the decision layer, the action controller further controls the steering wheel, throttle, or brake of a traditional engine to change direction and accelerate or decelerate [74, 75]. In addition, the actuators monitor the feedback variables, and the feedback information is used to generate new actuation decisions. Similar to traditional driving systems, action errors due to the failure of the actuators or the malfunction of the powertrain, control system, heat management system, or exhaust system may give rise to safety problems. However, a human driver would be able to identify this type of safety issue
during driving and pull over within a short response time. How the vehicle learns in these scenarios and responds to such low-frequency but fatal malfunctions of major vehicular components would be challenging for a fully automated driving system. Therefore, the accident reconstruction of traditional vehicles would also be important [76].
Accidents Caused by Other Road Users
According to the accidents related to AVs reported by the State of California Department of Motor Vehicles [2, 77], the majority of the accidents related to AVs are caused by the other parties on a public road. For example, vehicles, bicyclists, and angry or drunk pedestrians who share the same road with the AVs may behave abnormally, which is difficult even for a human driver to handle. It is urgent to investigate how advanced AVs will react to these hazardous scenarios, and it would not be surprising if this technology dramatically reduced fatal accidents on roads. However, autonomous technology is still not mature enough to handle very complicated scenarios until some key issues are solved, including the effective detection and prediction of hazardous behaviors of other road users and the correct decisions made by the autonomous system. The effective detection of hazards caused by other road users is crucial for the AVs to make active decisions to avoid oncoming accidents. The AVs should decide whether they need to take actions that may violate traffic regulations to avoid potentially fatal or injurious accidents.
ON-ROAD TESTING AND REPORTED ACCIDENTS
In this section, publicly available data for on-road AV testing, including disengagement and accident reports, have been analyzed for a direct understanding of the safety status of AVs. Two typical data sources, from the California Department of Motor Vehicles (USA) and the Beijing Innovation Center for Mobility Intelligent (China), are investigated in this section.
California Department of Motor Vehicles
Safety risks exposed during on-road testing, represented by disengagements and actual accidents, are reported by the State of California Department of Motor Vehicles [78, 79]. This section reviews the disengagement and accident reports released by the State of California Department of Motor Vehicles
as of April 2019, and 621 disengagement reports between 2014 and 2018 have been statistically analyzed. Figure 3 shows the statistical status of the on-road AV testing in California reported by the Department of Motor Vehicles, in terms of cumulative mileage and the breakdown of mileage and disengagements. Disengagements during an AV on-road test do not necessarily lead to traffic accidents, but they represent risk events that require the human operator to be alert and take over the automated vehicle [77, 78]. The 621 disengagement reports indicate that the total mileage of the AVs tested in California has reached 3.7 million miles (see Figure 3(a)), among which Google contributed 73% of the total autonomous driving mileage, followed by GM Cruise (13%), Baidu (4%), Apple (2%), and other manufacturers (8%), as shown in Figure 3(b). In total, 159,870 disengagement events have been reported, and the top four manufacturers are Apple (48%), Uber (44%), Bosch (2%), and Mercedes-Benz (1%). The disengagement events were categorized into two primary modes by Apple: manual takeovers and software disengagements [77]. Manual takeovers were recorded when the AV operators decided to manually control the vehicles instead of the automated systems, whenever they deemed it necessary. These events can be caused by complicated actual driving conditions, including but not limited to emergency vehicles, construction zones, or unexpected objects around the roads. Software disengagements can be caused by the detection of an issue with perception, motion planning, controls, or communications. For example, if the sensors cannot sufficiently perceive and track an object in the surrounding environment, human drivers will take over the driving process. A failure to generate a motion plan in the decision layer, or a late or inappropriate response of the actuator, will also result in a disengagement event.
Figure 3: (a) Cumulative mileage, (b) breakdown of mileage, and (c) breakdown of disengagements based on various manufacturers. (Data are statistically analyzed from the reports by the State of California Department of Motor Vehicles between September 2014 and November 2018; the data from Waymo and Google are combined and noted as Google in this figure).
However, it should be noted that different manufacturers may have a different understanding of the disengagement events, which means the reported disengagement events may be incomplete for some companies. Figure 4 presents the relation between disengagements per mile and total miles for different manufacturers. It can be seen that the manual takeover frequency varies significantly from 2 × 10−4 to 3 disengagements per mile for different manufacturers. The significant difference may primarily result
from the maturity of the autonomous technology; however, the definition of disengagements at this early stage of on-road testing may also contribute to the difference in disengagement frequency. Policymakers may play a vital role in establishing a widely accepted definition of disengagement events that considers perception errors, decision errors, action errors, system faults, and other issues.
Figure 4: Number of disengagements vs. autonomous miles according to reported data from various manufacturers.
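The disengagement frequencies discussed above are simple ratios of reported disengagement events to autonomous miles. The following sketch shows how such rates would be tabulated; the manufacturer names and totals are made-up placeholders, not figures from the DMV reports.

```python
# Hypothetical per-manufacturer totals (placeholders, not the DMV figures).
reports = {
    "manufacturer_A": {"autonomous_miles": 1_200_000, "disengagements": 240},
    "manufacturer_B": {"autonomous_miles": 8_000,     "disengagements": 24_000},
}

for name, r in reports.items():
    rate = r["disengagements"] / r["autonomous_miles"]   # disengagements per mile
    print(f"{name}: {rate:.2e} disengagements per mile")
```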
Figure 5 indicates the breakdown of the actual AV accident reports in California between 2014 and 2018 from the Department of Motor Vehicles. 128 accident reports are statistically analyzed, and the top four reporters are GM Cruise (46%), Waymo (22%), Google (17%), and Zoox (5%). It should be noted that Waymo originated from the Google Self-Driving Car Project in 2009 [80]. Among these 128 accident reports in the past four years, 36.7% of the accidents occur during the conventional manual-control mode, while the remaining 63.3% are found in autonomous driving mode. This indicates that the autonomous technology still requires more intensive on-road testing before it can be completely applied to the AVs. It is also interesting to find that only a small portion (around 6.3%) of the total accidents is caused by the AVs, while 93.7% of the accidents are caused by the other parties, including pedestrians, cyclists, motorcycles, and conventional vehicles. This indicates that a further study on the potential operating strategy of the AVs to avoid passive accidents may dramatically improve AV safety.
Figure 5: Breakdown of autonomous vehicle accident reports. (Data are statistically analyzed from the reports by the State of California Department of Motor Vehicles between September 2014 and November 2018).
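The mode and at-fault breakdowns reported above follow from a straightforward tabulation of the individual accident reports, as sketched below; the records shown are illustrative placeholders, not actual DMV entries.

```python
from collections import Counter

# Each report reduced to two fields; the entries are illustrative placeholders.
accident_reports = [
    {"driving_mode": "autonomous",   "initiated_by": "other_party"},
    {"driving_mode": "autonomous",   "initiated_by": "other_party"},
    {"driving_mode": "conventional", "initiated_by": "other_party"},
    {"driving_mode": "autonomous",   "initiated_by": "av"},
]

n = len(accident_reports)
by_mode = Counter(r["driving_mode"] for r in accident_reports)
by_fault = Counter(r["initiated_by"] for r in accident_reports)

print({mode: f"{count / n:.1%}" for mode, count in by_mode.items()})
print({party: f"{count / n:.1%}" for party, count in by_fault.items()})
```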
Figure 6 indicates the relation between reportable accidents and the total mileage for the AVs tested in California. It can be seen that before 2017 the number of reportable accidents increases slowly with the total testing mileage, at a rate of 1.7 × 10−5 accidents per mile; however, from 2017 to 2018, the rate becomes 4.9 × 10−5 accidents per mile, almost a threefold increase. This is likely due to the advanced but immature technology applied to the recently tested AVs and the increasing number of AVs being tested simultaneously in California.
Figure 6: Relation between cumulative accidents and cumulative autonomous miles. (The data shown in this figure are reported by the manufacturers as of April 2019.)
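The per-mile accident rates quoted above correspond to the slope of the cumulative-accident curve against cumulative autonomous mileage within each period. The sketch below shows the calculation; the checkpoint values are illustrative placeholders chosen only to be of the same order as the reported figures.

```python
# (cumulative miles, cumulative reportable accidents) at period boundaries;
# the values are illustrative placeholders, not the reported curve.
checkpoints = {
    "start": (0.0,       0),
    "2016":  (1_500_000, 26),
    "2018":  (3_700_000, 128),
}

def rate(p0, p1):
    (m0, a0), (m1, a1) = checkpoints[p0], checkpoints[p1]
    return (a1 - a0) / (m1 - m0)    # accidents per autonomous mile

early, late = rate("start", "2016"), rate("2016", "2018")
print(f"early rate: {early:.1e}/mile, late rate: {late:.1e}/mile, "
      f"ratio: {late / early:.1f}x")
```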
Beijing Innovation Center for Mobility Intelligent
The Beijing Innovation Center for Mobility Intelligent recently reported on the on-road AV testing in restricted urban areas for the year 2018 [27]. Starting in March 2018, the autonomous driving mileage reached 153,565 km (equivalent to 95,420 miles) by the end of December 2018 (see Figure 7(a)). The top four manufacturers are Baidu (90.8%), Pony.ai (5.6%), NIO (2.8%), and Daimler AG (0.5%). However, no disengagement or accident reports are available yet. It would be meaningful if the accident-related information were made publicly available; the shared information could help all manufacturers promote the application of automated technology in vehicles and build customers' confidence in AVs.
Figure 7: (a) Cumulative mileage and (b) breakdown of mileage contribution due to various manufacturers tested in Beijing in the year 2018.
OPPORTUNITIES AND CHALLENGES
AV technology will bring benefits from various perspectives by improving transportation safety, reducing traffic congestion, releasing humans from the driving process, and impacting our community both economically and environmentally [81–84]. Therefore, advanced AV technology has gained increasing interest in both academia and industry, which indicates a variety of opportunities for the development of AVs. However, AVs require extensive experimental efforts before they can be promoted in markets, and new challenges from the adopted software, hardware, vehicle systems, infrastructure, and other road users have to be addressed.
Opportunities
One argument against AV technology development is that many traditional job opportunities will be eliminated. However, as the technology develops, more jobs will in reality be created. AV development requires extensive testing of software, hardware, vehicle components, vehicle systems, sensing devices, communication systems, and other multidisciplinary fields. With AV technology, human operators can be released from the driving process, and time can be better managed. People will work, play, and study more efficiently due to the promotion of AV technology. In addition, the current lifestyle would be altered. For instance, the ways of driver training and driver's license testing would change. In other words, not only the AV-related field but also the non-AV industry can be promoted. AV techniques can also change the traditional way of transportation. The demand for releasing vehicle operators from driving has driven the development of the intelligent vehicle grid, with the help of a platform of sensors that collects information from the surrounding environment, including other road users and road signs. These signals will be provided to the drivers and infrastructure to enable safe navigation, reduce emissions, improve fuel economy, and manage traffic efficiently. Stern et al. carried out a ring road experiment involving both autonomous and human-operated vehicles, and their results indicate that a single AV can be used to control the traffic flow of at least 20 human-operated vehicles, with significant improvements in vehicle velocity standard deviation, excessive braking, and fuel economy [85]. Liu and Song investigated two types of lanes designed for AVs: the dedicated AV lane and the AV/toll lane [86]. The dedicated AV lane only allows AVs to pass, while the AV/toll lane permits human-operated vehicles to pass by paying extra fees; their modeling results indicate that system performance can be improved by utilizing both methods [86]. Gerla et al. reviewed the Internet of Vehicles, which is capable of communications, storage, intelligence, and self-learning [60]. Their work indicated that the communication between vehicles and the Internet will dramatically change public transportation, making traditional transportation more efficient and cleaner. Therefore, traditional transportation systems have to be modified for AVs. Driving simulators have drawn significant attention for reproducing automated driving conditions and accident scenarios in a virtual reality environment. Owing to driving simulators, driving behaviors, takeover requests, car-following maneuvers, and other human factors can be
efficiently studied [69–71, 87]. This can minimize the risk of putting drivers in dangerous environments while simulating the decision-making process and the associated consequences.
Challenges
The wide application of AVs remains challenging due to safety issues. AVs will be promoted if the following challenges can be further addressed:
Minimizing Perception Errors
Effectively detecting, localizing, and categorizing the objects in the surrounding environment remains a challenge for minimizing perception errors. In addition, the perception and comprehension of human behaviors, including posture, voice, and motion, will be important for AV safety.
Minimizing Decision Errors
To respond correctly and in a timely manner to the ambient environment, a reliable, robust, and efficient decision-making system should be developed. This should be achieved through extensive and strict hardware and software testing. In addition, how to make correct decisions under complicated scenarios is still difficult, e.g., what the decision should be if an AV has to hurt pedestrians in order to avoid a fatal accident caused by a sudden system fault or mechanical failure.
Minimizing Action Errors
To achieve safe AVs, actuators should be able to communicate with the decision systems and execute the commands, either from human operators or from automated systems, with high reliability and stability.
Cyber-Security
As autonomous technology develops, AVs will have to wirelessly communicate with road facilities, satellites, and other vehicles (e.g., the vehicular cloud). Ensuring cyber-security will be one of the biggest concerns for AVs [88].
Interaction with the Traditional Transportation System
AVs and traditional vehicles will share public roads in urban areas, and the interaction between AVs and other road users, including traditional vehicles and pedestrians, will be challenging [89]. For the other road users, it is difficult to identify the type of vehicle they are interacting with. For pedestrians, this uncertainty may lead to stress and altered crossing decisions, especially when the AV driver is occupied by other tasks and does not make eye contact with the pedestrians [56]. Rodríguez Palmeiro et al.'s work suggested that fine-grained behavioral measures, such as eye-tracking, can be further investigated to determine how pedestrians react to AVs [4].
Customer Acceptance
The major factors limiting the commercialization of AVs include safety [90], cost [17, 91], and public interest [92–97], among which safety is the paramount issue that can significantly affect the public attitude towards the emerging AV technology.
SUMMARY AND CONCLUDING REMARKS
Fully autonomous vehicles (AVs) will allow the vehicles to be operated entirely by automated systems, freeing the human operators to engage in tasks other than driving. AV technology will benefit both individuals and the community; however, safety concerns remain the key technical challenge to the successful commercialization of AVs. In this review article, the levels of automation defined by different organizations in different fields are summarized and compared. The definitions of automation levels by the Society of Automotive Engineers (SAE) are widely adopted in automotive engineering for AVs. A theoretical analysis of the existing and potential types of accidents for AVs is conducted based on typical AV architectures, including the perception, decision, and action systems. In addition, the publicly available on-road AV disengagement and accident reports are statistically analyzed. The on-road testing results in California indicate that more than 3.7 million miles have been tested for AVs by various manufacturers between 2014 and 2018. The AVs are frequently manually taken over by human operators, and the disengagement frequency varies significantly, from 2 × 10−4 to 3 disengagements per mile, across different manufacturers. In addition, 128 accidents are reported over 3.7 million miles, and approximately 63.3% of the total accidents occur when driving
in autonomous mode. A small portion (around 6.3%) of the total accidents is directly related to the AVs, while 93.7% of the accidents are passively initiated by the other parties, including pedestrians, cyclists, motorcycles, and conventional vehicles. These safety risks exposed during on-road testing, represented by disengagements and actual accidents, indicate that passive accidents caused by other road users are the majority. This implies that alerting to and avoiding safety risks caused by the other parties will be of great significance for making safe decisions and preventing fatal accidents.
REFERENCES
1.
J. M. Anderson, N. Kalra, K. D. Stanley, P. Sorensen, C. Samaras, and O. A. Oluwatola, Autonomous Vehicle Technology: A Guide for Policymakers, RAND Corporation, Santa Monica, CA, USA, 2016. 2. F. M. Favarò, N. Nader, S. O. Eurich, M. Tripp, and N. Varadaraju, “Examining accident reports involving autonomous vehicles in California,” PLoS One, vol. 12, 2017. 3. A. Millard-Ball, “Pedestrians, Autonomous vehicles, and cities,” Journal of Planning Education and Research, vol. 38, no. 1, pp. 6–12, 2018. 4. A. Rodríguez Palmeiro, S. van der Kint, L. Vissers, H. Farah, J. C. F. de Winter, and M. Hagenzieker, “Interaction between pedestrians and automated vehicles: a Wizard of Oz experiment,” Transportation Research Part F: Traffic Psychology and Behaviour, vol. 58, pp. 1005– 1020, 2018. 5. L. C. Davis, “Optimal merging into a high-speed lane dedicated to connected autonomous vehicles,” Physica A: Statistical Mechanics and its Applications, vol. 555, Article ID 124743, 2020. 6. R. E. Stern, Y. Chen, M. Churchill et al., “Quantifying air quality benefits resulting from few autonomous vehicles stabilizing traffic,” Transportation Research Part D: Transport and Environment, vol. 67, pp. 351–365, 2019. 7. US Department of Transportation National Highway Traffic Safety Administration, Critical Reasons for Crashes Investigated in the National Motor Vehicle Crash Causation Survey, NHTSA, Washington, DC, USA, 2015. 8. V. Nagy and B. Horváth, “The effects of autonomous buses to vehicle scheduling system,” Procedia Computer Science, vol. 170, pp. 235– 240, 2020. 9. M. W. Levin, M. Odell, S. Samarasena, and A. Schwartz, “A linear program for optimal integration of shared autonomous vehicles with public transit,” Transportation Research Part C: Emerging Technologies, vol. 109, pp. 267–288, 2019. 10. X. Ge, X. Li, and Y. Wang, “Methodologies for evaluating and optimizing multimodal human-machine-interface of autonomous vehicles,” in Proceedings of the SAE Technical Paper Series, Detroit, MI, USA, 2018.
11. L.-J. Tian, J.-B. Sheu, and H.-J. Huang, “The morning commute problem with endogenous shared autonomous vehicle penetration and parking space constraint,” Transportation Research Part B: Methodological, vol. 123, pp. 258–278, 2019. 12. Y.-C. Lee and J. H. Mirman, “Parents’ perspectives on using autonomous vehicles to enhance children’s mobility,” Transportation Research Part C: Emerging Technologies, vol. 96, pp. 415–431, 2018. 13. D. J. Fagnant and K. Kockelman, “Preparing a nation for autonomous vehicles: opportunities, barriers and policy recommendations,” Transportation Research Part A: Policy and Practice, vol. 77, pp. 167–181, 2015. 14. R. Bennett, R. Vijaygopal, and R. Kottasz, “Willingness of people who are blind to accept autonomous vehicles: an empirical investigation,” Transportation Research Part F: Traffic Psychology and Behaviour, vol. 69, pp. 13–27, 2020. 15. C. Włodzimierz and G. Iwona, “Autonomous vehicles in urban agglomerations,” Transportation Research Procedia, vol. 40, pp. 655– 662, 2019. 16. T. Z. Zhang and T. D. Chen, “Smart charging management for shared autonomous electric vehicle fleets: a puget sound case study,” Transportation Research Part D: Transport and Environment, vol. 78, Article ID 102184, 2020. 17. L. Zhang, F. Chen, X. Ma, and X. Pan, “Fuel economy in truck platooning: a literature overview and directions for future research,” Journal of Advanced Transportation, vol. 2020, Article ID 2604012, 10 pages, 2020. 18. J. Farhan and T. D. Chen, “Impact of ridesharing on operational efficiency of shared autonomous electric vehicle fleet,” Transportation Research Part C: Emerging Technologies, vol. 93, pp. 310–321, 2018. 19. M. Lokhandwala and H. Cai, “Siting charging stations for electric vehicle adoption in shared autonomous fleets,” Transportation Research Part D: Transport and Environment, vol. 80, Article ID 102231, 2020. 20. I. Overtoom, G. Correia, Y. Huang, and A. Verbraeck, “Assessing the impacts of shared autonomous vehicles on congestion and curb use: a traffic simulation study in the Hague, Netherlands,” International Journal of Transportation Science and Technology, 2020.
21. P. Koopman and M. Wagner, “Autonomous vehicle safety: an interdisciplinary challenge,” IEEE Intelligent Transportation Systems Magazine, vol. 9, pp. 90–96, 2017. 22. C. Xu, Z. Ding, C. Wang, and Z. Li, “Statistical analysis of the patterns and characteristics of connected and autonomous vehicle involved crashes,” Journal of Safety Research, vol. 71, pp. 41–47, 2019. 23. B. Németh, Z. Bede, and P. Gáspár, “Control strategy for the optimization of mixed traffic flow with autonomous vehicles,” IFACPapersOnLine, vol. 52, no. 8, pp. 227–232, 2019. 24. D. Phan, A. Bab-Hadiashar, C. Y. Lai et al., “Intelligent energy management system for conventional autonomous vehicles,” Energy, vol. 191, Article ID 116476, 2020. 25. F. Chen, M. Song, and X. Ma, “A lateral control scheme of autonomous vehicles considering pavement sustainability,” Journal of Cleaner Production, vol. 256, Article ID 120669, 2020. 26. F. Chen, M. Song, X. Ma, and X. Zhu, “Assess the impacts of different autonomous trucks’ lateral control modes on asphalt pavement performance,” Transportation Research Part C: Emerging Technologies, vol. 103, pp. 17–29, 2019. 27. Beijing Innovation Center for Mobility Intelligent, Beijing Autonomous Vehicle Road Test Report, 2018, http://www.mzone.site/. 28. K. Bimbraw, “Autonomous cars: past, present and future A review of the developments in the last century, the present scenario and the expected future of autonomous vehicle technology,” in Proceedings of the 12th International Conference on Informatics in Control, Automation and Robotics, pp. 191–198, Colmar, France, July 2015. 29. L. M. Hulse, H. Xie, and E. R. Galea, “Perceptions of autonomous vehicles: relationships with road users, risk, gender and age,” Safety Science, vol. 102, pp. 1–13, 2018. 30. J. Moody, N. Bailey, and J. Zhao, “Public perceptions of autonomous vehicle safety: an international comparison,” Safety Science, vol. 121, pp. 634–650, 2020. 31. J. Lee, D. Lee, Y. Park, S. Lee, and T. Ha, “Autonomous vehicles can be shared, but a feeling of ownership is important: examination of the influential factors for intention to use autonomous vehicles,” Transportation Research Part C: Emerging Technologies, vol. 107, pp. 411–422, 2019.
32. G. Mordue, A. Yeung, and F. Wu, “The looming challenges of regulating high level autonomous vehicles,” Transportation Research Part A: Policy and Practice, vol. 132, pp. 174–187, 2020. 33. T. B. Sheridan and W. L. Verplank, Human and Computer Control of Undersea Teleoperators, Man-Machine Systems Laboratory, Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA, USA, 1987. 34. R. Parasuraman, T. B. Sheridan, and C. D. Wickens, “A model for types and levels of human interaction with automation,” IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans, vol. 30, no. 3, pp. 286–297, 2000. 35. M. Bonner, R. Taylor, K. Fletcher, and C. Miller, “Adaptive automation and decision aiding in the military fast jet domain,” in Proceedings of the Conference on Human Performance, Situation Awareness and Automation: User Centered Design for the New Millenium, pp. 154– 159, Savannah, GA, USA, October 2000. 36. D. Richards, “To delegate or not to delegate: a review of control frameworks for autonomous cars,” Applied Ergonomics, vol. 53, pp. 383–388, 2016. 37. SAE International, Surface Vehicle Recommended Practice (R) Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles, SAE International, Pittsburgh, PA, USA, 2018. 38. D. Paddeu, I. Shergold, and G. Parkhurst, “The social perspective on policy towards local shared autonomous vehicle services (LSAVS),” Transport Policy, pp. 1–11, 2020. 39. M. Garavello, P. Goatin, T. Liard, and B. Piccoli, “A multiscale model for traffic regulation via autonomous vehicles,” Journal of Differential Equations, vol. 269, no. 7, pp. 6088–6124, 2020. 40. United States Department of Transportation, Automated Vehicles for Safety, United States Department of Transportation, Washington, DC, USA, 2019, https://www.nhtsa.gov/technology-innovation/automatedvehicles-safety#nhtsa-action. 41. W. L. Huang, K. Wang, Y. Lv, and F. H. Zhu, “Autonomous vehicles testing methods review,” in Proceedings of the 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), pp. 163–168, Rio de Janeiro, Brazil, November 2016.
42. P. Pisu, P. Sauras-Perez, A. Gil, J. Singh Gill, and J. Taiber, “VoGe: a voice and gesture system for interacting with autonomous cars,” in Proceedings of the SAE Technical Paper Series, Detroit, MI, USA, 2017.
43. C. Ilas, “Electronic sensing technologies for autonomous ground vehicles: a review,” in Proceedings of the 2013 8th International Symposium on Advanced Topics in Electrical Engineering (ATEE), pp. 1–6, Bucharest, Romania, May 2013.
44. A. Sarmento, B. Garcia, L. Coriteac, and L. Navarenho, “The autonomous vehicle challenges for emergent market,” in Proceedings of the SAE Technical Paper Series, São Paulo, Brazil, 2017.
45. K. Bengler, K. Dietmayer, B. Farber, M. Maurer, C. Stiller, and H. Winner, “Three decades of driver assistance systems: review and future perspectives,” IEEE Intelligent Transportation Systems Magazine, vol. 6, pp. 6–22, 2014.
46. Ü. Özgüner, C. Stiller, and K. Redmill, “Systems for safety and autonomous behavior in cars: the DARPA grand challenge experience,” Proceedings of the IEEE, vol. 95, pp. 397–412, 2007.
47. L. B. Cremean, T. B. Foote, J. H. Gillula et al., “Alice: an information-rich autonomous vehicle for high-speed desert navigation,” Journal of Field Robotics, vol. 23, pp. 777–810, 2006.
48. J. Choi, J. Lee, D. Kim et al., “Environment-Detection-and-Mapping algorithm for autonomous driving in rural or off-road environment,” IEEE Transactions on Intelligent Transportation Systems, vol. 13, pp. 974–982, 2012.
49. D. Gohring, M. Wang, M. Schnurmacher, and T. Ganjineh, “Radar/Lidar sensor fusion for car-following on highways,” in Proceedings of the 5th International Conference on Automation, Robotics and Applications (ICARA), pp. 407–412, Wellington, New Zealand, December 2011.
50. N. Suganuma and T. Uozumi, “Development of an autonomous vehicle—system overview of test ride vehicle in the Tokyo motor show,” in Proceedings of the 2012 IEEE/SICE International Symposium on System Integration (SII), pp. 215–218, Akita, Japan, 2012.
51. C. Armbrust, T. Braun, T. Föhst et al., “RAVON: the robust autonomous vehicle for off-road navigation,” in Proceedings of the IARP International Workshop on Robotics for Risky Interventions and Environmental Surveillance, pp. 12–14, Brussels, Belgium, 2009.
52. C. Urmson, C. Baker, J. Dolan et al., “Autonomous driving in traffic: Boss and the urban challenge,” AI Magazine, vol. 30, pp. 17–28, 2009.
53. T. Nimmagadda, Building an Autonomous Ground Traffic System, Technical Report HR-09-09, The University of Texas, Austin, TX, USA, 2007.
54. A. Petrovskaya and S. Thrun, “Model based vehicle detection and tracking for autonomous urban driving,” Autonomous Robots, vol. 26, no. 2-3, pp. 123–139, 2009.
55. J. Leonard, J. How, S. Teller et al., “A perception-driven autonomous urban vehicle,” Journal of Field Robotics, vol. 25, pp. 727–774, 2008.
56. A. Borowsky and T. Oron-Gilad, “The effects of automation failure and secondary task on drivers’ ability to mitigate hazards in highly or semi-automated vehicles,” Advances in Transportation Studies, vol. 1, pp. 59–70, 2016.
57. D. González, J. Pérez, V. Milanés, and F. Nashashibi, “A review of motion planning techniques for automated vehicles,” IEEE Transactions on Intelligent Transportation Systems, vol. 17, no. 4, pp. 1135–1145, 2016.
58. H. Zhang, C. J. R. Sheppard, T. E. Lipman, T. Zeng, and S. J. Moura, “Charging infrastructure demands of shared-use autonomous electric vehicles in urban areas,” Transportation Research Part D: Transport and Environment, vol. 78, Article ID 102210, 2020.
59. J. Yang, T. Chen, B. Payne, P. Guo, Y. Zhang, and J. Guo, “Generating routes for autonomous driving in vehicle-to-infrastructure communications,” Digital Communications and Networks, pp. 1–12, 2020.
60. M. Gerla, E. K. Lee, G. Pau, and U. Lee, “Internet of vehicles: from intelligent grid to autonomous cars and vehicular clouds,” in Proceedings of the 2014 IEEE World Forum on Internet of Things, pp. 241–246, Seoul, South Korea, March 2014.
61. S. C. Stanciu, D. W. Eby, L. J. Molnar, R. M. Louis, N. Zanier, and L. P. Kostyniuk, “Pedestrians/bicyclists and autonomous vehicles: how will they communicate?” Transportation Research Record: Journal of the Transportation Research Board, vol. 2672, pp. 58–66, 2018.
62. K. Wang, G. Li, J. Chen et al., “The adaptability and challenges of autonomous vehicles to pedestrians in urban China,” Accident Analysis & Prevention, vol. 145, Article ID 105692, 2020.
63. T. Ha, S. Kim, D. Seo, and S. Lee, “Effects of explanation types and perceived risk on trust in autonomous vehicles,” Transportation Research Part F: Traffic Psychology and Behaviour, vol. 73, pp. 271–280, 2020.
64. S. Yu, J. Puchinger, and S. Sun, “Two-echelon urban deliveries using autonomous vehicles,” Transportation Research Part E: Logistics and Transportation Review, vol. 141, Article ID 102018, 2020.
65. Y. Liu and A. B. Whinston, “Efficient real-time routing for autonomous vehicles through Bayes correlated equilibrium: an information design framework,” Information Economics and Policy, vol. 47, pp. 14–26, 2019.
66. C. Ryan, F. Murphy, and M. Mullins, “Spatial risk modelling of behavioural hotspots: risk-aware path planning for autonomous vehicles,” Transportation Research Part A: Policy and Practice, vol. 134, pp. 152–163, 2020.
67. O. T. Ritchie, D. G. Watson, N. Griffiths et al., “How should autonomous vehicles overtake other drivers?” Transportation Research Part F: Traffic Psychology and Behaviour, vol. 66, pp. 406–418, 2019.
68. J. D. Rupp and A. G. King, “Autonomous driving: a practical roadmap,” in Proceedings of the SAE Technical Paper Series, Detroit, MI, USA, 2010.
69. A. Eriksson and N. A. Stanton, “Takeover time in highly automated vehicles: noncritical transitions to and from manual control,” Human Factors, vol. 59, pp. 689–705, 2017.
70. A. Eriksson and N. A. Stanton, “Driving performance after self-regulated control transitions in highly automated vehicles,” Human Factors, vol. 59, pp. 1233–1248, 2017.
71. A. Calvi, F. D’Amico, C. Ferrante, and L. Bianchini Ciampoli, “A driving simulator study to assess driver performance during a car-following maneuver after switching from automated control to manual control,” Transportation Research Part F: Traffic Psychology and Behaviour, vol. 70, pp. 58–67, 2020.
72. L. J. Molnar, L. H. Ryan, A. K. Pradhan et al., “Understanding trust and acceptance of automated vehicles: an exploratory simulator study of transfer of control between automated and manual driving,” Transportation Research Part F: Traffic Psychology and Behaviour, vol. 58, pp. 319–328, 2018.
73. M. Cunningham and M. A. Regan, “Autonomous vehicles: human factors issues and future research,” in Proceedings of the 2015 Australasian Road Safety Research, Policing and Education (ARSRPE) Conference, Gold Coast, Australia, 2015.
74. D. Fényes, B. Németh, and P. Gáspár, “A predictive control for autonomous vehicles using big data analysis,” IFAC-PapersOnLine, vol. 52, pp. 191–196, 2019.
75. S. Lee, Y. Kim, H. Kahng et al., “Intelligent traffic control for autonomous vehicle systems based on machine learning,” Expert Systems with Applications, vol. 144, Article ID 113074, 2020.
76. Q. Chen, M. Lin, B. Dai, and J. Chen, “Typical pedestrian accident scenarios in China and crash severity mitigation by autonomous emergency braking systems,” 2015.
77. State of California Department of Motor Vehicles, Testing of Autonomous Vehicles with a Driver, 2019, https://www.dmv.ca.gov/portal/dmv/detail/vr/autonomous/testing.
78. V. V. Dixit, S. Chand, and D. J. Nair, “Autonomous vehicles: disengagements, accidents and reaction times,” PLoS One, vol. 11, pp. 1–14, 2016.
79. C. Lv, D. Cao, Y. Zhao et al., “Analysis of autopilot disengagements occurring during autonomous vehicle testing,” IEEE/CAA Journal of Automatica Sinica, vol. 5, pp. 58–68, 2018.
80. Waymo, Waymo, 2019, https://waymo.com/.
81. S. Imhof, J. Frölicher, and W. v. Arx, “Shared autonomous vehicles in rural public transportation systems,” Research in Transportation Economics, pp. 1–7, 2020.
82. F. Liu, F. Zhao, Z. Liu, and H. Hao, “Can autonomous vehicle reduce greenhouse gas emissions? A country-level evaluation,” Energy Policy, vol. 132, pp. 462–473, 2019.
83. S. Rafael, L. P. Correia, D. Lopes et al., “Autonomous vehicles opportunities for cities air quality,” Science of the Total Environment, vol. 712, Article ID 136546, 2020.
84. M. A. Figliozzi, “Carbon emissions reductions in last mile and grocery deliveries utilizing air and ground autonomous vehicles,” Transportation Research Part D: Transport and Environment, vol. 85, Article ID 102443, 2020.
85. R. E. Stern, S. Cui, M. L. Delle Monache et al., “Dissipation of stop-and-go waves via control of autonomous vehicles: field experiments,” Transportation Research Part C: Emerging Technologies, vol. 89, pp. 205–221, 2018.
86. Z. Liu and Z. Song, “Strategic planning of dedicated autonomous vehicle lanes and autonomous vehicle/toll lanes in transportation networks,” Transportation Research Part C: Emerging Technologies, vol. 106, pp. 381–403, 2019.
87. A. Calvi, F. D’Amico, L. B. Ciampoli, and C. Ferrante, “Evaluation of driving performance after a transition from automated to manual control: a driving simulator study,” Transportation Research Procedia, vol. 45, pp. 755–762, 2020.
88. I. Rasheed, F. Hu, and L. Zhang, “Deep reinforcement learning approach for autonomous vehicle systems for maintaining security and safety using LSTM-GAN,” Vehicular Communications, vol. 26, Article ID 100266, 2020.
89. D. Petrovic, R. Mijailović, and D. Pešić, “Traffic accidents with autonomous vehicles: type of collisions, manoeuvres and errors of conventional vehicles’ drivers,” Transportation Research Procedia, vol. 45, pp. 161–168, 2020.
90. C. Wei, R. Romano, N. Merat et al., “Risk-based autonomous vehicle motion control with considering human driver’s behaviour,” Transportation Research Part C: Emerging Technologies, vol. 107, pp. 1–14, 2019.
91. S. Chen, H. Wang, and Q. Meng, “Designing autonomous vehicle incentive program with uncertain vehicle purchase price,” Transportation Research Part C: Emerging Technologies, vol. 103, pp. 226–245, 2019.
92. F. Nazari, M. Noruzoliaee, and A. Mohammadian, “Shared versus private mobility: modeling public interest in autonomous vehicles accounting for latent attitudes,” Transportation Research Part C: Emerging Technologies, vol. 97, pp. 456–477, 2018.
93. C. J. Haboucha, R. Ishaq, and Y. Shiftan, “User preferences regarding autonomous vehicles,” Transportation Research Part C: Emerging Technologies, vol. 78, pp. 37–49, 2017.
94. K. F. Yuen, Y. D. Wong, F. Ma, and X. Wang, “The determinants of public acceptance of autonomous vehicles: an innovation diffusion perspective,” Journal of Cleaner Production, vol. 270, Article ID 121904, 2020.
95. H. Zhong, W. Li, M. W. Burris, A. Talebpour, and K. C. Sinha, “Will autonomous vehicles change auto commuters’ value of travel time?” Transportation Research Part D: Transport and Environment, vol. 83, Article ID 102303, 2020.
96. K. Hilgarter and P. Granig, “Public perception of autonomous vehicles: a qualitative study based on interviews after riding an autonomous shuttle,” Transportation Research Part F: Traffic Psychology and Behaviour, vol. 72, pp. 226–243, 2020.
97. S. Wang, Z. Jiang, R. B. Noland, and A. S. Mondschein, “Attitudes towards privately-owned and shared autonomous vehicles,” Transportation Research Part F: Traffic Psychology and Behaviour, vol. 72, pp. 297–306, 2020.
Chapter 12
Studying the Safety Impact of Autonomous Vehicles Using Simulation-Based Surrogate Safety Measures
Mark Mario Morando¹, Qingyun Tian², Long T. Truong¹, and Hai L. Vu¹
¹ Monash Institute of Transport Studies, Department of Civil Engineering, Monash University, Melbourne, VIC, Australia
² School of Civil and Environmental Engineering, Nanyang Technological University, Singapore
ABSTRACT
Autonomous vehicle (AV) technology has advanced rapidly in recent years, with some automated features already available in vehicles on the market. AVs are expected to reduce traffic crashes, as the majority of crashes are related to driver errors, fatigue, alcohol, or drugs. However, very little research has been conducted to estimate the safety impact of AVs. This paper aims to investigate the safety impacts of AVs using a simulation-based surrogate safety measure approach.
Citation: Mark Mario Morando, Qingyun Tian, Long T. Truong, Hai L. Vu, “Studying the Safety Impact of Autonomous Vehicles Using Simulation-Based Surrogate Safety Measures”, Journal of Advanced Transportation, vol. 2018, Article ID 6135183, 11 pages, 2018. https://doi.org/10.1155/2018/6135183. Copyright: © 2018 by Authors. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
To this end, safety impacts are explored through the number of conflicts extracted from the VISSIM traffic microsimulator using the Surrogate Safety Assessment Model (SSAM). Behaviours of human-driven vehicles (HVs) and AVs (level 4 automation) are modelled within VISSIM’s car-following model. The safety investigation is conducted for two case studies, that is, a signalised intersection and a roundabout, under various AV penetration rates. Results suggest that AVs improve safety significantly at high penetration rates, even when they travel with shorter headways to improve road capacity and reduce delay. For the signalised intersection, AVs reduce the number of conflicts by 20% to 65% with AV penetration rates of between 50% and 100% (statistically significant at p