Risk-Based Structural Evaluation Methods: Best Practices and Development of Standards (ISBN 0784415471, 9780784415474)

Prepared by the Technical Council on Life-Cycle Performance, Safety, Reliability, and Risk of Structural Systems of the Structural Engineering Institute of the American Society of Civil Engineers


English | 186 pages | 2019


Table of contents:
Contents
Preface
Chapter 1: Introduction
1.1 Background
1.2 Risk Analysis for Civil Structures and Infrastructure
1.3 Report Objectives
Chapter 2: Summary of Survey Findings
2.1 General Information
2.1.1 List of Respondents
2.1.2 Affiliations
2.1.3 Types of Structures/Infrastructure of Interest
2.1.4 Pertinent Codes / Standards / Specifications / Guidelines
2.1.5 General Approaches for Risk Assessment and Risk Management
2.1.6 Risk Analysis Team Structure
2.1.7 Role of Respondents
2.1.8 Risk Analysts Training Requirements
2.2 Risk Assessment
2.2.1 Definition of Risk
2.2.2 Risk Evaluation for New and Existing Structural Systems
2.2.3 Risk Acceptance Criteria for New and Existing Structural Systems
2.2.4 Design Life for New Structural Systems
2.2.5 Service Life for Existing Structural Systems
2.2.6 Frequency of Inspections, Structural Assessments, and Repair Actions
2.2.7 Most Pertinent Hazards
2.2.8 Likelihood/Probability of Hazard Occurrence
2.2.9 Hazard Occurrence Rates
2.2.10 Hazard Intensity Levels
2.2.11 Quantitative versus Qualitative Hazard Intensity
2.2.12 Structural Component/System Deterioration
2.2.13 Routine Inspection and Maintenance
2.2.14 Performance and Damage Levels
2.2.15 Performance and Damage Measures
2.2.16 Structural Analysis Approach
2.2.17 Probability of Structural Failure
2.2.18 Consequences of Structural Failure
2.2.19 Combination of Multiple Consequences of Structural Failure
2.2.20 Risk Quantification
2.2.21 Risk Estimation from Historical Damage and Cost Data
2.2.22 Risk Communication
2.2.23 Risk Acceptance Criteria
2.3 Risk Management
2.3.1 Risk Mitigation Strategies
2.3.2 Prioritization of Risk Mitigation Strategies
2.3.3 Flow of Risk Information
2.3.4 Risk Communication to the Public
2.4 Suggestions for Improvements
2.4.1 Overall Assessment
2.4.2 Weak Links
2.4.3 Short-Term Plans and Immediate Needs for Improvements
2.4.4 Priorities for Near-Term Improvements
Chapter 3: Summary of Workshop Discussions
3.1 Risk Assessment Methods
3.1.1 Structural Analysis Methods
3.1.2 Hazard Assessment
3.1.3 Structural Deterioration
3.1.4 Structural Performance
3.2 Evaluation of Consequences of Structural Failure
3.3 Risk Analysis Codification and Risk Communication
3.3.1 Risk Communication
3.3.2 Risk Acceptance Criteria
3.4 Risk Data
3.5 Obstacles
Chapter 4: Conclusions and Recommendations
4.1 Conclusions
4.2 Recommendations
References and Further Reading
Appendix A: Survey on Risk-Based Structural Evaluation Methods
ASCE/SEI Technical Council on Life-Cycle Performance, Safety, Reliability, and Risk of Structural Systems
Appendix B: Answers to Section I of the Survey
Appendix C: Answers to Section II of the Survey
Appendix D: Answers to Section III of the Survey
Appendix E: Answers to Section IV of the Survey
Appendix F: Pertinent Standards and Guidelines
References for Table F1-1
Index


Risk-Based Structural Evaluation Methods: Best Practices and Development of Standards

Michel Ghosn, Graziano Fiorillo, Ming Liu, and Bruce R. Ellingwood

Published by the American Society of Civil Engineers

Library of Congress Cataloging-in-Publication Data

Names: Ghosn, Michel, editor. | Fiorillo, Graziano, editor. | Liu, Ming (Structural engineer), editor. | Ellingwood, Bruce R., editor.
Title: Risk-based structural evaluation methods : best practices and development of standards / Michel Ghosn, Graziano Fiorillo, Ming Liu, and Bruce Ellingwood.
Description: Reston, Virginia : American Society of Civil Engineers, 2019. | Includes bibliographical references and index. | Summary: “This report examines the application of risk-based structural evaluation methods and provides best practice recommendations on their implementation in engineering practice”– Provided by publisher.
Identifiers: LCCN 2019033779 | ISBN 9780784415474 (paperback) | ISBN 9780784482643 (pdf)
Subjects: LCSH: Structural failures–Risk assessment. | Construction industry–Risk management. | Buildings–Standards.
Classification: LCC TA656.5 .R57 2019 | DDC 624.072/1–dc23
LC record available at https://lccn.loc.gov/2019033779

Published by American Society of Civil Engineers
1801 Alexander Bell Drive
Reston, Virginia 20191-4382
www.asce.org/bookstore | ascelibrary.org

Any statements expressed in these materials are those of the individual authors and do not necessarily represent the views of ASCE, which takes no responsibility for any statement made herein. No reference made in this publication to any specific method, product, process, or service constitutes or implies an endorsement, recommendation, or warranty thereof by ASCE. The materials are for general information only and do not represent a standard of ASCE, nor are they intended as a reference in purchase specifications, contracts, regulations, statutes, or any other legal document. ASCE makes no representation or warranty of any kind, whether express or implied, concerning the accuracy, completeness, suitability, or utility of any information, apparatus, product, or process discussed in this publication, and assumes no liability therefor. The information contained in these materials should not be used without first securing competent advice with respect to its suitability for any general or specific application. Anyone utilizing such information assumes all liability arising from such use, including but not limited to infringement of any patent or patents.

ASCE and American Society of Civil Engineers—Registered in US Patent and Trademark Office.

Photocopies and permissions. Permission to photocopy or reproduce material from ASCE publications can be requested by sending an email to [email protected] or by locating a title in the ASCE Library (https://ascelibrary.org) and using the “Permissions” link.

Errata: Errata, if any, can be found at https://doi.org/10.1061/9780784415474.

Copyright © 2019 by the American Society of Civil Engineers. All Rights Reserved.
ISBN 978-0-7844-1547-4 (print)
ISBN 978-0-7844-8264-3 (PDF)

Manufactured in the United States of America.



Preface

The structural engineering community has recently implemented methodologies that incorporate explicit risk assessment principles and performance-based criteria to design new structures with improved and predictable performance, assess the safety of existing structures, and manage our deteriorating civil infrastructure systems. Particular attention has been paid to extending the design and safety evaluation processes from their original focus on individual structural components to a more complex approach that considers the performance of an entire structural system, such as a building or a bridge, and, on an even larger scale, a portfolio of multiple structures. The interest in risk analysis techniques has intensified following recent extraordinarily destructive events, including Hurricane Katrina and Superstorm Sandy in the United States, the Fukushima Earthquake in Japan, and other extreme natural hazard events that have severely affected communities for many years.

This report summarizes the findings from a survey of the attitudes of researchers, structural engineers, and government agencies toward risk-informed structural engineering practices and from a follow-up workshop held in September 2014, both completed under the auspices and with the financial support of the Structural Engineering Institute of the American Society of Civil Engineers. The survey and workshop objectives were to examine the progress made on the implementation of risk-based structural evaluation methods (RBsEM), review best practices, and investigate the possibility of developing risk-based standards.

In the broadest terms, risk in structural engineering can be defined as the integration of the probability of structural failure and the associated consequences. The responses to the survey and the discussions at the workshop demonstrated that risk analysis principles are well established from a theoretical point of view. However, a number of barriers have hampered a wide-scale implementation of risk-based methods in decision-making processes. These barriers include (a) the difficulty of applying probabilistic analysis techniques when evaluating the performance of complex structures and networks; (b) limited statistical data to model the intensity of extreme hazards and their effects on structural systems; (c) the lack of calibrated criteria that relate analysis results to physical damage of different types of structural systems; (d) the difficulty of enumerating the consequences of failure and assigning quantifiable measures for these consequences; and (e) the paucity of guidelines and standards.

Some industries, such as the nuclear power industry, have overcome many of these challenges through long-term research. Furthermore, in seismic engineering, the use of probabilistic performance-based design methods, combined with consideration of the consequences of damage, has become widely accepted, and the process is on track for routine application.


Concepts of performance-based design, with objective or subjective evaluations of risk, have also been introduced in ASCE 7 and other standards for the design and safety evaluation of buildings and other structures. Other industries, such as those concerned with the state of dams and transportation infrastructure systems, have recently initiated ambitious research programs to improve their risk analysis methodologies and have, in the interim, resorted to implementing empirical approaches based on the experience of industry leaders and the review of historical data and the archival literature.

Despite the progress made in the field, implementation of RBsEM is still in its infancy, and additional work remains before risk-based methods evolve into the standard approach for decision-making processes in structural and infrastructure engineering. All the participants in the survey and workshop, who included members of government and regulatory agencies, practicing engineers, and academic researchers, expressed great interest in advancing RBsEM, which in their opinion presents the best approach for addressing issues related to the management of aging structural and civil infrastructure systems susceptible to increased environmental and climate-related hazards, as well as increased security threats.

The survey respondents and workshop participants made a number of recommendations to encourage and support the application of RBsEM in structural and infrastructure engineering practice and advance the field. The implementation of these recommendations will require the involvement of professional societies such as the American Society of Civil Engineers, regulatory agencies, research organizations, and educational institutions. These recommendations can be summarized as

• Developing guidelines and training programs for risk assessment;
• Facilitating the implementation of technical innovations and providing technical support, including the development of appropriate computer software;
• Enhancing data gathering and developing advanced statistical analysis techniques for the probabilistic modeling of hazards, the projection of expected extreme natural and human-made events, the assessment of the resulting physical damage to structures and infrastructure, and the estimation of associated direct and indirect losses;
• Investigating approaches for establishing optimum risk acceptance criteria that take into consideration public attitudes toward risk for different levels of hazards and multiple hazards; and
• Exploring effective means of communicating risk to different stakeholders, including the general public.

Because of the nature of the problem it addresses, structural and infrastructure risk assessment is a multidisciplinary process that requires diverse technical expertise, including engineers, system analysts, social scientists, economists, and actuaries. The risk analysis team would also include experts in specific topics depending on the issues and hazards of concern.


For example, materials scientists would help identify structural degradation mechanisms, climate scientists would provide projections of future changes in climatic hazards, and geoscientists would help in the modeling of seismic hazards. Furthermore, civil infrastructure risk-based decision making involves multiple stakeholders, including owners, regulators, policymakers, and the affected community, with a great impact on the well-being of the general public. Hence, there is great incentive for all concerned to promote the field and encourage the implementation of RBsEM as the basis for managing our structural and infrastructure systems.


CHAPTER 1

Introduction

1.1 BACKGROUND

In response to our society’s concern with the dangers that large-scale natural and human-made hazards may inflict on the built environment, as well as on public safety and well-being, the structural engineering community has recently begun to implement design and evaluation methodologies that explicitly consider risk assessment principles through various means, including the establishment of performance-based criteria (ASCE 2017a). The goal is to assuage society’s concerns while reconciling potential differences between actual risk and society’s perception of risk (Slovic 2000). Particular attention has been paid to extending the traditional structural design and safety evaluation approaches from their original focus on the components (such as beams, columns, slabs, walls, and foundations) of a single structure to a more complex approach that considers the performance of an entire structural system, such as a building or a bridge, or a portfolio of multiple systems (Ghosn et al. 2016a).

Interest in risk analysis techniques has intensified since several recent extraordinarily destructive events, such as Hurricane Katrina and Superstorm Sandy in the United States, the Fukushima Earthquake in Japan, and other natural hazard events, as well as human-caused malicious events, such as the attack on the World Trade Center in New York, that have devastated the affected communities (FEMA 2005, UNISDR 2017). The interest in risk-based design and evaluation methods stems from the need to consider the consequences of a structural failure rather than just concentrating on the failure probability of individual members, as is done in traditional design and structural assessment practices. A review of the technical evolution of the field of structural design and evaluation is provided in a compendium of papers authored by members of the ASCE/SEI Technical Council on Life-Cycle Performance, Safety, Reliability and Risk of Structural Systems (Biondini and Frangopol 2016; Ghosn et al. 2016a, 2016b; Lounis and McAllister 2016; Sanchez-Silva et al. 2016).

The risk assessment process consists of (a) establishing performance criteria or goals for the structural system, (b) estimating the probability of occurrence of undesired events or loads (the hazards), (c) analyzing the structural response to the hazards, and (d) assessing the potential consequences (losses).


A simplified numerical measure of risk is often presented as the product of the probability of failure, P_f, and the consequence of failure, C_f (Ang and Tang 1984):

\[ \text{risk} = P_f \times C_f \qquad \text{(1-1)} \]
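As a purely illustrative reading of Equation (1-1), with assumed values rather than figures drawn from the report, an annual failure probability of 1 in 10,000 and a failure consequence of $50 million correspond to an expected annual loss of

\[ \text{risk} = P_f \times C_f = (10^{-4}\ \text{yr}^{-1}) \times (\$50{,}000{,}000) = \$5{,}000 \ \text{per year}. \]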

where the consequence of failure (or loss) can be expressed in monetary terms to assess the costs of failure, which may include direct costs, such as the cost of the structure and loss of life, as well as indirect costs, such as user costs, economic losses, and environmental, societal, and political impacts.

While Equation (1-1) represents a generally accepted simplified expression for risk, the expression can be further expanded so that risk, or the probability of loss owing to damage under multiple hazards, can be represented as a summation (or integration) of all losses over time, accounting for the random nature of all the components (Ellingwood 2001):

\[ \text{risk} = P(\text{loss} > c) = \sum_{H} \sum_{D} \sum_{LS} P(\text{loss} \mid D, LS, H)\, P(D \mid LS, H)\, P(LS \mid H)\, P(H) \qquad \text{(1-2a)} \]

where loss is any appropriate loss metric; P(loss > c) is the probability that the loss will exceed a certain value, c; P(H) is the probability of occurrence of an input intensity level associated with an event or hazard, H; P(LS|H) is the conditional probability of exceeding a structural limit state LS given the hazard H; P(D|LS, H) is the conditional probability of a damage state D given the exceedance of the structural limit state LS under the hazard H; and P(loss|D, LS, H) is the conditional probability of a loss given the damage state D due to the exceedance of the limit state LS under the hazard H.

Equation (1-2a) is a statement of the theorem of total probability. In risk assessment practice it is often expressed in the form

\[ \lambda_{\text{loss} > c} = \sum_{H} \sum_{D} \sum_{LS} P[\text{loss} > c \mid D]\, P[D \mid LS]\, P[LS \mid H]\, \lambda_H \qquad \text{(1-2b)} \]

where λ_loss>c is the mean annual rate of losses greater than c, P[loss > c | D] is the probability that the loss will exceed a value c given the damage state D, λ_H is the mean annual rate of occurrence of hazard H, and the other conditional probabilities are defined as previously mentioned. Note that there is only one conditioning event in each of the probabilities in Equation (1-2b). For example, the loss depends on the damage state (e.g., moderate or severe) but not on how that damage state occurred, which is consistent with insurance payout policies.
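To make the bookkeeping behind Equation (1-2b) concrete, the following minimal sketch evaluates the triple sum for a single structure. The hazard levels, limit states, damage states, and all probabilities are invented for illustration only; in practice these quantities would come from hazard curves, fragility analyses, and loss models.

```python
# Minimal sketch of Equation (1-2b): mean annual rate of losses exceeding a threshold c.
# All hazard names, rates, and probabilities below are illustrative assumptions,
# not values taken from the report or from any standard.

# lambda_H: mean annual rate of occurrence of each hazard intensity level H
hazards = {"moderate_quake": 0.02, "severe_quake": 0.002}

# P[LS | H]: probability of exceeding a structural limit state given the hazard
p_ls_given_h = {
    "moderate_quake": {"yielding": 0.10, "collapse": 0.005},
    "severe_quake": {"yielding": 0.60, "collapse": 0.08},
}

# P[D | LS]: probability of reaching a damage state given the limit-state exceedance
p_d_given_ls = {
    "yielding": {"moderate_damage": 0.7, "severe_damage": 0.1},
    "collapse": {"moderate_damage": 0.0, "severe_damage": 1.0},
}

# P[loss > c | D]: probability that the loss exceeds the threshold c given the damage state
p_loss_exceeds_c_given_d = {"moderate_damage": 0.05, "severe_damage": 0.90}

# Triple sum over hazards H, limit states LS, and damage states D, per Equation (1-2b)
lambda_loss_exceeds_c = sum(
    p_loss_exceeds_c_given_d[d] * p_d * p_ls * lam_h
    for h, lam_h in hazards.items()
    for ls, p_ls in p_ls_given_h[h].items()
    for d, p_d in p_d_given_ls[ls].items()
)

print(f"Mean annual rate of loss exceeding c: {lambda_loss_exceeds_c:.2e} per year")
```

Replacing the nested dictionaries with hazard curves and fragility functions discretized over many intensity levels turns the same summation into the numerical integration over all hazard intensities described above.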

Structural designs that mitigate the risk of structural system collapse and other losses due to partial damage or local failures require a different approach than traditional prescriptive component-based methods. This has led engineers to consider the adoption of performance-based design methods, which provide a more transparent design process, where performance objectives are explicitly stated and uncertainties identified and included in the analysis procedure.

To help facilitate the application of the more complex risk-based methods, a few agencies have long had specific guidelines for risk analysis, such as the US Nuclear Regulatory Commission’s (2016) guidelines for evaluating the safety of nuclear power plants. Others, such as the Federal Emergency Management Agency (FEMA), in collaboration with the National Earthquake Hazards Reduction Program (NEHRP), are also proposing procedures to apply risk assessment during the seismic design of buildings (FEMA 2012). However, because of the difficulty of obtaining quantitative measures of loss and the complications in calculating the probability of collapse, many newly proposed performance-based design criteria have not yet been calibrated for quantitative assessments of risk. Instead, performance-based design methods have implicitly included subjective notions of risk based on expert judgment and estimates of the public’s risk tolerance.

For example, ASCE/SEI Standard 7-16 (ASCE 2017a) assigns different target member reliability levels for the structural components of buildings based on their classification into four risk categories:

I. Buildings and other structures that represent a low risk to human life in the event of failure.
II. All other buildings and other structures not in Risk Categories I, III, and IV.
III. Buildings and other structures that could pose a substantial risk to human life or cause a substantial economic impact or disruption of everyday life in the event of failure.
IV. Buildings and other structures designated as essential facilities or whose failure could pose a substantial hazard to the community.

Risk Category II includes most buildings, and, therefore, the other categories are described relative to Risk Category II. The acceptable level of risk is the highest for Risk Category I, since the risk to life from a collapse is low. Conversely, the acceptable risk is the lowest for Risk Category IV, since a collapse would present a high risk of loss of life and a substantial hazard to the community. The seismic provisions in ASCE 7-16 stipulate different collapse prevention probabilities that are keyed to the same four risk categories.

Additional efforts are underway by the National Institute of Standards and Technology (NIST 2011a), in cooperation with NEHRP and other agencies, to develop tools to mitigate the risk from strong earthquake ground motions to new and existing buildings, lifelines, and infrastructure and to develop improved building codes and new design approaches. Several other standards, codes, and specifications, such as those established by the American Association of State Highway and Transportation Officials (AASHTO 2017) for the design of bridges, have incorporated their own variations of performance-based design methodologies by assigning different load (modifying) factors based on the bridge’s importance or the level of structural system redundancy and ductility. Other ongoing activities include those by the US Army Corps of Engineers (USACE 2014), which has established a Risk Analysis for Dam Safety Research Program to provide methodology and tools to aid in allocating investments to improve the safety of dams. Efforts are also underway at NIST (2011b, c) to develop risk-informed performance-based approaches for structures subjected to wind and coastal inundation and to evaluate the risks of fire.

1.2 RISK ANALYSIS FOR CIVIL STRUCTURES AND INFRASTRUCTURE

As illustrated in Figure 1-1, the risk analysis framework in structural and infrastructure engineering has two major components: risk assessment and risk management. Risk assessment is subdivided into four tasks: hazard identification, estimation of hazard magnitudes, analysis of structural response to identify potential failure modes and the probability of failure under a given hazard and combination of hazards, and evaluation of failure consequences. In this context, the word “hazard” is taken in its widest definition, considering all sources of loss, including operational hazards owing to normal wear and tear during regular use under design conditions, accidental or malicious human-made hazards, common environmental and climate-related hazards, and extreme environmental events.

Risk management is the decision-making process that uses the information assembled during risk assessment to mitigate the risk. While risk assessment is generally executed by teams of engineers with the support of other technical experts, risk management is by definition a multidisciplinary task, involving the contributions of engineers, economists, sociologists, stakeholders, and policymakers, and sometimes community representatives.

Figure 1-1. Risk analysis framework (Todinov 2007).

It is important to recognize that a successful implementation of risk-based methods cannot compartmentalize these competences into isolated clusters, and all concerned should have a minimum understanding of all the other disciplines involved in the risk analysis process (Figure 1-1).

1.3 REPORT OBJECTIVES

Although variations of risk-based structural evaluation methods have been investigated over the last two to three decades and their benefits have been well recognized, their implementation for the design and management of structural systems and civil infrastructure is still in its infancy because a number of impediments have hampered their wide-scale implementation in decision-making processes. To identify these impediments and study means to overcome them, the ASCE/SEI Technical Council on Life-Cycle Performance, Safety, Reliability and Risk of Structural Systems conducted a survey and a follow-up workshop on risk-based structural evaluation methods (RBsEM) in September 2014. The survey and workshop were organized under the auspices and with the financial support of ASCE’s Structural Engineering Institute (SEI) to examine the progress made on the implementation of RBsEM, review best practices, and investigate the possibility of developing risk-based standards for structural evaluation. The survey and workshop compared the state of the art and the state of practice and explored avenues to overcome real and perceived obstacles to the routine implementation of risk-based methods in structural engineering.

The specific objectives of the survey and workshop were to

1. Share experiences and information across structural and infrastructure engineering sectors on
   • Successful practices in risk-based structural design, evaluation, and decision-making processes at the project and network levels;
   • Availability of statistical data, tools, and techniques for RBsEM;
   • Principles and procedures for establishing risk acceptance criteria at the project and network levels;
   • Risk communication for engineering professionals, stakeholders, decision makers, and the public;
   • Obstacles hindering the implementation of RBsEM and how to overcome them; and
   • Development of risk-based standards.
2. Outline a plan for a set of activities to advance and encourage the implementation of RBsEM.
3. Build an RBsEM community consisting of representatives of government and regulatory agencies, practicing engineers, and researchers.


This report represents a compilation of the results of the survey and the discussions held at the follow-up workshop of September 2014. The information collected is synthesized by the authors in the narrative presented in the next three chapters, while the appendixes provide the verbatim responses of the survey participants. Figures and charts are also provided in the report summarizing respondents’ answers to specific questions. The authors take all responsibility for the accuracy of the statements and conclusions drawn from the survey and the workshop proceedings and for the interpretation of the participants’ responses, which are presented in the next chapters.

Because the participants were not asked to cite specific references in support of their arguments, the authors believe that this compilation of the survey and workshop results stands on its own. Therefore, we have chosen not to give specific references, which could have amplified and supported some of the information presented in the next chapters, beyond the few references cited in this introduction. These are meant to provide some context for the discussion presented in the next two chapters rather than an exhaustive review of the field. On the other hand, a list of standards, specifications, and guidelines that the participants found to be pertinent to their engineering practice is given in Appendix F.

CHAPTER 2

Summary of Survey Findings

A survey on RBsEM was distributed to three groups of potential participants: representatives of government agencies who may be classified as owners and regulators of structural and infrastructure systems, academic researchers, and practicing engineers. Two slightly differently worded versions of the survey were sent out: the first version was meant for the first two groups, focusing on the development of RBsEM, and the second version was meant for the last group, who are involved in the actual implementation of risk-based methods. The goal of the survey was to gather information about the state of the art and research needs of RBsEM, to assess the extent of implementation in current practice, and to identify specific techniques used during such implementations.

The survey questionnaire had four sections. In the first section, the respondents were asked to provide general information about their risk-related research or professional activities and their organizations. The second section inquired about the types of structures and hazards for which risk methods had been studied or applied, along with the tools used to assess risk. The third section was about risk management techniques. The final section solicited general comments and suggestions for future research. Appendix A reproduces the two versions of the questionnaire.

The survey was emailed to 74 government and academic researchers and 73 practicing engineers in both the public and private sectors, for a total of 147 potential participants. The list of people contacted was based on membership in the ASCE/SEI Technical Council on Life-Cycle Performance, Safety, Reliability and Risk of Structural Systems, the research interests of known academics in the field, officials and staff from government organizations that deal with structural risk assessment and management, and a general web search for other professionals with technical expertise in the use of structural risk methods.

This chapter summarizes the responses received for each section of the survey. Section 2.1 assembles the general information provided by the respondents about themselves, their organizations, and their involvement in risk-related activities. Section 2.2 summarizes the responses to survey questions related to risk assessment methods. Section 2.3 presents a summary of the responses to questions related to risk management, and Section 2.4 reports the respondents’ general assessments and suggestions. The detailed responses from researchers, code writers, and those involved in implementation are provided in Appendixes B through E.


2.1 GENERAL INFORMATION

Thirty-seven of the 147 people contacted responded to the survey, a response rate of 25%, consistent with the response expected in such specialized professional surveys. The number of surveys filled out by practicing engineers was 16. Eleven of the engineers worked in industry, and five were government officials. The number of surveys returned from academics and researchers was 21.

Section I of the survey consists of several general questions, which addressed

I.1. List of respondents
I.2. Affiliations
I.3. Type of structures/infrastructure of interest
I.4. Pertinent codes/standards/specifications/design guidelines
I.5. Description of adopted risk analysis approaches
I.6. Risk analysis team structure
I.7. Technical and research background of respondents
I.8. Risk analyst training requirements.

The responses to these questions are summarized and discussed in the subsections that follow. The actual responses to the questions are provided in Appendix B.

2.1.1 List of Respondents

The names of the respondents and their affiliations are provided in Table 2-1. The responses to the survey were divided into two categories: those of researchers and those of engineering practitioners. The last column of the table identifies the category assigned to each respondent. It should be noted that the order in the table is not related to the order of the responses provided in the appendixes.

2.1.2 Affiliations

As mentioned, 57% of the survey’s respondents were from academic institutions in the US and overseas, 30% from consulting firms, and 13% from government agencies. Responses were received from engineers who have worked with the following government agencies: US Nuclear Regulatory Commission, US Army Corps of Engineers, US Federal Highway Administration, New York State Department of Transportation, US Air Force Research Laboratory, and National Research Council of Canada. Of these six agencies, the first four were classified under “engineering,” and the last two under “research.”

2.1.3 Types of Structures/Infrastructure of Interest

The respondents identified the types of structures that are of most interest to their research and where elements of RBsEM are being implemented in engineering practice.

Table 2-1. List of Survey Respondents.

First name | Last name | Affiliation | Sector
Ferhat | Akgül | Middle East Technical University | Research
Mitsuyoshi | Akiyama | Waseda University, Japan | Research
Sreenivas | Alampalli | New York State Department of Transportation | Engineering
Yousef | Alostaz | AECOM | Engineering
Bilal M. | Ayyub | University of Maryland | Research
Swagata | Banerjee | Pennsylvania State University | Research
Paolo | Bocchini | Lehigh University | Research
Eun Jeong | Cha | University of Illinois | Research
Gary | Chock | Martin & Chock | Engineering
Nilesh C. | Chokshi | US Nuclear Regulatory Commission | Engineering
Perry | Cole | Dibble Engineers | Engineering
Jason | Coray | Marx Okubo | Engineering
Ross B. | Corotis | University of Colorado Boulder | Research
Andy | Coughlin | Hinman Consulting Engineers | Engineering
Armen | Der Kiureghian | University of California, Berkeley | Research
Leonardo | Dueñas-Osorio | Rice University | Research
Louis | Esteva | Instituto de Ingeniería, Ciudad Universitaria, México | Research
Fernando | Ferrante | US Nuclear Regulatory Commission | Engineering
Daniel | Fodera | Federal Highway Administration | Engineering
Dan M. | Frangopol | Lehigh University | Research
Rade | Hajdin | Infrastructure Management Consultants, Switzerland | Engineering
Jon A. | Heintz | Applied Technology Council | Research
Kent | Hokens | US Army Corps of Engineers | Engineering
Charlie | Kircher | Kircher & Associates | Engineering
Samuel | Labi | Purdue University | Research
Zoubir | Lounis | National Research Council of Canada | Research
Ehsan | Minaie | Intelligent Infrastructure Systems | Engineering
Larry | Nuss | Nuss Engineering | Engineering
Michael | O’Rourke | Rensselaer Polytechnic Institute | Research
Jamie E. | Padgett | Rice University | Research
Dorothy | Reed | University of Washington | Research
Mauricio | Sanchez-Silva | University of Los Andes, Colombia | Research
Mahendra J. | Shah | Private consulting engineer | Engineering
Sarnjeet | Singh | AECOM | Engineering
Alfred | Strauss | University of Natural Resources and Life Sciences, Austria | Research
Eric | Tuegel | Aerospace Systems Directorate, Air Force Research Laboratory | Research
John W. | van de Lindt | Colorado State University | Research

Figure 2-1. Structure and infrastructure types of interest to respondents. Note: Frequency = percentage of respondents in the category who listed a particular structure/infrastructure as of interest.

The responses are summarized in Figure 2-1 as the percentage of respondents in each category that identified a specific structure type as being of interest to their activities. For example, close to 70% of responding researchers but only about 42% of practicing engineers identified risk to buildings as a topic of interest to their work. The responses reflect that power plants, particularly nuclear power plants, are the predominant type of structure for which risk analysis is of interest to responding practitioners, followed by bridges and hydraulic structures such as dams. The figure also shows that researchers are focusing more on buildings, bridges, and highway systems.

The responses reflect the fact that the implementation of risk-based methods in the nuclear industry is the result of long-term research efforts that were invested over the past four decades to develop and refine RBsEM for nuclear power generation facilities. The research effort was motivated by the nature of the nuclear industry, given the devastating impact of any failures on society. That early effort has led to the development and implementation of comprehensive probabilistic analysis tools and methods for evaluating failure causes and sequences and quantifying the consequences of failure that are now part of nuclear power plant design and evaluation. Several documents issued by the American Society of Mechanical Engineers, ASCE, and the Department of Energy and listed by the respondents (see Appendix F) confirm the advanced stage of risk assessment methods in the nuclear power industry. The fact that such methods have matured to the level of implementation may explain why there currently are fewer researchers working in this field.


Similarly, aspects of risk analysis, including the use of site-specific return periods and the consequences of flooding for surrounding communities, particularly downstream populations, have long been important ingredients in the design of hydraulic structures such as dams and levees and play a major role in design, as indicated by guidelines issued by the US Army Corps of Engineers (Appendix F). Efforts to develop more direct procedures for practical implementation of RBsEM have recently been renewed following Hurricane Katrina.

Likewise, recent disasters such as the collapse of the World Trade Center in New York City following the September 11 terrorist attack and the collapse of the I-35W Bridge in Minneapolis during its rehabilitation have reinvigorated research by several agencies, including NIST and the Federal Highway Administration, directed at implementing RBsEM in structural engineering practice when evaluating the safety of buildings and bridges that may be susceptible to disproportionate collapse following a local or partial failure.

The recent interest in formulating and applying risk-based methods for the design and assessment of infrastructure networks such as highways, electric and water distribution systems, and other utilities stems from the devastating impact on communities of extreme hurricanes such as Sandy and Katrina, as well as the aging of these infrastructure systems, which manifested during the 2003 northeast power outage. Interest in gas and oil pipelines is also related to the direct effect of leaks and fires on the safety and health of surrounding communities and their impact on the environment.

2.1.4 Pertinent Codes / Standards / Specifications / Guidelines

The respondents provided a list of standards, specifications, and guidelines that are pertinent to their research and engineering practice related to RBsEM (Appendix F). While most of the listed standards and guidelines are member-oriented reliability-based codes, implicit considerations of risk have long been in use in these sectors. Furthermore, decision makers and code writers in North America, Europe, and Asia are currently emphasizing the need to explicitly incorporate risk-informed methods to account for the consequences of a member’s failure on the integrity of the entire structural system and the consequences of failure for occupants, users, and affected communities.

One example is the latest version of ASCE Standard 7-16 (ASCE 2017a), which implements a performance-based design approach for buildings. A simplified version of the approach has been adopted for the seismic design of substructures in several bridge design and evaluation guides. Of particular interest are the more advanced risk-based procedures being proposed with the support of FEMA for the seismic design and assessment of buildings as part of NEHRP. Similar design guidelines are also being offered for other hazards, such as fire, wind, and floods, and ASCE is currently preparing standards for the mitigation of disproportionate collapse potential in buildings and other structures that include risk-based methods.


2.1.5 General Approaches for Risk Assessment and Risk Management

This question was formulated differently for researchers than for practicing engineers involved in implementing risk analysis and risk management. Researchers were asked for their recommendations on best practices for implementing risk-based methodologies, and practicing engineers were asked to describe how risk-based methodologies actually are used in practice. Their answers are reproduced in Appendix B. As expected, widely different responses were provided, even from respondents with similar backgrounds and interests. Nevertheless, there was generally a consensus on the main components of risk assessment, which could be divided into analysis of hazards, evaluation of structural vulnerability, and estimation of consequences.

In practice, probabilistic measures with different levels of accuracy are used for each step. Risk analyses have been performed at different levels, such as individual projects, portfolios, or entire programs. When possible, risk acceptance criteria are extracted from guidelines and regulations, with the understanding that some of these criteria may often be presented in deterministic terms such as safety margins. On the other hand, many consulting firms have developed their own methodologies for assessing the risk posed by hazards for which no specific guidelines were available. Other than typical structural-response-related criteria, quantifiable measures of risk used in practice have included repair and replacement costs, casualties, and downtime. Risk mitigation strategies are usually recommended to the owners and decision makers based on either experience or cost–benefit analyses.

Researchers have emphasized that performance goals must be defined before initiating the analysis process. Ideally, risk assessment should include multiple pertinent hazards and account for the effect of time, so that risk is evaluated over the entire life cycle, considering safety, functionality, and serviceability, as well as aging and durability. The effects of prevention, protection, routine maintenance, retrofitting, and rehabilitation should also be included. It is recommended that a formal multicriteria optimization be adopted in the decision-making process. Issues were raised concerning the importance of the relation between individual structures in a portfolio and the interdependencies between different types of networks, given current interest in expanding the scope of disaster mitigation efforts to establish community resilience as a primary target. Therefore, establishing risk criteria at the community level and securing efficient means for communicating risk to different stakeholders are emerging as new issues that need to be addressed.
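As one hedged illustration of the cost–benefit screening mentioned above, the sketch below ranks candidate mitigation strategies by a simple benefit–cost ratio. The strategy names, costs, risk estimates, and service life are hypothetical placeholders, not data from the survey responses.

```python
# Illustrative benefit-cost screening of risk mitigation strategies.
# All strategies, costs, and annual risk figures are hypothetical placeholders.

baseline_annual_risk = 250_000   # expected annual loss of the unmitigated system ($/yr)
service_life_years = 30          # horizon over which risk reduction is accumulated

strategies = {
    # name: (upfront cost in $, expected annual risk after mitigation in $/yr)
    "retrofit_columns":    (400_000, 120_000),
    "add_redundancy":      (900_000,  60_000),
    "increase_inspection":  (50_000, 220_000),
}

def benefit_cost_ratio(cost, mitigated_annual_risk):
    """Benefit = risk reduction accumulated over the service life (no discounting here)."""
    benefit = (baseline_annual_risk - mitigated_annual_risk) * service_life_years
    return benefit / cost

ranked = sorted(strategies.items(), key=lambda kv: benefit_cost_ratio(*kv[1]), reverse=True)
for name, (cost, annual_risk) in ranked:
    print(f"{name}: benefit-cost ratio = {benefit_cost_ratio(cost, annual_risk):.1f}")
```

A fuller treatment would discount future losses, carry the uncertainty in each estimate, and fold in the multicriteria considerations (safety, functionality, serviceability, resilience) emphasized by the researchers.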

2.1.6 Risk Analysis Team Structure

As pointed out in the responses listed in Appendix B, in actual engineering practice the risk analysis team is generally composed of one to four people led by a senior engineer, often with the support of junior engineers who may assist in the collection of field data and the execution of analyses. The risk assessment results are then communicated to the stakeholders by the senior engineer.


When a larger team is involved, each team member joins a group in charge of a separate task based on his or her expertise. For example, for the probabilistic risk assessment of nuclear plants, the team is composed of experts in seismic and flooding hazards; fire experts; groups of structural, mechanical, electrical, and hydraulic engineers; reliability analysts; risk analysts; and a group of system analysts. Good communication between the different groups is essential for an effective risk assessment process.

According to researchers, an ideal risk assessment team should at a minimum include the following expertise:

• Engineers who are experts in probabilistic modeling and assessment of hazards, who may seek assistance from climate scientists, hydrologists, and geoscientists to help consider environmental, geological, and seismic hazards, as well as experts in security and relevant accidental hazards who would help assess human-made hazards;
• Engineers who are experts in component and system modeling and vulnerability assessment, who may be assisted by materials scientists to help assess deterioration processes and time-dependent changes in component behavior;
• Experts in statistical analysis and structural reliability assessment;
• Experts in evaluating consequences, including economists, environmental impact assessors, and social scientists; and
• System analysts.

The team members should be constantly updated by the facilitator on the status of the risk assessment process, and the final outcomes should be communicated by the lead engineer to the stakeholders and decision makers in nontechnical language.

2.1.7 Role of Respondents

Most of the respondents from industry and government agencies identified themselves as risk analysts (42%), followed by structural engineers (33%), team leaders (17%), and managers (8%). All the researchers who responded to the survey had backgrounds in probabilistic analysis, structural reliability, and risk assessment.

2.1.8 Risk Analysts Training Requirements

The survey results indicated that a formal introductory training program covering the basics of risk analysis is essential to complement technical seminars, workshops, and specialized training on the application of computer software pertinent to a particular subject, agency, or firm. Researchers unanimously agreed that a risk analyst should, at a minimum, be proficient in structural analysis, probability and statistics, structural reliability, and economics. Some researchers also believe that it is important for structural risk analysts to have basic knowledge of system analysis, decision making, and optimization techniques.


It was suggested that the natural environment for developing the necessary skills is graduate school, although it may be necessary to modify current civil engineering curricula to bundle existing civil engineering courses with courses in other fields in a consistent program of study that meets appropriate educational requirements.

2.2 RISK ASSESSMENT

Section II of the survey consists of 23 questions related to risk definition and quantification, as well as the types of hazards that are of most interest and the methodologies employed to estimate their effects on the pertinent structural systems. The questions can be divided into five groups: identification of hazards and methods to estimate their intensities; evaluation of structural performance under a given hazard or multiple hazards; consideration of structural deterioration over time; estimation of failure consequences; and risk quantification. The complete list is

II.1. Definition of risk
II.2. Risk evaluation for new and existing structural systems
II.3. Risk acceptance criteria for new and existing structural systems
II.4. Design life for new structural systems
II.5. Service life for existing structural systems
II.6. Frequency of inspection, structural capacity assessment, and repair
II.7. Most pertinent hazards
II.8. Likelihood/probability of hazard occurrence
II.9. Hazard occurrence rates
II.10. Hazard intensity levels
II.11. Quantitative versus qualitative hazard intensity
II.12. Structural component/system deterioration
II.13. Routine inspection and maintenance
II.14. Performance and damage levels
II.15. Performance and damage measures
II.16. Structural analysis approach
II.17. Probability of structural failure
II.18. Consequences of structural failure
II.19. Combination of multiple consequences of structural failure
II.20. Risk quantification
II.21. Risk estimation from historical damage and cost data
II.22. Risk communication
II.23. Risk acceptance criteria.


The responses to these questions are summarized and discussed below, and details are provided in Appendix C.

2.2.1 Definition of Risk

Both practicing engineers and researchers were asked to comment on the following definition of risk, which was provided in the survey:

Risk assessment involves the quantification of risk that is defined as the product of the probability of failure and the consequence of failure for systems subjected to different hazards or combinations of hazards. Estimation of the probability of failure involves the evaluation of (a) the probability of occurrence of a particular type of hazard or a combination of hazards, (b) the maximum intensity of the hazards that the system is expected to be exposed to within its service life, and (c) the probability that the system will exhibit a particular level of damage/local failure/collapse should the hazards take place. Evaluating the consequence of failure requires the assessment of (a) the cost of maintenance/repairs/replacement of the system; (b) user costs and life safety; (c) the failure’s impact on local and regional economic activity/society/environment; and (d) the political/civic/morale ramifications to the affected communities.

Both practicing engineers and researchers generally agreed with this definition. However, respondents noted that it essentially considers each damage/failure level individually. Instead, a respondent suggested that risk should be considered as an integration that encompasses the entire spectrum of hazard (or event) intensities and all sources of risk, all resulting damage/failure levels, and all associated consequences. Some respondents believe that the large uncertainties in identifying the consequences and establishing their monetary measures should be considered in a comprehensive probabilistic model, where the consequences should have a very wide definition to encompass a large range of tangible and intangible consequences. Furthermore, it is important to account for the accumulated damage over time from low-intensity sources and regular operation during the design life of new structures, as well as the service life of existing ones.

Also, it was suggested to replace the word “failure” with a more encompassing term such as “exceeding performance thresholds” and the word “hazard” with a more general term such as “source of risk,” although the words “threat” and “event” have also been used by some respondents. It was also noted that the consequences must include societal and ecological impacts that are important components of resilience and sustainability and that the reference to “local and regional” consequences should be generalized to “multiple spatial and temporal scales.” Questions were raised regarding the treatment of risk due to undefined events, or events that can be defined but not quantified, such as those related to security threats. It was suggested that life safety and public health be foremost in the list of consequences.

Several respondents mentioned additional consequences, such as impacts on emergency responders and security officers, as well as the cost of investigations and legal actions. It was noted that, taken in its most general sense, “risk” should embrace the emerging concepts of resilience (effects on communities) and sustainability (effects on future generations and the ecological well-being of the planet).

A few respondents suggested that a much more general definition be used, for example, “Risk can be considered as the effect of uncertainty on objectives.” Objectives can have very wide aspects (such as financial, health and safety, and environmental goals) and can apply at different levels (such as strategic, organization-wide, project, product, and process). Another general definition can be stated as “Risk is the assessment of what can go wrong” (or “what is the likelihood of its going wrong”) and “what are the consequences of that,” while risk management is concerned with answers to the questions “What can be done and what options are available?” “What are the trade-offs in terms of costs and benefits?” and “What are the impacts of the current decision on future options?”
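For a single system, the working definition quoted above is often written compactly as a sum over hazard intensities and damage states. The notation below is a generic sketch of that decomposition, not a formulation endorsed by the survey respondents:

```latex
R \;=\; \sum_{h} \sum_{d} \nu(h)\, P\!\left(D = d \mid H = h\right)\, C(d)
```

Here $\nu(h)$ is the annual rate (or probability) of occurrence of hazard intensity $h$, $P(D = d \mid H = h)$ is the conditional probability that the system reaches damage state $d$ (typically obtained from fragility analysis), and $C(d)$ is the monetized consequence of damage state $d$. The respondents’ suggestion to treat risk as an integration over the entire hazard spectrum corresponds to replacing the outer sum with an integral over the hazard curve.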

2.2.2 Risk Evaluation for New and Existing Structural Systems
The results summarized in Figure 2-2 show that 67% of practicing engineers evaluate risk for the design of new systems, and 87% of them evaluate it for existing systems; 91% of researchers are interested in evaluating risk for new systems, and 73% deal with existing systems. The emphasis of the industry on existing structures may be due to design codes and standards being generally well established for most types of new structures and systems. Given the state of our aging structures and infrastructure systems, the challenge is thus in establishing acceptance criteria for existing systems. On the other hand, more researchers may be engaged in updating currently used design standards, which could have the most direct impact on new designs.

Figure 2-2. Percentage of respondents who evaluate new and existing structures using risk-based methodologies.

2.2.3 Risk Acceptance Criteria for New and Existing Structural Systems
The responses summarized in Figure 2-3 show that most of the practicing engineers (67%) use different criteria for the design of new systems and the evaluation of existing ones, although 69% of the researchers were of the opinion that there is no need for different criteria for the design of new systems and the evaluation of existing ones. This difference between the practitioners and the researchers may be explained as follows. Theoretically, there should be no difference between the risk acceptance criteria for new and existing structures if the effect of remaining service life is accounted for in the risk assessment process. On the other hand, lower acceptance criteria for existing structures may be justified based on current practice, which reflects the higher costs of replacing existing structures or rehabilitating them to a higher performance level as compared to the additional costs needed to build them to these higher standards in the first place. This is especially true under current budgetary constraints, where outlays for upgrading and maintaining our aging infrastructure stock are very limited. Different standards for new and existing bridge structures have been the norm in US bridge engineering practice for many years.

It should be noted that, in principle, a properly executed risk analysis process should explicitly reflect such differences in the cost of upgrading versus the cost of new designs while keeping the same overall risk acceptance criteria.

2.2.4 Design Life for New Structural Systems
Both practicing engineers and researchers indicated that they use the design life customarily accepted in their field or the actual design life specified in the relevant standards and codes.

Figure 2-3. Percentage of respondents who favor using different criteria for existing systems and design of new systems.

Figure 2-4. Most commonly used structural design lives for structural analysis.

The most frequent design lives listed include 100 years (34%) for designing new elements of infrastructure systems and networks, such as hydraulic structures, electric grids, and bridges in European countries; 50 years (28%) for buildings; 75 years (15%) for US and Canadian bridges according to the current specifications and standards; 150 years (9%) for tunnels; and 20 years (6%) for bridge decks (Figure 2-4). The survey results also indicated that for special structures, such as dams or bridge foundations subjected to scour erosion due to floods, new structural designs are checked for floods with both 100-year and 500-year return periods in association with different criteria for safety and serviceability, although the design hazard’s return period does not necessarily reflect the structural design life.

While design lives are usually specified by design codes or the owners before the system is actually constructed, it is often observed that the specified design life and the actual life are not necessarily the same. For example, many buildings remain in operation for longer than 50 years, while several dams are being removed before the ends of their design lives. End of life can be determined based on risk acceptance criteria related to loss of functionality, observed damage, probability of failure, life safety, environmental impact, and economic returns.
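The distinction between a design hazard’s return period and the structural design life can be made concrete with the standard relationship between the two, assuming independent annual maxima. The sketch below is illustrative only; the numbers are not taken from any particular code.

```python
def exceedance_probability(return_period_years: float, design_life_years: int) -> float:
    """Probability that a hazard with the given mean return period is exceeded
    at least once during the design life, assuming independent annual maxima."""
    annual_p = 1.0 / return_period_years
    return 1.0 - (1.0 - annual_p) ** design_life_years

# A 100-year flood has roughly a 53% chance of being exceeded at least once
# during a 75-year design life; a 500-year flood, roughly a 14% chance.
print(round(exceedance_probability(100, 75), 3))   # ~0.529
print(round(exceedance_probability(500, 75), 3))   # ~0.139
```

This is why checking a 75-year bridge design against a 500-year flood still leaves a nonnegligible lifetime exceedance probability, and why the return period should not be read as a statement about the design life itself.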

2.2.5 Service Life for Existing Structural Systems
Many practicing engineers indicated that they do not estimate the remaining service life of existing structural systems unless the structure is prone to cyclic fatigue damage, in which case they use structure-specific guidelines. Instead, existing structures are evaluated based on site inspections and nondestructive field tests to determine the structure’s current condition and to ascertain its current level of structural safety in comparison with the required safety level for new structures or, when available (such as for US bridges), with the safety level specified for existing structures. Engineers in government agencies advocated the use of advanced analyses, such as asset management programs with time-dependent reliability analyses and Markov-chain processes, to estimate the time at which structural members reach unacceptable levels of deterioration or compromised structural reliability.

Such an approach is already implemented in commonly used bridge management systems such as AASHTOWare, previously known as Pontis.

The answers to this question from the researchers varied. Some supported the use of deterioration rates, projection models, and Markov-chain processes to estimate the changes in structural reliability and safety over time. Others suggested that mathematical models should be replaced, or at least augmented, by estimates of actual structural conditions obtained through in situ and core testing of construction materials and component properties. The updated data can then be used to perform numerical simulations that help estimate the load-carrying capacity of the structural system and its ability to withstand regular, design, or extreme loads at a given point in time.

It was also pointed out that the remaining useful service life of existing structures should be determined based on risk acceptance criteria that account for loss of functionality, observed damage, probability of failure, life safety, environmental impact, and economic returns. A risk-based cost-benefit analysis that compares the cost of replacement with the costs of maintenance and rehabilitation, as well as user costs and societal and environmental losses, would help determine the end of life in many cases.
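As a minimal sketch of the Markov-chain projection mentioned above, the following propagates a component condition-state distribution forward in time. The four condition states and the one-year transition probabilities are hypothetical, not values taken from AASHTOWare or any agency database.

```python
import numpy as np

# Hypothetical one-year transition matrix for condition states 1 (good) to
# 4 (poor); each row sums to 1 and state 4 is treated as absorbing.
P = np.array([
    [0.92, 0.08, 0.00, 0.00],
    [0.00, 0.90, 0.10, 0.00],
    [0.00, 0.00, 0.85, 0.15],
    [0.00, 0.00, 0.00, 1.00],
])

state = np.array([1.0, 0.0, 0.0, 0.0])  # a new component starts in state 1

for year in range(1, 51):
    state = state @ P                    # propagate the distribution one year
    if year % 10 == 0:
        print(f"year {year:2d}: P(state 4, poor) = {state[3]:.2f}")
```

The year at which the probability of the poor state exceeds an agency-set threshold would then trigger an in-depth assessment or an intervention, which is the kind of time-to-unacceptable-condition estimate the respondents described.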

2.2.6 Frequency of Inspections, Structural Assessments, and Repair Actions
The most frequently reported system inspection interval is 24 months, with a minimum of 1 month and a maximum of 60 months. The most frequently reported time interval between two structural evaluations is also 24 months, with a range of 6 months to a maximum of 600 months. Although no information was provided as to how these different intervals relate to specific structural types, it is conjectured that the 600-month (50-year) evaluation period may reflect the fact that, unlike bridges, which are inspected on a biennial basis and whose structural capacities are rated at regular intervals, buildings are not required by building codes to undergo a regular evaluation of structural safety once they are open for use, unless warranted by observed severe damage or a change in building function. Finally, the most frequently reported time interval between routine maintenance operations and repairs is 12 months, with a minimum of 6 months and a maximum of 120 months.

2.2.7 Most Pertinent Hazards
The hazards identified as being of most interest to the respondents are summarized in Figure 2-5, which gives the percentage of respondents who cited a particular type of hazard. Seismic hazards were clearly emphasized, especially by the researchers. Researchers also considered deterioration almost as important as seismic hazards, followed by overloading, cyclic fatigue, and ship impact, and then severe weather-related hazards such as floods, hurricanes, tornados, ocean waves, and ice pressure. Fire, blast, and malfunction were also listed by several researchers.

Except for floods, which were highly cited, followed by seismic events, practicing engineers placed similar emphasis on a wide range of hazards, with slightly higher emphasis on overloading, fatigue, deterioration, and human error.

Figure 2-5. Types of hazards of most interest to respondents.

2.2.8 Likelihood/Probability of Hazard Occurrence
Figure 2-6 summarizes the survey results, which indicated that the likelihood of occurrence of relevant natural hazards is best evaluated through code-specified maps. Occasionally, government agencies and specialized consulting firms develop their own extreme-hazard assessment tools based on historical data for natural hazards or on the judgment of owners and experts regarding human-made hazards, whether these are accidental or malicious in nature.

Figure 2-6. Methods for estimating likelihood of hazard occurrence.

The lesser emphasis placed by industry and agency respondents on code maps may reflect their concern with human-made hazards, which generally cannot be represented on maps.

2.2.9 Hazard Occurrence Rates
The respondents can be divided into two groups: one believed that hazard occurrence rates should be estimated using quantitative measures, while the other preferred qualitative rankings on the grounds that they are more understandable to practicing engineers and owners. Some practicing engineers indicated that they rely on their own experience or on panels of experts in a Delphi technique, while others use either numerical or ranking approaches, depending on the hazard and the type of analysis of interest. Researchers mentioned that statistical methods such as Bayesian updating models can be used to link the quantitative and qualitative approaches. The difficulty of assessing occurrence rates for hazards that may be affected by climate change or by growth in population and economic activities is an issue of concern. The specific answers of the respondents are available in Appendix C.

2.2.10 Hazard Intensity Levels
The respondents indicated that the intensity of the relevant natural hazards is best evaluated through code-specified maps. As mentioned earlier for occurrence rates, government agencies and specialized consulting firms occasionally develop their own extreme-hazard assessment tools based on historical data for natural hazards or the judgment of owners and experts. Judgment and historical data are most commonly used when addressing human-made hazards, whether these are accidental or malicious in nature. The survey results are presented in Figure 2-7, which gives the percentage of engineer and researcher respondents who endorsed a specific option for estimating hazard intensities for new or existing structures. Practicing engineers tend to lean more toward the use of engineering judgment than researchers, who prefer more objective tools for the evaluation of hazard intensity.

It should be noted that hazard analysis procedures have been extensively used to develop code-specified or project-specific hazard models based on historical data. To complement historical data, the intensity of climate hazards (aging agents, temperature, precipitation, ice, floods, and wind) could be represented using projection models that are based on different climate change scenarios, such as those identified by the Intergovernmental Panel on Climate Change, with the understanding that the intensity of climate hazards will be nonstationary. The nonstationary nature of climate-related as well as other economic and population-growth-related hazards, combined with the uncertainties of modeling future changes, adds to the complexity of the hazard analysis process when compared to the analysis of stationary processes adopted in many existing standards and codes.

Figure 2-7. Methods of hazard intensity estimation.

2.2.11 Quantitative versus Qualitative Hazard Intensity
The respondents generally favored an objective numerical evaluation of hazard intensities through the use of probabilistic models, whenever possible, over qualitative ranking. Some respondents suggested a two-step approach, where a first pass could execute the risk assessment using a subjective ranking of hazard intensities, and the second step would update the first step’s results by a more quantitative analysis using numerical values. The answers of the respondents are available in Appendix C.

2.2.12 Structural Component/System Deterioration
All the respondents agreed that structural component/system deterioration must be considered in risk analysis. The effects of such deterioration can best be assessed by high-quality inspection programs using visual inspection, nondestructive evaluation, and destructive testing. Two important challenges were identified: extracting quantitative parameters and material characteristics from visual inspection for implementation in structural analysis models, and the scarcity of reliable long-term deterioration models that consider real in-service exposure to combinations of environmental attacks and service loading and account for their large variations over time. The latter issue arises because many existing deterioration models are based on experimental investigations executed on small-scale models under accelerated testing conditions. To help alleviate the first issue, it is suggested that visual inspections be complemented with experience and followed up with more in-depth quantitative evaluations through in situ measurements and core testing.

Given these issues, most researchers recommended probabilistic analytical techniques such as Bayesian updating, Markov-chain processes, and life-cycle performance evaluation to account for the effects of structural component/system deterioration.

The actual responses are provided in Appendix C.

2.2.13 Routine Inspection and Maintenance
In general, practicing engineers did not directly account for the effects of routine maintenance in their analysis. Instead, they relied on routine inspection to evaluate the current state of a structural system. Researchers, on the other hand, were generally more interested in the life-cycle performance of a structural system and agreed that results from routine inspections and types and schedules of maintenance should serve to update the properties of the materials and project how component and system behavior will evolve over time for use during the structural analysis and risk assessment processes. But accurate approaches for incorporating the effects of routine inspection and maintenance have yet to be developed. The answers of the respondents are provided in Appendix C.

2.2.14 Performance and Damage Levels
The respondents emphasized that specific performance levels should depend on the type of system and the hazards being analyzed. In most cases, practicing engineers checked the performance and damage levels specified in appropriate structural design and evaluation codes and standards, as well as those implemented in widely used software such as Hazus. For buildings and bridges, appropriate limit states would include no damage, serviceability, functionality, structural component failure, life safety, and structural collapse. For nuclear power plants, the checks are related to core damage, large early release, and potential health effects. Generally, performance checks should cover geotechnical and structural limit states to ensure strength, utility, and operability, as well as occupant and public safety.

In generic terms, researchers suggested that structural performance levels can be classified as no damage; limited damage, including limited cracking calling for cosmetic repairs; significant damage, including excessive cracking, requiring structural repairs to restore pre-event strength and stiffness; substantial damage, such as permanent deformation and loss of load-carrying capacity, requiring replacement of components; and collapse. The consideration of different classes of consequences, including resilience, the ability to recover, and post-disaster functionality, was also mentioned, especially in relation to lifeline systems. Suggested performance measures include the reliability index or the expected failure rate per unit time, together with the expected damage cost. Reference was also made to the consequence classes defined in EN 1991-1-7 Eurocode 1 (CEN 2001). The full set of answers is provided in Appendix C.

2.2.15 Performance and Damage Measures
The survey results indicated that approaches to performance and damage assessment depend on the type of structural system and the purpose of the evaluation. For example, if damage is being evaluated in the aftermath of a major event (e.g., an earthquake) or after years of wear and tear in service (e.g., deterioration), the evaluation results should be combined with experience and historical data to determine the expected post-event performance level of the structure of interest.

Otherwise, expected damage and structural performance could be obtained through deterioration models (which, as mentioned earlier, may still require improvements) and structural analysis. When structural analysis is performed, expected damage to buildings and bridges can be estimated based on structural analysis output such as deformations, which may consist of strains, deflections (including inter-story drift), rotations, and curvatures (including those of plastic hinges in main structural members). These can be linked to physical damage such as crack size, concrete spalling, steel bar buckling, and permanent deformation. For other infrastructure systems, such as power distribution networks, water and wastewater pipelines, natural gas lines, telecommunication grids, and highways, the metrics for performance can be expressed in terms of the percentage of demand that is met. Fragility analysis is often used to quantify the probability of reaching a certain damage level for a given hazard intensity.

The respondents noted that the performance of a structural system is primarily affected by direct damage to its structural components. But it can also be compromised by nonstructural factors such as shutdowns or partial closures undertaken for routine maintenance or for repair of nonstructural components. Also, the performance of one system can be compromised by damage to another system because of the interdependencies that may exist between systems in the same infrastructure network and between different types of networks.
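A common fragility form is a lognormal cumulative distribution function in the hazard intensity measure. The sketch below illustrates the idea with hypothetical parameters; the median capacity and dispersion are placeholders, not values from Hazus or any code.

```python
from math import log
from scipy.stats import norm

def fragility(im: float, median_capacity: float, beta: float) -> float:
    """Lognormal fragility curve: probability of reaching or exceeding a
    damage state given an intensity measure `im` (e.g., spectral acceleration),
    a median capacity, and a logarithmic standard deviation `beta`."""
    return norm.cdf(log(im / median_capacity) / beta)

# Hypothetical component: median capacity 0.6 g, dispersion 0.5.
# Probability of exceeding the damage state at Sa = 0.4 g:
print(round(fragility(0.4, 0.6, 0.5), 2))   # ~0.21
```

Evaluating such curves over a set of intensity levels, and pairing each damage state with a consequence measure, is the step that links the structural-analysis outputs described above to the risk quantification discussed in Section 2.2.20.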

2.2.16 Structural Analysis Approach
Figure 2-8 gives the percentage of respondents in each category who apply a particular type of analysis. The figure provides a snapshot of the structural analysis approaches used by responding researchers and practicing engineers to estimate structural performance and damage. Researchers generally recommended using the most advanced structural analysis tools, such as nonlinear dynamic models, to provide the greatest possible accuracy, even though the difficulties in implementing such complex computer models and interpreting the simulation results are well recognized. For this reason, simpler tools, such as linear static analysis and even more simplified approximate analysis, were favored by practicing engineers. It is understood that these simplified approaches can provide important stop-gap measures for analyzing the behavior of structures under applied hazards and can serve as a first-level screening when associated with appropriate safety factors.

Figure 2-8. Structural analysis approaches for performance evaluation.

2.2.17 Probability of Structural Failure
The responses shown in Figure 2-9 indicate that practicing engineers are more inclined to rely on historical projections (with 83% citing their use of such methods) and engineering experience (75%) to estimate the probability of structural failure, as compared to numerical simulations and fragility curves, both of which are favored by 77% of researchers. Probabilistic analyses with numerical simulations and fragility curves require special training and tools, including specialized software and statistical models for the relevant random variables. Therefore, it is understood that the results of numerical simulations should be verified for consistency with experience and historical data and that statistical methods should be used to verify the comparability of the two approaches.

Figure 2-9. Methods of estimating the probability of structural failure.
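For readers unfamiliar with the simulation route, the following is a minimal Monte Carlo sketch for a single limit state g = R - S. The lognormal resistance, the normal load effect, and their parameters are hypothetical placeholders rather than calibrated models.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(1)

# Hypothetical limit state g = R - S for one failure mode.
n = 1_000_000
R = rng.lognormal(mean=np.log(650.0), sigma=0.10, size=n)  # resistance (kN)
S = rng.normal(loc=350.0, scale=60.0, size=n)              # load effect (kN)

pf = np.mean(R <= S)                      # estimated probability of failure
beta = -NormalDist().inv_cdf(pf)          # equivalent reliability index
print(f"Pf ~ {pf:.1e}, beta ~ {beta:.2f}")
```

In practice, estimates of this kind would be convolved with a hazard curve or replaced by fragility-based formulations and, as noted above, checked against experience and historical data.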

2.2.18 Consequences of Structural Failure
Risk assessment requires the enumeration and quantification of the consequences of failure. Figure 2-10 shows the weights placed by the respondents on the types of consequences that are considered important in risk assessment.

Figure 2-10. Consequences of structural failure.

Researchers considered all the listed consequences important, with a minimum weight of 53% assigned to political implications and a high of 82% assigned to the replacement cost of the system. Practicing engineers considered loss of life most important, giving it a weight of 67%, followed by property damage and cost of repairs (both at 50%). The cost of replacing the system was rated as less important by practicing engineers on the presumption that most types of damage to components can be repaired without replacing the entire system. Furthermore, practicing engineers assigned the lowest weight (33%) to indirect consequences, including resiliency, environmental impact, and political implications, even though these issues have been gaining relevance over the last few years as important components of comprehensive risk analysis processes owing to experiences with recent disasters.

2.2.19 Combination of Multiple Consequences of Structural Failure
The consensus of the survey results was that all the losses and consequences should be monetized whenever combining them can help provide a global picture of the consequences of structural failure. Through monetization, various types of consequences are represented using a common unit, which simplifies their combination when applicable. It is noted, however, that monetization entails large uncertainties. Furthermore, the difficulty of assigning a dollar value to the loss of life, injury, pain and suffering, or intangible consequences such as political impact or reduced morale has long been recognized.

2.2.20 Risk Quantification
There was a wide range of responses to the question of how one should combine the probability of failure and consequences to express or quantify risk. Most responding practicing engineers present risk analysis results using a matrix of hazard intensities versus consequences, with subjective measures of risk such as “low,” “medium,” “high,” and “very high.”

It was indicated that the use of numerical values helps in calculating risk as the product of exposure, hazard, and vulnerability, with a premium on uncertainty, to assemble risk matrices for all the limit states, which are then combined to provide an overall assessment of risk. The presentation of the final matrix in qualitative terms is popular for communicating the level of risk to stakeholders and decision makers who may not have specific knowledge of risk assessment methodologies or structural engineering. When used, probabilistic computations are based on event trees and fault trees. In that case, some engineers define risk in terms of mean estimates of each loss type (e.g., deaths, dollars, and downtime). For example, in the nuclear industry, risk is presented in terms of the frequency of accidents that cause a certain level of damage.

Researchers favored a full-fledged integration of the probability of structural failure and the associated consequences to quantify risk. Results of the analytical methods can be used to calibrate the subjective risk matrices for specific hazards and structural systems, which several respondents said are still needed in communicating with stakeholders. Some researchers advised against combining all consequences into a single measure, preferring the use of multiple measures or the application of weighting factors that reflect the relative importance placed by stakeholders on various consequences, as well as the consideration of asset resilience.

The respondents also highlighted important challenges in risk assessment, including the treatment of modeling uncertainties; the estimation of consequences of structural failure, especially for indirect losses; and the integration of risk attitudes and perceptions into risk formulation. Some of these challenges could be addressed using structured risk assessment methods such as minimum life-cycle cost analysis, decision modeling and expected utility theory, cumulative prospect theory, the life quality index, the capability-based approach, or failure modes and effects analysis. The full responses to this survey question are provided in Appendix C.
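A minimal sketch of the qualitative matrix described above is given below. The category labels and cell ratings are hypothetical; in practice they would be calibrated against quantitative risk estimates for the specific hazard and system, as the researchers recommend.

```python
# Hypothetical qualitative risk matrix: likelihood category x consequence
# category -> risk rating. Labels and ratings are illustrative only.
LIKELIHOOD = ["rare", "unlikely", "possible", "likely"]
CONSEQUENCE = ["minor", "moderate", "major", "severe"]

MATRIX = [
    # minor     moderate   major        severe
    ["low",     "low",     "medium",    "medium"],     # rare
    ["low",     "medium",  "medium",    "high"],       # unlikely
    ["medium",  "medium",  "high",      "very high"],  # possible
    ["medium",  "high",    "very high", "very high"],  # likely
]

def risk_rating(likelihood: str, consequence: str) -> str:
    """Look up the qualitative risk rating for a likelihood/consequence pair."""
    return MATRIX[LIKELIHOOD.index(likelihood)][CONSEQUENCE.index(consequence)]

print(risk_rating("possible", "major"))   # -> "high"
```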

2.2.21 Risk Estimation from Historical Damage and Cost Data
There was general agreement among the respondents that historical damage and cost data are highly valuable if properly used, that is, by considering potential changes over time in population size, economic activity, inflation, discount rate, and productivity gains, as well as changes in construction standards, material quality, and repair technology, while accounting for the associated uncertainties and model errors. It was noted that historical damage data have been used to calibrate empirical fragility curves and other risk assessment tools and guidelines.

Researchers emphasized the probabilistic nature of damage and cost models, which can be updated with historical damage and cost data using Bayesian methods and expert opinion elicitation. The researchers also emphasized the importance of carefully verifying the compatibility of past data with the conditions of the systems of interest and of recognizing inherent biases in data collection practices, which may focus on the assessment of damaged components and structures, often ignoring the large numbers of undamaged structures.

Limitations in the data available for particular hazards and specific intensities may preclude the direct applicability of such empirical data in many cases, although the data can still be used for relative comparisons.
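The Bayesian updating mentioned above can be illustrated with a simple conjugate (Beta-Binomial) sketch for the probability that a component class reaches a given damage state at a specified hazard level. The prior parameters and the observation counts are hypothetical; note that the update uses both damaged and undamaged outcomes, which is one way to counter the collection bias noted by the researchers.

```python
from scipy.stats import beta

# Hypothetical prior (e.g., from expert elicitation): mean ~0.10.
a_prior, b_prior = 2.0, 18.0

# Hypothetical historical observations at the hazard level of interest.
damaged, undamaged = 3, 47

# Conjugate Beta-Binomial update.
a_post = a_prior + damaged
b_post = b_prior + undamaged

print("posterior mean:", round(a_post / (a_post + b_post), 3))         # ~0.071
print("90% credible interval:", beta.ppf([0.05, 0.95], a_post, b_post))
```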

2.2.22 Risk Communication
All the respondents concurred that risk assessment results should be presented using visual tools whenever possible, especially when communicating risk to nontechnical stakeholders. Various combinations of GIS and heat maps, loss exceedance curves, color-coded matrices, and decision trees are commonly used for different hazard scenarios and consequences. The presentation of risk assessment results should highlight geographical and temporal differences to help stakeholders, decision makers, and the public understand different risk scenarios in as easy a manner as possible.

2.2.23 Risk Acceptance Criteria
The survey results indicated that a formal analytical approach for establishing risk acceptance criteria can take the form of a cost-benefit evaluation that would include life-cycle cost optimization, among many other risk analysis techniques. In practice, however, risk acceptance criteria are considered the purview of the owners and regulators, although engineers have a role in interpreting risk assessment results for the decision makers. For commonly analyzed hazards, several respondents noted that acceptance criteria may have already been set (explicitly or implicitly) in standards and codes of practice and can be extracted when a more direct risk analysis process is undertaken.

When risk analysis is performed for special structures and hazards, acceptance criteria should be established after the consideration of different scenarios and should be set in terms of different measures and consequences. Thus, acceptance criteria may be adjusted after examining risk assessment results, taking into consideration budget constraints as well as regional, societal, environmental, and political implications. The respondents also recommended considering the public’s risk tolerance in similar situations (e.g., past disaster events) or in comparison to other risks accepted by society (e.g., ordinary activities in everyday life). Ultimately, the process of establishing risk acceptance criteria should involve the public and all stakeholders. Actual responses are provided in Appendix C.
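One common formalization of the cost-benefit and life-cycle cost optimization mentioned above is sketched below. It is a generic illustration, not a formulation prescribed by any of the cited standards: the alternative x (a design or intervention option) is chosen to minimize the expected discounted life-cycle cost.

```latex
\min_{x}\; E\!\left[C_{LC}(x)\right] \;=\; C_{0}(x) \;+\; \sum_{t=1}^{T} \frac{C_{M}(x,t) \;+\; \sum_{d} P_{d}(x,t)\, C_{d}}{(1+r)^{t}}
```

Here $C_0$ is the initial cost, $C_M(x,t)$ the expected inspection and maintenance cost in year $t$, $P_d(x,t)$ the annual probability of reaching damage state $d$, $C_d$ its monetized consequence, $r$ the discount rate, and $T$ the service life. The optimization is typically constrained so that annual failure probabilities remain below thresholds set by owners or regulators, which is where the acceptance criteria discussed above enter the formulation.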

2.3 RISK MANAGEMENT

Section III of the survey consists of questions on four categories related to risk management practices:

III.1. Risk mitigation strategies
III.2. Prioritization of risk mitigation strategies
III.3. Flow of risk information
III.4. Risk communication to the public.

2.3.1 Risk Mitigation Strategies
The respondents were asked to identify circumstances under which they would apply any of the following five major potential action groups to mitigate the risk:

• Reduction of exposure to the hazards
• Reduction of the vulnerability of components
• Introduction of structural redundancies
• Reconfiguration of the entire system layout
• Reduction of the consequences of failure.

The respondents indicated that risk mitigation strategies are very project-specific and that the final decisions are often a set of actions that belong to a combination of these five groups. Examples of different actions that have been used in the past or could be undertaken were presented by several respondents.

Cited examples of reducing exposure to hazards or the intensity of individual hazards include the implementation of security measures to restrict access to the structure and reduce the possibility of dynamic impacts; shielding vulnerable parts or the entire system (e.g., by fireproofing and corrosion protection); avoiding construction near seismic faults or flood plains; installation of flood walls; changing structural geometry, such as elevation; and placing restrictions on human-made loads and the number of load cycles.

Vulnerability can be reduced by structural hardening and strengthening; improving the ductility of components; implementation of dampers, bracings, and base-isolation systems; installing vulnerable components in less exposed parts of the system; protecting and shielding vulnerable components; robust detailing to avoid progressive collapse; and proper maintenance and rehabilitation of deteriorating components.

Examples of how redundancy can be improved include adding elements to the system or duplicating some components; using ductile structural components to help redistribute the load and provide reserve system strength; ensuring the presence of alternative load paths; geographical distribution of system and network components; and creation of multiple and independent layers of defense and protective systems.

While reconfiguration of new structures might be an efficient strategy to mitigate risk at the design stage, the respondents recognized that this is difficult for existing systems. Systems may be configured to provide redundancies and alternate load paths as well as to reduce member vulnerabilities using approaches such as those listed above.

Finally, both practicing engineers and researchers agreed that the consequences of failure should be reduced when possible.

This could be achieved by different means, depending on the type of threat and system, such as isolating and protecting the systems; using early warning systems; emergency planning and preparation of the population, including the setting up of evacuation plans; setting up alternative networks to offset downtimes; and generally improving infrastructure resilience, including plans for quick recovery.

2.3.2 Prioritization of Risk Mitigation Strategies
Cost-benefit analysis was the most commonly recommended approach for risk management prioritization. Some researchers also recommended the use of multiobjective decision-making techniques. Experience from previous events and a review of capabilities were also listed as important when considering alternatives.

2.3.3 Flow of Risk Information
Generally speaking, the respondents favored the involvement of as many groups as possible to ensure a collaborative effort. This may involve a feedback process whereby information from the engineers, hazard analysts, modelers, economists, and risk analysis experts moves up the chain of command to the project managers and then on to the owners, stakeholders, and decision makers, who might request additional information about possible alternative scenarios and thus a repeat of the cycle. Several respondents emphasized the importance of involving the concerned communities and the public early in the process to ensure success and acceptance of final decisions.

2.3.4 Risk Communication to the Public
The communication of risk can be categorized as either internal or external. Members of risk working groups share information internally using technical terms and explain it to owners, stakeholders, and decision makers. There was general agreement among the respondents that it is very difficult to communicate risk to the general public because of the likelihood that risk assessment results will be misinterpreted. Therefore, the respondents felt that it is the responsibility of “policymakers” and “government agencies” (rather than the “engineering community”) to communicate risk to the public using simple terminology through outreach programs, including public hearings (e.g., town hall meetings, community boards), traditional media (e.g., newspapers, radio, and TV), and web-based social media (e.g., Facebook, Twitter).

There also was general agreement that risk is best presented to the public through examples and monetary comparisons, using previous disaster events as points of reference, similar to the way that insurance premiums reflect risks for daily life, private ownership, and business activities. The answers of the respondents are available in Appendix D.

2.4 SUGGESTIONS FOR IMPROVEMENTS

In the last part of the survey, the respondents were questioned on which components of the risk analysis process are in most need of improvement. The questions surveyed were

IV.1. Overall assessment
IV.2. Weak links
IV.3. Short-term plans and immediate needs for improvements
IV.4. Priorities for near-term improvements.

2.4.1 Overall Assessment
Some of the recognized characteristics of risk-based methods, as expressed by the respondents, are summarized in the following paragraphs.

One respondent described the advantages of risk-based assessment of structure and infrastructure systems this way: “By responding to questions such as (1) What can go wrong? (2) What are the consequences? and (3) How likely are these to happen? ... risk assessment can help identify weaknesses that may not be apparent in traditional deterministic design processes.” Other respondents explained that risk-based methods help classify systems within a spectrum rather than the traditional deterministic binary classification of safe and unsafe conditions. This enables the application of various action options for one system as well as prioritization of different systems within a network, although it is understood that the implementation of the process on a network level remains an important challenge.

Although the concepts and methodologies have long been established, the implementation of risk analysis processes is still in its “infancy,” and it is advancing at different stages in different structural and infrastructure engineering sectors. This may be due to the lack of data and the complexity of implementing the methodologies for certain hazards and multihazard situations. As noted by the respondents, risk assessment can be performed at different levels, ranging from simple to complex analysis methods. As the field progresses, it is important to verify that the simple approaches lead to risk rankings consistent with those from more advanced complex analytical approaches. Also, one should note that actual risk may often be different from perceived risk, and it is important to bring these together through appropriate calibration and educational activities.

2.4.2 Weak Links
The respondents identified a list of issues and challenges in current risk analysis methodologies that still require attention. These are grouped into six categories: modeling of hazards, probabilistic analysis, consequences, data limitations, risk management, and support.

Modeling of hazards
• Modeling nonstationary hazards, considering climate change.
• Methods for considering the tails of extreme events and low-probability, high-consequence hazards.

• Modeling the long-term behavior of new and commonly used traditional construction materials.
• Analysis of multihazards, including deterioration processes and the combination of hazards.

Probabilistic analysis
• Treating correlations between probabilities of failure within a system and between systems.
• Analysis of networks.

Consequences
• Consideration of various tangible and intangible consequences, including resilience and sustainability.

Data limitations
• Accounting for epistemic uncertainties and completeness of data and integration of statistical data in the decision-making process.
• Correlating the results of structural analyses to actual physical damage.
• Limitation of data and information from past disaster events.

Risk management
• Establishing criteria for acceptable levels of risk.
• Communicating risk information to nontechnical stakeholders and decision makers.
• Consideration of engineering aspects and socioeconomic impact.

Support
• Training of engineers and development of expertise in risk-based methods.
• Recommendation of standardized, practically oriented risk-based design procedures.

2.4.3 Short-Term Plans and Immediate Needs for Improvements
Ongoing activities, short-term plans, and the most pressing issues currently being considered by researchers and agencies for improving risk-based methods, as listed by the respondents, are grouped here into the same categories as in Section 2.4.2.

Modeling of hazards
• Developing single-hazard risk-targeted design maps for multiple performance objectives.
• Incorporating security risk assessment into multicriteria decision-making processes.

Probabilistic analysis
• Continuous assessment of current qualitative and quantitative methodologies and implementation in practice of improved techniques being developed in research (e.g., ongoing activities to implement advanced probabilistic flood hazard analysis methods).
• Replacing approximate methods for safety assessment with more accurate reliability-based techniques.
• Focusing on system-level analysis, involving entire infrastructure systems and their combination.

Consequences
• Developing specific ways to calculate loss due to structural damage.
• Developing an approach to include user costs in the formulation of risk.
• Quantifying resilience.

Data limitations
• Improving accessibility to hazard data, particularly for geographically distributed systems.
• Collection of data correlating observed physical infrastructure damage to locally recorded hazard intensities following recent natural events.

Risk management
• Establishing a process for defining acceptance criteria and determining risk thresholds.
• Improving communication of risk to the public.

Support
• Replacing deterministic assessment standards with risk-analysis-based guidelines.
• Standardization for specific applications.
• Creating ways for decision makers (especially publicly elected ones) to better receive recognition and reward for decisions regarding low-probability, high-consequence events that are unlikely to happen during their political term of office (the incompatibility of lifetimes).
• Development of a taxonomy of methods, such that users have guidance on which methods are appropriate for which problems.
• Establishing a homogeneous set of definitions.
• Training and education (e.g., efforts to get more engineering students to learn probability and statistics; reducing the current emphasis in engineering programs on traditional analysis and design courses that lead future engineers to think deterministically).

Figure 2-11. Improvements needed in current risk-based methods.

2.4.4 Priorities for Near-Term Improvements
In this question, the respondents were asked to rate the topics that should be prioritized for near-term further improvement on a scale from 1 (highest priority) to 5 (lowest priority). The ratings of the respondents were converted into a single percentage representing the importance that researchers and engineers gave to each issue. The percentage given in Figure 2-11 is the number of scores divided by the sum of the scores. As an example, if four scores of 5 were given to the need to improve an item, then the importance score for that item would be 4 / (4 × 5) = 0.2 (or 20%).

As depicted in Figure 2-11, responding researchers believed that four components of the current risk analysis process should have high priority for near-term further investigation: estimation of the probability of damage and failure; evaluation of the consequences of failure; establishment of risk acceptance criteria; and risk communication. Practicing engineers gave practically similar weights to all the topics, with slightly higher weights for risk communication, evaluation of load intensity (hazard), exposure, inspection, and evaluation of consequences.


CHAPTER 3

Summary of Workshop Discussions

To complement the information collected from the survey, as presented in Chapter 2, this chapter summarizes the outcome of the workshop on RBsEM in structural and infrastructure engineering. Thirty-three experts, including the authors of this report, participated in the workshop, which was held at ASCE headquarters on September 17, 2014. Fifteen of these experts also provided responses to the survey. The stated objectives of the workshop were as follows.

• Share experiences and information across structural and infrastructure engineering sectors on
  ○ successful practices in risk-based structural design, evaluation, and decision-making processes at the project and network levels;
  ○ availability of data, tools, and techniques for RBsEM;
  ○ principles and procedures for establishing risk acceptance criteria at the project and network levels;
  ○ risk communication for engineering professionals, stakeholders, decision makers, and the public;
  ○ obstacles hindering the implementation of RBsEM and how to overcome them; and
  ○ development of risk-based standards.
• Outline a strategy to advance and encourage the implementation of RBsEM.
• Build an RBsEM community of representatives of government and regulation agencies, practicing engineers, and academic researchers.
• Assemble the findings and recommendations of the workshop in an ASCE/SEI document (i.e., this report).

After a brief overview of the state of the art of risk methodologies in structural engineering, which involved presentations from one academic, two government agency representatives, and one consultant, the workshop participants (listed in Table 3-1) were divided into three groups according to their individual interests to discuss key aspects of the implementation of RBsEM in structural and infrastructure engineering.

Table 3-1. Participants in the Workshop.

Group 1: Risk Analysis Methods
Name                                 Affiliation
Bilal M. Ayyub (group leader)        University of Maryland
Stephanie King (group leader)        Consultant
Fernando Ferrante                    US Nuclear Regulatory Commission
Dan Frangopol                        Lehigh University
John W. van de Lindt                 Colorado State University
Leonardo Duenas-Osorio               Rice University
Ming Liu                             Naval Facilities Engineering and Expeditionary Warfare Center
Mitsuyoshi Akiyama                   Waseda University
Paul Mlakar                          US Army Corps of Engineers
Phil Yen                             US Federal Highway Administration
Robert Patev                         US Army Corps of Engineers
Samuel Labi                          Purdue University

Group 2: Consequences
Name                                 Affiliation
Ross Corotis (group leader)          University of Colorado Boulder
Dennis McCann (group leader)         CTL
Emil Simiu                           National Institute of Standards and Technology
Graziano Fiorillo                    City College of New York
Joe Englot                           HNTB
Rade Hajdin                          IMC
Sheila Duwadi                        US Federal Highway Administration
Steve Ernst                          US Federal Highway Administration
Vilas Mujumdar                       Consultant
Yousef Alostaz                       AECOM

Group 3: Codification/Communication
Name                                 Affiliation
Michael O’Rourke (group leader)      Rensselaer Polytechnic Institute
Therese McAllister (group leader)    National Institute of Standards and Technology
Alice Alipour                        University of Massachusetts
Bruce Ellingwood                     Colorado State University
Cha Eun Jeong                        University of Illinois
Gary Chock                           Martin & Chock
Ian Friedland                        US Federal Highway Administration
Jian Yang                            City College of New York
Kent Hokens                          US Army Corps of Engineers
Michel Ghosn                         City College of New York
Paolo Bocchini                       Lehigh University

As their main topic, the three groups considered (1) risk assessment methods, (2) evaluation of the consequences of structural failure, and (3) risk analysis codification and risk communication, respectively. At the end of the three breakout sessions, each group summarized its observations and recommendations.

The overall assessment of the workshop participants is summarized in the following five sections. Section 3.1 gives a summary of the discussions that addressed risk assessment methods, including summaries of issues related to structural analysis methods, hazard assessment, structural deterioration, and structural performance. Section 3.2 summarizes issues related to the evaluation of consequences of failure. Section 3.3 relates the opinions of the participants on codification and risk communication. Section 3.4 expresses concerns related to the availability and presentation of necessary data. Section 3.5 describes some of the obstacles that need to be overcome to facilitate the implementation of risk-based methods in structure and infrastructure engineering.

3.1 RISK ASSESSMENT METHODS

3.1.1 Structural Analysis Methods
Current Status
Current structural design and evaluation specifications, standards, codes, and guidelines are implementing risk-informed procedures at different levels of complexity, as risk-based methods have been recognized to offer the most rational approach to decision-making processes related to the design and management of structure and infrastructure systems. Methods applied at the project and network levels include

• Probability-based (load and resistance factor design) structural design and evaluation codes that are calibrated to meet preset structural member target reliabilities;
• Direct application of structural member reliability methods;
• Performance-based design methods, which seek to account for the structural system’s behavior following a member’s failure by requiring different structural member reliability/safety levels based on the direct consequences of member failures;
• Direct use of reliability methods at the structural system level;
• Convolution of fragility analysis and hazards to estimate the probability of structural failure;
• Risk-informed ranking of structures and infrastructure systems;
• Risk analysis for single hazards (e.g., earthquake) at project and network levels; and
• Use of event/fault trees to calculate risk for complex systems and networks under multiple threats.

The most basic methods, such as load and resistance factor design and performance-based design, are being implemented on a routine basis in many sectors. Also, some standards have started implementing a more direct risk analysis process, such as for the seismic design of buildings and for fire safety analysis. But the most advanced risk-based methods are only being implemented for special facilities, such as nuclear power plants, where the Nuclear Regulatory Commission has established standards and training programs for the implementation of risk-based methods on a regular basis. Thus, the implementation of the most advanced methodologies in structural and infrastructure design and safety assessment remains generally limited to application in very special projects.

Nevertheless, some aspects of risk-informed methods are being used in structural-safety-related applications, such as risk-based inspection and damage detection (e.g., in highway bridge inspection and rating processes); project and process management (e.g., in construction of highway structures); screening and prioritization tools for inventories of structures (e.g., in ranking of deteriorating bridges); and protection of regions subjected to widespread hazards (e.g., failures of dams subject to floods).

Needs
The most pressing needs to promote implementation of RBsEM were identified as

• Development of guidance on the appropriate risk assessment levels required for different applications and circumstances;
• Collection of data;
• Evaluation and quantification of epistemic and aleatory uncertainties;
• Development of computer software and other tools to perform probabilistic analysis on the system and network levels;
• Consideration of scalability and interdependencies;
• Specific risk analysis guidelines for different structures and infrastructure systems;

• Development of training programs; and
• Reaching consensus on standard input requirements and interpretation of risk analysis results, along with peer review processes.

3.1.2 Hazard Assessment
Current Status
Current methods for estimating the probability of hazard occurrence are largely based on historical records and extreme-value probability model fits (e.g., snow, floods, and wind). On the other hand, current seismic mean recurrence interval maps are founded on physics-based models verified with historical data. Both historical and physics-based methods have been incorporated into computer programs (e.g., Hazus, developed by the Federal Emergency Management Agency, FEMA) to provide standardized procedures for estimating hazards and potential losses from earthquakes, floods, and hurricanes. Current structural design and evaluation codes and standards, such as ASCE 7, cover a number of natural hazards at discrete levels for individual facilities. Data are also available through various organizations to support the characterization of some man-made hazards (e.g., fire, through the National Fire Protection Association, Society of Fire Protection Engineers, and ASCE).

For hazards not covered by codes and standards, hazard curves need to be developed for implementation in quantitative risk assessment. This process requires historical data on the occurrence and intensities of the hazards of interest. Different US agencies have assembled large databases on different natural events, and there also are databases on a few man-made hazards such as fire, truck and barge impacts on structures, and industrial accidents. Because these databases have been compiled for different purposes, they may need to be synthesized for use in structural hazard assessment.

When no reliable data are available, scenario-based hazard definitions must be used. These could be generated through a Delphi panel consensus process. However, there is no established method at present for selecting appropriate scenarios when multiple scenarios are possible. Such difficulties arise, for example, when considering terrorism and similar events that are neither random nor stationary, which precludes using analytical methods to generate probabilistic hazard curves. In some cases, hazard classification is done in terms of rankings rather than hazard curves. For example, the US Army Corps of Engineers uses a relative ranking system that classifies dams as either “high hazard” or “low hazard,” based on the consequences of dam failure. Such rankings aid in deciding where to invest resources for performing in-depth analysis.
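A minimal sketch of the extreme-value fitting route is shown below. The synthetic annual maxima, the choice of a Gumbel model, and the 700-year mean recurrence interval are illustrative assumptions, not requirements of any of the codes mentioned above.

```python
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(0)

# Stand-in for a record of annual-maximum hazard intensities (e.g., wind
# speeds); in practice these would come from station or gauge records.
annual_maxima = rng.gumbel(loc=30.0, scale=5.0, size=60)

# Fit a Gumbel (Type I extreme value) distribution to the annual maxima.
loc, scale = gumbel_r.fit(annual_maxima)

# Intensity with a 700-year mean recurrence interval, i.e., an annual
# exceedance probability of 1/700.
x_700 = gumbel_r.ppf(1.0 - 1.0 / 700.0, loc=loc, scale=scale)
print(f"700-year intensity ~ {x_700:.1f}")
```

The short record length relative to the return period of interest is exactly why the workshop participants stressed the need to treat model and parameter uncertainty transparently, as noted under Needs below.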

Needs
Future efforts to improve hazard assessment should focus on the following.

• Development of methods and procedures to establish criteria for nontypical hazards that are not set in current codes and standards. For example, criteria are needed for assessing the likelihood of larger-magnitude hazards or longer return periods for application to high-consequence facilities.
• Consideration of correlated and uncorrelated multihazard events. Commonly available hazard models concentrate on the assessment of a single hazard for a single facility. However, in many cases, hazards are correlated with each other (e.g., wind, snow, and flood; earthquakes and fire; tornados and impacts).
• Establishing methods for system-wide hazard analysis at a community level (e.g., bridge failures within a region), which may require more information, such as the joint probability of hazard events at multiple locations.
• Where possible, development of physics-based models to complement historical data, as done, for example, when using seismic wave attenuation models for earthquake ground motions.
• Consideration of modeling uncertainties. Because of the limited historical data on extreme events, modeling uncertainties need to be transparent and presented separately.
• Development of rational ranking systems. When no analytical models are available, relative hazard rankings are often used. When rankings are sufficient, guidelines are needed on how relative rankings should be defined. The basis of the rankings, and the decision on whether to use them or seek a more formal probabilistic hazard model, may need to include consequences. This process may require the engagement of owners to help decide which hazards should be considered and whether a numerical value is needed or a relative ranking is sufficient. However, one should not ignore the need to weigh owners’ biases related to long-term or short-term self-interests, which presents a real challenge; it has been suggested that game theory could be used for such purposes. When no probabilistic models are available and only hazard rankings are adopted, multihazard assessment becomes a difficult task.

3.1.3 Structural Deterioration
Current Status
Risk analysis requires information on the state of a structure of interest throughout its service life, taking into consideration the degradation of structural materials over time under the effect of cyclic loading and environmental factors. This is particularly true for structures exposed to climate-related effects (e.g., bridges and dams) or to chemicals and high temperature (e.g., power plants). Although degradation mechanisms are highly probabilistic and stochastic as well as structure-specific, current structural design standards and codes generally account for deterioration-induced reductions in structural capacity implicitly, based on expected degradation rates obtained from the historical performance of existing stocks. Thus, the current approach is specific to each facility type.

Tools and methods to protect structures from exposure to deteriorating agents and to mitigate the effects of various deterioration mechanisms are also available (e.g., the American Concrete Institute’s ACI 222R-01 for the protection of metals in concrete against corrosion). Moreover, for many types of exposed structures (e.g., bridges), regulatory agencies have instituted rigorous inspection requirements to ensure adequate structural performance over a structure’s service life and to extend the specified nominal design life, which can be achieved through proper maintenance and repair. For example, bridge inspection and load rating data are assembled in various databases and used in life-cycle asset management processes.

Deterioration models have been developed to estimate the effects of long-term exposures and the corresponding reduction in structural capacity caused by various deterioration mechanisms. Some of these models also consider the effects of maintenance and repair. These models are meant to determine the condition of the structure at a point in time, but they are largely empirical in nature, often based on limited data, and they apply conservative assumptions about the spread of deterioration throughout a member or a structure and about how the deterioration is related to structural capacity.
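
To make the idea of a deterioration model concrete, the sketch below evaluates a simple time-dependent residual-capacity function of a form commonly used in the life-cycle literature, R(t) = R0[1 - a(t - t_i)^b] for t > t_i. The initiation time and degradation parameters are hypothetical placeholders, not values from the report.

```python
# Minimal illustrative sketch (assumed functional form and parameters):
# residual capacity R(t) = R0 * [1 - a * (t - t_i)**b] for t > t_i, where t_i is
# the deterioration initiation time (e.g., end of the corrosion initiation phase).
def residual_capacity(t_years, R0, t_init=10.0, a=0.005, b=1.0):
    """Return the residual capacity at age t_years (same units as R0)."""
    if t_years <= t_init:
        return R0                                  # no degradation before initiation
    loss_fraction = a * (t_years - t_init) ** b    # accumulated capacity loss
    return R0 * max(0.0, 1.0 - loss_fraction)

# Example: a girder with a nominal capacity of 1,500 kN evaluated at several ages.
for t in (0, 10, 25, 50, 75):
    print(f"t = {t:3d} yr: R(t) = {residual_capacity(t, R0=1500.0):7.1f} kN")
```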

Needs

Research needs to improve structural deterioration modeling were identified as follows:

• Development of methods and procedures to characterize degradation in a manner amenable to implementation in standardized structural evaluation procedures—these should help in determining the actual strength of a structural member or system at a given point in time without making the common, highly conservative assumptions about the spread of deterioration throughout the member;
• Development of maps of environmentally or regionally driven degradation rates, as is done for other natural hazards (e.g., the seismic mean recurrence interval maps mentioned earlier);
• Establishing correlations between loads and degradation mechanisms or between different degradation processes (e.g., salt exposure and concrete cracking, or load-cycle fatigue of corroded elements); and
• Development of methods to scale accelerated testing results to field conditions and actual life cycles.

3.1.4 Structural Performance

Current Status

Risk assessment and performance-based design require engineers to relate the results of structural analysis to critical performance objectives, including serviceability, structural damage levels, local failures, and system collapse. For example, in seismic design, the analysis controls a response parameter (usually related to deformation) that serves as a surrogate for how a structural system behaves during an earthquake. Response parameters such as yielding and nonlinear deformations are often linked to strength degradation (softening). Qualitative damage levels are subsequently related to the level of nonlinear deformation, while maximum displacement levels are often used to set serviceability and functionality criteria. For example, slight damage is assumed to take place at the onset of yielding, while structural failure occurs when the ductility capacity is exhausted. Engineers studying other hazards (e.g., wind) are currently considering the adoption of a similar approach as a rational method for examining nonlinear structural response. Software capabilities and analytical skills are therefore key requirements for the accurate interpretation of structural analysis results. Actual construction detailing, secondary and nonstructural elements, and the actual in situ condition of the structure are also important factors for risk assessment.
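
The sketch below illustrates, with hypothetical thresholds, the kind of mapping described above from a deformation-based response parameter to qualitative damage levels; it is not a calibrated criterion from the report or from any code.

```python
# Minimal sketch (hypothetical thresholds): map the ductility demand
# mu = peak displacement / yield displacement to a qualitative damage level,
# in the spirit of the deformation-based approach described above.
def damage_level(peak_disp, yield_disp, ductility_capacity):
    mu = peak_disp / yield_disp              # ductility demand
    if mu < 1.0:
        return "none/serviceable"            # essentially elastic response
    if mu < 0.5 * ductility_capacity:
        return "slight to moderate"          # yielding has occurred
    if mu < ductility_capacity:
        return "severe"                      # approaching the ductility capacity
    return "failure/collapse"                # ductility capacity exhausted

# Example: a frame with a 25 mm yield displacement and a ductility capacity of 4.
for d in (15.0, 40.0, 80.0, 120.0):
    print(f"peak displacement {d:6.1f} mm -> {damage_level(d, 25.0, 4.0)}")
```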

Needs

The correlation of physical damage with structural analysis results is currently based on very limited validation data and carries large modeling uncertainties. The failure of analysis software to converge often stems from numerical issues and from limitations in material and structural behavior modeling, and it does not necessarily coincide with system collapse. The effects of nonstructural elements, and of their potential damage, on structural system behavior need to be considered as well. Research efforts should therefore focus on calibrating structural response parameters, including element and system behavior, against physical damage, and on treating modeling uncertainties in the structural analysis and damage evaluation processes.

3.2 EVALUATION OF CONSEQUENCES OF STRUCTURAL FAILURE

Current Status

It is widely accepted that risk-based methodologies, including the consideration of consequences, should be the basis of structural design and evaluation processes and that code provisions must be based on risk-based calibrations. The performance-based seismic design approach could be considered a model for future risk-based code developments. Although the most common approach to accounting for consequences is to address only direct costs, the consequences of structural damage and failures can actually be classified into four major categories:

• Life safety, including injuries to users and occupants and fatalities;
• Direct economic costs, including repair costs, replacement costs, business interruption / loss of asset, and nonreplacement;
• Indirect costs, including impact on alternate assets, harm to business reputation, loss of societal functionality, behavioral change, fear, change in economic activities (function of scale), and environmental impact; and
• Resilience, that is, the time to restore functionality to a community, which should address various consequences, including robustness, reliability, adaptation, mitigation, recovery, and resourcefulness.

Diverse consequences should normally be combined using one common measure. Monetary value is the most widely used measure because it is easily understood: it produces a quantitative result and avoids the assignment of relative weights. The use of monetary value also simplifies the application of cost-benefit analysis and facilitates rational decision making (i.e., risk-informed decision making). Software packages such as Hazus (Hazard US) and REDARS (Risks from Earthquake Damage to Roadway Systems) incorporate modules to assess the consequences of failure, providing tools that are being used to conduct seismic, flood, and hurricane risk assessments for communities and seismic risk analysis for buildings or roadway networks.
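
As a numerical illustration of combining diverse consequences through a common monetary measure, the sketch below computes an expected annual loss from a set of damage states. All probabilities, costs, and the value assigned to life safety are hypothetical assumptions, not figures from the report.

```python
# Minimal sketch (hypothetical numbers): monetize the consequences of each damage
# state and weight them by their annual probabilities to obtain an expected
# annual loss for a single facility.
damage_states = [
    # (name, annual probability, direct cost $, indirect cost $, expected fatalities)
    ("slight",   2.0e-2,    50_000,    10_000, 0.0),
    ("moderate", 5.0e-3,   400_000,   150_000, 0.0),
    ("severe",   8.0e-4, 2_500_000, 1_200_000, 0.05),
    ("collapse", 1.0e-4, 9_000_000, 6_000_000, 1.2),
]

VALUE_PER_STATISTICAL_LIFE = 10_000_000  # assumed monetization of life safety ($)

expected_annual_loss = 0.0
for name, p_annual, direct, indirect, fatalities in damage_states:
    monetized = direct + indirect + fatalities * VALUE_PER_STATISTICAL_LIFE
    expected_annual_loss += p_annual * monetized

print(f"Expected annual loss: ${expected_annual_loss:,.0f}")
```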

Needs

Efforts to improve the evaluation of consequences of failure should focus on the following.

• Development of guidelines for selecting and prioritizing consequences and for informing risk assessors which consequences should be considered under which circumstances. This may be most appropriate in the context of the facility’s role in the community. Prioritization should also be related to asset management within a community or system, which considers functions and social needs. Such guidelines may include matrices that can be calibrated for different hazards, at various hazard intensities, to assess life safety, direct economic, indirect economic, and environmental costs.
• Whenever possible, calibration of loss assessment models against data collected following historical events to verify the validity of current cost assignment methods.
• Consideration of intragenerational and intergenerational equity when assigning monetary value to human life or when assessing the long-term consequences of rare events that may not occur within the decision makers’ time horizon. This should include sustainability and ecological impacts that may affect different generations.

3.3 RISK ANALYSIS CODIFICATION AND RISK COMMUNICATION

3.3.1 Risk Communication

Current Status

Communicating risk to stakeholders is an important task in the risk analysis process, as successful risk management requires concurrence and support from
all concerned. Risk communication necessarily depends on the audience, which ranges from engineers and code committees to owners, local government agencies, community representatives, federal government agencies (including political representatives, e.g., Congress), and the general public. Therefore, the message will take different forms depending on the specific group that is addressed. A successful risk communication strategy for the general public first needs to identify the purpose of the interaction and the type of information to be conveyed. Risk communication should convey that predictions are uncertain but major events are possible. Low-probability, high-consequence events need to be recognized. It is also important to identify possible risk mitigation actions and each option’s associated short-term and long-term benefits. Negative risk messages that may lead to fear or panic are to be avoided. It has also been recognized that public outreach efforts may be more effective at a local level than at the national level. Local champions often are the most effective. Therefore, risk communication should consider immediate interests and ownership perspectives. Technical terminology and numerical expressions of risk are effective when addressing engineers and code committees, although communicators should consider the effectiveness of messages based on annualized risk versus cumulative risk over a period of time. Matrices expressing risk levels as a function of hazard intensities and consequences in general terms are often resorted to when presenting options for possible risk mitigation actions to owners, regulators, and decision makers. When addressing the general public, visual tools may be useful to convey scenarios and uncertainties. It is also recommended to create risk categories using lay terminology or the cost of remediating deficiencies. For example, the Federal Highway Administration classifies existing bridges as “functionally obsolete” or “structurally deficient” to convey the risks of aging bridge inventories. ASCE, in its Infrastructure Report Card (issued every four years), gives the state of US infrastructure a grade from A to F and also provides estimates of what it would cost to upgrade the system.

Needs

The workshop participants believed that there is a need to promote a general culture that considers outcomes and opportunities within a risk-informed framework while holding decision makers accountable, driven by public demand for improved resilience to major hazards. It is important to enhance public understanding of risk and of realistic expectations regarding the vulnerability of structures and infrastructure systems and the safety of occupants and users. There is also a need to create a forum for interaction between risk analysts and communication experts and to learn from best practices in order to develop protocols on how best to package the message and on the proper medium to use for addressing diverse audiences.

3.3.2 Risk Acceptance Criteria

Current Status

Analytical approaches for establishing risk acceptance criteria at the project and network levels range from simple to complex. Decisions can be made using optimization methods based on cost-benefit analyses or benefit-risk principles. Such analytical approaches may include life-cycle cost and time-dependent structural performance models when studying long-term risk. In practice, however, it is most common for decisions to be made using empirically based techniques and qualitative evaluations of risk. It is not uncommon to observe decisions being made to satisfy legal requirements and political pressures rather than purely economic considerations.

It has long been recognized that risk-based methodologies should be the basis for the development of structural design and evaluation code provisions. In fact, many existing codes and standards have implicitly built-in risk criteria, based on the level of satisfaction with the performance of previous generations of similar structures. In general, the focus of most current codes and standards is on the safety of individual facilities, while recognizing that risk acceptance criteria must depend on the facility’s role in the community. As such, the current performance-based seismic design standard can serve as a basic model for future risk-based code developments. It is noted, however, that experience gained in developing risk acceptance criteria for the safety of individual facilities is not necessarily transferable to the risk evaluation of a portfolio, a network, or an entire community.

The entity responsible for establishing risk acceptance criteria may require the involvement of different stakeholders depending on the type of facility. For example, an individual building owner may be willing to accept a risk level lower than that set by building code requirements in order to better protect his or her investment. But decisions to go beyond the minimum design requirements of publicly owned facilities (e.g., bridges) require multiple levels of clearance to justify the change and the associated costs to agency officers, regulators, policymakers, and budget controllers. Generally speaking, establishing risk acceptance criteria requires an interdisciplinary approach. The criteria must be acceptable to a range of stakeholders, including owners (individual or public), regulatory agencies, policymakers, and the concerned community, and the involved stakeholders must be sufficiently educated in risk principles to make informed decisions.
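
The cost-benefit reasoning mentioned above can be illustrated with a simple discounted expected-loss comparison of mitigation options. The sketch below is not an established criterion; the discount rate, planning horizon, costs, and failure probabilities are all assumed for illustration.

```python
# Minimal sketch (assumed inputs): compare the expected life-cycle cost of
# mitigation options as upfront cost plus the present value of expected losses.
DISCOUNT_RATE = 0.03
HORIZON_YEARS = 50

def pv_expected_losses(annual_prob_failure, loss_given_failure):
    """Present value of expected annual losses over the planning horizon."""
    return sum(annual_prob_failure * loss_given_failure / (1 + DISCOUNT_RATE) ** y
               for y in range(1, HORIZON_YEARS + 1))

options = {
    # name:             (upfront cost $, annual P(failure), loss given failure $)
    "do nothing":       (        0.0, 1.0e-3, 20_000_000),
    "partial retrofit": (1_500_000.0, 3.0e-4, 20_000_000),
    "full replacement": (6_000_000.0, 5.0e-5, 20_000_000),
}

for name, (upfront, p_f, loss) in options.items():
    life_cycle_cost = upfront + pv_expected_losses(p_f, loss)
    print(f"{name:16s}: expected life-cycle cost ~ ${life_cycle_cost:,.0f}")
```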

Needs

Future efforts for establishing risk acceptance criteria should include the following.

• Development of multiscaling protocols which, at the highest scale, take into consideration community resilience and sustainability. This is because current risk acceptance criteria established for typical individual structures subject to recurring events are not necessarily transferable when addressing high-consequence rare events that cause damage over extended areas. For example, no risk-informed acceptance criteria have been established for lifeline networks.
• Calibration of risk acceptance criteria to ensure consistency between the outcomes of risk assessment processes that follow different levels of complexity (e.g., risk matrices vs. numerical risk results). The calibration should include comparisons between risk acceptance criteria obtained from empirically based techniques and those obtained from analytical approaches (e.g., optimum life-cycle cost) to ensure compatibility in the final decision-making processes.
• Integration of risk attitude into risk acceptance criteria. It is understood that the final decisions should reflect individual and societal attitudes toward risk and the public’s risk tolerance, which may depend heavily on socioeconomic as well as local, political, and legal factors, among others.

3.4 RISK DATA

Current Status

As observed in the previous discussion, the quality and quantity of data are critical to risk-based methodologies. While, as mentioned in Section 3.1.2, many government agencies have amassed large quantities of data on various hazards, many of the available databases need to be synthesized and assembled in consistent and easily implementable formats. The collection of information on physical damage to facilities, consequences of structural failures, and economic impacts following extreme events has recently attracted more attention. However, such information needs to be synthesized before it can be used for risk assessment, keeping in mind that deficiencies in the available data, whether knowledge-based or model-based, may be difficult for potential users to recognize.

Needs

Improving the availability and quality of risk analysis data would entail the following:

• Continuing the assembly of historical data on hazards, observed structural damage, and consequences, and removing restrictions on access to available data;
• Establishing incentives for private data deposits and data sharing;
• Development of guidance on the appropriate use of data types for different applications (e.g., empirical data versus analytical models);
• Development of guidance to identify the most appropriate data analysis techniques for different applications and circumstances—for example, big data requires different sets of tools, which may not be widely available;
• Consideration of uncertainties in existing data as well as in modeling, statistical extrapolation, and projection techniques—this is especially important when limited data are available, because small-sample inferences can be unreliable; and
• Development of methods for analyzing and modeling nonstationary processes and application of these techniques to hazards susceptible to climate change (e.g., wind, snow, and floods) and to the evolution of human and economic activities (e.g., demands for traffic and utilities); a minimal illustration of fitting a nonstationary extreme-value model follows this list.
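
The sketch below illustrates the nonstationary-modeling need referenced in the last bullet: a Gumbel model whose location parameter drifts linearly in time is fitted to synthetic annual maxima by maximum likelihood. The data, trend, and parameter values are fabricated for illustration only.

```python
# Minimal sketch (synthetic data, assumed linear trend): maximum-likelihood fit
# of a Gumbel model with a time-varying location parameter, loc(t) = mu0 + mu1*t.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(1)
years = np.arange(1970, 2020)
t = years - years[0]

# Synthetic annual-maximum flood discharges (m^3/s) with an upward trend.
ann_max = stats.gumbel_r.rvs(loc=500.0 + 2.0 * t, scale=80.0, random_state=rng)

def neg_log_likelihood(params):
    mu0, mu1, log_scale = params
    scale = np.exp(log_scale)                      # keep the scale positive
    return -np.sum(stats.gumbel_r.logpdf(ann_max, loc=mu0 + mu1 * t, scale=scale))

result = optimize.minimize(neg_log_likelihood,
                           x0=[ann_max.mean(), 0.0, np.log(ann_max.std())],
                           method="Nelder-Mead")
mu0_hat, mu1_hat, log_scale_hat = result.x
print(f"estimated trend in the location parameter: {mu1_hat:.2f} m^3/s per year")
```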

3.5 OBSTACLES

The information presented in this chapter, based on the collective expertise of those attending the workshop, confirms the findings of the survey summarized in Chapter 2. The consensus is that risk analysis is clearly the most rational approach for assessing the performance of structures and infrastructure systems and should be the basis for the development of structural codes and standards and for decision making related to maintaining, upgrading, and ensuring the resilience of infrastructure systems and communities. However, as noted in this chapter, the participants identified a number of important obstacles that are hindering the use of RBsEM in structural and infrastructure engineering. These will need to be overcome over time through a concerted effort by the various stakeholders. The main obstacles are related to the following issues (technical issues are listed first, followed by other issues).

Technical Issues

Standards and guidelines

Structural engineering practice has traditionally followed requirements, procedures, and recommendations set in structural codes, specifications, standards, or guide manuals that have evolved from decades of experience with successful structural designs and performance. Deviations from such documents usually require careful, detailed reviews and the approval of owners and regulatory agencies. The review and approval processes entail significant effort to justify the need for the deviations from standards, provide detailed explanations of the modified procedures, and defend the final outcome and decisions. Some US agencies, such as FEMA and the NRC, have developed probabilistic risk assessment guidelines to address specific threats and structural systems (see, e.g., https://www.fema.gov/hazard-identification-and-risk-assessment and https://www.nrc.gov/reading-rm/doc-collections/nuregs/contract/cr2300/vol2/). Nevertheless, the application of such standards has remained within the realm of a small group of
experts concerned with a narrow range of applications. The availability of standards and guidelines for risk assessment and risk analysis for general structures would widen the scope of application of risk-based methods while also providing uniform baselines that would help keep engineers, owners, and regulators on the same page when interpreting the results of structural risk assessments, and would thus greatly facilitate and encourage the application of risk-based procedures in routine structural engineering practice.

Perceived complexity of risk assessment methods and availability of easy-to-use probabilistic structural analysis software

Engineering training is geared toward the deterministic assessment of structural safety, whereby a structure is considered either safe or unsafe. Even reliability-based codes and standards, such as load and resistance factor design methods, present their specifications and recommendations in deterministic formats which, to ease their implementation, obscure the large uncertainties inherent in the process of safety assessment, including the definition of structural safety limit states and the consideration of the various consequences of exceeding these limits. Thus, the traditional approach often creates the impression that structural safety can be assured when code provisions are blindly implemented. Compared with such traditional, simplified, straightforward assessments of safety, and because of their nuanced view of what could be considered acceptable safety, risk-based methods require a different frame of mind that seems alien to engineers trained in traditional codified structural assessment approaches. The application of risk assessment procedures, as compared to traditional deterministic methods, requires additional effort to collect the necessary statistical data on the pertinent hazards as well as on the behavior of structures, and to investigate the different consequences of failures; knowledge of probabilistic analysis methods; and efficient, easy-to-use tools for performing such analyses. Unfortunately, few civil engineering education programs require training in probabilistic analysis methods. Furthermore, the extensive computation needed to perform probabilistic structural analyses necessitates highly efficient specialized software running on advanced computer platforms. While several specialized and general-purpose probabilistic programs have been developed over the last decade to address that need, these are generally difficult to use; they have remained in the domain of researchers and expert engineers and have not yet found their way into general engineering practice.

Availability of and access to high-quality data

A vast array of data related to the occurrence of natural and human-made events, as well as their impacts on the infrastructure, health, and economic well-being of the affected communities and the environment, is continuously assembled by various US federal and state institutions. These days, many of those data are available through the institutions’ websites. For example, the Building Science Branch of FEMA’s Federal Insurance and Mitigation Administration provides information on the impact of various historic events on primary
structural systems of buildings (https://www.fema.gov/fema-mitigation-assessment-team-mat-reports). However, such information needs to be synthesized and presented in a format suitable for use in risk assessment. Difficulties in applying risk analysis processes arise because of deficiencies and gaps in the data, particularly at the extremes, which may not be properly recorded or reported.

Definitions of spatial and temporal scope and application

Structures and infrastructure systems are designed for a predetermined nominal service period. Yet actual service life is often quite different from the intended design life, as owners often seek to extend their assets’ lives for various economic and functional reasons, or they occasionally dismantle an asset when its utility has been exhausted. Also, the evaluation of the immediate and future long-term consequences of a threat requires definition of the physical radius of the directly and indirectly impacted geographic areas and of the size and characteristics of the communities affected, as well as consideration of effects on future generations and long-term ecological factors. The determination of the scope may be a critical factor in the decision-making process, because different ranges may lead to different recommendations for managing the risk.

Consideration of evolution of hazards and structural, social, and economic factors over time

In addition to the unpredictability of future major malicious attacks, climate change, local and global economic expansions or recessions, and population growth and migration present difficulties in estimating the severity and frequency of potential events and in evaluating their consequences. Such issues can be addressed by building consensus among experts in the pertinent subjects on the expected ranges of such changes. However, as observed in current discussions on climate change, such consensus is at times difficult to reach owing to technical limitations related to the scale of the problem or because of political considerations.

Treatment of high-probability, low-consequence events and low-probability, high-consequence events

Our knowledge base with regard to the types, frequencies, and intensities of human-made and natural hazards, as well as their impacts, is limited to experiences that have been witnessed or historically recorded. This dependence on experience imposes artificial bounds on our expectation of the sets of hazards that we may be exposed to, as well as their intensities and their consequences. A well-known example of an unforeseen low-probability, high-consequence event is the 2001 attack on the World Trade Center. The attack and its intensity were at the time unimaginable, and the unforeseen short-term and long-term physical damage, human toll, and economic, social, and political impacts were devastating. Furthermore, much of our civil infrastructure is subject to wear and tear from everyday loading and to slow yet continuous aging and deterioration. Because
the immediate consequences of these factors are essentially minimal and thus hard to quantify, there has been a tendency to ignore the long-term effects of these aging processes until their accumulation creates hazardous conditions requiring major and costly interventions that had not been planned and budgeted. Although the need to address these high-probability, low-consequence events has recently attracted considerable attention from the engineering community, the general public, and the political classes, much work is still needed to study the everyday hazards that afflict the various types of infrastructure and their interactions, to model the evolution of the combined damage they cause, and to estimate their consequences so that appropriate maintenance and rehabilitation strategies can be developed.

Reasonable and acceptable risk acceptance criteria given the trade-offs between costs and benefits

Researchers have proposed different approaches to determine mathematically optimal risk acceptance criteria that would minimize the life-cycle costs of a project should a particular event or a combination of events take place over the project’s expected service life. However, it has been observed that the adoption of these recommendations may often be sidestepped owing to the difficulty of quantifying constraints, such as the timely availability of necessary financial and other resources, or intangible political and social considerations.

Consistency in terminology with that used in other nonstructural risk analysis applications

The field of risk management has long been prominent in the realm of economics and finance and has since been adopted in many fields. Although the specific needs depend on the particular projects being assessed, generic guidelines for conducting risk management processes have been proposed by the International Organization for Standardization (ISO, https://www.iso.org/standard/43170.html). Developing structure- and infrastructure-specific nomenclature and protocols compatible with those of the ISO would help harmonize risk management processes for structure and infrastructure engineering projects and ensure consistency with those in other sectors. This would facilitate the implementation of these processes and their wider acceptance by decision makers and the general public.

Other Issues

In addition to the technical issues, the workshop participants listed a few other specific issues.

Defining the target

As explained earlier, the results of risk analysis will depend on the temporal and spatial scope of the analysis and on the boundary of the community considered. Therefore, in the context of structure and infrastructure risk assessment, an important question that must first be addressed is, “Risk to whom?” This raises a follow-up question: “Who should provide the answer to the first question?”

Certification and training programs

Risk-based structural and infrastructure analysis forms a new paradigm that requires specific technical skills that are not in common use in traditional practice. Except for a few specific applications (for example, in the nuclear power generation field), mechanisms for training and certification have yet to be developed by the relevant institutions.

Mechanisms to involve key stakeholders

The risk analysis process requires the involvement of a wide range of stakeholders at every step. Mechanisms should be developed to invite the participation of key stakeholders and of various representatives of relevant institutions and concerned communities, and to incentivize them to participate while guarding against potential biases and conflicts of interest.

Liability of owner

Deviations from standard procedures entail important responsibilities. For this reason, project owners may be quite hesitant to embark on new initiatives, especially if the liability associated with a particular event is severe.

Institutional inertia

Just as humans are creatures of habit, it is organizationally difficult for institutions to deviate from routine practice to embrace completely new methods. In addition to a new frame of mind, the adoption of risk-based methods would require institutional restructuring and the engagement of new personnel who cover wider ranges of expertise. It is difficult for many institutions to invest in the required changes when current methodologies are considered technically adequate and financially profitable.

Lack of incentives

There must be clear incentives for institutions to embark on the proposed risk-based initiative and to work on overcoming the aforementioned financial, political, and technical constraints. The rewards, financial and otherwise, should be sufficiently large to overcome the costs of restructuring and personnel preparation and also to offset potential liabilities, given that the traditional methods, as specified in current building codes and design manuals, have been incorporated into routine application and mostly have a proven safety record, even if they are not optimal in some sense.

CHAPTER 4

Conclusions and Recommendations

4.1 CONCLUSIONS

This report describes the results of a survey and a follow-up workshop on the state of the art of the application of RBsEM in structural and infrastructure engineering. The responses to the survey and the discussions during the workshop have shown that risk analysis principles are well established from a theoretical point of view. However, a number of barriers have hampered the wide-scale implementation of risk-based methods in decision-making processes. These barriers include

• A paucity of guidelines, manuals, and standards which, in combination with adequate training, would help homogenize the risk analysis process across the different sectors;
• The difficulty of applying probabilistic analysis techniques when evaluating the performance of complex structures and networks;
• Limited statistical data to model the intensities of extreme hazards and their effects on structural systems, including the establishment of criteria that relate structural analysis results to physical damage, the subsequent enumeration of the consequences of failure, and the allocation of quantifiable measures for these consequences; and
• The absence of protocols to communicate risk to relevant stakeholders and the general public in a manner that enhances their understanding that risk is ubiquitous in all areas of life, including the built environment. Such protocols would help policymakers and regulators set risk acceptance criteria based on socioeconomic conditions and demands while taking into consideration the public’s attitudes toward risk and risk tolerances.

Despite the scale of these challenges, some industries, such as the nuclear power industry, have overcome many of them through long-term research. Furthermore, in seismic engineering, the use of probabilistic performance-based design and evaluation methods, combined with the consideration of the
consequences of failure, has become widely accepted for routine application. Risk analysis procedures have also been advanced in dam engineering and are being considered for wind design of buildings and building fire safety. Concepts of performance-based design, with objective or subjective evaluation of risk, have already been introduced in ASCE 7 and other standards for the design and safety evaluation of buildings and other structures. Other agencies and associations, such as those concerned with the state of dams and transportation infrastructure systems, have recently initiated ambitious research programs to improve their risk analysis methodologies and have, in the interim, resorted to implementing empirical approaches based on the experience of industry leaders and the review of historical data. Despite the progress made in the field, implementation of risk analysis methods in structural and infrastructure engineering is still in its infancy, and additional work still remains before risk-based methods evolve as the standard approach for decision-making processes in structural engineering. Survey respondents and workshop participants, who included officers in government and regulatory agencies, practicing engineers, and academic researchers, expressed great interest in advancing RBsEM, which has been recognized to present the most rational approach for addressing issues related to the management of aging structural and infrastructure systems susceptible to increased environmental and climate-related hazards, as well as increased security threats.

4.2 RECOMMENDATIONS

As a result of the survey and the workshop, the following recommendations are made to encourage and support the application of RBsEM in structural and infrastructure engineering practice. The implementation of these recommendations will require the involvement of professional societies such as ASCE, regulatory agencies, research organizations, and educational institutions.

Development of guidelines and training programs

• Professional societies and regulatory agencies should develop a set of guidelines explaining when and how different levels of risk-based methods should be applied. Calibration studies should be performed to ensure consistency in the outcomes of the different levels of risk-based methods.
• Regulatory agencies should provide incentives to encourage the application of risk-based methods rather than traditional prescriptive methods.
• Short courses and training seminars should be organized by professional societies, regulatory agencies, and educational institutions to train engineers, owners, and decision makers.
• Professional societies should organize seminars and workshops and issue special publications to highlight the benefits of applying RBsEM.
• Educational institutions should offer specialized postgraduate programs and degrees related to risk analysis in structural and infrastructure engineering.

Technical support

• Research organizations should support technical innovations for the development of efficient methodologies and techniques for risk analysis of structural and infrastructure systems and networks.
• Computer software developers should implement modules to support the probabilistic analysis of structural and infrastructure systems and networks.

Data

• Research organizations should support the development of efficient techniques and tools for small and big data analysis that account for nonstationary processes and epistemic uncertainties.
• Public agencies should continue to assemble hazard and consequence data and should facilitate access to depositories where these data are stored in formats suitable for direct use in risk assessment processes.
• Research organizations should support the development of both statistical and physics-based models for typical and rare-event hazards, as well as models for the evaluation of structural performance and consequences of failure that can extrapolate historical data to predict future long-term events and their associated effects.

Risk acceptance criteria and risk communication strategies

• Multidisciplinary teams of engineers, social scientists, economists, and actuaries should research approaches to establish optimum risk acceptance criteria that take into consideration public attitudes toward risk for different levels of hazards and for multiple hazards.
• Professional societies and regulatory agencies should provide a forum for engineers, social scientists, and communication experts to explore effective means of communicating risk to different stakeholders, including the general public.

References and Further Reading

AASHTO. 2017. LRFD bridge design specifications, 8th ed. Washington, DC: AASHTO.
Ang, A. H.-S., and W. H. Tang. 1984. Vol. 2 of Probability concepts in engineering planning and design: Decision, risk, and reliability. New York: Wiley.
ASCE. 2017a. Minimum design loads and associated criteria for buildings and other structures. ASCE/SEI 7-16. Reston, VA: ASCE.
ASCE. 2017b. Seismic analysis of safety-related nuclear structures. ASCE/SEI 4-16. Reston, VA: ASCE.
Biondini, F., and D. M. Frangopol. 2016. “Life-cycle performance of deteriorating structural systems under uncertainty: Review.” J. Struct. Eng. 142 (9): F4016001.
CEN (European Committee for Standardization). 2001. Eurocode 1: Actions on structures—Part 1-2: General actions—Actions on structures exposed to fire. EN 1991-1-2. Brussels, Belgium: CEN.
Ellingwood, B. R. 2001. “Acceptable risk bases for design of structures.” Accessed August 26, 2019. https://onlinelibrary.wiley.com/doi/abs/10.1002/pse.78.
FEMA. 2005. Risk assessment: A how-to guide to mitigate potential terrorist attacks against buildings. FEMA-452. Washington, DC: FEMA.
FEMA. 2012. Seismic performance assessment of buildings: P-58. Washington, DC: FEMA.
Ghosn, M., L. Dueñas-Osorio, D. M. Frangopol, T. P. McAllister, P. Bocchini, L. Manuel, et al. 2016a. “Performance indicators for structural systems and infrastructure networks.” J. Struct. Eng. 142 (9): F4016003.
Ghosn, M., D. M. Frangopol, T. P. McAllister, M. Shah, S. M. C. Diniz, B. R. Ellingwood, et al. 2016b. “Reliability-based performance indicators for structural members.” J. Struct. Eng. 142 (9): F4016002.
Lounis, Z., and T. P. McAllister. 2016. “Risk-based decision making for sustainable and resilient infrastructure systems.” J. Struct. Eng. 142 (9): F4016005.
NIST. 2011a. “Earthquake risk reduction in buildings and infrastructure program.” Accessed August 26, 2019. https://www.nist.gov/programs-projects/earthquake-risk-reduction-buildings-and-infrastructure-program.
NIST. 2011b. “Fire risk reduction in buildings program.” Accessed August 26, 2019. https://www.nist.gov/programs-projects/fire-risk-reduction-buildings-program.
NIST. 2011c. “Structural performance for multi-hazards program.” Accessed August 26, 2019. https://www.nist.gov/programs-projects/structural-performance-multi-hazards-program.
Sanchez-Silva, M., D. M. Frangopol, J. Padgett, and M. Soliman. 2016. “Maintenance and operation of infrastructure systems: Review.” J. Struct. Eng. 142 (9): F4016004.
Slovic, P. 2000. The perception of risk. London: Taylor & Francis.

Todinov, M. 2007. Risk-based reliability analysis and generic principles for risk reduction. Amsterdam, Netherlands: Elsevier.
UNISDR (United Nations Office for Disaster Risk Reduction). 2017. National disaster risk assessment: Words into action guidelines, governance system, methodologies, and use of results. New York: UNISDR.
USACE. 2014. Safety of dams: Policy and procedures. Washington, DC: USACE.
USNRC (US Nuclear Regulatory Commission). 1983. “PRA procedures guide: A guide to the performance of probabilistic risk assessments for nuclear power plants: Chapters 9–13 and appendices A–G (NUREG/CR-2300, volume 2).” Accessed November 15, 2019. https://www.nrc.gov/reading-rm/doc-collections/nuregs/contract/cr2300/vol2/.

APPENDIX A

Survey On Risk-Based Structural Evaluation Methods

ASCE/SEI TECHNICAL COUNCIL ON LIFE-CYCLE PERFORMANCE, SAFETY, RELIABILITY, AND RISK OF STRUCTURAL SYSTEMS

TG3: Assessment of Structural Infrastructure Facilities and Risk-Based Decision Making

Questionnaire on implementation of risk-based methods

The goal of this questionnaire is to gather information about successful practices and explore avenues to overcome real and perceived obstacles for implementing risk-based methods for the design and the management of civil structure and infrastructure systems. The information will be synthesized and published in an ASCE report and will be used as the focus of a workshop that will take place in September 2014 in Washington DC.

The questionnaire is divided into the following topics and subtopics:

I. General Information: 1. Respondent 2. Organization/Industry 3. Structure/Infrastructure type
II. Risk Assessment: 1. Identifying pertinent hazards and intensities 2. Analyzing system performance and potential failure modes 3. Assessing the consequences of failure 4. Risk ranking and acceptance criteria
III. Risk Management: 1. Eliminating or reducing exposure to hazards 2. Designing component/system to reduce potential failures 3. Controlling the consequences of failure 4. Risk communication
IV. Additional Comments and Suggestions

Important Note: If you have Adobe Professional, the filled form can be saved as pdf. If you DO NOT have Adobe Professional, the information you enter will be LOST unless you PRINT the document after filling out the form and send it either as pdf or scanned copy. Please send the completed form by March 27, 2014 to: gfi[email protected] CC: [email protected]

ASCE/SEI TECHNICAL COUNCIL ON LIFE-CYCLE PERFORMANCE, SAFETY, RELIABILITY, AND RISK OF STRUCTURAL SYSTEMS

TG3: Assessment of Structural Infrastructure Facilities and Risk-Based Decision Making

Questionnaire for Researchers/Code writers on implementation of risk-based methods

The goal of this questionnaire is to gather information about approaches to advance the use of risk-based methods for the design and the management of civil structure and infrastructure systems. Your input will complement collected information on the state of practice and will be synthesized and published in an ASCE report and will be used as the focus of a workshop that will take place in September 2014 in Washington DC.

The questionnaire is divided into the following topics and subtopics:

I. General Information: 1. Respondent 2. Organization/Industry 3. Structure/Infrastructure type
II. Risk Assessment: 1. Identifying pertinent hazards and intensities 2. Analyzing system performance and potential failure modes 3. Assessing the consequences of failure 4. Risk ranking and acceptance criteria
III. Risk Management: 1. Eliminating or reducing exposure to hazards 2. Designing component/system to reduce potential failures 3. Controlling the consequences of failure 4. Risk communication
IV. Additional Comments and Suggestions

Important Note: If you have Adobe Professional, the filled form can be saved as pdf.

If you DO NOT have Adobe Professional, the information you enter will be LOST unless you PRINT the document after filling out the form and send it either as pdf or scanned copy. Please send the completed form by March 27, 2014 to: gfi[email protected] CC: [email protected]

APPENDIX B

Answers to Section I of the Survey

This appendix provides only the respondents’ answers to the technical questions in Section I of the questionnaire. Please note that the answers of the respondents are not ordered by name as in the list given in Table 1-1. The questions are

I.1 Personal data, including name and contact information
I.2 Affiliation
I.3 Type of structures/infrastructure of interest
I.4 Pertinent codes/standards/specifications/design guidelines being used
I.5 General description of the approach used for implementing structural risk assessment and risk management in structural design and evaluation
I.6 Organization structure for a risk analysis team and how information should flow between its members
I.7 Respondent’s background/research/expertise
I.8 Type of training recommended for risk analysts.

Answers to question I.5

Gray shading indicates the answers in the version of the survey distributed to Engineers, and light blue, the answers of Researchers.

In a few words give a general description of firm/agency’s approach to risk assessment and risk management. (version for Engineers)

Give a general description of the approach for implementing structural risk assessment and risk management in structural design and evaluation. (version for Researchers)

Primarily risk tolerances are established by owner and implicitly communicated through criteria documents. This is usually relevant to government projects. For private projects with no specified criteria we establish design basis threats and give client options on how much vulnerability they can accept. In rare circumstances we assess the return period of certain events and balance risk and mitigation based on threat likelihood.

We use risk management at the project, unit, and corporate level. Corporate risks are identified annually by senior leaders and communicated throughout the agency. Unit risks are assessed [and] rolled up annually for consideration as part of the national planning process. Unit activities must address top risks. At our firm, we define risk as a product of Vulnerability, Hazard and Exposure. Hazard is generally defined as a threat to the structure. Probabilistically, it is defined as the likelihood of a hazard to occur. Vulnerability is a preexisting condition of the structure by which a hazard is more or less likely to enable or mobilize failure. Probabilistically, it is defined as the likelihood of a failure to occur given the existence of a hazard. And Exposure is the consequences associated with a failure. Of course, there is an Uncertainty Premium which is an adjustment factor to account for the inherent uncertainty of the evaluation methods used to generate the data that is used for risk assessment. The agency does this by applying and enforcing a set of technical requirements on plant design and operations, described in Title 10 of the Code of Federal Regulations (10 CFR). Generally, these are written in terms of traditional engineering practices such as “safety margins” in design, construction, and operations. The NRC also uses risk in a risk-informed, performance-based framework. Specific risk techniques have been applied to various areas, under different titles which include Probabilistic Risk Assessment, Integrated Safety Assessment (ISAs). We have specialized expertise and experience in risk analysis, hazard mitigation, benefit/cost analysis, and emergency operations planning for natural disasters and technological hazards, and anti-terrorism force protection (ATFP) planning and blast analysis. We typically provide seismic risk assessments (SRAs) for single buildings, or multiple individual buildings within a portfolio, including garage structures, for due diligence, real estate investment decisions, and insurance purposes. We also sometimes use the Thiel-Zsutty method of obtaining seismic losses. Procedures are being developed for risk assessment of existing dams and levee systems, with numerous initial assessments performed in the past few years. The agency uses both deterministic and risk-informed approaches in its regulatory activities. The Agency has established regulatory tools and guidance for risk assessments. Probabilistic risk assessment methods are well developed and used for nuclear power plants. Some of the natural hazards have well developed methods for probabilistic hazard assessments (e.g., seismic) that is crucial for a probabilistic risk assessment. For some hazards, for example flooding, the approaches are under development. The risk assessment approaches are and will be used in connection with the post-Fukushima evaluations of operating reactors and licensing of new reactors. An industry standard, ASME/ANS-RA-S-2008, Addendum B, is available.

The purpose of risk assessment and management is to reduce risks to the society in general to an acceptable level. This level is not fixed and should be oriented toward social equity. This means that the acceptable risk level for a structure in hazard prone regions (e.g., mountains) would be different than in the almost hazard free regions (plains). Structural risk assessment requires a careful definition of relevant hazard scenarios, which include their probability of occurrence and their mostly cascading consequences. We minimize risks and maximize opportunities to successfully manage a program. We assess and develop risk management strategies to mitigate potential risks that can adversely impact a capital program. We developed our own approach that yields a numerical risk factor for each potential threat. Schemes are developed to mitigate the effects of the potential threat, and the risk factor is re-calculated for each scheme. The reduction in the risk factor in combination with cost of the scheme allows for a cost/benefit analysis. In general, the risk issues can be categorized in different classes: (1) identifiable as well as quantifiable risks, (2) expectable risks which are non-quantifiable at the relevant time, (3) non-identifiable and non-quantifiable risks, (4) unknown and extreme risks, so that a certain danger or a risk does not occur and the condition remains safe, preventive and protective measures can be taken. The design process in general is iterative in nature involving a process of selection and analysis until a set of specified performance criteria are satisfied. At the design stage the aim would be to achieve the set of performance targets set by the owner at minimum cost. The performance specs can be in terms of allowable probabilities for exceeding certain non-performance thresholds. Of course, more generally, one can cast the problem in terms of minimizing overall risk. But this is a difficult problem as it requires drawing the boundary within which the indirect components of risk are computed. Risk management also involves the process of maintaining a building. How often the building should be inspected, when should it be repaired, etc. Again, this requires solution of an optimization problem that minimizes the overall cost of inspection and repair, while not compromising the expected performance and safety of the building. Reducing the failure probabilities of the structure given those hazards. Reducing the consequences caused by structure failure. The Air Force does not currently use risk assessments unless the deterministic damage tolerance approach cannot assure safety. This limits the application to fatigue and subsequent fracture of metallic airframe parts. We are trying to get the USAF more comfortable with risk assessments in order to have them start considering using risk assessments more frequently.

Risk assessment and risk management can afford either a uniform or risktargeted design of structures and infrastructure systems that provides safer, more cost-effective, and sustainable systems. I am interested in promoting this approach considering life-cycle performance of structures and infrastructure subjected to multiple natural hazards. Hence I suggest that the associated approach requires consideration of the individual and joint risks of hazard occurrence, characterization of the probabilistic response and performance of structures subjected to such hazards, account for the time-dependent effects of aging and deterioration on hazard performance, and quantification of impacts or consequences of structural damage. (1) Identify hazards and loading intensities to be considered including recurrence, possibly multiple level consideration, if any. (2) Formulate/develop initial simplified model to determine necessary accuracy. (3) Couple load/ response model(s) for analysis in either a known framework; or develop a framework specific to the problem at hand as part of step 2. And (4) risk management would be periodic assessment and applying optimization algorithms for resource allocation to maximize benefit, i.e. minimize risk. My group tends to focus on genetic algorithms lately but we have used all sorts of simpler optimizations. My experience with risk assessment is for the purpose of assessing building performance given the occurrence of natural and man-made hazards (e.g., earthquake, wind, flood, and terrorist events). My approach involves quantification of the hazard in terms of some measure of intensity that can be equated to load (and the probability of occurrence or exceedance of that intensity), an assessment of the response of the structure to that load, a translation between structural response and potential structural damage, calculation of potential consequences in measurable terms such as costs, casualties, or downtime, and explicit or implicit consideration of uncertainties in each step of the process. To implement structural risk assessment and management in the design of infrastructure systems, performance goals need to be established first. These performance goals indicate the quality of service and the restoration time that is acceptable for a particular lifeline system, and then individual manuals and guidelines need to prescribe requirements that result in the attainment of such system-level performance goals. Inverse system reliability methods are required, and approximated computational tools should be implementable in practice, so as to gain widespread use. Incentives should be part of the riskbased design process so that infrastructure owners and stakeholders embrace the shift in design paradigms. (1) Identify all relevant threats for the specific systems of interest. (2) Develop probabilistic models of them. (3) For each specific system, develop probabilistic vulnerability functions under the action of each threat. (4) Perform a

conventional risk analysis, including all relevant threats. (5) Formulate lifecycle risk-related optimum decision criteria. (6) Evaluate the influence of possible repair and maintenance actions on the vulnerability functions and on the life-cycle utility functions for existing systems, using information about accumulated damage and structural health monitoring. And (7) make optimum decisions using the information and the criteria mentioned above. First, it is important to say that risk assessment is only meaningful within the context of decision making. The results of risk assessment are evidence, which should be complemented with other evidence that comes from different sources. Secondly, structural design and evaluation should move from a static evaluation to a time-dependent analysis. That means that assessments should focus on evaluating the reliability in terms of the analysis of time to failure. Thirdly, the analysis should have a wider scope and go beyond the mechanical performance of the system. For instance, it should involve at least costs/utility and financial aspects; e.g., incorporate LCCA. ISO 13824 (General principles on risk assessment of systems involving structures) could present a general description for implementing structural risk assessment. Section 5 in the ISO 13824 presents structural context. The structural context defines the role of risk assessment in the framework of risk management for structures. The typical structural contexts are (1) design basis, (2) assessment of existing structures, (3) assessment of exceptional structures and/or extraordinary events, and (4) risk-based decision making. After the establishment of structural context, risk assessment could be conducted. Risk assessment consists of establishment of structural context, definition of structural system, identification of hazard and consequences, risk estimation, risk evaluation and evaluation of alternatives for risk treatment in case that risk shall be treated. In my opinion, the most important issue that is not addressed in the appropriate way at the moment is the systemic impact on risk associated with any action on individual structures. When a new building or infrastructure component is designed, it usually impacts several infrastructure systems (if not directly, through interdependencies). For instance, when a new hospital is built, it changes the relative importance of the roads and portions of utility networks that serve it. Changing the consequences associated with the failure of such infrastructure systems (or portions thereof), it changes the risk associated with them. Including these considerations in the design and planning phases can lead to very different decisions. These aspects are very complex and therefore usually disregarded or, at least, not treated in a rigorous way. I believe that our approach to risk should go in this direction. Society must balance trade-offs of risk in the built environment with other demands of society; ones that often have more immediate and tangible shortterm benefits. Therefore, it is important to be able to communicate the longterm costs and benefits of risk management across the spectrum of societal

Society must balance trade-offs of risk in the built environment with other demands of society, ones that often have more immediate and tangible short-term benefits. Therefore, it is important to be able to communicate the long-term costs and benefits of risk management across the spectrum of societal needs. Issues such as public risk perception, public involvement, incorporating community values, overcoming incompatibility of lifetimes, and cost presentation methods must be understood in order to communicate with the public and with elected/appointed decision makers.

Purpose: Include security (defined herein as the absence of risk) as an independent performance criterion in multiple-criteria decision making for transportation infrastructure investment analysis; thus, risk will be considered as an additional criterion for carrying out prioritization for a large number of projects. Circumstances: The context of the decision making is the conduct of investment evaluation either for a specific facility or for a network of facilities. The former case is where we are deciding on the optimal set of preservation actions over the life cycle or remaining life of the facility. The latter case is where, at a given year, we are faced with a large number of assets that deserve some preservation project but lack the resources to implement all these projects, so we carry out optimization to identify the optimal portfolio using knapsack (binary) optimization. The objective function and constraints are expressed in terms of multiple performance criteria, which traditionally do not include security (lack of risk). So, we are introducing security into the formulation, and we are carrying out scenario analysis and trade-off analysis to quantify various trade-off relationships between the various performance criteria. Conditions: To incorporate the security rating into multi-criteria evaluation as a performance measure, the security rating can be used for the asset in its current state as a generic performance measure that is the same for each project alternative, as an “increase in security rating” that is alternative-specific and different for each proposed improvement, and as a “final security rating” which is again alternative-specific and based on the enhancement to security that the improvement provides.

A risk-based criterion may be introduced in design and evaluation of structures. Application should be region specific. Introduction of such a criterion may allow more flexibility in ULS (ultimate limit state) design.
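
The transportation-investment response above mentions selecting an optimal portfolio of preservation projects with knapsack (binary) optimization under a budget constraint. The following is a small, self-contained sketch of that formulation; the project names, costs, and benefit scores are assumed.

```python
from itertools import combinations

# Assumed candidate preservation/security projects: (name, cost in $M, benefit score).
projects = [
    ("deck rehabilitation A", 4.0, 9.0),
    ("scour retrofit B", 2.5, 6.5),
    ("seismic retrofit C", 6.0, 14.0),
    ("security upgrade D", 1.5, 3.0),
]
BUDGET = 8.0  # $M

def best_portfolio(projects, budget):
    """Exhaustive 0/1 selection; real programs would use integer programming
    or dynamic programming over a discretized budget."""
    best, best_benefit = (), 0.0
    for r in range(len(projects) + 1):
        for combo in combinations(projects, r):
            cost = sum(c for _, c, _ in combo)
            benefit = sum(b for _, _, b in combo)
            if cost <= budget and benefit > best_benefit:
                best, best_benefit = combo, benefit
    return best, best_benefit

portfolio, benefit = best_portfolio(projects, BUDGET)
print("selected:", [name for name, _, _ in portfolio], "| total benefit:", benefit)
```

In a multi-criteria setting, the single "benefit score" used here could be replaced by a weighted combination of performance criteria, including the security rating described in the response.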

The approach for structural risk assessment can be tailored for a component, structural sub-system, or overall infrastructure system, depending on the fundamental characteristics of the decisions it is intended to support. Risk assessment can be implemented at the design stage of a new structure or at the evaluation stage of an existing structure. The key steps of risk assessment and management include: Define Objectives: Define performance targets that the structure should satisfy in terms of safety, security, functionality, serviceability, and durability. Identify Structural Systems/Components: Identify systems and components that contribute to the safety, serviceability, durability, and functionality of the structure and collect information, including analysis of dependencies and interdependencies. Assess Risks: Evaluate the risk, taking into consideration the likelihood of failure under different possible hazards and their potential direct and indirect consequences. Implement Risk Management: Make risk-informed decisions and implement risk management approaches to control, accept, transfer, or avoid risks through different mitigation measures, including prevention, retrofit, protection, and recovery activities. Monitoring and control, Communication.

Answers to question I.6

Gray shading indicates the answers in the version of the survey distributed to Engineers, and light blue, the answers of Researchers.

Please describe the structure of the risk analysis team and describe how the information flows between team members. (version for Engineers)

Please recommend a structure for a risk analysis team and describe how information should flow between its members. (version for Researchers)

Usually the team is small, and strict document control serves as the information flow, along with person-to-person meetings.

Not sure what “Risk Analysis team” refers to? Corporate level usually only one or two people. Units also have one or two. Communication is via web-conferences and SharePoint submission.

The team usually consists of the bridge owner’s representative, the bridge inspector, the engineer who is performing the risk assessment, as well as risk experts. Risk assessment is performed when bridges in a network should be prioritized for either replacement, retrofit, or capital planning. In this case, the information flows from the bridge inspector, to the engineers, to the consulting firm in charge of the assessment, and finally to the client, or the owner of the bridges.

The NRC has a significant number of risk analysts that have either general PRA expertise and/or are focused on specific areas (e.g., Seismic/Flooding Hazards, Human Reliability Analysis, Internal Fire). Traditionally, risk analysts will work in individual groups or divisions focused on specific activities (i.e., licensing, oversight) and applications (i.e., operating reactors, new reactor licensing, advanced and small reactor designs). There is also a significant amount of interaction between risk analysts in different areas to support common projects where specific expertise may be needed.

Usually only one engineer conducts a site visit and performs the assessment; however, sometimes a junior engineer or architect is relied upon to collect field data. We also sometimes use junior engineers to research hazards and ground motions.

The risk assessment team is a multidiscipline team with facilitators from the USACE risk management center and technical members usually consisting of geotechnical engineers, hydraulic engineers, structural engineers, economists, and a member with a background in risk and reliability computation.

Generally, the risk assessment team consists of hazard specialists (e.g., geologists and seismologists for seismic hazard), plant response and fragility analysts (various engineering disciplines), system and risk analysts, and human factors specialists. Key interfaces are established among the activities of different groups to maintain overall coherency. The interactions between system analysts and fragility engineers are crucial to assure that all structures, systems, and components important to risk and their relevant failure modes are captured in a risk assessment.

The risk analysis team should include: (1) Hazard analyst, who may or may not be a structural engineer but is knowledgeable with regard to the hazards and actions that a structure may be exposed to; he collects relevant data from meteorologists, geologists, etc. (2) Action analyst, who transforms general hazards into actions on the structure and defines their magnitude and occurrence rate. (3) Failure analyst, who estimates the response of the structure for various hazards and their magnitudes; this results in estimation of the direct consequences of a failure, e.g., structural damage. (4) Consequence analyst, who estimates, for a given structural damage, the societal consequences and evaluates them, perhaps in monetary terms. Clearly, all members of the team should work closely together and exchange their experience.

In the current economic climate, many organizations continue to face significant financial pressures and uncertainties. Consequently, the need to deliver projects and programs which meet schedule and budget, whilst minimizing risk and maximizing opportunities, is a top priority. We understand the challenges facing our clients and we look to provide more certainty of delivery with lower costs and improved performance across programs and projects.

Normally, workshop sessions that involve all impacted disciplines are conducted. Each workshop might last up to a few days.

A risk assessment requires probabilistic information related to quantifying epistemic or aleatory uncertainties. Such uncertainties are best quantified from analysis of large statistical databases derived from operating experience, and field or experimental studies. The team should include an engineering risk analyst who is an expert in statistics/probability and well-versed in the methods of risk and reliability analysis. The team should also include one or more structural analysts/designers, and someone who can assess direct and indirect costs of various design alternatives, various damage states, inspections, repair, etc. The overall framework for the analysis should be developed by the risk analyst, with the others contributing to their specific parts in the framework.

Systems modeling, Risk analyst, Domain knowledge engineers, Management (policy/decision makers). A Risk Analysis Team should consist of members with social, environmental, economic, and probabilistic background to evaluate various aspects related to hazard occurrence, failure occurrence, and failure consequences, and make rational risk-informed decisions. Risk Assessment Team should be responsible for: Risk Identification, Risk Analysis and Risk Priority. Risk Control Team should be responsible for: Risk Strategy, Risk Action and Risk Closure. Lead / overall system risk analyst / party interested in applications of probabilistic methods (this person may have core competencies in one or more of the areas below). Hazard modeler(s), Load characterization specialist, or Environmental exposure analyst (passes info on hazard, load, or exposure potential). Structural system behavior/response and vulnerability modeling (provides estimates of anticipated structural performance for individual structures or portfolios of structural infrastructure). Infrastructure systems modeler (considers the estimates of structural component performance to evaluate reliability of overall infrastructure network providing estimates of likelihood of system). Consequence modelers from diverse fields such as Economics, Social Sciences, Public Health, etc. (provide probabilistic estimates of the consequences of damage/downtime in structure and infrastructure systems). Again, I’ll focus on hazards since this is my area: (1) Each hazard should have a team leader, or one person could potentially handle two. (2) The level of complexity needs to be decided before the model is developed, for example, avoid having a complex nonlinear model and a very simple binary model. And (3) each group or team or individual would provide either a conditional distribution or an unconditional, but conditional may be preferred so it can be de-conditioned in the same way. In my experience, this process has involved an expert in the probabilistic assessment of the given hazard (e.g., seismologists in the case of seismic) who communicates the intensity measure and the probabilities associated with it, a structural engineering practitioner who is skilled in response simulation, performance assessment, and loss estimation, and a decision-maker in terms of an owner or other stakeholder who is responsible for making decisions based on the risk information. Information flow to the decision-maker must be nontechnical in nature and must be aligned with the types of decisions that the stakeholder is accustomed to making (e.g., construction costs, cost-benefit ratios, downtime). In addition, communication of risk in probabilistic terms has proven to be very difficult and often needs to be boiled down into discrete decision-making points.

A lifeline system is used in this question. Take for instance the power transmission and distribution systems, including their facilities, equipment, and other distributed structures. A risk analysis team should be divided so that one subgroup identifies the failure modes of the system and its components, another subgroup identifies the hazards in the region the system is located, and another subgroup evaluates the consequences of failure modes triggered by potential hazards. This will constitute a risk analysis exercise, which should be used as input for risk management exercises that explore mitigation, trade-offs, and strategic planning as well as decision making. The team should include (a) A specialist in each of the relevant threats to be considered in the analysis, (b) One or more engineers with capacities in the design of structures and foundations of the different systems considered and in the estimation of their vulnerability functions, (c) One or more specialists in probabilistic risk analysis, including the quantitative assessment of possible consequences for different types of damage, (d) For decisions related with acceptable risk levels, the team should include persons with knowledge about social attitudes related to this concept. Considering bridges belonging to a transportation network, a risk analysis team needs information on seismic hazard, fragility curves associated with bridge, assessment of the impact of direct and indirect costs, and post-disaster functionality of road network. If “systemic risk” is considered, a risk analysis team will have to involve analysts with competencies in different branches of civil engineering. The flow of information should be handled in a very rigorous way so that each analysis can use the results of the others to feed his own calculations. We have not developed a team structure. Instead, we plan to adopt that developed by AASHTO in the AASHTO Vulnerability Guide for team composition. Civil engineering team should interact with social scientists to prepare preliminary design plans that can be used by decision-makers. Law makers should also be involved in the framework at a point when it might be beneficial to introduce and enforce new laws in order to facilitate sustainable and resilient (not just “safe”) infrastructure design. A possible structure for a risk analysis team could include several members who will undertake the following tasks: Risk identification, Risk assessment, Risk mitigation, Risk acceptance [Communicate risks to stakeholders (Internal and External stakeholders), Review implemented mitigation measures and monitor risk].

Answers to question I.7

Gray shading indicates the answers in the version of the survey distributed to Engineers, and light blue, the answers of Researchers.

Please describe respondent’s duties and responsibilities and role in the team. (version for Engineers)

Please describe your research focus / background and how it contributes to key elements of question 5. (version for Researchers)

Completing calculations and drafting reports to be reviewed by a supervisor.

Not sure what “Risk Analysis team” refers to? Team leader.

The respondent is part of the risk assessment team in the consulting firm. My responsibilities include identification of hazards, exposures, and vulnerabilities for bridges, and formulating the risk.

My role is specific to the evaluation of the risk significance of events and findings identified by NRC inspectors or self-identified by the licensees.

Principal investigator for the firm’s research work on wind, earthquake, tsunami, and flood hazard analysis with emphasis on GIS mapping, building damage data, and loss function studies.

I am responsible for both conducting and reviewing SRAs as well as managing and providing guidance to junior through senior associate engineers. I have been responsible for developing and updating company policies regarding seismic risk assessment.

I am a structural engineer and also perform the risk calculations.

As a manager, train and guide staff members in review of new plant applications, risk assessments, and all of the issues associated with natural hazards for both new and operating reactors. Also, interface with the research group to identify needed regulatory research. Interact with standard bodies and code committees. Collaborate with international regulators and IAEA on issues of common interest.

My research addresses all elements of Risk Assessment and Risk Management. However, I address these elements in the planning process in a rather crude manner. Additionally, for management purposes, I try to group relevant situations in order to deal not only with one structure but with the whole inventory.

Structural analysis and design input if required. Team leader and coordinator. My research focuses on monitoring, analyses techniques and in particular on special models, processes and methods of structural analysis linked with the life cycle management of engineering system. My research primarily deals with the methods of risk and reliability analysis. Main research interests are risk, uncertainty and decision analysis, and systems engineering applied to civil, infrastructure, energy, defense and maritime fields. My research focus is in life-cycle structural engineering under uncertainty with emphasis on inspection, maintenance, repair, monitoring, resilience, risk, sustainability and optimization. We are working on getting the risk assessment method correct, introducing simple techniques such as plain vanilla Monte Carlo simulations so that people become more comfortable with risk assessments. We are also looking at how to represent the loading distribution, the initial crack size distribution, and the fatigue crack growth curve. I have developed an approach to analyze the risk acceptance attitude reflected in past decisions, building upon advanced decision models such as cumulative prospect theory and subjective utility theory. Utilizing the approach, I have studied structural safety related decisions for understanding the nature of risk acceptance attitudes of various stakeholders towards extreme natural and manmade hazards including earthquakes, hurricanes, fires in a nuclear power plant, terrorist attacks, etc. My research focus is condition and safety evaluation of bridges and bridge elements. Condition evaluation is normally based on visual inspection and it is used to determine the condition of a deteriorated element. In condition evaluation, element is categorized into condition levels or states. Safety evaluation requires special testing equipment. Testing can use either destructive or nondestructive testing technique. Condition evaluation results can be quantified in terms of a condition index for the member or bridge. Similarly, safety evaluation results can be quantified in terms of a safety index for the member or bridge. My background is in civil/structural engineering. Interests include: vulnerability modeling of bridges and other structures, multi-hazard risk assessment, and life-cycle analysis of infrastructure sustainability. I focus on reliability modeling
of portfolios of structures and their intersection with impacts on the overall infrastructure network. Risk assessment due to natural hazards; both individual hazards and combined hazards. My focus is on the effects of hazard mitigation and how best to apply typically limited resources to have either the largest impact on a single structure, a community, or a region. My focus and background have been as an engineering practitioner responsible for performance assessment and communication with stakeholders for 16 years, and then in research and development of new assessment technologies funded by FEMA and NIST for the past 9 years. My research focuses on the reliability and risk assessment of distributed infrastructure systems. This perspective informs my recommendation of starting with performance goals and service restoration times, to then reverse-engineer the requirements that individual components should meet to attain system-level design goals. This target system-level performance perspective is advantageous for geographically distributed systems, as it provides local flexibility/innovation on how to meet requirements, while achieving regional performance goals. This approach also reduces uncertainty and aids utilities with planning the availability of restoration crews, siting of equipment and spare parts, etc. I am currently working on time dependent reliability related to degrading infrastructure systems; and applied mostly to life-cycle analysis. My research interests include earthquake engineering, life-cycle structural performance, safety and reliability in structural engineering, and application of probabilistic concepts and methods to the design of civil structures. This could contribute to identification of hazard and consequences, and risk estimation. I am interested in risk and resilience of transportation networks, with focus on bridges. Lately I have been studying all the issues associated with infrastructure interdependencies and I came to believe that a holistic and systemic approach is the only way to have an accurate quantification of risk. My background is in computational mechanics and probabilistic analysis. My research has primarily been on structural reliability, and more recently on natural hazard risks. I have reviewed social psychology literature in terms of understanding public risk perception and risk communication as applied to natural hazard risks. I have current research on the use of structural reliability methods for community portfolio risk and resilience, as well as aspects of general information theory for uncertainty analysis.
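
One of the responses above advocates "plain vanilla" Monte Carlo simulation as an approachable entry point to risk assessment. A minimal load-versus-resistance sketch is shown below; the lognormal resistance and Gumbel-type load parameters are assumed for illustration only and do not correspond to any respondent's data.

```python
import math
import random

def simulate_pf(n: int = 200_000, seed: int = 1) -> float:
    """Crude Monte Carlo estimate of P(load >= resistance)."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        # Assumed lognormal resistance (median 100 kN, ~15% dispersion).
        resistance = rng.lognormvariate(math.log(100.0), 0.15)
        # Assumed Gumbel annual-maximum load, sampled by inverse CDF.
        u = rng.random()
        load = 60.0 - 8.0 * math.log(-math.log(u))
        if load >= resistance:
            failures += 1
    return failures / n

if __name__ == "__main__":
    print(f"estimated annual Pf ~ {simulate_pf():.1e}")
```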

Transportation project prioritization uses performance measures that are related to the transportation asset, its operations, and its environment. However, in the state of practice, evaluation does not consider directly the likelihood of natural or man-made threats, the infrastructure resilience, or the consequences of the infrastructure damage in the event that the threat occurs. Thus, during the prioritization of investments, assets of low security do not receive the due attention they deserve. The inclusion of security considerations in prioritization introduces a much-needed element of robustness in investment prioritization. However, the inclusion of investment security impacts leads to an increase in the number of performance measures for the investment evaluation. A methodology is developed to quantify the overall security level for an asset in terms of the environmental threats it faces, its resilience or vulnerability to damage, and the consequences of the infrastructure damage. The overall framework consists of the traditional steps in risk management, and a specific contribution is in the part of the framework that measures the risk. The methodology is applied to a given set of assets by measuring the risk (security) of each asset and prioritizing security investments across multiple assets using multiple criteria analysis. In sum, we intend to show how security (absence of risk) could be incorporated proactively in the transportation planning process and not as an after-the-fact consideration in transportation infrastructure management.

My research focuses on risk evaluation of highway bridges under natural hazards. The risk is defined in terms of societal loss resulting from structural damage during hazard events. A major part of this risk evaluation process involves the assessment of structural performance under natural disasters. This provides a “risk-rating” for structures passing (designed for) ULS criteria. With the knowledge of quantified risk, it is possible to revise the original design to have a uniform risk distribution among similar structures.

Research focus is on the development of practical and reliable approaches for risk-based design and rehabilitation of infrastructures with an emphasis on management of aging critical concrete infrastructure. The risk management of existing structures presents bigger challenges than risk management during design, as the implementation of risk mitigation measures for an existing structure is much more costly than for a new structure not yet built.

Answers to question I.8

Gray shading indicates the answers in the version of the survey distributed to Engineers, and light blue, the answers of Researchers.

Do risk analysts in your organization undergo special training? (version for Engineers)

What type of training should risk analysts undergo? (version for Researchers)

Approximately 1 week introductory training course upon hire to learn our company software product and project types. Monthly “brown bag” training presentations. Scheduled one-on-one training for special skills. No. Yes: annually training on the agency risk management process. Yes, risk analysts undergo a series of specialized training depending on the area and application they are involved with. Our structural engineering seismic consultants typically come from backgrounds in design, have many years of experience, and are licensed in multiple states. We prefer that our consultants have an SE license. Training to perform SRAs is done in-house and initial projects are performed under direct supervision. Some training is provided in a course associated with the USACE risk analysis computer program, DAMRAE. Other training for performing risk assessments is provided by the USACE risk management center. Yes. There are several well organized courses on various topics of probabilistic risk assessment including natural hazard risk assessments. In its intern program, the Agency includes several risk-assessment courses. These courses cover the range of subjects starting from the basic probability and statistics, plant system analysis, and human factor analysis. The risk analyst has to have an extensive training in structural analysis, which is the basis. He/she also has to have in-depth knowledge of real actions on structures (not just code actions) and their probabilistic nature. Furthermore, it is advantageous to have basic knowledge in geology and hydrology in order to be able to get the right data from professionals in these areas. ASCE or University providing: periodical participation on workshops – knowledge exchange. A good knowledge of applied probability and statistics, particularly Bayesian statistics. If the application is to structures, then knowledge of structural reliability theory and methods is also essential. Systems modeling, Probabilistic and statistical analysis, Reliability engineering, Economic valuation, Decision analysis, Data collection including expert opinion elicitation. Training in all aspects related to occurrence of hazards, failure analysis, and consequence analysis (involving economic, environmental, and social aspects). Probability and statistics, decision-making (CEE and Industrial Systems Engineering). Modeling risk to structures and infrastructures relies on fundamental understanding of probability and statistics. Further to provide solution to risk engineering problem, fundamental understanding of decision-making is essential.

An inspection training is necessary for a risk assessment specialist. A course-based training is necessary for the designer.

Ideally, university programs should be constructed to afford graduate level / post-graduate training in this multi-disciplinary topic. One should first establish a core competency in one of the contributing fields. Multidisciplinary research centers are also primed to provide such training.

As an academic, I see this as graduate education. However, if there was a less subjective procedure developed, perhaps more like a LEED score for risk analysis, then maybe there would be SCE or ATC-style training courses.

I think training in probabilities and statistics is needed to be able to use, interpret, and communicate risk information to decision-makers.

Risk analysts require different skills and knowledge ranging from probabilistic methods and their applications in civil engineering, to decision theory that spans decision trees and utility theory, as well as a working understanding of behavioral issues affecting decision making. Hence, risk analysts require a unique combination of mathematical modeling rigor, structural and infrastructural analysis methods, and a working understanding of human subjectivity for decision making.

He has to be trained in the following areas: (1) Probability/statistics, (2) Decision analysis, (3) Optimization, and (4) Financial and contractual aspects.

I think it is important for a risk analyst to learn the lessons from previous disasters.

Risk analysts should have a strong background in civil engineering (not merely structural) and graduate degrees. Being a risk analyst requires competencies that go beyond the normal BSc in civil engineering and allow them to interact with a multidisciplinary team such as the one described above. Graduate programs in universities provide this type of background.

Probability theory, structural analysis, community development, risk systems theory. I guess this could be provided through a specialized ASCE-type short course, as long as the participants had a strong background in structural engineering and an introduction to probability.

(1) Understand the assets/infrastructure characteristics they are analyzing. (2) Understand the risks affecting the assets/infrastructure they are analyzing. (3) Understand the interaction between the risks and infrastructure (consequences). (4) Understand methods to improve security of assets/infrastructure. (5) Understand methods to prioritize asset improvement projects to maximize the use of a budget.

Design practice and philosophy – DOT and FHWA. Recent advances in analysis and risk quantification – Researchers from academia and research labs. Policy and preferences for local and federal Government – Government agencies. Risk analysts should undergo training in the following domains: System engineering, Structural engineering, Mathematics/Statistics, Economics/ Finance.

APPENDIX C

Answers to Section II of the Survey

This appendix provides only the answers provided by each respondent to the technical questions in Section II of the questionnaire. The questions are

II.1 Definition of risk
II.2 Evaluation of risk for new and existing systems
II.3 Criteria for the analysis of new and existing systems
II.4 Design life for evaluation of structural systems
II.5 Service life for existing systems
II.6 Frequency of inspections, evaluation rating, and repair
II.7 Pertinent hazards to structure/infrastructure systems
II.8 Estimation of hazard occurrence
II.9 Quantitative probability versus qualitative ranking of hazard occurrence
II.10 Estimation of hazard intensity levels
II.11 Quantitative probability versus qualitative ranking of hazard intensity
II.12 Consideration of deterioration
II.13 Consideration of maintenance and routine inspections
II.14 Determination of performance (or damage) levels
II.15 Measures of performance (or damage) levels
II.16 Analysis of structural performance, damage and possible failure modes
II.17 Estimation of probability of failure
II.18 Pertinent consequences of failure
II.19 Combination of consequences of failure
II.20 Risk measures
II.21 Consideration of historical data
II.22 Presentation of risk assessment results
II.23 Risk acceptance criteria.

Answers to question II.1 Both practicing engineers and researchers were asked to comment on the following definition of risk: Risk assessment involves the quantification of risk that is defined as the product of the probability of failure and the consequence of failure for systems subjected to different hazards or combination of hazards. Estimation of probability of failure involves the evaluation of (a) the probability of occurrence of a particular type of hazard or a combination of hazards; (b) the maximum intensity of the hazards that the system is expected to be exposed to within its service life; (c) the probability that the system will exhibit a particular level of damage/local failure/collapse should the hazards take place. Evaluating the consequence of failure requires the assessment of (a) the cost of maintenance/repairs/replacement of system; (b) user’s costs and life safety; (c) the failure’s impact on local and regional economic activity/society/environment; and (d) the political/civic/morale ramification to the affected communities. Gray shading indicates the answers in the version of the survey distributed to Engineers, and light blue the answers of Researchers. Do you agree with the above definition of risk? (Please provide your interpretation.) This is the general consensus definition of risk assessment. We usually use a decision-making process that does not consider the likelihood of occurrence of inherently unpredictable events. Yes, seems quite reasonable, although there could other equally valid definitions of risk. No. Risk is the effect of uncertainty on objectives (ISO31000). An effect is a deviation from the expected—positive and/or negative. Objectives can have different aspects (such as financial, health and safety, and environmental goals) and can apply at different levels (such as strategic, organization-wide, project, product and process). Risk is often characterized by reference to potential events and consequences, or a combination of these. Risk is often expressed in terms of a combination of the consequences of an event (including changes in circumstances) and the associated likelihood of occurrence. Yes, absolutely. Our interpretation of risk concurs with this statement. In fact, the cost of failure is not just the cost of replacement of the failed structure. It shall also address the economic impacts on the population centers in the vicinity of the bridge, interruption of transit of goods and traffic, as well as impacts on emergency responders, and security forces in case of a terrorist attack or natural disaster.
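
As a purely numeric sketch of the definition quoted at the head of this question (risk as the product of a failure probability and its consequence, aggregated over postulated scenarios), consider the following; the scenarios, probabilities, and costs are assumed for illustration only.

```python
# Assumed scenarios: (description, annual probability of the failure event, consequence in $).
scenarios = [
    ("local damage under 100-yr flood",         5e-3,  0.8e6),
    ("bearing failure under design earthquake", 1e-3,  3.0e6),
    ("collapse under extreme scour",            2e-4, 40.0e6),
]

total = 0.0
for name, p_f, c_f in scenarios:
    contribution = p_f * c_f   # expected annual loss from this scenario
    total += contribution
    print(f"{name:<42s} ${contribution:>10,.0f}/yr")
print(f"{'total expected annual loss':<42s} ${total:>10,.0f}/yr")
```

Several of the responses that follow argue for generalizing this product to a sum or integral over damage states and intensity levels, or to a full distribution of losses rather than an expected value.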

In general terms, yes. The NRC follows the definition of the “risk triplet,” which is also discussed in the ASME/ANS PRA standard as the “probability and consequences of an event, as expressed by the ‘risk triplet’ that is the answer to the following three questions: (a) What can go wrong? (b) How likely is it? (c) What are the consequences if it occurs?” Yes. I agree with the definition. However, as a practitioner in the private sector, my clients would be more specifically concerned with liability, financial exposure, downtime, business interruption, occupancy resumption, etc. Essentially yes. USACE does not try to compute indirect consequences although they are sometimes assessed qualitatively. In general, I agree with the definition with some minor exception. The item (b) in the second paragraph is not the way a hazard is used in our risk assessments. We look at all levels of intensities with the frequency of their occurrences in the quantitative risk assessments. The service life does not enter in establishing the probabilistic hazard. Yes. In my view evaluating means qualifying in monetary terms. The reason for this lies in ability to compare risk reduction measures with other field of activities and make a case in front of the politics. There is a lot competition for public money and most beneficial actions should be taken. Yes. A conclusion is always easy for the team involving owner, design team and the construction if the risk assessment is studied keeping in mind the overall structure failure and failure of non-structural elements. The above definition makes sense and is consistent with the approach we developed internally. More or less I agree, but would extend the definitions in the following way: identifiable and quantifiable risks = deterministic risk analysis, expectable risks, which are nonquantifiable at the relevant time = probabilistic risk analysis, non-identifiable and nonquantifiable risks = empirical risk analysis, unknown, extreme risks = dynamic phenomenological scenario analysis. I would say it is an “integration” of the product of probability of failure and the consequence of failure, since the performance of a structure may not be a binary state. Furthermore, I think we should define “failure” as the event of exceeding a certain non-performance threshold, e.g., the event that will result in closure of a facility for more than one month.

Generally, yes, I agree, but using the “product of Pf and Cf” leads to loss of information by obtaining expected values. We should account for uncertainties in Pf and Cf. I agree with the above definition. The costs of failure investigation and lawsuits have also to be considered. Yes. Yes, I agree with the definition of risk in general. However, I believe the risk should be computed as the sum or integral of the products of probabilities of failure/damage and the corresponding consequences represent different limit states, not only considering the maximum intensity of hazard in the system’s service period. It is a very general description which contains all of the cases listed above (i.e., cost, life safety, environmental effect, etc.). For this reason, it is a thoughtfully created and well stated definition. In general I agree with the above definition. Also consider that this hazard risk assessment should account for accumulated damage (e.g., from aging/deterioration) that affects performance under punctuated hazard events. I appreciate the broad indication of consequences in the above definition to include economy, society, and environment. However, the terms “local” and “regional” offer geographic constraint to these impacts. In reality, they may span widely at national and global scales. Also, the temporal aspect of consequence modeling is of interest. Hence, the definition could be loosened to suggest “multiple spatial and temporal scales.” Yes, seems like a good definition. However, it seems that the probability of failure terminology has evolved over the last 10 years—I would prefer to see it as just probability. Yes. I don’t have anything meaningful to add in terms of interpretation of the above. I agree with the definition overall (barring some details clarified below), as the given definition tends to align with broader statements used across many fields and endorsed by professional societies, like the Society for Risk Analysis. More specifically, risk evaluation should be interpreted as a two-phase process, where risk assessment and risk management lead each of the phases. Risk assessment answers the following questions: What can go wrong? What is the likelihood of going wrong? and what are the consequences of going wrong? Then, risk management and control is concerned with answers to the following questions:
What can be done and what options are available? What are the trade-offs in terms of costs and benefits? and what are impacts of current decision on future options? Note that these six questions spanning risk assessment and risk management are broad enough, that in principle they include emerging notions, such as resilience and sustainability. However, it is my sense that risk in practice is narrowly defined today, as in the code when limited to probabilities of collapse and missing the entire set of dimensions covered by the questions of risk assessment and management of the broader risk analysis community. In general terms, I agree with the definition, with some reserves about the most adequate probabilistic indicators of risk, both for a single hazard and for superposition of several ones. For instance, the probability of failure of a system subjected to a natural hazard such as earthquake or strong wind depends not only on the probability distribution of the maximum intensity that may occur, but also on the expected number of times each type of hazardous event may occur, especially when the system vulnerability keeps growing with time, as a consequence of damage accumulation. Also, even if damage accumulation is disregarded, the failure condition may be reached for an intensity lower than the maximum that may occur. The definition presented above looks similar to an expected value of losses, which is a traditional approximation to the formal definition, which states that risk is a function of losses; i.e., probability of having a certain level of losses, and not a single value (see Vol. 1, No. 1 of Journal of Risk Analysis). Note that the definition in terms of a function of losses involves in a very articulated way on one hand the relationship between hazard and system characteristics, and on the other the set of all possible consequences. Yes. This definition is consistent with that in ISO 13824 (General principles on risk assessment of systems involving structures). First, the definition of risk as product of probability of failure and consequences is always criticized by risk analysts of fields different from CE. That product only provides the expected loss. In the long-term we should also promote the use of a more complete and rigorous use of the term “risk.” However, for now, I believe that using a very simple definition and computing just the expected loss could be enough. For the estimation of the probability of failure, I think that point (b) is very confusing. The “expected maximum intensity” is something that from a probabilistic point of view does not make much sense. Also, since in this context the word “hazard” has a very precise mathematical meaning, I would avoid using it to indicate the various “sources of risk.” For the evaluation of consequences, I completely agree with the definition. As I mentioned, I believe that points (c) and (d) are usually not emphasized enough.
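
Several responses above argue that risk should be represented as a distribution of losses (for example, a loss exceedance curve) rather than a single expected value. A minimal Monte Carlo sketch is given below; the event rates, damage probabilities, and loss values are assumed.

```python
import random

rng = random.Random(0)
YEARS = 50_000   # simulated years of exposure
RATE = 0.05      # assumed hazard events per year (Poisson)

annual_losses = []
for _ in range(YEARS):
    loss = 0.0
    # Sample events in this year from a Poisson process via exponential gaps.
    t = rng.expovariate(RATE)
    while t < 1.0:
        u = rng.random()
        if u < 0.02:      # assumed chance of heavy damage, given an event
            loss += 5.0e6
        elif u < 0.30:    # assumed chance of moderate damage
            loss += 0.5e6
        t += rng.expovariate(RATE)
    annual_losses.append(loss)

annual_losses.sort(reverse=True)
print(f"expected annual loss ~ ${sum(annual_losses) / YEARS:,.0f}")
for p in (0.01, 0.002):  # annual exceedance probabilities of interest
    print(f"loss exceeded with annual probability {p}: "
          f"${annual_losses[int(p * YEARS)]:,.0f}")
```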

Basically yes. Social scientists often point out that risk is a function of both the probability and consequence of failure but should not be restricted to the product. In the second paragraph, items (a) and (c) are expressed in terms of probability, but item (b) in terms of expected value. I find this mixture confusing (and perhaps misleading). In the third paragraph, I am not sure why maintenance costs are listed as part of the consequence of failure. Certainly changes to maintenance could be, but these would normally be thought of as additional costs, not maintenance costs. Sustainability is usually thought of as a three-legged stool, and two are mentioned in the third paragraph (economic and political/social, although they are not grouped as usual). The third, environmental or ecological, has been omitted.

The risk assessment model described above is currently the accepted model of risk assessment, but there is room for improvement. For example, the quantification of risk using just the product of the probability of failure and the consequences should be improved by multiplying the probability of a threat occurring by the consequences and then dividing by the asset resilience (probability of the system failing). This would be interpreted as a security rating, with high threat occurrence and high consequences divided by low resilience equating to a high security rating; therefore, the asset should be considered for security improvements, and vice versa. Additionally, the concept of asset resilience should be incorporated into the security quantification equation as a separate factor just focusing on characteristics of the asset. The probability of threat should just incorporate the probability of a threat happening, while the concept of resilience captures the asset-specific characteristics. Furthermore, uncertainty should be incorporated into the risk analysis to capture the variability associated with the factors that influence the three key factors of overall security: threat likelihood, consequence, and asset resilience.

YES. This is the strategy I use to calculate risk of highway bridges under a particular type of hazard or multiple hazards.

Yes.

Answers to question II.5

Gray shading indicates the answers in the version of the survey distributed to Engineers, and light blue, the answers of Researchers.

How do you estimate the remaining service life for existing systems?

We do not.

Varies, but typically no effective reduction in service life—for example, the Hazus Earthquake Model tacitly assumes no reduction in service life (full replacement value) when estimating expected dollar loss. As noted above (No. 4), in some
cases, the service life is shortened for existing structures that are slated for removal from service (e.g., before 50 years). This is a very hot research topic. There are different approaches for estimating the life cycle of a bridge. One of the most widely used ones is Markovian method which estimates the life cycle curve of a bridge based on the existing condition of its elements. Traditional aging of components is incorporated via reliability evaluations of component failure rates in system modeling. We typically don’t need to provide EUL for whole buildings or buildings’ structure. We use local experience, specialist sub-consultants, published guides, visual observation and maintenance records for mechanical and architectural systems. Procedures are described in USACE EC 1110-2-6062, Risk and Reliability Engineering for Major Rehabilitation Studies. There are specific requirements for maintenance and inspections to assure that plants remain within their design basis. The fatigue type of situation requires explicit evaluations. If an existing system complies with the code for the new structure the remaining service life is the same as design life. If an existing system complies with the maintenance code, which is based on actual loads and updated material properties, the service life is given by the development of actual loads and expected deterioration of material properties. This is however still a deterministic analysis. In some cases, owner would like to have a better estimate of the remaining service life and ask for full probabilistic analysis. This is also needed for hazards not covered by the code of practice. The service life is defined with the point in time at which the structures reach the acceptable risk level. Depending upon the system type (e.g., structural inspection using nondestructive tests and fatigue analysis can translate the remaining service life). Analyses techniques – Nonlinear probabilistic analyses techniques – and standard design formats. The remaining service life should be decided based on cost−benefit analysis (i.e., the cost of maintaining the building with an acceptable reliability level versus the benefit gained from it). When the cost of maintenance becomes larger than the benefit, then that marks the end of life.
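
A minimal sketch of the Markovian deterioration approach mentioned in one of the responses above: propagate a condition-state distribution forward with an assumed annual transition matrix and report when the expected condition crosses an assumed threshold. The transition probabilities and threshold are illustrative only.

```python
# Condition states 1 (good) to 4 (poor); annual transition probabilities assumed.
P = [
    [0.95, 0.05, 0.00, 0.00],
    [0.00, 0.92, 0.08, 0.00],
    [0.00, 0.00, 0.90, 0.10],
    [0.00, 0.00, 0.00, 1.00],
]

def step(state, P):
    """One year of deterioration: new_j = sum_i state_i * P[i][j]."""
    return [sum(state[i] * P[i][j] for i in range(4)) for j in range(4)]

state = [1.0, 0.0, 0.0, 0.0]   # element assumed to start in state 1
THRESHOLD = 2.8                # assumed acceptable expected condition index

for year in range(1, 101):
    state = step(state, P)
    expected_condition = sum((i + 1) * p for i, p in enumerate(state))
    if expected_condition > THRESHOLD:
        print(f"expected condition exceeds {THRESHOLD} after about {year} years")
        break
```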

I would use time-dependent reliability analysis. We calculate a single flight probability of failure which is essentially the hazard rate during a flight. We account for inspections and maintenance/repair in the calculations based upon their frequency as prescribed by the damage tolerance analysis. The frequency of inspection is specified in flight hours and is different for different components. Remaining service life of an existing structural system can be calculated using a deterioration rate for the system and/or its elements. For bridges, deterioration rate can be calculated based on the time-variant increase in applied loading and reduction of resistance due to deterioration. Techniques developed using straight lines, curve-based or Markov chain process-based deterioration models can be used. Ideally a probabilistic life-cycle analysis that projected aging and deterioration as well as exposure to extreme events. A threshold of acceptable performance must be set, and the ability of anticipated maintenance and prospective upgrade actions considered. Assessment condition; conduct test or develop calibrated model to determine capacity in units needed. I think structures can remain in service, as long as they remain in good repair and they continue to be a reasonable risk, given the current hazard at the site. I would judge the ability to continue using a structure based on the cost−benefit associated with a risk assessment for repairing/maintaining/strengthening the structure to keep performing at an acceptable level of risk going forward. The remaining life of systems should be obtained from analytical or computer simulation models calibrated with experimental tests of components as well as from field observations of failure modes under natural hazards. The degree of aging and deterioration of components of a system also informs the remaining system-level life. Note that the trends in the usage of services provided by the lifeline system operators and other infrastructural facility owners affect their service life as well. Using systems close to their design capacity certainly affects their lifetime negatively. On the flip side, there is growing trend of “demandside” effects, which if managed properly, via innovative incentives and technologies, could be used to extend the life of existing systems by shifting demand patterns, or allowing utilities to control the quality of service of opt-in users. With information provided by the groups I would take the difference between the design lives mentioned in response to Question 4 and the lifetime already spent by the system. In general, I am not interested in this concept, because I
ordinarily work with the expected failure rate or with the expected cost of damage, both per unit time (year).

Basic considerations: (1) Definition of the minimum (acceptable) performance level; (2) Definition of a maintenance or intervention policy. This includes an evaluation of costs. It requires: (1) Assessment of the system condition at the time of the evaluation; (2) Identification of degradation models (stochastically); (3) Definition of costs associated with the maintenance policy; (4) Estimation of the lifetime distribution or its surrogate measures (e.g., MTTF). Note that step 2 requires a detailed consideration of future loads and system performance over time.

The remaining service life could be estimated based on the comparison of the time-dependent reliability of existing systems and the target values.

I would collect data on the current conditions through inspection and monitoring. Then we have good techniques for the identification of damage, stiffness, and so forth. Based on this, I would perform again (not necessarily update) the risk analysis and residual life analysis (e.g., for fatigue) as done for new systems.

The same as for a new structure, but with updated structure condition and deterioration statistics.

We use a method that requires two inputs: a deterioration model and a specified threshold level of condition. The remaining service life is the time taken for the deterioration function to reach the threshold level of condition. Note that the deterioration model is a function of asset material type, asset design type, traffic volume, loading, climatic severity, soil acidity, and other variables. The actual set of significant variables depends on the asset under investigation.

Through system identification. The process follows the steps below: (1) Structural response from real-time monitoring, (2) Identification of structural stiffness by analyzing monitored data (i.e., vibration measurements) using an appropriate structural-identification (St-Id) technique, (3) Numerical simulation of structural performance based on identified stiffness parameters, and (4) Assessment of reliability and remaining structural life from numerical analysis.

The remaining life of existing structures can be estimated by considering the current state of the structure, rate of deterioration, system reliability, and acceptable probability of failure or limit state. Different models can be used to estimate the remaining life of existing systems, including: (i) Empirical models; (ii) Statistical models; (iii) Mechanistic models.
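
A minimal sketch of the two-input method described above (a deterioration model plus a specified condition threshold): the remaining service life is the time for the predicted condition to reach the threshold. The deterioration curve, its parameters, and the threshold are assumed for illustration.

```python
def condition(t_years: float, c0: float = 9.0, k: float = 0.02, n: float = 1.3) -> float:
    """Assumed smooth deterioration of a 0-9 condition rating with age."""
    return max(c0 - k * t_years ** n, 0.0)

def remaining_life(current_age: float, threshold: float = 4.0, dt: float = 0.1) -> float:
    """Years until the assumed deterioration curve reaches the threshold."""
    t = current_age
    while condition(t) > threshold and t < 200.0:
        t += dt
    return t - current_age

print(f"remaining service life ~ {remaining_life(current_age=35.0):.0f} years")
```

In practice the deterioration function would be calibrated to the asset's material, loading, and exposure variables listed in the response, and could be replaced by a probabilistic or Markov model.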

Answers to question II.9 Gray shading indicates the answers in the version of the survey distributed to Engineers, and light blue, the answers of Researchers. Do you assign a numerical probability value to the hazard occurrence or do you give a subjective rank such as very high, high, moderate, low? (version for Engineers) Should engineers assign a numerical probability value to the hazard occurrence or can researchers/codes provide guidelines for a relative ranking system such as high, moderate, and low? (version for Researchers) Subjective, per PDC TR 06-08. Typically, numerical although some clients, such as the VA and FEMA, also use subjective rankings of seismic hazard (e.g., FEMA 154/FEMA 155). Numerical values. There are multiple categories for hazard, e.g. technological, structural, serviceability and operation, etc. Each category has multiple hazards levels (e.g., 1 to 3, 1 being the lowest and 3 being the highest). For the specific applications I am involved with, a numerical probability is assigned to the hazard. This may range from estimates of loss of offsite power based on reliability studies on the multiple causes that may impact availability of AC power at a nuclear station (including observed events) to estimates of seismic/flooding events that contain wide systemic/epistemic uncertainties. Numerical hazard curve. No, not for seismic ground motion or shaking intensity. For some hazards (e.g., liquefaction susceptibility), we use published subjective ranks. Numerical probability is assigned to hydrologic hazard. Various level of Hazard intensities are assigned frequency of occurrence / unit period of time (e.g., year) when probabilistic hazard methods are used. Both approaches are possible, and the latter is considered as more practical. However, I consider the relative ranking system a temporary solution. Both. Prefer numerical values. Codes should provide a relative ranking system. It is more understandable for the user and owner. “Block maxima models” with extreme value distribution functions can be used to link the two approaches.
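
As a sketch of the "block maxima" idea in the last response above, the example below fits a Gumbel distribution to assumed annual-maximum intensities by the method of moments and converts intensity levels into annual exceedance probabilities and return periods; the data and intensity units are invented.

```python
import math
import statistics

# Assumed annual-maximum intensities (e.g., wind gusts in m/s), one per year.
annual_maxima = [31, 28, 35, 40, 29, 33, 38, 45, 27, 36,
                 30, 42, 34, 39, 32, 37, 41, 26, 44, 33]

mean = statistics.mean(annual_maxima)
std = statistics.stdev(annual_maxima)
beta = std * math.sqrt(6) / math.pi   # Gumbel scale (method of moments)
mu = mean - 0.5772 * beta             # Gumbel location (Euler-Mascheroni constant)

def annual_exceedance(x: float) -> float:
    """P(annual maximum exceeds x) under the fitted Gumbel distribution."""
    return 1.0 - math.exp(-math.exp(-(x - mu) / beta))

for x in (40.0, 50.0, 60.0):
    p = annual_exceedance(x)
    print(f"intensity {x:.0f}: annual exceedance ~ {p:.4f}, return period ~ {1.0 / p:.0f} yr")
```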

It depends on the application. Sometimes a discrete scale of high, moderate, low is justified. In other cases a numerical evaluation must be made. In the latter case, one should properly account for statistical uncertainties, either by incorporating it in the estimate (Bayesian approach, appropriate for decision making) or provide bounds on the estimate. It is recommended that researchers/codes provide guidelines for a relative ranking system. Probabilities should be considered when specifying the relative ranking. Mil-Std-882 specifies hazard occurrence ranking system for DoD systems. Current state of risk engineering allows engineers to use numerical value to the hazard intensity in risk assessment. Qualitative risk measures like subjective rank given as an example above can be provided along with the numerical values to give a further insight from risk analysts to engineers who might not understand fundamentals of risk engineering. However, risk assessment should be performed in transparent way with numerical values. Although a numerical probability value to the hazard occurrence is useful, engineers may not have access to adequate amount of statistical data to estimate this probability. For this reason, code-provided guidelines for a relative ranking system is more appropriate for design and assessment of engineering structures. I prefer the assignment of a probability value to the hazard occurrence not left to engineers, although I appreciate that for some hazards this may be a challenge. I fear the lack of uniformity in application of a relative ranking approach. Values should be assigned. No, engineers should not assign a numerical probability value to the hazard occurrence, unless it is estimated according to scientific methods based on theory, observed data and appropriate models. Instead, codes and guidelines should provide either sets of hazard maps (particularly for distributed systems, which will require collections of maps for network-consistent analyses) or provide methodologies to estimate hazards based on occurrence rates and other fundamental information provided by USGS and related agencies. Relative rankings of hazard should only be used as a screening tool. Codes should have methodologies for both screening and, if warranted, for estimating probabilities based on sound scientific methodologies that combine theory and observation. It is not clear to me how the assigned values would be used.
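
One way to connect numerical hazard probabilities with the qualitative ranks debated above is to convert an annual exceedance probability into a probability over the service life and then bin it. The rank boundaries below are assumed purely for illustration and are not drawn from any code or respondent.

```python
def life_probability(annual_p: float, years: int = 50) -> float:
    """P(at least one exceedance in 'years'), assuming independent years."""
    return 1.0 - (1.0 - annual_p) ** years

def rank(p_life: float) -> str:
    """Assumed, purely illustrative bins for a qualitative rank."""
    if p_life >= 0.5:
        return "very high"
    if p_life >= 0.1:
        return "high"
    if p_life >= 0.01:
        return "moderate"
    return "low"

for annual_p in (1 / 25, 1 / 475, 1 / 2475, 1 / 10000):
    p = life_probability(annual_p)
    print(f"annual p = {annual_p:.5f} -> 50-yr p = {p:.3f} ({rank(p)})")
```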

The level of precision depends on two aspects: (1) the information available and the capabilities to manage and construct models; (2) the relevance of information and results for the decision. (Note that a model should not be taken beyond a level in which precision and relevance are mutually exclusive.)
I think that researchers and codes should specify a methodology, then used by engineers.
Formal risk approaches developed by several state DOTs vary from qualitative to quantitative depending on the importance of the decision. This flexibility is practical.
The assignment of a numerical probability would be best. This will ensure consistency and will minimize bias. Using actual data, the rankings from a set of experienced engineers/researchers could be solicited using a Delphi-like survey approach, to consider both assignment techniques for the given set of data; then it will be possible to calibrate between these two assignment techniques.
Numerical values for hazard occurrence probability for the quantification of risk.
Given the lack of data on hazard occurrence for most hazards, the assessment of the hazard occurrence should be based on the combination of both qualitative and quantitative models. The two models can be related by using triangulation.
Answers to question II.11
Gray shading indicates the answers in the version of the survey distributed to Engineers, and light blue, the answers of Researchers.
Do you assign a numerical probability value to the hazard intensity or do you give a subjective rank such as very high, high, moderate, low? (version for Engineers)
Should engineers use a numerical probability value to the hazard intensity or could they use a subjective rank such as high, moderate, and low? (version for Researchers)
Numerical. Blast Energy value.
Typically numerical, although in some cases MMI is used for reference or when data from previous earthquakes are reported in terms of MMI.
Numerical value; please see the answer to Q 9.
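[Editorial note: several answers to this and the preceding question refer to numerical hazard curves and annual exceedance rates. The sketch below, an illustration only with assumed curve values, converts a tabulated hazard curve into an exceedance probability over an exposure period under a Poisson occurrence assumption.]

    # Minimal sketch: hazard-curve interpolation and Poisson conversion.
    import numpy as np

    im = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0])                # intensity, e.g., PGA (g)
    annual_rate = np.array([1e-1, 3e-2, 6e-3, 2e-3, 8e-4, 4e-4])  # mean annual exceedance rate

    def exceedance_probability(target_im, years):
        # Log-linear interpolation of the hazard curve, then Poisson conversion.
        lam = np.exp(np.interp(np.log(target_im), np.log(im), np.log(annual_rate)))
        return 1.0 - np.exp(-lam * years)

    print(exceedance_probability(0.5, 50))   # probability of exceedance in a 50-year exposure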

ASME/ANS RA-S, “Standard for Level 1/Large Early Release Frequency Probabilistic Risk Assessment for Nuclear Power Plant Applications.” For other NRC applications, different quantitative/qualitative approaches may apply.
Numerical probability of exceedance. For MMI, we use standard ATC-13 definitions (i.e., MMI = I-XII).
Normally numeric values are used.
Various levels of hazard intensity are assigned a frequency of occurrence per unit period of time (e.g., year) when probabilistic hazard methods are used. Both approaches are possible, and the latter is considered more practical. However, I consider the relative ranking system a temporary solution.
Subjective.
Engineers should use a subjective rank system such as high, moderate, low, as in many situations the data may be of insufficient extent or quality to provide useful and credible quantitative measures for use in a risk assessment. It is necessary to rely on the use of expert opinions, where quantitative information may be solicited from people associated with the parameter to be measured.
It depends on the application. Sometimes a discrete scale of high, moderate, low is justified. In other cases a numerical evaluation must be made. In the latter case, one should properly account for statistical uncertainties, either by incorporating them in the estimate (Bayesian approach, appropriate for decision making) or by providing bounds on the estimate.
Numerical values.
It is recommended that researchers/codes provide guidelines for a relative ranking system. Probabilities should be considered when specifying the relative ranking.
Should use a numerical probability value so that a numerical probability of failure can be determined.
The current state of risk engineering allows engineers to assign a numerical value to the hazard intensity in risk assessment. Qualitative risk measures like the subjective ranks given as an example above can be provided along with the numerical values to give further insight from risk analysts to engineers who might not
understand fundamentals of risk engineering. However, risk assessment should be performed in transparent way with numerical values. Subjective ranking is the first step in identifying hazard intensity. However, if adequate statistical data is available, estimation of a numerical probability values is a better alternative. Once a probability value is estimated, it may be possible to interpret it using a ranking or category classification. I prefer the assignment of a probability value to the hazard occurrence not left to engineers, although I appreciate that for some hazards this may be a challenge. I fear the lack of uniformity in application of a relative ranking approach. Values should be assigned. Yes, engineers should be comfortable associating hazard intensity levels to probabilities of occurrence. However, such probabilities and hazard intensity levels must be estimated via rigorous methods that involve at least the following steps, particularly when dealing with geographically distributed systems: Assessment of the occurrence of events from active sources in the region of interest, quantification of attenuation effects and distribution of intensities across geographies, and clustering of hazard maps for subsequent analysis of performance of distributed systems (as their analysis is generally not suitable using probabilistic hazard maps, unless a single hazard source and location dominate the hazard occurrence and intensities). The hazard level associated with the intensity assumed for design should be specified in probabilistic terms. Each hazard has its inherent property and possible consequence. Considering the degree of significance of consequence, researchers or codes could help develop the ranking system. I think that researchers and codes should specify a methodology, then used by engineers. Formal risk approaches developed by several state DOTs vary from qualitative to quantitative depending on the importance of the decision. This flexibility is practical. Always better to use a numerical probability value to quantify hazard intensity. A numerical value would be preferable.

It is preferable for engineers to use a numerical value for the hazard intensity. However, given the lack of data, especially for high-intensity hazards, engineers could use a subjective rank to assign a qualitative intensity to the hazard.
Answers to question II.12
Gray shading indicates the answers in the version of the survey distributed to Engineers, and light blue, the answers of Researchers.
How do you consider the current state of component/system deterioration? (version for Engineers)
How can engineers consider the current state of component/system deterioration? (version for Researchers)
Not considered.
By building inspections (e.g., ASCE 31 evaluations).
From visual inspection reports/photos.
ASME/ANS RA-S, “Standard for Level 1/Large Early Release Frequency Probabilistic Risk Assessment for Nuclear Power Plant Applications.” For other NRC applications, different quantitative/qualitative approaches may apply.
By evaluation of existing capacity.
Visual observation, dimensional measurements, (wood) moisture readings, and manual operations such as probing, scratching, chain drags, etc.
Usually through inspections and past inspection reports. Measurements are taken of steel structures to approximate loss of section. Concrete structures are often cored.
In general, there are periodic inspections to examine the condition of structures and components. In addition, there are tests and surveillance programs for active and passive components. There are technical specification requirements to maintain the plants within safe operating conditions.
Deterioration has an effect on material properties and on the environment. This effect has to be taken into account when developing hazard scenarios. Scour changes the environment, whereas corrosion changes the material properties.
Based on the level of deterioration.

Inspections and testing – there are national regulations. Monitoring programs – there are also national regulations.
By inspection. Bayesian methods should be used to process the indirect information gained from inspections.
Deterioration/corrosion surveys followed by statistical analysis.
The current state of component/system deterioration could be considered by using a resistance reduction coefficient varying from 1 (intact component/system) to 0 (no resistance).
Since we are only considering fatigue cracking in metal structures, component/system deterioration is considered through the current location and scatter in the crack size distribution.
There are two techniques available: condition evaluation based on visual inspection, and evaluation using testing equipment.
Ideally, research should inform typical deterioration patterns and their implications for different structural systems and exposure conditions. This could inform the suggestion of typical hazard functions, h(t), for various systems that can be used in risk assessment (risk-based LCA). In a practical design context, perhaps this could lead to the future derivation of resistance factors that depend upon design life and anticipated exposure condition.
This can be relative (e.g., within a range), but eventually would need to be assigned a value for asset allocation, etc.
In my practice, I have adjusted the structural capacity to account for the current condition (deterioration) of the system, and then conducted a performance assessment as before. I suppose an approach can be used to consider uncertainty in the future condition of the component or system as part of the risk assessment procedure. We would need data on typical deterioration rates, or a study to come up with reasonable, conservative allowances for deterioration that could be overridden with actual knowledge of the structure.
Time-dependent reliability and risk analyses are certainly the way of the future. If inspections or structural/infrastructural health monitoring data signal deterioration, such knowledge should be used to update capacity models and reevaluate the safety and reliability of components and systems. Bayesian updating is a reasonable approach to perform such updating in a practical way. Codes should encourage the application of such Bayesian methodologies, as they can help with maintenance decision making and entire life-cycle management of structures and infrastructure systems.
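[Editorial note: as a sketch of the Bayesian updating recommended in several answers above, and not part of any respondent's reply, the example below updates an assumed normal prior on a section-loss rate with one noisy inspection measurement. All numbers are illustrative.]

    # Hypothetical conjugate normal-normal update of a corrosion (section-loss) rate.
    prior_mean, prior_var = 0.05, 0.02**2      # mm/yr, assumed engineering-judgment prior
    meas, meas_var = 0.08, 0.03**2             # inspection-inferred rate and its noise (assumed)

    post_var = 1.0 / (1.0 / prior_var + 1.0 / meas_var)
    post_mean = post_var * (prior_mean / prior_var + meas / meas_var)

    print(f"updated loss rate: {post_mean:.3f} mm/yr (std {post_var**0.5:.3f})")

    # The updated rate can then feed a time-dependent capacity model, e.g.,
    # R(t) = R0 - post_mean * t, for reevaluating reliability over the remaining life.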

In some cases, visual inspection, together with historical information about the occurrence of high intensity events, can be sufficient. In general, application of structural health monitoring methods may be necessary. The validity and efficiency of these methods depends on the specific circumstances. These concepts must be evaluated by researchers and transformed into possible codified requirements. It is extremely important to build degradation models for different types of structures and infrastructure; in terms of materials and the structure itself. There are many models to handle the stochastic nature of degradation, but there is not enough information available to build well-grounded curves. The basic approach to consider the current state of a system is by selecting a variable (e.g., stiffness) as a surrogate of the overall system performance and compare its current condition with its value at design t = 0. Parameters used in the analysis of material transport and material deterioration have to be updated based on information provided by inspection and monitoring on site. I believe that the approach should be made “as simple as possible, but not simpler.” If a risk assessment or reassessment is required for existing structures with deterioration, this information should be incorporated in the model and a brand new analysis should be performed. I don’t think that simplified code-like formulas can be used to represent the state of deterioration with just a couple of parameters and adjust the probability of failure accordingly. This would be acceptable if information of the actual level of deterioration is not available, but if it is a brand new analysis (with modified geometry to account for corrosion, shrinkage etc.; modified stiffness; and so forth) should be completed. With updated Markov decision model approaches. Follow the condition guidelines for highways (IRI, PSI) and Bridge condition rating (0-9) based on inspection or current models for predicting condition. Can be evaluated from structural identification. The steps are below: (1) Structural response from real-time monitoring, (2) Identification of structural stiffness (that provides current structural states) by analyzing monitored data (i.e., vibration measurements) using appropriate structural-identification (St-Id) technique. Engineers can use different approaches to consider the state of system deterioration: (i) Lifetime data; (ii) Historical data to develop empirical deterioration models; (iii) mechanistic models; (iv) incorporate uncertainty in the deterioration models by using appropriate stochastic models.
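[Editorial note: the answers above mention Markov-type deterioration models and bridge condition ratings. The sketch below is an illustration only; the condition states and annual transition probabilities are invented for the example.]

    # Hypothetical Markov-chain propagation of condition-state probabilities.
    # States 1 (good) to 4 (poor); rows of the annual transition matrix sum to 1.
    import numpy as np

    P = np.array([[0.90, 0.10, 0.00, 0.00],
                  [0.00, 0.85, 0.15, 0.00],
                  [0.00, 0.00, 0.80, 0.20],
                  [0.00, 0.00, 0.00, 1.00]])   # no maintenance assumed

    state = np.array([1.0, 0.0, 0.0, 0.0])     # new structure starts in state 1
    for year in range(25):
        state = state @ P

    print("condition-state probabilities after 25 years:", np.round(state, 3))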

Answers to question II.13
Gray shading indicates the answers in the version of the survey distributed to Engineers, and light blue, the answers of Researchers.
How do you account for routine inspection and maintenance? (version for Engineers)
How can engineers account for routine inspection and maintenance? (version for Researchers)
Not considered.
Typically not considered (other than by observation of deterioration during building inspection).
Routine inspection and maintenance positively reduce the uncertainty premium, which accordingly reduces the risk.
There are detailed inspection and maintenance requirements for operating nuclear reactors. The NRC performs its own inspections and includes risk information in its structure and focus.
We don’t.
N/A. We typically request post-earthquake evaluation reports for older buildings that have experienced damaging earthquakes, but we rarely get them.
Usually not directly accounted for.
As stated, there are specific requirements for inspections, maintenance, tests, and surveillance.
Deterioration has an effect on material properties and on the environment. This effect has to be taken into account when developing hazard scenarios. Scour changes the environment, whereas corrosion changes the material properties.
Inspection and maintenance are very important. They indicate the weak link.
The basis of any kind of reliability assessment and monitoring is always a detailed inspection. Such inspections will fall into one of four categories:
• Visual inspections, on a yearly basis.
• Simple checks, for instance 3 years after every main inspection.
• Main inspections, for instance every 6 years.
• Special inspections, following exceptional occurrences or incidents.
The information obtained from routine inspection and maintenance can be used for updating the state of structural performance.

We consider inspections to occur at the intervals specified by the Aircraft Structural Integrity Program (Mil-Std-1530). When an inspection occurs, there is a probability of finding a crack and a probability of missing a crack. All cracks that are found are considered repaired. Representations for the repaired structure going forward in time is currently being developed. The result of routine inspection and maintenance will update the survival probability of a system under hazards. Routine inspection and maintenance results for bridges are currently utilized when performing a load rating for the bridge. Ideally a structural design should be associated with an anticipated inspection and maintenance plan. Hence the “parameterized” resistance factors above could be a function of maintenance schedule perhaps as well. Basically, the engineers deliver a structural design in concert with a suggested maintenance plan. If time variant assessment, then adding capacity/performance to model. If time invariant, then slope of capacity should change. Routine inspection and maintenance could be accounted for by adjusting allowances for future deterioration (or uncertainty values on the future condition) of components or systems in the risk analysis. Designing and conceptualizing structures and infrastructure systems within a life-cycle context makes it natural to account for routine inspection effects and associated maintenance in the assessment of the remaining life of engineered systems. The Bayesian updating approach noted in the previous question applies equally well here to systematically integrate performance assessment from models and field data that captures deterioration and maintenance actions throughout the useful life of the components and systems. This perspective enables considering the value of information (benefits from reducing uncertainty via inspection) and help justifying the soundness in periodic inspection and maintenance practices that are less reactionary and more proactive. Taking into account information mentioned in my response to question 12. Inspection and maintenance policies are essential for systems that are expected to last for long time periods like most infrastructures. All modern designs should take maintenance into account. Maintenance and operation for long time periods are difficult to define beforehand; in practice this requires some flexibility and adaptability. It is necessary to reflect on this, but it may be useful to evaluate at least a set of possible simple operation policies at design, so that the result of the design is more complete.
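[Editorial note: as an illustration of the inspection logic described above, where each inspection has a probability of finding or missing a crack and found cracks are repaired, the Monte Carlo sketch below uses an invented crack-growth law, probability-of-detection curve, and critical size. It is not the Mil-Std-1530 procedure itself.]

    # Hypothetical simulation of periodic inspections with a size-dependent POD.
    import numpy as np

    rng = np.random.default_rng(0)
    n_sim, years, inspect_every = 10_000, 30, 5
    a = np.full(n_sim, 0.5)                                           # initial crack size, mm (assumed)
    growth = rng.lognormal(mean=np.log(0.2), sigma=0.4, size=n_sim)   # mm/yr, assumed

    critical = 10.0                                                   # assumed critical crack size, mm
    failed = np.zeros(n_sim, dtype=bool)

    for year in range(1, years + 1):
        a[~failed] += growth[~failed]
        failed |= a >= critical
        if year % inspect_every == 0:
            pod = 1.0 - np.exp(-a / 2.0)                              # assumed POD curve vs. crack size
            repaired = (rng.random(n_sim) < pod) & ~failed            # found cracks are repaired
            a[repaired] = 0.5

    print("30-year failure fraction with inspections:", failed.mean())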

I believe that the approach should be made “as simple as possible, but not simpler.” If a risk assessment or re-assessment is required for existing structures with deterioration, this information should be incorporated in the model and a brand new analysis should be performed. I don’t think that simplified code-like formulas can be used to represent the state of deterioration with just a couple of parameters and adjust the probability of failure accordingly. This would be acceptable if information on the actual level of deterioration is not available; but if it is available, a brand new analysis (with modified geometry to account for corrosion, shrinkage, etc.; modified stiffness; and so forth) should be completed.
With updated Markov decision model approaches.
Using past records: when the last maintenance was completed, and the intensity of the maintenance activity. Intensity can be measured as cost per unit depth or cost per unit area.
Inspection data can be used for risk ranking.
Engineers can update their deterioration models when they collect data from routine inspection and maintenance.
Answers to question II.14
Gray shading indicates the answers in the version of the survey distributed to Engineers, and light blue, the answers of Researchers.
What are the performance (damage) levels you consider for risk analysis? (version for Engineers)
What performance (damage) levels should be considered for risk analysis of systems of interest to your group? (version for Researchers)
Superficial, low, moderate, high.
Depends on the project/client—in some cases, just complete/collapse for safety evaluations, but all damage states for all systems (structural and nonstructural) and contents when performing a comprehensive seismic risk evaluation (e.g., Hazus).
We consider different performance limit states: geotechnical, structural, serviceability, utility, operation, etc.
Multiple performance levels are considered depending on the issue and component/system, as well as degradation level and whether operability is still feasible.
1. Immediate Occupancy
2. Damage Control
3. Onset of structural damage to primary members
4. Life Safety
5. Collapse Prevention
6. Collapse

Mean and upper bound (90% confidence level) damageability, SEL and SUL, respectively. Usually for a 475-year return period seismic hazard level, however 190- or 200-year and 2,475- or 2,500-year return periods are sometimes requested. Damage levels depend on the question you are attempting to answer. For the nuclear power plants in US, the risk metrics are generally: core damage, large early releases, and potential health effects. All damage levels that have impact on the functionality of the structure. Any damage that may restrict the intended usage should be considered. Structural damage. Code specified, owner requirements, engineering experience. Consequence classes as to be found in EN 1991-1-7 Eurocode 1 - Actions on structures - Part 1-7: General actions - Accidental actions. This is a wide open question. It all depends on the system under consideration. Several performance levels for various objectives. We consider all potential cracks via a crack size distribution. All performance levels should be considered integrally. Load rating levels. Reliability index levels. For bridges, loss of functionality and damage that triggers maintenance (e.g., excessive cracking of components, settlements, loss of column axial or lateral capacity, residual deformations of columns or offsets at expansion joints, etc.) should all be considered as well as the traditional emphasis on life-safety/ collapse. (1) Service failure, (2) Collapse/full failure, and (3) Consideration of loss levels as failure. In generic terms, I consider: (1) no damage; (2) limit damage requiring cosmetic repairs; (3) significant damage requiring structural repairs to restore pre-event strength and stiffness; (4) substantial damage requiring replacement of the component; and (5) collapse. For lifeline systems as the main interest from our group, the performance levels should be established in terms of system-level functionality (quality of service),

as well as in terms of time to restoration of service after disruption. The latter implies at least three reasonable levels: emergency restoration is first concerned with providing services to essential facilities and operators (on the order of days); economic and social stability is then concerned with providing services to regular users, workers, factories, and the population at large, even if the quality of the service is not comparable to pre-event levels (on the order of weeks); and then a total recovery or adaptation phase is concerned with the long-term reliability and resilience of systems, including upgrades to handle weaknesses revealed by previous events (on the order of months).
We take into account the following indicators of expected performance in systems exposed to seismic risk: (a) reliability functions with respect to collapse: expected failure rates per unit time; (b) expected damage costs per unit time, associated with types of behavior that include repair of physical structural and non-structural elements, preventive replacement of energy-dissipating devices, and failure or damage of equipment and contents due to high strains or to overturning.
Performance levels are difficult to define, and they vary depending on the system evaluated. Their definition should be linked with the function of the system and not only with the mechanical performance.
Safety, serviceability, resilience, ability to recover, and post-disaster functionality.
I believe that the standard levels currently used are good (no, minor, moderate, extensive damage and collapse). Cost categories related to architectural damage, other nonstructural damage, structural damage degrees, and of course risk of injury and death.
PSI, Bridge Condition Rating, component condition ratings, surface roughness, rutting, cracking, corrosion.
As indicated in the Hazus manual, all performance levels (minor, moderate, major damage and collapse) are used to calculate risk.
Loss of safety, loss of serviceability, loss of functionality.
Answers to question II.15
Gray shading indicates the answers in the version of the survey distributed to Engineers, and light blue, the answers of Researchers.
How do you measure performance (damage) and relate it to structural analysis results?

Member end rotation and ductility.
Typically, damage to the structural system (e.g., complete damage) is related to peak lateral earthquake displacement (e.g., story drift) using fragility curves that describe the probability of the damage state (e.g., complete damage) given the peak lateral earthquake displacement (e.g., story drift).
We install and utilize sensors on the bridge to monitor the performance metrics of interest. The collected data from the sensors will then be used to calibrate any preliminary finite element model.
The issue of structural damage within NRC PRA modeling for operational nuclear reactors is usually raised for seismic-related issues, via quantitative fragility analyses.
By pushover or plastic analysis.
Typically, our deadlines and level of investigation only warrant a Tier I or occasionally a Tier II ASCE 31-03 evaluation in conjunction with seismic risk assessments that are constrained by due diligence periods or investor committee requirements.
Don’t know, other than to do general comparisons of analysis results to experience.
The structural analysis or fragility analysis relates how a structural failure can cause a loss of safety systems or may not allow a system or component to perform its intended function, resulting in accident sequences that lead to the above damage states.
The change in material properties and in environment (e.g., soil disappearance) is considered in structural analysis. Structural re-analysis is possible by changing the material and geometrical properties of damaged elements.
This depends on the hazard being considered. So the analysis might include CFD simulation or finite element analysis (implicit and explicit).
Using the following inspection procedures:
• Visual inspections, on a yearly basis.
• Simple checks, for instance 3 years after every main inspection.
• Main inspections, for instance every 6 years.
• Special inspections, following exceptional occurrences or incidents.
Performance should be measured in terms of a set of metrics. For buildings, it could be in terms of inter-story drift, joint rotation, floor acceleration, etc.,
depending on what we are after. These measures can be correlated to damage states and thereby to cost of repair. Damage should be related to the performances of the system. The performance (damage) levels of interest for risk analysis in my group are all possible states (e.g., Pontis system for bridges—varying from 1 to 5). Damage is measured by the crack size. It is related to the Structural Analysis through answering the question, Can the structure carry limit load? Seismic: computed interstory displacement computed from structural analysis given horizontal excitation to measure damage. Hurricane: used direct relation of damage cost to wind speed based on insurance claim data. Structural identification is a field of study where performance or damage measured or assessed for a given structure can be incorporated into structural analysis such as finite element analysis. There are scientific articles published on this subject. We have traditionally taken a combined approach of linking structural response (engineering demand parameters) to anticipated visual damage to the structure (e.g. cracking, spalling, bar buckling, etc.) based on mechanics and past experimental testing. We have further tried to link this to anticipated performance in terms of functionality (closure, repair, etc.) based upon expert opinion and survey data given the subjective nature of inspection, closure and repair decisions. As a function or percent of the amount of damage that would be total loss of the system, region, etc. Through the use of fragility functions that relate a response parameter (e.g., drift) to the probability of not exceeding that parameter for each performance level (or damage state) of interest. To measure the performance of geographically distributed lifeline systems, it is necessary to quantify the level of satisfied demand at the local service area level. This way, aggregates from service area information can be used for various levels of decision making, such as city-level, county-level, state-level, etc. The measure of performance at the local service area level also affords a clearer linkage to what design requirements should be in place for individual components so that their performance contributes to system-level target performance levels. This implies that the design of individual facilities in no longer dependent on their structural integrity alone, but also on the role such facilities

or structures play at the network level so as to contribute to system-level target performance. Different lifeline systems have different metrics to quantify performance, but all have a generic objective of satisfying demand within some quality of service constraints. Supplied energy for power systems, blocking probability for telecommunication systems, water pressure and quality for water systems, or travel time for transportation systems are all measurable quantities at the service area level that could be used to establish component requirements that match system-level performance goals. (a) Failure probabilities for given intensities: we express the safety margin with respect to the failure condition in terms of a secant-stiffness-reduction index, which is equal to zero when failure occurs. We use Cornell’s reliability index expressed in terms of the first two probabilistic moments of that index. (b) We estimate physical structural and nonstructural damage in terms of the story lateral distortions estimated by Monte Carlo simulation of the nonlinear dynamic response to samples of artificial earthquake records of different intensities; we take into account uncertainties about gravitational loads and mechanical properties of the system. (c) We estimate a similar approach to estimate damage to equipment or overturning, and (d) We transform physical damage measures into economic costs, taking into account direct and indirect effects. The performance function would be defined comparing the demand with capacity. Demand could be provided in structural analysis. First, I try to make a distinction between performance and damage. For instance, for a bridge the performance of the bridge is associated with its ability to allow traffic flows above or underneath it. So, a performance metric is its practical traffic flow capacity (carried and crossed) or just the number of open lanes (carried and crossed). This is a function of the damage, but not only. For instance, ongoing restoration works impact these values. To relate the damage with the structural analysis results, my group uses the traditional approach of identifying a relevant engineering demand parameter for each component and comparing demand (under representative scenarios, or “unconditional”) with the capacity. The combination of the damage on the various components gives the total damage level. We relate age and condition to cost of repair and not to structural analysis results. Our purposes are for cost allocation, asset valuation, and other economic and finance-related research for transportation assets. From analysis results, rotational/displacement/curvature ductility values are estimated. These values are compared with threshold ductility to check the performance levels or limit states.
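[Editorial note: several answers above relate an engineering demand parameter such as story drift to damage-state probabilities through fragility curves. The sketch below is an illustration only; the lognormal form is a common modeling choice, and the medians and dispersions are invented for the example.]

    # Hypothetical lognormal fragility curves: P(damage state >= ds | story drift).
    from math import log
    from scipy.stats import norm

    fragility = {                      # damage state: (median drift ratio, dispersion)
        "slight":    (0.004, 0.40),
        "moderate":  (0.010, 0.40),
        "extensive": (0.025, 0.45),
        "complete":  (0.050, 0.50),
    }

    def p_exceed(ds, drift):
        theta, beta = fragility[ds]
        return norm.cdf(log(drift / theta) / beta)

    drift = 0.012                      # peak story drift ratio from structural analysis (assumed)
    for ds in fragility:
        print(f"P({ds} or worse | drift={drift}) = {p_exceed(ds, drift):.2f}")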

Damage can be considered by reducing the values of the stiffness and strength of the structure and undertaking re-analysis of the structure each time the key structural properties change.
Answers to question II.20
Gray shading indicates the answers in the version of the survey distributed to Engineers, and light blue, the answers of Researchers.
How do you combine failure and consequences to estimate risk? (version for Engineers)
How would you combine failure and consequences to estimate risk? (version for Researchers)
We don’t.
(1) Mean estimates of each loss type (e.g., deaths, dollars, and downtime) for a given set of scenario earthquake ground motions; (2) AAL for each loss type (not preferred for use with most decision makers); (3) qualitative estimates of risk based on quantitative (mean) estimates for decision makers who prefer simplified risk measures.
Heat map.
We define risk as a product of hazard, vulnerability, and exposure, multiplied by an uncertainty premium. A risk level is calculated based on matrices for each limit state, and then a combined risk is calculated for the structure. Based on this risk, a risk level (very high, high, moderate, or low) will be assigned to the structure.
For the type of nuclear plant currently operating in the United States, a PRA can estimate three levels of risk. A Level 1 PRA estimates the frequency of accidents that cause damage to the nuclear reactor core. This is commonly called core damage frequency (CDF). A Level 2 PRA, which starts with the Level 1 core damage accidents, estimates the frequency of accidents that release radioactivity from the nuclear power plant. A Level 3 PRA, which starts with the Level 2 radioactivity release accidents, estimates the consequences in terms of injury to the public and damage to the environment.
We try to quantify consequences economically rather than heuristically.
No.
Depends on the level of assessment that is performed. Ranges from a ranked matrix to probabilistic computations. Normally USACE works with annual damages or annual life lost computed probabilistically.
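[Editorial note: as a sketch of the probability-times-consequence combination discussed in these answers, the example below sums expected annual loss over damage states and bins each state in a simple qualitative matrix. All probabilities, dollar values, and bin edges are invented.]

    # Hypothetical combination of failure probabilities and consequences.
    damage_states = {          # annual probability, consequence (repair cost, $)
        "minor":    (2.0e-2, 5.0e4),
        "moderate": (5.0e-3, 4.0e5),
        "severe":   (8.0e-4, 2.0e6),
        "collapse": (1.0e-4, 1.5e7),
    }

    expected_annual_loss = sum(p * c for p, c in damage_states.values())
    print(f"expected annual loss: ${expected_annual_loss:,.0f}")

    def matrix_rank(p, c):
        # Illustrative 3x3 qualitative risk matrix; bin edges are arbitrary.
        p_bin = 2 if p > 1e-2 else 1 if p > 1e-3 else 0
        c_bin = 2 if c > 1e6 else 1 if c > 1e5 else 0
        score = p_bin + c_bin
        return "high" if score >= 3 else "medium" if score >= 2 else "low"

    for ds, (p, c) in damage_states.items():
        print(ds, "->", matrix_rank(p, c))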

In a quantitative risk analysis, the accident sequences have associated frequency of occurrence that defines the order of sequences. The failure and consequences are combined through a combined approach of event trees and fault trees. The matrix has wide usage. More useful would be the numerical values in matrices, which would allow to multiply failure occurrence rate with the consequences. Yes, Matrix system. Set matrices with numerical values. Matrix with ranks. Integrate or sum over a range of performance thresholds. Integration to obtain loss exceedance curves. By multiplying the probability of failure with associated consequences. Matrix of probability of failure versus the consequence. Risk should be computed numerically based on decision models that specify objective of decision-making appropriately. Minimum life cycle cost analysis could be one approach, in which risk is computed in terms of expected life cycle cost for the service period of a system. There have been developments in decision modeling and expected utility theory, cumulative prospect theory, life quality index, capability based approach can be utilized to estimate risk in more complicated situations that require advanced techniques of uncertainty modeling. An example is the use of Failure Modes and Effects Analysis technique. Another approach is the Risk Matrix Evaluation. An example for Risk Matrix Assessment would be as follows: General categories for risk can be defined as: High, Serious, Medium and Low. Probabilities can be defined as: Very Probable, Probable, Occasional, Remote, and Improbable. Severity can be classified as: Catastrophic, Critical, Marginal, and Negligible. A matrix can be used where horizontal axis defines Probabilities and vertical axis defines Severity. Regions of intersections are then defined in terms of general categories defined above. Many of the consequences may be quantitatively estimated (e.g., cost, even deaths), so a direct mapping considering uncertainty may be viable. This typically requires models of the consequences as a function of damage level to components or the system. Aggregating consequences may also be an

interesting question, as a simple summation over all components may not be realistic given efficiencies or other conditions. In any respect this approach could allow a quantitative comparison of risks, which is often ideal. For other consequences and systems, this mapping between damage state and consequence may not be as well defined. Qualitative indicators (e.g. high, medium, low) for the risk of each consequence may be invoked. Overall risk indicators would require some solicitation of weights or preferences of stakeholders regarding the relative importance of various consequences. This has not been thoroughly tackled for a broad set of sustainability related consequences of structural/infrastructure failure. In general, based on no more than three metrics, one of which should be the statistical distribution of the losses in either a period of time or annualized. The others would depend on the system at hand. The matrix approach for combining failure probabilities with consequences should be used for screening purposes or gain understanding of the problem at hand only. Ideally, a more long term approach informed by increasingly available data and models is to establish functional forms that relate failures and consequences to risk. Data availability and statistical learning tools are becoming more powerful and should be embraced if enriched with pertinent physical and mechanistic constraints. I would measure risk in terms of the following quantitative indicators, within a life-cycle framework: (a) Expected failure rates per unit time, (b) Expected damage costs per unit time, and (c) Expected number of lives lost per unit time. Also, in terms of the following indicators, considering an assumed design life: (d) Failure probability during the design life, (e) Probability distribution of the maximum loss for a single event, and (f) Probability distribution of the accumulated losses. Combination of the probability or frequency of occurrence of an event and the magnitude of its consequence (i.e. the sum of the products of the consequences of each event and its probability). For the first two groups (lives and economic impact), I would try to assess two probability distributions of the losses, convoluting the probabilities of reaching various configurations (of damage in many components of the infrastructure) and the associated consequences. Matrices of probability versus consequence but realizing that the multiple aspects of consequences should not always be reduced to a single measure. I would also want to reflect the risk perception issues, such as dread, familiarity, and degree of voluntariness.
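[Editorial note: one quantitative indicator mentioned above is the expected damage cost per unit time. The sketch below, an illustration only, combines an assumed hazard curve with an assumed mean loss-ratio (vulnerability) function to estimate an expected annual loss.]

    # Hypothetical numerical convolution of hazard and vulnerability into EAL.
    import numpy as np

    im = np.array([0.1, 0.2, 0.3, 0.5, 0.7, 1.0])                    # intensity levels (g)
    exceed_rate = np.array([1e-1, 3e-2, 1e-2, 3e-3, 1e-3, 3e-4])      # mean annual exceedance rate
    loss_ratio = np.array([0.00, 0.02, 0.06, 0.20, 0.45, 0.80])       # mean loss ratio at each level
    replacement_cost = 5.0e6                                          # $, assumed

    # Occurrence rate of events in each intensity bin, paired with the average
    # loss ratio over that bin (the last bin closes out the curve).
    occ_rate = -np.diff(np.append(exceed_rate, 0.0))
    avg_ratio = (loss_ratio + np.append(loss_ratio[1:], loss_ratio[-1])) / 2.0

    eal = np.sum(occ_rate * avg_ratio) * replacement_cost
    print(f"expected annual loss: ${eal:,.0f}")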

Multiplication. Initially set the index of each to 1.00. Then ascertain through research what the appropriate index for each of these two factors should be. Add a third factor: system or asset resilience. Asset failure is a component of asset resilience. Therefore, I would have consequences × threat likelihood divided by asset resilience to define risk in a security ranking numerically. To obtain the cutoff points between each pair of adjacent cells in the risk matrix, I would initially use expert opinion. Then, at a subsequent time, I would try using optimization to identify the optimal cutoff points. Naturally, doing so will be a very challenging but interesting problem.
A failure-to-consequence matrix is developed for risk assessment. Each consequence is represented with a monetary loss to estimate risk.
Matrix with qualitative ratings of failure and consequences, using normalization approaches.
Answers to question II.21
Gray shading indicates the answers in the version of the survey distributed to Engineers, and light blue, the answers of Researchers.
Do you project historical damage and cost data in your risk assessment and in estimating future risks? (version for Engineers)
How could one include historical damage and cost data during risk assessment and in estimating future risks? (version for Researchers)
Yes.
Yes.
No, but I believe it is a good idea. We are planning on improving our approach; one of the factors that we will take into account is cost data.
I do not apply historical damage and cost data directly in the specific work I perform at the NRC.
Yes, to the extent that fragility curves may have a statistical basis.
No. However, historical damage data are used in the ATC-13-1 methodology we use. Rarely, we are asked to perform a “probable loss” (PL) that factors in annualized damageability losses that would project the data into the future.
Sometimes for recovery costs for navigation systems.
Experience data play an important role in reliability and fragility estimates. They can be included very easily if the scenario is known. However, these data have to be adapted to take into account current population density, economic
activity, etc. The costs have to be adapted for inflation but also to take into account productivity increase. Yes. Sometimes. Could be based on engineers’ subjective rank system such as high, moderate, low, with respect to historical damage data, current damage data, etc. and could be combined with extended Bayesian techniques or Markov network techniques. Historical data should be used to develop models (e.g., a model of repair cost versus inter-story drift in tall buildings). Such models must be probabilistic in nature, accounting for the model error. The model can then be used for risk assessment of new or existing structures. Bayesian methods might be appropriate. Expert opinion elicitation might be applicable. The approach is objective and problem dependent. By considering aging effects and historical damage and cost (including discount rate of money). Future risks are of course time-dependent. They can be integrated into consequence estimation. Historical damage and cost data have paramount importance for dependable risk assessment for any engineering system subjected to any type of natural or man-made hazard. Historical damage and cost data provide statistical distributions which can then be used to estimate probability of occurrence and costs of risk events such as hazards or failures. Historical damage may be used to update (e.g., via Bayesian framework) fragility models; or simply as a ballpark sanity check of models. Historical cost data may similarly be valuable to inform the cost consequence models. This could be used to check numerically developed fragility curves. We do this based on EF ratings for tornadoes somewhat frequently. Historic data can be used to validate or calibrate new risk assessment tools, but care must be taken to properly interpret historic data and limitations on available information. It is not as simple as looking at damage reports because data in damage reports are skewed to report on damage, and often ignore the large numbers of buildings and structures that went undamaged in the event.
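[Editorial note: as a sketch of the Bayesian use of historical damage data mentioned above, the example below updates an assumed beta prior on the probability of damage at a given intensity with assumed post-event survey counts. The caveat raised above about reporting bias in damage data still applies.]

    # Hypothetical beta-binomial update of a damage probability at one intensity.
    from scipy import stats

    prior_alpha, prior_beta = 2.0, 18.0      # assumed prior: roughly 10% damage probability
    damaged, undamaged = 14, 86              # assumed survey counts at that intensity

    posterior = stats.beta(prior_alpha + damaged, prior_beta + undamaged)
    print("updated damage probability (mean):", round(posterior.mean(), 3))
    print("90% credible interval:", posterior.ppf([0.05, 0.95]).round(3))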

Historical damage and cost data could be included in risk assessments during the phase of consequence estimation, which itself depends on the convolution of fragility and hazards for components or systems. Historical damage could be integrated via Bayesian updating into the failure probability assessment tools, while cost data could be added directly to the consequence assessment tools. Hence, future risk estimations could account for and quantify the uncertainty from changing trends in hazard exposure, capacity degradation, and evolving trends in repair costs.
The best way would be to use detailed quantitative information to update the vulnerability functions (related to failure probabilities and expected damage costs) of systems similar to those for which historical information might be collected. This requires a utility evaluation.
Historical cost data could be used to estimate the indirect cost during risk assessment. It can be used as a starting point to assess the consequences.
In a Bayesian approach.
These would be used in the consequence aspect of the framework. Also, these could be used in the development of strategies for risk mitigation and decision making.
Determine what future risk level is associated with the expected levels of security factors and predicted data, then determine the expected damage outcome and subsequent cost of repair.
Integrating empirical fragility curves within a risk analysis framework.
Answers to question II.22
Gray shading indicates the answers in the version of the survey distributed to Engineers, and light blue, the answers of Researchers.
How do you summarize and display your risk assessment results?
Curves of % damage vs. threat intensity. Figures highlighting structure damage for various scenarios.
Typically, by curves, with selected tabulated values.
Heat maps and risk registers.
By a color and risk level. For example, very high risk in red, high risk in orange, etc.

CDF and LERF numerical estimates are usually used. For oversight purposes, a number of risk communication tools are used, including matrix and color charts.
Expected AAL for multiple hazards.
Simple tables or in written form.
Economic risk results are generally presented in tables. Life loss risk is presented in a plot of annual probability of occurrence vs. average lives lost per occurrence (f vs. N).
Both by curves and quantitative results. Results of a probabilistic risk assessment are probability distributions that include both aleatory and epistemic uncertainties.
The risk assessment is ideally represented with curves as a function of time. Each curve would represent one hazard scenario.
Matrices and curves, including type of structures, hazard level, setting up intolerable risk limits or failure criteria, and defining a risk factor.
Matrices and curves.
In matrices, since these illustrations are more understandable for the public.
The final results should be translated into the appropriate form in such a way that the stakeholders can make decisions in the framework of risk management.
There is no unique way. Each application may benefit from a different type of summary display. Graphical displays are always more intuitive and more easily understood.
Curves; loss exceedance curves.
Curves showing the variation of risk with time; both a mean curve and curves with risk percentiles should be provided.
DoD requires use of the matrix in Mil-Std-882, which yields a single number and color code as to who needs to decide whether to accept the risk or not.
A metric that represents the objective of decision making should be used for a more transparent risk assessment and the subsequent risk management process. It can be in terms of expected cost, expected utility, expected value, change in life quality index, or change in capability index when minimum expected cost analysis, expected utility theory, cumulative prospect theory, the life quality index based approach, or the capability based approach is utilized, respectively.
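[Editorial note: as a sketch of the loss exceedance curves mentioned in these answers, the example below builds an empirical curve from annual losses simulated under an assumed loss model, then reads off the loss associated with target annual exceedance probabilities. All model parameters are invented.]

    # Hypothetical empirical loss exceedance curve from simulated annual losses.
    import numpy as np

    rng = np.random.default_rng(1)
    n_years = 50_000
    # Assumed loss model: ~3% of years produce a loss, with lognormal severity.
    has_loss = rng.random(n_years) < 0.03
    annual_loss = np.where(has_loss,
                           rng.lognormal(mean=12.0, sigma=1.0, size=n_years), 0.0)

    losses = np.sort(annual_loss[annual_loss > 0])[::-1]     # largest first
    for p in (1e-2, 1e-3):                                   # target annual exceedance probabilities
        k = max(int(p * n_years), 1)
        print(f"loss exceeded with annual probability {p}: ~${losses[k - 1]:,.0f}")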

There are diagram-based methods (e.g., Decision trees). There are matrix-based methods [ e.g., Risk assessment matrices (L-Type matrix, S-Type matrix)]. Statistical moments of the consequences. Curves of exceedance probability for various consequences. As fragility curves or surfaces. Communication will depend on the audience. I assume this question is directed at displaying results to stakeholders and decision-makers who may be nontechnical. I have been unsuccessful in trying to communicate probabilistic information to non-technical stakeholders. I believe the information needs to be simplified into discrete decision points (for example: the value of expected repair costs given the occurrence of a scenario earthquake). This represents a single value, or series of values for different design options, based on the translation of hazard into a discrete event that a non-technical person might be able to react to. Risk assessment results should be graphically displayed in spatial-temporal plots that show the current heterogeneity in risk levels across geographies as a function of systems’ lifetimes. This way, it becomes intuitive for infrastructure owners and decision makers that uniform risk attainment is beneficial to reduce uncertainty in system performance, as well as for better siting, sizing, and staging backup systems, maintenance and repair crews, and spare parts among others. Locations for reduction of interdependence effects could be revealed by this spatial-temporal approach to displaying and communicating risks for geographically distributed systems. I would do it by means of curves describing the vulnerability functions. Apart from traditional methods, it is important to look for alternative methods which are more visual and that can be linked easily with other tools. In the special case of systems whose spatial distribution is important, tools such as GIS should be used. The probability distribution of consequences is a quantitative representation of whole profile of a risk, which is a combination of probability and consequence as defined in ISO 13824. For the convenience of risk comparison, a risk is sometimes represented with a scalar. I would use Probability Density Functions for scientific communications and expected values (+ percentiles) for communication to the general public. Graphically for the most part, such as circles of consequences and risk perception issues, combined to the extent possible with decision quantities such as benefit/cost, optimization, regret, Pareto optimality, etc.

ArcMAP GIS for spatial analysis, or a plot showing assets under different levels of risk, using color-coded area blocks or contour lines.
Risk curves that would demonstrate the chance of having a certain level of risk in terms of cost/damage/impact.
Both matrices and curves.
Answers to question II.23
Gray shading indicates the answers in the version of the survey distributed to Engineers, and light blue, the answers of Researchers.
How and when do you determine risk acceptance criteria? Do you adjust criteria after risk analysis results are compiled? (version for Engineers)
How should engineers/owners determine risk acceptance criteria? (version for Researchers)
We do not. Risk acceptance criteria are entirely in the purview of the decision maker (client)—whether that be a codified acceptable collapse rate (e.g., 10% probability of collapse given MCE ground motions, ASCE 7-10 / FEMA P-695) or client-specific acceptance criteria.
When setting the criteria for impacts, we use narrative definitions of what is catastrophic, major, minor, etc.
Risk assessment is used to prioritize bridges that are candidates for replacement or repair. Usually, very-high- and high-risk bridges are not accepted. However, this really depends on the client and how they want to proceed, as well as the available budget. This can change the risk acceptance criteria.
Risk criteria are set in advance as discussed in the example above. However, the effectiveness of the program is evaluated continuously for potential issues and improvements.
We typically do not determine risk acceptance criteria, only report the risks. We have in the past advised our clients in developing their own acceptance criteria. Most of our clients are institutional investors and are able to “manage away” individual building risks within the context of a huge portfolio of buildings.
The idea is to have defined risk acceptance criteria, and these are fairly well established for dams. Risk acceptance criteria for levee systems are under evaluation.

Results of a quantitative risk analysis are used in conjunction with other qualitative insights and many other factors, such as defense-in-depth considerations, balance between protection and mitigation. NRC Regulatory guide describes a procedure for risk-informed decisions. This level is not fixed and should be oriented toward social equity. This means that the acceptable risk level for a structure in hazard prone regions (e.g., mountains) would be different than in the almost hazard free regions (plains). At the beginning of the project. Yes, we do adjust criteria after risk analysis results are compiled. The risk analysis is adjusted for each hazard mitigation. Consequences resulting from the hazards and following events should be identified. They should be described in terms of several measures (e.g., monetary loss, human fatalities and environmental damage). Some consequences can be identified by scenario analyses considering the extent of influences due to failure of the system in time and space. These investigations should be the basis for the definition of risk acceptance criteria. One way is to compare with other risks that are accepted by society. The following methods can be used: Implicit risks in previous or similar practices, Benefit/risk ratios, Regulatory requirements. Risk attitudes have to be considered in this process. DoD has already decided their process to a large degree. Determination of risk acceptance criteria should be done by considering all stakeholders of the engineering project, which includes owner, engineers, the public, etc. The nature of risk acceptance of stakeholders should be studied comprehensively based on their decisions in past and current. This process oftentimes involves questionnaires and interviews of individual stakeholders. However, for civil engineering projects that involve big groups of stakeholders, the acceptance criteria would be studied more efficiently from their past preferences to decision in similar decision contexts. Risk acceptance can be defined by two different methods: implicitly or explicitly. Implicit criteria often involve safety equivalence with other industrial sectors (e.g., stating that a certain activity must impose risk levels at most equivalent to those imposed by another similar activity). In the past, this approach was very common because some industrial sectors (for example nuclear and offshore) developed quantitative risk criteria well before others,

and thus also constituted a basis for comparison. While this methodology has been surpassed by more refined techniques, it is still used occasionally today. Explicit criteria are now applied in many industrial sectors, as they tend to provide either a quantitative decision tool to the regulator or a comparable requirement for the industry when dealing with the certification/approval of a particular structure or system. Difficult question. Owners must be involved in the discussion with engineers, but other stakeholders exist as well (depending upon structure/infrastructure type). A benchmark that quantifies the risk for different existing portfolios of structures is always helpful to put acceptance criteria into perspective. Ensuring that the risk metrics are also interpretable/meaningful to the owners is also valuable. Showing sensitivity of design / cost of design / complexity / etc. to different risk targets can also facilitate the selection. Standardized minimum (max risk levels) should also be established or preserved for certain consequences. This is a process which requires an understanding of how owners will make design decisions (e.g., financial cost-benefit; aversion to loss of life), and then a translation of engineering information into the owner’s decision-making metrics. To determine risk acceptance criteria, engineers should consult with infrastructure owners on the actual capacity and safety of systems today to perform their intended functions (reliable supply of demanded services). Then, engineers and owners should engage users and a wide spectrum of other stakeholders to assess their expectations of performance. These two perspectives will reveal the gap between system-level capacity and user performance demand so as to initiate an informed discussion and eventual consensus of acceptable risk and acceptance. This kind of exercise should have a local emphasis, but one that aligns with national-level minimum priorities and expectations of performance and restoration times for lifeline systems. They should take into account the following information about risk levels: (a) Derived from a life-cycle optimization approach, (b) Implicit in modern, currently applied, building codes, (c) Implicit in ordinary activities in everyday life, (d) Derived from studies about amounts of money that individuals are disposed to spend for specified risk-reduction levels. These studies should be carried out with the participation of sociologists. Risk acceptance criteria would be determined taking into consideration costs and benefits, legal and statutory requirements, socio-economic and environmental aspects, the concerns of stakeholders, priorities and other inputs to the assessment (ISO 13824). Optimizing utility or cost-effectiveness.

I am not entirely sure that these should be determined by engineers. They are more of a political choice; we as engineers can only provide information to support this decision. Of course, we, as engineers, will be the most informed individuals on these topics and will have developed logical opinions and applied critical thinking on the issues, so we will try to influence those decisions. But this is not part of our technical role; it is part of our broader role in society. Engineers should convey to the public, and to decision makers, the risk issues, and society as a whole should set normative standards. Determine what factors comprise risk, then decide what criteria make up the factors, then establish how to measure the criteria that make up the factors. Based on code-specified limits. Risk acceptance criteria should be defined by using different existing approaches: ALARP, ALARA, GAMAB, minimum endogenous mortality, LQI (life quality index), societal risk criteria, and life-cycle cost.
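Several of the acceptance-criteria approaches listed above (life-cycle cost, LQI, levels implicit in current codes) treat the acceptable failure probability as the outcome of an optimization rather than as a fixed number. The following minimal Python sketch illustrates that idea for a single structure; the design alternatives, cost figures, discount rate, and the simplified discounting of the expected failure cost are assumptions made for illustration only and are not taken from the survey or from any standard.

# Illustrative life-cycle cost comparison:
#   E[C_total] = C_construction + p_f * C_failure / r
# where p_f is the annual failure probability, C_failure the failure
# consequence, and r the annual discount rate (a common simplification
# of renewal-type life-cycle cost models; all numbers are invented).

candidates = {
    # design alternative: (construction cost, annual failure probability)
    "baseline design":  (1.0e6, 1e-3),
    "strengthened":     (1.2e6, 1e-4),
    "heavily hardened": (1.6e6, 1e-5),
}
C_FAILURE = 5.0e7   # assumed consequence of failure (direct + indirect)
DISCOUNT = 0.03     # assumed annual discount rate

def expected_life_cycle_cost(c0, pf, c_fail=C_FAILURE, r=DISCOUNT):
    """Construction cost plus discounted expected failure cost."""
    return c0 + pf * c_fail / r

for name, (c0, pf) in candidates.items():
    print(f"{name:16s}  p_f = {pf:.0e}  E[LCC] = {expected_life_cycle_cost(c0, pf):,.0f}")

# The alternative with the lowest expected life-cycle cost implies an
# economically "optimal" target failure probability for this structure.
best = min(candidates, key=lambda k: expected_life_cycle_cost(*candidates[k]))
print("Lowest expected life-cycle cost:", best)

Under these made-up numbers the ranking shows why an acceptable risk level derived this way depends on the assumed consequences and discounting rather than being universal.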

APPENDIX D

Answers to Section III of the Survey

This appendix provides only the answers given by each respondent to the technical questions in Section III of the survey. The questions are
III.1 If the risk is too high, describe circumstances under which you would recommend specific actions.
III.2 On what basis would you prioritize the above possible options?
III.3 How should the risk information flow during the decision-making process, and who should be involved in the process?
III.4 How should agencies communicate risk to the public?
Answers to question III.1
This question is further divided into five subtopics. If the risk is too high: (version for Engineers) If the risk is too high describe circumstances under which you would recommend: (version for Researchers) Gray shading indicates the answers in the version of the survey distributed to Engineers, and light blue, the answers of Researchers.
1. Do you reduce the exposure to the hazards? (version for Engineers) Reducing exposure to the hazards. (version for Researchers)
Site modifications. Implementation of security measures. The decision maker might locate (or move) the building so that it is not near an active seismic source. Risk treatment strategies are not constrained. For example, removing the high-risk trees in the vicinity of the bridge, whose falling could compromise the safety of the bridge.

It should be first noted that the NRC would not accept a condition where the risk is considered “too high” at operating reactors for any of these answers. Having said that, reducing exposure to hazards is a focus, such as reducing the potential likelihood of initiators such as loss of offsite power and/or mitigating its potential impact via safety margin and defense-in-depth concepts. Yes. For an existing power plant, generally this is difficult for natural hazards. No comment. The exposure can be reduced or eliminated by deviating hazard with earth works or structures. In case of transportation infrastructures one can eliminate gravitational hazards by building a tunnel. By applying extra strengthening measures. By applying some security measures. Applying appropriate sealers, etc., for decks is one such example. E.g. by selecting an alternative location of the structure. This answer is for all options. Of course, I would consider all these options and more, but how can I say which one without having an idea of the situation? In most cases, one would use a combination of these actions. Furthermore, for an existing system, an important option might be to reduce epistemic uncertainties (e.g., doing nondestructive tests, performing proof tests, or other information-gathering actions). Such relatively cheap actions may in fact reduce the estimate risk and make the system acceptable. This option can be used if it does not affect performance or utility of the system. None. This is too difficult. Hazards are everywhere. The aircraft may be put under flight restrictions: maximum weight that it can carry, the types of maneuvers that can be flown, etc. This can only be achieved with advanced planning for newly developed areas. For example, not allowing any building permits near flood hazard zones. However, generally residential settlements are established centuries ago. Therefore, it is not possible to relocate an entire city away from hazard zones. For bridges, for example, elevation helps reduce hazard exposure to hurricane surge. Just note building in a hazard prone location is really the only other
alternative for many hazards. Difficult to dictate not putting bridge/roadways along the coast at all though, or crossing a fault, but suggesting to transportation planning groups to consider this may have some potential. For other infrastructure (e.g., housing) policy may drive this. If economically feasible. Choosing a different site; building in a different location. For the design and construction of new facilities for lifeline systems, it is essential not to expose them directly to the hazards. For instance, for storm surge hazards, the siting of substations, pumping stations, etc. should be away from flood zones to the extent possible. For earthquake hazards, base isolation of equipment along with sufficient slack in connections across components should minimize the direct exposure to the hazard effects. Land-use regulations, when they are feasible; for instance, not allowing construction in areas exposed to flooding or to slope instability failure. Finding the alternative place for the construction. For instance, increasing the distance of barriers to protect critical buildings against large explosions. Elevation if in flood areas, moving from flood-prone areas. Not really possible to take a bridge or road away from the hazard but can reinforce the infrastructure to withstand the hazard to reduce consequences. Taking adequate security and safety measures against man-made hazards. In the case of high consequences hazards, measures should be taken to reduce the exposure of the public and structure to the hazards by controlling or limiting the access to the structure.

2. Do you reduce the vulnerability of components? (version for Engineers) Reducing vulnerability of components. (version for Researchers) Structural hardening. The decision maker might decide to base-isolate the building, add dampers or provide seismic bracing/anchorage of components (beyond minimum code requirements).

Risk treatment strategies are not constrained. Replacing or repairing the vulnerable elements. Yes, by ensuring appropriate availability and reliability of risk-significant components/systems. Component strength. Modification or replacement of features with the highest risk potential. Yes. By strengthening structures, systems, or components. By adding procedural changes or changing a design. Maintain, repair, rehabilitate or replace. Introduction of robust detailing and eliminating progressive failure. By protecting components directly. Depending upon the structural system. By increasing strength, reduced potential to corrosion and scour damage, etc. Use benefit−cost analysis. Strengthening the important components. Elevating substations if flooding is a problem. More distributed systems. Parts will be repaired or replaced. Recent advance in structural material science and sensor technology allows to reduce vulnerability of each structural components. This is a design issue. Components can be strengthened during the design phase. General capacity increase, response modification, etc. Providing more robust systems, or high-ductility structural components; or adjusting the structural system to reduce response. When reducing exposure is infeasible, mainly for legacy systems, the reduction of vulnerability is the sensible approach to follow. Examples include
strengthening, isolation, and even reducing the importance of individual components within a system, particularly if lifeline systems are increasingly reconfigured towards decentralization. What do you mean by “components”? I assume you are thinking of a structure or a part of it. Suppose you have a building with a soft ground-floor story. In some cases, this configuration results from adding infilling elements to the frames in all stories of a building, with the exception of the ground-floor story. In general, this will make the building much more vulnerable to seismic excitations than one with the same structural frame arrangement, without the infilling elements. In some cases, the vulnerability of the building can be reduced by isolating the infilling walls from the frames; in others, it will be necessary to increase the lateral strength of the ground-floor story. Another example: a building which is too weak to resist the lateral forces recommended by the code. All stories can be strengthened by the addition of bracing elements. Retrofit or replacement of components and system. For instance, applying structural retrofits. Ocean surge walls, fireproofing, etc. Vulnerability of systems components. This is what we refer to as “system resilience” (or lack thereof). We recommend reinforcing the components or improve them to withstand the potential hazards. Make sure components are properly maintained and up to code to withstand possible hazards in the area. By retrofitting the structure. By providing protection systems that enhance the resilience of components when exposed to hazards. 3. Do you introduce structural redundancies? (version for Engineers) Introducing structural redundancies. (version for Researchers) Progressive collapse retrofit. No. Risk treatment strategies are not constrained. Adding elements to the system to provide redundancy.

The NRC uses the concept of defense-in-depth throughout multiple applications and processes, where it consists of an approach to designing and operating nuclear facilities that prevents and mitigates accidents that release radiation or hazardous materials. The key is creating multiple independent and redundant layers of defense to compensate for potential human and mechanical failures so that no single layer, no matter how robust, is exclusively relied upon. Defense-in-depth includes the use of access controls, physical barriers, redundant and diverse key safety functions, and emergency response measures. In this context, structural redundancy may be incorporated in a number of different ways but at a high level. Alternative load paths. It depends. If the risk (i.e., SEL/SUL loss estimates) is moderately high but less than insurance or finance maximums we would advise that voluntary seismic upgrades may be considered to reduce damageability. If the risk is higher, we would state that upgrade would be required to reduce the loss estimates. If there is partial or full collapse potential or the building does not meet Life Safety criteria per ASCE 31, we would recommend retrofit. Yes. Depends on what are the vulnerabilities. Introduction of robust detailing and eliminating progressive failure. Yes. Depending upon the structural system. This is done whenever possible and is a recommended primary option during the design and rehabilitation. For torrent barrier structures in the Alps redundant structural systems are created—if settlements cannot be moved to other locations. Use benefit−cost analysis. Adding components to improve the system redundancy. More distributed systems. Repairs may include putting doublers/patches on the part. Providing additional load paths will reduce vulnerability of structure to hazards, which is a major part of risk calculations.

This is a design issue. Redundancy can be built into the system if benefit−cost analysis justifies its use. For example, a building can be made stronger with additional columns and beams and bracing members if the additional cost is allowed. Alternate load paths for service loads and hazard induced loads. Geographically distributed systems benefit greatly even from small levels of redundancy. Hence, strategic locations, which are computationally expensive to assess, need to be identified for enhanced performance and resilience after disruption. The addition of redundancies cannot be considered as an option in the case of tall buildings exposed to seismic hazard. The benefits of redundancy are always included in conventional seismic design criteria, which accept the development of significant distortions in the nonlinear range. This implies that failure mechanisms occur when all the elements that contribute to the shear strength of the system reach their deformation capacities. An exception to this statement can be, for instance, the case when the required lateral shear capacities are provided by a rigid frame arrangement, with the infilling walls isolated from it. However, in some cases the possibility of having infilling elements providing additional shear strength capacity for very high intensities can increase the seismic vulnerability of the system if the distribution of the additional lateral strength is not uniform along the building height. Not only structural, also systemic (e.g., promoting the use of looped infrastructure networks). Only if intentional damage is an issue, or as a retrofit measure when adequacy of some existing components is questionable. High factors of safety, more than one member to shoulder a given critical load. This is another way to improve the infrastructure system and the asset itself— in case one component fails, the other will hopefully keep it up. For example, have multiple reinforcing beams. Constructing additional members that would help to make the structure redundant. Making continuous systems, alternative load path systems, ductility. 4. Do you reconfigure the entire system layout? (version for Engineers) Reconfiguring the entire system layout. (version for Researchers)

Building massing studies at early design stages. No (other than choosing early in the design process to perhaps use base isolation or dampers or select a structural system that is known to better protect against damage). Risk treatment strategies are not constrained. The NRC requires a detailed level of licensing where system configuration would be considered (having said this, it would be the licensee’s decision as to how this may be implemented). Yes. Generally not for an existing facility. Possible for a new design during the design stage. In case of buildings by defining no-build areas. In case of transportation infrastructures one can eliminate gravitational hazards by building a tunnel. Yes. If needed or if we can. Based on structural type selection. Use of pile supported substructure for bridges crossing waterways is a good example. For torrent barrier structures in the Alps the structural system chain is reconfigured if settlements can be moved to other locations. Use benefit−cost analysis. Use risk transfer methods (insurance or contracts). Examine risk avoidance options. Not economically feasible. This is a design issue and such a decision must be made at the preliminary feasibility study phase. At that stage there is more room for making radical decisions about design changes. As the design progresses, system design becomes more rigid and will have less flexibility for changes. Providing alternate designs using alternative structural systems that limit the response that is driving the damage estimates. For new systems, or even for the integration of legacy systems with new ones, the notion of reconfiguration into more decentralized yet coordinated systems is a promising route for increased resilience and lifetime performance.

The actions mentioned in connection with “reducing vulnerability of components” can be considered as reconfigurations of the entire system layout. This may not be easy, but at the urban planning level these type of considerations should be taken into account in hazard prone areas. Measures such as diagonal bracing for earthquake risks, smoothing discontinuities in vertical changes to lateral stiffness. Also provide alternative facilities or routes as a functional redundancy. Might be possible for new design following an approach specified in response to Q6, part I. Robust system. 5. Do you reduce the consequences of failure? (version for Engineers) Reducing the consequences of failure. (version for Researchers) Modifying space planning inside building or positioning of structural elements. The decision maker might decide to develop earthquake preparedness material that would mitigate possible failures and associated downtime. Risk treatment strategies are not constrained. Yes, we ensure that the consequences of a severe accident at a nuclear reactor can be mitigated via multiple overarching processes such as: protection of the reactor core (i.e., core damage protection as a first barrier), capability to contain fission products, and emergency response, for example. Emergency operations planning to account for losses to critical infrastructure and key resources, and staging of temporary services. It depends. If the risk (i.e., SEL/SUL loss estimates) is moderately high but less than insurance or finance maximums we would advise that voluntary seismic upgrades may be considered to reduce damageability. If the risk is higher, we would state that upgrade would be required to reduce the loss estimates. If there is partial or full collapse potential or the building does not meet life safety criteria per ASCE 31, we would recommend retrofit. Life loss consequences from flooding can be reduced through warning and education of the people at risk.

Yes. This can be done by contingency planning. Yes. Try to minimize. Designing for ductile failures, avoiding progressive collapse, etc. Insurance/contracts. Emergency response. Providing additional resilience by speeding the recovery process. By examining community behavior during and after hazard events. Consequences include direct and indirect costs. Direct costs include structural, nonstructural costs and casualty, and indirect costs include indirect economic costs accrued from business interruption, relocation, economic impact to society, etc. Oftentimes, indirect cost multiplies overall consequence tremendously and can be managed by preparing proper post disaster strategies. By establishing emergency preparedness measures. By monitoring the risk event on a continuous basis. This may be achieved through innovative construction techniques or materials usage. Also, a key contributor to consequences of bridge failure (for example) is typically associated with the loss of functionality at the transportation network level. Hence alternative routes, temporary structures, etc. can help to alleviate driver delay. No comment. When one solution does not solve the problem, using a combination of all of the above to reduce response, improve structural configuration, or reduce vulnerability using more ductile components, each contributing to overall reduction of consequences in some measure. Note that decentralization also reduces the consequences from localized failures, which themselves are better contained from spreading through entire regional or national systems, particularly in the context of infrastructure systems. Also, engaging users to take responsibility and attempt to be selfsufficient for at least 48 to 72 hours after an event will greatly aid with the focus of utility operators to restore service after disasters strike. In addition, increased monitoring of systems will enable utility owners to determine failure modes and locations more expeditiously, resulting in better use of limited resources soon after the contingencies materialize.

If sufficiently long warning time is available, evacuation of the system in risk may be a choice. This is the case for hazards such as flooding, hurricanes or tornados. For seismic events, the time between the detection of the event at the source and the arrival of the seismic waves at the site of interest is usually very short, but it may be as long as 50 seconds in Mexico City, for large-magnitude earthquakes generated at the subduction zone along the southern coast of the country. Unfortunately, evacuation times can be much longer than that in multistory buildings. Even for cases with short alarm times, it may be possible to automatically disconnect vulnerable equipment or installations, thus reducing the consequences of their failure or malfunction. Establishing the alternative path in the network. Especially working on the concept of disaster management and resilience, to maximize the post-event functionality and minimize the long-term losses (which are usually larger than the immediate losses). An important (and often overlooked by engineers) and often economical part of the pattern of preparation, response, recovery and mitigation. Make the infrastructure more resilient. For example, reinforcing support beams on a bridge to withstand an earthquake. Contingency plans for raid warning and evaluation. Quick response to repair damaged infrastructure. Building robust system. Communicating the risk to the public and structure users/occupants to minimize exposure. Enhancing resilience of system to avoid catastrophic collapse of entire system and restrict it to localized damage in noncritical components. Answers to question III.2 Gray shading indicates the answers in the version of the survey distributed to Engineers, and light blue, the answers of Researchers. How do you prioritize the options given in Question III.1? (version for Engineers) On what basis would you prioritize the options given in Question III.1? (version for Researchers) The NRC relies on engineering judgment, operating experience, and (for very specific actions) cost−benefit analysis. Benefit−cost analysis. Capability-oriented approach. There’s a problem with the form; only the first three should be checked.

Look at all the alternatives and see where the greatest risk reduction can be achieved. All of the above come into play in making decisions. RG 1.174 is a good example of factors that are to be considered in a risk-informed decision. Cost−benefit analysis, engineering judgment. Engineering judgement, experience from past events, and cost−benefit analyses. First two. All of the above and more. Cost−benefit analysis, Multi-objective decision analysis. Experience from past events. Capability/performance typically comes first, but there is also a cost/benefit consideration to all repairs and modifications. Cost−benefit analysis, Multi-objective decision analysis, Capability-oriented approach. I would prioritize above possible options on the following basis: Experience from past events, Cost−benefit analysis and multi-objective decision analysis. Past events – judgement – decision analysis – cost−benefit – capability. The best way to prioritize the options in Question 1 is to screen out competing approaches via past experience and judgment, to then use cost−benefit analyses or multi-attribute utility theory (MAUT) to elicit decision makers’ values and map them to the competitive options/alternatives that have been initially screened out. MAUT along with other outranking approaches need to become more mainstream in structural and infrastructure engineering as they could incorporate multiple dimensions and remain mathematically rigorous yet practical in the field. From the choices above, I would select the following: Experience from past events; Cost−benefit analysis; Capability-oriented approach (evaluated in terms of degree of code-compliance); Multi-objective decision analysis.
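One of the answers above recommends screening out infeasible options by judgment and then ranking the remaining alternatives with cost-benefit analysis or multi-attribute utility theory (MAUT). A minimal additive-MAUT sketch in Python follows; the attributes, weights, and scores are invented for illustration and would in practice come from stakeholder elicitation rather than from this report.

# Additive multi-attribute utility: U(option) = sum_i w_i * u_i(option),
# with attribute utilities u_i already scaled to [0, 1] (higher is better)
# and weights summing to 1. All values below are purely illustrative.

weights = {"risk_reduction": 0.5, "cost": 0.3, "downtime": 0.2}

options = {
    "reduce exposure (relocate)":         {"risk_reduction": 0.9, "cost": 0.2, "downtime": 0.3},
    "reduce component vulnerability":     {"risk_reduction": 0.7, "cost": 0.6, "downtime": 0.7},
    "add structural redundancy":          {"risk_reduction": 0.6, "cost": 0.5, "downtime": 0.6},
    "reduce consequences (preparedness)": {"risk_reduction": 0.4, "cost": 0.9, "downtime": 0.8},
}

def utility(scores, w=weights):
    """Weighted sum of normalized attribute utilities."""
    return sum(w[a] * scores[a] for a in w)

ranked = sorted(options.items(), key=lambda kv: utility(kv[1]), reverse=True)
for name, scores in ranked:
    print(f"{utility(scores):.2f}  {name}")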

Cost−benefit (probabilistic, if needed) multi-objective decision support [there is an issue with the flag above]. They are probably all appropriate in certain cases. Answers to question III.3 Gray shading indicates the answers in the version of the survey distributed to Engineers, and light blue, the answers of Researchers. How does the risk information flow internally during the decision-making process and who is involved in the process? (version for Engineers) How should the risk information flow during the decision-making process and who should be involved in the process? (version for Researchers) Usually team is small and strict document control serves as information flow, along with person-to-person meetings. Prepare report, present and discuss findings with the client. Depends on the offices, sometimes it is a collaborative effort, other times it is done by an individual then shared. Program specialist, program and project managers, senior leaders. Very similar to the risk assessment information flow, and stakeholders such as the owner, the inspectors, the engineers and the experts are involved. Yes, there is explicit guidance through multiple processes on the level of engagement, communication, decision-making authority requirements and other aspects that are intended to ensure the appropriate flow of information is used in the decision-making process. As an example, see the publicly available internal procedure LIC-504, Office Instruction, “Integrated Risk-Informed DecisionMaking Process for Emergent Issues” (http://pbadupws.nrc.gov/docs/ML1005/ ML100541776.pdf). Between engineer and other subject matter experts. Between modeler and analyst, between engineer and stakeholders, between engineer and client agency. We are not directly involved in the decision-making process. This is complicated. Risk team to a senior level review team to upper management. As mentioned above, there is a structured approach to risk-informed decision making that includes quantitative results, other insights, and cost-benefit
analysis. Both the technical staff and management are involved in formulating a decision. This also involves interactions with the affected licensees. Internal flows via communications, project team is involved. Decision making is an iterative process and all information should be available to everyone. The owner, consultant, contractor, affected public should be informed. The risk communication is essential part of risk management. The whole design team’s input after the risk analysis is done. See the department’s risk management guide mentioned before. In the field of torrent barriers in the mountains there is the following hierarchical communicating and protecting procedure with respect to risk issues: Responsible engineer for territory supervision / responsible engineer for state supervision / ministry. On each of these levels there are planning and evacuation assignments with respect to risk, and it is a clear path defined with respect to the communication of risk to the public. Problem dependent. Generally, the information should be customized for particular objectives and recipients. Decision-makers have to be provided with Pareto objective sets resulting from multi-objective optimization processes. Economists, environmentalists, and structural reliability specialists have to be involved in this process. Engineers—civil and electrical, environmental, urban planners and policy experts. The risk assessment should be performed by the engineers. The results should flow up the chain of command to the lowest management level authorized to accept the risk level determined in the assessment. Risk-informed decision making consists of hazard modeling, structural vulnerability assessment, consequence assessment, risk assessment and prioritization of available options. Depending upon hazard type, earthquake engineer, wind engineer, fire analyst, etc. should be involved in hazard modeling. Structural engineer should be involved in structural vulnerability assessment depending upon his/her specialty. Consequence assessment can be done by structural engineer and/or risk analyst for direct and indirect costs. Overall risk assessment and decision making should be led by risk analyst who communicates rigorously with the owner.

As defined in the beginning paragraph of this section, risk information must flow during the decision-making process thorough system’s owner, engineers, analysts, system managers. The information flow can be defined within the context of the organizational structure. Organizational structure may be different depending on the agency, etc. Analyst, owner, public / other stakeholders. It should flow from one person per hazard to a team leader; It should then be up to the lead to communicate with the stakeholder(s). Information should flow freely back and forth between engineers and stakeholders/decision-makers. Stakeholders must be involved. Engineers must stop making risk-based decisions on behalf of the general public. It obscures what we do and creates a public misconception about hazards and risks associated with our built environment. Risk information should flow in a closed loop where risk analysts provide to owners and stakeholders evidence of risks and alternative ways to manage it, while users and additional stakeholders provide extra input about needs and requirements to influence owners back. They in turn would ask additional “what-if” questions to risk analysts, and the cycle is repeated until all owners and users are satisfied with a reasonable compromise. The existence of guidelines and future standards based on risk and resilience principles will empower users and stakeholders in this decision-making process. At least two situations can be considered: immediate (or short term) and longterm risk. + For decisions related to short term risks, emergency actions may have to be implemented. For this purpose, civil-protection agencies should have quantitative information of the risks, provided by groups including both specialists in the corresponding natural hazards and engineers capable of assessing levels of vulnerability and risk and proposing risk-reduction strategies. The information from one of these groups should flow to the civilprotection agencies and then to the communities that may be affected, in the form of recommended emergency actions, using the most adequate means of communication for each particular case. + Decisions related to long term risks should be made by civil-protection agencies, focusing on risk reduction and mitigation actions for two possible situations: (a) existing systems at hazardous sites and (b) systems to be built in the future. For each of these cases, alternative risk-reduction strategies must be examined, taking into account quantitative information about immediate and long-term cost-benefit analysis for each alternative. The study of these alternatives should be carried out by the groups of specialists mentioned in the preceding paragraph. The strategies may include (a) immediate actions, to be communicated to communities at risk, and (b) risk-prevention strategies, applicable to systems to be built in the future.

Engineers (including specifically trained ones) and authorities should be involved. Anyone who is a stakeholder (directly, or for many of the consequences indirectly) should be part of the process from the beginning. Studies have shown that successful engineering policies involve the public from the beginning (including people engineers would classify as lay people), make the people involved have a genuine empowerment role, consider alternatives beyond the usual engineering solutions, be aware of the cost and environmental aspects of solutions, especially in terms of social aspects, and provide feedback. The information should come from the agency who owns the infrastructure. Experts in the field should be involved in the process. Other stakeholders must be involved (taxpayers or the general public who own the public infrastructure); businesses in the region, and system users. Risk manager, Internal stakeholders, External stakeholders. Answers to question III.4 Gray shading indicates the answers in the version of the survey distributed to Engineers, and light blue, the answers of Researchers. How do you communicate the risk to the public? (version for Engineers) How should agencies communicate risk to the public? (version for Researchers) We do not. Through presentations that make it into the public domain (6 o’clock news). For example, during the 100-Centennial conference of the 1906 San Francisco Earthquake (2006), the findings of a major study of the consequences to the San Francisco Bay Area (19 counties) due to a repeat of the 1906 San Francisco earthquake were presented at a plenary session with local politicians that received broad media coverage and which provided talking points for other related talks and conference press releases. AAL and other probabilistic tools, however elegant, do not work well when communicating risk to the public. It is much more effective to describe the expected consequences of a major earthquake that will occur sometime in the future (P = 1.0, just a matter of when) that are newsworthy and easily comprehended by the public. In the 1906 study example, the findings included a total dollar loss estimate of $150 billion (illustrating increased risk since 1906 due to the 10-fold growth of the region), major structural damage to over 10,000 commercial buildings (e.g., roughly 40% of all commercial buildings in San Francisco and San Mateo Counties would be closed), 250,000 displaced households requiring temporary shelter, and the remarkable finding that over one-half of est. 1,800 nighttime fatalities (and over one-third of est. 3,400 daytime fatalities) would occur in less than 5% of all building types (i.e., soft-story wood, nonductile concrete and URM
buildings)—suggesting (to decision makers) that seismic retrofit programs could target a relatively small population of the most vulnerable buildings and still be quite effective in improving life safety. Reference: “When the Big One Strikes Again: Estimated Losses Due to a Repeat of the 1906 San Francisco Earthquake,” Earthquake Spectra, Special Issue II, Vol. 22 (Oakland, California: Earthquake Engineering Research Institute). 2006 Outstanding Paper Award. It depends on the level of risk. All communication with the public is done through the bridge owner. The NRC has an extensive focus on sharing information with the public on multiple issues, where risk information is included. This is a challenging area and continuous improvement is sought but, for examples of the NRC’s activities in this area, see NUREG-2122: Glossary of Risk-Related Terms in Support of Risk-Informed Decision-making (http://pbadupws.nrc.gov/docs/ ML1215/ML121570620.pdf); NUREG/CR-7033: Guidance on Developing Effective Radiological Risk Communication Messages: Effective Message Mapping and Risk Communication with the Public in Nuclear Plant Emergency Planning Zones (http://pbadupws.nrc.gov/docs/ML1104/ML110490120.pdf); NUREG/BR-0318: Effective Risk Communication: The Nuclear Regulatory Commission’s Guidelines for Internal Risk Communication [While this is internal, it highlights techniques that can be used for external purposes as well] (http://pbadupws.nrc.gov/docs/ML0509/ML050960339.pdf). Odds of hazard occurrence, Historic data and examples, Examples of underestimation of risk and lessons learned, probability of exceedance over lifespan (odds of design failure). We have no communication with the public. Typically, we talk directly with clients subsequent to site visit to the property then issue an executive summary and cost table within one week. A draft report is sent via email for client review and comment within two weeks of the site visit. Once all client comments have been addressed a final report is issued. I don’t. Normally USACE communicates to the public through local governments/agencies. Freedom of Information Act. Public meetings. The NRC has published guidelines for risk communication to both internal and external stakeholders. Training is conducted on this subject. Most of the regulatory decisions are made in an open environment so that all of the
stakeholders can view the process. There are designated formal opportunities for public participation (e.g., hearings or public comments). Public hearings. In clear and understandable terms. Graphical presentation should be preferred, e.g., risk maps. By coordinating with the public agencies. Outreach by demonstrating structural failure and risk involved. This is a sensitive issue and depends on the specific case. This is a whole topic for another paper. Problem dependent. Generally, risk messages should be customized for particular objectives and recipients. The risk to the public has to be reported in the following terms: very high, high, moderate, and low. Through public policy experts and consulting cognitive psychology experts. Not sure, but it needs to be simple since most of the public are not numerically or technically savvy. One easy language to communicate risk to layperson would be risk premium or insurance premium. I believe use of a nominal value of risk that can be used for different hazards could be helpful. Agencies should communicate risk to the public by emergency preparedness measures and training and educating of the public thorough various media such as the internet. Education campaign on current levels and potential changes with heightened investment. With comparative comments and language. Very carefully. It is increasingly accepted that the public is indeed sensitive to the severity of the risk; hence, categorical or qualitative scales would do well to align the public risk perceptions to expected risks and contribute to adequate behavior and
action. However, such risks should be in a geographical scale that is pertinent to the experience of the public. This should be done in different forms, at different levels. To professionals responsible for (a) revision and updating of building codes and (b) identification and reinforcement of systems likely to be exposed to significant risks, relevant information should be transmitted by civil-protection agencies through the corresponding professional organizations. For transmitting information to the general public, civil protection agencies should examine the best options for each community. In general, they will largely depend on the socioeconomic and cultural background conditions. They must transmit information (a) trying to create or enhance risk consciousness attitudes and (b) showing the best (feasible) options to reduce risk. Agencies should develop effective risk communication strategies to improve message design and delivery in difficult or uncertain circumstances, which will increase public trust in the agency and reduce public anxiety about a risk and will help key stakeholders make better decisions. Risk is very difficult to communicate. In these cases, I always resort to comparisons or relative numbers. Things like “this retrofit will reduce losses by x% if a hurricane like Sandy strikes again.” Same as above. Community outreach and seminars. Print and electronic media. YouTube, Facebook, Twitter, etc. Through social media, newspaper, television.

APPENDIX E

Answers to Section IV of the Survey

This appendix provides only the answers given by each respondent to the technical questions in Section IV of the survey. The questions are
IV.1 In a few words, provide your overall assessment of current risk analysis processes and how to implement them in practice.
IV.2 What are the weak links in current processes that need further improvement?
IV.3 What are your recommendations for immediate improvements to current processes?
IV.4 Please scale the priorities for near-term improvements of processes.
IV.5 Please attach any helpful documents that further describe your proposed risk analysis process.
Answers to question IV.1
Gray shading indicates the answers in the version of the survey distributed to Engineers, and light blue, the answers of Researchers. In a few words, provide your overall assessment of the risk analysis process, its strengths, and how it compares to traditional deterministic or purely probabilistic approaches. (version for Engineers) In a few words, provide your overall assessment of current risk analysis processes and how to implement them in practice. (version for Researchers)
Allows prioritization of some mitigation measures over others based on reduction in risk per unit cost. Risk assessment is a new and active research topic. There have been different approaches which rely on a combination of deterministic and probabilistic factors. This is the nature of risk assessment. We always try to understand the perceived risk, while the actual risk may be different than the perceived one.
Risk assessment approaches try to bring the perceived risk closer to the actual risk. The risk analysis process provides significant insights if implemented appropriately and used in the correct context: to provide risk insights as well as a prioritization ranking of more/less risk-significant issues. It should be noted that the risk analysis process can detract from the benefits if focused too heavily on quantitative results (i.e., solely risk-based) and without appropriate calibration (e.g., inclusion of uncertainty information). With respect to a purely deterministic approach, probabilistic approaches can provide a risk spectrum that accounts for both likelihood and consequence while also managing residual risk in a way that is less binary (i.e., zero probability of failure if a design basis is met or vice versa). My overall assessment of the risk analysis process we are involved in is that it provides a reasonable and relatively repeatable method to quantify damageability given our clients' acceptable level of uncertainty. I believe our approach, while it can be superficial, represents the best of both worlds. A traditional deterministic analysis often cannot be satisfactorily performed within the given timeframes we operate in, whereas a purely probabilistic approach often yields similar results for buildings with quite different performance characteristics. The USACE risk assessment process is being developed while we are using it. One engineer in the Corps likens it to flying the plane while you're building it. Parts of the risk process are fairly well understood and developed, like determining the annual probability of exceedance for river flooding, while others are not well understood from a quantification standpoint, like the probability of progression of seepage and piping in a levee. However, the process does provide a better understanding of risk than deterministic approaches. Deterministic methods and risk methods are complementary. The deterministic methods are necessary for a complex system involving many disciplines and a large number of designers to avoid individual decisions. However, the design is to assure that a particular structure or component will perform its function. On the other hand, risk assessment is the total integrated response of a system when something fails. It asks the questions: What can go wrong? What are the consequences? And how likely? Risk assessment is an integrated response which takes into account design, as-built construction, operation, and human aspects. Risk assessment can identify weaknesses that are not apparent in the design process. There are developed processes worldwide, both in agencies and private companies, and they are implemented in practice. However, they differ from country to country and from company to company and are hardly comparable.
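The answer above cites the annual probability of exceedance for river flooding as one of the better understood parts of the process. For readers outside that specialty, the standard conversion between a return period T and the chance of at least one exceedance during an n-year service life is p_n = 1 - (1 - 1/T)^n, assuming independent annual maxima. The short Python computation below uses arbitrarily chosen return periods and service lives purely as an illustration.

# Probability of at least one exceedance of the T-year event in n years,
# assuming independent annual maxima: p_n = 1 - (1 - 1/T)**n.

def exceedance_probability(return_period_years, service_life_years):
    annual_p = 1.0 / return_period_years
    return 1.0 - (1.0 - annual_p) ** service_life_years

for T in (100, 500, 2500):      # illustrative return periods (years)
    for n in (50, 75):          # illustrative service lives (years)
        p = exceedance_probability(T, n)
        print(f"T = {T:4d} yr, life = {n} yr -> P(at least one exceedance) = {p:.1%}")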

Risk assessment considers probability on two variables, Load and resistance, in other words hazards and resilience. Deterministic approach requires variance in structural resilience parameters and pure probabilistic approach requires study of hazard exceedance probability/uncertainty. In the technical sciences, a quantitative measure in the form of a probability of failure value is used to compare the actual risk present with the tolerable risk value as defined by the relevant standards, or with reasonably acceptable risk. High level of maturity if entrusted to experienced analysts. The state-of-the-art in the risk analysis processes is quite advanced; however, the state-of-practice is in its initial stage of development. For our application, the risk analysis processes are not too bad. Our biggest issue is that the people who have guided the little bit of implementation that has occurred were ignorant of the techniques developed in the 1960s and were not clear in explaining their thoughts behind the approaches that were recommended. Now we are having to educate people about risk assessment techniques and convince them that the approaches recommended by the “giants” of our organization are not the best. Since first-generation probability-based design has been adopted in structural engineering practice, there have been discussions among researchers in understanding societal risk acceptance toward low-probability, high-consequence hazards. Implementing such discussions and the result of recent developments into risk engineering practice can be done based on advanced decision models. Characterization of hazard potential, structural or infrastructure system fragility, and consequence modeling integrated to offer estimates of risk. I believe risk targeted design is within arm’s reach for bridge infrastructure (even if risk is limited to damage or functionality loss), but it lags behind the recent advances made in the seismic building community. Target reliability levels for classes of bridges (e.g. critical/essential, standard) can be identified also to achieve transportation network level performance objectives. However, standardizing this for different network topologies is not trivial. Based on the portion I know (e.g., natural hazards, it is relatively good from a hazards standpoint). From my perspective, I believe current risk analysis processes to be in their infancy. They might have been around in the nuclear arena for a long time, but we are just now beginning to think about these concepts in general building design and construction. We have big challenges in deciding on criteria for
acceptable levels of risk, and on communicating risk information to nontechnical stakeholders and decision makers. Current risk analysis processes span from sophisticated to simple methods, which could be more efficacious together if consistent across them, and if tools for the quantification and management of risk are available for the public at large. Also, these distinct resolution methods could complement each other in a screening phase first and then in elaborate analyses for worthwhile cases. Concepts and tools for decisions related to seismic risk are well known, at least among the academic community. Seismic hazard maps and functions with the information required for specific cases are available in some countries and regions, including criteria and methods to account for local soil conditions and topographic configurations. Most risk studies are focused in one single hazard source. This is far away from reality. Studies, developments and implementation actions within a multihazard and multirisk framework should be encouraged. Dealing with specific structural systems requires the determination of the corresponding seismic vulnerability functions. The basic concepts and tools are also available; as far as I know, they have been applied to specific cases, mainly with the objective of understanding the sensitivity of the vulnerability functions to several features of the structural arrangements. In order to implement current risk analysis in practice we have to organize workshops for identification of possible actions, which depend on the specific countries or communities. These workshops should include the participation of risk analysts, specialists in different types of hazard sources, engineers and sociologists, and members of civil-protection agencies. They should be oriented to the identification of specific practical actions regarding implementation of current processes and the proposal of actions oriented to reducing their limitations. We should not be worried of saying that our job is difficult and delicate. All other branches of engineering have higher average salaries than civil engineers. Putting aside the financial aspect, this means that civil engineers undersell their knowledge and their work. Everybody understands that a Ferrari has higherquality engineering than a Ford, but when it comes to buildings the engineering component is given for granted: the building/bridge has to stand up, an engineer will crunch some numbers to make sure that this is the case, there is no such thing as a building with “better structural engineering.” The only way to invert this trend is to stop accepting very simplified analyses (which, indeed, don’t require much expertise and for this reason are not valued) and start doing things how we feel they should be done. Nobody will ask a Boeing engineer what method is “practical” for his analyses on the next 747. They just ask what is the most appropriate. We shouldn’t settle for the most practical either. We have good methodologies, which need refinement, but are a good starting point. We should just start asking that they are enforced.
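Several answers to question IV.1 describe risk assessment as the integration of a hazard model (how often load intensities occur), a fragility or load-versus-resistance model, and a consequence model. The Python sketch below carries out that integration numerically for a single asset to produce an expected annual loss; the power-law hazard curve, the lognormal fragility parameters, and the loss value are all invented for illustration and do not come from the survey or from any code.

import math

# Expected annual loss (EAL) for one asset:
#   EAL = sum over intensity bins of (annual rate of the bin)
#         * P(damage | intensity) * loss given damage.
# Hazard curve, fragility parameters, and loss are assumed values.

def annual_rate(im):
    """Illustrative hazard curve: annual rate of exceeding intensity im."""
    return 0.01 * im ** -2.5          # power-law shape, made up

def p_damage(im, median=0.6, beta=0.5):
    """Lognormal fragility: P(damage | intensity measure im)."""
    return 0.5 * (1.0 + math.erf(math.log(im / median) / (beta * math.sqrt(2.0))))

LOSS_GIVEN_DAMAGE = 2.0e6             # assumed consequence, monetary units

ims = [0.05 * k for k in range(1, 61)]                # intensity grid, 0.05 to 3.0
eal = 0.0
for lo, hi in zip(ims[:-1], ims[1:]):
    rate_of_bin = annual_rate(lo) - annual_rate(hi)   # rate of IM falling in [lo, hi)
    im_mid = 0.5 * (lo + hi)
    eal += rate_of_bin * p_damage(im_mid) * LOSS_GIVEN_DAMAGE

print(f"Expected annual loss (illustrative units): {eal:,.0f}")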

Most are done by a computer program or detailed risk equations that require a substantial amount of data. The implementation of risk analysis processes in practice is still limited because of their complexity. There is a need to develop practical and reliable risk assessment approaches that can be used by engineers.
Answers to question IV.2
Gray shading indicates the answers in the version of the survey distributed to Engineers, and light blue, the answers of Researchers. What are the weak links in the current process that need further improvement? (Please specify current shortcomings.)
Tail risk is often not considered because it does not get generated in a library of hazard events. Epistemic uncertainties and issues of completeness can challenge risk assessment if not handled appropriately or, at least, acknowledged in the decision-making process (and augmented by a risk-informing philosophy where nonrisk information still plays a role). Regarding building seismic damageability, unfortunately there is limited data from past (US) earthquakes to draw from. Presumably, at the national level, there is reduced funding to develop and implement emerging technologies. In analyzing and communicating risks, there needs to be more transparency and accountability. Determining the probability of failure for dam and levee systems requires much judgement and guesswork, particularly for the progression of failure modes from initiation (which is what design calculations compute) to breach. Also, computing probabilities of failure for a single component is fairly well understood. How to combine correlated probabilities of failure for a large dam or levee system is not well understood. As with any methods, we need to continue to learn from operating experience, actual events, new understanding, and computing advances in our methods. From the risk assessment perspective, we should reduce epistemic uncertainties where feasible. In order to improve the process, one needs to unify the risk analysis process. Furthermore, systematic data collection on failures has to be established. It is a sad fact that failures, for obvious reasons, are kept secret, thus depriving us of valuable insight. Legislation has to enforce the documentation of failures in infrastructure management systems.
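The comment above about combining correlated failure probabilities for a dam or levee system can at least be bracketed with the classical first-order bounds for a series system: full dependence gives the maximum component probability, and independence gives one minus the product of the component survival probabilities. The Python sketch below uses invented segment probabilities purely to show the spread between the bounds.

# First-order bounds on the failure probability of a series system
# (e.g., a levee reach fails if any segment fails):
#   max_i p_i  <=  P_f,system  <=  1 - prod_i (1 - p_i)
# Lower bound: fully dependent segments; upper bound: independent segments.
# The segment probabilities below are made up.

from functools import reduce

segment_pf = [2e-4, 5e-4, 1e-3, 3e-4, 8e-4]   # assumed annual failure prob. per segment

lower = max(segment_pf)                                                 # full correlation
upper = 1.0 - reduce(lambda acc, p: acc * (1.0 - p), segment_pf, 1.0)   # independence

print(f"Series-system annual failure probability bounds: [{lower:.2e}, {upper:.2e}]")
# Any partial correlation between segments yields a value between the bounds,
# which is why characterizing that correlation matters for long systems.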

Ongoing research and development incorporating new material properties. The problem is that it is not possible to use the simple product of the probability that a risk event will occur and its consequences or damage extent for evaluating the average risk expectation. Confusion regarding the concept of risk. Completely unavoidable human errors in subjective judgment of risk. Entirely ineffectual but popular subjective scoring methods. Misconceptions that block the use of better, existing methods. Recurring errors in even the most sophisticated methods. Institutional factors. Unproductive incentive structure. Very few experts in this field. Codes of practice are reliability-based, not risk-based. Practicing engineers are not familiar with the risk-based approach. The weak link in the current process is getting data to establish the initial crack size distribution and the distribution for the loads applied to the structure. Nonstationary hazard modeling approaches need to be studied further to provide a comprehensive risk management framework considering climate change. Societal risk acceptance should be studied further for risk management practices that better meet the public's needs. Integrated consideration of risk given to multiple hazards. Coupled impact of deterioration on hazard performance. Derivation of component reliability levels required to achieve system-level performance (particularly when considering the bridge as a component of the transportation network). Characterization/modeling of an expanded set of consequences of interest to decision makers with uncertainty. The level of complexity is not consistent and not articulated well. I think stakeholders assume some of the process is being done more accurately than it often is. As noted above, I think we have big challenges in deciding on criteria for acceptable levels of risk, and on communicating risk information to nontechnical stakeholders and decision-makers. We also have challenges in structural response simulation and knowing what level of response corresponds to what level of damage. The current processes tend to be too qualitative and also tend to use concepts such as risk, resilience, sustainability, etc., without full consistency in their definition, and much less in their rigorous quantification.

Criteria and methods oriented to the determination of seismic design criteria for specified target reliability and expected performance levels have been developed, but their transformation into recommendations for practically oriented, easy to apply, codified design rules is still far from being achieved. The formulation of codified seismic design criteria with consistent reliability and expected performance levels requires the development of wide parametric studies taking into account the influence of a large number of variables. As mentioned above, we have devoted very little attention to multihazard and risk analysis. In many cases, we do not take into account the influence of degradation of structural properties associated with the system response to different hazard events, or to the process of damage accumulation associated with cracking or corrosion. These are certainly sources of enhancement. It is necessary to improve the links between researchers or engineers working on failure probability estimation and consequence estimation. From a scientific point of view, I believe that the hazard models (especially for regional analyses) are still in need of substantial work. The so-called unconditional probability of reaching a limit state is still a topic of research (but significant progress has been done lately). Lack of understanding of the long-term benefits of risk management decisions. Include the concept of resilience and incorporate more factors into the risk equation that quantifies risk. The current approaches focus more on the mathematics associated with risk assessment rather the engineering aspects and the socio-economic benefits that will be derived from a risk-based analysis. Answers to question IV.3 Gray shading indicates the answers in the version of the survey distributed to Engineers, and light blue, the answers of Researchers. Does your organization have any plans in the near future to improve your current process? (Please specify.) (version for Engineers) What are your recommendations for immediate improvements to current processes? (Please specify.) (version for Researchers) Yes. No. Yes, we are planning to bring cost to the users into the formulation.


• The NRC strives for continual improvement of its risk assessment tools, methodologies, and risk-informed decision-making processes. We do not have any specific plans, but we are continually reassessing our options as improved technologies and methodologies are developed.
• I am not involved in risk assessment methodology planning.
• Yes. For example, we are working to advance probabilistic flood hazard methodology.
• Establish the process for defining acceptance criteria.
• Yes. Our company has always worked, and continues to work, to improve the system in place.
• Get more engineering students to learn probability and statistics. Get them away from analysis and design courses that make them think deterministically.
• Standardization for particular applications. Training and education.
• Offer courses in risk-based analysis and design, implement risk-based techniques in practice, improve communication of risk to the public, and establish risk thresholds.
• The immediate improvement that the USAF needs to make is to move from the approximate method for reliability assessment currently in use to the more accurate techniques recommended by the ASCE Committee on Factor of Safety in 1964.
• For bridges, the low-hanging fruit is single-hazard, risk-targeted design maps for multiple performance objectives (the underlying calculation is sketched at the end of this list of answers).
• Some type of standardization beyond what is present.
• Immediate improvements would rely on a homogeneous set of definitions, as well as on the development of a taxonomy of methods, so that users have guidance on which methods are appropriate for which problems or associated decisions. Also, examples with available data, particularly for geographically distributed systems, would encourage quantitative approaches that would ultimately help establish a rigorous process and a set of metrics that are quantifiable and meaningful across owners, regulators, and users.
• Nothing is easy "immediately." What I can say is that I consider the recent focus on system-level analyses, involving entire infrastructure systems and their combination, to be very positive.


• Creating ways that decision makers (especially publicly elected ones) can better receive recognition and reward for decisions regarding low-probability, high-consequence events that are unlikely to happen during their political term of office (the incompatibility of lifetimes).
• Incorporate security risk assessment into multi-criteria decision-making processes for decisions regarding infrastructure improvement or construction projects.
• Empirical assessment tools should be replaced by analysis-based guidelines.
• We need specific ways to calculate loss due to structural damage. Resilience quantification is important too.
• We need to develop both qualitative and quantitative approaches for risk assessment.
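The risk-targeted design maps mentioned in the answers above rest on a standard calculation: the mean annual frequency of reaching a limit state (for example, collapse) is obtained by convolving the site hazard curve with the structure's fragility, and the design intensity is adjusted until a target risk is met. The sketch below shows only the structure of that calculation; the power-law hazard curve, the lognormal fragility with dispersion 0.6, the assumed margin of 1.5 between median collapse capacity and design intensity, and the target of 1% probability of collapse in 50 years are all illustrative values, not values taken from the survey:

```python
import numpy as np
from scipy.stats import lognorm
from scipy.integrate import trapezoid
from scipy.optimize import brentq

# Minimal sketch of a risk-targeted calculation with hypothetical parameters.
# lambda_LS = integral of P[LS | IM = x] * |d(lambda_IM)/dx| dx

im = np.linspace(0.05, 5.0, 2000)          # intensity-measure grid, e.g., Sa (g)

# Hypothetical hazard curve: mean annual frequency of exceeding each IM level
k0, k = 1e-4, 2.5
haz = k0 * im**(-k)
dhaz = np.abs(np.gradient(haz, im))        # |d(lambda_IM)/d(im)|

beta = 0.6                                 # assumed fragility dispersion

def limit_state_rate(median_capacity):
    """Mean annual frequency of reaching the limit state (lognormal fragility)."""
    frag = lognorm.cdf(im, s=beta, scale=median_capacity)
    return trapezoid(frag * dhaz, im)

# Target: 1% probability of collapse in 50 years, converted to an annual rate
# under a Poisson occurrence model.
target_rate = -np.log(1.0 - 0.01) / 50.0   # about 2.0e-4 per year

# Median collapse capacity needed to meet the target, found by root finding
median_req = brentq(lambda m: limit_state_rate(m) - target_rate, 0.1, 10.0)

# With the assumed margin of 1.5 between median capacity and design intensity:
design_im = median_req / 1.5

print(f"Required median collapse capacity: {median_req:.2f} g")
print(f"Risk-targeted design intensity:    {design_im:.2f} g")
```

In practice, the hazard curve at each map location would come from probabilistic seismic hazard analysis, and the fragility shape and margin would be standardized; repeating the same inversion for other limit states would yield maps for multiple performance objectives, which is the extension suggested above for bridges.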


APPENDIX F

Pertinent Standards and Guidelines

This appendix provides a list of documents indicated by the survey respondents as pertinent to risk analysis and related activities. As shown in Table F1-1, the most frequently cited documents were those issued by ASCE and FEMA; the ASCE 7 design standard was the one mentioned most often by the respondents.

Table F1-1. Pertinent Codes, Standards, Specifications, and Guidelines

AASHTO (2008, 2011a, 2011b, 2017)
American Bureau of Shipping: various rules
American Concrete Institute: ACI 318 (2014), ACI 530 (2013)
American Institute of Steel Construction: AISC 325 (2017), AISC 341 (2016)
American Petroleum Institute: various standards
American Society of Mechanical Engineers: ASME/ANS RA-S (2009b), ANS/ASME 58.24 (2009a), ASME (2017), ASME/ANS 58.212 (2007)
American Society for Testing and Materials: ASTM E2026-16 (2016), ASTM E2557-07 (2007)
ASCE: ASCE 4-16 (2017b), ASCE 5-13 (2013), ASCE 7-16 (2017a), ASCE 10-15 (2015b), ASCE 24-14 (2014), ASCE 31-03 (2003), ASCE/SEI 37-14 (2015a), ASCE 41-17 (2017c), ASCE/SEI 43-05 (2005), ASCE/SEI 48-11 (2011b), ASCE 59-11 (2011a), ASCE 74-10 (2010), ASCE 111 (2006), ASCE 113 (2008)
American Welding Society: AWS D1.1 (2015), AWS D1.6 (2017)
American Wood Council: AWC (2018)
Applied Technology Council: ATC 13 (1985), ATC 13-1 (2002)
Canadian Standards Association: CSA (2014a, 2014b), CSA/NRCC 56190 (2015)
Department of Homeland Security: Interagency Security Criteria, DHS (2019)
Federal Emergency Management Agency: FEMA P-58, P-350, P-351, P-352, P-353, P-547, P-646, P-695, P-751, P-795, P-807, FEMA-452, and FEMA Hazus MH 2.1 (2018b)
Federal Highway Administration: FHWA (2006, 2012, 2018)
Institute of Electrical and Electronics Engineers: IEEE (2018a, 2018b)
International Code Council: ICC (2018a, 2018b)
International Organization for Standardization (ISO): TC-262/ISO-31000 (ISO 2018), TC-98/ISO 13824 (ISO 2009a), TC-98/ISO 2394 (ISO 2015), TC-98/ISO 13822 (ISO 2010), TC-92/ISO 16732 (ISO 2012), ISO/IEC 27002 (ISO 2013)
Japan Society of Civil Engineering: JSCA (2010, 2012)
Joint Research Centre: JRC EN-1990 (2005a), JRC EN Eurocode 1 to Eurocode 9 (2004–2007)
Nuclear Regulatory Commission: USNRC (1983)
State Departments of Transportation: various manuals
The Masonry Society: TMS 402 (2016)
US Army Corps of Engineers: USACE (2014) and various other engineering manuals, regulations, and technical letters
US Department of Defense, Air Force: JSSG-2006 (1998)
US Department of Energy: DOE (2013)
National Institute of Standards and Technology: NIST (2011a, 2011b, 2011c, 2011d)
National Institute of Building Sciences: UFC 3-340-02 (2008), UFC 4-010-01 (2018)
US Navy: various rules
US Water Resources Council: various standards

REFERENCES FOR TABLE F1-1

AASHTO (American Association of State Highway and Transportation Officials). 2008. Guide specifications for bridges vulnerable to coastal storms. Washington, DC: American Association of State Highway and Transportation Officials.
AASHTO. 2011a. Guide specifications for seismic design. Washington, DC: American Association of State Highway and Transportation Officials.
AASHTO. 2011b. Manual for bridge evaluation, 2nd ed. Washington, DC: American Association of State Highway and Transportation Officials.
AASHTO. 2017. LRFD bridge design specifications, 8th ed. Washington, DC: American Association of State Highway and Transportation Officials.
ACI (American Concrete Institute). 2013. Building code requirements and specification for masonry structures, ACI-530. Farmington Hills, MI: American Concrete Institute.
ACI. 2014. Building code requirements for structural concrete, ACI-318. Farmington Hills, MI: American Concrete Institute.
AISC (American Institute of Steel Construction). 2016. Seismic provisions for structural steel buildings, AISC 341-16. Chicago: American Institute of Steel Construction.
AISC. 2017. Steel construction manual, 15th ed. Chicago: American Institute of Steel Construction.
ANS/ASME. 2014. Severe accident progression and radiological release (Level 2) PRA methodology to support nuclear installation applications, ASME/ANS RA-S-1.2 (formerly ANS/ASME-58.24). Grange Park, IL: American Nuclear Society.


ASCE. 2003. Seismic evaluation of existing buildings, ASCE/SEI 31-03. Reston, VA: ASCE.
ASCE. 2005. Seismic design criteria for structures, systems and components in nuclear facilities, ASCE 43-05. Reston, VA: ASCE.
ASCE. 2006. Reliability-based design of utility pole structures, Manual and Reports on Engineering Practice No. 111. Reston, VA: ASCE.
ASCE. 2008. Substation structure design guide, Manual and Reports on Engineering Practice No. 113. Reston, VA: ASCE.
ASCE. 2010. Guidelines for electrical transmission line structural loading, ASCE 74-10. Reston, VA: ASCE.
ASCE. 2011a. Blast protection of buildings, ASCE/SEI 59-11. Reston, VA: ASCE.
ASCE. 2011b. Design of steel transmission pole structures, ASCE/SEI 48-11. Reston, VA: ASCE.
ASCE. 2013. Building code requirements and specification for masonry structures, ASCE 5-13/6-13. Reston, VA: ASCE.
ASCE. 2014. Flood resistant design and construction, ASCE/SEI 24-14. Reston, VA: ASCE.
ASCE. 2015a. Design loads on structures during construction, ASCE/SEI 37-14. Reston, VA: ASCE.
ASCE. 2015b. Design of latticed steel transmission structures, ASCE/SEI 10-15. Reston, VA: ASCE.
ASCE. 2017a. Minimum design loads and associated criteria for buildings and other structures, ASCE/SEI 7-16. Reston, VA: ASCE.
ASCE. 2017b. Seismic analysis of safety-related nuclear structures, ASCE/SEI 4-16. Reston, VA: ASCE.
ASCE. 2017c. Seismic evaluation and retrofit of existing buildings, ASCE/SEI 41-17. Reston, VA: ASCE.
ASME (American Society of Mechanical Engineers). 2009a. Severe accident progression and radiological release (Level 2) PRA methodology to support nuclear installation applications, ASME/ANS RA-S-1.2. New York: American Society of Mechanical Engineers.
ASME. 2009b. Standard for Level 1/Large early release frequency probabilistic risk assessment for nuclear power plant applications, ASME/ANS RA-S. New York: American Society of Mechanical Engineers.
ASME. 2017. Boiler and pressure vessel code. New York: American Society of Mechanical Engineers.
ASME/ANS 58.212. 2007. External-events PRA methodology, ASME-ANS 58.212. New York: American Society of Mechanical Engineers.
ASTM International. 2007. Standard practice for probable maximum loss (PML) evaluations for earthquake due-diligence assessments, ASTM E2557-07. West Conshohocken, PA: ASTM International.
ASTM International. 2016. Standard guide for seismic risk assessment of buildings, ASTM E2026-16a. West Conshohocken, PA: ASTM International.
ATC (Applied Technology Council). 1985. Earthquake damage evaluation data for California, ATC-13. Redwood City, CA: Applied Technology Council.
ATC. 2002. Commentary on the use of ATC-13, Earthquake damage evaluation data for probable maximum loss studies of California buildings, ATC-13-1. Redwood City, CA: Applied Technology Council.


AWC (American Wood Council). 2018. National design specification (NDS) for wood construction. Leesburg, VA: American Wood Council.
AWS (American Welding Society). 2015. Structural welding code, steel, AWS D1.1. Miami, FL: American Welding Society.
AWS. 2017. Structural welding code, stainless steel, AWS D1.6. Miami, FL: American Welding Society.
CSA (Canadian Standards Association). 2014a. Canadian highway bridge design code. Mississauga, ON: Canadian Standards Association Group.
CSA. 2014b. Design of concrete structures. Mississauga, ON: Canadian Standards Association Group.
CSA. 2015. National Building Code of Canada. Ottawa, ON: National Research Council of Canada.
DHS (US Department of Homeland Security). 2019. Interagency security committee policies, standards, best practices, guidance, and white papers. Washington, DC: US Dept. of Homeland Security.
DOE (US Department of Energy). 2013. Development of probabilistic risk assessments for nuclear safety applications. Washington, DC: US Dept. of Energy.
FEMA. 2000a. Recommended seismic design criteria for new steel moment-frame buildings, P-350. Washington, DC: FEMA.
FEMA. 2000b. Recommended seismic design evaluation and upgrade criteria for existing welded steel moment-frame buildings, P-351. Washington, DC: FEMA.
FEMA. 2000c. Recommended post-earthquake evaluation and repair criteria for welded steel moment-frame buildings, P-352. Washington, DC: FEMA.
FEMA. 2000d. Recommended specifications and quality assurance guidelines for steel moment-frame construction for seismic applications, P-353. Washington, DC: FEMA.
FEMA. 2005. Risk assessment: A how-to guide to mitigate potential terrorist attacks against buildings, FEMA-452. Washington, DC: FEMA.
FEMA. 2006. Techniques for the seismic rehabilitation of existing buildings, P-547. Washington, DC: FEMA.
FEMA. 2009. Quantification of building seismic performance factors, P-695. Washington, DC: Federal Emergency Management Agency, Dept. of Homeland Security.
FEMA. 2011. Quantification of building seismic performance factors: Component equivalency methodology, P-795. Washington, DC: Federal Emergency Management Agency, Dept. of Homeland Security.
FEMA. 2012a. 2009 NEHRP recommended seismic provisions: Design examples, P-751. Washington, DC: Federal Emergency Management Agency, Dept. of Homeland Security.
FEMA. 2012b. Guidelines for design of structures for vertical evacuation from tsunamis, P-646. Washington, DC: Federal Emergency Management Agency, Dept. of Homeland Security.
FEMA. 2012c. Seismic performance assessment of buildings, P-58. Washington, DC: Federal Emergency Management Agency, Dept. of Homeland Security.
FEMA. 2012d. Seismic evaluation and retrofit of multi-unit wood-frame buildings with weak first stories, P-807. Washington, DC: Federal Emergency Management Agency, Dept. of Homeland Security.
FEMA. 2018a. Hazard identification and risk assessment. Accessed September 26, 2019. https://www.fema.gov/hazard-identification-and-risk-assessment


FEMA. 2018b. Hazus MH 2.1: Multi-hazard loss estimation methodology, technical manual. Washington, DC: Federal Emergency Management Agency, Dept. of Homeland Security, Mitigation Division.
FHWA (Federal Highway Administration). 2006. Seismic retrofitting manual for highway structures, part I: Bridge. Washington, DC: Federal Highway Administration.
FHWA. 2012. National bridge inventory standards. Washington, DC: Federal Highway Administration.
FHWA. 2018. Bridge load rating and posting. Washington, DC: Federal Highway Administration.
ICC (International Code Council). 2018a. International building code. Washington, DC: International Code Council.
ICC. 2018b. International existing building code. Washington, DC: International Code Council.
IEEE (Institute of Electrical and Electronics Engineers). 2018a. IEEE draft recommended practice for seismic design of substations. New York: Institute of Electrical and Electronics Engineers.
IEEE. 2018b. IEEE guide for bus design in air insulated substations. New York: Institute of Electrical and Electronics Engineers.
ISO (International Organization for Standardization). 2009a. Bases for design of structures: General principles on risk assessment of systems involving structures, ISO 13824, TC-98. Geneva: International Organization for Standardization.
ISO. 2009b. Risk management: Principles and guidelines, ISO 31000. Geneva: International Organization for Standardization.
ISO. 2010. Bases for design of structures: Assessment of existing structures, ISO 13822, TC-98. Geneva: International Organization for Standardization.
ISO. 2012. Fire safety engineering: Fire risk assessment, ISO 16732, TC-92. Geneva: International Organization for Standardization.
ISO. 2013. Information technology, security techniques: Code of practice for information security controls, ISO/IEC 27002, 2nd ed. Geneva: International Organization for Standardization.
ISO. 2015. General principles on reliability for structures, ISO 2394, TC-98. Geneva: International Organization for Standardization.
ISO. 2018. Risk management: Guidelines, ISO 31000, TC-262. Geneva: International Organization for Standardization.
JRC (Joint Research Centre of EU Commission's Science and Knowledge Service). 2004. EN-1995 Eurocode 5: Design of timber structures. Brussels, Belgium: European Committee for Standardization (CEN). Accessed November 15, 2019. https://twitter.com/EU_ScienceHub
JRC. 2005a. EN-1990 Eurocode: Basis of structural design. Brussels, Belgium: European Committee for Standardization (CEN).
JRC. 2005b. EN-1994 Eurocode 4: Design of composite steel and concrete structures. Brussels, Belgium: European Committee for Standardization (CEN).
JRC. 2006a. EN-1991 Eurocode 1: Actions on structures. Brussels, Belgium: European Committee for Standardization (CEN).
JRC. 2006b. EN-1992 Eurocode 2: Design of concrete structures. Brussels, Belgium: European Committee for Standardization (CEN).
JRC. 2006c. EN-1996 Eurocode 6: Design of masonry structures. Brussels, Belgium: European Committee for Standardization (CEN).


JRC. 2006d. EN-1998 Eurocode 8: Design of structures for earthquake resistance. Brussels, Belgium: European Committee for Standardization (CEN).
JRC. 2006e. EN-1999 Eurocode 9: Design of aluminum structures. Brussels, Belgium: European Committee for Standardization (CEN).
JRC. 2007a. EN-1993 Eurocode 3: Design of steel structures. Brussels, Belgium: European Committee for Standardization (CEN).
JRC. 2007b. EN-1997 Eurocode 7: Geotechnical design. Brussels, Belgium: European Committee for Standardization (CEN).
JSCA (Japan Society of Civil Engineering). 2010. Standard specification for concrete structures. Tokyo: Japan Society of Civil Engineering.
JSCA. 2012. Design specifications for highway bridges. Tokyo: Japan Society of Civil Engineering.
National Institute of Building Sciences, UFC (Unified Facilities Criteria). 2008. Structures to resist the effects of accidental explosions, UFC 3-340-02. Washington, DC: National Institute of Building Sciences, Unified Facilities Criteria.
National Institute of Building Sciences, UFC (Unified Facilities Criteria). 2018. DoD minimum antiterrorism standards for buildings, UFC 4-010-01. Washington, DC: National Institute of Building Sciences, Unified Facilities Criteria.
NIST (National Institute of Standards and Technology). 2011a. Standard of seismic safety for existing federally owned and leased buildings, ICSSC recommended practice 8, GCR 11917-12. Gaithersburg, MD: NIST.
NIST. 2011b. Earthquake risk reduction in buildings and infrastructure program. Accessed September 26, 2019. https://www.nist.gov/programs-projects/earthquake-risk-reduction-buildings-and-infrastructure-program
NIST. 2011c. Fire risk reduction in buildings program. Accessed September 26, 2019. https://www.nist.gov/programs-projects/fire-risk-reduction-buildings-program
NIST. 2011d. Structural performance for multi-hazards program. Accessed September 26, 2019. https://www.nist.gov/programs-projects/structural-performance-multi-hazards-program
TMS (The Masonry Society). 2016. Building code requirements and specification for masonry structures, TMS 402. Longmont, CO: The Masonry Society.
US Department of Defense. 1998. Joint service specification guide: Aircraft structures, JSSG-2006. Washington, DC: US Department of Defense.
USACE (US Army Corps of Engineers). 2014. Safety of dams: Policy and procedures. Washington, DC: US Army Corps of Engineers.
USNRC (US Nuclear Regulatory Commission). 1983. PRA procedures guide: A guide to the performance of probabilistic risk assessments for nuclear power plants, Chapters 9–13 and Appendices A–G, NUREG/CR-2300, Vol. 2. Accessed September 26, 2019. https://www.nrc.gov/reading-rm/doc-collections/nuregs/contract/cr2300/vol2/

Index

Page numbers followed by e, f, and t indicate equations, figures, and tables.

American Association of State Highway and Transportation Officials (AASHTO), 3–4, 20, 90
Applied Technology Council (ATC), 96, 111, 127
ASCE Standard 7, viii, 3, 41, 56, 132, 167
ASME/ANS RA-S, 111, 113
comments and suggestions (survey section IV): engineers and, 70, 157–159, 161–162, 163–164; researchers and, 80, 159–161, 162–163, 164–165; survey findings summary, 31–35, 35f
component system deterioration: engineers and, 113; researchers and, 114–115; survey findings summary, 23–24. See also service life; structural deterioration
cost-benefit analysis, of risk, 13, 20, 29, 31; engineers and, 147, 148, 149–150; researchers and, 89, 105, 106, 134, 148–149, 151; workshop discussions, 45, 47. See also historical damage and cost data
design life. See service life
failure, consequences of: engineers and, 67, 124–125, 145–146; researchers and, 77, 125–127, 146–147; survey findings summary, 25–27, 26f, 27f; workshop discussion summary, 44–45

flow of risk information. See risk information, flow of
Fukushima Earthquake, vii, 1
general information section, of survey. See respondents (survey section I)
guidelines and standards: list of, 167–168t; recommendations summary, 56–57; survey findings summary, 12; workshop discussion summary, 49–50
hazard assessment: engineers and, 65–66, 108, 110–111; researchers and, 75–76, 108–110, 111–113; survey findings summary, 20–23, 21f, 22f, 23f; workshop discussion summary, 41–42
Hazus (Hazard US), 24, 41, 118, 120
historical damage and cost data: engineers and, 127–128; researchers and, 128–129; survey findings summary, 28–29
Hurricane Katrina, vii, 1, 12
I-35W Bridge, Minneapolis, 12
improvement suggestions. See comments and suggestions (survey section IV)
inspections: engineers and, 116; researchers and, 116–118; survey findings summary, 20, 24


International Organization for Standardization (ISO), 52, 85, 100, 103, 131, 134
Markov-chain processes, 19–20, 23, 105–106
Mil-Std-1530, 117
Mil-Std-882, 109, 130
mitigation strategies, survey findings summary, 30–31
National Research Council of Canada, 8
New York State Department of Transportation, 8
nuclear power generation, vii, 3, 8; engineers and, 82, 108, 111, 113, 116, 119, 121, 124, 142, 145, 153; researchers and, 92, 133, 159; survey findings summary, 11, 14, 24, 28; workshop discussion summary, 40, 53, 55–56
obstacles to risk assessment, workshop discussion summary, 49–53
performance and damage: engineers and, 118–119, 121; researchers and, 119–120, 121–124; survey findings summary, 24–25
REDARS (Risks from Earthquake Damage to Roadway Systems), 45
respondents (survey section I): engineers and, 62–63, 81–83, 87–88, 91–92, 94–95; researchers and, 72–73, 83–87, 88–90, 92–94, 95–97; survey findings summary, 8–15, 9–10t, 11f
risk, defined: engineers and, vii, 64, 100–101; researchers and, 101–104; survey findings summary, 16–17
risk acceptance criteria: engineers and, 132–133; researchers and, 133–135; survey findings summary, 13, 18, 19, 20, 29, 35; workshop discussion summary, 47–48, 52, 55, 57
risk assessment (survey section II): components of process, 1–3, 2e; engineers and, 64–68, 129–130; researchers and, 74–78, 130–132; tasks of, 4, 4f, 5; workshop discussion of methods, 39–44
risk assessment (survey section II), summary of findings, 13, 15–16; hazards, 20–23, 21f, 22f, 23f; inspections, 20, 24; new and existing structural systems, 17–20, 17f, 18f, 19f; performance and damage, 24–25; risk acceptance criteria, 29; risk communication, 29; risk defined, 16–17; risk estimation from historical damage and cost data, 28–29; risk quantification, 27–28; structural analysis, 25, 26; structural failure, 25–27, 26f, 27f; system deterioration, 23–24
risk categories, 3
risk communication: engineers and, 152–154, 161; researchers and, 79, 154–155; survey findings summary, 29, 31; workshop discussion summary, 45–46. See also risk information
risk data, workshop discussion summary, 48–49
risk information, flow of: engineers and, 153, 161; researchers and, 89, 96, 151, 160, 162; survey findings summary, 31. See also risk communication
risk management (survey section III): engineers and, 69, 137–138, 139–140, 141–142, 144, 145–146, 147–148, 149–150, 152–154; process of, 4–5; researchers and, 79–80, 138–139, 140–141, 142–143, 144–145, 146–147, 148–149, 150–152, 154–155; survey findings summary, 13, 29–31
risk quantification, survey findings summary, 27–28
"risk triplet," 101
seismic engineering: assessment and, 3; researchers and, vii–viii, 55–56; survey findings summary, 12, 14, 20–21, 30; workshop discussion summary, 40–45, 47
service life: engineers and, 64, 104–105; researchers and, 74, 105–107
structural analysis: survey findings summary, 25, 26f; workshop discussion summary, 39–41
structural deterioration: survey findings summary, 23–25; workshop discussion summary, 42–43
structural failure. See failure, consequences of
structural performance: survey findings summary, 15, 25; workshop discussion summary, 43–44
structural systems, survey findings summary, 16–20, 17f, 18f, 19f
Superstorm Sandy, vii, 1, 12, 155
survey, for engineers, 61; general information answers, 81–83, 87–88, 91–92, 94–95; general information questions, 62–63; risk assessment questions, 64–68; risk management questions, 69; suggestions for improvement questions, 70
survey, for researchers, 71; general information answers, 83–87, 88–90, 92–94, 95–97; general information questions, 72–73; risk assessment questions, 74–78; risk management questions, 79–80; suggestions for improvement questions, 80
survey, summary of findings, 7; comments and suggestions, 31–35, 35f; conclusions of, 55–56; general information, 8–15, 9–10t, 11f; objectives of, 5; recommendations, viii–ix, 56–57; risk assessment, 13, 15–29, 17f, 18f, 19f, 21f, 23f, 26f, 27f; risk management, 13, 29–31
US Air Force Research Laboratory, 8
US Army Corps of Engineers (USACE), 4, 8, 12, 41, 87–88, 95, 101, 105, 124, 153, 158
US Nuclear Regulatory Commission, 3, 8, 40, 153
workshop, after survey, vii, 5–6; evaluation of consequences of structural failure, 44–45; objectives of, 5, 37; obstacles to risk assessment, 49–53; participants in, 37, 38–39t, 39; risk acceptance criteria, 47–48; risk assessment methods, 39–44; risk communication, 45–46; risk data, 48–49
World Trade Center attack, 1, 12, 51