Weifei Hu

Design Optimization Under Uncertainty
Weifei Hu, State Key Laboratory of Fluid Power and Mechatronic Systems, School of Mechanical Engineering, Zhejiang University, Hangzhou, China

ISBN 978-3-031-49207-5    ISBN 978-3-031-49208-2 (eBook)
https://doi.org/10.1007/978-3-031-49208-2
Preface
Design optimization under uncertainty (DOUU) studies the theories, methods, and technologies for designing reliable products under the various uncertainties that are ubiquitous in loads, geometry, material properties, manufacturing processes, and operational environments. Traditional deterministic design optimization treats these uncertainties with simplified rules, such as applying safety factors and considering a limited set of operating conditions, which do not directly account for the random nature of the design variables, the constrained performances, and the objective function. This may lead to an unreliable design, or to an overly conservative and expensive one. Hence, it is an increasingly important task to incorporate uncertainties into the optimization processes for designing complex engineering equipment, such as large-scale wind turbines, automobiles, ships, and airplanes, to name a few.

This book introduces the fundamental concepts of probability and reliability, the classical methods of uncertainty modeling, time-dependent and time-independent reliability analysis methods, model verification and validation, the two main categories of DOUU methods (i.e., reliability-based design optimization and robust design optimization), the state-of-the-art physics-informed methods for DOUU, and a comprehensive survey of engineering applications of DOUU. Each chapter begins with the fundamental theories and methods in a lucid, easy-to-follow treatment and then elaborates on the corresponding advanced approaches using detailed methodologies, mathematical models, numerical examples, tables, and graphs. References and exercises are presented at the end of each chapter. The book serves both educational and research needs, for readers ranging from undergraduate students, graduate students, and faculty to engineering designers.

Writing a book on DOUU has been a dream of mine since my graduate studies in South Korea and the USA. My advisers planted and watered the seed of writing this book. I would like to thank Professor KK Choi, my PhD adviser at the University of Iowa, Iowa City, USA. Professor Choi not only taught me the knowledge of reliability-based design optimization, but also shaped my philosophy about academic research. I sincerely thank Professor Olesya Zhupanska and Professor James Buchholz, who co-advised me at the University of Iowa, and Professor Dong-Hoon Choi, who initially led me into the field of design and optimization during my master's study at Hanyang University, Seoul, South Korea. Grateful acknowledgment goes to Professor Sara C. Pryor and Professor Rebecca J. Barthelmie for their encouragement, guidance, and support during my postdoctoral research at Cornell University. I would also like to thank my graduate students Weiyi Chen, Tongzhou Zhang, Feng Zhao, Jiquan Yan, Xiaoyu Deng, Jianhao Fang, Qing Jiao, Jiale Liao, and Sichuang Cheng at Zhejiang University, Hangzhou, China. Without their input, this book would not have been possible. Special thanks go to Michael Luby and Brian Halm for their support in publishing my second Springer book, and to Olivia Ramya Chitranjan for answering my many questions during the manuscript preparation. Finally, I would like to thank all my family and friends for their love and support.

Hangzhou, China
October 23, 2023
Weifei Hu
Contents
1 Basic Concepts of Probability and Reliability
  1.1 Probability Theory
    1.1.1 Definition of Probability
    1.1.2 Basic Probability Theorem
    1.1.3 Conditional Probability
    1.1.4 Independence
  1.2 Random Variable
    1.2.1 Discrete Random Variable
    1.2.2 Continuous Random Variable
    1.2.3 Transformation of Random Variables
    1.2.4 Expectation
    1.2.5 Variance
    1.2.6 Covariance
  1.3 Probability Distribution
    1.3.1 Typical Distributions of Discrete Random Variables
    1.3.2 Typical Distributions of Continuous Random Variables
  1.4 Reliability
    1.4.1 Basic Concepts of Reliability
    1.4.2 Importance of Reliability Assessment
  References

2 Uncertainty Modeling
  2.1 Introduction
  2.2 Uncertainty Quantification
    2.2.1 Probabilistic Methods
    2.2.2 Non-probabilistic Methods
  2.3 Uncertainty Propagation
    2.3.1 Distribution Function Transformation Method
    2.3.2 Monte Carlo Simulation Method
    2.3.3 Evidence Theory
  References

3 Surrogate Modeling
  3.1 Introduction
  3.2 Surrogate Modeling Methods
    3.2.1 Response Surface Method
    3.2.2 Radial Basis Function
    3.2.3 Support Vector Regression
    3.2.4 Kriging
    3.2.5 Performance Evaluation of Surrogate Models
  3.3 Adaptive Sampling Methods
    3.3.1 Steps of Adaptive Sampling Methods
    3.3.2 General Features of Adaptive Sampling Schemes
    3.3.3 Techniques for Exploitation and Exploration
  3.4 An Effective Strategy for Adaptive Sampling
    3.4.1 Voronoi Tessellation
    3.4.2 Metrics for Evaluating the Existing Samples
    3.4.3 Identification of Sensitive Voronoi Cell
    3.4.4 Determination of the Location of New Sample Point
  References

4 Model Verification & Validation
  4.1 Introduction
  4.2 Model Verification
    4.2.1 Code Verification
    4.2.2 Calculation Verification
  4.3 Model Validation
    4.3.1 Area Metric
    4.3.2 Evaluating at Multiple Validation Sites
    4.3.3 Validating with Multiple Correlated Outputs
    4.3.4 Interval Metric
    4.3.5 Model Validation with Limited Data
  References

5 Time-Independent Reliability Analysis
  5.1 Basic Concept of Reliability
    5.1.1 Introduction
    5.1.2 Concept of Reliability
  5.2 MPP-Based Methods for Reliability Analysis
    5.2.1 First-Order Reliability Method
    5.2.2 Second-Order Reliability Method
  5.3 Sampling Methods for Reliability Analysis
    5.3.1 Monte Carlo Simulation
    5.3.2 Importance Sampling
    5.3.3 Other Methods
  References

6 Time-Dependent Reliability Analysis
  6.1 Basic Concept of Time-Dependent Reliability Analysis
    6.1.1 Introduction
    6.1.2 Mathematical Expression of Time-Dependent Reliability Analysis Problem
  6.2 Expansion of the Stochastic Process
  6.3 Outcrossing Rate Methods
  6.4 Extreme Value Methods
    6.4.1 Nested Extreme Response Surface Approach
    6.4.2 Other Methods
  6.5 Response Surrogate-Based Methods
    6.5.1 Confidence-Based Adaptive Extreme Response Surface Method
    6.5.2 Equivalent Stochastic Process Transformation Method
    6.5.3 Instantaneous Response Surface Method
    6.5.4 Real-Time Estimation Error-Guided Active Learning Kriging Method
    6.5.5 Surrogate-Based Time-Dependent Reliability Analysis Method
  References

7 Reliability-Based Design Optimization
  7.1 Basic Concept
  7.2 Problem Statement and Formulation
  7.3 Most Probable Point-Based RBDO
    7.3.1 Reliability Index Approach and Performance Measure Approach
    7.3.2 Numerical Reliability Analysis Method Based on RIA
    7.3.3 Numerical Reliability Analysis Method Based on PMA
    7.3.4 Full Loop of MPP-Based RBDO
  7.4 Sampling-Based RBDO
    7.4.1 Monte Carlo Simulation
    7.4.2 Surrogate Model
    7.4.3 Stochastic Sensitivity Analysis Based on Surrogate Model
  7.5 Double-Loop, Single-Loop, and Decoupled RBDO
    7.5.1 Double-Loop RBDO
    7.5.2 Single-Loop RBDO
    7.5.3 Decoupled RBDO
  References

8 Robust Design Optimization
  8.1 Introduction
  8.2 Problem Statement and Formulation
  8.3 Main Procedure of RDO
  8.4 RDO Methods
    8.4.1 Weighted Sum Method
    8.4.2 Compromise Programming Method
    8.4.3 Physical Programming Method
    8.4.4 Normal Boundary Intersection Method
    8.4.5 Evolutionary Multi-objective Optimization Method
  8.5 Reliability-Based Robust Design Optimization
  References

9 Physics-Informed Neural Networks for Design Optimization Under Uncertainty
  9.1 Introduction
  9.2 Basis of Physics-Informed Neural Network
    9.2.1 Basic Structure of Multi-layer Perceptron
    9.2.2 Loss Construction Based on Priori Knowledge
    9.2.3 A Basic Example of PINN
  9.3 Reliability Analysis Based on Physics-Informed Neural Network
  9.4 PINN-Based Design Optimization Under Uncertainty
  References

10 Engineering Applications of Design Optimization Under Uncertainty
  10.1 RBDO Engineering Applications
    10.1.1 Aeronautical Engineering
    10.1.2 Ocean Engineering
    10.1.3 Bridge Engineering
    10.1.4 Vehicle Engineering
    10.1.5 Summary
  10.2 RDO Engineering Applications
    10.2.1 Energy Management
    10.2.2 Logistics Scheduling
    10.2.3 Closed-Loop Supply Chain
    10.2.4 Summary
  10.3 Research Outlook of Design Optimization Under Uncertainty
    10.3.1 Challenges and Prospects of RBDO
    10.3.2 Challenges and Prospects of RDO
  References

Index
About the Author
Weifei Hu received the BS degree in 2008 from Zhejiang University, Hangzhou, China, the MS degree in 2010 from Hanyang University, Seoul, South Korea, and the PhD degree in 2015 from the University of Iowa, Iowa City, Iowa, USA, all in mechanical engineering. From February 2016 to September 2018, Dr. Hu was a postdoctoral fellow at Cornell University, Ithaca, New York, USA. He currently holds a ZJU100 Young Professor position at the School of Mechanical Engineering, Zhejiang University. His research interests include design optimization under uncertainty, digital twin, artificial intelligence, and wind energy. His work has been published in over 100 peer-reviewed journal and conference papers, e.g., in Journal of Mechanical Design, Mechanical Systems and Signal Processing, Structural and Multidisciplinary Optimization, Journal of Intelligent Manufacturing, Journal of Manufacturing Systems, Renewable and Sustainable Energy Reviews, Applied Energy, Renewable Energy, Wind Energy, and many ASME and AIAA proceedings. He is now serving as an associate editor for the journal Wind Energy Science and an editorial board member for the journals Wind Energy and Journal of Intelligent Manufacturing and Special Equipment.
Chapter 1
Basic Concepts of Probability and Reliability
Nomenclature

A, B, . . .    Random events
Cov(X, Y)      Covariance of random variables X and Y
D(X)           Variance of random variable X
E(X)           Expectation of random variable X
F_X(x)         Cumulative distribution function
f_X(x)         Probability density function
g(∙)           Performance function
J              Jacobian of transformation
N(μ, σ²)       Normal distribution
P(∙)           Probability of an event
P_f            Probability of failure
p_X(x)         Probability mass function
X              Random variable
U(a, b)        Uniform distribution
Γ(∙)           Complete gamma function
γ(∙, ∙)        Lower incomplete gamma function
μ              Mean of random variable X
ρ_XY           Correlation coefficient of random variables X and Y
σ(X)           Standard deviation of random variable X
Φ(x)           Cumulative distribution function of the standard normal distribution
φ(x)           Probability density function of the standard normal distribution
Ω              Sample space
ω              Sample in the sample space
Ø              Impossible event
1.1 Probability Theory

1.1.1 Definition of Probability
Probability models are constructed for experiments that produce random outcomes when repeated under the same conditions. The collection of all possible outcomes of a random experiment is called the sample space, denoted as Ω. Each outcome of a random experiment is called a sample, denoted as ω [1]. The definitions of the sample space and the sample are tightly related to the utilization of the set, which is a collection of objects meeting certain well-defined conditions [2]. Set theory is often used for analyzing the sample space and the sample [1]. Here, any subset of the sample space Ω is defined as an event. A certain event is an event that contains all the samples in the sample space; in other words, since the sample space Ω contains all the samples, as defined previously, it can be used to denote a certain event. An impossible event is an event that contains no samples. The null set, denoted as Ø, is often used to represent an impossible event [3]. Some commonly used relationships between random events are listed as follows [1, 4]:

1. Subset of random events: If all the samples in event A also belong to event B, then random event A is said to be a subset of random event B, denoted as A ⊂ B.
2. Union of random events: An event whose samples all belong to at least one of the random events A and B is called the union of random events A and B, denoted as A ∪ B. Furthermore, an event whose samples all belong to at least one of a series of random events A1, A2, . . ., An is denoted as A1 ∪ A2 ∪ . . . ∪ An or ∪_{i=1}^{n} A_i.
3. Intersection of random events: An event whose samples all belong to both random events A and B is called the intersection of random events A and B, denoted as A ∩ B or AB. Furthermore, an event whose samples all belong to every one of a series of random events A1, A2, . . ., An is denoted as A1 ∩ A2 ∩ . . . ∩ An or ∩_{i=1}^{n} A_i.
4. Mutually exclusive events: When the members of random events A and B are pairwise disjoint, these random events are said to be mutually exclusive, denoted as AB = Ø. Moreover, they are said to be mutually exclusive and exhaustive if AB = Ø and A ∪ B = Ω. This definition can also be expanded to situations with multiple random events, i.e., when the members of a series of random events A1, A2, . . ., An are pairwise disjoint, these random events are said to be mutually exclusive, denoted as A1 ∩ A2 ∩ . . . ∩ An = Ø. These events are further said to be mutually exclusive and exhaustive if A1 ∩ A2 ∩ . . . ∩ An = Ø and A1 ∪ A2 ∪ . . . ∪ An = Ω.
5. Complement event: A random event that consists of all samples that do not belong to random event A is called the complement event of A, denoted as Ā, i.e., Ā = Ω − A.

Suppose the random experiment is repeated n times under the same conditions. If n_A denotes the number of times random event A occurs during these n experiments, the frequency f_A = n_A/n reflects how often event A happens. If, with the increase of n, f_A stabilizes or converges at a constant p, then p is said to be the probability of the random event A, denoted as P(A) = p. It is clear that the following rules of probability hold: 0 ≤ P(A) ≤ 1, P(Ω) = 1, and P(Ø) = 0.

Example 1.1 As a random experiment, consider a single roll of a die and observe the number of points that appear.

1. Determine its sample space and sample points.
2. Give examples of a certain event and an impossible event in this experiment.

Solution

1. The sample space is Ω = {1, 2, 3, 4, 5, 6}, and the sample points ω are 1, 2, 3, 4, 5, 6.
2. The event that the roll shows a number in the range 1 to 6 is a certain event, and the event that the roll shows 7 is an impossible event.
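The frequency-based definition above can also be illustrated numerically: as n grows, the observed frequency of an event stabilizes near its probability. The short sketch below (not part of the original text; the event and sample sizes are chosen purely for illustration) simulates repeated rolls of a fair die:

```python
import random

# Estimate P(A) for the event A = "roll an even number" on a fair die.
# As the number of rolls n grows, the frequency f_A = n_A / n should
# stabilize near the true probability 3/6 = 0.5.
for n in (100, 10_000, 1_000_000):
    n_A = sum(1 for _ in range(n) if random.randint(1, 6) % 2 == 0)
    print(f"n = {n:>9}: f_A = {n_A / n:.4f}")
```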
1.1.2 Basic Probability Theorem
Each random event A in the sample space Ω is assigned a real number P(A) that satisfies the following basic conditions:

1. Nonnegativity: For any random event A ⊂ Ω, P(A) ≥ 0.
2. Normality: P(Ω) = 1.
3. Additivity: If the random events A1, A2, A3, . . . are mutually exclusive, i.e., A_i ∩ A_j = Ø (i, j = 1, 2, . . .; i ≠ j), then P(∪_{i=1}^{∞} A_i) = Σ_{i=1}^{∞} P(A_i).
Based on the definition of probability, the following important properties can be obtained:

1. The probability of the impossible event is 0, i.e., P(Ø) = 0.
2. Finite additivity: If a finite sequence of random events A1, A2, A3, . . ., An are mutually exclusive, i.e., A_i ∩ A_j = Ø (i, j = 1, 2, . . ., n; i ≠ j), then P(∪_{i=1}^{n} A_i) = Σ_{i=1}^{n} P(A_i).
3. Suppose there are random events A and B, with A being a subset of B; then,

P(B − A) = P(B) − P(A),  P(A) ≤ P(B)   (1.1)
4. For any event A, P(A) ≤ 1.
5. Probability of the complement event: For any random event A, P(Ā) = 1 − P(A).
6. Addition: For any two random events A and B, the following relationship exists:

P(A ∪ B) = P(A) + P(B) − P(A ∩ B)   (1.2)

1.1.3 Conditional Probability
Suppose there are two random events, A and B, in a random experiment, and P(B) ≠ 0. P(A|B) denotes the conditional probability of the occurrence of random event A under the condition that random event B occurs, which is calculated as

P(A|B) = P(AB)/P(B)   (1.3)
According to the formula of conditional probability, three important formulas can be derived.

1. Multiplicative formula:

P(AB) = P(A|B)P(B), if P(B) ≠ 0;
P(AB) = P(B|A)P(A), if P(A) ≠ 0   (1.4)
2. Full probability formula: Assume that there is a random event B in a random experiment and the sample space Ω can be divided into A1, A2, A3, . . ., An, with P(A_i) ≠ 0, i = 1, 2, . . ., n; then

P(B) = Σ_{i=1}^{n} P(B|A_i)P(A_i)   (1.5)
3. Bayesian formula: Suppose there is a random event B in a random experiment and the sample space Ω can be divided into mutually exclusive events A = {A1, A2, A3, . . ., An}, with P(B) > 0 and P(A_i) > 0 for i = 1, 2, . . ., n; then

P(A_i|B) = P(B|A_i)P(A_i) / Σ_{i=1}^{n} P(B|A_i)P(A_i),  i = 1, 2, . . ., n   (1.6)
Example 1.2 A factory produces some product parts. Historical statistics show that when the machine works well, 99% of the produced parts are qualified, while when the machine does not work well, only 60% of the produced parts are qualified. The probability that the machine is working well when producing a part is 95%. What is the probability that the machine is working well given that a qualified part is produced?

Solution Suppose the random event A is "the part is qualified" and the random event B is "the machine is working well." According to the given information, we obtain P(A|B) = 0.99, P(A|B̄) = 0.6, P(B) = 0.95, and P(B̄) = 0.05, and we need to solve for P(B|A) by the Bayesian formula:

P(B|A) = P(A|B)P(B) / [P(A|B)P(B) + P(A|B̄)P(B̄)] = (0.99 × 0.95) / (0.99 × 0.95 + 0.6 × 0.05) = 0.9691

Thus, given that the part produced is qualified, the probability that the machine is working well is 0.9691.
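The arithmetic in Example 1.2 is easy to check directly. A minimal sketch (variable names are illustrative, not from the original text) is:

```python
# Numerical check of Example 1.2 using the Bayesian formula, Eq. (1.6).
p_A_given_B = 0.99       # P(A|B): part qualified given machine works well
p_A_given_notB = 0.60    # P(A|B complement): qualified given machine works poorly
p_B = 0.95               # P(B): machine works well
p_notB = 1.0 - p_B

# P(B|A) = P(A|B)P(B) / [P(A|B)P(B) + P(A|B~)P(B~)]
p_B_given_A = (p_A_given_B * p_B) / (p_A_given_B * p_B + p_A_given_notB * p_notB)
print(f"P(B|A) = {p_B_given_A:.4f}")  # prints 0.9691
```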
1.1.4 Independence
For random events A and B, if P(A) = P(A|B), then whether random event B occurs has no effect on the probability of occurrence of random event A. In this case, the two events are said to be independent of each other. This leads to a definition of the independence of random events: if two random events, A and B, satisfy P(AB) = P(A)P(B), the random events A and B are said to be mutually independent. Independence means that the occurrence of one random event does not affect the probability of the occurrence of another random event, while mutual exclusivity means that the two random events cannot occur simultaneously. The definition of the independence of multiple random events is obtained by extending the concept of the mutual independence of two random events to multiple random events. Suppose A1, A2, A3, . . ., An are n random events. If P(A1 A2 . . . An) = P(A1)P(A2) . . . P(An) is satisfied, then A1, A2, A3, . . ., An are mutually independent of each other.

Example 1.3 Consider a batch of product parts with a qualifying rate of 0.9. What is the probability of exactly k parts being qualified among 10 randomly selected parts?

Solution Since the 10 product parts are randomly selected, whether a single part is qualified is independent of any of the other 9 parts. Denote A_i (i = 1, 2, . . ., 10) as the event of "the i-th part being qualified"; then P(A_i) = 0.9 and P(Ā_i) = 0.1.
If k product parts are qualified, then 10 − k parts are unqualified. These events are independent of each other. Additionally, the choice of the k qualified parts involves a number of combinations; thus, the probability of exactly k parts being qualified among the 10 randomly selected parts is

P = C_{10}^{k} ∏_{i=1}^{k} P(A_i) ∏_{j=1}^{10−k} P(Ā_j) = C_{10}^{k} × 0.9^k × 0.1^{10−k}
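This binomial probability is straightforward to evaluate numerically. A short sketch (assuming Python 3.8+ for math.comb; the chosen values of k are illustrative) is:

```python
from math import comb

# Probability of exactly k qualified parts among 10 independent draws
# with qualification rate 0.9 (Example 1.3).
for k in range(8, 11):
    p = comb(10, k) * 0.9**k * 0.1**(10 - k)
    print(f"P(k = {k}) = {p:.4f}")
# P(k = 8) = 0.1937, P(k = 9) = 0.3874, P(k = 10) = 0.3487
```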
1.2 Random Variable
Consider a random experiment with sample space Ω. A random variable is defined as a function X that assigns to each sample point ω in the sample space one and only one real number x = X(ω). In this book, a capital letter, e.g., X, is used to represent a random variable, while its corresponding lowercase letter, e.g., x, stands for a realization of the random variable. In this section, two types of random variables are introduced, namely, discrete random variables and continuous random variables.
1.2.1 Discrete Random Variable
A random variable is a discrete random variable if its space is either finite or countable. In general, a discrete random variable can only take values at discrete points, so the probability of a discrete random variable can only be calculated at these discrete points. The randomness of a discrete random variable is described by the probability mass function (PMF), denoted as p_X(x). Suppose X is a random variable with a range consisting of a series of discrete values x_k (k = 1, 2, . . .); then the probability mass function of X can be defined as

p_X(x_k) = P(X = x_k), k = 1, 2, . . .   (1.7)

According to the definition of probability, the probability mass function needs to satisfy the following two conditions:

1. p_X(x_k) ≥ 0, k = 1, 2, . . .
2. Σ_{k=1}^{∞} p_X(x_k) = 1.
Fig. 1.1 (a) Probability mass function and (b) Cumulative distribution function of a discrete random variable
The cumulative distribution function (CDF) of a discrete random variable can be obtained by summing the probability mass function:

F_X(x) = P(X ≤ x) = Σ_{x_k ≤ x} p_X(x_k)   (1.8)

The probability mass function of a discrete random variable is a discrete function consisting of a series of discrete values, while the cumulative distribution function of a discrete random variable is a step function, as shown in Fig. 1.1.

Example 1.4 Suppose X is the number of times that a coin lands heads up in two consecutive tosses. What is the probability mass function of X?

Solution Assume that A represents heads up and B represents tails up. The sample space of two consecutive coin tosses is Ω = {AA, AB, BA, BB}, and the probability of each sample ω in the sample space is (0.5)² = 0.25. The probabilities of the random variable X being 0, 1, and 2 are calculated as

p_X(0) = P(X = 0) = P(ω = BB) = 0.25
p_X(1) = P(X = 1) = P(ω ∈ {AB, BA}) = 2 × 0.25 = 0.5
p_X(2) = P(X = 2) = P(ω = AA) = 0.25
1.2.2 Continuous Random Variable
A random variable is a continuous random variable if its CDF, F_X(x), is a continuous function for all real x ∈ R. For most continuous variables, the following equation holds:

F_X(x) = ∫_{−∞}^{x} f_X(t) dt   (1.9)
The function f_X(t) is called the probability density function (PDF) of X. If f_X(x) is also continuous, then

f_X(x) = dF_X(x)/dx   (1.10)
The PDF f_X(x) has the following properties:

1. Nonnegativity: f_X(x) ≥ 0.
2. Normalization: ∫_{−∞}^{+∞} f_X(x) dx = 1.
3. For any real numbers x1, x2 (x1 < x2), F_X(x2) ≥ F_X(x1), and

P(x1 < X ≤ x2) = F_X(x2) − F_X(x1) = ∫_{x1}^{x2} f_X(x) dx   (1.11)
4. Continuity: If f_X(x) is continuous at x = x0, then F′_X(x0) = f_X(x0).

The cumulative distribution function F_X(x) of a continuous random variable has the following properties:

1. For any real number a, P(X = a) = 0.
2. Normality: F_X(−∞) = 0, F_X(+∞) = 1.
3. Right continuity: F_X(x) is a continuous function such that lim_{Δx→0⁺} F_X(x + Δx) = F_X(x).
An illustration of the PDF and CDF of a random variable X is shown in Fig. 1.2.

Example 1.5 Suppose the random variable X has the PDF

f_X(x) = kx for 0 ≤ x < 2, and f_X(x) = 0 otherwise

Fig. 5.1 Limit state in a two-dimensional case

It should be noted that the concept of time-independent reliability is mainly used for designing an engineered system in a way that ensures a high built-in reliability, whereas the concept of time-dependent reliability is often employed to design an engineered system and/or its affiliated prognostics and health management (PHM) system to attain a high operational reliability. The purpose of this chapter is to present advanced methods and tools that can be used to quantify time-independent reliability. Time-dependent reliability analysis will be introduced in detail in Chap. 6. In general, reliability analysis methods can be categorized into two types: (i) most probable point (MPP)-based methods and (ii) sampling methods. Different methods of approximating limit state functions form the basis of different reliability analysis algorithms. First- and second-order reliability methods (FORM/SORM) [2] are standard methods of reliability analysis. They are based on linear (first-order) and quadratic (second-order) approximations of the limit state G(x) = 0 tangent at the MPP. Besides these MPP-based methods, there are several sampling methods (e.g., Monte Carlo simulation (MCS) and importance sampling) for reliability analysis. From Eq. (5.1) it is obvious that, in order to calculate the reliability of a system, the crucial step is to obtain the probability of failure P_F. When the performance function of a system can be explicitly expressed, a theoretical method can be applied to directly solve Eq. (5.2), as elaborated in Sect. 5.2. However, most engineering applications are so complicated that the performance functions are usually unknown. In this case, a sampling method is needed, as discussed in Sect. 5.3.
5.2 MPP-Based Methods for Reliability Analysis

5.2.1 First-Order Reliability Method

The first-order reliability method is based on a linear approximation of the limit state surface G(x) = 0 at the closest point of the surface to the origin in the U-space. This point is regarded as the most probable point (MPP). The U-space is composed of independent standard normal variables U that are transformed from the random variables X in the X-space, as introduced in Chap. 1. The determination of the MPP involves nonlinear constrained optimization and is usually performed in the U-space. For a normal random variable X_i with mean μ_{X_i} and standard deviation σ_{X_i}, the transformation T can be simply defined as
U_i = T(X_i) = (X_i − μ_{X_i})/σ_{X_i},  i = 1, 2, . . ., N   (5.3)
However, not all the variables are normally distributed, as is common in engineering problems. It is necessary to transform the general variables into equivalent normal variables. The Rosenblatt transformation [3] can be used to obtain a set of independent standard normal variables, if the joint CDF is available. Detailed discussion on the Rosenblatt transformation has been provided at the end of Sect. 1.3 in Chap. 1. In the U-space, the MPP u* denotes the point on the limit state surface
which has the minimum distance to the origin. This distance, denoted as β, is called the reliability index, expressed as

β = ‖u*‖ = (Σ_{i=1}^{N} (u_i*)²)^{1/2}   (5.4)
The first-order estimate of the probability of failure can then be computed as

P_F = Φ(−β)   (5.5)

where Φ is the cumulative distribution function of the standard normal distribution.

Example 5.1 Consider the following performance function in the U-space:

G(u1, u2) = 2u1 − u2 + 6

Calculate the reliability index β and the probability of failure by FORM.

Solution
As illustrated in the figure above, the solution proceeds in the following steps:

Step 1: Find the MPP. The shortest distance from the origin to the limit state surface is

β = 6 sin(tan⁻¹(1/2)) = 6√5/5 ≈ 2.68

Step 2: According to Eq. (5.5), calculate the probability of failure:

P_F = Φ(−β) = Φ(−2.68) = 0.0037
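For any linear performance function G(u) = a∙u + b in the U-space, the reliability index is simply the distance from the origin to the hyperplane G(u) = 0, i.e., β = |b|/‖a‖, and FORM is exact. The following sketch (an illustration added here, not part of the original text) reproduces Example 5.1:

```python
from math import sqrt, erf

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Linear limit state G(u) = a.u + b with a = (2, -1), b = 6 (Example 5.1).
a = (2.0, -1.0)
b = 6.0
beta = abs(b) / sqrt(sum(ai**2 for ai in a))   # distance origin -> hyperplane
print(f"beta = {beta:.4f}, PF = {phi(-beta):.4f}")
# beta ≈ 2.6833, PF ≈ 0.0036 (0.0037 when beta is rounded to 2.68 as above)
```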
The reliability index β provides an efficient means of reliability analysis. However, this method gives an exact reliability estimate only if the limit state surface is linear, as is the case in Example 5.1 above. The algorithms implementing FORM involve several steps:

1. First, the X-space of uncertain parameters X is transformed into a new d-dimensional U-space consisting of independent standard normal variables U. The original limit state, G(x) = 0, is mapped into the new limit state g(u) = 0.
2. The MPP u* is determined by using an appropriate nonlinear optimization algorithm.
3. The limit state, g(u) = 0, is approximated by a surface tangent to it at the MPP.

The remaining task is to search for the MPP. Generally, this task can be formulated as an optimization problem with a deterministic constraint in the U-space:

Minimize ‖u‖
Subject to G(u) = 0   (5.6)
where the optimal result is the MPP u*. The MPP search requires an iterative optimization scheme based on the gradient information of the performance function G(u). The Hasofer-Lind and Rackwitz-Fiessler (HL-RF) method [4] is the most widely used due to its simplicity and efficiency. The main steps of HL-RF are as follows:

1. Set the number of iterations k = 0 and the initial point u = u^(0) that corresponds to the mean value of X.
2. Transform u^(k) to x^(k) using the inverse of the Rosenblatt transformation. Compute the performance function G(u^(k)) = G(x^(k)) and its partial derivatives with respect to the input random variables in the U-space as

∇_U G(u^(k)) = [∂G/∂U_1, ∂G/∂U_2, . . ., ∂G/∂U_N] |_{U = u^(k)}   (5.7)

3. Update the search point at the current iteration as

u^(k+1) = [u^(k) ∙ n^(k) − G(u^(k))/‖∇_U G(u^(k))‖] n^(k) = {[u^(k) ∙ ∇_U G(u^(k)) − G(u^(k))]/‖∇_U G(u^(k))‖²} ∇_U G(u^(k))   (5.8)

where n^(k) is the normalized steepest ascent direction of G(U) at u^(k), expressed as

n^(k) = ∇_U G(u^(k))/‖∇_U G(u^(k))‖   (5.9)
4. Repeat Steps 2 and 3 until u converges.

Example 5.2 Consider the following performance function:

G(X_1, X_2) = 1 − 20/(X_1 + X_2 + 5)

where X_1 and X_2 each follow a normal distribution with mean 4 and standard deviation 1. Find the MPP and compute the probability of failure by FORM.

Solution The HL-RF method is used here, and the first iteration proceeds as follows:

Step 1: Set the number of iterations k = 0 and the initial point u^(0) = (0, 0).

Step 2: Transform u^(0) to x^(0) with the inverse of Eq. (5.3) to obtain

x_1^(0) = μ_{X_1} + U_1^(0) σ_{X_1} = 4 + 0 × 1 = 4
x_2^(0) = μ_{X_2} + U_2^(0) σ_{X_2} = 4 + 0 × 1 = 4
x^(0) = (x_1^(0), x_2^(0)) = (4, 4)

Compute the performance function G(x^(0)) as

G(X_1, X_2) = 1 − 20/(4 + 4 + 5) ≈ −0.5385

and the partial derivatives as

∂G/∂U_1 = (∂G/∂X_1)(∂X_1/∂U_1) = [20/(X_1 + X_2 + 5)²] σ_{X_1} = [20/(4 + 4 + 5)²] × 1 ≈ 0.1183
∂G/∂U_2 = (∂G/∂X_2)(∂X_2/∂U_2) = [20/(X_1 + X_2 + 5)²] σ_{X_2} = [20/(4 + 4 + 5)²] × 1 ≈ 0.1183

Step 3: Update the search point at the current iteration as
u^(1) = {[u^(0) ∙ ∇_U G(u^(0)) − G(u^(0))]/‖∇_U G(u^(0))‖²} ∇_U G(u^(0))
      = {[(0, 0) ∙ (0.1183, 0.1183) − (−0.5385)]/(0.1183² + 0.1183²)} × (0.1183, 0.1183)
      = (2.2760, 2.2760)

x^(1) = σ u^(1) + μ = (6.2760, 6.2760)

Table 5.1 Iteration results in Example 5.2

Iteration | U_1    | U_2    | G(U)    | ∂G/∂U_1 | ∂G/∂U_2
0         | 0      | 0      | −0.5385 | 0.1183  | 0.1183
1         | 2.2760 | 2.2760 | −0.1395 | 0.0649  | 0.0649
2         | 3.3505 | 3.3505 | −0.0152 | 0.0515  | 0.0515
3         | 3.4981 | 3.4981 | −0.0002 | 0.0500  | 0.0500
4         | 3.5    | 3.5    | 0       |         |
The subsequent iteration process is analogous to the first iteration, and the results are listed in Table 5.1. Therefore, the MPP u* is (3.5, 3.5). The reliability index and the probability of failure are calculated as

β = ‖u*‖ = (Σ_{i=1}^{N} (u_i*)²)^{1/2} = ‖(3.5, 3.5)‖ ≈ 4.9497

P_F = Φ(−β) = Φ(−4.9497) = 3.7164 × 10⁻⁷
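The HL-RF iteration of Example 5.2 can be implemented in a few lines. The sketch below is a minimal illustration (function and variable names are ours, not the book's) and reproduces the iteration history of Table 5.1:

```python
import numpy as np

def G(x):
    """Performance function of Example 5.2 in the X-space."""
    return 1.0 - 20.0 / (x[0] + x[1] + 5.0)

def grad_G_u(x, sigma):
    """Partials of G w.r.t. the standard normal variables U
    (chain rule: dG/dU_i = dG/dX_i * sigma_i for independent normals)."""
    d = 20.0 / (x[0] + x[1] + 5.0) ** 2
    return np.array([d * sigma[0], d * sigma[1]])

mu = np.array([4.0, 4.0])
sigma = np.array([1.0, 1.0])

u = np.zeros(2)                      # Step 1: start at the mean (u = 0)
for k in range(100):
    x = mu + sigma * u               # Step 2: inverse transformation, Eq. (5.3)
    g = G(x)
    grad = grad_G_u(x, sigma)
    u_new = (u @ grad - g) / (grad @ grad) * grad   # Step 3: HL-RF update, Eq. (5.8)
    if np.linalg.norm(u_new - u) < 1e-8:            # Step 4: convergence check
        u = u_new
        break
    u = u_new

beta = np.linalg.norm(u)             # Eq. (5.4)
print(f"MPP u* = {u}, beta = {beta:.4f}")
# Expected: u* ≈ (3.5, 3.5), beta ≈ 4.9497, PF = Phi(-beta) ≈ 3.7e-7
```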
5.2.2 Second-Order Reliability Method
As can be seen in Fig. 5.2, FORM constructs a first-order (linear) approximation to the limit state surface at the MPP. If G(u) is nonlinear, the resulting error can be large. In such cases, SORM, which incorporates the curvature information at the MPP, is more accurate. The comparison between FORM and SORM is shown in Fig. 5.2. The accuracy of reliability estimation can be improved by a nonlinear approximation of the limit state surface using SORM. This increase in accuracy is achieved by utilizing more information, especially the second derivatives with respect to the input random variables. There are many SORM formulations (e.g., Breitung [5], Hohenbichler and Rackwitz [6], and Tvedt [7]). This chapter details one of these methods, Breitung's asymptotic
Fig. 5.2 FORM/SORM approximations of a performance function in U-space

solution. Breitung's SORM approximation of the probability of failure can be expressed in the explicit form

P_F = Φ(−β) ∏_{i=1}^{N−1} (1 + βκ_i)^{−1/2}   (5.10)
where κ_i, i = 1, 2, . . ., N − 1, are the principal curvatures of G(u) at the MPP. Therefore, the κ_i need to be calculated, which can be completed in two steps:

1. Rotate the standard normal variables U_i (in U-space) to a new set of standard normal variables Y_i (in Y-space), where the last variable Y_N points in the same direction as the unit gradient vector of G(u) at the MPP. To do this, we generate an orthogonal rotation matrix R, which can be derived from a simple matrix R_0 expressed as

R_0 = [ 1                          0                          ⋯   0
        0                          1                          ⋯   0
        ⋮                          ⋮                          ⋱   ⋮
        ∂G(u*)/∂u_1 / |∇G(u*)|     ∂G(u*)/∂u_2 / |∇G(u*)|     ⋯   ∂G(u*)/∂u_N / |∇G(u*)| ]   (5.11)
where the last row consists of the components of the unit gradient vector of the limit state function at the MPP. Next, an orthogonal matrix R can be obtained by orthogonalizing R_0 using the Gram-Schmidt algorithm. In the Y-space, the second-order approximation of the limit state function at the MPP can be expressed as

G(Y) = −Y_N + β + (1/2)(Y − Y*)ᵀ R D Rᵀ (Y − Y*)   (5.12)

where D is the second-order Hessian matrix of size N by N, and Y* = [0, 0, . . ., β]ᵀ is the MPP in the Y-space.

2. Compute the principal curvatures κ_i as the N − 1 eigenvalues of the (N − 1) × (N − 1) matrix A of the form

A = R D Rᵀ / (2|∇G(u*)|)   (5.13)
For high-dimensional nonlinear problems, however, FORM and SORM perform poorly, and MPP-based methods may not be applicable.
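Given the FORM reliability index β and the principal curvatures κ_i, Breitung's correction in Eq. (5.10) is a one-line computation. A minimal sketch (the values of β and κ below are illustrative, not from the original text) is:

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def breitung_pf(beta, kappas):
    """Breitung's asymptotic SORM estimate, Eq. (5.10):
    PF ≈ Phi(-beta) * prod_i (1 + beta * kappa_i)^(-1/2),
    where kappas are the N-1 principal curvatures of G(u) at the MPP."""
    factor = 1.0
    for k in kappas:
        factor /= sqrt(1.0 + beta * k)
    return phi(-beta) * factor

print(breitung_pf(3.0, [0.0]))   # kappa = 0 recovers the FORM result, ≈ 0.00135
print(breitung_pf(3.0, [0.2]))   # a positive curvature lowers the estimate
```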
5.3 Sampling Methods for Reliability Analysis
During the last several decades, sampling methods have played an important role in advancing research in reliability analysis. These methods generally involve generation of random samples of input random variables, deterministic evaluations of the performance function at these random samples, and post-processing to extract the probabilistic characteristics (e.g., statistical moments, reliability, and PDF) of the performance function. In this section, direct MCS, the simplest yet very useful sampling method, is briefly introduced, followed by a smart MCS method that borrows ideas from MPP-based methods, namely the importance sampling method.
5.3.1 Monte Carlo Simulation
The MCS captures the frequency of the target event by means of "experimentation." It can estimate the probability of the target event without knowing the nature of the event, which makes it very suitable for black-box models. The basic idea behind MCS is to approximate the underlying distribution and associated probabilistic characteristics (e.g., mean, variance, and higher-order moments) of a random function by computing the value of the function at a certain number of simulated random samples. In addition to the direct Monte Carlo method, many Monte Carlo methods based on surrogate models have been proposed [8, 9].

1. Direct MCS

Define the event G(x) < 0 as the failure event. The sign of the performance function can be used to judge whether the constraint fails (positive or negative). In addition, the boundary state between the positive and negative domains is defined as the limit state (i.e., G(x) = 0). Hence, as illustrated in Fig. 5.3, by spreading a large number n_MC of random sample points in the design space, the performance function can be evaluated. After counting the number of sample points with a negative performance function value, n_{G≤0}, the failure probability of the constraint can be approximated by the ratio of n_{G≤0} to n_MC [10]:

Fig. 5.3 Concept of reliability analysis using direct MCS

P_F = n_{G≤0} / n_MC   (5.14)
Besides, the failure domain Ω_F is introduced as the set of x which yields G(x) < 0. The sample points in the failure domain are the failure points. An indicator function I_{Ω_F}(x) is used to determine whether a sample point lies in the failure domain:

I_{Ω_F}(x) = 1 if x ∈ Ω_F; 0 otherwise   (5.15)
Direct MCS becomes more precise as the number of sample points increases; this is also its biggest advantage. In theory, as long as there are enough sample points, the accuracy of the obtained results can be guaranteed. Its shortcoming is equally obvious: for problems with a low probability of failure, the required sample size is very large, which brings huge computational and time costs. The number of random sample points needed can be estimated by specifying a coefficient of variation (COV) ε_MCS for the quantified probability of failure:

ε_MCS = √[(1 − P_F)/(n_MC × P_F)] × 100%   (5.16)

For example, consider a reliability problem with P_F = 0.5%. If we want to make the coefficient of variation ε_MCS less than 5%, the MCS sample size n_MC should be more than 79,600.
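A minimal direct MCS sketch illustrating Eqs. (5.14) and (5.16) follows; the performance function and sample size are chosen so that P_F ≈ 0.5%, matching the example above (all names and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative performance function: G = X1 + X2 - 4.357 with
# X1, X2 ~ N(4, 1), chosen so that the true PF ≈ 0.5%.
def G(x):
    return x[:, 0] + x[:, 1] - 4.357

n_mc = 100_000                            # > 79,600, so the COV stays below 5%
x = rng.normal(4.0, 1.0, size=(n_mc, 2))
n_fail = np.count_nonzero(G(x) <= 0.0)    # number of failure samples
pf = n_fail / n_mc                        # Eq. (5.14)
cov = np.sqrt((1.0 - pf) / (n_mc * pf))   # Eq. (5.16), as a fraction
print(f"PF ≈ {pf:.4f}, COV ≈ {cov:.3f}")  # PF ≈ 0.005, COV ≈ 0.045
```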
2. Surrogate-based MCS In order to overcome the shortcomings of direct MCS, surrogate models have been implemented to facilitate the performance evaluation. Commonly used surrogate models include polynomial response surface model, neural network model, support vector machine, and Kriging model. Four main general aspects can be highlighted to play a major role in the surrogate modeling: (1) Initial design of experiment (DoE) points Random sampling techniques, such as MCS, are the most fundamental techniques to define the initial DoE points; however, as these do not obey any criterion other than the random description of x, they do not provide the most efficient approach to it. With the requirement for efficient surrogate modeling techniques, the Latin hypercube sampling (LHS) becames one of the most widely implemented techniques in adaptive surrogate modeling for reliability analysis. In order to meet the demand for some complex engineering problems, [11, 12] recently proposed the usage of uniform DoE points, and [13] utilized the Sobol sequences for the selection of initial DoE points. (2) DoE enrichment and stopping criterion Despite the importance of the initial DoE, the possibility to adaptively enrich the initial DoE is one of the main features of adaptive surrogate modeling. It includes improvements to establish G(x) substitution capabilities in order to select other DoE points that are expected to improve this approximation. This happens repeatedly until the stop criterion is satisfied. Learning functions are convenient mathematical functions that weight alternative model properties to find the best candidates for improved DoE. They evaluate a set of candidate criteria, which are essentially based on consideration of uncertainty in model approximation to failure regions, and select the new most promising DoE-enriching criteria. Learning functions are the present state-of-art technique for DoE enrichment. Echard et al. [14] proposed the adaptive Kriging model method (AK-MCS). The Adaptive Kriging Monte Carlo method consists of an active learning reliability method combining Kriging and Monte Carlo simulation. Kriging active learning methods for reliability analysis have actually been proposed in previous works [15]. Two learning functions are used here, namely the expected feasibility function (EFF) and the active learning function U(x). EFF comes from the global optimization (efficient global reliability analysis, EGRA) method [15]. It is able to provide at a certain sample point the degree to which the response value of its performance equation satisfies the equality constraint G(x) = a, which is in a self-defined area a ± ε. EEF is defined as shown in Eq. (5.17).
5.3
Sampling Methods for Reliability Analysis
EFF ðxÞ = GðxÞ - a
2Φ
135
ða - εÞ - GðxÞ ða þ εÞ - GðxÞ a - G ðx Þ -Φ -Φ σ ðxÞ σ ðx Þ σ ðx Þ G
G
G
ða - εÞ - GðxÞ ða þ εÞ - GðxÞ a - GðxÞ - σ ðxÞ 2ϕ -ϕ -ϕ G σ G ðxÞ σ G ðxÞ σ G ðxÞ þ Φ
ða þ εÞ - GðxÞ ð a - ε Þ - G ðx Þ -Φ σ ðxÞ σ ðxÞ G
G
ð5:17Þ where G is the constructed Kriging model, Φ is the standard normal cumulative distribution function, and ϕ is the standard normal distribution density function. For reliability analysis, the threshold of a is 0. In EGRA, the value of ε in the expected feasibility function is 2σ 2 . G
The active learning function U(x) is a learning function based on a different concept from EFF based on the statement that only the sign of the response value of the sample point is concerned when the Monte Carlo method is used to calculate the failure probability in the reliability analysis. In fact, in AK-MCS, the exact estimation of the sign of the response value only needs to be performed among the Monte Carlo population candidate sample points, and according to the distribution of random variables, the limit state can be roughly approximated in the case of low probability density distribution. Those points with a high potential risk of crossing G (x) = 0 must be added to the training set for model building and their response values evaluated. In fact, uncertainty at these points can cause their predicted values to change from positive to negative (or from negative to positive). This results in a change in the probability of failure. Potential “danger” points can exhibit three characteristics: being close to a limit state, having a high degree of uncertainty (high Kriging variance), or both. To identify them, a learning function U(x) is proposed, specifically as shown in Eq. (5.18) [14]: j GðxÞ j - UðxÞσ ðxÞ = 0 G
ð5:18Þ
It represents the distance of the Kriging standard deviation between the predicted and estimated limit states. It represents a measure of reliability, expressing the probability that the symbol estimate at a certain point is wrong. The smaller the U (x) is, the more likely it is that the dot symbol is incorrectly estimated, and the more it needs to be added to our training set. The above content is the earlier research results of the active learning algorithm of the Kriging model. In recent years, based on AK-MCS, EFF, and active learning function U(x), scholars have proposed many improvement methods to improve their sampling efficiency. The adaptive Kriging-oriented importance sampling method proposed by Zhang et al. [16] and the error rate-based adaptive Kriging reliability analysis method proposed by Wang et al. [17] can effectively improve the sampling efficiency. To guide the choice of samples, Jiang et al. [18] proposed a failurepursuing sampling framework with sensitive Voronoi cell.
136
5
Time-Independent Reliability Analysis
The probability of failure can be calculated by MCS after constructing a response surrogate model GðXÞ. nMC
PF =
j=1
I GðXÞ nMC
ð5:19Þ
(3) DoE size In the definition of the initial and posterior DoE, there is interest in considering the number of variables that are strictly necessary to define an accurate surrogate model. High-dimensional spaces demand additional effort in the analysis. Sensitivity analyses are an effective method to reduce the DoE to the variables of interest. Adaptive reduction of the DoE random variables, such as applied in the PCE-RBDO of [19], is an efficient method to address dimensionality in complex problems. Recent research works [20–24] have shown that dimensionality dependence is still relevant in reliability surrogate modeling implementations. (4) Surrogate model parameters As seen in Chap. 3, all surrogate models have a priori assumptions and parameters to estimate that are expected to have a large impact on the performance of the surrogate for G(x). This fact has given rise to a range of techniques for improving the reliability of parameter estimates of surrogate models. Nonetheless, the relevance of [25] emphasizing model assumptions and parameter estimates in the application of alternative models in engineering remains largely underestimated and ignored.
5.3.2
Importance Sampling
To alleviate this computational burden and reduce the variance of reliability estimates, researchers have developed several modified MCS methods, the most popular of which is the importance sampling method [26]. The basic idea of importance sampling is to assign more sample points to regions that have a greater influence on the probability of failure, i.e., the neighborhoods of G(x) = 0. The evaluation of sampling points far away from G(x) = 0 becomes less significant and can be reduced to improve efficiency. Therefore, one of the most important elements in importance sampling is choosing an appropriate sampling distribution that encourages random samples to be placed in these regions. The U-space in Fig. 5.4 shows an illustrative comparison between sampling distributions centered at origin and at the MPP in the standard normal space. The efficiency of importance sampling is improved by introducing an importance sampling density function hX(x) that favors failure regions. Xiao et al. [27] proposed a hierarchical importance sampling method based on importance sampling. Even without an optimal important sampling density, further improvements in sampling
5.3
Sampling Methods for Reliability Analysis
137
Fig. 5.4 Comparison between direct MCS and importance sampling
efficiency can be achieved. Under the same sample size, stratified importance sampling can obtain the estimation of failure probability with lower variance. If the importance sampling density hX(x) can be obtained, the probability of failure can be expressed as: PF =
⋯
I Ω ðxÞf X ðxÞdx Ω
=
⋯
I Ω ðx Þ Ω
f X ðxÞ h ðxÞdx hX ðxÞ X
f ðxÞ = Eh I Ω ðxÞ X hX ð x Þ =
1 nMC
nMC
I Ω xj j=1
ð5:20Þ
f X xj hX xj
It can be expected that in the new sampling distribution, the probability of the index value of random sampling points is greater, that is, the probability of falling into the failure area is greater. However, the optimal importance sampling density is not implementable since it involves the failure probability which is unknown beforehand. To approximate the optimal importance sampling density, several adaptive methods have been proposed [28, 29]. Xiao et al. [27] proposed the stratified importance sampling method to improve the efficiency of importance sampling. Suppose the range of values of the i-th variable xi [b1, b2] is divided into m subintervals, i.e., Ak = [ak - 1, ak], k = 1, 2, . . ., m, a0 = b1, am = b2. The failure probability in Eq. (5.19) can be represented as
138
5
PF = Eh I Ω ðxÞ
f X ð xÞ hX ð x Þ
= EAk Eh I Ω ðxÞ m
=
Time-Independent Reliability Analysis
pE k=1 k h
f X ð xÞ jx 2 Ak hX ð x Þ i
I Ω ðxÞ
ð5:21Þ
f X ð xÞ jx 2 Ak hX ð x Þ i
where pk = Pr ðxi 2 Ak Þ = Ak hX i ðxi Þdxi and hX i ðxi Þ is the marginal probability density function of hX(x). Equation (5.21) can be considered as a special case of stratified sampling. Let φk = Eh I Ω ðxÞ hf XX ððxxÞÞjxi 2 Ak , then PF = km= 1 pk φk . According to the importance sampling density hX(x), generate Nk independent ðjÞ stratified samples xk ðj = 1, . . . , N k Þ. Then φk can be estimated as 1 φk = Nk
ðjÞ
Nk
ðjÞ I Ω xk
j=1
f X xk
ðjÞ
hX x k
ð5:22Þ
And the failure probability can be estimated as m
PF =
ð5:23Þ
p k φk k=1
Since all the samples are independently and identically distributed, the expectation and variance of φk can be easily obtained as Eðφk Þ = Eh I Ω ðxÞ V ð φk Þ =
f X ð xÞ jx 2 Ak hX ð x Þ i
f ð xÞ 1 V I ðxÞ X jx 2 Ak N k h Ω hX ð x Þ i
ð5:24Þ ð5:25Þ
Then the expectation of $\hat{P}_F$ can be represented as

$$E\left(\hat{P}_F\right)=E\left[\sum_{k=1}^{m} p_k \hat{\varphi}_k\right]=\sum_{k=1}^{m} p_k E\left(\hat{\varphi}_k\right)=\sum_{k=1}^{m} p_k E_h\!\left[I_{\Omega}(\mathbf{x}) \frac{f_X(\mathbf{x})}{h_X(\mathbf{x})} \,\middle|\, x_i \in A_k\right]=P_F \tag{5.26}$$
This shows that $\hat{P}_F$ is an unbiased estimator of PF. The variance of $\hat{P}_F$ can be written as

$$V\left(\hat{P}_F\right)=\sum_{k=1}^{m} p_k^{2} V\left(\hat{\varphi}_k\right)=\sum_{k=1}^{m} p_k^{2} \frac{1}{N_k} V_h\!\left[I_{\Omega}(\mathbf{x}) \frac{f_X(\mathbf{x})}{h_X(\mathbf{x})} \,\middle|\, x_i \in A_k\right] \tag{5.27}$$
In order to facilitate a proof of the improvement, consider the case where Nk is proportional to pk [30], i.e., Nk = pkN. Then, Eq. (5.27) can be further written as

$$
\begin{aligned}
V\left(\hat{P}_F\right) &=\sum_{k=1}^{m} p_k \frac{N_k}{N} \frac{1}{N_k} V_h\!\left[I_{\Omega}(\mathbf{x}) \frac{f_X(\mathbf{x})}{h_X(\mathbf{x})} \,\middle|\, x_i \in A_k\right] \\
&=\sum_{k=1}^{m} p_k \frac{1}{N} V_h\!\left[I_{\Omega}(\mathbf{x}) \frac{f_X(\mathbf{x})}{h_X(\mathbf{x})} \,\middle|\, x_i \in A_k\right] \\
&=\frac{1}{N} \sum_{k=1}^{m} p_k V_h\!\left[I_{\Omega}(\mathbf{x}) \frac{f_X(\mathbf{x})}{h_X(\mathbf{x})} \,\middle|\, x_i \in A_k\right] \\
&=\frac{1}{N} E_{A_k}\!\left[V_h\!\left[I_{\Omega}(\mathbf{x}) \frac{f_X(\mathbf{x})}{h_X(\mathbf{x})} \,\middle|\, x_i \in A_k\right]\right]
\end{aligned} \tag{5.28}
$$
According to the law of total variance in subintervals [31], shown in Eq. (5.29), Eq. (5.28) can also be written as Eq. (5.30):

$$V(Y)=E_{A_k}\left[V\left(Y \mid X \in A_k\right)\right]+V_{A_k}\left[E\left(Y \mid X \in A_k\right)\right] \tag{5.29}$$

$$V\left(\hat{P}_F\right)=\frac{1}{N}\left(V_h\!\left[I_{\Omega}(\mathbf{x}) \frac{f_X(\mathbf{x})}{h_X(\mathbf{x})}\right]-V_{A_k}\!\left[E_h\!\left[I_{\Omega}(\mathbf{x}) \frac{f_X(\mathbf{x})}{h_X(\mathbf{x})} \,\middle|\, x_i \in A_k\right]\right]\right) \tag{5.30}$$
Since $V_{A_k}\!\left[E_h\!\left[I_{\Omega}(\mathbf{x}) \frac{f_X(\mathbf{x})}{h_X(\mathbf{x})} \,\middle|\, x_i \in A_k\right]\right]$ is nonnegative, stratified importance sampling yields an estimate of the failure probability with lower variance for the same sample size. Additionally, to obtain even more variance reduction, the input variable with the highest value of this term should be chosen as the stratification variable. In fact, $V_{A_k}\!\left[E_h\!\left[I_{\Omega}(\mathbf{x}) \frac{f_X(\mathbf{x})}{h_X(\mathbf{x})} \,\middle|\, x_i \in A_k\right]\right]$ is an estimate of the variance-based sensitivity measure [32].
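Continuing the importance sampling sketch above, the stratified variant of Eqs. (5.21)-(5.23) can be mimicked in a few lines: the first coordinate is stratified into m subintervals under the shifted sampling density, N_k samples are allocated in proportion to p_k, and the per-stratum estimates are recombined. The problem setup and all names are again illustrative assumptions; scipy is used only for the standard normal CDF and its inverse.

```python
import math
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
beta = 3.0
mpp = np.full(2, beta / math.sqrt(2.0))
def G(u):
    return beta * math.sqrt(2.0) - u[:, 0] - u[:, 1]

N, m = 10_000, 8
# Strata A_k for the first coordinate; under h_X it is marginally N(mpp[0], 1),
# so edges spanning +/-4 standard deviations capture essentially all mass.
edges = np.linspace(-4.0, 4.0, m + 1)              # in standardized units
pk = norm.cdf(edges[1:]) - norm.cdf(edges[:-1])
pk /= pk.sum()                                     # p_k = Pr(x_1 in A_k)

pf_strat = 0.0
for k in range(m):
    nk = max(int(round(pk[k] * N)), 2)             # allocation N_k = p_k * N
    # Inverse-CDF sampling of u1 from h_X truncated to the stratum A_k.
    lo, hi = norm.cdf(edges[k]), norm.cdf(edges[k + 1])
    u1 = mpp[0] + norm.ppf(lo + rng.random(nk) * (hi - lo))
    u2 = mpp[1] + rng.standard_normal(nk)
    u = np.column_stack([u1, u2])
    w = np.exp(-0.5 * np.sum(u**2, 1) + 0.5 * np.sum((u - mpp)**2, 1))
    phi_k = np.mean((G(u) < 0) * w)                # per-stratum mean, Eq. (5.22)
    pf_strat += pk[k] * phi_k                      # recombination, Eq. (5.23)

print(f"stratified importance sampling = {pf_strat:.3e}")
```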
5.3.3 Other Methods
1. Dimension Reduction Method

The search for efficient computational procedures to handle high-dimensional problems has led to the development of dimension reduction (DR) methods [33, 34]. Outside the field of engineering design, DR methods are widely known as high-dimensional model representation (HDMR) methods, which were originally developed for efficient multivariate model representation in chemical system modeling [35, 36]. These methods approximate a multidimensional response function using a hierarchy of component functions of increasing dimension, starting from a constant term and building up to low-dimensional interaction terms. For system responses with negligible higher-order variable interactions, HDMR methods can efficiently and accurately formulate the response function using lower-order component functions (usually second-order, i.e., bivariate, terms are sufficient). In fact, the responses of most real physical systems are significantly affected only by low-order interactions of the input random variables. Depending on the way in which the component functions are determined, HDMR methods can be categorized into two types: ANOVA-HDMR and Cut-HDMR [35]. ANOVA-HDMR exactly follows the analysis of variance (ANOVA) procedure and is useful for measuring the contribution of the variance of each component function to the output variance [37]. However, the multidimensional integrations involved in ANOVA-HDMR make this expansion computationally unattractive. On the other hand, the Cut-HDMR expansion exactly represents the response function in the hyperplanes that pass through a reference point in the input random space. This expansion does not require multidimensional integrations and is computationally much more efficient than ANOVA-HDMR. It is fair to say that the DR method is essentially Cut-HDMR designated for the purpose of reliability analysis. Specialized versions of this method include the univariate dimension reduction (UDR) method, which simplifies one multidimensional response function to multiple one-dimensional component functions [33, 35], and the bivariate dimension reduction (BDR) method, which simplifies one multidimensional response function to multiple one- and two-dimensional integrations [34, 36].

2. System Reliability Analysis Method

In actual engineering, a system may have multiple failure modes, so it is necessary to study system reliability analysis and to develop active learning strategies for multiple failure modes. Fauriat and Gayton [39] proposed a system reliability analysis method, which mainly defines the expressions of the limit state functions of series, parallel, and composite systems for system reliability analysis. On this basis, Yun et al. [40] proposed a further improved system reliability analysis method with an improved learning function U(x) for system reliability analysis under multiple failure modes. The key issue in analyzing system reliability (only the series system and the parallel system are considered here) is to estimate Eqs. (5.31) and (5.32), which can be done by MCS; a minimal sketch follows the equations.

$$P_f^{series}=\Pr\left(\bigcup_{i=1}^{n} g_i(\mathbf{x}) \leq 0\right) \tag{5.31}$$

$$P_f^{parallel}=\Pr\left(\bigcap_{i=1}^{n} g_i(\mathbf{x}) \leq 0\right) \tag{5.32}$$
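The MCS estimation of Eqs. (5.31) and (5.32) amounts to evaluating each limit state on a common sample and taking the union (series) or intersection (parallel) of the failure indicators. The two toy limit states below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two assumed failure modes; g_i(x) <= 0 marks failure of mode i.
def g1(x):
    return 3.0 - x[:, 0] - x[:, 1]

def g2(x):
    return 2.5 - x[:, 0] + x[:, 1]

x = rng.standard_normal((1_000_000, 2))
fail = np.column_stack([g1(x) <= 0, g2(x) <= 0])

pf_series = np.mean(fail.any(axis=1))    # union of failure modes, Eq. (5.31)
pf_parallel = np.mean(fail.all(axis=1))  # intersection of modes, Eq. (5.32)
print(pf_series, pf_parallel)
```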
Exercises

5.1 Consider a reliability problem for a mechanical component involving two independent random variables X1 and X2 with marginal probability density functions

$$f_{X_1}\left(x_1\right)=2-2 x_1, \quad f_{X_2}\left(x_2\right)=2-2 x_2, \quad 0 \leq x_1, x_2 \leq 1$$

The limit state function for this component is

$$G\left(X_1, X_2\right)=1.8-X_1-X_2$$

Calculate the probability of failure by FORM and SORM.

5.2 Consider a reliability problem for a mechanical series system with four constraints involving two basic random variables X1 and X2, both following the standard normal distribution. The limit state function of this problem can be expressed as

$$G\left(x_1, x_2\right)=\min \begin{cases}3+0.1\left(x_1-x_2\right)^2-\dfrac{x_1+x_2}{\sqrt{2}} \\ 3+0.1\left(x_1-x_2\right)^2+\dfrac{x_1+x_2}{\sqrt{2}} \\ \left(x_1-x_2\right)+\dfrac{6}{\sqrt{2}} \\ -\left(x_1-x_2\right)+\dfrac{6}{\sqrt{2}}\end{cases}$$

Calculate the probability of failure by the MCS method. If possible, calculate the probability of failure by other simulation methods (e.g., the surrogate-based method or the importance sampling method).
References

1. Hu, C., Youn, B. D., & Wang, P. (2019). Engineering design under uncertainty and health prognostics. Springer.
2. Hasofer, A. M., & Lind, N. C. (1974). Exact and invariant second-moment code format. Journal of the Engineering Mechanics Division, 100(1), 111–121.
3. Rosenblatt, M. (1952). Remarks on a multivariate transformation. The Annals of Mathematical Statistics, 23(3), 470–472.
4. Hohenbichler, M., & Rackwitz, R. (1981). Non-normal dependent vectors in structural safety. Journal of the Engineering Mechanics Division, 107(6), 1227–1238.
5. Breitung, K. (1984). Asymptotic approximations for multinormal integrals. Journal of Engineering Mechanics, 110(3), 357–366.
6. Hohenbichler, M., & Rackwitz, R. (1988). Improvement of second-order reliability estimates by importance sampling. Journal of Engineering Mechanics, 114(12), 2195–2199.
7. Tvedt, L. (1983). Two second-order approximations to the failure probability. Veritas report RDIV/20-004083.
8. Peng, X., et al. (2022). Construction of adaptive Kriging metamodel for failure probability estimation considering the uncertainties of distribution parameters. Probabilistic Engineering Mechanics, 70, 103353.
9. Peng, X., et al. (2022). Estimation of small failure probability based on adaptive subset simulation and deep neural network. Journal of Mechanical Design, 144(10), 101704.
10. Cruse, T. A. (1997). Reliability-based mechanical design (Vol. 108). CRC Press.
11. Jiang, Y., et al. (2015). An efficient method for generation of uniform support vector and its application in structural failure function fitting. Structural Safety, 54, 1–9.
12. Zhao, W., Fan, F., & Wang, W. (2017). Non-linear partial least squares response surface method for structural reliability analysis. Reliability Engineering & System Safety, 161, 69–77.
13. Zhang, J., et al. (2019). Probability and interval hybrid reliability analysis based on adaptive local approximation of projection outlines using support vector machine. Computer-Aided Civil and Infrastructure Engineering, 34(11), 991–1009.
14. Echard, B., Gayton, N., & Lemaire, M. (2011). AK-MCS: An active learning reliability method combining Kriging and Monte Carlo simulation. Structural Safety, 33(2), 145–154.
15. Bichon, B. J., et al. (2008). Efficient global reliability analysis for nonlinear implicit performance functions. AIAA Journal, 46(10), 2459–2468.
16. Zhang, X., Wang, L., & Sørensen, J. D. (2020). AKOIS: An adaptive Kriging oriented importance sampling method for structural system reliability analysis. Structural Safety, 82, 101876.
17. Wang, Z., & Shafieezadeh, A. (2019). REAK: Reliability analysis through error rate-based adaptive Kriging. Reliability Engineering & System Safety, 182, 33–45.
18. Jiang, C., et al. (2019). A general failure-pursuing sampling framework for surrogate-based reliability analysis. Reliability Engineering & System Safety, 183, 47–59.
19. Kim, N., Wang, H., & Queipo, N. (2004). Adaptive reduction of design variables using global sensitivity in reliability-based optimization. In 10th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference (p. 4515).
20. Pan, Q., & Dias, D. (2017). Sliced inverse regression-based sparse polynomial chaos expansions for reliability analysis in high dimensions. Reliability Engineering & System Safety, 167, 484–493.
21. Xu, J., & Kong, F. (2018). A cubature collocation based sparse polynomial chaos expansion for efficient structural reliability analysis. Structural Safety, 74, 24–31.
22. Cheng, K., & Lu, Z. (2018). Adaptive sparse polynomial chaos expansions for global sensitivity analysis based on support vector regression. Computers & Structures, 194, 86–96.
23. Fang, H., et al. (2019). A gradient-based uncertainty optimization framework utilizing dimensional adaptive polynomial chaos expansion. Structural and Multidisciplinary Optimization, 59, 1199–1219.
24. Xu, J., & Wang, D. (2019). Structural reliability analysis based on polynomial chaos, Voronoi cells and dimension reduction technique. Reliability Engineering & System Safety, 185, 329–340.
25. Abdallah, I., Lataniotis, C., & Sudret, B. (2019). Parametric hierarchical kriging for multi-fidelity aero-servo-elastic simulators: Application to extreme loads on wind turbines. Probabilistic Engineering Mechanics, 55, 67–77.
26. Echard, B., et al. (2013). A combined importance sampling and kriging reliability method for small failure probabilities with time-demanding numerical models. Reliability Engineering & System Safety, 111, 232–240.
27. Xiao, S., Oladyshkin, S., & Nowak, W. (2020). Reliability analysis with stratified importance sampling based on adaptive Kriging. Reliability Engineering & System Safety, 197, 106852.
28. Dubourg, V., Sudret, B., & Deheeger, F. (2013). Metamodel-based importance sampling for structural reliability analysis. Probabilistic Engineering Mechanics, 33, 47–57.
29. Au, S.-K., & Beck, J. L. (1999). A new adaptive importance sampling scheme for reliability calculations. Structural Safety, 21(2), 135–158.
30. Fishman, G. (2013). Monte Carlo: Concepts, algorithms, and applications. Springer.
31. Xiao, S., & Lu, Z. (2020). Structural reliability analysis with conditional importance sampling method based on the law of total expectation and variance in subintervals. Journal of Engineering Mechanics, 146(1), 04019111.
32. Sobol', I. M. (1993). Sensitivity estimates for nonlinear mathematical models. Mathematical Modelling and Computational Experiments, 1(4), 407–414.
33. Rahman, S., & Xu, H. (2004). A univariate dimension-reduction method for multi-dimensional integration in stochastic mechanics. Probabilistic Engineering Mechanics, 19(4), 393–408.
34. Xu, H., & Rahman, S. (2004). A generalized dimension-reduction method for multidimensional integration in stochastic mechanics. International Journal for Numerical Methods in Engineering, 61(12), 1992–2019.
35. Rabitz, H., et al. (1999). Efficient input–output model representations. Computer Physics Communications, 117(1–2), 11–20.
36. Li, G., Rosenthal, C., & Rabitz, H. (2001). High dimensional model representations. The Journal of Physical Chemistry A, 105(33), 7765–7777.
37. Sobol', I. M. (2003). Theorems and examples on high dimensional model representation. Reliability Engineering & System Safety, 79(2), 187–193.
38. Xu, H., & Rahman, S. (2005). Decomposition methods for structural reliability analysis. Probabilistic Engineering Mechanics, 20(3), 239–250.
39. Fauriat, W., & Gayton, N. (2014). AK-SYS: An adaptation of the AK-MCS method for system reliability. Reliability Engineering & System Safety, 123, 137–144.
40. Yun, W., et al. (2019). AK-SYSi: An improved adaptive Kriging model for system reliability analysis with multiple failure modes by a refined U learning function. Structural and Multidisciplinary Optimization, 59, 263–278.
Chapter 6
Time-Dependent Reliability Analysis
Nomenclature

AERS  Adaptive extreme response surface method
CY  Correlation matrix
C^l_sensitive  Sensitive Voronoi cells
EGO  Efficient global optimization
EOLE  Expansion optimal linear estimation
eSPT  Equivalent stochastic process transformation method
e(t)  Stochastic process with zero mean
e(x)  Mean squared error
e^i_LOO*  Failure probability error
e_LOO*  Average failure probability error
FORM  First-order reliability method
F(t)  Stochastic process model
Fmin  Approximated global minimum value
f(t)  Approximation of stochastic process model
G(X, Y(t), t)  LSF containing time factors
LSF  Limit state function
mVar  Average variance of error
NERS  Nested extreme response surface
NTPM  Nested time-prediction model
N+(0, TL)  Number of outcrossings from the safe domain to the failure domain within [0, TL]
nX  Number of random variables
nY  Number of stochastic processes
OSE  Orthogonal series expansion
P  Sample set
P\pi  Sample set without pi
PF(0)  Instantaneous probability of failure at initial time
P̂f  Predicted failure probability evaluated by using the sample set P
P̂f^{P\pi}  Predicted failure probability evaluated by using the sample set P\pi
p  Number of dominant eigenvalues
Rc  Covariance matrix
SORM  Second-order reliability method
STRA  Surrogate-based time-dependent reliability analysis
s  Number of discrete time nodes
TL  Specified time period
TRA  Time-dependent reliability analysis
t  Time parameter
ti  Discrete time nodes
v+(t)  Instantaneous outcrossing rate at time t
WEFF  Weighted expected feasibility function
WSE  Wrong state estimation
WSIE  Wrong sign estimation
X  Vector of random variables
Y(t)  Vector of stochastic processes
Z  Standard random variables
μY(t)  Mean function
σY(t)  Standard deviation function
ρY(t)  Vector of the correlation function
ρY(ti, tj)  Autocorrelation coefficient function
λi  Eigenvalues of the correlation matrix
Φi  Eigenvectors of the correlation matrix
Φ(.)  Cumulative distribution function of the standard Gaussian distribution
ϕ(.)  Probability density function of the standard Gaussian distribution
ξ(x)  Relative error
εmax  Maximum real-time estimation error
εtar  Target prediction error

6.1 Basic Concept of Time-Dependent Reliability Analysis

6.1.1 Introduction
In recent years, with the expansion of production scale and the increasing complexity of large-scale equipment, the probability of failure has also been increasing, causing substantial economic losses and casualties. Various time-varying quantities exist in actual design and manufacturing processes, and these are regarded as stochastic processes; for example, external loads vary in time, and material properties degrade over time. Since one of the main concerns of product design is to ensure a high level of system reliability throughout the product life cycle, it is crucial to perform time-dependent reliability analysis in the design. Here, time-dependent reliability is defined as the probability that a time-dependent probabilistic constraint is satisfied over the design lifetime. Unlike time-independent reliability analysis, time-dependent reliability analysis involves limit state functions that vary with time; therefore, it requires a series of instantaneous limit states over the design life of the system. Existing time-independent probabilistic analysis methods can hardly handle the time dependence of limit state functions. To estimate the realistic reliability of complex equipment, research on time-dependent reliability analysis (TRA) [1, 2] has been a hotspot in the past few decades.
6.1.2 Mathematical Expression of the Time-Dependent Reliability Analysis Problem
The theoretical basis of time-dependent reliability mainly includes the analysis of random variables and stochastic processes. The Kriging model is widely used in time-dependent reliability problems because it provides not only a prediction of the model response but also the prediction variance. In time-dependent reliability analysis, the limit state function (LSF) represents the working state of the system or structure. An LSF containing time factors can be expressed as G(X, Y(t), t), where X = [X1, X2, . . ., X_{nX}] is the vector of random variables, Y(t) = [Y1(t), Y2(t), . . ., Y_{nY}(t)] is the vector of stochastic processes, t is the time parameter, and nX and nY are the numbers of random variables and stochastic processes, respectively. A failure event occurs when the limit state function is less than zero at any time node within a specified time period [0, TL]. Hence, the time-dependent failure probability can be expressed by

$$P_F\left(0, T_L\right)=\Pr\left\{G(\mathbf{X}, \mathbf{Y}(t), t)<0,\ \exists t \in\left[0, T_L\right]\right\} \tag{6.1}$$
In order to solve TRA problems, several time-dependent reliability analysis methods have been proposed, such as the outcrossing rate methods and the extreme value methods. This chapter introduces these methods in detail.
6.2 Expansion of the Stochastic Process
As mentioned in the previous section, stochastic processes are widely involved in time-dependent reliability analysis. Stochastic process models have been widely used for function approximation; details can be found in [3, 4]. In TRA, the inputs of the limit state function may contain stochastic processes. However, stochastic processes are hard to handle directly in TRA, and discretization is a common strategy for dealing with them. Discretization methods commonly used in stochastic problems mainly include the Karhunen-Loeve expansion [5], the orthogonal series expansion (OSE) [6], and the expansion optimal linear estimation (EOLE) [7]. For time-dependent reliability analysis problems, the EOLE is often used to expand the stochastic process Y(t) by a series of random variables Z = (Z1, Z2, . . ., Zp), as follows:
$$\hat{Y}(t)=\mu_Y(t)+\sigma_Y(t) \sum_{i=1}^{p} \frac{Z_i}{\sqrt{\lambda_i}} \boldsymbol{\Phi}_i^{T} \boldsymbol{\rho}_Y(t) \tag{6.2}$$
where p is the number of dominant eigenvalues, which should be smaller than or equal to the number of discrete time nodes s; μY(t) and σY(t) are the mean and standard deviation functions of Y(t), respectively; the Zi are independent standard normal variables; ρY(t) = [ρY(t, t1), ρY(t, t2), . . ., ρY(t, ts)]^T is the vector of correlation functions; the ti (i = 1, 2, . . ., s) are the discrete time nodes; and λi and Φi are the eigenvalues and eigenvectors, respectively, of the correlation matrix CY. The correlation matrix CY is expressed as
$$\mathbf{C}_Y=\begin{bmatrix}\rho_Y\left(t_1, t_1\right) & \rho_Y\left(t_1, t_2\right) & \cdots & \rho_Y\left(t_1, t_s\right) \\ \rho_Y\left(t_2, t_1\right) & \rho_Y\left(t_2, t_2\right) & \cdots & \rho_Y\left(t_2, t_s\right) \\ \vdots & \vdots & \ddots & \vdots \\ \rho_Y\left(t_s, t_1\right) & \rho_Y\left(t_s, t_2\right) & \cdots & \rho_Y\left(t_s, t_s\right)\end{bmatrix}_{s \times s} \tag{6.3}$$
where ρY(ti, tj) is the autocorrelation coefficient function. First, the time interval [0, T] is divided into s time nodes, and the EOLE method is used to convert the stochastic process Y(t) into the random variables Z. Then [X, Y(t)] is transformed into [X, Z], and the sampling strategy is used to select the next sample. Because each sample corresponds to a specific time node, substituting [Z*_{j1}, . . ., Z*_{jp}, t_s] into Eq. (6.2) yields Y*_j(t_s), and the surrogate model is then constructed.
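The EOLE discretization of Eqs. (6.2) and (6.3) amounts to an eigendecomposition of the node correlation matrix followed by a weighted sum of standard normal variables. The Python sketch below, with an assumed squared-exponential autocorrelation and illustrative parameter values, generates one trajectory of a zero-mean, unit-variance process; the dominant eigenvalues are kept with a 99% criterion of the kind used by Eq. (6.22) later in this chapter.

```python
import numpy as np

rng = np.random.default_rng(3)

s = 64                                          # number of discrete time nodes
t_nodes = np.linspace(0.0, 5.0, s)
# Correlation matrix C_Y of Eq. (6.3) with rho(ti, tj) = exp(-(tj - ti)^2).
C = np.exp(-(t_nodes[:, None] - t_nodes[None, :]) ** 2)

lam, phi = np.linalg.eigh(C)                    # eigenpairs of C_Y
order = np.argsort(lam)[::-1]                   # sort eigenvalues descending
lam, phi = lam[order], phi[:, order]
p = int(np.searchsorted(np.cumsum(lam) / lam.sum(), 0.99) + 1)

def eole(t, z, mu=0.0, sigma=1.0):
    """Reconstruct Y(t) from p standard normal variables z via Eq. (6.2)."""
    rho_t = np.exp(-(np.atleast_1d(t)[:, None] - t_nodes[None, :]) ** 2)
    series = sum(z[i] / np.sqrt(lam[i]) * rho_t @ phi[:, i] for i in range(p))
    return mu + sigma * series

t_fine = np.linspace(0.0, 5.0, 500)
y = eole(t_fine, rng.standard_normal(p))        # one process realization
print(f"kept p = {p} of s = {s} eigenvalues")
```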
6.3 Outcrossing Rate Methods
The outcrossing rate method is developed based on the "outcrossing event" proposed by Rice [8]. As shown in Fig. 6.1, an outcrossing event takes place when the time-variant system performance function G(X, Y(t), t) crosses the G = 0 axis with respect to time t. In TRA, this indicates that the system is about to fail, and the first derivative of reliability with respect to time can be approximated by the outcrossing rate [9]. Therefore, the key to solving TRA problems lies in the calculation of the outcrossing rate.
Fig. 6.1 The outcrossing event within [0, TL]
The approach based on the outcrossing rate calculates the probability of failure according to the expected average number of outcrossing events. The instantaneous outcrossing rate at time t is defined as

$$v^{+}(t)=\lim _{\Delta t \rightarrow 0^{+}} \frac{\Pr\{G(\mathbf{X}, \mathbf{Y}(t), t)>0 \cap G(\mathbf{X}, \mathbf{Y}(t+\Delta t), t+\Delta t) \leq 0\}}{\Delta t} \tag{6.4}$$
The outcrossing events are assumed to be statistically independent and are approximated using the Poisson distribution, as shown in Fig. 6.1. The probability of failure can then be defined as [1]

$$P_F\left(0, T_L\right) \approx 1-\exp \left\{-E\left[N^{+}\left(0, T_L\right)\right]\right\}=1-\left(1-P_F(0)\right) \exp \left(-\frac{v^{+} T_L}{1-P_F(0)}\right) \tag{6.5}$$
where N+(0, TL) denotes the number of outcrossings from the safe domain to the failure domain within [0, TL], and PF(0) is the instantaneous probability of failure at the initial time. Although outcrossing rate methods provide a mathematical derivation of time-dependent reliability, it is difficult to assess outcrossing rates for general stochastic processes; only special stochastic processes (e.g., stationary Gaussian processes) can be analyzed in this way [10]. A large number of methods have focused on asymptotic integration to calculate the outcrossing rate [11, 12]. In this section, two methods for calculating the outcrossing rate, namely the PHI2 method [13] and its improved version PHI2+ [14], are briefly discussed. The differences between PHI2 and the traditional methods are:

1. PHI2 considers the LSF itself as the stochastic process whose outcrossing of the zero level is to be characterized;
2. The stochastic processes involved in G(X, Y(t), t) do not need to be discretized;
3. PHI2 does not make a first-order Taylor expansion of the limit state at t + Δt.

Replacing the limit passage in Eq. (6.4) by a finite-difference-like operation, the outcrossing rate can be rewritten as

$$v_{\mathrm{PHI2}}^{+}(t)=\frac{\Pr\{G(\mathbf{X}, \mathbf{Y}(t), t)>0 \cap G(\mathbf{X}, \mathbf{Y}(t+\Delta t), t+\Delta t) \leq 0\}}{\Delta t} \tag{6.6}$$
Based on the PHI2 method, Sudret et al. [14] proposed the improved PHI2 method (PHI2+) to stabilize PHI2, which considers both the stationary and the non-stationary case. In this time-dependent reliability approach, reliability indexes are directly estimated from first-passage probabilities [15, 16]. When using the outcrossing rate method, many hypothetical models exist, such as Poisson models and Markov models [17], and the time-dependent failure probability and reliability indexes vary across these hypothetical models. Although the outcrossing rate method has been widely used for time-dependent reliability problems [18], its accuracy is relatively low due to the assumption that all outcrossings are independent of each other and follow a Poisson distribution.
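The finite-difference idea behind Eq. (6.6) can be demonstrated with crude MCS instead of the FORM computation used in the actual PHI2 method: simulate many process trajectories on a grid, count sign changes of G between adjacent nodes to estimate v+(t), and feed the accumulated rate into the Poisson formula of Eq. (6.5). The toy performance function, the AR(1) approximation of the Gaussian process, and all parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

n, dt, TL = 50_000, 0.01, 1.0
t = np.arange(0.0, TL + dt, dt)
X = rng.normal(1.0, 0.2, size=(n, 1))          # assumed random variable

# Stationary standard Gaussian process, approximated by an AR(1) recursion
# whose one-step correlation matches exp(-(dt/l)^2); correlations at larger
# lags are only approximate, which is acceptable for this sketch.
l = 0.5
r1 = np.exp(-((dt / l) ** 2))
Y = np.empty((n, t.size))
Y[:, 0] = rng.standard_normal(n)
for k in range(1, t.size):
    Y[:, k] = r1 * Y[:, k - 1] + np.sqrt(1.0 - r1**2) * rng.standard_normal(n)

G = 2.0 - X * t[None, :] - Y                   # toy G(X, Y(t), t); G <= 0 fails
nu = np.mean((G[:, :-1] > 0) & (G[:, 1:] <= 0), axis=0) / dt   # Eq. (6.6)

pf0 = np.mean(G[:, 0] <= 0)                    # instantaneous P_F(0)
E_N = np.sum(nu) * dt                          # E[N+(0, TL)]
pf_poisson = 1.0 - (1.0 - pf0) * np.exp(-E_N / (1.0 - pf0))    # Eq. (6.5)
pf_direct = np.mean((G <= 0).any(axis=1))      # trajectory-wise reference
print(pf_poisson, pf_direct)
```

Because the Poisson assumption treats outcrossings as independent, pf_poisson typically overestimates the direct trajectory-wise estimate when crossings are strongly correlated, which is exactly the accuracy limitation noted above.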
6.4 Extreme Value Methods
Instead of calculating the outcrossing rate, the extreme value methods calculate the probability that the extreme value of the performance exceeds the specified performance threshold of an engineering system. These methods substitute the performance function with a surrogate model, as introduced in Chap. 3. The extreme value methods mainly include two types, i.e., the extreme response surrogate-based methods and the response surrogate-based methods. The extreme response surrogate-based methods aim at constructing the extreme surface with respect to time by performing a global optimization at every training point, while the response surrogate-based methods construct a global surrogate model for the performance function with the random variables, stochastic processes, and time parameter as inputs.
6.4.1 Nested Extreme Response Surface Approach
This section presents the details of one of the extreme response surrogate-based methods for time-dependent reliability analysis, namely, the nested extreme response surface (NERS) approach [19]. The NERS approach is based on a nested time-prediction model (NTPM). For a given system, the performance function G(X, Y(t), t) may change over time due to time-dependent uncertainty. Figure 6.2 shows random realizations of two different types of system responses, where the red and black lines represent monotonically and non-monotonically increasing performance, respectively.

Fig. 6.2 Examples of time-dependent performance functions

If G(X, Y(t), t) increases or decreases monotonically with time, the extreme responses will usually occur at the boundary of the time interval, where the probability of failure also approaches its maximum value. For such time-dependent reliability constraints, reliability analysis only needs to be performed at the time interval boundary. However, the situation is more complicated when the system response G(X, Y(t), t) is non-monotonic. In this case, the key is to carry out the reliability analysis at the instantaneous time when the extreme response of the performance function is attained. The time that leads to the extreme response of the system performance function varies with the design X, and thus a response surface of this time with respect to the system design variables can be determined as

$$T=f(\mathbf{X}): \max _{t} S(\mathbf{x}, t), \quad \forall \mathbf{x} \in \mathbf{X} \tag{6.7}$$
For illustrative purposes, consider a time-varying LSF

$$G(\mathbf{X}, t)=20-X_1^2 X_2+5 X_1 t-\left(X_2+1\right) t^2$$

where t ∈ [0, 25]. For any given design x : {x1 ∈ [0, 10], x2 ∈ [0, 10]}, there exists an instantaneous time t* that maximizes G(x, t). To obtain t*, one may set the derivative of G with respect to t equal to zero, which yields t* = 5x1/(2x2 + 2). The time instant t* of the extreme response of G varies with the design variables x1 and x2. For example, for one specific system design, where x1 = 4 and x2 = 1, the system approaches its maximum response at time t = 5, while for another, where x1 = 5 and x2 = 0, the maximum response is reached at t = 12.5. The response surface of this extreme-response time is defined as the NTPM in the NERS method. The inputs of the NTPM are the design variables and parameters of interest, which can be stochastic or deterministic; the output is the time at which the system response approaches an extremum. With the NTPM, time-dependent reliability analysis problems can be transformed into time-independent ones, so that existing advanced reliability
analysis methods, such as the first-order reliability method (FORM) and the second-order reliability method (SORM), can be conveniently used. In design optimization problems with time-dependent probability constraints, the NTPM can be fully nested in the design process to convert time-dependent reliability analysis into a time-independent one by estimating the extreme time response for any given system design. Although the NTPM is helpful for time-dependent reliability analysis, developing a high-fidelity time prediction model can be very challenging. First, the analytical form of the time-dependent limit state is usually not available in practical design applications, so the NTPM must be developed based on a limited number of samples. Second, since the samples used to develop the NTPM are extreme time responses over the design space, it is critical to extract these responses efficiently to keep the design process computationally affordable. Third, the NTPM must perform two different roles: predicting the extreme time response for reliability analysis, and enrolling additional sample points in necessary regions during an iterative design process to improve the model itself. This section presents the NERS approach, which addresses the three challenges above. The key to the NERS approach is the efficient construction of nested time prediction models in the design space of interest, which can be used to predict when the system response approaches an extreme value. The NERS approach consists of three main techniques in three sequential steps: (1) efficient global optimization (EGO) for extreme time response identification, (2) construction of the Kriging-based NTPM, and (3) adaptive time response prediction and model maturation. In the first step, EGO is used to efficiently extract a certain number of extreme time response samples for the development of the NTPM in the second step. Once the Kriging-based NTPM for extreme time responses is established, the adaptive response prediction and model maturation mechanism ensures the accuracy and efficiency of the predictions by autonomously enrolling new sample points as needed during the analysis. The NERS method is outlined in the flowchart shown in Fig. 6.3, and the three key techniques are explained in detail in the rest of this section.

1. Efficient Global Optimization for Extreme Time Response Identification

For the reliability analysis of a time-dependent system response, it is very important to efficiently compute the extreme response of the LSF and locate the corresponding time at which the system response approaches this extreme value. For a given system design, the response of the LSF is time-dependent and can be a monotonic or non-monotonic one-dimensional function of time. The EGO technique [20] can be used to efficiently locate the extreme system response and the corresponding time, mainly because of its ability to handle non-monotonic limit states while ensuring excellent computational efficiency. This section focuses on the application of the EGO technique to the identification of extreme time responses; discussion of the EGO technique itself is omitted, as more details can be obtained from references [21, 22].
Fig. 6.3 Flowchart of the NERS approach for reliability analysis
In order to find the globally optimal time leading to the extreme response of the LSF, the EGO technique builds a one-dimensional stochastic process model of the existing sample responses as a function of time. Stochastic process models have been used extensively for function approximation; more information on these models can be found in references [3, 4]. In this study, the time-dependent response of the limit state function at a particular design point is represented in EGO by a one-dimensional stochastic process model with a constant global mean:

$$F(t)=\mu+e(t) \tag{6.8}$$
where μ is the global model representing the function mean, and e(t) is a stochastic process with zero mean and variance $\sigma_e^2$. The covariance between e(t) at two different points ti and tj is defined as $\operatorname{Cov}\left[e\left(t_i\right), e\left(t_j\right)\right]=\sigma_e^{2} R_c\left(t_i, t_j\right)$, in which the correlation function is given by

$$R_c\left(t_i, t_j\right)=\exp \left(-a\left|t_i-t_j\right|^{b}\right) \tag{6.9}$$
where a and b are unknown model hyperparameters. Based on a set of initial samples F(t1), F(t2), . . ., F(tk), an initial stochastic process model of F(t) can always be constructed by maximizing the likelihood function

$$L_F=\frac{1}{(2 \pi)^{n / 2}\left(\sigma^{2}\right)^{n / 2}\left|\mathbf{R}_c\right|^{1 / 2}} \exp \left[-\frac{(\mathbf{F}-\mu \mathbf{1})^{T} \mathbf{R}_c^{-1}(\mathbf{F}-\mu \mathbf{1})}{2 \sigma^{2}}\right] \tag{6.10}$$
where F = (F(t1), F(t2), . . ., F(tk)) denotes the sample responses of the limit state function, and Rc is the correlation matrix whose entries are defined by Eq. (6.9).
After developing an initial one-dimensional stochastic process model, EGO iteratively updates the model by continuously searching for the most informative sample points until the convergence criterion is met. To update the stochastic process model, EGO employs the expected improvement metric [23] to quantify the potential contribution of a new sample point to the existing response surface; the sample point giving the largest expected improvement value is selected in the next iteration. The expected improvement metric is briefly described next.

Consider a continuous function F(t) over time t that represents the time-dependent LSF for a given system design point in the design space. Here, we employ the expected improvement metric to determine the global minimum of F(t). Due to the finite sample size of F(t), the initial stochastic process model may involve large model uncertainty, so the function approximation f(t) may be significantly biased compared with the actual function F(t). Because of this model uncertainty, in EGO the function approximation f(t) at time t is treated as a normal random variable whose mean is the approximated response Fr(t) and whose standard error e(t) is determined from the stochastic process model. With these notations, the improvement at time t can be defined as

$$I(t)=\max \left(F_{\min }-f(t), 0\right) \tag{6.11}$$
where Fmin indicates the approximated global minimum value at the current EGO iteration. By taking the expectation of the right-hand side of Eq. (6.11), the expected improvement at any given time t can be presented as [20]

$$E[I(t)]=E\left[\max \left(F_{\min }-f, 0\right)\right]=\left(F_{\min }-F_r(t)\right) \Phi\!\left(\frac{F_{\min }-F_r(t)}{e(t)}\right)+e(t)\, \phi\!\left(\frac{F_{\min }-F_r(t)}{e(t)}\right) \tag{6.12}$$
where Φ(.) and ϕ(.) are the cumulative distribution function and the probability density function of the standard Gaussian distribution, respectively. The larger the expected improvement at time t, the more likely it is that a better approximation to the global minimum will be achieved. Therefore, a new sample should be evaluated at the time ti where the maximum expected improvement is attained, and the stochastic process model should be updated; using the updated model, a new approximation of the global minimum of F(t) can be obtained. This process is repeated iteratively, evaluating a new sample at the time of maximum expected improvement and updating the stochastic process model, until the maximum expected improvement becomes sufficiently small, i.e., less than a critical value Ic (in [19], Ic = |Fmin| × 1%, which is 1% of the absolute value of the current best global minimum approximation).
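The expected improvement of Eq. (6.12) is a one-line computation once the model mean and standard error are available. The sketch below wraps it in a helper and uses it to pick the next evaluation time; the sine-shaped mean and constant standard error are placeholders standing in for a real one-dimensional stochastic process model.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(f_min, mu, se):
    """Expected improvement of Eq. (6.12) for minimizing F(t).

    f_min : current best (smallest) observed response F_min
    mu    : model predictions F_r(t) at the candidate times
    se    : standard errors e(t) at the candidate times
    """
    se = np.maximum(se, 1e-12)       # guard against a zero predictive error
    z = (f_min - mu) / se
    return (f_min - mu) * norm.cdf(z) + se * norm.pdf(z)

# Placeholder model over the candidate time grid.
t = np.linspace(0.0, 25.0, 200)
mu = np.sin(t / 4.0)                 # assumed prediction F_r(t)
se = 0.2 * np.ones_like(t)           # assumed standard error e(t)

ei = expected_improvement(mu.min(), mu, se)
t_next = t[np.argmax(ei)]            # evaluate F here in the next EGO iteration
converged = ei.max() < 0.01 * abs(mu.min())   # the I_c = |F_min| * 1% rule
```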
2. Kriging-Based Nested Time Prediction Model

This section describes the process of developing the nested time prediction model using Kriging. After repeating the EGO process for different system sample points in the design space, a data set can be obtained that pairs each initial sample point x0 in the design space with the corresponding time t0 at which the system response approaches its extreme value. To construct the NTPM in the NERS method, design points are randomly generated in the design space according to the stochastic properties of the design variables. To balance the accuracy and efficiency of the NTPM, it is recommended to initially use 10 × (nX − 1) samples to build the Kriging-based NTPM for nX-dimensional problems (nX > 1). The accuracy and efficiency of the NTPM are controlled by the adaptive response prediction and model maturation (ARPMM) mechanism, which is detailed in the next subsection. The goal here is to develop a predictive model that estimates the time leading to the extreme performance of the limit state function for any given system design in the design space. For this purpose, a Kriging model is constructed based on the sample data set obtained during the EGO process. It is worth noting that other response surface approaches, such as simple linear regression models or artificial neural networks [24–26], could also be applied for the development of the NTPM; in this study, Kriging is used because it models the nonlinear relationship between the design variables and the extreme time responses well. The details of Kriging have been given in Chap. 3. With the Kriging time prediction model, the extreme time response for any given new point x′ can be estimated as

$$\hat{t}\left(\mathbf{x}^{\prime}\right)=\mu+\mathbf{r}^{T}\left(\mathbf{x}^{\prime}\right) \mathbf{R}_c^{-1}(\mathbf{t}-\mathbf{A} \mu) \tag{6.13}$$
where r(x′) is the correlation vector between x′ and the sampled points x1~xn, in which the i-th element of r is given by ri(x′) = Rc(x′, xi).

3. Adaptive Response Prediction and Model Maturation

When designing with the nested time prediction model, the prediction accuracy of the model is critical. Therefore, during the design process, a model maturation mechanism is needed to automatically enroll new sample points and improve the accuracy of the nested time prediction model when the accuracy conditions are not met. Wang and Wang [19] developed the adaptive response prediction and model maturation (ARPMM) mechanism based on the mean squared error e(x) of the current best prediction; the detailed procedure can be found in [19]. Before predicting the time response at a new design point x using the latest update of the NTPM, the ARPMM mechanism first computes the mean squared error e(x) of the current best prediction:

$$e(\mathbf{x})=\sigma^{2}\left[1-\mathbf{r}^{T} \mathbf{R}^{-1} \mathbf{r}+\frac{\left(1-\mathbf{A}^{T} \mathbf{R}^{-1} \mathbf{r}\right)^{2}}{\mathbf{A}^{T} \mathbf{R}^{-1} \mathbf{A}}\right] \tag{6.14}$$
To reduce the numerical error, the relative MSE is suggested as a prediction performance measure for the NTPM, which is given by

$$\xi(\mathbf{x})=\frac{e(\mathbf{x})}{\mu} \tag{6.15}$$
The predicted time response t′ for a new design point x using the NTPM is accepted only if the relative error ξ(x) is less than a user-defined threshold ξt. To balance a smooth design process against the required prediction accuracy, the threshold is recommended to be in the range [10⁻³, 10⁻²]. Once the prediction at the design point x is accepted, the time response t′ of the extreme performance is estimated using Eq. (6.13) and returned to the time-dependent reliability analysis procedure. If the relative error is greater than the threshold, x is enrolled as a new sample input, and the EGO procedure is employed to extract the true time response at x, i.e., the time at which the limit state function approaches its extreme performance. Using the new design point x and its true extreme time response, the NTPM is updated. Through the developed ARPMM mechanism, the NTPM can be adaptively updated during time-dependent reliability analysis to guarantee accuracy while maintaining efficiency. Note that the ARPMM mechanism automatically improves the Kriging model during the design process; in rare cases, when multiple design points close to one another are used to seed the Kriging model, numerical instability caused by near-singular correlation matrices may occur. Therefore, we recommend including an extra step in the ARPMM procedure that checks for singularity after adding new sample points, to preserve the predictive accuracy of the Kriging model.
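The ARPMM acceptance test of Eqs. (6.14) and (6.15) reduces to a few linear solves against the correlation matrix. The helper below is a minimal sketch under the assumption of an ordinary-Kriging-style model with regression vector A; the variable names follow the equations, and the inputs would come from the trained NTPM.

```python
import numpy as np

def kriging_mse(sigma2, R, r, A):
    """Prediction mean squared error e(x) of Eq. (6.14).

    sigma2 : process variance estimate
    R      : n x n correlation matrix of the training samples
    r      : length-n correlation vector between x and the samples
    A      : length-n regression vector (ones for a constant mean)
    """
    Ri_r = np.linalg.solve(R, r)
    Ri_A = np.linalg.solve(R, A)
    return sigma2 * (1.0 - r @ Ri_r + (1.0 - A @ Ri_r) ** 2 / (A @ Ri_A))

def accept_prediction(sigma2, mu, R, r, A, xi_t=1e-2):
    """ARPMM-style gate: accept the NTPM prediction if the relative error
    of Eq. (6.15) is below the threshold; otherwise the point should be
    sent to EGO and enrolled as a new sample."""
    xi = kriging_mse(sigma2, R, r, A) / abs(mu)
    return xi < xi_t
```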
6.4.2 Other Methods
Hu and Du [27] proposed a mixed efficient global optimization (m-EGO) method for time-dependent reliability analysis. Different from the current EGO method, which draws samples of the random variables and time independently, the m-EGO method draws samples of the two types simultaneously. The m-EGO method employs adaptive Kriging combined with Monte Carlo simulation (AK-MCS) so that high accuracy is also achieved, and Monte Carlo simulation (MCS) is then applied to calculate the time-dependent reliability based on the surrogate model. Details of m-EGO can be found in [27]. Nevertheless, finding the extreme time at each point brings a considerable computational burden.
6.5 Response Surrogate-Based Methods
Instead of constructing an extreme response surface, the response surrogate-based approach constructs a global surrogate model of the performance function with the random variables, stochastic processes, and time parameter as inputs. Subsequently, the extreme surfaces can be approximated based on the global surrogate model. The real concern of the global surrogate model is whether a random point falls into the failure state at any discrete time node, rather than whether the extreme value of the random point over the time interval meets the failure condition. It is obvious that the response surrogate-based approach is more efficient than the extreme response surrogate-based method. This section introduces several response surrogate-based methods. The confidence-based adaptive extreme response surface method (AERS) [9] has to build many Kriging models at all discretized time nodes, and the training samples of different Kriging models cannot be reused. The stopping criterion of the equivalent stochastic process transformation method (eSPT) [28] varies with the specific problem. Li et al. [29] presented a new instantaneous response surface method (t-IRS) for time-dependent reliability analysis. Hu et al. [31] proposed a surrogate-based time-dependent reliability analysis (STRA) method for digital twins (DTs).
6.5.1 Confidence-Based Adaptive Extreme Response Surface Method
This method first makes a spectral decomposition of the input stochastic processes, representing a stochastic process Y(t) by a set of deterministic functions with corresponding random coefficients; the EOLE method is usually adopted for this step. With the spectral decomposition, the input stochastic process Y(t) is represented by a formulation of p random variables Z = [Z1, Z2, . . ., Zp] with the corresponding eigenfunctions. The time-dependent limit state function is then derived as

$$G(\mathbf{X}, \mathbf{Y}(t), t)=G_T(\mathbf{X}, \mathbf{Z}, t) \tag{6.16}$$
If a sample [x, z] is drawn from the joint distribution of the random variables [X, Z], a deterministic realization of the stochastic process is obtained. The time-dependent extreme value function is defined as

$$G_{e, T}(\mathbf{X}, \mathbf{Z}, T)=\max _{t \in[0, T]} G_T(\mathbf{X}, \mathbf{Z}, t) \tag{6.17}$$
where the extreme value Ge,T(.) is a function of the random variables X and Z, as well as the time parameter T. Note that, for a given realization of the random parameters X and Z, the time-dependent extreme value is a monotonically increasing function of time. The probability of failure for the time interval [0, T] is calculated by

$$P_f(0, T)=\Pr\left(G_{e, T}(\mathbf{X}, \mathbf{Z}, T)>0\right)=\int \cdots \int_{G_{e, T}(\mathbf{X}, \mathbf{Z}, T)>0} f_X(\mathbf{x}) f_Z(\mathbf{z})\, d\mathbf{x}\, d\mathbf{z} \tag{6.18}$$
Since it is generally impossible to derive the time-dependent extreme value function and to evaluate the multidimensional integration in Eq. (6.18) analytically, this method constructs a set of surrogate models for predicting the time-dependent extreme value function and approximates the time-dependent probability of failure in Eq. (6.18) using MCS. The method also adopts advanced sampling strategies to improve the efficiency of building the surrogate models, which can be found in [9].
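To make the extreme-value formulation of Eqs. (6.17) and (6.18) concrete, the sketch below estimates a time-dependent failure probability by taking the worst case over a discretized time grid for each random realization. An inexpensive analytical function (the performance function of Example 6.1 at the end of this chapter) stands in for the adaptively trained surrogate, a single time-constant standard normal variable crudely stands in for the discretized process Y(t), and the failure convention of Eq. (6.1) (G < 0) is used.

```python
import numpy as np

rng = np.random.default_rng(5)

def G_T(x, z, t):
    # Stand-in for the surrogate G_T(X, Z, t); borrowed from Example 6.1.
    return (-20.0 + x[:, 0]**2 * x[:, 1]
            - 5.0 * x[:, 0] * (1.0 + z) * t + (x[:, 1] + 1.0) * t**2)

n = 100_000
t_grid = np.linspace(0.0, 1.0, 50)
x = rng.normal(3.5, 0.25, size=(n, 2))   # X1, X2 as in Example 6.1
z = rng.standard_normal(n)               # crude stand-in for Y(t)

# Time-dependent extreme value per realization (minimum, since G < 0 fails).
g_min = np.min(np.stack([G_T(x, z, t) for t in t_grid], axis=1), axis=1)
pf = np.mean(g_min < 0.0)
print(f"extreme-value MCS estimate: {pf:.4f}")
```

Because the time-constant z ignores the decorrelation of Y(t) over the interval, this estimate only roughly approaches the reference value of Example 6.1; the methods in this section replace both simplifications with a trained surrogate and a proper EOLE expansion.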
6.5.2 Equivalent Stochastic Process Transformation Method
This approach presents an equivalent stochastic process transformation method for cost-effectively predicting reliability degradation over the lifetime of engineering systems, taking into account random variables and stochastic process parameters. A new concept, the instantaneous failure surface, is introduced to cover all potential failure events that may occur within a certain time period. To obtain the instantaneous failure surface, a time-independent reliability problem is formulated by transforming the stochastic process and the time parameter into random variables. A surrogate model is constructed using Kriging techniques to predict the instantaneous failure surface, and it is efficiently updated using a maximum-confidence-enhanced sequential sampling scheme. The time-dependent reliability of the dynamic system is then assessed by Monte Carlo simulation based on the updated high-fidelity Kriging model. The details of this method can be found in [28].
6.5.3 Instantaneous Response Surface Method

The instantaneous response surface method (t-IRS) for time-dependent reliability analysis proceeds as follows:

1. The time interval of interest [0, T] is discretized into s time nodes ti, i = 1, 2, . . ., s, with uniform spacing Δt = ti − ti−1 = T/(s − 1). The stochastic process Y(t) is reconstructed and transformed into the random variables Z using the EOLE approach.
2. The instantaneous response surrogate model is constructed and then adaptively updated.
3. The time-dependent reliability is calculated by MCS.

The t-IRS method only needs to build one instantaneous response surrogate model, and thus the computational efficiency can be greatly improved. Details of this method can be found in [29].
6.5.4 Real-Time Estimation Error-Guided Active Learning Kriging Method
Since the Kriging model provides local uncertainty information, it is easy to obtain the probability of wrong sign estimation (WSIE) of each random candidate at the discrete time nodes. When the time-dependent failure probability is estimated by Monte Carlo simulation (MCS), the Kriging model classifies all stochastic candidates into failure candidates and safe candidates; however, this classification is not entirely accurate. Based on the probability of WSIE, the probability of wrong state estimation (WSE) of the failed and safe candidates can be calculated separately. In each iteration, the candidate with the highest WSE probability is selected as a new training sample, and the corresponding time node is determined according to the WSIE probability. Additionally, the probability of WSE is used to calculate a confidence interval for the total number of misclassified safe or failed candidates. From this confidence interval, the real-time maximum estimation error of the predicted failure probability can be obtained, and this maximum error is used to judge whether to stop training the surrogate. The maximum real-time estimation error can be calculated by [30]

$$\varepsilon_{\max }^{k}=\max \left\{\frac{N_f}{N_f+n_s^{l}-n_s^{u}}-1,\ 1-\frac{N_f}{N_f+n_s^{l}-n_s^{u}}\right\} \times 100 \% \tag{6.19}$$

where Nf is the number of candidates classified as failed, and n_s^l and n_s^u are the bounds of the confidence interval on the number of misclassified candidates.

6.5.5 Surrogate-Based Time-Dependent Reliability Analysis Method
The authors proposed a new surrogate-based time-dependent reliability analysis (STRA) method for digital twins, focusing in particular on the stochastic process discretization and the adaptive sampling strategy [31]. The EOLE method requires two parameters: the number of discrete time nodes s and the number of dominant eigenvalues p. For the same p, different values of s represent the stochastic processes with different accuracy. In addition, the number of discrete time nodes s affects the computation time of TRA, which in turn affects the real-time capability of a DT. The STRA method uses the variance of the discretization error to select the best s. The variance of the error of the EOLE method at the j-th test time node tj is given by

$$\operatorname{Var}\left[Y\left(t_j\right)-\hat{Y}\left(t_j\right)\right]=\sigma_Y^{2}\left(t_j\right)-\sum_{i=1}^{p} \frac{1}{\lambda_i}\left(\boldsymbol{\Phi}_i^{T} \mathbf{C}_Y\left(t_j\right)\right)^{2} \tag{6.20}$$

where CY(tj) denotes the correlation vector between Y(tj) and the process values at the s discrete time nodes.
For a given s, the variances of the error at 10⁶ test time nodes are calculated. For each stochastic process, the average of the variances of the error over all test time nodes is taken as the discretization error of that stochastic process, as expressed by Eq. (6.21); if there are several stochastic processes in a TRA problem, the discretization error of all the stochastic processes is the sum of the average errors of the individual processes. Among the different values of s, the one corresponding to the minimum discretization error is selected for discretizing the stochastic process(es). Therefore, STRA can make a trade-off between accuracy and real-time performance.

$$\mathrm{mVar}=\frac{1}{10^{6}} \sum_{j=1}^{10^{6}} \operatorname{Var}\left[Y\left(t_j\right)-\hat{Y}\left(t_j\right)\right] \tag{6.21}$$
To decrease the number of random variables, the dominant eigenvalues need to be selected. For a given s, there are s eigenvalues and eigenvectors. The eigenvalues are arranged in descending order, and the first p of them are selected as the dominant eigenvalues, where p is defined by

$$\frac{\sum_{i=1}^{p} \lambda_i}{\sum_{i=1}^{s} \lambda_i}>\theta \tag{6.22}$$
Here, θ is set to 0.99. Therefore, the stochastic process Y(t) can be represented by several independent standard normal variables Z = (Z1, Z2, . . ., Zp).

The STRA method aims to improve the efficiency of constructing the Kriging model by sampling in the regions with large failure probability prediction errors, shifting the adaptive sampling domain from the global space to the vicinity of the LSF. Two major issues need to be addressed: (1) properly partitioning the sampling space and (2) identifying the sensitive regions near the LSF.

1. Partitioning the Sampling Region Based on Voronoi Tessellation

As shown in Chap. 3, the Voronoi diagram is used to partition the sampling region. Figure 6.4 shows a two-dimensional example partitioned by Voronoi tessellation; the asterisks in Fig. 6.4 are the existing samples. The candidate samples over the entire design space, shown as triangles, are first generated by the Monte Carlo simulation (MCS) method, and all the candidate samples are then categorized into Voronoi cells based on the Voronoi tessellation.

Fig. 6.4 Example of two-dimensional Voronoi tessellation

2. Identification of Sensitive Voronoi Cells

For efficient TRA, each new sample is expected to contribute to improving the accuracy of the reliability analysis based on the surrogate model. To select samples more efficiently, it is necessary to filter out the Voronoi cells with large prediction errors. A modified leave-one-out (LOO) cross-validation is developed that uses the error of the failure probability as the error index for selecting the sensitive Voronoi cells. The modified LOO error index of a point pi is defined by

$$e_{LOO^*}^{i}=\left|\hat{P}_f-\hat{P}_f^{P \backslash p_i}\right| \tag{6.23}$$
where $\hat{P}_f$ is the predicted failure probability evaluated using the sample set P, and $\hat{P}_f^{P \backslash p_i}$ is the predicted failure probability evaluated using the sample set P∖pi. The above process is repeated for each point in P, eventually yielding the failure probability errors of all existing samples. A large failure probability error $e_{LOO^*}^{i}$ of a sample pi indicates that the constructed Kriging model deviates largely from the real model in cell Ci. The average failure probability error $e_{LOO^*}$ is calculated as

$$e_{LOO^*}=\frac{1}{m} \sum_{i=1}^{m} e_{LOO^*}^{i} \tag{6.24}$$
162
6
- μ ð xÞ
EFFðxÞ = μ ðxÞ 2Φ
G
σ ð xÞ
G
- εð xÞ - μ ð xÞ
-Φ
σ ð xÞ
G
- σ ðxÞ 2ϕ G
þεðxÞ Φ
Time-Dependent Reliability Analysis
G
-Φ
G
μ ð xÞ G
σ G ðxÞ
-ϕ
εð xÞ - μ ð xÞ G
σ ðxÞ
-Φ
G
σ ð xÞ G
- εð xÞ - μ ð xÞ σ G ð xÞ
εð xÞ - μ ð xÞ
-ϕ
G
εðxÞ - μ ðxÞ G
σ G ð xÞ
- εð xÞ - μ ð xÞ
G
σ ð xÞ
G
G
ð6:25Þ where ϕ is the standard normal probability density function; Φ is the standard normal cumulative distribution function; ε(x) is the tolerance, which is set to 2σ ðxÞ; and G
μ ðxÞ and σ ðxÞ are the predicted mean and standard deviation, respectively, for G
G
sample x. For application in a DT, the efficiency of the EFF needs to be enhanced and the number of DoE samples decreased. The weighted expected feasibility function (WEFF) is therefore proposed to improve the sampling efficiency; it combines the EFF with cell-based weights that balance the importance of the samples and their positional relationship. For each sensitive Voronoi cell $C_{sensitive}^{l}$, l = 1, 2, . . ., q, there is a corresponding weight, expressed as

$$weight(l)=\frac{e_{LOO^*}^{l}}{\max \left(e_{LOO^*}\right)} \tag{6.26}$$
where max(e_{LOO*}) is the maximum failure probability error over all the identified sensitive cells. For a candidate point x belonging to the Voronoi cell $C_{sensitive}^{l}$, the weighted expected feasibility value is defined by

$$WEFF(\mathbf{x})=weight(l) \times EFF(\mathbf{x}) \tag{6.27}$$
Among the candidate sample points located in the sensitive Voronoi cells $C_{sensitive}^{l}$, l = 1, 2, . . ., q, the best next sample to be evaluated and added for updating the surrogate model is the one having the maximum WEFF value. In this way, the Kriging model is iteratively updated until the following stopping criterion is satisfied:

$$\varepsilon_{\max } \leq \varepsilon_{tar} \tag{6.28}$$
where εmax is the predicted error, which can be calculated according to the maximum real-time estimation error of Eq. (6.19) [30], and εtar is the target prediction error, which is set to 0.05 [30]. After the surrogate model Ĝ(X, Y(t), t) is successfully constructed, the time-dependent probability of failure can be instantly calculated by MCS. A flowchart of the STRA method is shown in Fig. 6.5.
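The learning function of Eqs. (6.25)-(6.27) is straightforward to vectorize given the Kriging mean and standard deviation over the candidate pool. The sketch below is a minimal implementation under the stated choices (threshold 0, tolerance ε = 2σ_Ĝ); the surrogate outputs and cell errors it consumes are assumed to come from the surrounding STRA loop.

```python
import numpy as np
from scipy.stats import norm

def eff(mu_g, sigma_g):
    """Expected feasibility function of Eq. (6.25) with threshold 0 and
    tolerance eps(x) = 2*sigma_g(x), as in the EGRA algorithm [32]."""
    eps = 2.0 * sigma_g
    z0 = -mu_g / sigma_g
    zl = (-eps - mu_g) / sigma_g
    zu = (eps - mu_g) / sigma_g
    return (mu_g * (2.0 * norm.cdf(z0) - norm.cdf(zl) - norm.cdf(zu))
            - sigma_g * (2.0 * norm.pdf(z0) - norm.pdf(zl) - norm.pdf(zu))
            + eps * (norm.cdf(zu) - norm.cdf(zl)))

def next_sample(mu_g, sigma_g, cell_error, max_error):
    """WEFF of Eqs. (6.26)-(6.27): weight each candidate's EFF by the LOO
    failure probability error of its sensitive cell, then pick the best.

    mu_g, sigma_g : Kriging mean/std at the candidates in sensitive cells
    cell_error    : e_LOO* of the cell containing each candidate
    max_error     : max(e_LOO*) over all identified sensitive cells
    """
    weff = (cell_error / max_error) * eff(mu_g, sigma_g)
    return int(np.argmax(weff))
```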
Fig. 6.5 Flowchart of the STRA method: (1) initialize; (2) discretize the time interval and reconstruct the stochastic process by the improved EOLE; (3) generate initial candidate design points by Latin hypercube sampling (LHS); (4) construct the Kriging model with the current training set S; (5) partition the sampling region based on Voronoi tessellation and identify the sensitive Voronoi cells C^l_sensitive, l = 1, 2, . . ., q; (6) select the new sample point x = arg max(WEFF(x)) and add it to the training set S; (7) if εmax ≤ εtar, calculate the time-dependent probability of failure by MCS; otherwise, return to step (4).
Example 6.1 Consider the following performance function:

$$G(\mathbf{X}, Y(t), t)=-20+X_1^2 X_2-5 X_1(1+Y(t)) t+\left(X_2+1\right) t^2$$

where t represents the time parameter, which varies from 0 to 1; X = [X1, X2] denotes two independent random variables; and Y(t) is a Gaussian process. The statistical parameters of the random variables and the stochastic process are shown in Table 6.1.
Table 6.1 Statistical parameters of the random variables and the stochastic process used in the case study

Random variable or stochastic process | Distribution type | Mean | Standard deviation | Autocorrelation function
X1   | Normal           | 3.5 | 0.25 | -
X2   | Normal           | 3.5 | 0.25 | -
Y(t) | Gaussian process | 0   | 1    | exp(-(t2 - t1)²)

Table 6.2 Reliability analysis results of the case study

Method | NFE  | Pf       | Error (%)
MCS    | 10⁶  | 0.307814 | -
t-IRS  | 80.3 | 0.308050 | 0.08
eSPT   | 48.1 | 0.299643 | 2.65
REAL   | 32.1 | 0.299636 | 2.66
STRA   | 27.4 | 0.304104 | 1.20
Calculate the time-dependent reliability by response surrogate-based methods.

Solution As shown in Table 6.2, although the error of the failure probability calculated by the STRA method (1.20%) is larger than that calculated by the t-IRS method (0.08%), the STRA method requires the smallest number of function evaluations (NFE) among the four tested surrogate-based methods. This indicates that the STRA method can efficiently create a surrogate model using fewer DoE samples.

To solve this example, readers are welcome to use the codes available at the MathWorks File Exchange (https://ww2.mathworks.cn/matlabcentral/fileexchange/123480-surrogate-based-time-dependent-reliabillity-analysis) and at the GitHub repository (https://github.com/WeifeiHuZJU/Surrogate-based-time-dependent-reliabillity-analysis).

Exercises

There is a corroded beam structure, as shown in Fig. 6.6. The length of the beam, L, is 5 m, and the cross section is rectangular with an initial width b0 and height h0. Considering that the beam is affected by gravity, a uniformly distributed load p = ρst b0 h0 N/m is used, where ρst = 78.5 kN/m³ is the specific weight of steel. In addition, the beam is subjected to a concentrated force F(t) at its center point. The beam is corroded by external effects such as raindrop erosion during its life cycle; its cross-sectional area is assumed to decrease linearly with time, and the mechanical strength of the corroded area is gradually lost. The remaining uncorroded cross-sectional area A(t) can be expressed as

$$A(t)=b(t) \times h(t) \tag{6.29}$$
where b(t) = b0 - 2κt, h(t) = h0 - 2κt, κ = 0.03 mm/year denotes a constant corrosion rate, and t represents the time parameter, which varies within [0, 20] years.
Fig. 6.6 Corroded beam structure
Table 6.3 Statistical information on the random parameters of the corroded beam structure

Parameter | Distribution | Mean | Standard deviation | Autocorrelation function
fy   | Lognormal        | 240 MPa | 24 MPa  | -
b0   | Lognormal        | 0.2 m   | 0.01 m  | -
h0   | Lognormal        | 0.03 m  | 0.003 m | -
F(t) | Gaussian process | 3500 N  | 700 N   | exp(-(t2 - t1)²/λ²)
The performance function of the corroded beam can be expressed as

$$G(\mathbf{X}, Y(t), t)=M_u(t)-M(t)=\frac{b(t) h^{2}(t) f_y}{4}-\left(\frac{F(t) L}{4}+\frac{\rho_{st} b_0 h_0 L^{2}}{8}\right)$$

where Mu(t) = b(t)h²(t)fy/4 is the ultimate bending moment capacity of the beam and fy is the steel yield stress. The bending moment is largest at the midpoint of the beam, where it can be expressed as

$$M(t)=\frac{F(t) L}{4}+\frac{\rho_{st} b_0 h_0 L^{2}}{8}$$

Let X = [fy, b0, h0] be lognormally distributed random variables, let F(t) be a Gaussian process, and let the correlation parameter λ in the autocorrelation function be equal to 5 years. Detailed statistical information on the random variables and the stochastic process is presented in Table 6.3. Calculate the time-dependent reliability of this corroded beam over 5 years.

Note: this exercise is one of the case studies in reference [31]. For the detailed methodology, results, and discussion, readers are encouraged to consult [31].
References

1. Zhang, D., et al. (2017). Time-dependent reliability analysis through response surface method. Journal of Mechanical Design, 139(4), 041404.
2. Wang, L., et al. (2017). Structural time-dependent reliability assessment of the vibration active control system with unknown-but-bounded uncertainties. Structural Control and Health Monitoring, 24(10), e1965.
3. Koehler, J. R., & Owen, A. B. (1996). Computer experiments. In Handbook of statistics (Vol. 13, pp. 261–308).
4. Sacks, J., et al. (1989). Design and analysis of computer experiments. Statistical Science, 4(4), 409–423.
5. Ghanem, R. G., & Spanos, P. D. (2003). Stochastic finite elements: A spectral approach. Courier Corporation.
6. Zhang, J., & Ellingwood, B. (1994). Orthogonal series expansions of random fields in reliability analysis. Journal of Engineering Mechanics, 120(12), 2660–2677.
7. Li, C.-C., & Der Kiureghian, A. (1993). Optimal discretization of random fields. Journal of Engineering Mechanics, 119(6), 1136–1154.
8. Rice, S. O. (1944). Mathematical analysis of random noise. The Bell System Technical Journal, 23(3), 282–332.
9. Wang, Z., & Chen, W. (2017). Confidence-based adaptive extreme response surface for time-variant reliability analysis under random excitation. Structural Safety, 64, 76–86.
10. Breitung, K. (1988). Asymptotic crossing rates for stationary Gaussian vector processes. Stochastic Processes and Their Applications, 29(2), 195–207.
11. Schrupp, K., & Rackwitz, R. (1988). Outcrossing rates of marked Poisson cluster processes in structural reliability. Applied Mathematical Modelling, 12(5), 482–490.
12. Breitung, K. (1994). Asymptotic approximations for the crossing rates of Poisson square waves (p. 75). NIST Special Publication.
13. Andrieu-Renaud, C., Sudret, B., & Lemaire, M. (2004). The PHI2 method: A way to compute time-variant reliability. Reliability Engineering & System Safety, 84(1), 75–86.
14. Sudret, B. (2008). Analytical derivation of the outcrossing rate in time-variant reliability problems. Structure and Infrastructure Engineering, 4(5), 353–362.
15. Lutes, L. D., & Sarkani, S. (2009). Reliability analysis of systems subject to first-passage failure.
16. Song, J., & Der Kiureghian, A. (2006). Joint first-passage probability and reliability of systems under stochastic excitation. Journal of Engineering Mechanics, 132(1), 65–77.
17. Jiang, C., et al. (2014). A time-variant reliability analysis method based on stochastic process discretization. Journal of Mechanical Design, 136(9), 091009.
18. Hu, Z., & Du, X. (2015). First order reliability method for time-variant problems using series expansions. Structural and Multidisciplinary Optimization, 51, 1–21.
19. Wang, Z., & Wang, P. (2012). A nested extreme response surface approach for time-dependent reliability-based design optimization. Journal of Mechanical Design, 134(12).
20. Jones, D. R., Schonlau, M., & Welch, W. J. (1998). Efficient global optimization of expensive black-box functions. Journal of Global Optimization, 13(4), 455.
21. Schonlau, M. (1997). Computer experiments and global optimization.
22. Stuckman, B. E. (1988). A global search method for optimizing nonlinear systems. IEEE Transactions on Systems, Man, and Cybernetics, 18(6), 965–977.
23. Mockus, J. (1998). The application of Bayesian methods for seeking the extremum. Towards Global Optimization, 2, 117.
24. Knill, D. L., et al. (1999). Response surface models combining linear and Euler aerodynamics for supersonic transport design. Journal of Aircraft, 36(1), 75–86.
25. Madsen, J. I., Shyy, W., & Haftka, R. T. (2000). Response surface techniques for diffuser shape optimization. AIAA Journal, 38(9), 1512–1518.
26. Welch, W. J., et al. (1992). Screening, predicting, and computer experiments. Technometrics, 34(1), 15–25.
27. Hu, Z., & Du, X. (2015). Mixed efficient global optimization for time-dependent reliability analysis. Journal of Mechanical Design, 137(5), 051401.
28. Wang, Z., & Chen, W. (2016). Time-variant reliability assessment through equivalent stochastic process transformation. Reliability Engineering & System Safety, 152, 166–175.
29. Li, J., et al. (2019). Developing an instantaneous response surface method t-IRS for time-dependent reliability analysis. Acta Mechanica Solida Sinica, 32, 446–462.
30. Jiang, C., et al. (2020). Real-time estimation error-guided active learning Kriging method for time-dependent reliability analysis. Applied Mathematical Modelling, 77, 82–98.
31. Hu, W., et al. (2023). Surrogate-based time-dependent reliability analysis for a digital twin. Journal of Mechanical Design, 1–36.
32. Bichon, B. J., et al. (2008). Efficient global reliability analysis for nonlinear implicit performance functions. AIAA Journal, 46(10), 2459–2468.
Chapter 7
Reliability-Based Design Optimization
Nomenclature

c - Constant number
C - Copula function
d - Design variable
d(k) - Design vector of the k-th iteration
dk - Searching direction
fx(x) - Joint probability density function (JPDF) of all random inputs
fX(xi, μi) - Marginal PDF corresponding to the i-th random variable Xi
FG(g) - Statistical description of G(x)
g* - Target probability metric
gj(x) - Inequality constraints
G(x) - System performance function
hi(x) - Equality constraints
m1 - Number of equality constraints
m2 - Number of inequality constraints
m(y) - Value function
Mf - Number of failed samples
nc - Number of probabilistic constraints
nd - Number of design variables
nr - Total number of random variables and parameters
PFjTar - Target probability of failure
r - Nonlinear indicator
s - Coordinate offset coefficient
u - Marginal CDF for Xi
v - Marginal CDF for Xj
Xrp - Random parameters of the input
x*(k-1) - MPP of the (k-1)-th iteration
x(m) - m-th realization of X
Z(Y) - Approximation function
βs - Reliability index
εd - Constant of the design tightness criterion
Ψ - Vector of distribution parameters
μi - Mean of the i-th random variable
s(1)μi(x; μ) - First-order score function for μi
μsf - Mean value of the score function
ΩF - Failure set
θ - Correlation coefficient between Xi and Xj

7.1 Basic Concept
The computational power of computers has been greatly enhanced in the last few decades, which has decreased the computational time of large-scale simulation techniques (e.g., finite element analysis (FEA) and computational fluid dynamics (CFD)) for complex systems. By utilizing these large-scale simulation techniques, the performance of a complex system (e.g., stress, strain, thermal field) can be analyzed, which further makes it possible to find an optimal design by simulating different inputs. The process of obtaining an optimal design is known as design optimization. Optimization seeks an optimal design, achieved by attaining higher performance or lower cost (i.e., the objective function) while satisfying the performance requirements (i.e., constraints) by changing the design variables. Typical objective functions include minimizing weight, improving performance, and increasing production. Typical constraints include satisfying the maximum allowable stress and deflection/strain and remaining within the feasible design domain. The basic paradigm in design optimization is to find a set of design variables that optimizes an objective function while satisfying the constraints, as shown in Eq. (7.1):

$$ \begin{aligned} \text{minimize} \quad & \mathrm{Cost}(\mathbf{x}) \\ \text{subject to} \quad & h_i(\mathbf{x}) = 0, \; i = 1, \ldots, m_1 \\ & g_j(\mathbf{x}) \le 0, \; j = 1, \ldots, m_2 \\ & \mathbf{x} \in X \subset \mathbb{R}^{n_r} \end{aligned} \tag{7.1} $$

where x is the design variable vector, Cost is one type of objective function, hi(x) are the equality constraints, gj(x) are the inequality constraints, nr is the total number of random variables and parameters, and m1 and m2 are the numbers of equality and inequality constraints, respectively. In classical design optimization, the design variables are assumed to be deterministic [6]; such problems are defined as deterministic design optimization. Existing design optimization methods can be divided into two groups: sensitivity-based methods and non-sensitivity-based methods.
For sensitivity-based optimization methods, gradient information of the objective function with respect to the design variables is used to find an optimal design. However, gradient information may not be available for optimization problems that involve complex simulations. To overcome this shortcoming, non-sensitivity-based methods are utilized for design optimization. The advantages of non-sensitivity-based methods are that (1) they require no gradients and can therefore be applied to high-dimensional, multi-physics problems and (2) they have better global search ability than sensitivity-based methods. However, such methods can be computationally inefficient: because they do not exploit gradient information while exploring the same design space, they typically require many more function evaluations. Additionally, uncertainties may lead to large variations in the performance characteristics of a system. In the real world, various uncertainties inevitably exist and need to be considered for the safety and reliability of the design optimization process; otherwise, the result of a deterministic optimization problem may be associated with a high probability of failure. To connect actual engineering problems with uncertainties, reliability-based design optimization (RBDO) has been proposed to obtain designs characterized by a low probability of failure [8]. RBDO tries to find the optimum design by fine-tuning the design variables while satisfying probabilistic constraints at a specified level of reliability or failure probability. In the RBDO process, the uncertain variables and the failure modes are first characterized using probability theory, as described in Chap. 1. Then, in an RBDO formulation, the critical failure modes of deterministic optimization are replaced by failure modes involving uncertain variables, which can be analyzed by the reliability analysis methods described in Chaps. 5 and 6. In the optimization process, RBDO is traditionally treated as a nested optimization problem (i.e., a double-loop optimization problem), which contains deterministic optimization as well as reliability analysis [4]. However, such methods may be computationally intensive when the number of design variables or random variables increases, and it may even be impossible to obtain an accurate result when the design parameters obey a complex joint probability density function. To tackle these issues, researchers have proposed most probable point (MPP)-based and sampling-based RBDO methods from the perspectives of probability resolution and the surrogate model, respectively. The main goal of this chapter is to introduce reliability-based design optimization methods, including MPP-based RBDO and sampling-based RBDO.
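As a point of reference for Eq. (7.1), the following is a minimal deterministic design optimization sketch in Python with SciPy; the two-variable cost function and the constraints are hypothetical stand-ins invented for illustration, not a problem from this book.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical instance of Eq. (7.1):
# minimize Cost(x) s.t. h(x) = 0, g(x) <= 0, bounds on x.
cost = lambda x: x[0] ** 2 + 2.0 * x[1] ** 2       # objective
h = lambda x: x[0] + x[1] - 3.0                    # equality constraint h(x) = 0
g = lambda x: 1.0 - x[0] * x[1]                    # inequality constraint g(x) <= 0

res = minimize(
    cost,
    x0=np.array([1.0, 1.0]),
    method="SLSQP",
    bounds=[(0.0, 5.0), (0.0, 5.0)],
    constraints=[
        {"type": "eq", "fun": h},
        {"type": "ineq", "fun": lambda x: -g(x)},  # SciPy expects fun(x) >= 0
    ],
)
print(res.x, res.fun)   # deterministic optimum; no uncertainty is involved
```

Every quantity here is treated as exact; the rest of this chapter is about what changes once x and the constraint responses are random.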
7.2 Problem Statement and Formulation
The mathematical formulation of a general component-level RBDO problem is expressed as

$$ \begin{aligned} \text{minimize} \quad & \mathrm{Cost}(\mathbf{d}) \\ \text{subject to} \quad & P\left[G_j(\mathbf{X}) > 0\right] \le P_{F_j}^{\mathrm{Tar}}, \; j = 1, \ldots, nc \\ & \mathbf{d}^L \le \mathbf{d} \le \mathbf{d}^U, \; \mathbf{d} \in \mathbb{R}^{nd} \text{ and } \mathbf{X} \in \mathbb{R}^{nr} \end{aligned} \tag{7.2} $$
where d = μ(Xrv) is the design variable vector, i.e., the mean of the nd-dimensional random variable vector Xrv = {X1, X2, . . ., Xnd}T; X = {Xrv, Xrp}T, where Xrp represents the random parameters of the random input X; PFjTar is the target probability of failure for the j-th constraint; and nc and nd are the numbers of probabilistic constraints and design variables, respectively. The system performance criteria are described by system performance functions. Consider a system performance function G(x), where the system fails if G(x) > 0. The statistical description of G(x) is characterized by its CDF FG(g) as

$$ F_G(g) = P(G(\mathbf{x}) > g) = \int_{G(\mathbf{x}) > g} \cdots \int f_{\mathbf{x}}(\mathbf{x})\, dx_1 \cdots dx_n, \quad \mathbf{x}^L \le \mathbf{x} \le \mathbf{x}^U \tag{7.3} $$

where fx(x) is the joint probability density function (JPDF) of all random inputs and FG(g) is the cumulative distribution function of G. Reliability analysis at both the component and system levels involves calculating the probability of failure, as described in Chaps. 5 and 6. Statistical sampling methods (e.g., MCS) can be utilized to estimate the true probability from computer simulations. However, statistical sampling methods may be computationally inefficient or even prohibitively expensive. To achieve a fast and accurate estimate of the probability of failure, a surrogate model is adopted in RBDO, as introduced in Chap. 3.
7.3 Most Probable Point-Based RBDO

7.3.1 Reliability Index Approach and Performance Measure Approach
1. Mathematical Model of RIA and PMA
The reliability index approach (RIA) and the performance measure approach (PMA) are two traditional RBDO methods based on the most probable point (MPP) [13]. The RIA and PMA are equivalent formulations of the probabilistic constraints; however, the two methods evaluate the probabilistic constraints from different perspectives, and studies show that the PMA has better robustness and efficiency than the RIA. As mentioned in Chap. 5, the probability of failure can be expressed as
$$ P_{F_j}^{\mathrm{Tar}} = \Phi(-\beta_t) \tag{7.4} $$

Hence, the probabilistic constraint of RBDO can be represented as

$$ F_{G_j}(0) \le \Phi(-\beta_t) \tag{7.5} $$

Equation (7.5) can be expressed in two ways using the following inverse transformations, respectively:

$$ \beta_s = -\Phi^{-1}\left(F_G(0)\right) \ge \beta_t \tag{7.6} $$

$$ g^* = F_G^{-1}\left(\Phi(-\beta_t)\right) \le 0 \tag{7.7} $$
where βs is the reliability index and g* is the target probability metric. Hence, the RIA replaces the probabilistic constraint with Eq. (7.6), which can be expressed as

$$ \begin{aligned} \text{minimize} \quad & \mathrm{Cost}(\mathbf{d}) \\ \text{subject to} \quad & \beta_{s,j} \ge \beta_{t,j}, \; j = 1, \ldots, nc \\ & \mathbf{d}^L \le \mathbf{d} \le \mathbf{d}^U, \; \mathbf{d} \in \mathbb{R}^{nd} \text{ and } \mathbf{X} \in \mathbb{R}^{nr} \end{aligned} \tag{7.8} $$

When the probabilistic constraint is instead replaced with Eq. (7.7), the PMA is obtained, expressed as

$$ \begin{aligned} \text{minimize} \quad & \mathrm{Cost}(\mathbf{d}) \\ \text{subject to} \quad & g_j^* \le 0, \; j = 1, \ldots, nc \\ & \mathbf{d}^L \le \mathbf{d} \le \mathbf{d}^U, \; \mathbf{d} \in \mathbb{R}^{nd} \text{ and } \mathbf{X} \in \mathbb{R}^{nr} \end{aligned} \tag{7.9} $$
where the meaning of the other symbols in Eqs. (7.8) and (7.9) is the same as in the RBDO model of Eq. (7.1).

2. Reliability Analysis Process and Inverse Reliability Analysis Process
To use the RIA and PMA for solving RBDO problems, the reliability index βG and the target probability metric g* must be determined, which can be achieved by reliability analysis and inverse reliability analysis at the design point. They are calculated as follows:

$$ \beta_G = -\Phi^{-1}\left(F_G(0)\right) = -\Phi^{-1}\left(\int_{G(\mathbf{x}) > 0} \cdots \int f_{\mathbf{x}}(\mathbf{x})\, dx_1 \cdots dx_n\right) \tag{7.10} $$

$$ g^* = F_G^{-1}\left(\Phi(-\beta_t)\right) = F_G^{-1}\left(\int_{G(\mathbf{x}) > g} \cdots \int f_{\mathbf{x}}(\mathbf{x})\, dx_1 \cdots dx_n\right) \tag{7.11} $$
Obviously, both methods require evaluating multiple integrals, which are difficult to calculate directly with accurate results in practical engineering applications. To address this issue, approximate probability integration methods have been developed to provide effective solutions [1, 5, 14, 17], such as the first-order reliability method (FORM) and the second-order reliability method (SORM) introduced in Chap. 5. Among these methods, FORM is widely accepted for RBDO applications because of its accuracy and lower computational burden compared with SORM. Since the reliability index and the performance measure are obtained by the two different inverse transformations in Eqs. (7.10) and (7.11), the calculation processes for the RIA and PMA are called the reliability analysis process and the inverse reliability analysis process, respectively. Next, the reliability analysis and inverse reliability analysis processes based on FORM, as applied to the RIA and PMA, are introduced. It has been shown that the RIA produces singularities for certain problems, whereas the PMA has better robustness. Moreover, the computational efficiency of both methods depends on the activity of the probabilistic constraints. For the optimal solution of the RBDO model, the robustness and efficiency of the algorithm are important; hence, researchers have proposed numerical methods for the RIA and PMA to improve the efficiency and stability of the probabilistic constraint evaluation in RBDO, which are introduced in the following sections.
7.3.2 Numerical Reliability Analysis Method Based on RIA
1. Hasofer-Lind Rackwitz-Fiessler Method
The Hasofer-Lind Rackwitz-Fiessler (HL-RF) method is similar to FORM: it performs a first-order Taylor expansion to linearize the constraint at the design point [18], as introduced in Chap. 5. In most cases, the method converges quickly. However, it has been shown [11] that, under certain conditions, this method may converge slowly or even diverge when the limit state function is complex and highly nonlinear. Therefore, an improved version, called the improved HL-RF method, has been proposed to enhance its robustness; it is introduced next.

2. Improved HL-RF Method
Compared to the HL-RF method, the improved HL-RF algorithm introduces a value (merit) function m(y) to evaluate the convergence of the iteration sequence:

$$ m(\mathbf{y}) = \frac{1}{2}\left\| \mathbf{y} - \frac{\nabla G(\mathbf{y})\,\mathbf{y}}{\left\|\nabla G(\mathbf{y})\right\|^{2}}\,\nabla G(\mathbf{y}) \right\|^{2} + \frac{1}{2}\,c\,G(\mathbf{y})^{2} \tag{7.12} $$
where c is a constant. During the optimization process, the next point is selected by a line search along the direction

$$ \mathbf{d}_k = \frac{1}{\left|\nabla G(\mathbf{y}_k)\right|^{2}}\left[\nabla G(\mathbf{y}_k)\,\mathbf{y}_k - G(\mathbf{y}_k)\right]\nabla G(\mathbf{y}_k)^{T} - \mathbf{y}_k \tag{7.13} $$
The improved HL-RF method searches along the direction dk until the value function m(y) has decreased sufficiently. Because dk is not always a descent direction of m(y), global convergence of the improved HL-RF method is still not guaranteed; nevertheless, the method improves on the robustness of the original HL-RF method.
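To make the basic iteration concrete, the following is a minimal Python sketch of the original HL-RF update for a hypothetical limit state in standard normal U-space (the improved variant would add the line search on m(y) described above; the performance function here is invented for illustration).

```python
import numpy as np

def hlrf(G, grad_G, u0, tol=1e-8, max_iter=100):
    """Basic HL-RF iteration for the MPP of G(u) = 0 in U-space.
    Each step linearizes G at u_k and projects the origin onto the
    linearized surface: u_{k+1} = [grad.u - G(u)] grad / ||grad||^2."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        g, dG = G(u), grad_G(u)
        u_new = (dG @ u - g) * dG / (dG @ dG)
        if np.linalg.norm(u_new - u) < tol:
            break
        u = u_new
    return u, np.linalg.norm(u)          # MPP and reliability index

# Hypothetical, mildly nonlinear limit state G(u) = 0
G = lambda u: 3.0 + 0.05 * (u[0] - u[1]) ** 2 - (u[0] + u[1]) / np.sqrt(2.0)
grad_G = lambda u: np.array([0.1 * (u[0] - u[1]) - 1.0 / np.sqrt(2.0),
                             -0.1 * (u[0] - u[1]) - 1.0 / np.sqrt(2.0)])

u_mpp, beta = hlrf(G, grad_G, np.zeros(2))
print("MPP:", u_mpp, "reliability index:", beta)   # beta = 3 for this example
```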
3. Two-Point Approximation Method
The two-point approximation method introduces adaptive intervening variables into the HL-RF framework and constructs an approximation function Z(Y) that reproduces the exact function values and gradients:

$$ Z(\mathbf{Y}) = g(\mathbf{Y}_k) + \frac{1}{r}\sum_{i=1}^{n} y_{i,k}^{1-r}\,\frac{\partial g(\mathbf{Y}_k)}{\partial u_i}\left[\left(u_i + \frac{x_i}{\sigma_i}\right)^{r} - \left(u_{i,k} + \frac{x_i s}{\sigma_i}\right)^{r}\right] \tag{7.14} $$
where r is the nonlinear indicator, which controls the nonlinearity of the generalized function, and s is the coordinate offset coefficient. The nonlinear indicator and the coordinate offset coefficient can be determined from the following equation:

$$ g(\mathbf{Y}_{k-1}) - g(\mathbf{Y}_k) + \frac{1}{r}\sum_{i=1}^{n} y_{i,k}^{(1-r)}\,\frac{\partial g(\mathbf{Y}_k)}{\partial u_i}\left[\left(u_i + \frac{x_i}{\sigma_i}\right)^{r} - \left(u_{i,k} + \frac{x_i s}{\sigma_i}\right)^{r}\right] = 0 \tag{7.15} $$
Since the function values and gradients used in the approximation are exact, the method is suitable for highly nonlinear and implicit performance function problems that require large-scale FEM structural analyses.
7.3.3 Numerical Reliability Analysis Method Based on PMA
1. Advanced Mean Value (AMV) Method
The most commonly used numerical methods for the PMA are the advanced mean value (AMV) method [16] and the conjugate mean value (CMV) method [22]. The AMV method is simple and efficient for convex performance functions, but for concave performance functions it exhibits numerical defects such as slow convergence and even divergence; the CMV method, in contrast, always converges but is inefficient for convex functions. Therefore, based on the characteristics of the AMV and CMV methods applied to the PMA, once the type of performance function (convex or concave) is determined, adaptively selecting between the AMV and CMV methods allows the probabilistic constraints to be evaluated more effectively; this is called the hybrid mean value (HMV) method [19, 20]. The AMV method is widely used by researchers due to its simple formulation and fast solution. The AMV method starts from the mean value (MV) method, whose solution is defined as

$$ \mathbf{u}_{MV}^{*} = \beta_t\, \mathbf{n}(0) \tag{7.16} $$

$$ \mathbf{n}(0) = -\frac{\nabla_U G(0)}{\left\|\nabla_U G(0)\right\|} = -\frac{\nabla_X G(\boldsymbol{\mu})}{\left\|\nabla_X G(\boldsymbol{\mu})\right\|} \tag{7.17} $$
where n(0) represents the direction in which the performance function G decreases most rapidly, i.e., the normalized steepest descent direction. The AMV method iteratively updates this steepest descent direction vector. After using the MV method to obtain the initial value u*MV, the AMV method can be expressed as

$$ \mathbf{u}_{AMV}^{(1)} = \mathbf{u}_{MV}^{*}, \qquad \mathbf{u}_{AMV}^{(k+1)} = \beta_t\, \mathbf{n}\left(\mathbf{u}_{AMV}^{(k)}\right) \tag{7.18} $$

where

$$ \mathbf{n}\left(\mathbf{u}_{AMV}^{(k)}\right) = -\frac{\nabla_U G\left(\mathbf{u}_{AMV}^{(k)}\right)}{\left\|\nabla_U G\left(\mathbf{u}_{AMV}^{(k)}\right)\right\|} \tag{7.19} $$
Existing studies show that the AMV method is fast and efficient for convex performance functions but converges slowly or even diverges for concave performance functions, as can be seen in Figs. 7.1 and 7.2. The convexity or concavity of the performance function is defined with respect to the U-space origin in the neighborhood of the MPP. A convex performance function G1(X) and a concave performance function G2(X) are taken as examples:

$$ G_1(\mathbf{X}) = -e^{(X_1 - 7)} - X_2 + 10; \quad X_i \sim N(6, 0.8),\; i = 1, 2; \quad \beta_t = 3 $$

$$ G_2(\mathbf{X}) = \frac{e^{(0.8X_1 - 1.2)} + e^{(0.7X_2 - 0.6)} - 5}{10}; \quad X_i \sim N(5, 0.8),\; i = 1, 2; \quad \beta_t = 3 $$
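As an illustration, the following is a minimal Python sketch of the AMV iteration of Eqs. (7.16)-(7.19) applied to the convex example G1, assuming independent normals with the transformation X = μ + σu and treating 0.8 as the standard deviation in N(6, 0.8).

```python
import numpy as np

def amv_pma(grad_x, mu, sigma, beta_t, tol=1e-10, max_iter=500):
    """AMV iteration of Eqs. (7.16)-(7.19) for the PMA inverse MPP search,
    for independent normals with the transformation X = mu + sigma * u."""
    u = np.zeros_like(mu)
    for _ in range(max_iter):
        dG_u = sigma * grad_x(mu + sigma * u)   # chain rule: grad_U = sigma*grad_X
        n = -dG_u / np.linalg.norm(dG_u)        # normalized steepest descent
        u_new = beta_t * n                      # Eq. (7.18)
        if np.linalg.norm(u_new - u) < tol:
            return u_new
        u = u_new
    return u

# Convex example G1(X) = -exp(X1 - 7) - X2 + 10 with beta_t = 3
G1 = lambda x: -np.exp(x[0] - 7.0) - x[1] + 10.0
grad_G1 = lambda x: np.array([-np.exp(x[0] - 7.0), -1.0])
mu, sigma = np.full(2, 6.0), np.full(2, 0.8)

u_mpp = amv_pma(grad_G1, mu, sigma, 3.0)
x_mpp = mu + sigma * u_mpp
print("MPP:", x_mpp, "performance measure G1(x_mpp):", G1(x_mpp))
```

Running the same loop on the concave example G2 can exhibit the oscillation shown in Fig. 7.3.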
Fig. 7.1 The iterative process of AMV for the solution of convex performance function adapted from Youn et al. [21]
Because the AMV update point follows the steepest descent direction of the performance function, the method converges quickly when that direction is stable; for a concave performance function, however, the iterates may oscillate, as shown in Fig. 7.3.

2. Conjugate Mean Value (CMV) Method
In the AMV method, the lack of sufficient update information during the reliability analysis makes it unstable and inefficient. To address this shortcoming, the conjugate mean value (CMV) method considers the current as well as the previous MPP information, so that the new search direction points along the diagonal of three consecutive steepest descent directions. The CMV method is expressed as
Fig. 7.2 The iterative process of AMV for the solution of a concave performance function, adapted from Youn et al. [21]

Fig. 7.3 Oscillation of the AMV method's iterates, adapted from Hao et al. [7]
Fig. 7.4 Iterative process of CMV for solving convex performance functions, adapted from Youn et al. [21]

$$ \mathbf{u}_{CMV}^{(0)} = 0, \quad \mathbf{u}_{CMV}^{(1)} = \mathbf{u}_{AMV}^{(1)}, \quad \mathbf{u}_{CMV}^{(2)} = \mathbf{u}_{AMV}^{(2)} $$

$$ \mathbf{u}_{CMV}^{(k+1)} = \beta_t\, \frac{\mathbf{n}\left(\mathbf{u}_{CMV}^{(k)}\right) + \mathbf{n}\left(\mathbf{u}_{CMV}^{(k-1)}\right) + \mathbf{n}\left(\mathbf{u}_{CMV}^{(k-2)}\right)}{\left\|\mathbf{n}\left(\mathbf{u}_{CMV}^{(k)}\right) + \mathbf{n}\left(\mathbf{u}_{CMV}^{(k-1)}\right) + \mathbf{n}\left(\mathbf{u}_{CMV}^{(k-2)}\right)\right\|}, \quad k \ge 2 \tag{7.20} $$

where

$$ \mathbf{n}\left(\mathbf{u}_{CMV}^{(k)}\right) = -\frac{\nabla_U G\left(\mathbf{u}_{CMV}^{(k)}\right)}{\left\|\nabla_U G\left(\mathbf{u}_{CMV}^{(k)}\right)\right\|} $$
Fig. 7.5 Iterative process of CMV for solving concave performance functions adapted from Youn et al. [21]
Figure 7.6 explains why the CMV method is suitable for finding the MPP of concave performance functions. In Fig. 7.6, n(u0) is the normalized steepest descent direction of the performance function at u0. Because the CMV method also incorporates the previous MPP information into the new search direction, even if the MPPs of consecutive iterations oscillate, the next update point is restricted to the oscillation interval, so the iterates gradually converge to the final MPP. The AMV method is efficient for convex performance functions, and the CMV method is stable for concave performance functions. The applicability of the PMA can be greatly improved if the AMV and CMV methods are selected adaptively according to the type of performance function around the current MPP iterate; this leads to the hybrid mean value (HMV) method, which is introduced in the next section.

3. Hybrid Mean Value (HMV) Method
The core of the HMV method is to adaptively select the AMV method or the CMV method for the next MPP update by means of a discriminant of the performance function type. The HMV numerical algorithm effectively integrates the AMV and CMV methods, making it more robust and efficient. The discriminant used in the HMV method is described below.
Fig. 7.6 Convergence process of CMV method for concave performance function adapted from Hao et al. [7]
The HMV method defines a discriminant of the function type based on the steepest descent directions at three consecutive iterations:

$$ \xi^{(k+1)} = \left(\mathbf{n}^{(k+1)} - \mathbf{n}^{(k)}\right) \cdot \left(\mathbf{n}^{(k)} - \mathbf{n}^{(k-1)}\right) \tag{7.21} $$
When ξ(k+1) is larger than 0, the performance function is convex at u(k+1)HMV; otherwise, it is concave. Thus, once the type of performance function is identified, the next update point can be computed by adaptively selecting the appropriate method. The HMV procedure is as follows.
Step 1: Set the iteration counter and the convergence parameter, and use the MV method to calculate the normalized steepest descent direction n(u(0)HMV) of the performance function at the initial point in U-space.
Step 2: If the performance function is convex or k < 3, calculate the next update point using the AMV method. Otherwise, if the performance function is concave and k ≥ 3, use the CMV method to compute the next update point.
Step 3: At the new MPP u(k+1)HMV, calculate G(u(k+1)HMV) and the reliability index β(k+1). If the convergence criterion is satisfied, stop the computation and take the current point as the optimal MPP; otherwise, execute the next step.
Step 4: Compute ∇U G(u(k+1)HMV) and the performance function discriminant ξ(k+1), set k = k + 1, and return to Step 2.

For highly nonlinear performance functions, the efficiency of the HMV method is still unsatisfactory, and the method may not even converge. Therefore, the HMV method has been improved further, leading to the enhanced hybrid mean value (HMV+) method, which is introduced in the next section.
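Before moving on, a schematic Python fragment of the selection logic in Steps 2 and 4 is sketched below; n_hist is assumed to hold the normalized steepest descent directions n(u(k)) computed so far, and the surrounding loop (gradient evaluation, convergence test) is omitted.

```python
import numpy as np

def hmv_step(n_hist, beta_t):
    """One HMV update: select AMV or CMV from the convexity discriminant
    of Eq. (7.21), given the normalized steepest descent directions
    n(u^(1)), ..., n(u^(k)) collected so far in n_hist."""
    if len(n_hist) < 3:
        return beta_t * n_hist[-1]                  # AMV update while k < 3
    n1, n2, n3 = n_hist[-3], n_hist[-2], n_hist[-1]
    xi = np.dot(n3 - n2, n2 - n1)                   # Eq. (7.21)
    if xi > 0:                                      # convex type: AMV update
        return beta_t * n3
    s = n1 + n2 + n3                                # concave type: CMV update
    return beta_t * s / np.linalg.norm(s)
```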
4. Enhanced HMV (HMV+) Method
Building on the HMV method, the HMV+ method introduces an arc interpolation technique to obtain a more accurate output probability distribution FG(g). When the probability level βi > 0, the value of the performance function should decrease at the next search point. When the value of the performance function instead increases at the next search point, arc interpolation is applied, using the performance function and its sensitivity values at the two search points u(k)HMV+ and u(k-1)HMV+. The performance function is interpolated along the arc region between these two search points to obtain the next update point u(k+1)HMV+.

Step 1: Set the iteration counter and the convergence parameter as well as the probability level, and let u(0)HMV+ = 0;
Step 2: Compute the performance function g uHMV þ ∇g
ðk Þ uHMVþ
and its sensitivity
;
Step 3: Check whether the Karush-Kuhn-Tucker condition below is satisfied; if it is, stop and take the current point as the final MPP.

$$ \left| \operatorname{sgn}(\beta_i)\, \frac{\mathbf{u}_{HMV+}^{(k)}}{\left\|\mathbf{u}_{HMV+}^{(k)}\right\|} \cdot \mathbf{n}_{HMV+}^{(k)} - 1 \right| \le \varepsilon \tag{7.22} $$

where k ≥ 2, n is the normalized steepest ascent direction of GU(u), and sgn(βi) is the sign function, whose value is -1 when βi < 0 and 1 otherwise.
ðk - 1Þ
Step 4: If k ≥ 2 and sgnðβi Þ g uHMV þ - g uHMV þ
< 0, the performance function
is interpolated along the arc region between the two search points and obtain a ðkþ1Þ new search point uHMVþ by maximizing the approximate performance function. ðkþ1Þ
Otherwise, the HMV method is used to obtain a new search point uHMV þ and set k = k + 1 and return to Step 2. The HMV+ method can achieve significant results, and the numerical strategy actually plays a key role in the computational efficiency. The researchers further considered the numerical strategy to find numerical methods that make the PMA method more efficient and robust. Thus, the enriched PMA (PMA+) method was proposed, which will be introduced in the next section. 5. Enriched PMA (PMA+) Method The numerical computation scheme is also decisive for the efficiency of the computational solution. The PMA+ (enriched PMA) method integrates four major improvements aimed at developing new efficient and robust methods for probabilistic constraint evaluation of RBDO without sacrificing numerical accuracy and stability. The four major improvements involved in PMA+ are summarized as follows:
Fig. 7.7 The relative position of the DDO optimal solution to the RBDO optimal solution
(1) The HMV+ method is applied in the reliability analysis. HMV+ resolves the shortcomings of the HMV method and is the better numerical method from the combined viewpoint of solution efficiency and stability; therefore, the HMV+ method is used for the reliability analysis.
(2) RBDO is launched from the DDO result. As shown in Fig. 7.7, the optimal solution of RBDO usually lies near the optimal solution of deterministic design optimization (DDO). Hence, DDO is first executed from the initial design point, and RBDO then starts from the DDO optimum, which requires fewer iterations and gives higher numerical efficiency than executing RBDO directly from the initial design point.
(3) Probabilistic feasibility check. Reliability analysis of the current design point is required continuously during the RBDO iterations. The PMA+ method therefore introduces the concept of a probabilistic feasibility check, which judges probabilistic feasibility without completing the full reliability analysis process while maintaining numerical accuracy, thereby improving computational efficiency. As shown in Fig. 7.8, a small positive number εf is introduced. If the probabilistic performance satisfies -gi < -εf, the i-th probabilistic constraint is considered feasible at the current design point; if -εf ≤ -gi ≤ 0, the i-th probabilistic constraint is considered active; and if -gi > 0, the i-th probabilistic constraint is considered to violate the reliability requirement.
Fig. 7.8 Feasibility of probabilistic constraints in RBDO adapted from Youn et al. [19, 20]
(4) Rapid reliability analysis under design proximity conditions. The computational cost of RBDO mainly comes from the reliability analysis of the current design point. When the design points of successive iterations are close to each other, the reliability analysis can reuse the information obtained in the previous design iteration, which effectively reduces the computational burden; the numerical efficiency is thus improved when the iteration is near the end. The proximity of the design points in X-space and of the MPPs in U-space between successive iterations is determined using the following equations:

$$ \Delta \mathbf{d}^{(k)} = \left\| \Sigma(\mathbf{X})^{-1}\left(\mathbf{d}^{(k)} - \mathbf{d}^{(k-1)}\right) \right\| \le \varepsilon_d \tag{7.23} $$

$$ \Delta \mathbf{x}^{*(k-1)} = \left\| \mathbf{x}^{*(k-1)} - \mathbf{x}^{*(k-2)} \right\| \le \varepsilon_d \tag{7.24} $$
where d(k) is the design vector of the k-th iteration, x*(k-1) is the MPP of the (k-1)-th iteration, εd is the constant of the design tightness criterion, and the diagonal components of the covariance matrix Σ(X) of the random vector X corresponding to the design vector d are defined as

$$ \Sigma_i = \sigma_{X_i}^{2} = \int_{-\infty}^{+\infty} (x_i - \mu_i)^2 f_{X_i}(x_i)\, dx_i \tag{7.25} $$

where √Σi = σXi and Σ = diag(Σi).
Once the MPPs updated in two consecutive iterations are determined to be close enough, the reliability analysis can be performed with the HMV+ method starting from the previous MPP, following the same principle as starting RBDO from the DDO optimum instead of the initial design point. Performing the reliability analysis close to the optimal MPP reduces the number of reliability evaluations and thus increases the numerical efficiency.
7.3.4 Full Loop of MPP-Based RBDO
Figure 7.9 describes the complete numerical procedure of RBDO based on PMA+. The left part of Fig. 7.9 is the sub-optimization loop of the reliability analysis, which evaluates a set of potential probabilistic constraints and their sensitivities. The sensitivity is obtained by differentiating the performance function. For explicit performance functions, the sensitivity can be obtained directly through differentiation rules. For implicit performance functions, the derivative of the performance function is obtained by a difference method, such as forward, backward, or central finite differences. Taking the commonly used central finite difference method as an example, the sensitivity of the performance function is calculated as follows:
Fig. 7.9 Flow chart of PMA+-based RBDO adapted from Youn et al. [19, 20]
$$ \frac{\partial G_i(\mathbf{x})}{\partial x} \approx \frac{G_i(\mathbf{x} + \Delta) - G_i(\mathbf{x} - \Delta)}{2\Delta} \tag{7.26} $$
where Gi(·) represents the i-th performance function and Δ represents the step size of the difference method. The right part of Fig. 7.9 is the main design optimization loop, in which RBDO performs the design optimization. In the main optimization loop, a set of potential probabilistic constraints is identified by feasibility checks prior to the sub-optimization loop, reducing the computational burden of the whole RBDO process. In the sub-optimization loop, probabilistic feasibility is checked to determine whether the rapid reliability analysis should be used. Sequential quadratic programming is used for the design optimization, and the HMV+ method is used for the reliability analysis. The other optimization procedures and methods are described in detail in Sect. 7.5.
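A minimal Python sketch of the central-difference sensitivity of Eq. (7.26) is given below; the explicit performance function merely stands in for an implicit (e.g., FEA-based) response and is purely illustrative.

```python
import numpy as np

def central_diff_grad(G, x, delta=1e-6):
    """Sensitivity of a performance function via central finite
    differences, Eq. (7.26)."""
    x = np.asarray(x, dtype=float)
    grad = np.zeros_like(x)
    for i in range(x.size):
        step = np.zeros_like(x)
        step[i] = delta
        grad[i] = (G(x + step) - G(x - step)) / (2.0 * delta)
    return grad

# Explicit stand-in for an implicit solver response
G = lambda x: x[0] ** 2 * x[1] / 20.0 - 1.0
print(central_diff_grad(G, [5.0, 5.0]))   # analytic gradient is [2.5, 1.25]
```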
7.4 Sampling-Based RBDO

7.4.1 Monte Carlo Simulation
The MPP-based RBDO methods were proposed to solve, from the perspective of probability resolution, the multiple-integration problem involved in calculating the probabilistic constraints of the RBDO model, and robust and efficient numerical algorithms have been introduced for them. These numerical algorithms share the need for the sensitivity of the performance function, which itself carries many uncertainties in practical engineering applications. An RBDO model whose performance function is not explicitly available is called a "black-box model." Monte Carlo simulation (MCS) captures the frequency of the target event by means of "experimentation," as introduced in Chap. 5. Combined with MCS, the sampling-based mathematical model of RBDO can be rewritten as

$$ \begin{aligned} \text{minimize} \quad & \mathrm{Cost}(\mathbf{d}) \\ \text{subject to} \quad & \frac{1}{n_{MC}} \sum_{k=1}^{n_{MC}} I_{\Omega_{F_j}}\left(\mathbf{x}_k\right) \le P_{F_j}^{\mathrm{Tar}}, \; j = 1, \ldots, nc \\ & \mathbf{d}^L \le \mathbf{d} \le \mathbf{d}^U, \; \mathbf{d} \in \mathbb{R}^{nd} \text{ and } \mathbf{X} \in \mathbb{R}^{nr} \end{aligned} \tag{7.27} $$

The MCS method does not require gradient information of the performance function, but it does require the response of the performance function at the sample points. If the response is easy to obtain, the computational requirements are negligible. However, in practical engineering applications, the response of the performance function must be evaluated by complex simulations, such as FEM, and such evaluations can be extremely time-consuming or even unrealistic. Therefore, the MCS method cannot yet be used directly for solving practical engineering RBDO problems.
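The indicator-average constraint of Eq. (7.27) is only a few lines of Python; in the hedged sketch below, the performance function and the design point are hypothetical, and the variables are assumed to be independent normals centered at the design.

```python
import numpy as np

rng = np.random.default_rng(1)

def prob_constraint(d, g, sigma, n_mc=100_000, pf_target=0.02275):
    """Evaluate one probabilistic constraint of Eq. (7.27) at design d.
    X ~ N(d, sigma^2) componentwise; failure is indicated by g(X) > 0."""
    X = d + sigma * rng.standard_normal((n_mc, d.size))
    pf = np.mean(g(X) > 0.0)           # indicator average = Pf estimate
    return pf, pf <= pf_target

# Hypothetical performance function (failure when g > 0)
g = lambda X: 1.0 - X[:, 0] ** 2 * X[:, 1] / 20.0
pf, ok = prob_constraint(np.array([5.0, 5.0]), g, sigma=0.3)
print(f"Pf = {pf:.4e}, constraint satisfied: {ok}")
```

Each call evaluates g at 100,000 samples, which is exactly the cost problem described above when g is an expensive simulation.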
A surrogate model can provide an approximation of the limit state boundary (i.e., G(x) = 0). Moreover, the computational effort of MCS based on a surrogate model is greatly reduced, and sufficiently accurate predictions of the performance function can be obtained at low computational cost. The sampling-based RBDO approach is therefore presented from the perspective of surrogate models in the next section.
7.4.2 Surrogate Model
There are many types of surrogate models. Among them, polynomial response surface modeling (PRSM) is the most widely used method because it is fast to establish, but it is limited in terms of global interpolation accuracy. This problem can be addressed by polynomial chaos expansion (PCE), which corresponds to a response surface on a specific basis; however, it is difficult to determine an appropriate DoE and number of polynomial terms, and cross-validation is also required for accuracy assessment. The support vector machine (SVM) has shown unique advantages in small-sample, nonlinear, and high-dimensional pattern recognition problems and is also widely used in surrogate model building. In addition to the common surrogate models mentioned above, Kriging has attracted considerable attention; Sacks et al. applied it to the approximation of computer experiments in 1989. Details of these surrogate models can be found in Chap. 3.
7.4.3 Stochastic Sensitivity Analysis Based on Surrogate Model
After an accurate surrogate model is established, the reliability analysis can be performed using MCS. An important step in design optimization is to obtain sensitivity derivatives, which can be used to study the effect of parameter modifications and to calculate search directions toward an optimum design. Sensitivity analysis addresses "how" and "how much" changes in the parameters of an optimization problem modify the optimal objective value and the point where the optimum is attained. On the one hand, the sensitivity of the probabilistic response can be obtained by the finite difference method (FDM). However, since the probabilistic response is obtained from MCS, many sample points are required to obtain an accurate sensitivity, and using the FDM for optimization may also lead to infeasible points. On the other hand, the probabilistic response sensitivity can be obtained from the constructed surrogate model; however, even if the surrogate model is very accurate, the sensitivity information it provides may be inaccurate. Therefore, Lee et al. proposed a sampling-based stochastic sensitivity analysis method that uses a score function
for the calculation of the probabilistic response sensitivity. The method neither requires a space transformation (X-space to U-space) nor introduces any approximation into the computational procedure. The probability of failure PF in Eq. (7.27) can be defined as

$$ P_F(\boldsymbol{\Psi}) = P\left[\mathbf{X} \in \Omega_F\right] = \int_{\mathbb{R}^{n_r}} I_{\Omega_F}(\mathbf{x})\, f_{\mathbf{X}}(\mathbf{x}; \boldsymbol{\Psi})\, d\mathbf{x} = E\left[I_{\Omega_F}(\mathbf{X})\right] \tag{7.28} $$
where Ψ is the vector of distribution parameters (here, the mean of X), ΩF is the failure set, and fX(x; Ψ) is the joint probability density function (PDF) of X. Taking the partial derivative of the probability of failure in Eq. (7.28) with respect to the i-th design variable μi yields

$$ \frac{\partial P_F(\boldsymbol{\psi})}{\partial \mu_i} = \frac{\partial}{\partial \mu_i} \int_{\mathbb{R}^{n_r}} I_{\Omega_F}(\mathbf{x})\, f_{\mathbf{X}}(\mathbf{x}; \boldsymbol{\mu})\, d\mathbf{x} \tag{7.29} $$
By utilizing Leibniz's rule of differentiation, the differential and integral operators can be interchanged, giving

$$ \frac{\partial P_F(\boldsymbol{\psi})}{\partial \mu_i} = \int_{\mathbb{R}^{n_r}} I_{\Omega_F}(\mathbf{x})\, \frac{\partial f_{\mathbf{X}}(\mathbf{x}; \boldsymbol{\mu})}{\partial \mu_i}\, d\mathbf{x} = \int_{\mathbb{R}^{n_r}} I_{\Omega_F}(\mathbf{x})\, \frac{\partial \ln f_{\mathbf{X}}(\mathbf{x}; \boldsymbol{\mu})}{\partial \mu_i}\, f_{\mathbf{X}}(\mathbf{x}; \boldsymbol{\mu})\, d\mathbf{x} = E\left[ I_{\Omega_F}(\mathbf{X})\, \frac{\partial \ln f_{\mathbf{X}}(\mathbf{X}; \boldsymbol{\mu})}{\partial \mu_i} \right] \tag{7.30} $$
Since IΩF(x) is not a function of μi, the sensitivity analysis only requires the partial derivative of the logarithm of the joint PDF in Eq. (7.30) with respect to μi. This derivative is known as the first-order score function for μi and is denoted as

$$ s_{\mu_i}^{(1)}(\mathbf{x}; \boldsymbol{\mu}) = \frac{\partial \ln f_{\mathbf{X}}(\mathbf{x}; \boldsymbol{\mu})}{\partial \mu_i} \tag{7.31} $$
Hence, to further derive the sensitivities of the probability of failure, the first-order score function of Eq. (7.31) is developed separately for independent input random variables and for correlated input random variables, as introduced next.

1. Independent Input Random Variables
Consider a random input X = {X1, . . ., Xnr}T whose components are statistically independent random variables. The joint PDF of X is then the product of its marginal PDFs:
$$ f_{\mathbf{X}}(\mathbf{x}; \boldsymbol{\mu}) = \prod_{i=1}^{n_r} f_{X_i}(x_i; \mu_i) \tag{7.32} $$
where fXi(xi; μi) is the marginal PDF corresponding to the i-th random variable Xi. Therefore, for statistically independent random variables, the first-order score function for μi is

$$ s_{\mu_i}^{(1)}(\mathbf{x}; \boldsymbol{\mu}) = \frac{\partial \ln f_{\mathbf{X}}(\mathbf{x}; \boldsymbol{\mu})}{\partial \mu_i} = \frac{\partial \ln f_{X_i}(x_i; \mu_i)}{\partial \mu_i} \tag{7.33} $$
The marginal PDFs and cumulative distribution functions (CDFs) are available analytically, as listed in Chap. 1.

2. Correlated Input Random Variables
Consider a bivariate correlated random input X = {Xi, Xj}T. The joint PDF of X is then expressed as

$$ f_{\mathbf{X}}(\mathbf{x}; \boldsymbol{\mu}) = \frac{\partial^2 C(u, v; \theta)}{\partial u\, \partial v}\, f_{X_i}(x_i; \mu_i)\, f_{X_j}(x_j; \mu_j) = C_{,uv}(u, v; \theta)\, f_{X_i}(x_i; \mu_i)\, f_{X_j}(x_j; \mu_j) \tag{7.34} $$
cðu, v; θÞ =
∂ Cðu, v; θÞ = C ,uv ðu, v; θÞ ∂u∂v
ð7:35Þ
Accordingly, using Eq. (7.34), the first-order score functions in Eq. (7.33) for a correlated bivariate input are expressed as sðμ1i Þ ðx; μÞ =
∂ ln f X ðx; μÞ ∂ ln cðu, v; θÞ ∂ ln f X i ðxi ; μi Þ = þ ∂μi ∂μi ∂μi
ð7:36Þ
The first term on the right-hand side of Eq. (7.36) can be obtained from Table 7.1, and the second term can be calculated as in Eq. (7.33). In Table 7.1, the partial derivative of the marginal CDF with respect to μi, ∂u/∂μi, can be obtained straightforwardly from the analytic CDFs. Even if several pairs of bivariate correlated random variables exist in X = {X1, . . ., Xnr}T, the first-order score function for μi takes the same form as Eq. (7.36).
Table 7.1 Log-derivative of the copula density function, ∂ln c(u, v; θ)/∂μi, adapted from [9]

Clayton:
$$ \left[-\frac{1+\theta}{u} + \frac{(2\theta+1)\,u^{-\theta-1}}{u^{-\theta} + v^{-\theta} - 1}\right]\frac{\partial u}{\partial \mu_i} $$

AMH:
$$ \left[\frac{\theta(1+v) - \theta^{2}(1-v)}{1 - \theta(2-u-v-uv) + \theta^{2}(1-u)(1-v)} - \frac{3\theta(1-v)}{1 - \theta(1-u)(1-v)}\right]\frac{\partial u}{\partial \mu_i} $$

Frank:
$$ \left[\theta - \frac{2\theta\left(e^{\theta(1+u)} - e^{\theta(u+v)}\right)}{e^{\theta} - e^{\theta(1+u)} - e^{\theta(1+v)} + e^{\theta(u+v)}}\right]\frac{\partial u}{\partial \mu_i} $$

FGM:
$$ \frac{2\theta(2v-1)}{1 + \theta(1-2u)(1-2v)}\,\frac{\partial u}{\partial \mu_i} $$

Gaussian:
$$ \frac{\theta\,\Phi^{-1}(v) - \theta^{2}\,\Phi^{-1}(u)}{\left(1-\theta^{2}\right)\phi\left(\Phi^{-1}(u)\right)}\,\frac{\partial u}{\partial \mu_i} $$

Independent: 0
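Because these closed forms are easy to mistype, a quick finite-difference check is useful. The Python sketch below verifies the Clayton row of Table 7.1 against a numerical derivative of the log copula density; the test point is arbitrary.

```python
import numpy as np

def log_c_clayton(u, v, theta):
    """Log of the Clayton copula density."""
    return (np.log(1 + theta) - (1 + theta) * (np.log(u) + np.log(v))
            - (2 + 1 / theta) * np.log(u**-theta + v**-theta - 1))

def dlogc_du_table(u, v, theta):
    """Closed-form entry from the Clayton row of Table 7.1."""
    return -(1 + theta) / u + (2 * theta + 1) * u**(-theta - 1) / (
        u**-theta + v**-theta - 1)

# Finite-difference verification at an arbitrary point
u, v, theta, eps = 0.3, 0.6, 2.0, 1e-7
fd = (log_c_clayton(u + eps, v, theta) - log_c_clayton(u - eps, v, theta)) / (2 * eps)
print(dlogc_du_table(u, v, theta), fd)   # the two values should agree
```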
3. Sensitivity Calculation of the Probability of Failure
Denote the surrogate model of the constraint function Gj(X) by Ĝj(X). Using MCS, the probabilistic constraints can then be expressed as

$$ P_{F_j} = P\left[\hat{G}_j(\mathbf{X}) > 0\right] \approx \frac{1}{M} \sum_{m=1}^{M} I_{\Omega_{F_j}}\left(\mathbf{x}^{(m)}\right) \le P_{F_j}^{\mathrm{Tar}} \tag{7.37} $$
where M is the MCS sampling size, x(m) is the m-th realization of X, and the failure set for the surrogate model is defined as ΩFj = {x : Ĝj(x) > 0}. The sensitivity of the probability of failure in Eq. (7.2) can then be estimated as

$$ \frac{\partial P_{F_j}}{\partial \mu_i} \approx \frac{1}{M} \sum_{m=1}^{M} I_{\Omega_{F_j}}\left(\mathbf{x}^{(m)}\right) s_{\mu_i}^{(1)}\left(\mathbf{x}^{(m)}; \boldsymbol{\mu}\right) \tag{7.38} $$
where s(1)μi(x(m); μ) can be obtained from Eq. (7.36). The first-order score functions above (Eqs. (7.33) and (7.36)) can thus be used to calculate the sensitivity of the probability of failure. To assess its accuracy, Eq. (7.38) can be rewritten as

$$ \frac{1}{M} \sum_{m=1}^{M} I_{\Omega_{F_j}}\left(\mathbf{x}^{(m)}\right) s_{\mu_i}^{(1)}\left(\mathbf{x}^{(m)}; \boldsymbol{\mu}\right) = \frac{1}{M} \sum_{m_f=1}^{M_f} I_{\Omega_{F_j}}\left(\mathbf{x}^{(m_f)}\right) s_{\mu_i}^{(1)}\left(\mathbf{x}^{(m_f)}; \boldsymbol{\mu}\right) = \frac{M_f}{M}\, \mu_{sf} = P_{F_j}\, \mu_{sf} \tag{7.39} $$
where Mf is the number of failed samples and μsf is the mean of the score function values over the failed samples. Hence, the accuracy of the MCS computation of Eq. (7.38) can be measured by εMCS·μsf. Since μsf changes from problem to problem, the accuracy of the probability-of-failure sensitivity varies with the problem and the design point
but is not related to the surrogate model. Hence, the computation of the sensitivity using the score function includes no approximation other than the statistical noise of MCS, which vanishes when the number of sampling points is large enough. After the probability of failure has been analyzed, the design procedure can be carried out, as introduced in the next section.
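The estimator of Eqs. (7.37)-(7.38) is a few lines of Python for independent normal inputs, for which the first-order score function is s = (xi - μi)/σi² (a standard result). The constraint function and design point below are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical design point and independent normal inputs X ~ N(mu, sigma^2)
mu, sigma = np.array([2.2, 5.0]), np.array([0.3, 0.3])
M = 1_000_000
X = mu + sigma * rng.standard_normal((M, 2))

g = lambda X: 1.0 - X[:, 0] ** 2 * X[:, 1] / 20.0   # stand-in constraint
I_fail = (g(X) > 0.0).astype(float)                  # failure indicator

# First-order score function of a normal marginal: d ln f / d mu = (x - mu)/sigma^2
score = (X - mu) / sigma**2

pf = I_fail.mean()                                   # Eq. (7.37)
dpf_dmu = (I_fail[:, None] * score).mean(axis=0)     # Eq. (7.38)
print(f"Pf = {pf:.4e}, dPf/dmu = {dpf_dmu}")
```

Note that the same set of samples yields both the failure probability and its gradient, which is what makes this estimator attractive inside an optimization loop.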
7.5 Double-Loop, Single-Loop, and Decoupled RBDO

7.5.1 Double-Loop RBDO
The optimization structure used by both the RIA-based and the PMA-based RBDO mathematical models is called the "double-loop structure" (see Sect. 7.3.4). "Double-loop" means that an inner loop performs the reliability analysis while an outer loop performs the design optimization, as shown in Fig. 7.10. Most of the computational effort in RBDO comes from the reliability analysis. The nature of the double-loop structure dictates that a large number of reliability analyses is required during the execution of RBDO; therefore, the efficiency of double-loop RBDO is low. To overcome these difficulties, researchers have subsequently proposed the "single-loop structure" and the "decoupled structure."
Fig. 7.10 Flowchart of double-loop RBDO adapted from [12]

7.5.2 Single-Loop RBDO

The single-loop structure has only one loop in the design optimization process, which greatly improves the solution efficiency compared with the double-loop structure at a corresponding sacrifice in accuracy. The key idea of the single-loop structure is to approximate the probabilistic constraints as deterministic ones without additional reliability analysis (as shown in Fig. 7.11).
Fig. 7.11 Flowchart of single-loop RBDO adapted from Shan et al. [12]
Chen et al. [2] proposed the single-loop single-variable (SLSV) method in 1997. However, the SLSV method can be unstable, depending on conditions such as the selection of the initial point and the degree of nonlinearity of the constraints. Liang et al. [10] developed the single-loop approach (SLA) based on SLSV, which approximates the most probable target point (MPTP) of the active constraints by introducing the KKT condition. The optimization model of the SLA method is as follows:

$$ \begin{aligned} \text{find} \quad & \mathbf{d}, \boldsymbol{\mu}_X \\ \text{min} \quad & f(\mathbf{d}, \boldsymbol{\mu}_X, \boldsymbol{\mu}_P) \\ \text{s.t.} \quad & g_c\left(\mathbf{d}, \mathbf{X}_{c,MPTP}^{k+1}, \mathbf{P}_{c,MPTP}^{k+1}\right) \ge 0, \; c = 1, 2, \ldots, N \\ & d_i^L \le d_i \le d_i^U, \quad \mu_{X_j}^L \le \mu_{X_j} \le \mu_{X_j}^U \\ & \mathbf{X}_{c,MPTP}^{k+1} = \boldsymbol{\mu}_X^{k+1} - \boldsymbol{\sigma}_X \beta_c^t\, \frac{\boldsymbol{\sigma}_X \nabla g_c\left(\mathbf{X}_{c,MPTP}^{k}\right)}{\left\|\boldsymbol{\sigma}_X \nabla g_c\left(\mathbf{X}_{c,MPTP}^{k}\right)\right\|} \\ & \mathbf{P}_{c,MPTP}^{k+1} = \boldsymbol{\mu}_P^{k+1} - \boldsymbol{\sigma}_P \beta_c^t\, \frac{\boldsymbol{\sigma}_P \nabla g_c\left(\mathbf{P}_{c,MPTP}^{k}\right)}{\left\|\boldsymbol{\sigma}_P \nabla g_c\left(\mathbf{P}_{c,MPTP}^{k}\right)\right\|} \end{aligned} \tag{7.40} $$
where Xk+1MPTP and Pk+1MPTP are the approximate MPTPs. Since single-loop approaches require no reliability analysis during the optimization process, they can significantly reduce the computational burden. However, they may produce an infeasible design for highly nonlinear design problems, for which the accuracy of the approximation cannot be guaranteed.
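The essence of the single-loop shift in Eq. (7.40) fits in a few lines of Python; in the hedged sketch below, only the X-part of the update is shown, and the linear constraint is a hypothetical stand-in.

```python
import numpy as np

def sla_mptp(mu_X, sigma_X, grad_g, beta_t):
    """One SLA shift from Eq. (7.40): approximate the MPTP of an active
    constraint from the gradient at the previous MPTP, with no inner
    reliability loop."""
    a = sigma_X * grad_g                    # sigma-scaled gradient
    return mu_X - sigma_X * beta_t * a / np.linalg.norm(a)

# Hypothetical linear constraint g(X) = X1 + X2 - 3, gradient [1, 1]
mu_X, sigma_X = np.array([2.5, 2.5]), np.array([0.1, 0.1])
print(sla_mptp(mu_X, sigma_X, np.array([1.0, 1.0]), beta_t=2.0))
```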
7.5.3 Decoupled RBDO
Decoupling in RBDO separates the reliability analysis from the optimization process: the reliability analysis results are fed into the optimization through a single-loop strategy. Compared with double-loop RBDO, which conducts a reliability analysis for every design change in the outer loop, decoupled RBDO conducts the reliability analysis only once after the deterministic optimum design of the outer loop is achieved. That is, the outer loop (design optimization) may take several iterations without calling the inner loop (reliability analysis) each time, which reduces the number of reliability analyses and thus the computational cost. The structure is shown in Fig. 7.12. The key idea is the use of a shifting vector sk+1j, which transforms the deterministic constraints into shifted constraints and is updated after every reliability analysis. As the design optimization proceeds, the difference between the shifted and probabilistic constraints diminishes. The optimization process can be expressed as

$$ \begin{aligned} \text{minimize} \quad & f(\mathbf{d}) \\ \text{subject to} \quad & G_j\left(\mathbf{d}, \boldsymbol{\mu}_x - \mathbf{s}_j^{k+1}\right) \le 0, \; j = 1, 2, \ldots, N_c \end{aligned} \tag{7.41} $$

Fig. 7.12 Flowchart of decoupled RBDO adapted from [15]

The earliest decoupling structure was proposed by Li et al. [15] and was based on the RIA model. Li et al. solved the RBDO problem with the help of linear programming; they constructed an approximate probabilistic constraint by linearly
approximating the reliability index based on the reliability and sensitivity analysis results of the previous iteration. Later, Cheng et al. [3] proposed a sequential approximate programming (SAP) strategy for the RBDO problem by adapting the traditional sequential approximate programming method. Zou and Mahadevan [23] proposed another direct decoupling method by expanding the approximation of the probabilistic constraint with a first-order Taylor series based on the current failure probability and sensitivity information. One of the main concerns with decoupled RBDO methods is that the calculated sensitivity of the probability of failure with respect to the design variables may be inaccurate, resulting in nonconvergence of the RBDO process or an inaccurate RBDO optimum.

Example 7.1: RBDO of the Numerical Problem 1 Find the optimal solution of the following RBDO problem with 2 constraints:

$$ \begin{aligned} \text{minimize} \quad & f(\mathbf{d}) = (d_1 - 3.7)^2 + (d_2 - 4)^2 \\ \text{subject to} \quad & P\left[g_i(\mathbf{X}) < 0\right] \le \Phi(-\beta_{t_i}), \; i = 1, 2 \\ & \mathbf{d}^L \le \mathbf{d} \le \mathbf{d}^U, \; \mathbf{d} \in \mathbb{R}^2 \text{ and } \mathbf{X} \in \mathbb{R}^2 \end{aligned} \tag{7.42} $$

where βt1 = βt2 = 2.0 and the two constraint functions are

$$ \begin{aligned} g_1(\mathbf{X}) &= -X_1 \sin(4X_1) - 1.1 X_2 \sin(2X_2) \\ g_2(\mathbf{X}) &= X_1 + X_2 - 3 \end{aligned} \tag{7.43} $$
Solution The problem is drawn in Fig. 7.13, and the properties of the two random variables are shown in Table 7.2.

Example 7.2: RBDO of the Numerical Problem 2 Find the optimal solution of the following RBDO problem with 3 constraints:

$$ \begin{aligned} \text{minimize} \quad & f(\mathbf{d}) = 10 - d_1 - d_2 \\ \text{subject to} \quad & P\left[g_i(\mathbf{X}) < 0\right] \le \Phi(-\beta_{t_i}), \; i = 1, 2, 3 \\ & \mathbf{d}^L \le \mathbf{d} \le \mathbf{d}^U, \; \mathbf{d} \in \mathbb{R}^2 \text{ and } \mathbf{X} \in \mathbb{R}^2 \end{aligned} $$

where βti = 3.0, i = 1, 2, 3, and the three constraint functions are

$$ \begin{aligned} g_1(\mathbf{X}) &= \frac{X_1^2 X_2}{20} - 1 \\ g_2(\mathbf{X}) &= \frac{(X_1 + X_2 - 5)^2}{30} + \frac{(X_1 + X_2 - 12)^2}{120} - 1 \\ g_3(\mathbf{X}) &= \frac{80}{X_1^2 + 8X_2 + 5} - 1 \end{aligned} $$
Fig. 7.13 RBDO of the numerical problem 1 with 2 constraints
Table 7.2 Parameters of RBDO of the numerical problem 1 with 2 constraints

Random variable | Distribution | dL | d0 | dU | Standard deviation
X1 | Normal | 0.0 | 2.5 | 3.7 | 0.1
X2 | Normal | 0.0 | 2.5 | 4.0 | 0.1
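For readers who want to experiment, the sketch below solves Example 7.1 with a brute-force double-loop in Python: the inner loop estimates each failure probability by MCS, and the derivative-free COBYLA optimizer plays the outer loop (the box bounds are omitted for brevity, and common random numbers are used so the constraints vary smoothly with d). This is only a baseline; the MPP- and surrogate-based methods of this chapter exist precisely because this approach is expensive and noisy.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(3)
Z = rng.standard_normal((50_000, 2))       # common random numbers (CRN)
sigma, pf_target = 0.1, norm.cdf(-2.0)     # beta_t1 = beta_t2 = 2.0

g1 = lambda X: -X[:, 0] * np.sin(4 * X[:, 0]) - 1.1 * X[:, 1] * np.sin(2 * X[:, 1])
g2 = lambda X: X[:, 0] + X[:, 1] - 3.0

def pf(d, g):
    """Inner loop: MCS estimate of P[g(X) < 0] at design d."""
    X = d + sigma * Z
    return np.mean(g(X) < 0.0)

cost = lambda d: (d[0] - 3.7) ** 2 + (d[1] - 4.0) ** 2
cons = [{"type": "ineq", "fun": lambda d, g=g: pf_target - pf(d, g)}
        for g in (g1, g2)]                 # feasible when fun(d) >= 0

res = minimize(cost, x0=[2.5, 2.5], method="COBYLA",
               constraints=cons, options={"rhobeg": 0.3, "maxiter": 500})
print("approximate RBDO optimum:", res.x, "cost:", res.fun)
```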
Solution The problem is drawn in Fig. 7.14, and the properties of the two random variables are shown in Table 7.3.

Example 7.3: RBDO of the Numerical Problem 3 Find the optimal solution of the following RBDO problem with 3 constraints:

$$ \begin{aligned} \text{minimize} \quad & \mathrm{Cost}(\mathbf{d}) = -\frac{(d_1 + d_2 - 10)^2}{30} - \frac{(d_1 - d_2 + 10)^2}{120} \\ \text{subject to} \quad & P\left[G_j(\mathbf{X}(\mathbf{d})) > 0\right] \le P_{F_j}^{\mathrm{Tar}} = 2.275\%, \; j = 1, 2, 3 \\ & \mathbf{d}^L \le \mathbf{d} \le \mathbf{d}^U, \; \mathbf{d} \in \mathbb{R}^2 \text{ and } \mathbf{X} \in \mathbb{R}^2 \end{aligned} \tag{7.44} $$

where the three constraint functions are
Fig. 7.14 RBDO of the numerical problem 2 with 3 nonlinear constraints
Table 7.3 Parameters of RBDO of the numerical problem 2 with 3 nonlinear constraints

Random variable | Distribution | dL | d0 | dU | Standard deviation
X1 | Normal | 0.0 | 5 | 10 | 0.3
X2 | Normal | 0.0 | 5 | 10 | 0.3
$$ \begin{aligned} G_1(\mathbf{X}) &= 1 - \frac{X_1^2 X_2}{20} \\ G_2(\mathbf{X}) &= -1 + (Y - 6)^2 + (Y - 6)^3 - 0.6\,(Y - 6)^4 + Z \\ G_3(\mathbf{X}) &= 1 - \frac{80}{X_1^2 + 8X_2 + 5} \end{aligned} $$

where

$$ \begin{pmatrix} Y \\ Z \end{pmatrix} = \begin{pmatrix} 0.9063 & 0.4226 \\ 0.4226 & -0.9063 \end{pmatrix} \begin{pmatrix} X_1 \\ X_2 \end{pmatrix} \tag{7.45} $$
Solution The functions are drawn in Fig. 7.15, and the properties of the two random variables, which are correlated through a Clayton copula (τ = 0.5), are shown in Table 7.4. The target probability of failure PTarF is 2.275% for all constraints.
Fig. 7.15 RBDO of the numerical problem 3 with 3 nonlinear constraints
Table 7.4 Parameters of RBDO of the numerical problem 3 with 3 nonlinear constraints

Random variable | Distribution | dL | d0 | dU | Standard deviation
X1 | Normal | 0.0 | 5.0 | 10.0 | 0.3
X2 | Normal | 0.0 | 5.0 | 10.0 | 0.3
References

1. Breitung, K. (1984). Asymptotic approximations for multinormal integrals. Journal of Engineering Mechanics, 110(3), 357–366.
2. Chen, X., et al. (1997). Reliability based structural design optimization for practical applications. In Proceedings of the 38th Structures, Structural Dynamics, and Materials Conference (p. 1403).
3. Cheng, G., et al. (2006). A sequential approximate programming strategy for reliability-based structural optimization. Computers & Structures, 84(21), 1353–1367.
4. Cheng, J., et al. (2019). Hybrid reliability-based design optimization of complex structures with random and interval uncertainties based on ASS-HRA. IEEE Access, 7, 87097–87109.
5. Ditlevsen, O., & Bjerager, P. (1986). Methods of structural systems reliability. Structural Safety, 3(3–4), 195–229.
6. Fang, J., et al. (2022). Wind turbine rotor speed design optimization considering rain erosion based on deep reinforcement learning. Renewable & Sustainable Energy Reviews, 168, 112788.
7. Hao, P., et al. (2019). An augmented step size adjustment method for the performance measure approach: Toward general structural reliability-based design optimization. Structural Safety, 80, 32–45.
8. Hu, W., et al. (2016). Reliability-based design optimization of wind turbine blades for fatigue life under dynamic wind load uncertainty. Structural and Multidisciplinary Optimization, 54, 953–970.
9. Lee, I., et al. (2011). Sampling-based RBDO using the stochastic sensitivity analysis and dynamic Kriging method. Structural and Multidisciplinary Optimization, 44, 299–317.
10. Liang, J., et al. (2004). A single-loop method for reliability-based design optimization. International Design Engineering Technical Conferences, 46946, 419–430.
11. Liu, P.-L., & Der Kiureghian, A. (1991). Optimization algorithms for structural reliability. Structural Safety, 9(3), 161–177.
12. Shan, S., et al. (2008). Reliable design space and complete single-loop reliability-based design optimization. Reliability Engineering & System Safety, 93(8), 1218–1230.
13. Tu, J., et al. (1999). A new study on reliability-based design optimization. Journal of Mechanical Design, 121(4), 557–564.
14. Tvedt, L. (1990). Distribution of quadratic forms in normal space: Application to structural reliability. Journal of Engineering Mechanics, 116(6), 1183–1197.
15. Weiji, L., & Li, Y. (1994). An effective optimization procedure based on structural reliability. Computers & Structures, 52(5), 1061–1067.
16. Wirsching, P., et al. (1991). Advanced fatigue reliability analysis. International Journal of Fatigue, 13(5), 389–394.
17. Wu, Y.-T., & Wirsching, P. H. (1987). New algorithm for structural reliability estimation. Journal of Engineering Mechanics, 113(9), 1319–1336.
18. Yang, S., et al. (2014). Travel time reliability using the Hasofer–Lind–Rackwitz–Fiessler algorithm and kernel density estimation. Transportation Research Record, 2442(1), 85–95.
19. Youn, B. D., et al. (2005a). Adaptive probability analysis using an enhanced hybrid mean value method. Structural and Multidisciplinary Optimization, 29, 134–148.
20. Youn, B. D., et al. (2005b). Enriched performance measure approach for reliability-based design optimization. AIAA Journal, 43(4), 874–884.
21. Youn, B. D., et al. (2003). Hybrid analysis method for reliability-based design optimization. Journal of Mechanical Design, 125(2), 221–232.
22. Zhu, S.-P., et al. (2021). Reliability-based structural design optimization: Hybridized conjugate mean value approach. Engineering with Computers, 37, 381–394.
23. Zou, T., & Mahadevan, S. (2006). A direct decoupling approach for efficient reliability-based design optimization. Structural and Multidisciplinary Optimization, 31, 190–200.
Chapter 8
Robust Design Optimization
Nomenclature

D - A real number
F - The original optimization objective
f - The new optimization objective function
g - Constraint vector
gi - The value of the design metric
gi - The class function that needs to be minimized for the metric
N - The Nadir point, where each objective has been maximized
nsc - The number of design metrics
n - The normal unit direction to the convex hull of individual minima
o - The parent population
p - Engineering system constant parameter vector
p - The specific metric
R - The reliability vector specified for the constraint vector
r - The n-dimensional objective function vector
so - The size of the parent population
U - The Utopia point, where each objective has been minimized
w - Weight vector used in the normal boundary intersection method
x - Design variable vector
xL - Lower bounds of the design variables
xU - Upper bounds of the design variables
x* - Optimal solution for the design variables
β - A positive value
μf - The mean of the original optimization objective
μ*f - The ideal solution of function μf
σf - The standard deviation of the original optimization objective
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 W. Hu, Design Optimization Under Uncertainty, https://doi.org/10.1007/978-3-031-49208-2_8
199
200
σ *f Φ Φ
8.1
8
Robust Design Optimization
The ideal solution of function σ *f The payoff matrix The normalized payoff matrix
8.1 Introduction
With the advancement of science and technology, the demand for industrial products with better performance, higher reliability, and lower cost has never been greater. However, due to the inevitable presence of uncertainties over the lifecycle of a product, e.g., tolerances on manufactured dimensions or material properties [1, 2], product performance can fluctuate significantly, or even deviate substantially from the design value, leading to functional failures and accidents. Thus, it is essential to take uncertainty into account in product analysis and design in order to obtain robust and reliable performance at reasonable cost. Robust design optimization (RDO) is a method developed to achieve this goal. Different from the reliability-based design optimization (RBDO) discussed in the previous chapter, which considers uncertainty only in the probabilistic constraints, the RDO strategy accounts for the effects of variations and uncertainties by simultaneously optimizing the objective function and minimizing the variation of the performance parameters [3]. RDO does not eliminate the uncertainties; rather, it makes the product performance insensitive to these variations. Our discussion of RDO covers the problem statement and formulation of RDO as well as its main procedure and several optimization approaches.
8.2 Problem Statement and Formulation
It has been widely recognized that RDO is one of the most common methods for incorporating the impact of uncertainty into a design optimization formulation [4–8]. It aims at developing low-cost and high-quality products whose performances are insensitive to the various natural variations in their manufacturing and operational environments. RDO builds on the philosophy of robust design proposed by Taguchi [9, 10]. As a pioneer of robust design, Taguchi introduced the concept in the 1950s as a method for dealing with geometric deviations arising in the manufacturing process, which became widely known as the Taguchi method. The core component of the Taguchi method is design of experiments (DoE) [4, 11], which evaluates various designs to identify the factors that impact product quality and to set their nominal levels. The concept of robustness was also first defined by Taguchi, as insensitivity to variations in both the system itself and the environment. Although the Taguchi method is beneficial for the improvement of
product quality, it usually encounters the curse of dimensionality and is incapable of handling design problems in a continuous space with several design constraints, because of its reliance on DoE. Hence, RDO based on optimization techniques has been proposed to address these limitations. In general, an RDO problem for an engineered system can be formulated as follows:

$$\begin{aligned} &\text{Find } \mathbf{x} \in \mathbb{R}^d \\ &\text{Min } F\left(\mu_f(\mathbf{x}, \mathbf{p}),\, \sigma_f(\mathbf{x}, \mathbf{p})\right) \\ &\text{s.t. } \mathbf{g}(\mathbf{x}, \mathbf{p}) \le \mathbf{0}, \quad \mathbf{x}^L \le \mathbf{x} \le \mathbf{x}^U \end{aligned} \quad (8.1)$$
where x is the design variable vector; p is the engineering-system constant parameter vector (both x and p may be uncertain); μf and σf are the mean and standard deviation of the original optimization objective f(·), representing the general performance and the robustness assessment, respectively; F(·) is the new optimization objective function with respect to μf and σf, for example a weighted sum of the mean and standard deviation; g(·) denotes the inequality constraint vector (equality constraints can be treated analogously); and xL and xU are the lower and upper bounds of the design variables, which define the design space. It is apparent that the system's sensitivity to uncertainties can be reduced by incorporating σf into the objective function. To describe the concept of RDO graphically, an illustration is shown in Fig. 8.1. Let the horizontal axis represent the uncertain parameters, including the random design variables and the other system constant parameters, and the vertical axis represent the optimization objective function.
Fig. 8.1 Schematic diagram illustrating RDO
There are two optimal solutions, of which x2 is considered more robust than x1 because the objective function varies less around x2 under the same parameter variation. Hence, x1 is not recommended as a design in practice even though it achieves a better nominal performance (a lower objective function value).
8.3 Main Procedure of RDO
To effectively solve the RDO problems formulated by Eq. (8.1), a general RDO solution process composed of three important steps is presented in this section to provide an overall understanding and a reference.

The first step, uncertain system modeling, consists of system modeling and uncertainty modeling. System modeling mainly refers to the mathematical expression of the optimization problem, i.e., the design variables, optimization objectives, constraints, design space, and so on. Uncertainty modeling is the identification, classification, and quantification of the uncertainties involved in the system design. Many mathematical theories and methods have been developed for modeling uncertainties in engineering problems, such as probabilistic approaches [12], possibilistic approaches [13], hybrid possibilistic-probabilistic approaches [14, 15], information gap decision theory (IGDT) [16], clouds theory [17], etc. Because a great number of uncertainties exist throughout the lifecycle of a product in engineering practice, modeling all of them would lead to an unbearable computational burden. Thus, sensitivity analysis is required to screen out the uncertainties that have a negligible effect on the system design and thereby simplify the RDO problem.

The second step focuses on adopting proper optimization algorithms to optimize the robust objective under uncertainty. Deterministic global optimization already struggles with large-scale, highly nonlinear, and non-convex problems, and the situation naturally becomes worse when extra effort must be devoted to handling uncertainties. Therefore, research on optimization algorithms is necessary to improve the overall optimization efficiency under uncertainty.

The main goal of the third step is to establish efficient uncertainty propagation and analysis approaches, that is, to quantify the uncertainty characteristics of the output performance by propagating the input uncertainties through the design system using efficient computational simulation and analysis approaches. Based on the quantification results, the robustness of a design can be further analyzed. Generally, the robust descriptions of the objectives are given by numerical approximations of their statistical moments, e.g., mean and variance. For modern engineering systems in which multiple disciplines intersect, the cross propagation of uncertainties makes uncertainty analysis very challenging, which is one of the hotspots of RDO research. It is noteworthy that the uncertainty propagation step is bound to the optimization process and is executed at each optimization iteration point to analyze the system output response. To illustrate the overall process of RDO, a flowchart of the general RDO procedure is given in Fig. 8.2.
Fig. 8.2 General flowchart of the RDO process
8.4 RDO Methods
Considering the detailed discussion and illustration of uncertainty modeling and propagation methods in the former chapters, the focus of this section is the optimization methods used for RDO. RDO is essentially a typical multi-objective optimization problem and involves two conflicting objectives: reducing the mean and the variance at the same time, which is very difficult or even impossible. Some studies have indicated that the expected properties, high performance and performance robustness, are always in competition with each other; for example, better robustness typically comes with lower performance [18]. To address this issue, efficient methods have been developed to obtain the potential points on the Pareto frontier of RDO and to enable designers to reach a trade-off between performance and robustness. In this section, some generally used multi-objective optimization methods are introduced, including the weighted sum method, the compromise programming method, the physical programming method, the normal boundary intersection method, and evolutionary multi-objective optimization methods.
8.4.1 Weighted Sum Method
For a multi-objective optimization problem, the most common approach is to convert the multiple objective functions into a single objective function. As the most widely used robust optimization technique, the weighted sum (WS) method [19, 20] optimizes a linear combination of the mean and standard deviation with corresponding weights. Equation (8.1) can be rewritten as:

$$\text{Min} \; w_1 \mu_f(\mathbf{x}, \mathbf{p}) + w_2 \sigma_f(\mathbf{x}, \mathbf{p}) \quad (8.2)$$
where wi is the weight, with w1 + w2 = 1. The weights represent the importance of the corresponding objectives and often need to be adjusted according to the designer's experience, preference, or multiple attempts. Whether the WS method is efficient depends on the choice of weights, and there may not exist weights for which a given Pareto point can be found by the WS method; in other words, only solutions on the convex portions of the Pareto optimal set can be obtained in the criterion space. Moreover, another known problem of the WS method is that an even spread of weights does not produce an even spread of points in the Pareto set [21]. To sum up, the WS method is not especially efficient, but it is popular because it is simple to implement.
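To make Eq. (8.2) concrete, the following minimal sketch estimates μf and σf by Monte Carlo for a hypothetical toy performance function and minimizes the weighted sum with SciPy; the function f, the noise model, and all numerical settings are illustrative assumptions, not part of the method itself.

```python
import numpy as np
from scipy.optimize import minimize

P = np.random.default_rng(0).normal(0.0, 0.3, 2000)  # fixed samples of the uncertain parameter

def moments(x):
    # Monte Carlo estimates of mu_f and sigma_f for a toy performance
    # f(x, p) = (x1 - 2)^2 + (x2 - 1)^2 + 0.5 p x1, with p ~ N(0, 0.3^2).
    vals = (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2 + 0.5 * P * x[0]
    return vals.mean(), vals.std(ddof=1)

def ws_objective(x, w1, w2):
    mu, sd = moments(x)
    return w1 * mu + w2 * sd          # Eq. (8.2): weighted sum of mean and std

for w1 in (0.1, 0.5, 0.9):            # sweeping the weights traces (part of) the front
    res = minimize(ws_objective, x0=[0.0, 0.0], args=(w1, 1.0 - w1),
                   bounds=[(-5, 5), (-5, 5)])
    print(w1, np.round(res.x, 3), np.round(moments(res.x), 4))
```

Note that the random samples are drawn once and reused across optimizer iterations (common random numbers), so that the Monte Carlo noise does not perturb the numerical gradients.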
8.4.2 Compromise Programming Method
To compensate for the shortcomings of the WS method, the compromise programming (CP) method [22] was developed for solving multi-objective optimization problems. As a modified version of goal programming [23], the fundamental principle of CP is to identify an ideal solution, a point at which each attribute reaches its optimum value, and then to find a solution that is as near to that point as possible. The point is also
called a Utopia point. To measure the distance between a candidate point and the Utopia point, different metrics can be used. The CP method for multi-objective optimization is therefore formulated as follows:

$$\text{Min} \; \| r - U \|_p^w = \left[ \sum_{i=1}^{n} \left( w_i \, |r_i - u_i| \right)^p \right]^{1/p} \quad (8.3)$$
where r = (r1, r2, …, rn) = (f1, f2, …, fn) is the n-dimensional objective function vector; U represents the Utopia point at which each objective has been minimized, which equals (f1*, f2*, …, fn*) for this minimization problem; p ∈ {1, 2, …} ∪ {∞} defines the specific metric; and wi ≥ 0, i = 1, 2, …, n. The designers first determine the value of p and then adjust the weight vector w to find the Pareto optimal solutions. Notice that since ui ≤ fi(x) for x ∈ R^d, i = 1, 2, …, n, the absolute value in Eq. (8.3) can be removed. In particular, p = 1 gives the Manhattan distance metric, which recovers the WS method; p = 2 is the Euclidean distance metric; and p = ∞ is the Tchebycheff distance metric. To capture solutions located on the non-convex portions of the Pareto optimal set, a relatively large value of p may be required. When the Tchebycheff distance metric is applied, Eq. (8.3) can be transformed into a min-max problem:

$$\min_{x \in X} \; \max_{i = 1, \ldots, n} \left\{ w_i \left( f_i(x) - u_i \right) \right\} \quad (8.4)$$
which is equivalent to the following β-problem:

$$\begin{aligned} &\text{Min } \beta \\ &\text{s.t. } w_i \left( f_i(x) - u_i \right) \le \beta, \quad i = 1, \ldots, n \end{aligned} \quad (8.5)$$
where β is a positive value. After substituting the two objectives of RDO (mean and standard deviation) into the above equation, the RDO formulation is given by

$$\begin{aligned} &\text{Find } \mathbf{x} \in \mathbb{R}^d \\ &\text{Min } \beta \\ &\text{s.t. } w_1\left(\frac{\mu_f}{\mu_f^*} - 1.0 + \varepsilon_1\right) \le \beta, \quad w_2\left(\frac{\sigma_f}{\sigma_f^*} - 1.0 + \varepsilon_2\right) \le \beta \\ &\phantom{\text{s.t. }} \mathbf{g}(\mathbf{x}, \mathbf{p}) \le \mathbf{0}, \quad \mathbf{x}^L \le \mathbf{x} \le \mathbf{x}^U \end{aligned} \quad (8.6)$$

where μf* and σf* are the ideal solutions of the functions μf and σf, respectively. With the above CP method using the Tchebycheff distance metric, all Pareto optimal solutions of the RDO problem are guaranteed to be found by adjusting the weights. To compare the WS and the CP methods, the solutions
Fig. 8.3 Generating Pareto solutions by WS and CP methods
obtained from the two methods are plotted in the objective space, as shown in Fig. 8.3. It can be observed that the efficient Pareto solutions located on the arc between A and B, e.g., point C, cannot be found by the WS method, but they are accessible to the CP method based on the Tchebycheff distance metric.
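A minimal sketch of the β-problem of Eq. (8.6) follows, again with an assumed toy model; the performance function is shifted and the noise rescaled here so that the ideal values μf* and σf*, by which Eq. (8.6) divides, are nonzero. All functions and settings are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

P = np.random.default_rng(1).normal(0.0, 0.3, 2000)  # fixed noise samples
BOUNDS = [(0.0, 5.0), (-5.0, 5.0)]

def moments(x):
    # Toy mu_f and sigma_f; the +5 offset keeps mu_f* > 0, and the (x1 + 1)
    # factor keeps sigma_f* > 0 on the bounded design space.
    vals = (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2 + 5.0 + 0.5 * P * (x[0] + 1.0)
    return vals.mean(), vals.std(ddof=1)

# Ideal (Utopia) values from two single-objective runs
mu_star = minimize(lambda x: moments(x)[0], [1, 1], bounds=BOUNDS).fun
sg_star = minimize(lambda x: moments(x)[1], [1, 1], bounds=BOUNDS).fun
EPS = 1e-3                              # small offsets epsilon_1 = epsilon_2

def solve_beta(w1):
    # Eq. (8.6): min beta s.t. w_i (f_i / f_i* - 1 + eps) <= beta, z = [x1, x2, beta]
    w2 = 1.0 - w1
    cons = [{"type": "ineq",
             "fun": lambda z: z[2] - w1 * (moments(z[:2])[0] / mu_star - 1 + EPS)},
            {"type": "ineq",
             "fun": lambda z: z[2] - w2 * (moments(z[:2])[1] / sg_star - 1 + EPS)}]
    res = minimize(lambda z: z[2], x0=[1.0, 1.0, 1.0],
                   bounds=BOUNDS + [(None, None)], constraints=cons, method="SLSQP")
    return res.x[:2]

for w1 in (0.2, 0.5, 0.8):
    print(w1, np.round(moments(solve_beta(w1)), 4))
```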
8.4.3 Physical Programming Method
As another method for multi-objective optimization problems, the physical programming (PP) method was developed by Messac et al. [24] and has already been used in RDO [25, 26]. By constructing so-called class-functions reflecting the degree of the designer's preference, the PP method allows designers to give a qualitative and quantitative physical description of their preferences without conjuring up corresponding weights. Compared with the CP method, the PP method is therefore more flexible in expressing the designer's preference for each design metric and more capable of capturing the complete Pareto optimal set. When adopting the PP method, designers first choose a set of design parameters and design metrics based on the current level of knowledge about the desired design. After that, they develop a mapping between the design parameters and the design metrics and construct a set of class-functions for the design metrics. Each class contains two cases, hard and soft, referring to the sharpness of the preference. All soft class-functions become constituent components of the objective function, and all hard classes simply become constraints. In detail, the classes are subdivided into four situations: smaller is better, larger is better, value is better,
Fig. 8.4 Classification of class-function [24]
Fig. 8.5 Range division of Class-1S function [24]
and range is better. Figure 8.4 shows the qualitative meaning of each class. The horizontal axis represents the value of the design metric under consideration, gi, while the vertical axis represents the class-function, ḡi, which needs to be minimized for the corresponding metric. Furthermore, the PP method defines preference ranges for each class of design metrics in order to express the preference for each metric more precisely and flexibly than with the bare terms minimize, maximize, greater than, less than, or equal to. Taking Class-1S as an example, the ranges are defined as in Fig. 8.5: the highly desirable range, desirable range, tolerable range, undesirable range, highly undesirable range, and unacceptable range,
in order of decreasing preference. The parameters gi1 through gi5 are physically meaningful values specified by the designers to quantify the preference for the i-th design metric. The objective-space optimization direction is determined by the class-function value of each design metric. Generally, the common logarithm of the average value of the class-functions of the design metrics is selected as the objective function of the physical programming optimization model, which is given as follows:

$$\bar{g} = \log_{10} \left\{ \frac{1}{n_{sc}} \sum_{i=1}^{n_{sc}} \bar{g}_i\left[ g_i(\mathbf{x}, \mathbf{p}) \right] \right\} \quad (8.7)$$
where nsc represents the number of design metrics. With the PP method, designers can judge and weigh the mean and the deviation by specifying the ranges for each design metric. Compared with the WS and CP methods, the PP method obtains potential solutions on both convex and non-convex portions of the Pareto frontier more efficiently. In addition, it has been reported [27] that the PP method can provide all Pareto optimal points, as a sufficient and necessary condition for Pareto optimality. However, because the design metric ranges must be determined for different preferences, the PP method requires more optimization knowledge, making the initialization of the optimization formulation more complex than for the WS and CP methods.
8.4.4 Normal Boundary Intersection Method
The normal boundary intersection (NBI) method is an optimization routine developed to find uniformly spread Pareto-optimal solutions of a multi-objective optimization problem [28]. In the NBI method, the initial stage involves constructing the payoff matrix Φ, which is determined by computing the individual minimum of each objective function. The solution that minimizes the i-th objective function fi(x) is denoted x_i*. Supposing there are n objective functions, Φ is an n × n matrix whose i-th row contains the values of the i-th objective function evaluated at the individual optima x1*, x2*, …, xn*:
$$\Phi = \begin{bmatrix} f_1^*(x_1^*) & \cdots & f_1(x_i^*) & \cdots & f_1(x_n^*) \\ \vdots & \ddots & & & \vdots \\ f_i(x_1^*) & \cdots & f_i^*(x_i^*) & \cdots & f_i(x_n^*) \\ \vdots & & & \ddots & \vdots \\ f_n(x_1^*) & \cdots & f_n(x_i^*) & \cdots & f_n^*(x_n^*) \end{bmatrix} \quad (8.8)$$
All objective functions need to be normalized, based on their minimum and maximum values, to remove differences in magnitude and scale between them. By joining the minimum and maximum values of each objective function, two
vectors can be obtained, called the Utopia point $U = \left(f_1^U, f_2^U, \ldots, f_n^U\right) = \left(f_1^*(x_1^*), f_2^*(x_2^*), \ldots, f_n^*(x_n^*)\right)$ and the Nadir point $N = \left(f_1^N, f_2^N, \ldots, f_n^N\right)$. The normalization of the objective functions can then be achieved using these two vectors. Taking the i-th objective function as an example:

$$\bar{f}_i(x) = \frac{f_i(x) - f_i^U}{f_i^N - f_i^U}, \quad i = 1, 2, \ldots, n \quad (8.9)$$

Fig. 8.6 Graphical description of NBI method for RDO
As a result of this normalization process, the normalized payoff matrix Φ̄ is obtained, in which each element is calculated by Eq. (8.9). Figure 8.6 illustrates how the convex combinations of the rows of the payoff matrix form the convex hull of individual minima (CHIM) [29] in an arbitrary bi-objective problem. The endpoints of the Utopia line are also called anchor points, corresponding to the solutions of the single-objective problems. Considering a convex weighting w, any point on the Utopia line in the normalized space is expressed as Φ̄w. Let n̂ denote the normal unit direction to the CHIM at the point Φ̄w toward the origin; then Φ̄w + Dn̂, D ∈ R, represents the set of points on that normal. The intersection point between the normal vector and the boundary of the feasible region closest to the origin corresponds to maximizing the distance between the Utopia line and the Pareto frontier. The multi-objective optimization problem can then be formulated as

$$\begin{aligned} &\text{Max } D \\ &\text{s.t. } \bar{\Phi} w + D \hat{n} = \bar{F}(x) \end{aligned} \quad (8.10)$$
where $\bar{F}(x) = \left(\bar{f}_1(x), \bar{f}_2(x), \ldots, \bar{f}_n(x)\right)$ is the normalized objective vector. By iteratively solving this optimization problem for various values of the parameter w, a uniformly distributed Pareto frontier can be generated. Since RDO is a bi-objective optimization problem, Eq. (8.10) can be simplified by eliminating the conceptual parameter D according to [30]:

$$\begin{aligned} &\text{Find } \mathbf{x} \in \mathbb{R}^d \\ &\text{Min } \frac{\mu_f - \mu_f(x_\mu^*)}{\mu_f(x_\sigma^*) - \mu_f(x_\mu^*)} \\ &\text{s.t. } \frac{\mu_f - \mu_f(x_\mu^*)}{\mu_f(x_\sigma^*) - \mu_f(x_\mu^*)} - \frac{\sigma_f - \sigma_f(x_\sigma^*)}{\sigma_f(x_\mu^*) - \sigma_f(x_\sigma^*)} + 2w - 1 = 0 \\ &\phantom{\text{s.t. }} \mathbf{g}(\mathbf{x}, \mathbf{p}) \le \mathbf{0}, \quad \mathbf{x}^L \le \mathbf{x} \le \mathbf{x}^U \end{aligned} \quad (8.11)$$

where xμ* and xσ* are the solutions that individually minimize μf and σf, respectively. While the NBI method produces a smooth approximation of the Pareto boundary, it is important to note that some points obtained through the NBI method may not be Pareto optimal solutions. It is also worth noting that, despite the high efficiency and well-defined mathematical meaning of the four RDO methods (WS, CP, PP, and NBI) discussed so far, these methods rely on prior knowledge and experience; for instance, the selection of weights and preferences requires a level of expertise and understanding.
8.4.5 Evolutionary Multi-objective Optimization Method
Due to the complexity and multimodal characteristics of realistic optimization problems, it is difficult for traditional gradient-based algorithms to deal with multi-objective problems unless the multi-objective problem is converted into multiple single-objective problems, which raises the bar for beginners. In addition, gradient-based algorithms use local gradient information as the optimization direction and search iteratively from a single initial point, making them vulnerable to the choice of initial point and to the precision of the gradient information, and prone to falling into local optima. Conversely, stochastic, gradient-free search algorithms can deal with linear or nonlinear multi-objective optimization problems directly and are more robust and faster in converging toward the global optimum. As the most representative family of stochastic search algorithms, evolutionary algorithms have been widely used to address robust design optimization problems. The evolutionary multi-objective optimization (EMO) method finds a set of non-dominated solutions of a multi-objective optimization problem by evolving a population, simulating the mechanism of natural evolution. A solution can be taken
to be optimal if no other solution dominates it. Domination is defined as follows: for x1, x2 ∈ Rd, if fi(x2) ≥ fi(x1) for all i ∈ {1, 2, …, n} and fj(x2) > fj(x1) for some j ∈ {1, 2, …, n}, then x1 is said to dominate x2. Generally, the EMO method starts with the initialization of a parent population o of size so, including the evaluation of the objectives and constraints of each candidate in the population. The fitness of each candidate is assigned based on the objective function and the constraint violations; a candidate that violates a constraint is penalized, resulting in a worse fitness than that of feasible solutions. After the parent population is placed in the mating pool, three popular operators, namely selection, crossover, and mutation, are used to generate the offspring population. Despite their popularity, one of the main obstacles encountered by EMO methods in robust design problems is the high computational cost, caused by the large number of objective-function and constraint evaluations required by the majority of existing EMO methods. To address this expense, three main schemes have been proposed in [31]: problem approximation, functional approximation, and evolutionary approximation. The idea of problem approximation is to substitute a less expensive problem statement for the original one; for example, in a finite element simulation, a coarse grid is used rather than a fine grid. In functional approximation, a surrogate model is adopted to replace the original system model and to predict the objectives and constraints of the iteration points during the optimization; in spite of the extensive development of surrogate-model technology, high-fidelity surrogate models are still required for high-dimensional and highly nonlinear design problems. As a scheme unique to evolutionary algorithms, evolutionary approximation estimates the fitness of individuals using information from other (similar) individuals (such as clustering and fitness inheritance), thus saving computing cost; it is worth noting, however, that neither the clustering nor the fitness-inheritance strategy demonstrates the required convergence in practice. Compared with the WS, CP, and PP methods, the EMO method requires less prior knowledge and experience from designers, allowing them to easily and conveniently identify the desired or best trade-offs between high performance and performance robustness from the Pareto frontier. Besides, the EMO method is capable of finding the optimal trade-offs of an RDO problem in a single run [32], while the WS, CP, and PP methods must carry out a sequence of separate executions to search the entire Pareto frontier over the design space. However, under a limited computational budget, the WS, CP, and PP methods may be more suitable for finding a single satisfactory solution than EMO methods.
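The domination test used above is easy to state in code; a minimal sketch (minimization assumed, all names illustrative) is:

```python
import numpy as np

def dominates(f1, f2):
    # x1 dominates x2 (minimization) if f1 <= f2 in every objective
    # and f1 < f2 in at least one -- the definition given above.
    f1, f2 = np.asarray(f1), np.asarray(f2)
    return bool(np.all(f2 >= f1) and np.any(f2 > f1))

def non_dominated(F):
    # Indices of the non-dominated members of a population whose
    # objective values are the rows of F (shape: pop_size x n_objectives).
    keep = []
    for i, fi in enumerate(F):
        if not any(dominates(fj, fi) for j, fj in enumerate(F) if j != i):
            keep.append(i)
    return keep

F = np.array([[1.0, 4.0], [2.0, 2.0], [3.0, 3.0], [4.0, 1.0]])
print(non_dominated(F))   # -> [0, 1, 3]; [3.0, 3.0] is dominated by [2.0, 2.0]
```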
8.5 Reliability-Based Robust Design Optimization
To improve product designs in both robustness and reliability, a hybrid approach called reliability-based robust design optimization (RBRDO) has been developed that combines the principles of both RDO and RBDO [33]. By integrating RDO and RBDO techniques, RBRDO enhances the search for robust optima while obeying reliability-type constraints. In RBRDO, the mean and variance of a performance function are simultaneously minimized subject to probabilistic constraints. A typical RBRDO problem is defined as follows [33]:

$$\begin{aligned} &\text{Min } F\left(\mu_f(\mathbf{x}, \mathbf{p}),\, \sigma_f(\mathbf{x}, \mathbf{p})\right) \\ &\text{s.t. } P\{\mathbf{g}(\mathbf{x}, \mathbf{p}) \le \mathbf{0}\} \ge \mathbf{R}, \quad \mathbf{x}^L \le \mathbf{x} \le \mathbf{x}^U \end{aligned} \quad (8.12)$$
where P{·} is the probability of the statement within the braces being true, and R is the reliability vector specified for the constraint vector. The concept of RBRDO is not novel, and numerous RBRDO methods have already been developed. Du et al. [34] utilized an inverse strategy, employing percentile performance to evaluate the robustness of the objective function and the probabilistic constraints. Youn et al. [35] applied the eigenvector dimension reduction (EDR) method to conduct probability analysis in the RBRDO problem. Rathod et al. [36] conducted a comparative study of different formulations of the RBRDO model and proposed an evolutionary genetic algorithm to optimize the RBRDO model based on a hybrid quality loss function. However, these works have not covered the problem of multi-objective RBRDO. Gonçalo et al. [36] proposed a new approach for the multi-objective optimization of composite structures under the effects of uncertainty in mechanical properties, structural parameters, and external loads. More advanced methods will be proposed in the future.

Exercises
8.1 Explain the definition of RDO. Provide examples of its engineering applications.
8.2 Explain the main differences between RDO and RBDO.
8.3 Briefly describe the three main steps involved in the RDO method.
8.4 List the advantages and disadvantages of the WS method, the PP method, and the CP method used for RDO.
8.5 Suppose you are designing a cantilever beam and attempting to minimize its deflection by adjusting three parameters: width (b), height (h), and length (L). Each parameter has two levels as defined in the Taguchi method [9, 10]. The following experimental data was obtained after conducting a series of experiments. Use the Taguchi method to determine the optimal combination of parameters.
Experiment | b | h | L | Deflection
1 | − | − | − | 0.02
2 | + | − | − | 0.03
3 | − | + | − | 0.04
4 | + | + | − | 0.03
5 | − | − | + | 0.02
6 | + | − | + | 0.04
7 | − | + | + | 0.05
8 | + | + | + | 0.06
References 1. Cheng, J., Lu, W., Hu, W., Liu, Z., Zhang, Y., & Tan, J. (2019). Hybrid reliability-based design optimization of complex structures with random and interval uncertainties based on ASS-HRA. IEEE Access, 7, 87097–87109. 2. Hu, W., Choi, K., & Cho, H. (2016). Reliability-based design optimization of wind turbine blades for fatigue life under dynamic wind load uncertainty. Structural Multidisciplinary Optimization, 54, 953–970. 3. Huan, Z., Zhenghong, G., Fang, X., & Yidian, Z. (2019). Review of robust aerodynamic design optimization for air vehicles. Archives of Computational Methods in Engineering, 26, 685–732. 4. Beyer, H.-G., & Sendhoff, B. (2007). Robust optimization – A comprehensive survey. Computer Methods in Applied Mechanics Engineering, 196(33–34), 3190–3218. 5. Chen, W., Allen, J. K., Tsui, K.-L., & Mistree, F. (1996). A procedure for robust design: Minimizing variations caused by noise factors and control factors. Journal of Mechanical Design, Transactions of the ASME, 118, 478. 6. Gabrel, V., Murat, C., & Thiele, A. (2014). Recent advances in robust optimization: An overview. European Journal of Operational Research, 235(3), 471–483. 7. Zang, C., Friswell, M., & Mottershead, J. (2005). A review of robust optimal design and its application in dynamics. Computers Structures, 83(4–5), 315–326. 8. Chatterjee, T., Chakraborty, S., & Chowdhury, R. (2019). A critical review of surrogate assisted robust design optimization. Archives of Computational Methods in Engineering, 26(1), 245–274. 9. Taguchi, G. (1987). System of experimental design: Engineering methods to optimize quality and minimize costs. UNIPUB/Kraus International Publications. 10. Taguchi, G., & Asian Productivity Organization. (1986). Introduction to quality engineering: Designing quality into products and processes. Asian Productivity Organization. 11. Yao, W., Chen, X., Luo, W., Van Tooren, M., & Guo, J. (2011). Review of uncertainty-based multidisciplinary design optimization methods for aerospace vehicles. Progress in Aerospace Sciences, 47(6), 450–479. 12. Dantzig, G. B. (1955). Linear programming under uncertainty. Management Science, 1(3-4), 197–206. 13. Klir, G., & Yuan, B. (1995). Fuzzy sets and fuzzy logic. Prentice Hall. 14. Aien, M., Rashidinejad, M., & Fotuhi-Firuzabad, M. (2014). On possibilistic and probabilistic uncertainty assessment of power flow problem: A review and a new approach. Renewable and Sustainable Energy Reviews, 37, 883–895. 15. Soroudi, A., & Ehsan, M. (2011). A possibilistic–probabilistic tool for evaluating the impact of stochastic renewable and controllable power generation on energy losses in distribution networks—A case study. Renewable and Sustainable Energy Reviews, 15(1), 794–800. 16. Ben-Haim, Y. (2006). Info-gap decision theory: Decisions under severe uncertainty. Elsevier.
17. Neumaier, A., Fuchs, M., Dolejsi, E., Csendes, T., Dombi, J., Bánhelyi, B., Gera, Z., & Girimonte, D. (2007). Application of clouds for modeling uncertainties in robust space system design. ACT Ariadna Research ACT-RPT-05- European Space Agency. 18. Asafuddoula, M., Singh, H. K., & Ray, T. (2014). Six-sigma robust design optimization using a many-objective decomposition-based evolutionary algorithm. IEEE Transactions on Evolutionary Computation, 19(4), 490–507. 19. Lee, S. W., & Kwon, O. J. (2006). Robust airfoil shape optimization using design for six sigma. Journal of Aircraft, 43(3), 843–846. 20. Tang, Z., & Périaux, J. (2012). Uncertainty based robust optimization method for drag minimization problems in aerodynamics. Computer Methods in Applied Mechanics Engineering, 217, 12–24. 21. Shukla, P. K., & Deb, K. (2007). On finding multiple Pareto-optimal solutions using classical and evolutionary generating methods. European Journal of Operational Research, 181(3), 1630–1652. 22. Chen, W., Wiecek, M. M., & Zhang, J. (1998). Quality utility: A compromise programming approach to robust design. In Proceeding of the international design engineering technical conferences and computers and information in engineering conference (p. V002T002A032). American Society of Mechanical Engineers. 23. Tamiz, M., Jones, D., & Romero, C. (1998). Goal programming for decision making: An overview of the current state-of-the-art. European Journal of Operational Research, 111(3), 569–581. 24. Messac, A. (1996). Physical programming-effective optimization for computational design. AIAA Journal, 34(1), 149–158. 25. Chen, W., Sahai, A., Messac, A., & Sundararaj, G. J. (2000). Exploration of the effectiveness of physical programming in robust design. Journal of Mechanical Design, 122(2), 155–163. 26. Messac, A., & Ismail-Yahaya, A. (2002). Multiobjective robust design using physical programming. Structural Multidisciplinary Optimization, 23, 357–371. 27. Messac, A., & Mattson, C. A. (2002). Generating well-distributed sets of Pareto points for engineering design using physical programming. Optimization Engineering, 3, 431–450. 28. Das, I., & Dennis, J. E. (1998). Normal-boundary intersection: A new method for generating the Pareto surface in nonlinear multicriteria optimization problems. SIAM Journal on Optimization, 8(3), 631–657. 29. Vahidinasab, V., & Jadid, S. (2010). Normal boundary intersection method for suppliers’ strategic bidding in electricity markets: An environmental/economic approach. Energy Conversion Management Science, 51(6), 1111–1119. 30. Lopes, L. G. D., Brito, T. G., Paiva, A. P., Peruchi, R. S., & Balestrassi, P. P. (2016). Robust parameter optimization based on multivariate normal boundary intersection. Computers Industrial Engineering, 93, 55–66. 31. Jin, Y. (2005). A comprehensive survey of fitness approximation in evolutionary computation. Soft Computing, 9(1), 3–12. 32. Jin, Y., & Sendhoff, B. (2003). Trade-off between performance and robustness: An evolutionary multiobjective approach. In Proceeding of the evolutionary multi-criterion optimization: Second international conference, EMO 2003, Faro, Portugal, April 8–11, 2003. Proceedings 2, Springer, pp. 237–251. 33. Shahraki, A. F., & Noorossana, R. (2014). Reliability-based robust design optimization: a general methodology using genetic algorithm. Computers Industrial Engineering, 74, 199–207. 34. Du, X., Sudjianto, A., & Chen, W. (2004). An integrated framework for optimization under uncertainty using inverse reliability strategy. 
Journal of Mechanical Design, 126(4), 562–570. 35. Youn, B. D., & Xi, Z. (2009). Reliability-based robust design optimization using the eigenvector dimension reduction (EDR) method. Structural Multidisciplinary Optimization, 37, 475–492. 36. Rathod, V., Yadav, O. P., Rathore, A., & Jain, R. (2013). Optimizing reliability-based robust design model using multi-objective genetic algorithm. Computers Industrial Engineering, 66(2), 301–310.
Chapter 9
Physics-Informed Neural Networks for Design Optimization Under Uncertainty
Nomenclature
b: Boundary conditions
c: Wave speed
d: Design variable
G: Performance function
i: Initial condition
j: Network model loss
L: Network layer
N: Neural network
𝒩: Nonlinear operator
N_f: Number of sampling points
N_MCS: Sampling size of Monte Carlo simulation
P_F: Probability of failure
P_Fj^Tar: Target probability of failure
t: Time
u: Solution of nonlinear equations
u_d: Threshold value
v: System's viscosity
X: Input variable
x: Spatial coordinate
Y: Network output
Z: Decay rate
z: Network layer output
δ: Perturbation
θ: Network model parameters
λ: Lagrange multipliers
σ: Nonlinear activation function
Ω: Domain
9.1 Introduction
Reliability analysis (RA) is a method that uses the observed values of the system variables to calculate the probability of system failure [1]. The traditional methods of reliability analysis were introduced in detail in Chaps. 5 and 6. Surrogate models are often utilized to regress the limit-state function and predict the probability of failure [2], as introduced in Chap. 3. However, it is hard to obtain enough data for a well-trained surrogate model from simulations in high-dimensional nonlinear problems: in general, the system response is governed by complex ordinary/partial differential equations (ODEs/PDEs), and repeatedly executing the relevant experiments or simulations carries a high computational cost. Recently, physics-informed machine learning (PIML) methods have been proposed to solve PDEs by incorporating prior physical knowledge into common neural networks during the training stage. PIML can be seen as a way of integrating data-driven machine learning models with physics-based mathematical models, such as partial differential equations (PDEs), conservation laws, or symmetries. There are different approaches for implementing PIML, depending on how the physical prior knowledge is represented and incorporated into the machine learning model, including loss function regularization, network architecture design, data augmentation, and model-based optimization [3]. By doing so, PIML can guide the machine learning model toward solutions that are physically plausible, improving accuracy and efficiency even in uncertain and high-dimensional contexts. To address the large demand for real response data during the construction of a surrogate model, the physics-informed neural network (PINN), together with PINN-based RA, is introduced. The governing equation of the system response, in the form of an ODE/PDE, is converted into the loss function of the neural network so as to embed the prior knowledge. The neural network is trained to fulfill the PDE (i.e., the physical laws) with minimum equation loss and can then be used to predict the system response. The loss values during network training are calculated directly from the current model output using the automatic differentiation (AD) technique, so the training data of a PINN model can be obtained without relying on costly and time-consuming high-fidelity simulations. In the RA process, the state parameters and stochastic variables are set as the inputs of the PINN model, and the network model is used to estimate the system response. PINNs are a type of artificial intelligence that combines data and physical laws to solve forward and inverse PDEs [4]. Cuomo et al. [5] provided a comprehensive review of PINNs, with the main objective of describing these networks and their associated strengths and weaknesses, and attempted to incorporate publications on a broader range of collocation-based physics-informed neural networks. Considering the slow convergence and underfitting of PINN models, Nabian et al. [6] presented an active training scheme based on importance sampling. Wu et al. [7] summarized commonly used sampling methods and introduced two residual-based adaptive sampling methods for generating PINN training points. For reliability
applications, Xu et al. [8] provided a comprehensive survey of PIML methods for reliability and system safety applications; Chakraborty [9] used a PINN model to predict complex system responses governed by PDEs and proposed a simulation-free framework for reliability analysis. Based on this framework, Zhang and Shafieezadeh [10] presented an active learning approach that adaptively trains the PINN model in regions of high importance for the characterization of the failure probability, to enhance the training efficiency and accuracy of RA. PINN methods have also been applied to different reliability assessment objects, such as multi-state systems [11], systems with small failure probabilities [12], uncertainty quantification and propagation [13], and realistic industrial problems [14]. In conclusion, PINNs can efficiently and accurately solve some reliability analysis problems thanks to their inherent advantages: they include the physical theories and do not require computationally expensive simulations. In addition, PINNs can potentially carry out uncertainty quantification and propagation, measurement data fusion, and system reliability assessment. In this chapter, a physics-informed method is introduced to replace traditional surrogate models in reliability analysis problems: Sect. 9.2 gives a basic introduction to PINNs; Sect. 9.3 presents the construction of the PINN model used in RA together with several numerical examples; and Sect. 9.4 gives a brief introduction to PINN-based design optimization under uncertainty.
9.2 Basis of Physics-Informed Neural Network
Physics-informed methods are a way of integrating data and mathematical physics models to improve machine learning performance on tasks that involve a physical mechanism [15]. They can leverage physical prior knowledge, such as partial differential equations (PDEs), symmetries, conservation laws, or intuitive physics, to guide the machine learning model towards solutions that are physically plausible and accurate.
9.2.1 Basic Structure of Multi-layer Perceptron
The key idea of PINN is to use a neural network model as a high-precision function approximator. The network model structure reflects the inputs and outputs of the practical application (e.g., finite element analysis inputs and results). Here, a fully connected neural network (FCN) is taken as an example. The FCN consists of an input layer, hidden layers, and an output layer; the output of each layer is used as the input of the next layer after an affine transformation and an activation function. The basic structure of the FCN is shown in Fig. 9.1. Assuming that the number of neurons contained in the l-th layer is nl, the weighted input z_i^l of the i-th neuron of the layer can be expressed as Eq. (9.1):
Fig. 9.1 The structure diagram of the feed-forward neural network
$$z_i^l = \sigma_{l-1}\left( \sum_{k=1}^{n_{l-1}} W_{i,k}^l z_k^{l-1} + b_i^l \right) \quad (9.1)$$
where W_{i,k}^l and b_i^l represent the corresponding weights and biases, respectively, and σ denotes the nonlinear activation function between the layers of the network. The output of each layer of the network can be represented by Eq. (9.2):

$$\begin{aligned} u^L &= \sigma_L\left( W^{L+1} z^L + b^{L+1} \right) \\ z^L &= \sigma_{L-1}\left( W^L z^{L-1} + b^L \right) \\ z^{L-1} &= \sigma_{L-2}\left( W^{L-1} z^{L-2} + b^{L-1} \right) \\ &\;\;\vdots \\ z^1 &= \sigma_0\left( W^1 X + b^1 \right) \end{aligned} \quad (9.2)$$

To simplify the formulation of the model, the output of the entire network is represented by u in Eq. (9.3):

$$u = N(X, \theta) \quad (9.3)$$
where X is the network model input, and θ represents the current network parameters [W, b]. In the actual application process of the neural network model, the network model parameters θ need to be continuously adjusted by minimizing the loss function. The traditional neural network model training relies on the loss function to match the actual observation or experimental data, so it is also called data-driven NN.
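As a concrete illustration of Eqs. (9.1)–(9.3), the sketch below builds such a fully connected network in PyTorch (an assumed framework choice; the layer width, depth, and tanh activation are illustrative, with tanh being the usual choice for PINNs because it is smooth):

```python
import torch
import torch.nn as nn

class FCN(nn.Module):
    # Fully connected network u = N(X; theta): each layer applies an
    # affine map W z + b followed by a nonlinear activation sigma.
    def __init__(self, in_dim=2, out_dim=1, width=20, depth=4):
        super().__init__()
        layers, d = [], in_dim
        for _ in range(depth):
            layers += [nn.Linear(d, width), nn.Tanh()]
            d = width
        layers += [nn.Linear(d, out_dim)]   # linear output layer
        self.net = nn.Sequential(*layers)

    def forward(self, X):
        return self.net(X)

u = FCN()(torch.rand(5, 2))   # five (x, t) input points -> five predictions
print(u.shape)                # torch.Size([5, 1])
```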
9.2.2 Loss Construction Based on Prior Knowledge
To overcome the data dependency of traditional neural networks, the concept of PINN has been proposed in [4]. The PINN model can automatically satisfy the
predefined physical constraints by embedding prior knowledge (i.e., partial differential equations) during the training process. Figure 9.2 describes the basic structure of a PINN model, which reconstructs the loss function of network training. The prior physical knowledge is usually expressed as a partial differential equation, defined as Eq. (9.4):

$$u_t + \mathcal{N}[u, u_x, u_{xx}, \ldots] = 0 \quad (9.4)$$

where u(x, t) is the implicit solution of the nonlinear equation and 𝒩 is a nonlinear operator containing partial differential components. The subscripts in Eq. (9.5) denote the partial derivatives of u(x, t) with respect to x and t:

$$u_t = \frac{\partial u}{\partial t}, \quad u_x = \frac{\partial u}{\partial x}, \quad u_{xx} = \frac{\partial^2 u}{\partial x^2}, \;\ldots \quad (9.5)$$
ð9:6Þ
ut þ N ½u, ux , uxx , . . .] = 0
ð9:7Þ
2
ut =
∂u ∂u ∂ u , ux = , uxx = 2 , . . . ∂x ∂t ∂x
Fig. 9.2 The structure diagram of the physics-informed neural network
ð9:8Þ
220
9
Physics-Informed Neural Networks for Design Optimization Under Uncertainty
where θ is the training parameters of each layer in neural network, which are the weight and bias coefficients in Eq. (9.2). The nonlinear activation function is introduced for the network model to approximate nonlinear input and output. For the nonlinear partial differential expression in Eq. (9.7), the equivalent constraint is defined as shown in Eq. (9.9). Thus, the problem of solving the equations is transformed into a coefficient optimization problem. Further, the neural network can approximate the partial differential equation (PDF) solution by training the weight coefficient θ of each layer in the neural network to make f(x, t) tend to zero in their domain. f ðx, t Þ = ut þ N ½u, ux , uxx , . . . ; λ]
ð9:9Þ
Based on the above method, the equivalent constraint equations, including partial differential equations, initial conditions, and boundary conditions, could be constructed. Then the loss function for neural network training can be built and expressed as: Lp = Lf þ λBC . LBC þ λIC . LIC Lf = LBC =
1 N BC
LIC =
1 N IC
1 Nf
Nf i=1
jf ðxif , t if Þj
ð9:10Þ
2
ð9:11Þ
N BC
juðxiBC , t iBC Þ - uBC i j
i=1 N IC
i=1
2
juðxiIC , 0Þ - uIC i j
2
ð9:12Þ
ð9:13Þ
where Lp, Lf, LBC, and LIC are the loss functions of the PINN, the partial differential equation, the boundary condition, and the initial condition, respectively; λBC and λIC are the Lagrange multipliers of LBC and LIC, respectively; Nf, NBC, and NIC are the numbers of sampling points used in training for the PDE, the boundary condition, and the initial condition, respectively; and uBC and uIC are the data collected at the boundary and initial conditions, respectively, which can be obtained by numerical simulation and experiment. In summary, solving the equation is transformed into optimizing the weight parameters of the network model through the loss function constructed above:

$$W^* = \arg\min_{W \in \theta} L_p(W), \quad b^* = \arg\min_{b \in \theta} L_p(b) \quad (9.14)$$
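A sketch of how Eqs. (9.9)–(9.13) translate into code, assuming PyTorch and using Burgers' equation u_t + u u_x − ν u_xx = 0 as an illustrative choice of the operator 𝒩 (the formulation above is operator-agnostic; all names and values here are assumptions):

```python
import torch

def pde_residual(model, x, t, nu=0.01):
    # f(x, t) = u_t + u u_x - nu u_xx, Eq. (9.9), with all derivatives
    # obtained by automatic differentiation rather than finite differences.
    x = x.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    u = model(torch.cat([x, t], dim=1))
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    return u_t + u * u_x - nu * u_xx

def pinn_loss(model, xf, tf, xb, tb, ub, xi, ti, ui, lam_bc=1.0, lam_ic=1.0):
    # Composite loss of Eq. (9.10): PDE residual term (9.11), boundary
    # mismatch (9.12), and initial-condition mismatch (9.13).
    L_f = pde_residual(model, xf, tf).pow(2).mean()
    L_bc = (model(torch.cat([xb, tb], dim=1)) - ub).pow(2).mean()
    L_ic = (model(torch.cat([xi, ti], dim=1)) - ui).pow(2).mean()
    return L_f + lam_bc * L_bc + lam_ic * L_ic
```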
9.2.3 A Basic Example of PINN
To give a basic understanding of the performance of PINN, a simple propagation model of a 1D sinusoidal wave is built, defined as Eq. (9.15):

$$u(x, t) = \begin{cases} \sin[\pi (t - x)], & t \ge x \\ 0, & t < x \end{cases} \quad (9.15)$$

The propagation diagram of this sinusoidal wave can be seen in Fig. 9.3. The propagation of the sinusoidal wave is governed by the wave equation (Eq. 9.16):

$$\frac{\partial^2 u}{\partial t^2} - c^2 \frac{\partial^2 u}{\partial x^2} = 0 \quad (9.16)$$

where c is the wave speed, with c = 1 in this case. The wave equation can be used as prior knowledge in constructing the loss function of the PINN model, which can be expressed as Eqs. (9.17) and (9.18):

Fig. 9.3 Diagram of sinusoidal wave propagation

$$\text{Loss} = \text{MSE}_f + \text{MSE}_{ic} + \text{MSE}_b \quad (9.17)$$

$$\begin{aligned} f &= \frac{\partial^2 u}{\partial t^2} - c^2 \frac{\partial^2 u}{\partial x^2}, && x \in [0, L], \; t \in [0, T] \\ ic_1 &= u, && x \in [0, L], \; t = 0 \\ ic_2 &= \frac{\partial u}{\partial t}, && x \in [0, L], \; t = 0 \\ b_i &= u - \sin(\pi t), && x = 0, \; t \in [0, T] \\ b_o &= u, && x = L, \; t \in [0, T] \end{aligned} \quad (9.18)$$
where f corresponds to the residual of the partial differential equation; ic is the initial condition, which specifies the displacement and velocity of each point at the initial time; and b imposes the boundary conditions on the two ends of the domain: on the left end a sinusoidal oscillation is applied to drive the system, while on the right end the boundary is fixed to zero. The loss function consists of the above three components and is calculated as the sum of the mean square errors (MSE) at the sampling points of the respective regions. For training the network, 1000 training points were generated using Latin hypercube sampling, and the Adam optimizer was run for 1500 iterations. The results of the prediction at each time are shown in Fig. 9.4. At each time, the predictions of the neural network (solid lines) and the exact values (dashed lines) in the left panels show a high degree of consistency. In the right panels, the prediction accuracy of the model is quantified by the MSE between the predicted and exact results. The MSE lies in the range from $10^{-6}$ to $10^{-5}$, which shows that a fully trained PINN model can predict the response of the one-dimensional sinusoidal system well on a global scale.
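A compact training sketch for this example, assuming PyTorch and SciPy's Latin hypercube design; the domain size L = T = 6 and the network architecture are assumptions made for illustration (the text above specifies only the 1000 LHS points and the 1500 Adam iterations):

```python
import torch
import torch.nn as nn
from scipy.stats import qmc

torch.manual_seed(0)
L_DOM, T_DOM, C = 6.0, 6.0, 1.0   # assumed domain size; c = 1 as in the text

net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(),
                    nn.Linear(32, 1))

def grad_(u, var):
    # First derivative of u with respect to var via automatic differentiation.
    return torch.autograd.grad(u, var, torch.ones_like(u), create_graph=True)[0]

# 1000 interior collocation points from Latin hypercube sampling
pts = torch.tensor(qmc.LatinHypercube(d=2, seed=0).random(1000), dtype=torch.float32)
x_f = (pts[:, :1] * L_DOM).requires_grad_(True)
t_f = (pts[:, 1:] * T_DOM).requires_grad_(True)

t_b = torch.rand(200, 1) * T_DOM            # boundary samples at x = 0 and x = L
x_i = torch.rand(200, 1) * L_DOM            # initial-condition samples at t = 0
t_i = torch.zeros(200, 1, requires_grad=True)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(1500):                        # 1500 Adam iterations, as in the text
    opt.zero_grad()
    u = net(torch.cat([x_f, t_f], dim=1))
    res = grad_(grad_(u, t_f), t_f) - C**2 * grad_(grad_(u, x_f), x_f)  # Eq. (9.16)
    u_l = net(torch.cat([torch.zeros_like(t_b), t_b], dim=1))
    u_r = net(torch.cat([torch.full_like(t_b, L_DOM), t_b], dim=1))
    u_0 = net(torch.cat([x_i, t_i], dim=1))
    loss = (res.pow(2).mean()                                  # MSE_f
            + (u_l - torch.sin(torch.pi * t_b)).pow(2).mean()  # b_i: driven left end
            + u_r.pow(2).mean()                                # b_o: fixed right end
            + u_0.pow(2).mean()                                # ic_1: u = 0 at t = 0
            + grad_(u_0, t_i).pow(2).mean())                   # ic_2: u_t = 0 at t = 0
    loss.backward()
    opt.step()
```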
9.3 Reliability Analysis Based on Physics-Informed Neural Network
Chapters 5 and 6 introduced time-independent and time-dependent reliability analysis, respectively. In this section, a PINN-based reliability analysis method is presented. The probability of failure PF is defined as:

$$P_F = \int \cdots \int_{G(\cdot) < 0} f_X(\mathbf{x}) \, d\mathbf{x} \quad (9.19)$$
where X = (X1, X2, …, Xd) is a d-dimensional random input variable, fX(x) is the joint PDF of X, and G(·) is the performance function of the system. In reliability analysis, the limit state is defined by setting the system performance function G(·) equal to zero, and a failure event is recognized when G(·) < 0.
Fig. 9.4 Results of the wave propagation prediction using PINN. (a) t = 1, (b) t = 2, (c) t = 3, (d) t = 4, (e) t = 5, and (f) t = 6
The PINN method introduced in Sect. 9.2 is extended here to solve reliability analysis problems. The stochastic variables are set as inputs of the PINN model, the output of the PINN model is the system response, and a performance function G(X) of the system is constructed from that response. Consider the performance function G(X) represented as

$$G(\mathbf{X}) = u(\xi) - u_0 \quad (9.20)$$
where u(ξ) denotes the system response and u0 indicates the threshold value that identifies failure. It is also assumed that u(ξ) can be obtained by solving a stochastic PDE of the form

$$u_t + \mathcal{N}[u, u_x, u_{xx}, \ldots; \xi] = 0 \quad (9.21)$$
where ξ is the uncertainty of the system. The stochastic response can be described by the PINN model as

$$u(x, t, \xi) \approx \hat{u}(x, t, \xi) = N(x, t, \xi; \theta) \quad (9.22)$$

where the stochastic variable ξ is set as an input of the neural network.
By employing a loss function embedded with the prior knowledge in the form of the PDE during the training process, the PINN model automatically satisfies the physical constraints to which the system response adheres and can then be used to predict the system response:

$$\hat{u}(x, t, \xi) = u_{b,i}(x_b, t_0) + C_{bi} \cdot N(x, t, \xi) \quad (9.23)$$
where the function Cbi is defined by Eq. (9.24), in which Ωb,i stands for the region associated with the boundary and initial conditions:

$$C_{bi} = \begin{cases} 0, & (x, t) \in \Omega_{b,i} \\ 1, & (x, t) \notin \Omega_{b,i} \end{cases} \quad (9.24)$$
The loss function used in PINN-based reliability analysis has the same form as Eq. (9.10); the only difference is the definition of the residual of the PDE. Focusing on Eq. (9.21), the neural network and AD are used to calculate the derivatives present in the PDE:

$$\begin{aligned} u_t &= \frac{\partial u}{\partial t} \approx \frac{\partial \hat{u}}{\partial t} = \frac{\partial N(x, t, \xi; \theta)}{\partial t} = N_t(x, t, \xi; \theta) \\ u_x &= \frac{\partial u}{\partial x} \approx \frac{\partial \hat{u}}{\partial x} = \frac{\partial N(x, t, \xi; \theta)}{\partial x} = N_x(x, t, \xi; \theta) \\ u_{xx} &= \frac{\partial^2 u}{\partial x^2} \approx \frac{\partial^2 \hat{u}}{\partial x^2} = \frac{\partial^2 N(x, t, \xi; \theta)}{\partial x^2} = N_{xx}(x, t, \xi; \theta) \end{aligned} \quad (9.25)$$
The PDE residual can then be defined as

$$R_{PDE} = N_t(x, t, \xi; \theta) + \mathcal{N}\left[ N(x, t, \xi; \theta), N_x(x, t, \xi; \theta), N_{xx}(x, t, \xi; \theta), \ldots \right] \quad (9.26)$$
The network training process follows the general form of PINN training:
1. Generate collocation points $D = \{x^i, t^i, \xi^i\}_{i=1}^{N_c}$ using a suitable DOE scheme.
2. Formulate the loss function as

$$L(\theta) = \frac{1}{N_c} \sum_{k=1}^{N_c} \left[ R_{PDE} \right]^2 \quad (9.27)$$

3. Compute θ by minimizing the loss function:

$$\theta^* = \arg\min_{\theta} L(\theta) \quad (9.28)$$
When the training converges and the model parameters θ are determined, N(x, t, ξ; θ) can predict the system response at any location point (x*, t*, ξ*).
Finally, the performance function G(x) can be constructed from the system response û(x, t, ξ) predicted by the PINN, and reliability analysis can be conducted by calculating the probability of failure through Monte Carlo simulation (MCS):

$$G(\mathbf{X}) \approx \hat{G}(\mathbf{x}) = \hat{u}(\xi) - u_0 \quad (9.29)$$

$$P_F \approx \hat{P}_F = \frac{1}{N_{MCS}} \sum_{j=1}^{N_{MCS}} I\left( \hat{G}(\mathbf{X}) \right) \quad (9.30)$$
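Equations (9.29) and (9.30) amount to a few lines of code once a trained response model û is available; in the sketch below a closed-form stand-in replaces the PINN so the example runs on its own, with failure taken as G < 0 as in Eq. (9.19). All names are illustrative.

```python
import numpy as np

def estimate_pf(u_hat, sample_xi, u0, n_mcs=10**6, seed=0):
    # Eqs. (9.29)-(9.30): draw realizations of the stochastic input,
    # evaluate the (trained) response surrogate u_hat, and count failures.
    rng = np.random.default_rng(seed)
    xi = sample_xi(rng, n_mcs)
    g = u_hat(xi) - u0                 # Eq. (9.29)
    return float(np.mean(g < 0.0))     # Eq. (9.30), failure when G < 0

# Usage with a closed-form stand-in for the PINN prediction:
pf = estimate_pf(u_hat=lambda z: np.exp(-z),                 # u(xi) = e^{-xi}
                 sample_xi=lambda rng, n: rng.normal(-2.0, 1.0, n),
                 u0=0.5)
print(pf)   # approx 0.0035
```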
where NMCS is the sampling size of the MCS.

Exercises for PINN-Based RA

Problem 1: Reliability analysis with a limit-state function in the form of an ODE. Given the simple stochastic ODE (Eq. 9.31)

$$\frac{du}{dt} = -Z u \quad (9.31)$$

where the decay rate Z is a stochastic variable following a normal distribution, Z ~ N(μ, σ²), with μ = −2 and σ = 1. The ODE is subject to the initial condition (Eq. 9.32)

$$u(t = 0) = u_0 \quad (9.32)$$

with the initial value u0 = 1. The limit-state function for reliability analysis is defined as

$$G(u) = u - u_d \quad (9.33)$$

where ud = 0.5 is the threshold value.

Solution (Table 9.1)
Table 9.1 Results of several RA methods for Problem 1, adapted from [9]

Methods | Pf | β | NS | E = |βe − β|/βe
Exact | 0.003539 | 2.6932 | — | —
MCS | 0.0035 | 2.6949 | 10⁶ | 0.06%
FORM | 0.0036 | 2.6874 | 42 | 0.21%
SORM | 0.0036 | 2.6874 | 44 | 0.21%
IS | 0.0034 | 2.7074 | 1000 | 0.52%
DS | 0.0034 | 2.7074 | 7833 | 0.52%
SS | 0.0030 | 2.7456 | 1199 | 1.95%
PI-DNN | 0.0035 | 2.6949 | 0 | 0.06%
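The "Exact" row of Table 9.1 can be reproduced in closed form if one assumes the response is evaluated at t = 1, so that u = u0 e^{−Z}; under that assumption, failure u < ud is equivalent to Z > ln(u0/ud), and β and Pf follow directly (a crude MCS check is included). The time instant is an assumption made for this sketch; the problem statement above does not fix it.

```python
import numpy as np
from scipy.stats import norm

mu, sigma, u0, ud, t = -2.0, 1.0, 1.0, 0.5, 1.0   # t = 1 is assumed

# u(t) = u0 * exp(-Z t); failure when u < ud, i.e. Z > ln(u0/ud) / t.
z_crit = np.log(u0 / ud) / t
beta = (z_crit - mu) / sigma
print(beta, norm.sf(beta))      # 2.6931..., 0.00354 (the 'Exact' row)

Z = np.random.default_rng(0).normal(mu, sigma, 10**6)   # MCS check, Eq. (9.30)
print(np.mean(u0 * np.exp(-Z * t) < ud))
```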
Problem 2: Reliability analysis with a limit-state function in the form of Burgers' equation. Consider the viscous Burgers' equation (Eq. 9.34)

$$u_t + u u_x = v u_{xx} \quad (9.34)$$

where u(x, t) is the solution of Burgers' equation, which can be interpreted as the system response; v is the system's viscosity, with v > 0; and x is a variable on [−1, 1]. The equation is subject to the following boundary conditions (Eq. 9.35)

$$u(x = -1) = 1 + \delta, \quad u(x = 1) = -1 \quad (9.35)$$

and the initial condition of the system

$$u(t = 0, x) = -1 + (1 - x)\left(1 + \frac{\delta}{2}\right) \quad (9.36)$$

where δ stands for a small perturbation, a uniformly distributed variable δ ~ U(0, e) with e ≪ 1. The limit-state function is defined as (Eq. 9.37)

$$J(z(\delta)) = -z(\delta) + z_0 \quad (9.37)$$

where z0 is the threshold. For this example, e = 0.1 and z0 = 0.45.

Solution (Table 9.2)
Table 9.2 Results of several RA methods for Problem 2, adapted from [9]

Methods | Pf | β | Ns | E = |βe − β|/βe
MCS | 0.1037 | 1.2607 | 10,000 | —
FORM | 0.1091 | 1.2313 | 58 | 2.33%
SORM | 0.1091 | 1.2313 | 60 | 2.33%
IS | 0.1126 | 1.2128 | 1000 | 3.80%
DS | 0.0653 | 1.5117 | 4001 | 19.9%
SS | 0.0800 | 1.4051 | 1000 | 11.45%
PI-DNN | 0.0999 | 1.2821 | 0 | 1.70%
9.4 PINN-Based Design Optimization Under Uncertainty
PINN-Based Design Optimization Under Uncertainty
Reliability-based design optimization (RBDO) is proposed to optimize design that is characterized by a low probability of failure. By giving a specific level of risk and reliability, RBDO aims to determine the optimum design of the products/systems. The detailed concept of RBDO has been introduced in Chap. 7. In this section, the previously proposed PINN-RA method is applied in the operation process of RBDO to improve its performance, and then the PINN-based RBDO method will be proposed. The mathematical formulation of the general component level of RBDO problem is introduced in Chap. 7, expressed as minimize
CostðdÞ
subject to
P½Gj ðXÞ > 0] ≤ PTar F j , j = 1, . . . , nc
ð9:38Þ
dL ≤ d ≤ dU , d 2 Rnd and X 2 Rnr
where d is the design variable vector, i.e., the mean of the nd-dimensional random variables Xrv = {X1, X2, …, Xnd}T; X = {Xrv, Xrp}T, where Xrp represents the random parameters of the random input X; PFjTar is the target probability of failure, evaluated by reliability analysis, for the j-th constraint; and nc and nd are the numbers of probabilistic constraints and design variables, respectively. The traditional way of performing RBDO involves two loops: an outer loop that optimizes the design variables and an inner loop that evaluates the reliability constraints. Approximation and surrogate methods that reduce the computation time for evaluating the reliability constraints were introduced in Chap. 7: in the RBDO process, surrogate models such as Kriging and support vector machines (SVM) are used to estimate the reliability index or failure probability approximately and thereby reduce the computational cost. However, the construction of a surrogate model relies on actual system responses, and acquiring response data in massive quantities incurs prohibitive computational costs for some high-dimensional nonlinear problems. To solve these problems, a PINN model is constructed to approximate the system performance function. The response of the system can be expressed by the PINN model as

$$u(\mathbf{X}) \approx \hat{u}(\mathbf{X}) = N(\mathbf{X}, \theta) \quad (9.39)$$
where u is the response of the system and û is the value predicted by the PINN model. The design variables and the random parameters are the inputs of the PINN model N(·). The constraint function Gj(X) has the same form as Eq. (9.29) in the previous section, so the PINN model for Gj(X) can be denoted Ĝj(X):
$$G_j(\mathbf{X}) \approx \hat{G}_j(\mathbf{X}) = \hat{u}(\mathbf{X}) - u_d \quad (9.40)$$
where ud indicates the threshold value that identifies failure. The probabilistic constraints can then be expressed using MCS as

$$P_{F_j} = P\left[ G_j(\mathbf{X}) > 0 \right] \cong P\left[ \hat{G}_j(\mathbf{X}) > 0 \right] = \frac{1}{M} \sum_{m=1}^{M} I_{\hat{\Omega}_{F_j}}\left( \mathbf{x}^{(m)} \right) \quad (9.41)$$
where M is the sampling size of the MCS, x(m) is the m-th realization of X, and the failure set Ω̂Fj for the PINN model is defined as Ω̂Fj = {x : Ĝj(x) > 0}. For design optimization, sensitivity analysis is a crucial step, as it quantifies the impact of parameter variations on the optimal objective value and the optimal point. One frequently used approach is to employ a surrogate model to approximate the probabilistic response; however, this approach needs actual response data from expensive simulations or observations, and it may provide inaccurate sensitivity information when the models cannot obtain sufficient and accurate training data. In Chap. 7, a sampling-based stochastic sensitivity analysis method that employs a score function was introduced. Here, the PINN model is incorporated into that approach, yielding a PINN-based sensitivity analysis method. The derivative of the probability of failure with respect to the i-th design variable μi is given in Chap. 7 as

$$\frac{\partial P_F(\psi)}{\partial \mu_i} = \frac{\partial}{\partial \mu_i} \int_{\mathbb{R}^{nr}} I_{\Omega_F}(\mathbf{x}) f_X(\mathbf{x}; \boldsymbol{\mu}) \, d\mathbf{x} \quad (9.42)$$
and the first-order score function for μ_i is defined as

$$
s^{(1)}_{\mu_i}(\mathbf{x}; \boldsymbol{\mu}) = \frac{\partial \ln f_{\mathbf{X}}(\mathbf{x}; \boldsymbol{\mu})}{\partial \mu_i}
\tag{9.43}
$$
The sensitivity of the probabilistic constraint can then be approximated as

$$
\frac{\partial P_{F_j}}{\partial \mu_i} \cong \frac{1}{M} \sum_{m=1}^{M} I_{\hat{\Omega}_{F_j}}\left(\mathbf{x}^{(m)}\right) s^{(1)}_{\mu_i}\left(\mathbf{x}^{(m)}; \boldsymbol{\mu}\right)
\tag{9.44}
$$
where s^(1)_{μi}(x^(m); μ) is the first-order score function, calculated as in Sect. 7.4.3. After the probability of failure has been analyzed, the design procedure can be conducted. The complete numerical procedure of the proposed PINN-based RBDO consists of four main steps:
1. Construct a PINN model that approximates the system performance function. Because the PINN is trained with physics-informed constraints, actual response data from simulations or observations are not needed.
2. Use the PINN model to approximate the probabilistic constraints. The probabilistic constraints are expressed as probabilities of failure, calculated by Monte Carlo simulation (MCS) with the PINN model, as in Eq. (9.41).
3. Perform sensitivity analysis to quantify the impact of parameter variations on the optimal objective value and the optimal point. The sensitivity analysis uses the score function, which measures how sensitive the probability of failure is to changes in the design variables; the probability of failure used with the score function is also calculated by MCS with the PINN model, as in Eq. (9.44).
4. Apply an optimization algorithm that can handle both deterministic and probabilistic constraints to find the optimal design variables that minimize the objective function, subject to the probabilistic constraints and any other deterministic constraints.

Figure 9.5 is a flowchart of PINN-based RBDO; a minimal numerical sketch of steps 2 and 3 follows the figure caption below.
Fig. 9.5 Flowchart of PINN-based RBDO
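To make steps 2 and 3 concrete, the following sketch estimates a failure probability and its design sensitivity by MCS with a surrogate. The function g_hat, the Gaussian input model, and all parameter values are illustrative placeholders standing in for a trained PINN, not the book's example:

```python
import numpy as np

# Minimal sketch of steps 2 and 3. `g_hat` is a placeholder standing in
# for a trained PINN surrogate of one constraint G_j(X); inputs are
# independent Gaussians X_i ~ N(mu_i, sigma_i^2). All values illustrative.

def g_hat(x):
    # Placeholder surrogate; failure when g_hat(x) > 0.
    return x[:, 0] ** 2 + x[:, 1] - 5.0

def mcs_failure_and_sensitivity(mu, sigma, M=100_000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(mu, sigma, size=(M, len(mu)))         # samples x^(m)
    indicator = (g_hat(x) > 0.0).astype(float)           # I_{Omega_F}, Eq. (9.41)
    p_f = indicator.mean()                               # failure probability
    # First-order score function for a Gaussian mean, Eq. (9.43):
    # d ln f_X / d mu_i = (x_i - mu_i) / sigma_i^2
    score = (x - mu) / sigma ** 2
    dpf_dmu = (indicator[:, None] * score).mean(axis=0)  # Eq. (9.44)
    return p_f, dpf_dmu

p_f, grad = mcs_failure_and_sensitivity(np.array([1.0, 2.0]),
                                        np.array([0.3, 0.3]))
print(p_f, grad)
```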
Chapter 10
Engineering Applications of Design Optimization Under Uncertainty
Nomenclature

AFP        Automated fiber placement
AFROM      Auxiliary fuzzy robust optimization model
ASSA       Augmented step size adjustment
AUV        Autonomous underwater vehicle
BP         Back propagation
CFD        Computational fluid dynamics
CF         Creep-fatigue
CFRP       Carbon fiber reinforced polymer
CGF        Cumulant generating function
CLSC       Closed-loop supply chain
CS         Constant stiffness
CTS        Continuous tow shearing
EV         Electric vehicle
FEA        Finite element analysis
GA         Genetic algorithm
HHC        Home health care
HHCRSP     Home health care route scheduling problem
H-LCF      High and low cycle fatigue
HM         Horsetail matching
IGA        Isogeometric analysis
KDE        Kernel density estimation
KL         Karhunen-Loeve
LCF        Low cycle fatigue
LSF        Limit state function
MCDM       Multi-criteria decision-making
MCS        Monte Carlo simulation
MIP        Mixed integer programming
MILP       Mixed-integer linear programming
MLE        Maximum likelihood estimation
MOMFPFRP   Multi-objective mixed fuzzy possibilistic flexible robust programming
MOPSO      Multi-objective particle swarm optimization
MRBIS      Multimodal radial-based importance sampling
MVSOSA     Mean-value second-order saddle point approximation
NSGA-II    Non-dominated sorting genetic algorithm II
OWT        Offshore wind turbine
PC         Prestressed concrete
PCE        Polynomial chaos expansion
PMA        Performance measurement approach
PS         Power system
PSs        Photovoltaic systems
PSO        Particle swarm optimization
RBDO       Reliability-based design optimization
RBF        Radial basis function
RDO        Robust design optimization
RIA        Reliability index approach
RVE        Representative volume element
SA         Saddlepoint approximation
SIMP       Solid isotropic materials with penalization
SORA       Sequence optimization and reliability assessment
SRPS       Sustainable and reliable power system
TH method  Torabi and Hassini method
TS         Tabu search
TST        Trip and service times
VRB-VCS FLB  Variable-roll-blank and variable-cross-sectional shape front longitudinal beam
VS         Variable stiffness
WCA        Water cycle algorithm
YAG method Yavari and Geraeli method
10.1 RBDO Engineering Applications
RBDO aims to ensure the safe and reliable operation of a product or system. Applying RBDO can decrease operating and maintenance costs while increasing product durability and safety. This section illustrates RBDO engineering applications in four fields: aeronautical engineering, ocean engineering, bridge engineering, and vehicle engineering.
10.1.1 Aeronautical Engineering
To ensure the reliability of aerospace structures in a complex atmospheric environment, the design optimization of aerospace structures for fail-safety needs to consider uncertainties whose sources can be divided into three categories [1]. The first type refers to the uncertainty of structural parameters, which can be described as aleatory, epistemic, or hybrid uncertainty. The second type is the partial collapse of the airframe when an unexpected event occurs. The last type concerns the characteristics of debris during engine failure, such as the number of impacts or the location and size of holes in the fuselage. This section reviews engineering examples of design optimization under uncertainty for variable stiffness (VS) composite laminates, compressor discs, and aircraft tail fuselages.
10.1.1.1 VS Composite Plate
In the RBDO problem of lightweight design of VS composite laminates under manufacturing constraints, the uncertainty mainly comes from the properties of the heterogeneous material built up from various composite laminae. The uncertainty distribution of the material properties is related to the lamination parameters and the laminate thickness. Note that the number of layers of a composite laminate is discrete, so traditional RBDO methods cannot solve such problems directly. Hao et al. [2] investigated the RBDO problem for lightweight design of VS composite laminates with discrete random variables and solved it by transforming the discrete random variables into continuous ones. In this RBDO problem, the weight is measured by the total thickness of the laminate, and the constraint keeps the buckling load on the safe side of a prescribed threshold. A two-stage RBDO framework is then built. In the first stage, gradient-based optimization is performed directly on the lamination parameters and layer thickness to obtain an approximate optimal number of layers. In the second stage, the discrete layers are transformed into continuous variables using intermediate density variables. Then, inspired by the idea of solid isotropic materials with penalization (SIMP), the buckling analysis and sensitivity derivation are carried out by isogeometric analysis (IGA), while the reliability analysis is performed by the augmented step size adjustment (ASSA) method. Finally, the optimal lightweight design is obtained through gradient-based optimization, which realizes the use of gradient optimization in RBDO problems with discrete random variables. The optimization results show that, compared to the initial design, the maximum buckling load still satisfying the constraints is reduced by 18.3% and the weight is reduced by 12.5%.
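The discrete-to-continuous transformation can be pictured with a SIMP-style penalization: each candidate ply gets a continuous density variable in [0, 1], and intermediate densities are made structurally inefficient so the optimum is pushed toward 0/1 (i.e., discrete) plies. A minimal sketch under an assumed ply thickness and penalization exponent, not the exact intermediate-density scheme of Hao et al. [2]:

```python
import numpy as np

# SIMP-style relaxation of a discrete ply count (illustrative scheme,
# not the exact intermediate-density formulation of Hao et al. [2]).
t_ply = 0.125                  # thickness of a single ply (mm), assumed
n_max = 40                     # number of candidate plies
p = 3.0                        # penalization exponent, as in SIMP

def laminate_thickness(rho):
    """Continuous total thickness from ply densities rho in [0, 1]."""
    return t_ply * np.sum(rho)

def penalized_stiffness_fraction(rho):
    """Penalized stiffness contribution: rho**p makes intermediate
    densities inefficient, driving the optimum toward 0/1 plies."""
    return np.sum(rho ** p) / n_max

rho = np.full(n_max, 0.5)      # an intermediate (non-discrete) design
print(laminate_thickness(rho))            # 2.5 mm of "material"
print(penalized_stiffness_fraction(rho))  # only 12.5% of full stiffness
```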
10.1.1.2 Compressor Disc
A high-pressure compressor disc is generally subjected to mechanical stress from centrifugal loads, vibration stress, and thermal stress during operation. The failure modes of a compressor disc in service therefore include low cycle fatigue (LCF) at the center [3, 4], creep-fatigue (CF) at the rim [5], and combined high and low cycle fatigue (H-LCF) at the slot [6]. These failure modes are strongly correlated because they share working conditions and geometry. Meanwhile, uncertainties in geometry, applied loads, and material properties affect the reliability of the high-pressure compressor disc. To reduce the influence of these uncertainties, Liu et al. [7] proposed an RBDO method that treats the ill-posed multi-mode model of the compressor disc, with multiple correlated failure modes, from a multi-objective perspective. Their method follows the double-loop strategy. First, the pair-copula function [8] is used to quantify the multivariate correlation between failure modes. Combined with a reliability allocation method [9], the reliability requirements of the system-level structural targets are allocated to the reliability requirements of each failure mode at the subsystem level, eliminating repeated calculation of the structural reliability in the double-loop framework. Because of the complex correlation between failure modes, the lifetime of the structure cannot be fitted with a generalized distribution function; therefore, a non-parametric kernel density estimation (KDE) method [10] is used to estimate the PDF of the structure's lifetime. Finally, an improved multi-objective particle swarm optimization (MOPSO) algorithm [11, 12], which integrates a niche sorting strategy and dynamic inertia weight, is used to solve the ill-conditioned multi-objective problem. The niche sorting strategy reduces the fitness of similar particles to promote diversity, while the dynamic inertia weight adjusts the search speed with the iteration count to reach the global optimum efficiently and accurately. The optimized results show that the maximum von-Mises stress and strain of the compressor disc are reduced by 1.63% and 5.93%, respectively, and the reliability design target of 99% is satisfied.
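The KDE step can be illustrated in a few lines: given lifetime samples from the correlated failure-mode simulation, a Gaussian-kernel density estimate yields a smooth lifetime PDF and tail probabilities. The sample data below are illustrative, not from [7]:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Illustrative lifetime samples (cycles); in [7] these would come from
# the correlated multi-failure-mode simulation of the compressor disc.
rng = np.random.default_rng(0)
lifetimes = np.concatenate([rng.lognormal(9.0, 0.3, 800),
                            rng.lognormal(9.6, 0.2, 200)])

kde = gaussian_kde(lifetimes)          # bandwidth by Scott's rule
grid = np.linspace(lifetimes.min(), lifetimes.max(), 500)
pdf = kde(grid)                        # estimated lifetime PDF

# Probability that life falls below a required service life:
required_life = 8000.0
p_short = kde.integrate_box_1d(0.0, required_life)
print(f"P(life < {required_life:.0f} cycles) ~ {p_short:.4f}")
```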
10.1.1.3 Aircraft Tail Fuselage
Design optimization of the tail fuselage considering local collapse is a multi-model optimization problem, because engine failure or blade release randomly impacts the tail fuselage and causes different degrees of local collapse. Cid et al. [54] applied the concept of multi-model optimization to existing uncertainty optimization methods and compared the advantages and disadvantages of the traditional RBDO methods (the performance measurement approach (PMA) and sequence optimization and reliability assessment (SORA)) with the horsetail matching (HM) method [11, 12].
In this case, the target of the RBDO is to minimize the weight of the aircraft while keeping the von-Mises stress under the applied loads below a maximum threshold. The Young's modulus of the structural material is treated as the uncertainty. The local collapse model considers the three failure scenarios most harmful to the fuselage structure, namely blade failure, turbine disk failure, and unducted blade failure in open rotor engines, corresponding to eight local collapse models. To apply the HM technique within the multi-model optimization method, the external penalty function method is used to combine the objective function and design constraints, transforming the constrained problem into an unconstrained one. Using the multi-model optimization technique [13–15], the PMA, SORA, and HM methods are compared on the multi-model RBDO problem of the aircraft tail. The optimization results show that all methods increase the weight of the tail fuselage by 39.42% and obtain a damage-resistant structure with a reliability index of 3.719; the weight increase is the price of assuring reliability under uncertainty. According to their conclusions, the traditional RBDO methods can guarantee the reliability index in all applicable limit states, while HM enhances the ability to obtain non-dominated designs at the same objective function values.
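The exterior penalty transformation used above replaces min f(x) subject to g_i(x) ≤ 0 with an unconstrained objective that penalizes constraint violation. A minimal sketch with a quadratic exterior penalty and an illustrative two-variable problem, not the exact weighting used in [54]:

```python
import numpy as np
from scipy.optimize import minimize

# Generic exterior penalty: F(x) = f(x) + r * sum(max(0, g_i(x))**2).
def f(x):                      # objective (illustrative)
    return x[0] ** 2 + x[1] ** 2

def g(x):                      # constraints g_i(x) <= 0 (illustrative)
    return np.array([1.0 - x[0] - x[1]])   # feasible iff x0 + x1 >= 1

def penalized(x, r):
    viol = np.maximum(0.0, g(x))
    return f(x) + r * np.sum(viol ** 2)

x0 = np.array([0.0, 0.0])
for r in [1.0, 10.0, 100.0, 1000.0]:       # increase penalty weight
    res = minimize(penalized, x0, args=(r,), method="BFGS")
    x0 = res.x                              # warm start next subproblem
print(x0)   # approaches the constrained optimum (0.5, 0.5)
```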
10.1.1.4 Real-World Engineering Application
In recent decades, the potential of fiber-reinforced composites for lightweight design has been evident in aerospace applications. With the development of composite manufacturing techniques such as automated fiber placement (AFP) and continuous tow shearing (CTS) [16], fiber paths can be laid along pre-designed curves, as shown in Fig. 10.1. Therefore, compared to a constant stiffness (CS) fiber path, a curved fiber path can freely distribute rigid and flexible regions, thereby improving the buckling response of VS laminates.
Fig. 10.1 (a) A square panel with linearly varying fiber orientations and (b) variable-stiffness laminates containing different fiber plies

Fig. 10.2 The square panel subject to combined compression-shear load under the fully simply supported boundary condition
However, uncertainties in material properties and manufacturing may cause the performance of the laminate to deviate from design expectations. Hao et al. [2] studied the RBDO problem of a square panel made of carbon fiber composite T300/5208 with a side length of 254 mm and proposed a bi-stage optimization method to find the optimal RBDO solution. The square panel is subjected to combined compression and shear loads under fully simply supported boundary conditions; the panel and load distribution are shown in Fig. 10.2. The first stage of the bi-stage method performs deterministic optimization, with the total thickness of the CS laminate as the objective and the buckling load response within the feasible region of the panel parameters as constraints, to obtain an approximate number of layers. In the second stage, RBDO is performed with the panel mass as the objective function and the buckling load response under material property uncertainty as the probabilistic constraint. It is worth mentioning that intermediate density variables are introduced to avoid a discrete programming problem (the number of laminated layers is discrete), so that the number of layers can be treated as a continuous variable; readers interested in the specific method can refer to [2]. The target reliability of the problem is set to β = 3. The RBDO program uses the reliability index approach (RIA), with the ASSA algorithm in the reliability analysis loop. After optimization, the weight of the laminate is reduced by 12.5% compared to the original design.
10.1.2 Ocean Engineering
Ocean structures are exposed to harsh working environments. The failure of ocean structures is usually due to fatigue, buckling, erosion, weld cracking, excessive deflection, corrosion, vibration, fouling, etc. [17, 18].
Structural fatigue failure caused by environmental loads (such as waves or sea wind) is common. Therefore, the uncertainty of such environmental loads needs to be quantified so that reliability analysis with respect to fatigue can be performed well [19]; existing methods cannot fully quantify the uncertainty of waves [20]. Meanwhile, the strength design of ocean structures must consider the coupling of multiple disciplines (e.g., fluid mechanics, structural mechanics, and fracture mechanics), which may ultimately affect the safety of marine structures. Three engineering examples of ocean structure optimization under uncertainty are introduced: the lightweight optimal design of the head pressure shell of an autonomous underwater vehicle (AUV), the optimization of the stress distribution on a wellhead platform, and the lightweight design of the support structure of an offshore wind turbine (OWT).
10.1.2.1 Autonomous Underwater Vehicle's Head Pressure Shell
The head pressure shell provides a secure working environment for the sensors and other functional systems of an AUV in the unpredictable ocean wave environment. Affected by the uncertainty of ocean wave loads, buckling deformation is an important failure mode of the head pressure shell. Li et al. [21] applied a grid sandwich structure to the head pressure shell of an AUV. The RBDO mathematical model is established by setting the total weight of the head pressure shell and the maximum von-Mises stress as the objectives and the strength requirement as the constraint. A back propagation (BP) neural network model is built from the finite element results of the head pressure shell, with its parameters trained by the particle swarm optimization (PSO) algorithm. The distribution of the maximum von-Mises stress is obtained by random sampling of the optimal neural network. Combined with the distribution of the yield strength, which is assumed to follow a normal distribution, the stress-strength interference model is established; with this model, the failure probability of the head pressure shell is obtained by integration. Finally, the optimization problem is solved by a genetic algorithm (GA). The proposed method achieves a 38.26% weight reduction of the grid sandwich head pressure shell compared with the solid pressure shell. Meanwhile, the head pressure shell of the grid sandwich structure reaches the 3-σ level of the random variables, which ensures sufficient reliability of the AUV.
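For independent normally distributed stress and strength, the stress-strength interference integral has a closed form, which makes the failure probability a one-line computation. A minimal sketch with illustrative parameter values, not those of [21]:

```python
from math import sqrt
from statistics import NormalDist

# Stress-strength interference with independent normal stress S and
# strength R: P_f = P(S > R) = Phi(-(mu_R - mu_S) / sqrt(s_R^2 + s_S^2)).
mu_S, sd_S = 240.0, 20.0   # MPa, stress distribution (illustrative)
mu_R, sd_R = 345.0, 15.0   # MPa, yield strength distribution (illustrative)

beta = (mu_R - mu_S) / sqrt(sd_R ** 2 + sd_S ** 2)   # reliability index
p_f = NormalDist().cdf(-beta)
print(f"beta = {beta:.3f}, P_f = {p_f:.3e}")
```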
10.1.2.2 Wellhead Platform
The use of wellhead platforms is one of the effective ways to reduce the production cost of offshore oil fields [22]. To ensure the reliability of a wellhead platform, wave loads cannot be ignored: long-term wave loads can cause various failure modes of wellhead platform components, such as yield, instability, fatigue, plastic failure, or large deformation.
At the same time, because of the unquantifiable randomness of wave loads, reliability analysis of wellhead platforms is a difficult task. In this context, Meng et al. [22] investigated the RBDO problem of enhancing the safety performance of wellhead platforms and proposed the RBDO-MVSOSA strategy. First, the Morison formula is used to calculate the approximate wave load, which is used to establish the performance function of yield failure at each failure point of the wellhead platform. In the reliability analysis part, the mean-value second-order saddlepoint approximation (MVSOSA) method is applied to the performance function. As an improvement on the saddlepoint approximation (SA) method, MVSOSA does not require the transformation of random variables (from X-space to U-space) to search for the most probable point; the X-to-U transformation is nonlinear, which increases the nonlinearity of the limit state function (LSF) and ultimately degrades the accuracy of the reliability analysis. The MVSOSA method therefore delivers more accurate reliability results than the SA method. Finally, a gradient-based optimization algorithm is used to find the optimal RBDO solution for the wellhead platform. The results show that the wellhead platform volume is reduced by 9% compared to the initial design, and the stress distribution is improved. However, it is worth pointing out that MVSOSA is an SA-based method whose reliability analysis relies on the cumulant generating function (CGF); since the moment generating function (MGF) can be derived analytically only for Gaussian variables, the method is applicable only when the random variables follow a Gaussian distribution.
10.1.2.3 Offshore Wind Turbine Blades
The key to the RBDO of wind turbine blades is to accurately predict the fatigue life under complex load conditions such as wind load, gravity load, and centrifugal load [23]. The uncertainty of the dynamic wind load is the most crucial source of uncertainty affecting the fatigue reliability of wind turbine blades; the review [24] on the structural reliability analysis of wind turbines concludes that the quantification of uncertainty is a key factor in such analyses. Hu et al. [25] studied the RBDO of 5-MW wind turbine blades. The uncertainty model of the dynamic wind load involves two parts: the annual wind load variation and the wind load variation over a large spatiotemporal range. The annual wind load variation is represented by a joint PDF of the 10-min average wind speed and the 10-min average turbulence intensity, while the wind load variation over a large spatiotemporal range is represented by a PDF of multiple set parameters. Using the maximum likelihood estimation (MLE) method and 249 sets of measured wind data, a 10-min fatigue damage table is obtained, and a Kriging surrogate model is fitted to these data to create a 10-min fatigue damage surrogate model.
Finally, the uncertainty model of the dynamic wind load is established, with the Riemann integral method, the wind load probability table, and the 10-min fatigue damage table used to derive the 20-year fatigue damage. Their proposed method consists of two parts. The first part is the DDO procedure, whose purpose is to provide a good initial design for the RBDO. The design variables control the thickness of the composite laminates, and the normalized total cost of the composite materials used in the blades is set as the objective function; the constraint is the 20-year fatigue damage under the influence of the dynamic wind load. Since DDO does not involve uncertainty in the dynamic wind loads, the wind loads are provided through an average wind load probability table generated using the Monte Carlo simulation (MCS) method. The second part is the RBDO procedure. Based on the mathematical model of the DDO, the uncertainty factors and failure probability are further taken into consideration. The uncertainties include the deduced uncertainty of the dynamic wind load and the uncertainty of the composite laminate thickness arising in manufacturing, the latter assumed to follow a normal distribution. The probabilistic constraint of the RBDO requires the fatigue failure probability at the fatigue hotspot to be less than the target failure probability. Because the 20-year fatigue damage estimate at the fatigue hotspots is implicit, its sensitivity is difficult to obtain; therefore, the sampling-based MCS reliability method is used to calculate the fatigue failure probability. Finally, the SQP algorithm is used as the optimizer. The optimization results show that the fatigue failure probability of 50.06% in the initial design is reduced to 2.281% in the optimal design, while the cost increases by only 3.01%.
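The 20-year damage assembly can be viewed as a Riemann sum over the wind-state distribution: each wind state's probability times its 10-min damage, scaled by the number of 10-min periods in 20 years. A minimal sketch with an assumed Weibull wind-speed PDF and a toy damage curve, not the joint speed-turbulence model of [25]:

```python
import numpy as np

# 20-year fatigue damage as a Riemann sum over wind states (illustrative):
# D_20 = N_periods * sum_v f(v) * d10(v) * dv
v = np.linspace(3.0, 25.0, 200)                  # 10-min mean wind speed (m/s)
dv = v[1] - v[0]

k, c = 2.0, 8.5                                   # assumed Weibull parameters
pdf = (k / c) * (v / c) ** (k - 1) * np.exp(-(v / c) ** k)

d10 = 1e-9 * (v / 10.0) ** 3                      # toy 10-min damage curve
n_periods = 20 * 365.25 * 24 * 6                  # 10-min periods in 20 years

D20 = n_periods * np.sum(pdf * d10 * dv)          # Miner's-rule accumulation
print(f"20-year fatigue damage ~ {D20:.3f} (failure if >= 1)")
```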
10.1.2.4 Real-World Engineering Application
Wind energy is an important direction for clean energy development, and monopile offshore wind turbines are one type of equipment converting offshore wind energy into electricity. The basic structure of a monopile offshore wind turbine is shown in Fig. 10.3 [26]. During normal operation, the monopile is subjected to four external loads, as shown in Fig. 10.4: (1) current loads, (2) wave loads, (3) the load of the wind turbine on top, and (4) wind pressure. Meng et al. [27] studied the RBDO problem of monopiles under mixed uncertainty, where the external dimensions are random variables and the material density is an interval variable. The optimization objective is to minimize the weight m of the monopile, with the diameter and thickness of the monopile as the optimization variables. The mathematical model of this optimization problem is given in Eq. (10.1).
Fig. 10.3 Basic structure of a monopile offshore wind turbine [26]
$$
\begin{aligned}
\min \quad & F = m(\mu_D, \mu_T) \\
\text{s.t.} \quad & P_f\left\{\sigma_{\mathrm{allow}} - \sigma_{\max}[D, T, \rho] \le 0\right\} \le \Phi(-\beta_t) \\
& T \ge 6.36 + \frac{D}{100} \\
& 4\ \mathrm{m} \le \mu_D \le 8\ \mathrm{m}, \quad 20\ \mathrm{mm} \le \mu_T \le 200\ \mathrm{mm} \\
& 7840\ \mathrm{kg/m^3} \le \rho \le 7860\ \mathrm{kg/m^3}
\end{aligned}
\tag{10.1}
$$
where σ_allow = 345 MPa is the allowable stress of the monopile material, σ_max is the maximum stress in the monopile, and ρ is the density of the monopile material, an interval variable with nominal value 7850 kg/m³. SORA is used to find the optimal solution, with the water cycle algorithm (WCA) used in the optimization loop for global optimization.
Fig. 10.4 Load distribution on the offshore monopile
For details on the reliability calculation under mixed uncertainty and the specific optimization methods, refer to [27]. Compared with the original design, the optimized weight is reduced by 12.78%, and the reliability reaches 99.87%.
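The probabilistic constraint in Eq. (10.1) can be checked by sampling the random dimensions and taking the worst case over the interval variable ρ. In the sketch below, the stress model, the target index β_t, and all distribution parameters are placeholders standing in for the structural analysis of [27]:

```python
import numpy as np
from statistics import NormalDist

# Check P_f{sigma_allow - sigma_max <= 0} <= Phi(-beta_t) by MCS,
# taking the worst case over the interval variable rho.
sigma_allow, beta_t = 345.0, 2.0          # MPa; beta_t assumed here
mu_D, mu_T = 6.0, 0.060                   # m; mean diameter and thickness
cv = 0.02                                 # assumed coefficient of variation

def sigma_max(D, T, rho):
    # Placeholder stress model (MPa) standing in for the structural
    # response of [27]: stress grows with density, falls with D and T.
    return 1.1e-3 * rho / (D * T)

rng = np.random.default_rng(42)
M = 200_000
D = rng.normal(mu_D, cv * mu_D, M)
T = rng.normal(mu_T, cv * mu_T, M)

p_f_worst = max((sigma_allow - sigma_max(D, T, rho) <= 0).mean()
                for rho in (7840.0, 7860.0))   # interval endpoints
print(p_f_worst <= NormalDist().cdf(-beta_t))  # constraint satisfied?
```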
10.1.3 Bridge Engineering

One of the most important goals of bridge design is to ensure structural stability, even when low-probability events such as heavy rains, hurricanes, earthquakes, and other harsh environments occur. The service life of bridges is also affected by many factors, such as the random gravity loads exerted by cars driving on the bridge deck, corrosion, and deformation of the bridge construction materials. Modeling the uncertainty of these factors in order to ensure the reliability of the bridge design is therefore necessary, but it is not an easy task. This section reviews engineering examples of bridge design optimization under uncertainties including corrosion, traffic loads, and flutter, and shows how bridge designs can be optimized under these uncertainties.
10.1.3.1 Bridge Corrosion
In prestressed concrete (PC) bridges, the main factor leading to severe deterioration is corrosion of the steel reinforcement; for instance, the sudden collapse of the Ynys-y-Gwas Bridge in West Glamorgan, UK, in 1985 was caused by corrosion of the bridge reinforcement [28]. The shear and torsion bars of PC box girder bridges, and especially the post-tensioned prestressed tendons, all suffer pitting corrosion. To date, few studies have considered PC bridge design optimization under such uncertainties. Nguyen et al. [29] studied the RBDO problem of PC bridges with uncertain pitting damage effects. To include the influence of pitting on the shear and torsional reinforcement in the analysis, a model of the maximum pit depth over a standard reinforcement length is invoked [30], in which the maximum pitting depth at a given length is described by Gumbel statistical parameters. To include the influence of pitting on the post-tensioned tendons, the corrosion initiation time is estimated using the method of Bentur et al. [31], and the failure of the wire due to stress corrosion cracking is modeled with linear fracture mechanics. With the help of the pitting failure model, performance functions on strength limits and deflection limits are established, and reliability analysis is performed using FORM. Based on the reliability analysis results, optimization is executed using first-order Taylor series expansion and the quasi-Newton optimization algorithm of the MATLAB Optimization Toolbox. Finally, the optimal dimensional design of the PC bridge, balancing production cost and reliability, is obtained. During the optimization it is found that the target reliability index should lie in the range of 3.5 to 5 to obtain the optimal solution.
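FORM, used above for the reliability analysis, searches for the most probable failure point in standard normal space; the reliability index β is its distance from the origin. A minimal sketch of the classic HL-RF iteration for independent normal variables and a generic limit state, not the pitting model of [29]:

```python
import numpy as np

# HL-RF iteration of FORM for independent normals X_i ~ N(mu_i, sd_i^2).
# Failure when g(x) <= 0; beta is the distance to the MPP in U-space.
mu = np.array([200.0, 150.0])      # illustrative means (e.g., R and S)
sd = np.array([20.0, 30.0])

def g(x):                          # illustrative limit state: R - S
    return x[0] - x[1]

def grad_g(x, h=1e-6):             # numerical gradient of g
    e = np.eye(len(x))
    return np.array([(g(x + h * e[i]) - g(x - h * e[i])) / (2 * h)
                     for i in range(len(x))])

u = np.zeros_like(mu)              # start at the mean point
for _ in range(50):
    x = mu + sd * u                # map U -> X
    grad_u = grad_g(x) * sd        # chain rule dG/du = dG/dx * sd
    # HL-RF update: project onto the linearized limit state surface
    u_new = (grad_u @ u - g(x)) / (grad_u @ grad_u) * grad_u
    if np.linalg.norm(u_new - u) < 1e-8:
        u = u_new
        break
    u = u_new

beta = np.linalg.norm(u)
print(f"beta ~ {beta:.4f}")        # exact: 50/sqrt(20^2+30^2) ~ 1.3868
```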
10.1.3.2 Bridge Traffic Load
Traffic load is also an important factor in bridge design and evaluation. The dynamic response of a bridge under a moving load is greater than that under an equivalent static load. In addition, random fluctuations of the material parameters affect the response of the bridge structure to vehicle loads. Dynamic analysis of bridge structures with material parameter uncertainties has been extensively studied over the past decades [32–34]; however, research on RBDO methods that account for bridge-vehicle interaction in bridge design optimization is still limited. Ni et al. [35] proposed an RBDO method for a three-dimensional box-section bridge that considers the uncertainties of the system material parameters (such as Young's modulus and mass density) and the influence of moving loads. In bridge-vehicle systems, the uncertain material parameters are modeled as Gaussian and/or lognormal random fields.
The Gaussian random input is approximated by the Karhunen-Loeve (KL) expansion, while the lognormal random input is approximated by a combination of the KL expansion and polynomial chaos expansion. The moving load acting on the bridge is treated approximately through the Rayleigh damping model. After quantifying the related uncertainties, the maximum allowable deformation of the bridge defines the LSF, which is approximated by a Kriging model. The proposed RBDO method follows the double-loop strategy: the reliability analysis loop uses the MCS method to calculate the failure probability, while the optimization loop uses the SQP algorithm to minimize the cross-sectional area of the bridge under the preset probabilistic constraints. The optimization results show that raising the bridge reliability to 90%, from the 84.496% of the initial design, increases the cross-sectional area by 20%.
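The KL expansion represents a Gaussian random field through the eigenpairs of its covariance function, truncated to a few dominant modes. A minimal discrete sketch with an assumed exponential covariance, a generic illustration rather than the bridge model of [35]:

```python
import numpy as np

# Discrete Karhunen-Loeve expansion of a 1-D Gaussian random field with
# exponential covariance C(s, t) = var * exp(-|s - t| / ell).
n, var, ell = 200, 1.0, 0.3
s = np.linspace(0.0, 1.0, n)
C = var * np.exp(-np.abs(s[:, None] - s[None, :]) / ell)

lam, phi = np.linalg.eigh(C)           # eigendecomposition (ascending)
lam, phi = lam[::-1], phi[:, ::-1]     # sort modes by decreasing energy

k = 10                                 # truncation order
rng = np.random.default_rng(0)
xi = rng.standard_normal(k)            # independent standard normal coords

mean_field = np.full(n, 2.0)           # assumed mean of the field
field = mean_field + phi[:, :k] @ (np.sqrt(lam[:k]) * xi)

energy = lam[:k].sum() / lam.sum()     # variance captured by k modes
print(f"{k} modes capture {energy:.1%} of the field variance")
```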
10.1.3.3 Bridge Flutter
Long-span suspension bridges are flexible structures that are easily affected by wind-induced vibration. Flutter instability caused by wind vibration, which may lead to the collapse of the bridge structure, is therefore an important factor to consider in the RBDO of long-span suspension bridges. Kusano et al. [36] extended the study of the RBDO of a single-box girder for a suspension bridge (taking the Great Belt East Bridge as an example) to shape and size optimization under a probabilistic flutter constraint. In their study, the aerodynamic coefficients of different deck sections are simulated through a series of computational fluid dynamics (CFD) analyses, and based on the CFD responses an aerodynamic surrogate is constructed with a Kriging model to estimate the force coefficients of different deck sections [37]. The quasi-steady formulation is then used to define the flutter derivatives from these force coefficients in fully numerical terms. In the RBDO of the suspension bridge box girder shape and size, the LSF accounts for the uncertainty of the force coefficients and their slopes, and for the extreme wind speed at the bridge site; the RIA is adopted to find the optimal solution. Optimization results are obtained for different reliability indexes: when the reliability index β is reduced to 7, the girder volume decreases by 13.33% compared with the initial design (β = 7.58), and when β is increased to 9, the girder volume increases by 10.03%.
10.1.3.4 Real-World Engineering Application
Existing famous long-span suspension bridges, such as the Akashi Kaikyo Bridge, the Xihoumen Bridge, the Yi Sun-sin Bridge, the Jiangsu Runyang Bridge, and the Tsing Ma Bridge, are flexible structures. However, flexible structures are susceptible to wind-induced vibrations, which can lead to bridge flutter instability and collapse.
Fig. 10.5 Overall view of the Great Belt East Bridge (picture from Wikipedia)
Fig. 10.6 Box girder geometry and design variables
The study of Diana et al. [38] underlined the importance of the bridge deck shape for the vortex shedding phenomenon, while Ge and Xiang [39] studied the importance of the bridge deck section for the flutter velocity of cable-supported bridges. Design optimization of the bridge deck shape therefore has a positive effect on the structure's resistance to wind-induced instabilities. In addition, since material cost is one of the main expenses in long-span suspension bridge construction, lightweight design helps reduce costs. Kusano et al. [36] took the Great Belt East Bridge in Denmark as the research object and proposed an RBDO method for the design of the single-box girder of a suspension bridge under probabilistic flutter constraints. Figure 10.5 shows the overall view of the Great Belt East Bridge; Fig. 10.6 shows the streamlined box girder geometry and the design variables, where the solid line is the initial design and the dashed line is the shape after design optimization. The RBDO mathematical model of the box girder is given in Eq. (10.2):
$$
\begin{aligned}
\min \quad & \text{Girder volume}(\delta H, \delta B, d_1, d_2, d_3, d_4) \\
\text{s.t.} \quad & g_1: P\left[V_f(\mathbf{x}) - x_w \le 0\right] \le P_f \\
& g_2: -10\% \le \delta H \le 10\% \\
& g_3: -10\% \le \delta B \le 10\% \\
& g_4: 7\ \mathrm{mm} \le d_i \le 25\ \mathrm{mm}, \quad i = 1, 2, 3, 4 \\
& g_5: \sigma_c \le 565\ \mathrm{MPa} \\
& g_6: \frac{z_d}{z_{\max}} - 1 \le 0, \quad z_{\max} = \frac{L}{500}, \ L = 1624\ \mathrm{m}
\end{aligned}
\tag{10.2}
$$

The objective is to minimize the volume of the girder while meeting the reliability requirement of the constraints. The uncertainties come from the bridge flutter wind speed V_f(x) and the force coefficients x. To keep the shape of the box girder feasible, the shape variables δH and δB are constrained to ±10% of the original dimensions. Constraint g_5 limits the tensile stress σ_c under static overload. Constraint g_6 limits the vertical displacement z_d at mid-deck under static overload, with threshold z_max; L is the total length of the bridge. The probabilistic constraint is defined by the structural limit state function of flutter failure, where x_w is the extreme wind speed at the bridge site. The uncertainty of V_f(x) is assumed to follow a Gaussian distribution, with the output of the CFD simulation as the mean and custom parameters as the standard deviation; for details, refer to [36]. The response of the limit state function is provided by the Kriging surrogate model, and the RIA is used to solve Eq. (10.2). The optimal solutions under different reliability indexes are summarized in Table 10.1.

Table 10.1 RBDO results of the box girder for different target reliabilities

β_t    Objective function value    Variation (%)
7      2409.19                     -13.33
8      2718.78                     -2.19
9      3058.50                     10.03
10     3628.01                     30.52
10.1.4 Vehicle Engineering
As complex systems, vehicles face multiple challenges throughout their lifecycle. Recent customer surveys show that, in addition to cost, reliability and maintainability are key decision factors that customers seriously consider when purchasing a vehicle. Because vehicles are machines with high risk, reliability must
be guaranteed during their design, manufacture, and use [40]. This section reviews three aspects: vehicle crashworthiness design, composite battery box design, and vehicle gearbox housing design.
10.1.4.1 Variable Blank Variable Section Shape Front Longitudinal Beam
Duan et al. [40] developed a novel variable-roll-blank and variable-cross-sectional shape front longitudinal beam (VRB-VCS FLB), which has more potential than traditional FLBs for crashworthiness and lightweight design. Studies have shown that in the stamping of VRBs, movement of the transition zone center, sheet metal thinning [41], and springback [42] may occur; it is therefore necessary to consider the various manufacturing uncertainties to improve the lightweight and crashworthiness reliability of VRB-VCS FLBs. They proposed a multi-objective RBDO method for the VRB-VCS FLB. The objective function consists of two parts: the crashworthiness of the vehicle, measured by the peak acceleration of the collision pulse, and the lightweighting of the vehicle, measured by the body weight. The performance functions involve the first-step acceleration of the crash pulse, the second-step acceleration of the crash pulse, the energy absorption of the VRB-VCS FLB, and the dash panel intrusion. The proposed multi-objective RBDO method follows the double-loop strategy. In the reliability analysis loop (the inner loop), the multimodal radial-based importance sampling (MRBIS) method [43] is used to estimate the failure probability of a system with multiple failure modes, which addresses the problem that traditional reliability analysis methods (such as MCS and importance sampling) cannot directly evaluate the reliability of a system involving multiple failure modes. In the design optimization loop (the outer loop), the non-dominated sorting genetic algorithm II (NSGA-II) is used to generate the Pareto front. Meanwhile, to overcome the high computational cost of a single run of the crashworthiness finite element analysis (FEA), ε-support vector regression is used as a surrogate model to reduce the computational burden. Finally, the optimal Pareto set is obtained; relative to the initial state, the design with the best overall performance reduces the weight by 11.357% and the peak acceleration by 14.356%. However, additional uncertainties such as material properties, collision direction, and collision velocity were not considered in this study, so there is still ample room for further RBDO studies of VRB-VCS FLBs.
10.1.4.2 Carbon Fiber Reinforced Polymer Composites
The application of carbon fiber reinforced polymer (CFRP) composites brings great challenges to the design optimization process, such as complex nonlinear material behavior, inherent uncertainties, the multilayer character of the design variables, and the multiple working conditions of the structural components [44–47]. Liu et al. [48]
studied the application of CFRP composites to the lightweighting of electric vehicle battery boxes and proposed an RBDO method to find the optimal combination of the microstructure parameters of the battery box and the geometric parameters of the macrostructure. The method consists of three parts: uncertainty quantification and propagation, reliability analysis, and optimization. In the first part, a representative volume element (RVE) model considering the damage evolution of the constituents is established using X-ray micro-CT scanning, and the mechanical properties of the CFRP are simulated based on homogenization theory [49] to reveal the propagation of uncertainty. In the second part, based on the simulation data of the RVE model, a constitutive model considering tension-compression asymmetry, anisotropy, and failure is established and used to analyze the stiffness and strength of the battery box structure. In the third part, to avoid the optimization falling into a local optimum while maintaining search efficiency, OLRPSO [50] (an optimization algorithm based on improved particle swarm optimization) is selected as the optimizer. The whole method is integrated in the framework of the SORA method, and a Kriging model is used to predict the mechanical response of the carbon fiber cloth and evaluate its reliability. After optimization, the weight of the new CFRP battery box cover is reduced by 22.14%, all performance indicators meet the requirements, and the reliability reaches 90%. Finally, they suggested that future studies could further consider the stacking direction and sequence of the composite laminates in the design optimization process to better exploit the excellent mechanical properties of CFRP.
10.1.4.3 Reducer Housing
The reducer housing must keep the designed structure stable under a complex load environment. Therefore, RBDO is needed to reduce the risk of failure caused by uncertainties in the design variables and materials. Xu et al. [51] investigated the RBDO problem of an electric vehicle reducer housing and proposed a multi-objective reliability design optimization method that considers the maximum allowable deviation range of the design variables. The method uses interval theory for the reliability analysis, describing uncertain quantities by interval values. Its advantage is that, unlike probability theory, it does not require the exact probability distribution of the uncertain quantity and can adapt to reliability analysis under complex working conditions. With minimum mass and maximum first-order natural frequency as the design objectives, the maximum von-Mises stress as the constraint, and material uncertainty as the main uncertainty source, a mathematical RBDO model of the vehicle reducer housing is established. Because the radial basis function (RBF) neural network has an outstanding ability to approximate nonlinear functions, it is chosen as the surrogate model. Based on the finite element model response, the structural parameters of the RBF are optimized in combination
with the PSO algorithm to establish an accurate surrogate model for the reliability analysis. In the optimization part, the SQP algorithm is used to solve for the lower and upper bounds of the constraints, while NSGA-II generates the discrete Pareto solution sets; finally, multi-criteria decision-making (MCDM) is used to rank the Pareto solutions. Compared with the deterministic optimization results, the best-ranked solution increases the minimum mass by 0.64%, while the maximum first-order natural frequency and the maximum von-Mises stress of the reducer housing decrease by 4.8% and 4.1%, respectively.
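The interval reliability idea above reduces to bounding each constraint response over the interval variables; with a surrogate g this becomes two small optimizations per constraint. A minimal sketch with an illustrative response function and interval bounds, not the RBF model of [51]:

```python
import numpy as np
from scipy.optimize import minimize

# Interval analysis of a constraint response g(x): find its lower and
# upper bounds over interval variables x in [lb, ub] (illustrative g).
def g(x):
    return 4.1 * x[0] ** 2 - 2.0 * x[0] * x[1] + 3.0 * x[1]

lb = np.array([0.9, 1.8])      # interval lower bounds (illustrative)
ub = np.array([1.1, 2.2])      # interval upper bounds
bounds = list(zip(lb, ub))
x0 = 0.5 * (lb + ub)

g_min = minimize(g, x0, bounds=bounds, method="L-BFGS-B").fun
g_max = -minimize(lambda x: -g(x), x0, bounds=bounds, method="L-BFGS-B").fun
print(f"g in [{g_min:.3f}, {g_max:.3f}]")
# The design counts as interval-reliable if the whole interval satisfies
# the requirement, e.g. g_max <= g_allow for a "<=" type constraint.
```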
10.1.4.4 Real-World Engineering Application
The application of new lightweight materials and structural design optimization are the main routes to lightweight vehicles. The composite material CFRP has attracted much attention due to its low density, high specific stiffness, flexibility, and energy absorption capacity. The battery box of an electric vehicle is a typical component made of composite materials; Fig. 10.7 shows the appearance of an electric vehicle battery cover. However, CFRP composites have pronounced material-structure integration characteristics, with complex damage evolution and failure processes under complex external loads [52, 53]. Liu et al. [48] redesigned the battery cover made of CFRP material on a reliability basis. The objective function is to minimize the mass of the battery cover, while the constraint functions include modal constraints, impact constraints, and drop constraints. Based on the internal geometric structural parameters of plain-woven CFRP obtained from X-ray micro-CT scanning experiments, they used Abaqus to establish an RVE model that considers the damage evolution of the constituents, and combined it with the Kriging surrogate model to achieve rapid prediction of the constraint responses.

Fig. 10.7 Electric vehicle battery box cover and the related shape design variables
SORA is used to find the optimal solution, with the modified particle swarm optimization algorithm [50] used in the optimization loop for global optimization. After optimization, the mass of the battery cover is 4.924 kg, which is 22.14% lighter than the original structure.
10.1.5 Summary
Table 10.2 summarizes this section, covering the application field, the specific application, the sources of uncertainty, the optimization methods, the optimization results, and the references.
10.2 RDO Engineering Applications
Robust design optimization (RDO) refers to improving the adaptability of a system to external anomalies as much as possible while ensuring its normal operation; it focuses on the stability of the optimization result with respect to perturbations. The quantification of the various uncertainties is currently a major challenge for RDO in engineering applications. This section reviews examples of RDO engineering applications in three areas: energy management, logistics scheduling, and the closed-loop supply chain (CLSC), and finally summarizes them in a table.
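A common mathematical statement underlying the examples below trades off the mean and the standard deviation of the performance under the input uncertainties. A generic weighted-sum form (the exact formulation varies from study to study) is

$$
\min_{\mathbf{d}} \; w \, \frac{\mu_f(\mathbf{d})}{\mu_f^{*}} + (1 - w) \, \frac{\sigma_f(\mathbf{d})}{\sigma_f^{*}}, \qquad 0 \le w \le 1
$$

where μ_f and σ_f are the mean and standard deviation of the objective under uncertainty, and μ_f* and σ_f* are normalization factors.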
10.2.1 Energy Management
The contradiction between power supply demand and environmental protection in power systems (PSs) has attracted wide attention. Overuse of traditional PSs causes serious environmental and social problems, while renewable PSs suffer from supply instability due to the intermittent nature of renewable energy. Considering the uncertainty of supply and demand in the RDO of PSs is an effective way to balance this contradiction. This section reviews RDO for power systems powered by wind and solar renewable energy sources and for power systems powered by a mix of renewable and conventional energy sources (e.g., coal and oil).
10.2.1.1 Wind Energy
A wind-turbine-driven ammonia (NH3) synthesis plant is powered primarily by wind energy, but the supply of wind energy is volatile. Verleysen et al. [55] studied the RDO of the Power-to-NH3 process under wind uncertainty by combining an uncertainty quantification method with a multi-objective genetic optimization algorithm.
Table 10.2 The summary of RBDO engineering applications (↑ denotes an increase, ↓ a decrease)

Aeronautical engineering - VS composite plate
  Sources of uncertainty: 1. Material properties 2. Manufacturing processes
  Method: Bi-stage RBDO framework
  Result/contribution: Weight ↓ 12.5% and maximum buckling load ↓ 18.3%.
  Reference: Hao et al. [2]

Aeronautical engineering - Compressor disc
  Sources of uncertainty: 1. Material properties 2. Manufacturing processes 3. Loads
  Method: Reliability allocation methods + KDE + modified MOPSO in double-loop strategy
  Result/contribution: Maximum von-Mises stress ↓ 1.63% and strain ↓ 5.93%.
  Reference: Liu et al. [7]

Aeronautical engineering - Tail fuselage
  Sources of uncertainty: 1. Fan blade failure 2. Turbine disk failure 3. Ductless fan blade failure of open rotor engine
  Method: PMA, SORA, HM
  Result/contribution: Tail fuselage weight ↑ 39.42% but β equal to 3.7190.
  Reference: Cid et al. [54]

Ocean engineering - AUV head pressure housing
  Sources of uncertainty: 1. Material properties 2. Manufacturing processes 3. Wave loads
  Method: BP neural network + GA
  Result/contribution: Head pressure shell weight ↓ 38.26%, and the grid sandwich structure reaches the 3σ level of the random variables.
  Reference: Li et al. [21]

Ocean engineering - Wellhead platform
  Sources of uncertainty: 1. Material properties 2. Manufacturing processes 3. Wave load 4. Sea state
  Method: MVSOSA + gradient optimization in double-loop strategy
  Result/contribution: Platform volume ↑ 9% but the stress distribution of the platform is improved.
  Reference: Meng et al. [22]

Ocean engineering - Offshore wind turbine blades
  Sources of uncertainty: 1. Wind load 2. Manufacturing processes
  Method: MCS + SQP in double-loop strategy
  Result/contribution: Fatigue failure probability ↓ from 50.06% to 2.281%, while the cost only increased by 3.01%.
  Reference: Hu et al. [25]

Bridge engineering - PC bridge
  Sources of uncertainty: 1. Material properties 2. Corrosion
  Method: FORM + first-order Taylor series expansion and quasi-Newton optimization algorithm in double-loop strategy
  Result/contribution: Optimal design size obtained with the target reliability index between 3.5 and 5.
  Reference: Nguyen et al. [29]

Bridge engineering - 3D box bridge
  Sources of uncertainty: 1. Material properties 2. Vehicle moving load
  Method: Kriging + MCS + SQP in double-loop strategy
  Result/contribution: Sectional area ↑ 20% but reliability up to 90% compared to the initial design (reliability 84.496%).
  Reference: Ni et al. [35]

Bridge engineering - Long-span bridge (Great Belt East Bridge)
  Sources of uncertainty: 1. Wind-induced vibration
  Method: Kriging + RIA
  Result/contribution: β decreased from 7.58 to 7, volume of bridge ↓ 13.33%; β increased from 7.58 to 9, volume of bridge ↑ 10.03%.
  Reference: Kusano et al. [36]

Vehicle engineering - VRB-VCS FLB
  Sources of uncertainty: 1. Material properties 2. Manufacturing processes
  Method: ε-support vector regression + MRBIS + NSGA-II in double-loop strategy
  Result/contribution: Weight ↓ 11.357% and peak acceleration ↓ 14.356% relative to the initial state.
  Reference: Duan et al. [40]

Vehicle engineering - CFRP composite battery case cover
  Sources of uncertainty: 1. Material properties 2. Manufacturing processes
  Method: Kriging + OLRPSO + SORA in decoupling strategy
  Result/contribution: Battery case cover weight ↓ 22.14%, reliability up to 90%.
  Reference: Liu et al. [48]

Vehicle engineering - Reducer housing
  Sources of uncertainty: 1. Material properties 2. Manufacturing processes 3. Loads
  Method: RBF + SQP + NSGA-II + MCDM
  Result/contribution: Best solution: minimum mass of the reducer housing ↓ 0.64%, maximum first natural frequency ↓ 4.8%, and maximum von-Mises stress ↓ 4.1% compared to the initial design.
  Reference: Xu et al. [51]
To maximize the average NH3 production and minimize the sensitivity of the NH3 production, an optimization model is established with the current density of the electrolyzer, the H2/N2 ratio entering the NH3 synthesis unit, and its outlet pressure as constraints. The NH3-based energy storage system built in Aspen Plus is driven from Python to quantify the uncertainties of the wind velocity measurement error, the cell temperature change, and the NH3 synthesis amount. In the RDO process, polynomial chaos expansion (PCE) is used as the surrogate model to capture the uncertainty propagation through the system model; on this basis, the MCS method is used to quantify the mean and standard deviation of the objectives. Finally, the robust optimization of the Power-to-NH3 process is performed using the NSGA-II algorithm. Global sensitivity analysis of the optimized design shows that the temperature fluctuation of the NH3 reactor accounts for 99.7% of the variation of the average NH3 yield; applying the same sensitivity analysis to the highest-yield design, the wind velocity measurement error and the temperature change account for 75.4% and 22.5% of the NH3 yield variation, respectively. The CoV of the optimized plant is 1.46%. According to their summary, future research will include analyzing the dynamic operation of the NH3 production pathway and improving its levelized costs.
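The PCE-plus-sampling step can be illustrated in one dimension: expand the response in probabilists' Hermite polynomials of a standard normal input, fit the coefficients by regression, and read the mean and standard deviation off the coefficients, using the orthogonality E[He_j He_k] = k! δ_jk. The model function below is an illustrative black box, not the Aspen Plus plant of [55]:

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial

# 1-D polynomial chaos expansion with probabilists' Hermite polynomials
# He_k(xi), xi ~ N(0, 1): y(xi) ~ sum_k c_k He_k(xi).
def model(xi):                       # black-box response (illustrative)
    return np.exp(0.3 * xi) + 0.1 * xi ** 2

rng = np.random.default_rng(0)
xi = rng.standard_normal(2000)       # training samples of the input
y = model(xi)

deg = 6
V = He.hermevander(xi, deg)          # design matrix [He_0 ... He_6]
c, *_ = np.linalg.lstsq(V, y, rcond=None)   # regression-based PCE

# Orthogonality E[He_j He_k] = k! delta_jk gives moments from coefficients:
mean = c[0]
var = sum(c[k] ** 2 * factorial(k) for k in range(1, deg + 1))
print(f"PCE mean ~ {mean:.4f}, std ~ {np.sqrt(var):.4f}")
```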
10.2.1.2 Solar Energy
The off-grid charging station is powered by distributed energy sources (renewable, non-renewable, or a combination of both) and must meet the load demand throughout its operation. Renewable energy is unstable, which hinders the energy supply of off-grid charging stations. However, few studies address off-grid charging stations [56–58], and none address robust models of their risk. Wang et al. [59] investigated the RDO problem of an off-grid charging station powered by photovoltaic systems (PVs). A novel approach is proposed that integrates robust optimization, as a non-stochastic framework, into the mixed-integer linear programming (MILP) of the deterministic model. The MILP model of the charging station minimizes the annualized investment cost of the PV system and the diesel generator plus their operating costs, with the output power of the PV system and the diesel generator in the specified time periods as constraints. On the basis of the MILP model, a robust optimization model is established by considering the uncertainty of the solar power generation and of the hydrogen production from the water electrolyzer in the energy conversion process. Finally, a robust optimization method is used to find the optimal solution. The results show that, to handle the worst-case uncertainties, the capacities of the photovoltaic system and the diesel generator are increased by 12.78% and 33.33%, respectively; although the total annualized cost increases by 13.75%, a robust charging station design is obtained.
10.2.1.3 Power System
Sustainability and reliability are two indispensable aspects of PS design. Most PS design studies focus on economic and environmental sustainability [60–62]; in recent years, social criteria related to reliability have also received increasing attention [63, 64]. Considering sustainability and reliability together makes PS design more realistic. However, the design of sustainable and reliable power systems (SRPSs) is a conflicting multi-objective optimization problem that is also subject to many types of uncertainties, such as the inherent uncertainty of supply, demand, and energy distribution and the uncertainty caused by climate disasters [65]. Tsao et al. [66] proposed an interactive fuzzy multi-objective programming method that considers the uncertainty of the power demand, the intermittency of renewable energy, and the production cost parameters, combining fuzzy possibilistic-flexible programming, robust optimization, and the Torabi and Hassini (TH) method [67]. The proposed method is called multi-objective mixed fuzzy possibilistic flexible robust programming (MOMFPFRP). The sustainability objective includes the CO2 emission cost of the generator sets, while the reliability objective includes the average failure cost of network interruptions. A multi-objective MILP model is then established with the supply-demand balance, assembly capacity, and system transmission power as constraints. The expected value of the objective function is quantified by the expected interval and expected value method for triangular fuzzy numbers proposed by Jimenez [38], which has the advantage of not increasing the number of objective functions or inequality constraints and is computationally efficient. Feasibility and optimal robustness are then formulated based on the fuzzy ranking method proposed by Yager [68]. Given a set of triangular fuzzy numbers, the multi-objective MILP model is rewritten as an auxiliary fuzzy robust optimization model (AFROM) via fuzzy possibilistic-flexible programming, and the AFROM is transformed into a single-objective model by the TH aggregation function to resolve the multi-objective conflict. Finally, taking the PS in southern Vietnam as an example, the optimization results show that the total installed capacity increases by about 54.42%, with an increase in the installed capacity of renewable energy (e.g., biomass from 23.35% to 39.76%) and a decrease in the installed capacity of traditional energy (e.g., coal and natural gas). Although the total cost of the power system increases by about 4.2%, the conflicting multi-objective problem is resolved, and a reliable and sustainable power supply is achieved. According to their summary, future research could address intelligent power system planning by incorporating storage technology, visible control, and similar measures.
10.2.1.4 Real-World Engineering Application
One of the innovative applications of renewable energy is powering electric vehicle charging stations [69]. An off-grid solar charging station requires energy from distributed energy sources. The off-grid solar charging station system proposed in Wang's research [59] is shown in Fig. 10.8. The energy supply includes renewable and non-renewable sources: renewable energy is provided by photovoltaic systems, part of whose output powers electric vehicles through fuel cells, while non-renewable energy is provided by diesel generators. When thunderstorms cause insufficient solar power, the diesel generator supplies power to meet the energy demand of the electric vehicles. However, the unpredictability of the renewable supply makes the energy supply probabilistic; to ensure a stable supply, the investment cost of the non-renewable generation equipment must increase. Wang et al. [59] proposed a robust MILP model to mitigate the negative impact of this uncertainty.
Fig. 10.8 Off-grid solar charging station working system proposed in Wang's research [59]
The objective function of the robust MILP model includes the annualized investment cost of the photovoltaic system and the diesel generator, as well as the operating cost. The constraints consider the daily average power generation distribution of the photovoltaic and diesel generation systems, hydrogen storage capacity, power demand, etc.; for details, refer to [59]. The robust optimization model is solved using traditional robust optimization methods and the CPLEX solver in the GAMS software. The optimization increases the total annualized cost from $287,256 in the deterministic mode to $326,757 in the robust mode, an increase of 13.75%, but it obtains a robust solution.
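To convey the flavor of such a robust formulation, the following is a deliberately tiny capacity-sizing sketch in Python using the PuLP library. It is not Wang et al.'s model [59]; all parameter names and numbers are hypothetical, and the single "robust" ingredient is a box-uncertainty counterpart in which the demand constraint must hold under the worst-case PV yield.

```python
import pulp

demand = 500.0          # daily energy demand (kWh); hypothetical
pv_yield_nom = 4.0      # nominal daily yield per kW of PV (kWh/kW)
pv_yield_dev = 1.5      # worst-case yield reduction (kWh/kW)
c_pv, c_dg = 120.0, 80.0   # annualized capital cost per kW
c_fuel = 0.30              # diesel operating cost per kWh

prob = pulp.LpProblem("robust_charging_station", pulp.LpMinimize)
pv = pulp.LpVariable("pv_kw", lowBound=0)
dg = pulp.LpVariable("dg_kw", lowBound=0)
dg_energy = pulp.LpVariable("dg_kwh_per_day", lowBound=0)

# Objective: annualized investment cost plus yearly diesel operating cost
prob += c_pv * pv + c_dg * dg + 365 * c_fuel * dg_energy

# Robust (box-uncertainty) demand constraint: meet demand even when the PV
# yield takes its worst-case value -- the simplest robust counterpart.
prob += (pv_yield_nom - pv_yield_dev) * pv + dg_energy >= demand
prob += dg_energy <= 24 * dg  # diesel energy limited by installed capacity

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pv.value(), dg.value(), pulp.value(prob.objective))
```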
10.2.2 Logistics Scheduling
Logistics scheduling refers to the allocation, arrangement, and management of logistics resources to achieve optimal logistics operation. However, actual logistics operations involve many uncertain factors, such as weather, traffic conditions, manpower, and materials. These complex factors often lead to delays in logistics transportation, cost increases, wasted resources, and other problems. Therefore, how to deal with these uncertain factors accurately and quickly becomes the key to logistics scheduling optimization. This section reviews the engineering application of RDO in logistics scheduling from three aspects: medical resource scheduling, emergency logistics scheduling, and logistics distribution scheduling.
10.2.2.1 Home Health Care Routing and Scheduling Problem
Considering the robustness of the optimization solution in the home health care routing and scheduling problem (HHCRSP) can support more rational decisions when uncertainties create a trade-off between operating costs and the requirement for timely service. Trip and service time (TST) is a key factor in scheduling home health care (HHC) services. Factors such as time to diagnosis, medical skills, road conditions, weather conditions, and driving skills are among the uncertainties included in TST. Only a few studies on the HHCRSP have included these uncertainties [70–72]. Shi et al. [73] conducted the first study of the HHCRSP with uncertain trip and service times from the perspective of robust optimization. The objective function is to minimize the total travel cost and the fixed cost of the caregivers, while the constraint functions include the service time windows, resource scheduling and allocation, etc. A deterministic mixed integer programming (MIP) model is then constructed. The uncertainty set includes patient service time and business travel time. To achieve a robust design, the worst case is selected from the uncertainty parameter set. The uncertain variables are defined based on budget uncertainty theory, and the arrival time of each caregiver is rewritten as a recursive function so that the MIP model can be recast quantitatively as a robust optimization model. Finally, the robust model is
solved by the tabu search (TS) method to avoid the search becoming trapped in a local optimum in this tightly constrained problem. Using the benchmark instances proposed by Solomon [74], weighted combinations of total distance and total time are minimized and the optimal solutions are obtained. The study provides guidance to HHC companies on reliable schedule development, which enhances industry competitiveness. Future research is to combine big data with RDO to enhance the method. In addition, using a meta-heuristic optimization algorithm as the optimizer could improve the practicability of the proposed framework.
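For readers unfamiliar with tabu search, the following minimal Python sketch shows its core mechanics: swap moves over a visit order, a tabu list with fixed tenure, and an aspiration rule. It is a generic illustration, not Shi et al.'s implementation; the cost function is left abstract and the toy usage at the end is hypothetical.

```python
import random

def tabu_search(cost, n, iters=200, tenure=10, seed=0):
    """Generic tabu search over visit orders; `cost` maps a tour to a value."""
    rng = random.Random(seed)
    current = list(range(n))
    rng.shuffle(current)
    best, best_cost = current[:], cost(current)
    tabu = {}  # move (i, j) -> iteration until which it stays tabu
    for it in range(iters):
        candidates = []
        for i in range(n - 1):
            for j in range(i + 1, n):
                neigh = current[:]
                neigh[i], neigh[j] = neigh[j], neigh[i]
                c = cost(neigh)
                # Aspiration criterion: a tabu move is allowed if it improves
                # on the best solution found so far.
                if tabu.get((i, j), -1) < it or c < best_cost:
                    candidates.append((c, (i, j), neigh))
        if not candidates:
            break
        c, move, neigh = min(candidates, key=lambda t: t[0])
        current, tabu[move] = neigh, it + tenure
        if c < best_cost:
            best, best_cost = neigh[:], c
    return best, best_cost

# Toy usage: minimize the total gap between consecutive indices (hypothetical)
print(tabu_search(lambda t: sum(abs(a - b) for a, b in zip(t, t[1:])), n=8))
```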
10.2.2.2 Emergency Logistics Planning and Scheduling
Currently, there is no reliable way to accurately predict the time, location, or magnitude of an earthquake. In the aftermath of an earthquake, relief activities such as tactical decisions (i.e., the location of temporary facilities, the mobilization level of relief supplies, and the deployment of transport vehicles) and operational decisions (i.e., transport plans between temporary facilities and transport vehicles) are fraught with uncertainties. How to deal with these uncertainties is a challenge in emergency logistics planning and scheduling [75–78]. Stochastic programming is a common method for decision problems with uncertainties [77, 78]. However, stochastic programming not only requires prior knowledge of the probability distributions of the uncertain parameters but is also unsuitable for problems with a large number of uncertain variables. To address this, Liu et al. [102] first established a deterministic MILP model of the post-disaster relief logistics system. The model aims to minimize the cost, which covers personnel mobilization, helicopter allocation, and transportation strategies. The constraints comprise the maximum numbers of rescue personnel and key populations at the temporary facilities as well as helicopter fleet and dispatching constraints. Then, the robust optimization method proposed by Bertsimas and Sim [79] is introduced to deal with the uncertainties. By expressing the uncertain resource allocation and scheduling time parameters as interval data, the deterministic MILP model is transformed into its robust counterpart, and the min-max method is used to ensure the feasibility of the solution. Readers interested in the specific methods may refer to [79]. Finally, an example based on the Sichuan earthquake disaster logistics is used to test the proposed model. The test results show that the model can help decision-makers determine the mobilization level of relief supplies, the initial helicopter deployment of relief supplies, and the transportation plan within the disaster area. However, the model mainly focuses on developing initial post-disaster relief logistics plans based on information collected immediately after a disaster. Because information pertaining to the demand level and transportation environment is continuously updated and because relief logistics campaigns typically require multiple periods to complete [80, 81], the
future work is to extend the model to multi-period settings, which will give decision-makers the opportunity to dynamically adjust and optimize existing plans.
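The Bertsimas and Sim budget-of-uncertainty idea used above can be summarized in a few lines of Python: for a single constraint sum_j a_j x_j <= b whose coefficients may deviate by at most ahat_j, the protection term lets at most Gamma of them (fractionally, if Gamma is non-integer) take their worst-case values. The sketch below evaluates that protection for a fixed x; the numbers are hypothetical.

```python
def protection(x, a_hat, gamma):
    """Bertsimas-Sim protection term beta(x, Gamma) for one uncertain row."""
    devs = sorted((abs(xj) * aj for xj, aj in zip(x, a_hat)), reverse=True)
    k = int(gamma)
    beta = sum(devs[:k])          # the k largest deviations fully realized
    if k < len(devs):
        beta += (gamma - k) * devs[k]  # fractional part of the budget
    return beta

def robust_feasible(x, a_nom, a_hat, b, gamma):
    # Constraint holds even if up to Gamma coefficients take worst-case values
    nominal = sum(aj * xj for aj, xj in zip(a_nom, x))
    return nominal + protection(x, a_hat, gamma) <= b

# Hypothetical relief-transport row: nominal times, deviations, budget 1.5
x = [1.0, 1.0, 1.0]
print(robust_feasible(x, a_nom=[4.0, 3.0, 5.0],
                      a_hat=[2.0, 1.0, 1.5], b=16.0, gamma=1.5))  # True
```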
10.2.2.3 Electric Vehicle Logistics and Distribution Dispatch
The logistics of delivery using electric vehicles (EVs) is fraught with uncertainties. On the one hand, the growing number of private vehicles has exacerbated road traffic congestion, making logistics delivery times uncertain. On the other hand, due to limited battery life, EVs must be charged at designated stations on the way to their destinations; the charging time is affected by customer demand scheduling, which makes the remaining battery life of EVs uncertain. Research on the optimization of EV logistics distribution is still in its infancy [82–87]: most path optimization studies focus on deterministic conditions, while studies that analyze uncertain conditions mainly rely on assumptions of prior knowledge [88–90]. Ma et al. [91] studied the robust optimization of multi-distribution-center distribution routes for EVs. Based on Bertsimas's robust discrete optimization theory and taking minimum transportation time as the goal, the uncertain travel time on each distribution segment is expressed as an interval, which reduces the need for detailed uncertainty data. Meanwhile, a nonlinear robust optimization model of the EV distribution path with adjustable robustness is established, with constraints on the EV running path and on battery capacity and consumption. Since GA can avoid the difficulties of nonlinear and multimodal constraints when solving nonlinear programming problems, a three-segment hybrid coding method combining the distribution center, customer demand points, and distribution path is designed in this study; the corresponding decoding method and genetic operators are designed to avoid infeasible solutions during population evolution (a simplified decoding sketch follows). Finally, taking part of the road network in Xifeng District of Qingyang City as an example, a distribution scheme with high applicability and economy is obtained. Meanwhile, EV distribution schemes under different transportation conditions can be obtained by setting different robust control risk coefficients, providing decision support for the selection of EV distribution paths among multiple distribution centers.
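The Python sketch below illustrates one plausible way such a segmented chromosome can be decoded into per-depot routes. It is a simplified reading of the coding idea, keeping only depot-assignment and visit-order segments and deriving route breaks from a battery budget; all names and data are hypothetical.

```python
import random

def decode(chromosome, n_depots, battery_capacity, energy):
    """Decode (depot assignment, visit order) into per-depot routes, opening a
    new route whenever the EV's battery budget would be exceeded."""
    depot_of, order = chromosome
    routes = {d: [[]] for d in range(n_depots)}
    used = {d: 0.0 for d in range(n_depots)}
    for c in order:
        d = depot_of[c]
        if used[d] + energy[c] > battery_capacity:  # open a fresh route/vehicle
            routes[d].append([])
            used[d] = 0.0
        routes[d][-1].append(c)
        used[d] += energy[c]
    return routes

# Hypothetical instance: 6 customers, 2 distribution centers
random.seed(1)
n, n_depots = 6, 2
energy = [3.0, 2.0, 4.0, 1.5, 2.5, 3.5]   # energy needed to serve each customer
order = random.sample(range(n), n)          # segment 2: customer visit order
depot_of = [random.randrange(n_depots) for _ in range(n)]  # segment 1: depots
print(decode((depot_of, order), n_depots, battery_capacity=6.0, energy=energy))
```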
10.2.2.4 Real-World Engineering Application
HHC can provide convenience for home treatment, elderly care, etc., alleviating the resource shortage problem caused by limited hospital beds. The most critical goal of HHC companies is to meet the needs of patients in a timely manner. However, there is a tension between the company’s operating costs and the problem of delayed patient service. Delayed services lead to lower patient satisfaction and even missed
Fig. 10.9 Description of HHC route scheduling
opportunities for optimal treatment. TST is a key factor in scheduling HHC services; Fig. 10.9 depicts HHC route scheduling. Fully considering the uncertainty of scheduling time, service time, etc., when formulating plans can improve service quality and reduce losses. Shi et al. [73] studied a robust optimization problem for the HHC routing and scheduling problem considering uncertain TST. The objective function contains two parts. The first part is the total cost of traveling and hiring personnel. The second part is a penalty term on the maximum possible arrival time of the caregiver at the patient's home, which is derived based on budget theory and recursive functions. The constraint functions include uncertain time windows for the service time and business travel time. The RO model is solved using the Taguchi design method combined with the tabu search algorithm. Compared with deterministic design optimization, the robust solution has advantages in terms of the delayed service percentage and the average delay time, providing a valuable framework for HHC companies to develop robust schedules when assigning caregivers.
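A minimal sketch of the worst-case arrival-time idea mentioned above is given below. It is a simplified reading of the recursive-function construction in [73]: arrival times follow the recursion arrive, wait for the time window, serve, travel on, and the uncertainty budget is exhausted by brute-force enumeration (fine for short routes). All data are hypothetical.

```python
from itertools import combinations

def arrival_times(route, travel, travel_dev, service, service_dev, windows,
                  deviated):
    """Arrival time at each patient for a fixed set of 'deviated' legs."""
    t, times = 0.0, []
    for k, i in enumerate(route):
        t = max(t, windows[i])          # wait if the time window is not open yet
        times.append(t)
        if k + 1 < len(route):
            j = route[k + 1]
            dt = service[i] + travel[i, j]
            if k in deviated:           # leg k realizes its worst-case duration
                dt += service_dev[i] + travel_dev[i, j]
            t += dt
    return times

def worst_case_final_arrival(route, gamma, **data):
    """Enumerate deviation patterns within budget gamma (ok for short routes)."""
    legs = range(len(route) - 1)
    return max(
        arrival_times(route, deviated=set(s), **data)[-1]
        for r in range(min(gamma, len(legs)) + 1)
        for s in combinations(legs, r)
    )

# Tiny hypothetical instance: 3 patients, uncertainty budget gamma = 1
route = [0, 1, 2]
travel = {(0, 1): 20.0, (1, 2): 15.0}
travel_dev = {(0, 1): 10.0, (1, 2): 5.0}
service = {0: 30.0, 1: 45.0, 2: 30.0}
service_dev = {0: 10.0, 1: 15.0, 2: 5.0}
windows = {0: 0.0, 1: 60.0, 2: 120.0}
print(worst_case_final_arrival(route, 1, travel=travel, travel_dev=travel_dev,
                               service=service, service_dev=service_dev,
                               windows=windows))  # 140.0
```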
10.2.3 Closed-Loop Supply Chain

A closed-loop supply chain (CLSC) network is an integrated system that includes both the forward supply chain and the reverse supply chain. During its design optimization process, multiple objectives sometimes contradict each other [92]. Meanwhile, CLSC's strategic and tactical decisions are heavily influenced by uncertainties such as demand, recycling rates, and recycling quality, which can create volatility
in supply chain models. From the perspective of product service life, this section reviews engineering cases of robust design optimization of CLSC networks from three aspects: perishable goods, disposable goods, and durable goods.
10.2.3.1 Closed-Loop Supply Chain Network for Perishable Goods
Perishable goods, such as those in the dairy and pharmaceutical industries, are characterized by a limited life span and limited storage time. Therefore, the CLSC network is highly sensitive to demand uncertainty, and improper allocation can easily cause waste and environmental pollution. A specialized model for studying the cost and environmental objectives of a perishable-goods CLSC under uncertainty can effectively balance the relationship between cost and pollution. Some scholars have studied supply chain network design for perishable goods [93–95], but these studies have not considered the uncertainty of the return rate and return quality. In view of the uncertainties of demand, return rate, and quality of returned goods, Yavari et al. [96] proposed a multi-objective robust MILP optimization model for a green closed-loop supply chain network of perishable goods. The two optimization objectives are to minimize the total cost of the supply chain and to minimize environmental pollution, while the constraint set contains 25 constraints on the multi-period, multi-product supply chain composed of suppliers, manufacturers, warehouses, retailers, and collection centers. To solve the multi-objective problem, compromise programming is used to transform the problem into a single-objective optimization model (a small sketch follows). The robust optimization methods of Bertsimas and Sim [79] and Ben-Tal et al. [97] are used to deal with these uncertainties. Due to the NP-hard nature of the problem [98], traditional exact algorithms can be inefficient. The authors defined two linear programming models and a small-sized integer linear programming model (the distribution model); based on solving these models, an efficient heuristic named the Yavari and Geraeli (YAG) method is developed, which proved effective for large complex problems in subsequent experiments. Finally, the supply chain of an Iranian dairy company is taken as a case study. The experimental results show that the proposed method helps managers make comprehensive decisions on the forward and reverse flows of the closed-loop supply chain of perishable products. The average difference between the YAG method and the optimal solution, obtained in a reasonable time, is less than 1.65%. In addition, the YAG method found the optimal solution in more than 34% of the instances.
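Compromise programming, as used above to scalarize the two objectives, selects the design closest (in a weighted, normalized L_p sense) to the ideal point. The sketch below is a generic illustration, not the authors' formulation; the candidate designs, weights, and norm are hypothetical.

```python
def compromise_value(f, f_ideal, f_nadir, weights, p=2):
    """Weighted L_p distance of objective vector f from the ideal point, with
    each objective normalized by its ideal-nadir range."""
    total = 0.0
    for fi, lo, hi, w in zip(f, f_ideal, f_nadir, weights):
        total += (w * (fi - lo) / (hi - lo)) ** p
    return total ** (1.0 / p)

# Candidate supply-chain designs evaluated on (total cost, pollution):
candidates = {"A": (120.0, 9.0), "B": (150.0, 5.0), "C": (135.0, 7.0)}
f_ideal, f_nadir = (120.0, 5.0), (150.0, 9.0)
best = min(candidates, key=lambda k: compromise_value(
    candidates[k], f_ideal, f_nadir, weights=(0.5, 0.5)))
print(best)  # "C": a balanced trade-off beats either extreme here
```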
10.2.3.2 Closed-Loop Supply Chain Network for Disposable Home Appliances
In the CLSC network of disposable products, erratic fluctuations in the number of returns can affect the quantity of products offered by suppliers. Effective handling of demand and cost uncertainties can facilitate the development of CLSC networks for disposable products.
Gholizadeh et al. [99] first considered the concept of grading for a multi-product closed-loop supply chain, taking the design optimization of the CLSC for disposable household appliances as an example. The forward chain of the model consists of four levels (suppliers, manufacturers, distributors, and customers), while the reverse chain consists of three levels (collection, recycling, and processing). In the uncertainty model, the objective function is to maximize the value of reverse products and directly supplied products in the closed-loop chain network, including the sales revenue of directly sold products, the recovery revenue of reverse products, operating costs, and transportation costs at all points of the organization. The constraint set consists of 28 constraints, including customer demand, return quantity, raw material recovery, procurement, etc. Uncertain parameters include demand, product return rates, shipping costs, and operating costs for each period. To reduce the computational difficulty, the nonlinear constraints are linearized by substituting some of the variables in the multivariable constraints with their determined boundary values. In the proposed robust optimization model, the objective function consists of three parts: the first two parts are the mean and variance of the total cost of the chain, which represent the stability of the model, and the third part measures the equilibrium of the objective function (a sketch of this mean-variance robust objective follows). GA is used to solve the model; unlike a general GA, its parameters are tuned with the help of orthogonal experiments and Taguchi design. Finally, a disposable necessities company located in Amer, Iran, is taken as an example to verify the effectiveness of the method. The results show that this method provides a solution very close to the exact solution and has good convergence, giving the company an optimal operation scheme of the CLSC network for disposable electrical products. In addition, they noted that, given the uncertain nature of the problem, future work could combine the proposed approach with fuzzy or stochastic programming.
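A scenario-based mean-variance robust objective of the kind described above can be written compactly. The following Python sketch is illustrative only (a Mulvey-style formulation is assumed, here stated for profit maximization); the scenario data, weight, and penalty are hypothetical.

```python
def robust_objective(profits, violations, probs, lam=1.0, penalty=50.0):
    """Reward designs whose profit is high on average and stable across
    scenarios; penalize expected constraint violation ('model equilibrium')."""
    mean = sum(p * f for p, f in zip(probs, profits))
    var = sum(p * (f - mean) ** 2 for p, f in zip(probs, profits))
    infeas = sum(p * v for p, v in zip(probs, violations))
    return mean - lam * var - penalty * infeas  # maximize this value

probs = [0.25, 0.5, 0.25]
profits_design1 = [100.0, 110.0, 90.0]   # stable design, always feasible
profits_design2 = [60.0, 120.0, 160.0]   # higher mean, volatile, one violation
print(robust_objective(profits_design1, [0.0, 0.0, 0.0], probs))  # preferred
print(robust_objective(profits_design2, [0.0, 0.0, 5.0], probs))
```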
10.2.3.3 A Closed-Loop Supply Chain Network for Durable Goods
Most design optimization of durable-product supply chains focuses on the forward supply chain; there is little research on CLSC networks. Atabaki et al. [100] complemented the design of reverse logistics for durable goods, which enables the supply chain of durable products to form a closed loop. Various recycling facilities are considered in the CLSC structure of durable products. Economic costs, CO2 emissions, and energy consumption are considered as multiple objectives, while the uncertainties include stochastic uncertainties in the operating costs and demand of forward-flow facilities and cognitive uncertainties in facility construction costs, reverse-facility operating costs, the number of returns, and recovery rates. A corresponding robust optimization model is built to make robust decisions on supplier selection, location allocation, transportation mode, assembly technology, and recycling level. The robust optimization model introduces possibility programming and scenario-based stochastic programming to deal with the mixed uncertainties in the model. Scenario-based stochastic programming is used to deal
with the stochastic uncertainties, which are represented by probability distributions, while possibility programming is used to deal with the cognitive uncertainties, whose parameters can be represented by triangular fuzzy numbers. Finally, a numerical case of CLSC network optimization for durable products made of nickel, steel, and copper is tested. The optimization results show that the obtained optimal solution is not affected by any realization of the uncertainty and provides a robust decision for the decision maker.
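The division of labor between the two programming styles can be illustrated in a few lines of Python: stochastic parameters enter via discrete scenarios with probabilities, while cognitive (epistemic) parameters enter via triangular fuzzy numbers reduced to an expected value. This sketch is generic, not Atabaki et al.'s formulation, and all names and numbers are hypothetical.

```python
def tfn_expected_value(a, b, c):
    # Expected value of a triangular fuzzy number (a, b, c), assumed (a+2b+c)/4
    return (a + 2.0 * b + c) / 4.0

# Stochastic side: demand scenarios with probabilities (hypothetical data)
scenarios = [(0.3, 900.0), (0.5, 1000.0), (0.2, 1250.0)]  # (prob, demand)

# Cognitive side: a fuzzy reverse-facility unit cost, defuzzified
unit_cost = tfn_expected_value(4.0, 5.0, 7.0)

# Scenario-expected operating cost with the defuzzified unit cost
expected_cost = sum(p * d * unit_cost for p, d in scenarios)
print(f"defuzzified unit cost: {unit_cost:.2f}, expected cost: {expected_cost:.1f}")
```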
10.2.3.4 Real-World Engineering Application
Supply chain design is one of the important strategic decisions affecting a company's competitive advantage and economic growth. Figure 10.10 is a schematic diagram of the closed-loop supply chain of disposable electrical appliances. The life cycle of disposable products includes two parts: forward production and reverse recycling. The CLSC is an integrated system combining the forward and reverse supply chains. The CLSC emphasizes resource recycling and extending product life cycles, which benefits both the environment and the corporate economy. Therefore, many companies turn to the design of closed-loop supply chain networks [99]. However, supply chain network design is a complex problem with multiple, sometimes conflicting, objectives [92]. Gholizadeh et al. [99] studied a disposable-appliance recycling network in a multi-layer closed-loop supply chain. By using the linearization formula [101], they proposed a simplified version of the robust optimization model; readers interested in the details may refer to [99]. The objective function optimizes the mean and variance of the CLSC total profit and the balance of the supply chain model. The constraint functions contain the parameter interaction relationships between the various links in the supply chain.
Fig. 10.10 Disposable appliances closed-loop supply chain
The optimization program is implemented in the LINGO software, and Taguchi design and a genetic algorithm are used to obtain robust solutions. Finally, operations research techniques are used to achieve optimal decision-making. Taking the supply chain case of a disposable household products company located in Amer, Iran, as an example, the proposed method efficiently provides a solution that is very close to the exact solution.
10.2.4 Summary

As in Sect. 10.1, we use Table 10.3 to summarize the information in this section, including the application field, specific application, sources of uncertainty, optimization method, optimization results, and references.
Table 10.3 The summary of RDO engineering applications

1. Application field: Energy management. Specific application: NH3 synthesis plant. Sources of uncertainty: (1) wind energy; (2) NH3 reactor temperature. Method: PCE + MCS + NSGA-II. Result/Contribution: The CoV of plant output is 1.46%. Reference: Verleysen et al. [55].

2. Application field: Energy management. Specific application: Photovoltaic system. Sources of uncertainty: (1) power distribution of the photovoltaic system; (2) demand for hydrogen and electricity. Method: Robust optimization + MILP. Result/Contribution: The total annualized cost increases by 13.75%, but the charging station is more robust. Reference: Wang et al. [59].

3. Application field: Energy management. Specific application: Hybrid PS. Sources of uncertainty: (1) CO2 emission; (2) renewable energy sustainability. Method: MOMFPFRP. Result/Contribution: The total installed capacity increases by 54.42% and the total cost by 4.2%, but the conflicting multi-objective optimization problem is solved. Reference: Tsao et al. [66].

4. Application field: Logistics scheduling. Specific application: HHC. Sources of uncertainty: (1) service time; (2) travel time. Method: Robust design optimization + TS. Result/Contribution: The first study of the HHCRSP with uncertain travel and service times from a robust optimization perspective; provides guidance to HHC companies on reliable schedule development, which enhances industry competitiveness. Reference: Shi et al. [73].

5. Application field: Logistics scheduling. Specific application: Post-earthquake relief activities. Sources of uncertainty: (1) demand level; (2) transit time. Method: Bertsimas and Sim's method [79]. Result/Contribution: Interval theory is used to quantify the uncertainty; the model helps decision-makers determine the mobilization level of relief supplies, the initial helicopter deployment, and the transportation plan within the disaster area. Reference: Liu et al. [102].

6. Application field: Logistics scheduling. Specific application: EV path planning. Sources of uncertainty: (1) delivery time; (2) delivery distance. Method: Bertsimas's robust discrete optimization theory + modified GA. Result/Contribution: EV distribution schemes under different transportation conditions can be obtained by setting different robust control risk coefficients, providing decision support for selecting EV distribution paths among multiple distribution centers. Reference: Ma et al. [91].

7. Application field: CLSC. Specific application: Perishable goods. Sources of uncertainty: (1) demand for goods; (2) return rate; (3) quality of returns. Method: Bertsimas and Sim's method [79] + Ben-Tal's method [97] + YAG. Result/Contribution: The average difference from the optimal solution is less than 1.65%; the YAG method found the optimal solution in more than 34% of instances and improves the efficiency of large-scale problem solving. Reference: Yavari et al. [96].

8. Application field: CLSC. Specific application: Disposable household appliances. Sources of uncertainty: (1) product demand; (2) transportation costs; (3) operating cost. Method: GA + orthogonal experiment + Taguchi design. Result/Contribution: First proposed the hierarchical (grading) concept for a multi-product CLSC; provides a solution very close to the exact solution with good convergence, giving the company an optimal operation scheme of the CLSC network for disposable electrical products. Reference: Gholizadeh et al. [99].

9. Application field: CLSC. Specific application: Durable products. Sources of uncertainty: (1) random uncertainty (demand level; operating cost); (2) cognitive uncertainty (plant establishment costs; reverse-plant operating costs; number of returned goods; recovery rate). Method: Possibility programming + scenario-based stochastic programming. Result/Contribution: The reverse logistics of durable goods is designed so that a CLSC of durable goods is formed; the obtained optimal solution is not affected by any realization of the uncertainty and provides a robust decision for the decision maker. Reference: Atabaki et al. [100].

10.3 Research Outlook of Design Optimization Under Uncertainty

10.3.1 Challenges and Prospects of RBDO

The main challenges of RBDO research in the engineering field lie in the expansion to high-dimensional design spaces, the difficulty of highly nonlinear constraints, the effective integration of sensitivity information, the handling of mixed continuous-discrete random variables, and the consideration of multi-objective optimization problems. High-dimensional design spaces and highly nonlinear constraints rapidly increase the amount of computation required for reliability analysis. Surrogate models can effectively alleviate these difficulties; widely used surrogates include Kriging [103], radial basis functions [104], support vector machine regression [105], etc. However, as the degree of constraint nonlinearity and the dimension of the design parameters increase, the cost of constructing an accurate surrogate model grows, which requires the development of more effective sequential sampling methods and strategies to improve the efficiency of surrogate model construction. To improve the efficiency of RBDO procedures, gradient-based optimization algorithms are often the first choice, and the key to a correct and reasonable optimization direction is the integration of sensitivity information. However, in the presence of discrete random variables and/or multi-modal constraints, the optimization may fail to find the right direction or may fall into local optima. Meta-heuristic optimization algorithms (such as PSO, GA, etc.) can escape this dilemma; coupling them with the RBDO framework and improving their efficiency are the keys to solving these problems. The phenomenon of multiple failure modes also complicates the RBDO problem; such problems can be converted into multi-objective optimization problems, which then require appropriate decision criteria to rank the Pareto solution set. In terms of optimization strategy, the double-loop strategy is usually adopted for RBDO in new engineering applications because of its generality and its easy implementation in any general optimization software. However, due to the nesting of the reliability analysis loop within the optimization loop, the reliability analysis is computed repeatedly. Using a single-loop strategy (e.g., SLSV) or a decoupling strategy (e.g., SORA) can effectively avoid unnecessary computation, but the key is to develop advanced methods for a reasonable equivalent approximation of the constraints. In the future, efforts regarding RBDO should focus on providing and executing efficient and reliable numerical programs using sound or improved theoretical algorithms and appropriate tools. The development of new simulation schemes for reliability and sensitivity evaluation offers further choices for new RBDO methods in engineering practice, and advanced meta-heuristic algorithms further expand the application field of RBDO. Different methods bring different advantages and difficulties to the optimization process, and selecting the right theory and method for the characteristics of the problem is the key to improving the efficiency of RBDO. Overcoming these challenges can lead to significant progress in the field and ultimately help in complex decision-making processes in real-world situations.
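As a pointer to how a surrogate enters this workflow, the sketch below fits a Kriging (Gaussian process) surrogate to a hypothetical limit-state function and then runs a cheap Monte Carlo failure-probability estimate on the surrogate. It is a generic illustration using scikit-learn, not a specific method from the literature, and it omits the sequential-sampling refinement discussed above.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def g(x):  # hypothetical expensive limit-state function (failure if g < 0)
    return 3.0 - x[:, 0] ** 2 - 0.5 * x[:, 1]

rng = np.random.default_rng(0)
X_train = rng.uniform(-3, 3, size=(40, 2))   # initial design of experiments
y_train = g(X_train)

# Kriging surrogate: a Gaussian process with an RBF (squared-exponential) kernel
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
gp.fit(X_train, y_train)

# Cheap Monte Carlo on the surrogate instead of the expensive model
X_mc = rng.standard_normal((100_000, 2))
pf_hat = np.mean(gp.predict(X_mc) < 0.0)
print(f"surrogate-based Pf estimate: {pf_hat:.4f}")
```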
10.3.2 Challenges and Prospects of RDO
RDO models usually contain multiple objective functions. With the gradual expansion of the design scale, design optimization problems are becoming more and more complicated. In the future, RDO will consider constraints more comprehensively. Different noise types, incomplete and missing data, multi-modal problems, and even the integration of multidisciplinary constraints make RDO implementation more difficult. This requires design teams to have efficient computational and simulation tools that can address the design optimization needs of large-scale scenarios. Meta-heuristic optimization algorithms help to handle the large amount of computation and to escape the local-optimum traps of multi-modal problems, and advanced uncertainty quantification techniques can quantify various uncertain variables. In the future, RDO can be combined with cloud computing and big data to mine the rules and patterns in the design process; visualization technology and collaborative design platforms can be used to realize the simulation, visualization, and collaboration of robust design optimization and to promote information exchange and integration among multiple disciplines, so as to build a complete and practical optimization model and improve the parallel processing and computing capabilities of RDO.
Fig. 10.11 Challenges of RBDO and RDO
As the usage of RDO becomes more widespread, future research and exploration of RDO can develop in the aspects of uncertainty modeling, non-convex optimization methods, and robustness indicators. By introducing uncertainty modeling, uncertain factors can be predicted and controlled, which improves the robustness of the model and its adaptability to unknown situations. Exploring non-convex optimization methods enables RDO to adapt to more complex models and problems. Finally, developing more generalized robustness indicators helps ensure that RDO is not limited to a particular application domain or problem (Fig. 10.11).
References

1. Cid Bengoa, C. (2021). Probabilistic fail-safe size optimization of aerospace structures under several sources of uncertainty. 2. Hao, P., et al. (2021). Efficient reliability-based design optimization of composite structures via isogeometric analysis. Reliability Engineering System Safety, 209, 107465.
3. Zhu, S.-P., et al. (2013). Bayesian framework for probabilistic low cycle fatigue life prediction and uncertainty modeling of aircraft turbine disk alloys. Probabilistic Engineering Mechanics, 34, 114–122. 4. Hu, D., et al. (2019). Effect of inclusions on low cycle fatigue lifetime in a powder metallurgy nickel-based superalloy FGH96. International Journal of Fatigue, 118, 237–248. 5. Hu, D., et al. (2016). Creep-fatigue behavior of turbine disc of superalloy GH720Li at 650 C and probabilistic creep-fatigue modeling. Materials Science Engineering A, 670, 17–25. 6. Zhu, S.-P., et al. (2017). A combined high and low cycle fatigue model for life prediction of turbine blades. Materials, 10(7), 698. 7. Liu, X., et al. (2021). Reliability-based design optimization approach for compressor disc with multiple correlated failure modes. Aerospace Science and Technology, 110, 106493. 8. Aas, K., et al. (2009). Pair-copula constructions of multiple dependence. Insurance: Mathematics Economics, 44(2), 182–198. 9. Nowak, A. S., & Collins, K. R. (2012). Reliability of structures. CRC Press. 10. Cid Bengoa, C., et al. (2020). Multi-model optimization approach of aircraft structures under uncertainty using horsetail matching and RBDO methods. In AIAA Scitech 2020 Forum. Orlando, FL. 11. Chengwei, F., et al. (2021). Whole-process design and experimental validation of landing gear lower drag stay with global/local linked driven optimization strategy. Chinese Journal of Aeronautics, 34(2), 318–328. 12. Meng, D., et al. (2019). Structural reliability analysis and uncertainties-based collaborative design and optimization of turbine blades using surrogate model. Fatigue Fracture of Engineering Materials Structures, 42(6), 1219–1227. 13. Baldomir, A., et al. (2012). Size optimization of shell structures considering several incomplete configurations. In 53rd AIAA/ASME/ASCE/AHS/ASC structures, structural dynamics and materials conference 20th AIAA/ASME/AHS adaptive structures conference 14th AIAA. 14. Cid, C., Baldomir, A., & Hernandez, S. (2016). Reliability based design optimization of structures considering several incomplete configurations. In 17th AIAA/ISSMO multidisciplinary analysis and optimization conference. 15. Cid Bengoa, C., et al. (2018). Multi-model reliability-based design optimization of structures considering the intact configuration and several partial collapses. Structural Multidisciplinary Optimization, 57, 977–994. 16. Kim, B. C., Weaver, P. M., & Potter, K. (2014). Manufacturing characteristics of the continuous tow shearing method for manufacturing of variable angle tow composites. Composites Part A: Applied Science and Manufacturing, 61, 141–151. 17. Scheu, M. N., et al. (2019). A systematic failure mode effects and criticality analysis for offshore wind turbine systems towards integrated condition based maintenance strategies. Ocean Engineering, 176, 118–133. 18. Martinez-Luengo, M., & Shafiee, M. (2019). Guidelines and cost-benefit analysis of the structural health monitoring implementation in offshore wind turbine support structures. Energies, 12(6), 1176. 19. Yeter, B., Garbatov, Y., & Soares, C. G. (2015). Fatigue reliability of an offshore wind turbine supporting structure accounting for inspection and repair. In Analysis and design of marine structures V (pp. 751–762). CRC Press. 20. Ćorak, M., et al. (2022). Uncertainties of wave data collected from different sources in the Adriatic Sea and consequences on the design of marine structures. Ocean Engineering, 266, 112738. 21. 
Li, N., et al. (2021). Optimal design and strength reliability analysis of pressure shell with grid sandwich structure. Ocean Engineering, 223, 108657. 22. Meng, D., et al. (2020). Reliability-based optimisation for offshore structures using saddlepoint approximation. In Proceedings of the Institution of Civil Engineers-Maritime Engineering. Thomas Telford.
23. Hu, W., et al. (2016). Integrating variable wind load, aerodynamic, and structural analyses towards accurate fatigue life prediction in composite wind turbine blades. Structural Multidisciplinary Optimization, 53, 375–394. 24. Jiang, Z., et al. (2017). Structural reliability analysis of wind turbines: A review. Energies, 10(12), 2099. 25. Hu, W., Choi, K., & Cho, H. (2016). Reliability-based design optimization of wind turbine blades for fatigue life under dynamic wind load uncertainty. Structural Multidisciplinary Optimization, 54, 953–970. 26. Larsenon, K. (2020). Corrosion risks and mitigation strategies for offshore wind turbine foundations. 27. Meng, D., et al. (2023). A novel hybrid adaptive kriging and water cycle algorithm for reliability-based design and optimization strategy: Application in offshore wind turbine monopile. Computer Methods in Applied Mechanics and Engineering, 412, 116083. 28. Forde, M. C. (1998). Bridge research in Europe. Construction Building Materials, 12(2–3), 85–91. 29. Nguyen, V.-S., et al. (2013). Reliability-based optimisation design of post-tensioned concrete box girder bridges considering pitting corrosion attack. Structure Infrastructure Engineering, 9(1), 78–96. 30. Stewart, M. G. (2009). Mechanical behaviour of pitting corrosion of flexural and shear reinforcement and its effect on structural reliability of corroding RC beams. Structural Safety, 31(1), 19–30. 31. Bentur, A., Berke, N., & Diamond, S. (1997). Steel corrosion in concrete: fundamentals and civil engineering practice. CRC Press. 32. Wu, S., & Law, S. (2011). Dynamic analysis of bridge with non-Gaussian uncertainties under a moving vehicle. Probabilistic Engineering Mechanics, 26(2), 281–293. 33. Mao, J., et al. (2016). Random dynamic analysis of a train-bridge coupled system involving random system parameters based on probability density evolution method. Probabilistic Engineering Mechanics, 46, 48–61. 34. Ni, P., et al. (2019). Using polynomial chaos expansion for uncertainty and sensitivity analysis of bridge structures. Mechanical Systems Signal Processing, 119, 293–311. 35. Ni, P., et al. (2021). Reliability based design optimization of bridges considering bridgevehicle interaction by kriging surrogate model. Engineering Structures, 246, 112989. 36. Kusano, I., et al. (2020). Reliability based design optimization for bridge girder shape and plate thicknesses of long-span suspension bridges considering aeroelastic constraint. Journal of Wind Engineering and Industrial Aerodynamics, 202, 104176. 37. Montoya, M. C., et al. (2018). CFD-based aeroelastic characterization of streamlined bridge deck cross-sections subject to shape modifications using surrogate models. Journal of Wind Engineering Industrial Aerodynamics, 177, 405–428. 38. Diana, G., et al. (2013). Wind tunnel tests and numerical approach for long span bridges: The Messina bridge. Journal of Wind Engineering and Industrial Aerodynamics, 122, 38–49. 39. Ge, Y., & Xiang, H. (2008). Recent development of bridge aerodynamics in China. Journal of Wind Engineering and Industrial Aerodynamics, 96(6–7), 736–768. 40. Duan, L., et al. (2019). Multi-objective reliability-based design optimization for the VRB-VCS FLB under front-impact collision. Structural Multidisciplinary Optimization, 59, 1835–1851. 41. Zhang, H., & Zhang, X. (2016). Crashworthiness performance of conical tubes with nonlinear thickness distribution. Thin-Walled Structures, 99, 35–44. 42. Lu, R., et al. (2017). 
Simulation of springback variation in the U-bending of tailor rolled blanks. Journal of the Brazilian Society of Mechanical Sciences Engineering, 39, 4633–4647. 43. Duan, L., et al. (2017). Multi-objective system reliability-based optimization method for design of a fully parametric concept car body. Engineering Optimization, 49(7), 1247–1263. 44. Liu, Q., et al. (2013). Lightweight design of carbon twill weave fabric composite body structure for electric vehicle. Composite Structures, 97, 231–238.
45. Belingardi, G., Beyene, A. T., & Koricho, E. G. (2013). Geometrical optimization of bumper beam profile made of pultruded composite by numerical simulation. Composite Structures, 102, 217–225. 46. Hesse, S., Lukaszewicz, D.-J., & Duddeck, F. (2015). A method to reduce design complexity of automotive composite structures with respect to crashworthiness. Composite Structures, 129, 236–249. 47. Kalantari, M., Dong, C., & Davies, I. J. (2016). Multi-objective robust optimisation of unidirectional carbon/glass fibre reinforced hybrid composites under flexural loading. Composite Structures, 138, 264–275. 48. Liu, Z., et al. (2018). Reliability-based design optimization of composite battery box based on modified particle swarm optimization algorithm. Composite Structures, 204, 239–255. 49. Charalambakis, N. (2010). Homogenization techniques and micromechanics. A survey and perspectives. Applied Mechanics Reviews, 63(3). 50. Jin, R., Chen, W., & Sudjianto, A.. (2003). An efficient algorithm for constructing optimal design of computer experiments. In International design engineering technical conferences and computers and information in engineering conference. 51. Xu, X., et al. (2022). Multi-objective reliability-based design optimization for the reducer housing of electric vehicles. Engineering Optimization, 54(8), 1324–1340. 52. Li, X., et al. (2016). Effect of strain rate on the mechanical properties of carbon/epoxy composites under quasi-static and dynamic loadings. Polymer Testing, 52, 254–264. 53. Wu, Y., et al. (2017). Dynamic crash responses of bio-inspired aluminum honeycomb sandwich structures with CFRP panels. Composites Part B: Engineering, 121, 122–133. 54. Cid Bengoa, C., et al. (2020). Multi-model optimization approach of aircraft structures under uncertainty using Horsetail Matching and RBDO methods. In AIAA Scitech 2020 Forum. 55. Verleysen, K., et al. (2020). How can power-to-ammonia be robust? Optimization of an ammonia synthesis plant powered by a wind turbine considering operational uncertainties. Fuel, 266, 117049. 56. Ugirumurera, J., & Haas, Z. J. (2017). Optimal capacity sizing for completely green charging systems for electric vehicles. IEEE Transactions on Transportation Electrification, 3(3), 565–577. 57. Xie, R., et al. (2018). Planning fully renewable powered charging stations on highways: A data-driven robust optimization approach. IEEE Transactions on Transportation Electrification, 4(3), 817–830. 58. Mehrjerdi, H. (2019). Off-grid solar powered charging station for electric and hydrogen vehicles including fuel cell and hydrogen storage. International Journal of Hydrogen Energy, 44(23), 11574–11583. 59. Wang, Y., et al. (2020). Robust design of off-grid solar-powered charging station for hydrogen and electric vehicles via robust optimization approach. International Journal of Hydrogen Energy, 45(38), 18995–19006. 60. Sharafi, M., & ElMekkawy, T. Y. (2015). Stochastic optimization of hybrid renewable energy systems using sampling average method. Renewable Sustainable Energy Reviews, 52, 1668–1679. 61. Li, S., Coit, D. W., & Felder, F. (2016). Stochastic optimization for electric power generation expansion planning with discrete climate change scenarios. Electric Power Systems Research, 140, 401–412. 62. Muela, E., Schweickardt, G., & Garces, F. (2007). Fuzzy possibilistic model for medium-term power generation planning with environmental criteria. Energy Policy, 35(11), 5643–5655. 63. Santoyo-Castelazo, E., & Azapagic, A. (2014). 
Sustainability assessment of energy systems: integrating environmental, economic and social aspects. Journal of Cleaner Production, 80, 119–138. 64. Iddrisu, I., & Bhattacharyya, S. C. (2015). Sustainable Energy Development Index: A multidimensional indicator for measuring sustainable energy development. Renewable Sustainable Energy Reviews, 50, 513–530.
65. You, A. D., et al. (2011). A study of electrical security risk assessment system based on electricity regulation. Energy Policy, 39(4), 2062–2074. 66. Tsao, Y.-C., & Thanh, V.-V. (2020). A multi-objective fuzzy robust optimization approach for designing sustainable and reliable power systems under uncertainty. Applied Soft Computing, 92, 106317. 67. Torabi, S. A., & Hassini, E. (2008). An interactive possibilistic programming approach for multiple objective supply chain master planning. Fuzzy Sets Systems, 159(2), 193–214. 68. Yager, R. R. (1981). A procedure for ordering fuzzy subsets of the unit interval. Information Sciences, 24(2), 143–161. 69. Fathabadi, H. (2017). Novel wind powered electric vehicle charging station with vehicle-togrid (V2G) connection capability. Energy Conversion and Management, 2020, 229–239. 70. Yuan, B., Liu, R., & Jiang, Z. (2015). A branch-and-price algorithm for the home health care scheduling and routing problem with stochastic service times and skill requirements. International Journal of Production Research, 53(24), 7450–7464. 71. Liu, R., Yuan, B., & Jiang, Z. (2019). A branch-and-price algorithm for the home-caregiver scheduling and routing problem with stochastic travel and service times. Flexible Services Manufacturing Journal, 31, 989–1011. 72. Lanzarone, E., & Matta, A. (2014). Robust nurse-to-patient assignment in home care services to minimize overtimes under continuity of care. Operations Research for Health Care, 3(2), 48–58. 73. Shi, Y., Boudouh, T., & Grunder, O. (2019). A robust optimization for a home health care routing and scheduling problem with consideration of uncertain travel and service times. Transportation Research Part E: Logistics Transportation Review, 128, 52–95. 74. Solomon, M. M. (1987). Algorithms for the vehicle routing and scheduling problems with time window constraints. Operations Research, 35(2), 254–265. 75. Caunhye, A. M., Nie, X., & Pokharel, S. (2012). Optimization models in emergency logistics: A literature review. Socio-Economic Planning Sciences, 46(1), 4–13. 76. Özdamar, L., & Ertem, M. A. (2015). Models, solutions and enabling technologies in humanitarian logistics. European Journal of Operational Research, 244(1), 55–65. 77. Liberatore, F., et al. (2013). Uncertainty in humanitarian logistics for disaster management. A review. In Decision aid models for disaster management emergencies (pp. 45–74). Springer. 78. Hoyos, M. C., Morales, R. S., & Akhavan-Tabatabaei, R. (2015). OR models with stochastic components in disaster operations management: A literature survey. Computers Industrial Engineering, 82, 183–197. 79. Bertsimas, D., & Sim, M. (2004). The price of robustness. Operations Research, 52(1), 35–53. 80. Özdamar, L., Ekinci, E., & Küçükyazici, B. (2004). Emergency logistics planning in natural disasters. Annals of Operations Research, 129, 217–245. 81. Bozorgi-Amiri, A., & Khorsi, M. (2016). A dynamic multi-objective location–Routing model for relief logistic planning under uncertainty on demand, travel time, and cost parameters. The International Journal of Advanced Manufacturing Technology, 85, 1633–1648. 82. Artmeier, A., et al. (2010). The optimal routing problem in the context of battery-powered electric vehicles. In CPAIOR Workshop on Constraint Reasoning and Optimization for Computational Sustainability (CROCS). 83. Worley, O., Klabjan, D., & Sweda, T. M.. (2012). Simultaneous vehicle routing and charging station siting for commercial electric vehicles. 
In 2012 IEEE international electric vehicle conference. IEEE. 84. Yi, T., et al. (2020). Joint optimization of charging station and energy storage economic capacity based on the effect of alternative energy storage of electric vehicle. Energy, 208, 118357. 85. Schneider, M., Stenger, A., & Goeke, D. (2014). The electric vehicle-routing problem with time windows and recharging stations. Transportation Science, 48(4), 500–520. 86. Afroditi, A., et al. (2014). Electric vehicle routing problem with industry constraints: trends and insights for future research. Transportation Research Procedia, 3, 452–459.
87. Strehler, M., Merting, S., & Schwan, C. (2017). Energy-efficient shortest routes for electric and hybrid vehicles. Transportation Research Part B: Methodological, 103, 111–135. 88. Caplice, C., & Mahmassani, H. S. (1992). Aspects of commuting behavior: Preferred arrival time, use of information and switching propensity. Transportation Research Part A: Policy Practice, 26(5), 409–418. 89. Wang, Y., et al. (2012). Location optimization of multiple distribution centers under fuzzy environment. Journal of Zhejiang University Science A, 13(10), 782. 90. Wang, Y., et al. (2015). Vehicle routing problem based on a fuzzy customer clustering approach for logistics network optimization. Journal of Intelligent Fuzzy Systems, 29(4), 1427–1442. 91. Ma, C., et al. (2018). Distribution path robust optimization of electric vehicle with multiple distribution centers. PLoS One, 13(3), e0193789. 92. Srivastava, S. K. (2008). Network design for reverse logistics. Omega, 36(4), 535–548. 93. Hasani, A., Zegordi, S. H., & Nikbakhsh, E. (2012). Robust closed-loop supply chain network design for perishable goods in agile manufacturing under uncertainty. International Journal of Production Research, 50(16), 4649–4669. 94. Govindan, K., et al. (2014). Two-echelon multiple-vehicle location–routing problem with time windows for optimization of sustainable supply chain network of perishable food. International Journal of Production Economics, 152, 9–28. 95. Keyvanshokooh, E., Ryan, S. M., & Kabir, E. (2016). Hybrid robust and stochastic optimization for closed-loop supply chain network design using accelerated Benders decomposition. European Journal of Operational Research, 249(1), 76–92. 96. Yavari, M., & Geraeli, M. (2019). Heuristic method for robust optimization model for green closed-loop supply chain network design of perishable goods. Journal of Cleaner Production, 226, 282–305. 97. Ben-Tal, A., El Ghaoui, L., & Nemirovski, A. (2009). Robust optimization (Vol. Vol. 28). Princeton University Press. 98. Ibaraki, T., & Katoh, N. (1988). Resource allocation problems: Algorithmic approaches. MIT Press. 99. Gholizadeh, H., Tajdin, A., & Javadian, N. (2020). A closed-loop supply chain robust optimization for disposable appliances. Neural Computing Applications, 32, 3967–3985. 100. Atabaki, M. S., Mohammadi, M., & Naderi, B. (2020). New robust optimization models for closed-loop supply chain of durable products: Towards a circular economy. Computers Industrial Engineering, 146, 106520. 101. Glover, F., & Woolsey, E. (1974). Converting the 0-1 polynomial programming problem to a 0-1 linear program. Operations Research, 22(1), 180–182. 102. Liu, Y., et al. (2018). Robust optimization for relief logistics planning under uncertainties in demand and transportation time. Applied Mathematical Modelling, 55, 262–280. 103. Sacks, J., et al. (1989). Design and analysis of computer experiments. Statistical Science, 4(4), 409–423. 104. Musavi, M. T., et al. (1992). On the training of radial basis function classifiers. Neural Networks, 5(4), 595–603. 105. Smola, A. J., & Schölkopf, B. (2004). A tutorial on support vector regression. Statistics Computing, 14, 199–222.
Index
A Adaptive sampling, 67, 68, 73–89, 159, 160, 216 Aleatory uncertainty, 36, 42, 110 Area metric, 98–105, 108, 110, 113, 114, 116, 118, 120
C Covariance, 17–18, 70, 71, 107, 110, 116, 153, 184 Cumulative distribution function (CDF), 7–9, 11, 12, 20–28, 31, 42, 44, 49, 98–101, 103–106, 108, 110–114, 116–118, 126, 135, 154, 162, 172, 189
D Design optimization, 36, 67, 152, 170, 171, 186, 187, 191, 193, 200, 228, 233, 242, 244, 246–248, 252, 258–260, 263, 265 Design optimization under uncertainty (DOUU), 36, 38, 49, 67, 216, 232–266 Dimensionality reduction, 136
E Engineering applications, 15, 39, 56, 57, 69, 95, 96, 106, 116, 174, 186, 232–266 Epistemic uncertainty, 36, 42, 110 Evaluation metrics, 68, 71, 73, 83, 84 Expectation, 13–18, 32, 97, 138, 154, 236 Extreme value method, 147, 150–156
G Global exploration, 68, 76, 77, 79, 89
I Importance sampling, 126, 132, 135–139, 141, 216, 246
L Limited data, 116–118 Local exploitation, 68, 76, 77, 79, 82, 89
M Mahalanobis distance (MD), 106–111, 116–117 Model validation, 94–118 Model verification, 94–118 Monte Carlo simulation (MCS), 48, 49, 51–52, 76, 80, 82, 86, 126, 132–137, 140, 141, 156, 158–162, 164, 172, 186, 187, 190, 225, 226, 228, 229, 239, 243, 246, 251, 252, 263 Most probable point (MPP), 124, 126–132, 136, 171, 177, 179–182, 184–186 Most probable point-based RBDO, 171, 183, 185, 186
O Optimization under uncertainty, 202 Outcrossing rate method, 147–150
P Performance measure approach (PMA), 172, 185, 191, 234, 235, 250 Physics-informed neural network (PINN), 67, 216 Probability, 2, 36, 74, 103, 124, 171, 212, 216, 237 Probability density function (PDF), 8, 9, 11–14, 16, 18, 20–29, 31, 32, 38–40, 44, 49–51, 56, 125, 132, 138, 141, 154, 162, 171, 188, 189, 234, 238 Probability integral transformation (PIT), 103, 104, 118 Probability of failure, 29, 30, 125–127, 129–131, 133, 135–137, 141, 146, 149, 151, 157, 158, 162, 171, 172, 188, 190, 191, 196, 216, 222, 225, 227–229
R Random variable, 6, 39, 98, 126, 147, 170, 227, 233 Random variable transformation, 10–13, 23 Reliability, 21, 36, 67, 97, 124, 146, 171, 200, 216, 233 Reliability analysis (RA), 13, 29, 30, 67, 97, 124–140, 145–146, 171–173, 178, 183, 184, 186, 191, 192, 216, 217, 222, 224–227, 233, 236, 238, 242, 243, 246–248, 262, 265 Reliability-based design optimization (RBDO), 30, 170–198, 207, 227, 232–236, 250, 262–266 Reliability index approach (RIA), 172, 174, 191, 193, 236, 243, 245, 251 Response surrogate-based methods, 150, 156–164 Robust design optimization (RDO), 200, 259
S Sampling-based RBDO, 171, 190 Stochastic sensitivity analysis, 187, 228 Surrogate model, 67–89, 132, 134, 136, 148, 150, 156–158, 160, 162, 164, 171, 172, 187, 190, 211, 216, 217, 227, 228, 238, 243, 245–248, 252, 262
T Time-dependent reliability, 124–140, 146–167, 222
U Uncertainty, 15, 36, 94, 124, 150, 171, 200, 217, 233 Uncertainty expression, 94, 95, 98 Uncertainty model, 238, 239, 260, 266 Uncertainty modeling, 36–63, 202, 203 Uncertainty propagation, 49–63, 95, 202, 217, 247 Uncertainty quantification, 38–48, 217, 238, 247, 249, 252 Uncertainty quantization, 249, 265
V Variance, 15–18, 26, 32, 40, 42, 44, 70, 79, 86, 87, 132, 135–140, 147, 153, 159, 160, 202, 204, 205, 212, 260, 261
W Weight decision, 79, 83