Xu Han · Jie Liu

Numerical Simulation-based Design: Theory and Methods
Xu Han, Hunan University, Changsha, Hunan, China
Jie Liu, College of Mechanical and Vehicle Engineering, Hunan University, Changsha, Hunan, China
ISBN 978-981-10-3089-5    ISBN 978-981-10-3090-1 (eBook)
https://doi.org/10.1007/978-981-10-3090-1
Jointly published with Science Press. The print edition is not for sale in China Mainland. Customers from China Mainland please order the print book from: Science Press.

Translation from the Chinese language edition: 基于数值模拟的设计理论与方法 by Xu Han, © Science Press 2015. Published by Science Press. All Rights Reserved.

© Science Press, Beijing and Springer Nature Singapore Pte Ltd. 2020

This work is subject to copyright. All rights are reserved by the Publishers, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publishers, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publishers nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publishers remain neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore.
Preface
Numerical simulation-based design is the linchpin in the development of innovative high-end equipment in the manufacturing industry. In the mechanical equipment design stage, the performances in manufacturing, assembly, service, and maintenance can be comprehensively synthesized with the application of numerical simulation. It greatly reduces the number of trials and tests of physical prototypes, shortens the design cycle, effectively improves the design quality, and substantially reduces the cost of the traditional serial design. Although numerical simulation has demonstrated outstanding advantages and promising application prospects, at the present stage the practical implementation of simulation-based mechanical design in the equipment manufacturing industry and engineering is still not widespread. The reason lies in the features of modern equipment, which is generally characterized by complex structures, extreme service conditions, diverse functional requirements, and parameter uncertainties. There are not yet proficient numerical approaches to the key scientific problems reflected in the technical difficulties of structural design, including the ineffectiveness of the simulation model, the intractable computational intensity, the high-dimensional variables, the multi-objective design, and the uncertainty quantification. As a result, the precision, efficiency, functionality, and reliability of the simulation-based design can hardly be guaranteed, which greatly restricts the general application of numerical simulation in the design and development of engineering equipment.

In view of the aforementioned imperfections in numerical simulation, the development of high-fidelity modeling methods, rapid structural analysis methods, structural multi-objective optimization design methods, and uncertainty quantification methods is imperative. This book endeavors to integrate numerical simulation with mechanical design and to systematically and comprehensively develop the relevant simulation-based design theory and the specific implementation methods for complex equipment. The contents of this book consist of four parts: (1) the high-fidelity modeling method based on computational inverse techniques to ensure the design precision of mechanical equipment, (2) the rapid structural analysis techniques based on surrogate models and reduced basis methods to facilitate proficient computation for the cost-effective design of complex equipment, (3) the high-performing multi-objective engineering optimization methods to synthesize the comprehensive performances in equipment design, and (4) the uncertainty modeling and optimization design methods to ensure the design reliability of complex equipment under uncertain conditions. Exemplifications of the specific applications of the developed procedures of inverse modeling, rapid structural analysis, structural optimization, and uncertainty evaluation, which constitute the basic framework of the simulation-based structural design, will be demonstrated. The aim is to promote the key transformations of numerical simulation from simulation data to reliable design information and knowledge, as well as from an auxiliary analysis tool to a leading design tool.

For the research works presented in this book and its completion, many researchers have provided their unreserved assistance, valuable suggestions, and comments. The authors take this opportunity to express their appreciation and gratitude. Special thanks go to Prof. Fang Wang from Hebei University of Technology, Prof. Guirong Liu from the University of Cincinnati, and Prof. Daolin Xu from Hunan University for reviewing this book thoroughly and putting forward valuable suggestions. The authors are greatly indebted to Prof. Chao Jiang for his contributions to the work on uncertainty optimization, to Dr. Guiping Liu and Guodong Chen for their participation in the work on multi-objective optimization, and to Dr. Fei Lei, Zheng Zhang and Ziheng Zhao for their involvement in the work on rapid structural analysis. The authors also wish to acknowledge the innovative works and useful discussions of the graduate students, which enriched the presentation during the repeated revisions and finalization of this book. The work in this book has been supported financially by the National Program on Key Basic Research Project of China (2004CB719402, 2010CB832705) and the National Science Fund for Distinguished Young Scholars (11202076).

Changsha, China
Xu Han
Jie Liu
Contents

1 Introduction
  1.1 Background and Significance
  1.2 Key Scientific Issues and Technical Challenges
  1.3 State-of-the-Art
    1.3.1 Theory and Methods for High-Fidelity Numerical Modeling
    1.3.2 Theory and Methods for Rapid Structural Analysis for Complex Equipment
    1.3.3 Theory and Methods for Efficient Structural Optimization Design
    1.3.4 Theory and Methods for Uncertainty Analysis and Reliability Design
  1.4 Contents of This Book
  References

2 Introduction to High-Fidelity Numerical Simulation Modeling Methods
  2.1 Engineering Background and Significance
  2.2 Modeling Based on Computational Inverse Techniques
  References

3 Computational Inverse Techniques
  3.1 Introduction
  3.2 Sensitivity Analysis Methods
    3.2.1 Local and Global Sensitivity Analysis
    3.2.2 Direct Integral-Based GSA Method
    3.2.3 Numerical Examples
    3.2.4 Engineering Application: Global Sensitivity Analysis of Vehicle Roof Structure
  3.3 Regularization Methods for Ill-Posed Problem
    3.3.1 Ill-Posedness Analysis
    3.3.2 Regularization Methods
    3.3.3 Selection of Regularization Parameter
    3.3.4 Application of Regularization Method to Model Parameter Identification
  3.4 Computational Inverse Algorithms
    3.4.1 Gradient Iteration-Based Computational Inverse Algorithm
    3.4.2 Intelligent Evolutionary-Based Computational Inverse Algorithm
    3.4.3 Hybrid Inverse Algorithm
  3.5 Conclusions
  References

4 Computational Inverse for Modeling Parameters
  4.1 Introduction
  4.2 Identification of Model Characteristic Parameters
    4.2.1 Material Parameter Identification for Stamping Plate
    4.2.2 Dynamic Constitutive Parameter Identification for Concrete Material
  4.3 Identification of Model Environment Parameters
    4.3.1 Dynamic Load Identification for Cylinder Structure
    4.3.2 Vehicle Crash Condition Identification
  4.4 Conclusions
  References

5 Introduction to Rapid Structural Analysis
  5.1 Engineering Background and Significance
  5.2 Surrogate Model Methods
  5.3 Model Order Reduction Methods
  References

6 Rapid Structural Analysis Based on Surrogate Models
  6.1 Introduction
  6.2 Polynomial Response Surface Based on Structural Selection Technique
    6.2.1 Polynomial Structure Selection Based on Error Reduction Ratio
    6.2.2 Numerical Example
    6.2.3 Engineering Application: Nonlinear Output Force Modeling for Hydro-Pneumatic Suspension
  6.3 Surrogate Model Based on Adaptive Radial Basis Function
    6.3.1 Selection of Sample and Testing Points
    6.3.2 Optimization of the Shape Parameters
    6.3.3 RBF Model Updating Procedure
    6.3.4 Numerical Examples
    6.3.5 Engineering Application: Surrogate Model Construction for Crashworthiness of Thin-Walled Beam Structure
  6.4 High Dimensional Model Representation
    6.4.1 Improved HDMR
    6.4.2 Analysis of Calculation Efficiency
    6.4.3 Numerical Example
  6.5 Conclusions
  References

7 Rapid Structural Analysis Based on Reduced Basis Method
  7.1 Introduction
  7.2 The RBM for Rapid Analysis of Structural Static Responses
    7.2.1 The Flow of Rapid Calculation Based on RBM
    7.2.2 Construction of the Reduced Basis Space
    7.2.3 Engineering Application: Rapid Analysis of Cab Structure
  7.3 The RBM for Rapid Analysis of Structural Dynamic Responses
    7.3.1 Parameterized Description of Structural Dynamics
    7.3.2 Construction of the Reduced Basis Space Based on Time Domain Integration
    7.3.3 Projection Reduction Based on Least Squares
    7.3.4 Numerical Example
  7.4 Conclusions
  References

8 Introduction to Multi-objective Optimization Design
  8.1 Characteristics of Multi-objective Optimization
  8.2 Optimal Solution Set in Multi-objective Optimization
  8.3 Multi-objective Optimization Methods
    8.3.1 Preference-Based Methods
    8.3.2 Generating Methods Based on Evolutionary Algorithms
  References

9 Micro Multi-objective Genetic Algorithm
  9.1 Introduction
  9.2 Procedure of μMOGA
  9.3 Implementation Techniques of μMOGA
    9.3.1 Non-dominated Sorting
    9.3.2 Population Diversity Preservation Strategies
    9.3.3 Elite Individual Preserving Mechanism
  9.4 Algorithm Performance Evaluation
    9.4.1 Numerical Examples
    9.4.2 Engineering Testing Example
  9.5 Engineering Applications
    9.5.1 Optimization Design of Guide Mechanism of Vehicle Suspension
    9.5.2 Optimization Design of Variable Blank Holder Force in Sheet Metal Forming
  9.6 Conclusions
  References

10 Multi-objective Optimization Design Based on Surrogate Models
  10.1 Introduction
  10.2 Multi-objective Optimization Algorithm Based on Intelligent Sampling
    10.2.1 Intelligent Sampling Technology
    10.2.2 Convergence Criteria
    10.2.3 Procedure of IS-μMOGA
    10.2.4 Performance Tests
    10.2.5 Engineering Application: Multi-objective Optimization Design of Commercial Vehicle Cab Structure
  10.3 Multi-objective Optimization Algorithm Based on Sequential Surrogate Model
    10.3.1 Multi-objective Trust Region Model Management
    10.3.2 Sample Inheriting Strategy
    10.3.3 Computational Procedure
    10.3.4 Performance Test
    10.3.5 Engineering Application: Multi-objective Optimization Design of the Door Structure of a Minibus
  10.4 Conclusions
  References

11 Introduction to Uncertain Optimization Design
  11.1 Stochastic Programming and Fuzzy Programming
  11.2 Interval Optimization
  References

12 Uncertain Optimization Design Based on Interval Structure Analysis
  12.1 Introduction
  12.2 The General Form of Nonlinear Interval Optimization
  12.3 Interval Optimization Model
    12.3.1 Interval Order Relation and Transformation of Uncertain Objective Function
    12.3.2 Interval Possibility Degree and Transformation of Uncertain Constraints
    12.3.3 Deterministic Optimization
  12.4 Interval Structure Analysis Method
  12.5 Nonlinear Interval Optimization Algorithm Based on Interval Structure Analysis
  12.6 Engineering Applications
    12.6.1 Uncertain Optimization Design of Vehicle Frame Structure
    12.6.2 Uncertain Optimization Design of Occupant Restraint System
  12.7 Conclusions
  References

13 Interval Optimization Design Based on Surrogate Models
  13.1 Introduction
  13.2 Interval Optimization Algorithm Based on Surrogate Model Management Strategy
    13.2.1 Approximate Modeling for Uncertain Optimization
    13.2.2 Design Space Updating
    13.2.3 Calculation of the Actual Penalty Function
    13.2.4 Algorithm Flow
    13.2.5 Engineering Application: Uncertain Optimization for Grinder Spindle
  13.3 Interval Optimization Algorithm with Local-Densifying Surrogate Model
    13.3.1 Approximate Uncertain Optimization Modeling
    13.3.2 Algorithm Flow
    13.3.3 Engineering Application: Crashworthiness Design on a Thin-Walled Beam of a Vehicle Body
  13.4 Conclusions
  References
Chapter 1
Introduction
1.1 Background and Significance

In the current development trend of informatization, digitalization, and intelligentization, different disciplines are comprehensively intersecting. Digital technologies permeate, to an unprecedented extent, the advanced design and manufacturing technologies and facilitate their realization, which leads to significant progress of the manufacturing industry accompanied by both great opportunities and challenges. The profound integration of information technology into manufacturing technology enables the digitalization of mechanical equipment to accommodate the massive data comprehended in the full lifecycle. Complex modeling, simulation, analysis, and data mining in each individual stage of the lifecycle, including concept initiation, design, processing, manufacturing, assembly, testing, service, maintenance, scrap, and recycling, together with the transformation from simulation data to higher-level information and knowledge, will greatly promote the intelligentization of industrial products and boost the innovative research and development (R&D) of high-tech equipment. The objective of the advanced design and manufacturing technology, based on numerical simulation, digitalization, and informatization of equipment, is to integrate the numerical models, design methods, computing tools, data sets, and so on into an advanced manufacturing process, and to establish the scientific computation foundation for the physical mechanism analysis, processing and its control, performance prediction, and optimization of complex equipment. Simulation-based engineering science (SBES) [1, 2] is the linchpin in the field of complex equipment design and manufacturing, and has made a significant impact on the development of the manufacturing industry globally. Nevertheless, the general application of simulation technology to mechanical equipment design and manufacturing is yet to be realized and advanced. Currently, the key technologies of simulation-based design and manufacturing are still under development. Specifically, advanced design technology that considers the full lifecycle of products and a professional digitalized design platform are still wanting. In this regard, this book is mainly dedicated to the discussion of the common technologies in simulation-based design.
Based on these advanced common technologies, highly efficient scientific computation can be applied to the complex model data in the design process to realize the transformation from design data to design information and knowledge, and to further improve the design precision, efficiency, functionality, and reliability of mechanical equipment. In the field of scientific research and engineering, numerical simulation is an important approach that carries equal weight with theoretical analysis and experimental validation; it is the most powerful analysis tool for complex physical and engineering problems. The advanced simulation-based design technologies comprehensively integrate the disciplines of mechanical engineering, mechanics, materials, computer science, and physics for the digitalization of equipment, and have become an important means for the independent R&D and innovative design of equipment. Numerical simulation, as a bridge between the basic data and the service performance of equipment, has significant advantages in the reproducibility of model development, the controllability of the development process, and the predictability of equipment performance. By applying numerical simulation technology at the early stage of equipment design, designers can take into consideration the information, knowledge, and scientific laws of the manufacturing and service stages to improve the digitalized design level of equipment. At present, numerical simulation technology has been widely applied in various fields of science and engineering. Figure 1.1 shows examples of its applications in the design of ships, automobiles, aircraft, wind turbines, and other complex equipment. Numerical simulation that considers the full lifecycle of equipment acts as a bridge for the transformation from data integration to knowledge integration in equipment design. It is applied to optimize the design process, reduce the development cycle, and thus substantially enhance the design efficiency and performance of equipment. Successful applications of numerical simulation-based design in industry have been reported. For instance, the Boeing 777 aircraft was the world's first fully digitalized large engineering project, in which each individual component was represented by a respective three-dimensional digital model. The utilization of virtual assembly technology reduced the design modifications and rework rate by more than 50%, eliminated 50–80% of the assembly problems, and shortened the R&D cycle by 40%. It was a pioneering showcase of numerical simulation-based design and manufacturing in the aviation field.
Fig. 1.1 Applications of the numerical simulation technology in advanced designs
Another example is the famous Joint Strike Fighter project, which adopted numerical simulation technology throughout the full lifecycle of design and service of the aircraft. Compared with the traditional methods, the aircraft design time with the simulation-based technologies was reduced by 50%, the manufacturing time was decreased by 66%, the assembly time was shortened by 90%, the total number of components was reduced by 50%, and the maintenance cost was cut by 50%. This project significantly promoted the development of numerical simulation-based design and manufacturing. Nowadays, numerical simulation technology is widely used in design and manufacturing by the world-leading automobile enterprises, bringing benefits in quality improvement and in the reduction of development cycle and cost. The specific beneficial effects are tabulated in Table 1.1.

Table 1.1 Benefits of simulation technology in the automobile industry

Benefit: Improvement in the reusability of the best knowledge
Reflection of the beneficial effect: In the simulation environment, the identified best knowledge can be easily shared and reused in product design, process planning, and production line design.
Case: General Motors reused 80% of the digital models of engine production equipment, which substantially saved research cycle and cost.

Benefit: Development of concurrent engineering
Reflection of the beneficial effect: In numerical simulation, many practically sequential jobs from design to the production line can be implemented in parallel, which efficiently accelerates product development.
Case: Toyota Corporation employed a concurrent digitalizing project and reduced the overall development time from design to production by two-thirds.

Benefit: Innovation of design and process plan
Reflection of the beneficial effect: Owing to the low cost of adjusting numerical simulation models, the design performance and manufacturing process can be continuously modified and verified in the virtual environment.
Case: DaimlerChrysler conducted process simulation and optimization design before automobile assembly, which saved operation time and made the process almost 100% accurate.

Benefit: Optimization design for manufacturing and assembly process
Reflection of the beneficial effect: Numerical simulation improves production reliability and manufacturing quality by identifying in advance the potential flaws in manufacturing and assembly and conducting the optimization design.
Case: At Ford Motor, through simulation-based optimization, the manufacturing of the clutch was realized with a single fixture, which reduced the manufacturing cost by 10% and enhanced the planning efficiency by 18%.
For the development of special equipment operating in extreme environments, the significance of numerical simulation technology is even more prominent, indeed indispensable. The structures of such special equipment can be very complex, their functions and service conditions challenge the existing technical limits, and the demands on operational safety and reliability tend to be more stringent. The present state and future trend of the development of complex special equipment have brought a series of new technical challenges to numerical simulation technology. The fundamental theory, methods, and technical approaches required to improve the processing capacity of ultra-large and ultra-small scale equipment, to ensure the working stability of special equipment under extreme service conditions, or to realize high reliability and safety over the full lifecycle are all beyond the capacities of conventional design, manufacturing, and standards. Fortunately, the advanced numerical simulation technology built on scientific computation is expected to offer a powerful analysis tool and guidance for resolving the above problems [3, 4]. For instance, for extraterrestrial landing exploration in extreme environments, owing to the limitation of experimental conditions and the insufficiency of theoretical understanding of complex physical processes, numerical simulation technology will play a prominent role in the development and service of the flight vehicles. In general, the advantages of the advanced simulation-based design can be summarized in the following aspects.

(1) It can realize the accurate quantitative analysis of complex equipment under extreme service conditions.
(2) The comprehensive performance of products during the manufacturing and service processes can be taken into full account at the initial design stage.
(3) The trial-manufacturing and testing effort of physical prototypes and the product design cycle can be greatly reduced.
(4) The bottleneck problems of design quality, efficiency, and others in the traditional sequential design can be effectively overcome.
(5) The digitalized, parallelized, intelligentized, and optimized designs of products in complex multi-disciplinary environments can be realized.

However, at the present time, a number of technical problems remain in the numerical modeling and optimization design of complex high-end equipment. Hence, it is necessary to develop the common technologies associated with advanced design, such as the frontier computation theory, numerical simulation, optimization design, and uncertainty quantification methods, to promote the general application of numerical simulation technology in mechanical equipment design and development, and to boost innovative design in the manufacturing industry.
1.2 Key Scientific Issues and Technical Challenges

Modern complex equipment is a comprehensive system with multi-physical processes, multi-modular structures, and multi-functional characteristics. The structure usually has the characteristics of extremely small or large scale, coupled components, various materials, extreme service conditions, multi-source uncertainties, and multi-objective design requirements. As a result, there exist a series of technical and
practical challenges in modeling, analysis, and optimization. In theory, the behavior or performance of the equipment is predictable and controllable over the full lifecycle, but this is difficult to realize in practice at the current scientific and technical level. In the modeling process, complex physical or engineering problems need to be highly abstracted and simplified into mathematical expressions, and epistemic limitations make it inadequate to establish a full lifecycle model of real equipment. The proficiency of simulation-based design can be further impaired by the practically inevitable errors in computation models and methods, ill-formed expressions of knowledge and experience information, and the imperfection of key control parameters and model libraries. Without localized advanced computing modules and the key material and control parameters, the simulation and design of complex high-end equipment remain out of reach. In the following, several industrial R&D cases in which the authors participated are addressed to identify specifically the challenges in the numerical simulation of complex equipment and the resulting limitations of modern design methods in practical equipment development. As shown in Fig. 1.2, if a linear simplification is adopted in modeling the stiffness and damping curves of an oil-gas suspension system, the simulation and design of the dynamic characteristics of vehicles become ineffective. Without a reliable load spectrum, a large deviation may arise between the numerical simulation and the actual experimental observation of the on-road driving performance of vehicles. In the absence of proper knowledge of the time-varying stiffness and friction damping parameters of the gear teeth, the design of high-speed and heavy-load gears may fail to meet the practical requirements. Lacking an adequate human body database and an accurate human body model, the developed dummy models may show significant unfitness when applied to the passive safety design of automobiles. Therefore, the adaptation of numerical simulation techniques and the appropriate application of modern design theory and technologies to improve the R&D capability and innovation level of products have become a general concern in the equipment manufacturing industry.
Fig. 1.2 Influence of the accurate numerical simulation and the key modeling parameters on the design [5–7]. a Nonlinear properties of oil-gas suspension system, b time-varying stiffness and friction of high-speed and heavy-load gear, c dummy model for simulation of vehicle collision
In simple summary, a number of scientific issues and technical challenges, including the precision of the simulation model, the calculation efficiency under high-dimensional design variables, the balance among multiple objectives, and the uncertainties in structural design, should be addressed properly before the design of complex equipment based on numerical simulation technology can be realized [1, 8]. Elaborations on these issues and challenges are given below.

(1) Design precision. A high-fidelity numerical simulation model, bridging the real equipment and the design performance, is the basic premise and foundation for realizing high-quality equipment design. With the increasing demand on the precision of numerical models in modern manufacturing, the modeling process requires more and more accurate parameters of materials, processes, structures, and service environments. However, the traditional theoretical analyses and testing methods are inadequate to determine the key parameters of the model, resulting in imprecision of the equipment design.

(2) Design efficiency. When the numerical simulation method is applied to analyze uncertainty and implement optimization design for complex equipment, it more often than not involves high-dimensional design variables, strong nonlinearity, and highly intensive and time-consuming computation. Even the rapid development of computer hardware technology does not meet the increasing demand for massive and repeated numerical simulations, which leads to inefficiency in the design of complex structures. Therefore, it is imperative to develop efficient structural analysis technology to improve the design efficiency and shorten the design cycle so as to respond promptly to market demand.

(3) Requirements of multi-objective design. The simulation-based design of complex products usually involves high-dimensional design variables, multi-functional requirements, and multiple constraint conditions in addition to multi-disciplinary integration. Therefore, it is necessary to consider multiple performance indicators of functionality, manufacturability, and economic benefit in equipment design. The development of multi-objective optimization design technology to meet the functional requirements of complex equipment has practical significance for achieving the best balance among the multiple design objectives and effectively improving the comprehensive performance of complex equipment.

(4) Design reliability. The complex structures, processes, and operating conditions unavoidably involve coupled uncertainties in the material properties, geometrical characteristics, boundary conditions, initial conditions, measurement errors, etc. Owing to the high cost and testing difficulties of complex equipment, reliable experimental data are generally lacking. With highly inadequate sampling data, quantifying the uncertainties appropriately and performing reasonably accurate analysis for complex equipment design are significant for ensuring the reliable design of equipment in complex environments.

(5) Lack of modeling specifications and standards. In the process of equipment design, the numerical simulation models encompass different disciplinary fields,
while there is no common basis of professional specifications and standards. Specifically, modeling specifications covering mesh generation, element selection, boundary definition, constitutive model, solver selection, convergence criterion, and so on are not available, so the consistency of the simulation results for the same type of equipment cannot be guaranteed. Thus, design specifications and standards should be established to reduce the influence of human factors on the accuracy of numerical simulation, and to ensure the consistency of analysis and design as well as the transferability of the development of the same type of equipment.
1.3 State-of-the-Art

Concerning the aforementioned key scientific and technical inadequacies and challenges in complex equipment and structure design, intensive research with significant progress has been reported in recent years. A series of modern advanced design methods, encompassing mechanisms, models, and algorithms from the fields of physics, mechanics, materials science, etc., has been achieved. The research progress and current status of several frontier issues are summarized below, and the details will be presented in the subsequent chapters.
1.3.1 Theory and Methods for High-Fidelity Numerical Modeling

Numerical modeling and high-fidelity simulation technology play an increasingly important role in the innovative design of industrial products and in the independent R&D of high-tech equipment. In the 1990s, the United States identified numerical simulation technology as a key driving force of its science and technology development strategy; simulation technology featured prominently in the programs of 21 key technologies and 7 key projects. In 2006, the U.S. Congress approved the advancement of numerical simulation technology as a 'National Key Technology'. In 2009, the American Competitiveness Council released a white paper on the American manufacturing industry highlighting the importance of numerical modeling and simulation technology and signifying modeling and simulation as essential to maintaining global leadership. Under such strong promotion, concurrent with the driving force of the increasing application of numerical simulation in complex engineering, simulation model modification and validation techniques have developed rapidly. The existing methods can be classified into the structural dynamic model modification technology and the model verification and validation (V&V) technology [9, 10]. These two major categories of methodologies are addressed in depth below.
The structural dynamic model modification is developed to improve the simulation precision of numerical models. Based on combined physical experimental testing and optimization technology, it modifies the mass, stiffness, and damping matrices of the structure in order to narrow the gap between the experimentally measured responses and the model-predicted responses and to ensure the reliability of the numerical simulation. Currently, the prevailing methods of dynamic model modification can basically be divided into model modification based on modal parameters and model modification based on the frequency response function. The model modification method based on modal parameters is relatively mature and has been widely applied in industry. Its key techniques include the model reduction and expansion methods, the eigenvalue and eigenvector sensitivity calculation, and the regularization-based model parameter identification, which are used in the correlation analysis between the computational model and the experimental model. The model modification method based on the frequency response function can reduce the error caused by modal identification; however, this modification approach is rather complicated and intractable, and the resulting error is relatively large around the resonance peaks due to the inaccurate representation of damping and noise. Although the structural dynamic model modification technology has achieved great progress, the current model modification is principally established within the framework of deterministic analysis in the context of linear structures at low frequency. For equipment involving strong nonlinearity, large deformation, multi-field coupling, and so on, more sophisticated model modification methods are yet to be developed. With the general tendency of transition from deterministic model modification to statistical model modification, the model V&V technology was proposed and promoted in the 1990s. It is an expansion of the dynamic model modification technology, and the latter can rather be considered a special case of the model V&V technology. The three national laboratories affiliated with the U.S. Department of Energy, Los Alamos National Laboratory (LANL), Sandia National Laboratories (SNL), and Lawrence Livermore National Laboratory (LLNL), introduced the V&V technology into the Accelerated Strategic Computing Initiative around 1998, which was subsequently renamed the Advanced Simulation and Computing (ASC) Program. In 2006, LLNL published a white paper on the model V&V of the ASC program [11], in which several key issues and perspectives were addressed with regard to the verification and validation of numerical simulation models, uncertainty quantification, etc. Subsequently, the American Institute of Aeronautics and Astronautics (AIAA) issued a V&V tutorial for fluid mechanics models, and SNL developed a validation framework for computational fluid mechanics models. The American Society of Mechanical Engineers (ASME) published a V&V tutorial for computational solid mechanics in 2006. To promote the research, development, and application of the V&V technology in the numerical simulation of complex engineering, V&V technology conferences have been held almost every year around the world. Generally, the key issues of model V&V
are the proper representation, measurement, propagation, and management of uncertainty. Intensive research is still required on many critical issues, probably because the relevant theoretical research history is relatively short.
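To make the model-updating idea described above more concrete, the following sketch is a minimal illustration rather than an excerpt from this book's methods: it tunes a single stiffness scaling factor of a two-degree-of-freedom spring-mass model so that the predicted natural frequencies match a set of assumed "measured" frequencies in a least-squares sense. All numerical values and the use of SciPy's least_squares routine are illustrative assumptions.

```python
# Minimal sketch of dynamic model updating: tune a stiffness scaling factor
# so that predicted natural frequencies approach "measured" ones (hypothetical data).
import numpy as np
from scipy.linalg import eigh
from scipy.optimize import least_squares

M = np.diag([1.0, 1.5])                      # mass matrix (kg), assumed known

def stiffness(alpha):
    """Nominal stiffness matrix scaled by the unknown factor alpha."""
    k1, k2 = 4.0e4, 2.0e4                    # nominal spring constants (N/m), assumed
    K = np.array([[k1 + k2, -k2],
                  [-k2,      k2]])
    return alpha * K

def natural_freqs(alpha):
    """Natural frequencies (Hz) of the 2-DOF model for a given alpha."""
    w2 = eigh(stiffness(alpha), M, eigvals_only=True)
    return np.sqrt(w2) / (2.0 * np.pi)

f_measured = np.array([18.0, 42.0])          # "experimental" frequencies (Hz), made up

def residual(p):
    return natural_freqs(p[0]) - f_measured

sol = least_squares(residual, x0=[1.0], bounds=(0.1, 10.0))
print("identified stiffness factor:", sol.x[0])
print("updated frequencies (Hz):", natural_freqs(sol.x[0]))
```

In practice the updated parameters would enter a full finite element model, and regularization would be added when many parameters are identified from limited measurements.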
1.3.2 Theory and Methods for Rapid Structural Analysis for Complex Equipment

As scientific and engineering problems become more and more comprehensive, the complexity of the computational model grows exponentially. Research on advanced rapid analysis technology for improving the computational efficiency of large-scale problems has become a mainstream research direction in the field of modern design. Nowadays, an increasing number of researchers are involved in this field, invaluable theoretical and technical achievements have been reported, and some of the technologies have been commercialized for general applications. These methods are outlined as follows.

(1) Variation method. This method is based on the finite element method (FEM) and Taylor expansion, and is currently only applicable to static analysis with parameters varying in a small region.

(2) Model order reduction method. In this method, the system in a high-dimensional state space is projected onto a lower-dimensional state space to realize rapid computation. The effective implementation lies in appropriately preserving the physical properties and structural characteristics of the original system while reducing the degrees of freedom.

(3) Parallel computing technology. It adopts multiple processors to solve a problem cooperatively: the computation procedure is decomposed into several parts and each part is processed by a separate processor. Compared with a single processor, parallel computing expands the computational capacity for large-scale problems, but at a high cost because more extensive computer hardware resources are required.

(4) Surrogate model method. It adopts polynomial, radial basis function, Kriging, and/or other mathematical models to reconstruct an efficient mapping relationship between the structural response and the design parameters, so as to achieve real-time calculation and analysis of structures. It is one of the prevailing methods in the optimization design of complex structures; a brief illustrative sketch of such a surrogate fit is given after this list.

(5) Reduced basis method. It is a kind of real-time computational method developed in recent years. The basic principle is to project the original large system onto an orthonormalized reduced space constructed from a series of system solutions in the parametric sample space; a minimal sketch of this projection idea is likewise given after this list. By using the constructed reduced space, the computing and storage costs of the response calculation for a new parameter can be greatly reduced. The reduced basis method has been successfully applied to statics and elastodynamics problems, and it has been extended to nonlinear structural analysis, fluid analysis, nonlinear
steady-state thermal analysis, and other problems. It promises to develop into a practical tool that is generally available for the rapid design of complex equipment.
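As a minimal illustration of the surrogate model idea in item (4) above, the following Python sketch fits a Gaussian radial basis function interpolant to a few samples of a synthetic "expensive" response; the response function, sample plan, and shape parameter are all assumed for illustration and are not taken from Chap. 6.

```python
# Minimal surrogate model sketch: fit a radial basis function (RBF) interpolant
# to a handful of samples of an expensive response, then evaluate the cheap
# surrogate instead of the full model. The "expensive" response is a stand-in.
import numpy as np

def expensive_response(x):
    # placeholder for a costly simulation (e.g., a finite element run)
    return np.sin(3.0 * x) + 0.5 * x**2

x_s = np.linspace(0.0, 2.0, 7)                # design-of-experiments sample points
y_s = expensive_response(x_s)

c = 0.8                                       # Gaussian RBF shape parameter (assumed)
phi = lambda r: np.exp(-(r / c) ** 2)

# Solve the interpolation system Phi w = y for the RBF weights.
Phi = phi(np.abs(x_s[:, None] - x_s[None, :]))
w = np.linalg.solve(Phi, y_s)

def surrogate(x):
    return phi(np.abs(x - x_s)) @ w

x_new = 1.37
print("surrogate:", surrogate(x_new), " true:", expensive_response(x_new))
```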
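Similarly, the projection principle behind the reduced basis method in item (5) can be sketched as follows. The parameterized matrices are synthetic and the snapshot parameters are chosen arbitrarily, so this is only an illustration of the offline/online split under assumed data, not the formulation developed in Chap. 7.

```python
# Illustrative reduced basis sketch: build an orthonormal basis from snapshot
# solutions of a parameterized system K(mu) u = f, then solve new parameter
# values in the small reduced space. All matrices and parameters are synthetic.
import numpy as np

n = 200
K0 = (np.diag(2.0 * np.ones(n))
      - np.diag(np.ones(n - 1), 1)
      - np.diag(np.ones(n - 1), -1))          # baseline stiffness-like matrix
K1 = np.diag(np.linspace(0.5, 1.5, n))        # parameter-dependent contribution
f = np.ones(n)

def K(mu):
    return K0 + mu * K1                       # affine parameter dependence (assumed)

# Offline stage: full-order snapshots at a few sample parameters -> reduced space.
mu_samples = [0.1, 0.5, 1.0, 2.0]
snapshots = np.column_stack([np.linalg.solve(K(mu), f) for mu in mu_samples])
V, _ = np.linalg.qr(snapshots)                # orthonormalized reduced basis (n x 4)

# Online stage: a cheap 4x4 solve for a new parameter value.
mu_new = 1.3
u_rb = V @ np.linalg.solve(V.T @ K(mu_new) @ V, V.T @ f)

u_full = np.linalg.solve(K(mu_new), f)        # full-order reference for comparison
print("relative error:", np.linalg.norm(u_rb - u_full) / np.linalg.norm(u_full))
```

The offline snapshot generation is expensive but performed once; every subsequent parameter query costs only a small dense solve, which is the source of the efficiency gain described above.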
1.3.3 Theory and Methods for Efficient Structural Optimization Design

Over the past two decades, structural optimization design, aimed at improving structural performance and increasing economic benefit, has achieved remarkable progress in theory and methods and has been widely used in the design of complex equipment. From the present situation and development trend of structural optimization, multi-disciplinary and multi-objective structural optimization design and structural topology optimization design are still of intensive research interest. Specifically, in recent years, multi-disciplinary optimization research has mainly focused on optimization strategies complementary to the available decoupling strategies for engineering applications, such as hierarchical optimization, concurrent subspace optimization, and collaborative optimization. The multi-disciplinary optimization of complex systems based on surrogate models with different fidelities is another research highlight. In the last decade, the developed and improved optimization algorithms have been serving as efficient numerical tools for solving practical engineering problems. On the other hand, for optimization problems with different mathematical and physical characteristics, algorithms with stable convergence based on successive approximation techniques for large-scale problems are still under development before the general application of structural and multi-disciplinary optimization methods can be achieved. In structural multi-objective optimization, there are mainly two kinds of approaches, namely, the preference-based method and the generating method based on evolutionary algorithms. The preference-based method, such as the linear weighting method, assumes that the preference information of each objective function is known and can be specifically represented; the vector objective is then converted into a scalar objective according to the preference information. The generating method based on evolutionary algorithms, such as the multi-objective genetic algorithm, does not assume preference information for each objective. Instead, it solves for the non-dominated optimal solution set and then selects the optimal compromise solution from the obtained set. This approach has become the most prevailing one in structural multi-objective optimization; a small illustrative sketch contrasting the two approaches is given at the end of this subsection. Topology optimization has been a focus of research interest in the field of structural optimization for many years. In principle, topology optimization methods can be divided into two categories, degenerate methods and evolutionary methods, including the level set method, homogenization method, variable density method, variable thickness method, evolutionary structural optimization method, etc. Topology optimization
has a broad prospect in engineering applications, such as material-structure integrative functional design. Apart from research on linear elastic structures, research has also been extended to cover the geometrical and physical nonlinearity of structures, dynamic optimization problems, and the coupling of multi-physical fields.
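To illustrate the difference between the preference-based and the generating approaches mentioned above, the following toy sketch minimizes two competing objectives of one variable: a linear weighting collapses them into a single scalar objective, whereas a non-dominated filter retains the whole set of Pareto-optimal candidates. The problem and weights are invented for illustration and do not come from Chaps. 8-10.

```python
# Toy bi-objective problem: minimize f1(x) = x^2 and f2(x) = (x - 2)^2 on [0, 2].
import numpy as np

xs = np.linspace(0.0, 2.0, 41)
F = np.column_stack([xs**2, (xs - 2.0)**2])   # objective values for each candidate

# Preference-based approach: linear weighting converts the vector objective
# into a scalar one; the chosen weights encode the designer's preference.
w = np.array([0.7, 0.3])
x_pref = xs[np.argmin(F @ w)]

# Generating approach: keep the non-dominated (Pareto-optimal) candidates and
# let the designer pick a compromise solution afterwards.
def non_dominated(F):
    keep = []
    for i, fi in enumerate(F):
        dominated = any(np.all(fj <= fi) and np.any(fj < fi)
                        for j, fj in enumerate(F) if j != i)
        if not dominated:
            keep.append(i)
    return np.array(keep)

pareto_idx = non_dominated(F)
print("weighted-sum solution x =", x_pref)
print("number of Pareto-optimal candidates:", len(pareto_idx))
```

For this toy problem every candidate in [0, 2] is Pareto optimal, which makes the contrast visible: the weighted sum returns a single point determined by the weights, while the generating approach exposes the whole trade-off front.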
1.3.4 Theory and Methods for Uncertainty Analysis and Reliability Design

Uncertainty modeling is the basis of uncertain design for structures and equipment. The main mathematical model representing uncertainty is the probability model. However, the complete probability distribution is not readily available for the design of complex structures and equipment. In view of insufficient samples or incomplete information, an approximate probability model may be constructed using classical methods such as maximum entropy theory. On the other hand, non-probabilistic models, such as the fuzzy set, non-probabilistic convex set, random set, and evidence theory, are also alternatives for uncertainty modeling. Reliability design based on non-probabilistic convex set or random set models under the condition of insufficient samples has therefore received extensive attention. However, it is still at a preliminary stage of development, and some issues in engineering applications remain to be addressed appropriately, such as multi-physical field coupling, reliability analysis and optimization of the multi-disciplinary performance of structures, and the uncertainty modeling of macroscopic structures of functional materials with specific microstructural features. In view of this, different uncertainty modeling theories, hybrid probability and non-probability models, and effective uncertainty analysis, propagation, and design methods are being explored for the dynamic and multiple-source uncertainties of complex equipment. Structural design in an uncertain environment mainly involves structural reliability optimization design and robust optimization design. The technical challenges lie in overcoming the low efficiency of designs with multilayer nested optimization and of complex structures with many variables and multiple reliability constraints. In recent years, a variety of decoupling strategies for nested optimization have been developed; however, their convergence and precision are not yet guaranteed for complex structure design. In simple summary, structural optimization under epistemic uncertainties described by the probability and non-probability hybrid model or by evidence theory has become a challenging research focus. A small sketch illustrating the interval, non-probabilistic description of uncertainty is given below.
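As a small, assumed example of the interval (non-probabilistic) description of uncertainty mentioned above, the sketch below bounds the tip deflection of a cantilever beam whose load and Young's modulus are known only within intervals; because the response is monotonic in both parameters, evaluating the interval vertices is sufficient here. The numerical values are hypothetical, and this is not the interval analysis method developed in Chaps. 11-13.

```python
# Interval description of uncertainty: bound the tip deflection of a cantilever
# beam when the load P and Young's modulus E are only known within intervals.
# Values are synthetic; the response is monotonic in P and E, so checking the
# interval vertices gives the exact bounds in this simple case.
from itertools import product

L, I = 1.0, 8.0e-7                 # beam length (m) and second moment of area (m^4)
P_int = (900.0, 1100.0)            # load interval (N)
E_int = (1.9e11, 2.1e11)           # Young's modulus interval (Pa)

def tip_deflection(P, E):
    return P * L**3 / (3.0 * E * I)

values = [tip_deflection(P, E) for P, E in product(P_int, E_int)]
print("deflection interval (m): [%.4e, %.4e]" % (min(values), max(values)))
```

For non-monotonic responses or many interval variables, such vertex checks are no longer sufficient, which is exactly where the interval structure analysis and surrogate-assisted interval optimization methods of the later chapters come in.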
1.4 Contents of This Book

Review of the state of the art reveals that numerical simulation technology is still not sophisticated enough to meet the requirements of precision, efficiency, functionality, and reliability of equipment design in complex manufacturing environments. The main obstacles include (1) how to build a high-fidelity simulation model to ensure the design precision, (2) how to realize fast simulation and highly efficient process design under high-dimensional design variables and strong nonlinearity, (3) how to carry out comprehensive optimization design under multiple objective functions and coupled multiple constraints, and (4) how to realize the reliability evaluation and design of equipment in complex uncertain environments. To overcome these obstacles, this book provides a systematic approach to the key common technologies. The advanced design theory and methodologies are introduced, covering the topics of computational inverse modeling, rapid structural analysis, optimization design, and uncertainty quantification. The aim is to improve the simulation-based design with regard to accuracy, efficiency, multi-functionality, and reliability, and thus to enhance the capability for the innovative design of mechanical equipment. The contents of this book are divided into four parts, and its structure is illustrated in Fig. 1.3.

(1) The first part elaborates on high-fidelity numerical modeling in Chaps. 2, 3 and 4, which addresses the design precision problem of equipment. The inverse problem theory and computational inverse methods are introduced into the high-fidelity digital modeling of complex equipment. Several inverse methods are discussed in particular, including the global sensitivity analysis method for modeling parameters, the regularization method for overcoming ill-posed problems, and highly efficient computational inverse algorithms. These methods are potential solutions to the technical problems in which the key modeling parameters under manufacturing and service conditions are not accessible to conventional testing technology. They serve as substitutes for model V&V and model calibration and can derive accurate parameters for the high-fidelity modeling of complex equipment to improve the precision of the numerical model and the design quality.

(2) The second part describes rapid structural analysis methods in Chaps. 5, 6 and 7. Specifically, the design efficiency problem of equipment is addressed with a highlight on methods for improving computing efficiency. Two types of rapid computational methods for structural design are discussed: rapid structural analysis based on surrogate models and rapid structural analysis based on reduced basis methods. They make the analysis and design of complex equipment efficient by reducing the computational intensity in the design and verification of equipment and shortening the equipment design cycle.

(3) The third part addresses multi-objective optimization design in Chaps. 8, 9 and 10, expositing the multi-functional design requirements of equipment. The high-performance multi-objective optimization theory and methods for practical engineering requirements are discussed, including the micro multi-objective genetic algorithm and the multi-objective optimization design method based on surrogate models.
Fig. 1.3 Main contents and structure of this book
genetic algorithm and the multi-objective optimization design method based on surrogate models. The purpose is to promote the engineering practicability of the optimization design methods, to ensure the uniformity and completeness of the solution set of the multi-objective design, and to improve the efficiency and comprehensive performance of the optimization design for complex mechanical equipment.

(4) The fourth part introduces uncertain optimization design in Chaps. 11, 12 and 13, focusing on the design reliability problem of equipment. It is generally understood that the traditional probabilistic method requires a large amount of sample information when dealing with uncertain problems, whereas the testing cost is very high and the available data are limited. In view of this, the structural uncertainty
analysis and optimization design techniques based on interval analysis, as an advanced alternative, will be discussed in detail. The focus is on effectively solving the low-efficiency nested solution process, so as to provide an effective tool for the reliability design of complex equipment with bounded uncertain parameters and to realize the optimal design of structures under the uncertain conditions of practical engineering.

In this book, the four topics of inverse modeling, rapid analysis, structural optimization, and uncertainty evaluation generally cover the overall process of simulation-based structural design. It is worth mentioning that the four topics are mutually auxiliary and/or prerequisite to each other. Specifically, high-fidelity simulation modeling by computational inverse methods is the prerequisite and foundation for performing high-quality design. The rapid analysis techniques based on surrogate models and reduced basis methods are the fundamental analysis tools for the design of complex equipment. The high-performance multi-objective optimization design method addresses the ultimate target of the simulation-based design of equipment. And the structural uncertainty optimization design is indispensable to assure the design reliability of equipment.

The contents of this book present the fundamental and key common technologies in the field of mechanical design. These technologies play a key role in transforming simulation techniques from an auxiliary analysis tool into the leading design platform in R&D activities. There is a wide range of potential applications in the fields of high-tech machine tools, engineering machinery, special defense equipment, vehicle engineering, aviation, and aerospace, etc. It is hoped that this book can serve as a textbook for a course on numerical simulation-based design within the disciplinary curriculum of manufacturing engineering.
Chapter 2
Introduction to High-Fidelity Numerical Simulation Modeling Methods
2.1 Engineering Background and Significance

In the design and development process of mechanical equipment, the effects of different working conditions and design variables on the equipment performance should be comprehensively explored and appropriately addressed. Physical experiments on the equipment are usually very effective to shed light on the intricate mechanisms. However, physical experiments are restrained from extensive implementation due to the lack of experimental conditions, the long experimental cycle, the high financial and manpower costs, the destructiveness of some experiments, and some vague intermediate variables. On the other hand, with the rapid development of high-performance computer hardware and software, numerical simulation is generally applied to replace expensive physical experiments, provided that the simulation model represents the practical structure, the boundary, and the loading conditions. The high adaptability of the numerical simulation model enables simulations to efficiently address different working conditions and design variables. Thus, numerical simulation is prevailing in the profound and comprehensive quantitative analysis of the characteristics and kinematic mechanisms of practical structures, by analyzing, predicting, and assessing the mechanical equipment performance relatively rapidly, accurately, economically, and safely. In view of this, it is extensively applied to examine and optimize the design plan and eventually provide quantitative guidance and a basis for the final design of mechanical equipment.

As mentioned in Chap. 1, numerical simulation has become the foundation of modern design [1–3], as facilitated by the continuous development of SBES. The numerical simulation model, as the bridge and link between the practical structure and the performance design, is playing an increasingly dominant role in the R&D of mechanical equipment. On the other hand, the credibility of the numerical simulation model, which is case-specific, is the essential prerequisite to ensure the designed machinery and equipment performance. Therefore, only when the proficiency of the numerical model is verified can the advanced modern design methods be further applied to optimize the mechanical equipment to ensure its high reliability
and performance in the working process. For digitization-based modern design, the first step is to establish a high-fidelity numerical model that ensures consistency and a high degree of approximation between the numerical simulation model and the practical physical process.

With the increasing precision requirements of the numerical simulation model in modern design, the modeling process of practical mechanical equipment demands highly refined structural and mechanical parameters because of the complex material, structure, process, assembly, and service environment. However, it is a time-consuming process to determine the key parameters during the modeling of complex mechanical equipment. Due to the complexity and uncertainty of the system and the lack of prior knowledge, theoretical analysis can hardly provide all the required parameters. In experimental testing, because of the limitations of existing measurement technology, the high cost, and the nondestructive requirements on the structure during the experiment, the modeling parameters that can be extracted directly are far from satisfactory, and some key parameters simply cannot be derived by experiment. In view of the incomplete prior knowledge and limited measurement data, assumptions and simplifications of some key parameters by the designer usually lead to substantial discrepancies between the numerical simulation model and the practical structure. In this case, a certain degree of blindness may hinder the designed mechanical equipment from meeting the specified performance requirements. Worse yet, it could possibly lead to serious safety problems during service. In this chapter, specific engineering structural design problems are exposited to exemplify the difficulties in obtaining the key modeling parameters and the corresponding adverse influences on the model precision and the equipment design performance.

(1) Appropriate representation of the structural dynamic load is the linchpin for effective simulation of some dynamic problems, such as vibration isolation design and dynamic optimization. Because of the limitations and imprecision of existing measurement technology, it is infeasible to measure the dynamic load directly in some conditions, e.g., the wind load on large engineering machinery, the ice load on an offshore platform, the road excitation on a driving vehicle, and the aerodynamic load on an aircraft. In these cases, assumptions on the load forms are made based on the understanding of the relevant physics, and the reliability of the design is ensured by introducing a safety factor. For example, in the vehicle dynamic modeling and design process, the input of vehicle vibration, namely the road roughness, is divided into different levels according to specification standards, and the power spectral density functions under the different level standards are given. Specific earthquake spectra are also provided in the specifications for the seismic design of large-scale structures. However, there is a difference between the actual dynamic load and the specified standard load, which may lead to low credibility of the structural dynamic model. In this regard, the design performance could either fail to satisfy the requirement or be over-conservative. Thus, it is significant to derive a realistic representation of the dynamic load and the external environment parameters by indirect
approaches to ensure the modeling credibility and the design performance of the mechanical equipment.

(2) Vehicle crash safety design is an indispensable link in vehicle development. It aims to reduce the potential occupant injury in crash accidents while also realizing the lightweight design of the vehicle. Because of the high cost, long cycle, and complex working conditions of physical vehicle crash experiments, it is not cost-effective to comprehensively analyze the influence of the vehicle structure, geometry, material properties, and manufacturing processes on vehicle crash safety. A feasible alternative is the numerical simulation model of the vehicle crash, on which the vehicle crash safety design is developed. On the other hand, for the complex physical process of a vehicle crash, the initial impact conditions, the properties of human biomaterials, the airbag parameters, the contact parameters, and the controlling parameters for a stable numerical simulation should be appropriately derived and represented. If these parameters are not addressed reasonably, the precision of the numerical model for the vehicle crash will be seriously affected, which will further result in insufficient reliability of a series of designs, including the energy-absorbing structure, the belt and seat, the airbag, the crashworthiness of the vehicle body structure, and the protection against human injury, etc. Therefore, in the design of complex equipment based on numerical simulation technology, it is a prerequisite to accurately determine the model parameters and the numerical simulation algorithm parameters so as to ensure the numerical simulation proficiency and to improve the vehicle body design performance.

(3) Structures of aerospace equipment are becoming more and more sophisticated and, thus, complex. The performance requirements for environmental adaptability, high reliability, lightweight design, etc. are highly demanding. In order to make aerospace equipment work appropriately in orbit while keeping the design cycle and cost acceptable, numerical simulation modeling and structural optimization design are becoming indispensable. However, due to the high integration of aerospace equipment, the complexity of its structures and functions, and the extreme working conditions, the modeling parameters are not straightforward to derive via conventional measurement methods. It is understood that most materials of aerospace equipment are composite materials with stiffness and strength dispersion, inhomogeneity, and uncertainty. A great number of physical experiments are required to determine the key model parameters of the dynamic constitutive model. The external loads applied to aerospace equipment, including the pulsating thrust of the engine, impulsion, overload, and environmental noise in the launch phase and the alternating thermal load in the on-orbit phase, are complex and difficult to measure directly. In addition, because the aerospace structure is huge and complex, the connection characteristics among the components are intricate. During the modeling process, the connection characteristics should be addressed reasonably and equivalently. Therefore, for modern complex aerospace equipment, it is necessary to effectively evaluate and define the structural characteristic parameters, the external environment parameters, and the model equivalent parameters. This is the linchpin to ensure
the modeling credibility of the complex aerospace equipment and the design precision.

Numerical simulation and modeling credibility techniques [4–6] are playing an increasingly important role in improving the capability of innovative equipment design. In order to adapt to the gradually increasing complexity of equipment and to meet the requirements of high-fidelity numerical models in equipment design, it is necessary to develop generic technologies and cost-effective methods to derive the key parameters for numerical simulation modeling. This has practical significance for improving the modeling credibility and precision of complex equipment, and for fulfilling the effective analysis and optimization design of mechanical equipment.
2.2 Modeling Based on Computational Inverse Techniques

For many engineering problems, some key modeling parameters cannot practically be derived by theoretical analysis or experimental testing. Nevertheless, these parameters can prospectively be determined by designing a few specific experiments and measuring the structural responses that are strongly sensitive to the modeling parameters. By making full use of the easily measured structural responses and the basic numerical simulation model, highly practical computational inverse techniques can be developed to deduce the indeterminate internal characteristic parameters and external environment parameters, which serves as an effective approach to derive the key modeling parameters proficiently. Accordingly, this chapter will introduce the computational inverse techniques [7–9] to identify the key model parameters so as to improve the precision of the numerical simulation. The principal idea is outlined as follows. For the derivation of the modeling parameters, physical experiments based on sensitivity analysis are first carried out. Then, the inverse model for the key parameters is established by combining the experimental test results with the basic numerical model. Thereby, the key modeling parameters that are most consistent with the experimental observations can be optimally solved by using computational inverse algorithms.

A brief review of the concept of the inverse problem and the solving procedure is presented to exposit more clearly the high-fidelity numerical modeling method based on computational inverse techniques. Although the initiation of classical inverse problems can be traced back very early, the inverse problem as a scientific discipline, especially the engineering inverse problem, has arisen only in recent decades. In the 1960s, Tikhonov [10] developed the variational regularization theory, which marked the formation of the basic theoretical framework of the inverse problem and its numerical implementation. Yet, an explicit definition of the inverse problem is still not available at present. As the name implies, the inverse problem is always relative to the forward problem, and there is no strict standard to judge whether a specific problem is a forward problem or an inverse problem. Keller [11] gave a more general definition of the inverse problem: for a pair of interconverted problems, if the statement of one problem contains all or
part of the information of the other problem, then the one is the forward problem, whereas the other is the inverse problem. Based on a systematic summary of the sources of ill-posedness of inverse problems, Liu and Han [7] considered that a problem described by an integral process is a forward problem, while a problem described by a differential process is an inverse problem. From the perspective of systematology, the forward problem is the process of determining the output responses via the input parameters and the system model, while the inverse problem is the process of determining the system model or the input parameters from partial output responses. A more applicable mathematical definition of the inverse problem is to determine the unknown part of a definite-solution problem by using partial information about its solutions. In practical engineering applications, researchers often distinguish the forward problem and the inverse problem according to the natural order of happenings or phenomena, such as causal order, chronological order, or spatial order. The forward problem is generally an evolution process in a certain natural order from the cause to the effect, which is regarded as an analysis process. The inverse problem explores the internal principles or the external influences by using the observed information, which can be considered a synthesis process from the effect to the cause. Therefore, both the forward problem and the inverse problem are important research contents in science and engineering.

In recent decades, with the advancement of sensor and measuring techniques and the extensive exploration of numerical simulation and intelligent computing methods, the inverse problem has not only seen considerable development in the theoretical context, but also widely prevails in practical applications in the fields of machinery, geophysics, medicine, environment, telemetry, control, communication, weather, and economics, etc. The reason lies in the urgent requirements of engineering applications in various disciplines, and in the novelty and challenge of the inverse problem theory itself, which attracts many scholars' interest. In different fields, the meanings of the inverse problem are also different, and it is usually described by words such as inversion, inverse, recognition, prediction, reconfiguration, reconstruction, identification, assimilation, inverse design, and fault diagnosis, etc. [12–15].

The solution of an inverse problem is to identify the past state parameters or the internal characteristic parameters of a system from the measured data or the expected objectives. Therefore, firstly, a set of data observed by physical experiments or the subjective expected objectives should be provided in advance, which are usually the system responses rather than the model characteristics. The purpose of the inverse analysis is to transform the response data into a description of the model characteristics. Secondly, the potential functional relationship between the model parameters and the system response should be established. Thereby, when a set of model parameters is given, the corresponding system responses are expected to be calculated correctly using the derived functional relationship, implying the realization of the calculation of the forward problem. It is understood that the effective calculation of the forward problem is the premise and condition of the solution of the inverse problem.
The inversion of the unknown parameters is only possible when the calculation of the forward problem is implemented via analytical or numerical methods. Thirdly, the measured responses from the experiment are compared with the responses calculated by
the forward problem, and thus the inverse model can be established using specified criteria. Finally, computational inverse algorithms are usually adopted to solve the inverse problem, which can overcome the ill-posedness in the inverse process and achieve a stable calculation of the unknown parameters. Summarizing the concept of the inverse problem and the solving process, it is found that the inverse problem generally includes four aspects, i.e., data, models, criteria, and algorithms [16–18]. Specifically, they are the experimentally observed data, the forward model, the inverse criterion, and the computational inverse algorithm, respectively. Therefore, the theory and methods of the inverse problem can be applied to the high-fidelity modeling process of complex mechanical equipment. Integrating the specialties and difficulties involved in the modeling process of practical mechanical equipment, the computational inverse techniques for high-fidelity modeling will be developed.

The inverse problem can be classified into different categories from different perspectives. For example, according to the functional relationship between the observed responses and the inverse parameters, it can be classified into linear inverse problems and nonlinear inverse problems. From the perspective of systematology, it is classified into the first kind of inverse problem, i.e., the system identification problem, and the second kind of inverse problem, namely, the input identification problem. According to the unknown part of the system equation described by mathematical formulae, the inverse problem can be divided into the system parameter inverse problem, the input source inverse problem, the boundary condition inverse problem, the initial condition inverse problem, and the geometry inverse problem, etc. In this book, the inverse problem for the high-fidelity numerical modeling of complex mechanical equipment is of concern, and it can be classified according to the modeling requirements and the involved unknown parameters. In order to realize the high-fidelity numerical modeling of complex equipment, several different types of unknown modeling parameters should be determined as follows.

The first type is the model characteristic parameters. Because modern equipment is more and more complex, not all the material parameters and structural characteristic parameters in the modeling process can be obtained directly, such as the material constitutive parameters, heat transfer coefficient, sound reflection coefficient, wave-drag coefficient, expansion coefficient, structural damping, damage parameters, and friction coefficient. These characteristic parameters describe the intrinsic characteristics of a system or equipment, and have great effects on the functionalities and performances of the equipment. Accurate structural characteristic parameters will help to improve the modeling proficiency and the design precision of the equipment.

The second type is the model environmental parameters. In the design process of equipment, a clear understanding of the various external environments during its service is a prerequisite, such as the structural prestress state, impact overload, boundary conditions, collision conditions, force, heat, sound, electricity, magnetism, and other external excitations. In the numerical modeling and analysis process, the environmental parameters directly affect the precision of the model and the credibility of the analysis results.
If these parameters deviate from the actual working conditions, the deviations may be amplified in the stages of structural strength checking
and optimization, leading to dysfunction of the designed equipment in the practical service environment.

The third type is the model equivalent parameters. Because the structural characteristics, the connection patterns, and the working conditions of modern mechanical equipment are becoming more and more complex, simplification and equivalence are indispensable in numerical modeling. For example, when establishing the guideway system model of a machine tool, an equivalence of the guide contact stiffness under different external loads and preloads should be assumed. When building the vehicle–road coupled system model, an equivalent model is required for the complex tire and ground. During the numerical modeling of large aerospace equipment, a number of structural connections with nonlinear characteristics should be simplified. In order to guarantee the reliability of the overall structural model and the numerical simulation, the equivalent model and its parameters should match the structural physical state. Therefore, appropriate specification of the model equivalent parameters is also a linchpin to ensure the proficiency of the numerical simulation of complex mechanical equipment.

The fourth type is the model controlling parameters. During the design and development of mechanical equipment, the simulation of the established numerical model is enormously helpful for analyzing and evaluating the design performance of the equipment. The simulation process generally involves controlling parameters for the specific numerical algorithm, such as the hourglass controlling parameters, the contact controlling parameters, and the grid controlling parameters in impact dynamics simulation. Especially when commercial simulation software is applied, these parameters are often directly set to their default values, which may seriously deteriorate the efficiency of the numerical simulation and the correctness of the calculation results. In order to achieve the high-fidelity numerical modeling of complex equipment, it is necessary to reasonably determine the model controlling parameters to ensure the stability and precision of the numerical simulation.

Different from the traditional classifications of the inverse problem, the above classification is based on the requirements for different kinds of parameters in high-fidelity numerical modeling. Based on this classification, the computational inverse process for high-fidelity modeling is shown in Fig. 2.1. An exposition of each step is given as follows.

(1) Definition of the numerical simulation modeling problem. Through the analysis of the design objectives, the form of the numerical simulation model which can reflect the design indexes should be specified, and the different levels and types of modeling parameters are summarized and classified. Prior knowledge should be exhaustively explored and applied in order to decrease the number of parameters to be identified and to narrow the ranges of the unknown parameters, which is favorable to the efficiency and precision of the computational inverse.

(2) Modeling and solving the forward problem. According to the different design objectives and indexes, the corresponding forward models should be established. For example, in the design of a truss structure, if the kinematics index is concerned,
Fig. 2.1 Flowchart of high-fidelity numerical simulation modeling based on computational inverse techniques: the unknown parameter types (model characteristic, environmental, equivalent, and controlling parameters) feed the definition of the modeling problem; the procedure then combines the basic forward model, sensitivity analysis, the design of the physical experiment, sampling and surrogate-model updating, the comparison of measured and calculated responses through a criterion function, ill-posedness analysis with regularization, and computational inverse algorithms (gradient, homotopy, intelligent evolution, hybrid, and Bayesian inference algorithms), and checks convergence before outputting the identified parameters and building the high-fidelity numerical model
the multi-rigid body dynamics model should be established. If the structural stiffness and strength under the external load are the design objectives, the finite element analysis model is established. This modeling step builds the basic problem solver to implement the forward calculation, in which the key modeling parameters are yet to be determined.

(3) Sensitivity analysis. During the identification of the modeling parameters, the measured responses and the undetermined parameters should not only have a causal relationship, to ensure the existence and solvability of the inverse solution, but also exhibit strong sensitivity, to suppress the ill-posedness arising in the inverse process. Through sensitivity analysis and sorting, the most appropriate measured responses for identifying the unknown modeling parameters can be determined to guide the design of the specific physical experiment. Simultaneously, the sensitivity analysis also provides the required data and the analysis basis for sampling and the surrogate model.

(4) Sampling and updating the surrogate model. In the inverse process for the modeling parameters of a complex structure, the forward problem solver is called repeatedly. In order to improve efficiency, a surrogate model is generally employed to replace the time-consuming numerical simulation model. Regardless of the type of surrogate model, reasonable sampling is required. Popular sampling strategies include uniform design, orthogonal design, and Latin hypercube design, etc. It should be noted that the inverse results based on the surrogate model may not be the global optimal solution. Thus, if the convergence criterion is not satisfied, the surrogate model should be modified and updated adaptively.

(5) Physical experimental test and response acquisition. After the appropriate measured responses are identified through sensitivity analysis, the specific experimental device is built to acquire the response information. By compatibility checking and filtering of the experimentally measured data, outliers and noise can be diminished. Appropriate response selection and noise reduction not only improve the accuracy of the inverse results, but also ensure the stability of the inverse solution, eliminating the deviation or oscillation of the inverse results caused by measurement inaccuracy and the ill-posedness of the system.

(6) Criterion for modeling parameter identification. Different inverse criterion functions can be established to quantify the closeness between the simulation model and the actual system by comparing the responses calculated by the simulation model with the responses measured by experiment. Specifically, the criterion functions can be constructed using the least squares criterion, the minimum regularized functional criterion, the minimum mean square error criterion, the maximum likelihood criterion, the minimum error entropy criterion, the optimal control criterion, the mixed norm criterion, or higher-order cumulant identification criteria, etc. It is understood that absolutely accurate solutions need not be pursued for practical engineering problems. Different criteria restrict the solution of the inverse problem via specific error tolerances and the degree of fusion of the various measurement information, so that the derived modeling parameters can be modulated towards the requirements of engineering practice.

(7) Computational inverse algorithms. The ill-posedness arising in the inverse process can be overcome by the regularization method to ensure the stability of the solution of the inverse problem. Then, the modeling parameters can be efficiently identified using optimization algorithms, such as mathematical programming methods based on gradient computation, evolutionary algorithms based on intelligent computation, hybrid algorithms combining gradient-based optimization and intelligent inversion methods, the homotopy analysis algorithm, the extended Kalman filtering method, and the Bayesian inference method.

(8) Verification of the identified results and establishment of the high-fidelity numerical model. In order to ensure that the inverse modeling parameters closely approximate
the practical engineering values, the correctness and accuracy of the identified results are verified by numerical simulation and physical experiment. As the key modeling parameters are proficiently derived, the precision and credibility of the numerical model [19, 20] are greatly improved, providing a solid foundation for equipment performance analysis and optimization design.

The overall process is presented from the perspective of engineering practicality. Through the combination of experimental testing and numerical simulation, the computational inverse techniques can not only effectively derive specific modeling parameters that are not straightforward to obtain by traditional methods, but also greatly reduce the experimental cost because far fewer physical experimental tests are required, which provides an effective approach to realize high-fidelity numerical simulation. It should be noted that the basic forward model, which should correctly reflect the actual physical process, is the prerequisite for modeling parameter identification by computational inverse techniques. Only when the calculation of the forward model is implemented can the inverse calculation of the modeling parameters be achieved. In this sense, the high-fidelity simulation model introduced in this book provides results that are consistent with the experimental observations as much as possible under the existing cognitive level.

In conclusion, a high-fidelity numerical simulation model, which can adapt to the requirements of complex structures, extreme conditions, and unknown parameters, is the premise and foundation for the analysis, optimization, and high-quality design of equipment performance. Although the basic idea and methods of computational inverse have, to some extent, been adopted to derive indeterminate parameters in model updating research, the theory and methods of inverse problems for modeling parameter identification have not yet been studied thoroughly and systematically. This book exposits the high-fidelity numerical simulation modeling method based on computational inverse techniques, exemplified by specific problems in the modeling process of practical engineering structures. It provides solutions for the derivation of the key modeling parameters, which cannot be determined directly by traditional theoretical analysis and experimental testing, and lays the foundation for improving the design precision and product quality of complex equipment.
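To make the identification workflow of Fig. 2.1 concrete, the following minimal sketch illustrates steps (2) and (6)–(8) on a deliberately simple case: two parameters of an invented forward model are identified by minimizing a regularized least-squares criterion between measured and calculated responses. The forward model, parameter meanings, and data are hypothetical and merely stand in for a real finite element solver and test campaign; it is an illustration of the principle, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical forward model (step 2): static responses of a structure as a
# function of two unknown modeling parameters theta = (stiffness_scale,
# equivalent_damping).  In a real application this would call the finite
# element solver of the basic forward model.
def forward_model(theta, load_cases):
    k, c = theta
    return load_cases / k + 0.01 * c * np.sqrt(load_cases)

rng = np.random.default_rng(0)
load_cases = np.linspace(1.0, 10.0, 8)             # designed experiment (steps 3, 5)
theta_true = np.array([2.0, 5.0])                  # "true" parameters, unknown in practice
y_measured = forward_model(theta_true, load_cases)
y_measured = y_measured + 0.01 * rng.standard_normal(y_measured.size)   # measurement noise

theta_prior = np.array([1.5, 4.0])                 # prior estimate from design data
alpha = 1e-3                                       # regularization parameter

# Steps (6)-(7): regularized least-squares criterion minimized by an optimizer
def criterion(theta):
    residual = forward_model(theta, load_cases) - y_measured
    return np.sum(residual**2) + alpha * np.sum((theta - theta_prior)**2)

result = minimize(criterion, x0=theta_prior, method="Nelder-Mead")
print("identified parameters:", result.x)          # step (8): verify against tests
```

In practice the quadratic penalty would be replaced by whichever criterion and regularization scheme suit the problem, and the optimizer by one of the gradient, hybrid, or Bayesian algorithms listed in step (7).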
References 1. Oden, J. T., Belytschko, T., Fish, J., et al. (2006). Simulation-based engineering science: Revolutionizing engineering science through simulation. Report of NSF blue ribbon panel on simulation-based engineering science. 2. Glotzer, S., Kim, S., Cummings, P. T., et al. (2009). International assessment of research and development in simulation-based engineering and science. Maryland: World Technology Evaluation Center. 3. Law, A. M. (2000). Simulation modeling and analysis (4th ed.). New York: McGraw-Hill. 4. Oberkampf, W. L., & Roy, C. J. (2010). Verification and validation in scientific computing. New York: Cambridge University Press.
5. Oberkampf, W. L., & Roy, C. J. (2011). A comprehensive framework for verification, validation, and uncertainty quantification in scientific computing. Computer Methods in Applied Mechanics and Engineering, 200, 2131–2144. 6. United States Department of Energy. (2005). Advanced simulation and computing program plan. Sandia National Laboratories Fiscal Year 2005, Report SAND 2004–4607PP. Issued by Sandia National Laboratories for NNSA’s Office of Advanced Simulation & Computing, NA-114. 7. Liu, G. R., & Han, X. (2003). Computational inverse techniques in nondestructive evaluation. Florida: CRC Press. 8. Tarantola, A. (2005). Inverse problem theory and methods for model parameter estimation. Philadelphia: SIAM. 9. Wang, Y. F., Yagola, A. G., & Yang, C. (2011). Optimization and regularization for computational inverse problems and applications. Springer: Higher Education Press. 10. Tikhonov, A. N. (1963). Solution of incorrectly formulated problems and the regularization method. Soviet Mathematics Doklady, 4, 1035–1038. 11. Keller, J. B. (1976). Inverse problems. American Mathematics Monthly, 83, 107–118. 12. Wang, Y. F. (2007). Computational methods for inverse problem and their application. Beijing: Higher Education Press. 13. Farrar, C. R., Hemez, F. M., Shunk, D. D., et al. (2004). A review of structural health monitoring literature: 1996–2001. Los Alamos, NM: Los Alamos National Laboratory. 14. Bui-Thanh, T., Damodaran, M., & Willcox, K. E. (2004). Aerodynamic data reconstruction and inverse design using proper orthogonal decomposition. AIAA Journal, 42(8), 1505–1516. 15. Isermann, R. (2006). Fault-diagnosis systems: An introduction from fault detection to fault tolerance. Springer Science & Business Media. 16. Eykhoff, P. (1974). System identification: Parameter and state estimation. Wiley. 17. Chen, B. D., Zhu, Y., & Hu, J. C. (2013). System parameter identification: Information criteria and algorithms. Beijing: Tsinghua University Press. 18. Fonseca, J. R., Friswell, M. I., Mottershead, J. E., et al. (2005). Uncertainty identification by the maximum likelihood method. Journal of Sound and Vibration, 288, 587–599. 19. Arendt, P. D., Apley, D. W., & Chen, W. (2012). Quantification of model uncertainty: Calibration, model discrepancy, and identifiability. Journal of Mechanical Design, 134(10), 100908. 20. Shabi, J., & Reich, Y. (2012). Developing an analytical model for planning systems verification, validation and testing processes. Advanced Engineering Informatics, 26(2), 429–438.
Chapter 3
Computational Inverse Techniques
3.1 Introduction

The computational inverse technique-based high-fidelity numerical modeling is a comprehensive analysis of the experimental data and the numerical simulation model, rather than a simple modeling analysis process or an optimization iteration process. Appropriate physical experiments are required to ensure a relatively strong sensitivity between the measured responses and the modeling parameters, while the numerical solution is expected to be available. In addition, the identification of the model parameters should address three problems, i.e., the high computational intensity and ill-posedness of the system, the improvement of identification efficiency and stability, and the optimality of the solution to a certain extent. Therefore, when the inverse problem theory and the computational inverse techniques are introduced into the numerical modeling of complex structures, they improve the level of reliability and ensure the proficiency of the numerical simulation. On the other hand, the following concurrent problems and deficiencies are observed [1–6].

(1) System modeling. The system modeling includes the forward model and the inverse model based on specified criteria, where the forward model is the precondition and foundation for the inverse solution. In the process of system modeling, various factors should be considered comprehensively, e.g., the design goal, model applicability, and solving efficiency in the forward modeling, as well as the criterion for comparing the calculated response with the measured response and the fidelity standard between the system model and the actual system in the inverse modeling.

(2) Sensitivity analysis. The identification process and the design analysis of the model parameters of a complex structure often involve high-dimensional parameter problems. Various dependent parameters may show inter-coupling effects on the structural responses and design indexes. Thus, the development of a global sensitivity analysis method has become a significant part of the identification of the model parameters. It aims to effectively sort the contribution
rates of the model parameters and to identify the best types and positions of the measured responses.

(3) Design of physical experiment. The identification of the model parameters is based on a physical experiment. In view of the global sensitivity analysis, the appropriate design of an optimal physical experiment is the linchpin for the success of the model parameter identification. In addition, redundant measured information adds unnecessary computational cost. The compatibility and sensitivity of too many measured data could be weak and may thus reduce the accuracy of the identified model parameters.

(4) Ill-posedness. It refers to the singularity and instability of the inverse solution of the model parameters. The experimentally measured responses contain only a subset of the full information. Therefore, interference or measurement errors in the measured response may lead to no solution. In addition, insufficient measured information, measurement noise, and the ill-posedness of the system may result in an unstable solution or multiple solutions. In order to overcome the ill-posedness of the inverse problem, regularization methods are developed. This is a highlighted theoretical research topic with practical applications of the inverse problem.

(5) Computational efficiency. Since modern equipment is more and more complex, the scale of the relevant numerical model tends to be more sophisticated. The solving process often involves different types of nonlinearity, such as geometric, material, and boundary nonlinear problems. Therefore, even a single forward calculation is time-consuming, let alone the computational inverse, which requires many forward calculations. In order to improve the computational efficiency of the identification, an effective forward solver should be developed by applying the surrogate model technique or the reduced model-based technology. On the other hand, a high-efficiency computational inverse algorithm should be proposed, which combines gradient optimization and intelligent evolutionary algorithms to decrease the number of iterations.

(6) Identification of dynamic model parameters. A numerical model often requires many dynamic model parameters, such as the structural dynamic load, the time-varying characteristics during the service life, and the structural damage evolution. The identification of the dynamic model parameters is more complex than that for static problems. The complexity lies in the description of the relevant dynamic process of the model parameters, the analysis of the ill-posedness of the inverse problem, and the efficient identification of the dynamic model parameters.

(7) Uncertainty quantification and evaluation of the identified results. Practical engineering structures inevitably contain random uncertainties or cognitive uncertainties. Quantification and evaluation of their influence on the identified results is of great significance to expand the engineering applicability of the computational inverse technique and to implement the verification and validation of the numerical model.
This chapter focuses on the practical computational inverse techniques for the inverse problems of the model parameters, and discusses the sensitivity analysis, ill-posedness, and computational efficiency problems. Specifically, the global sensitivity analysis method, the regularization method for the ill-posedness problem, and the highly efficient hybrid inverse algorithm will be exposited. The detailed engineering applications of the computational inverse techniques will be presented in the next chapter.
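As a brief illustration of the role of regularization, the following sketch (built on an invented, nearly collinear two-column system, not an example from this book) contrasts a plain least-squares inversion with a Tikhonov-regularized one: the regularized estimate remains stable across noise realizations, at the cost of a small bias.

```python
import numpy as np

# Illustrative ill-conditioned linear inverse problem y = A x + noise.
# The two columns of A are nearly collinear, so separating their
# coefficients from noisy data is ill-posed: the plain least-squares
# estimate changes wildly with the noise realization.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 50)
A = np.column_stack([np.exp(-t), np.exp(-1.01 * t)])    # nearly dependent columns
x_true = np.array([2.0, 1.0])

for trial in range(2):                                  # two noise realizations
    y = A @ x_true + 1e-3 * rng.standard_normal(t.size)

    # Plain least squares: unstable because cond(A) is very large
    x_ls, *_ = np.linalg.lstsq(A, y, rcond=None)

    # Tikhonov regularization: minimize ||A x - y||^2 + alpha ||x||^2,
    # i.e. solve (A^T A + alpha I) x = A^T y.  The estimate stays stable
    # from trial to trial, at the cost of a small bias.
    alpha = 1e-4
    x_tik = np.linalg.solve(A.T @ A + alpha * np.eye(2), A.T @ y)

    print(f"trial {trial}: least squares {x_ls}, Tikhonov {x_tik}")

print("condition number of A:", np.linalg.cond(A))
```

The choice of the regularization parameter alpha is itself a key issue and is addressed by the regularization methods discussed later in this chapter.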
3.2 Sensitivity Analysis Methods

Sensitivity analysis evaluates the contributions of the model parameters according to the mapping function from the model parameters to the structural response. It applies the contribution ratios to sort the relevance of each model parameter. In the identification process of the model parameters, sensitivity analysis has practical significance for analyzing the structural model, specifying the key model parameters, reducing the number of model parameters to be identified, guiding the targeted physical experiments, and improving the well-posedness of the system model. Model parameters mainly have two kinds of effects on the structural responses. One is the unit perturbation effect, which refers to the change of the structural response under tiny perturbations of a model parameter. The other is the total perturbation effect, which is the contribution ratio to the changes of the structural responses when the model parameters vary over their value ranges. There are two main categories of sensitivity analysis, the local sensitivity analysis (LSA) and the global sensitivity analysis (GSA) [7, 8]. The local sensitivity analysis method calculates the change of the structural response under the variation of only one parameter, while the global sensitivity analysis is carried out with all the model parameters varying in the total feasible region.
3.2.1 Local and Global Sensitivity Analysis

The local sensitivity analysis method is also called the one-at-a-time (OAT) method [9, 10]. When one model parameter varies around a fixed point in the parameter space and the other model parameters are kept unchanged, the ratio of the change of the structural response to the change of this model parameter is adopted as the sensitivity index. The local sensitivity analysis method is developed based on a linear model, and mainly includes the direct derivation method, the finite difference method, and the parameter perturbation method. Its advantages lie in its straightforwardness, simple principle, manageable amount of calculation, and easy operation. On the other hand, it has several disadvantages. First, it cannot evaluate the mutual effects of the model parameters on the structural responses. Second, when the structure is a nonlinear system, the sensitivity indexes depend heavily on the selection of the fixed point.
Third, when the model parameters have different orders of magnitude, it is difficult to rank the sensitivity indexes effectively.

The global sensitivity analysis method evaluates the influence on the structure when the various parameters change simultaneously. It analyzes the contribution ratios of each parameter and of their cross terms. The model parameters change over the total feasible region, and the sensitivity index of a single model parameter is derived while all parameters change simultaneously. The global sensitivity analysis method explores a large value space of the model parameters, and the resulting sensitivity indexes are suitable for ranking the importance of each parameter, but it demands a relatively large amount of computation. Classical global sensitivity analysis methods include the regression analysis method [11], the screening-based method [12], the variance-based method [13, 14], and the surrogate-based method [15]. The main advantages and disadvantages of the respective sensitivity analysis methods are given in Table 3.1.
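As a toy illustration of the OAT idea described above, the following sketch (with an invented response function) computes local sensitivity indexes at a fixed nominal point by central finite differences; it is for illustration only and does not correspond to any example in this book.

```python
import numpy as np

# Invented structural response used only for illustration
def response(x):
    return x[0] ** 2 + 3.0 * x[1] + np.sin(x[2]) + 0.5 * x[0] * x[1]

x0 = np.array([1.0, 2.0, 0.5])      # fixed nominal point
h = 1e-6                            # perturbation size

# One-at-a-time (OAT) local sensitivities by central finite differences:
# vary one parameter at a time, keep the others at their nominal values.
local_sens = np.zeros_like(x0)
for i in range(x0.size):
    xp, xm = x0.copy(), x0.copy()
    xp[i] += h
    xm[i] -= h
    local_sens[i] = (response(xp) - response(xm)) / (2.0 * h)

print("local sensitivity indexes at x0:", local_sens)
# Analytical check: [2*x0 + 0.5*x1, 3 + 0.5*x0, cos(x2)] = [3.0, 3.5, 0.8776]
```

The result depends on the chosen point x0 and ignores parameter interactions, which is exactly the limitation of LSA that motivates the global methods below.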
3.2.2 Direct Integral-Based GSA Method

Among the above sensitivity analysis methods, the Sobol method [14, 17] can comprehensively analyze the sensitivities of each model parameter and of their cross terms. Therefore, the Sobol method and its surrogate-based derivatives have more general application. The sensitivity indexes of the Sobol method are usually solved by the Monte Carlo method [14]. This process requires a large number of samples, and the stability of the sensitivity results heavily depends on the samples. Especially for complex engineering problems, even one numerical simulation is time-consuming. Thus, the large number of numerical simulations makes the computational intensity of the Sobol global sensitivity analysis method intractable. Therefore, a surrogate model, which is a simpler function between the structural response and the model parameters, is often adopted to reduce the computational cost. The surrogate model is then combined with Monte Carlo sampling to implement the Sobol sensitivity analysis more efficiently. Although this overcomes the problem of the large calculation amount, the proficiency of the analysis results still depends on the samples. The surrogate model precision, the sampling methods, and the number of samples will affect the accuracy of the sensitivity analysis results. In particular, the sensitivity indexes of the high-order cross terms may have large deviations and numerical instability.

In order to quantify the model parameter sensitivities more effectively and accurately, this section discusses a type of global sensitivity analysis method based on direct integration. Based on the error reduction ratio, this method adopts the polynomial model between the structural response and the model parameters to establish the optimal polynomial model, which not only has a simple structure and high approximation precision, but is also suitable for direct integral-based Sobol sensitivity analysis. When the Sobol sensitivity analysis method is applied to evaluate the main model parameters, the $n$-dimensional model parameter vector $\mathbf{x}$ is transformed into the unit hypercube domain $\Omega^n$, where $\Omega^n = \{\mathbf{x} \mid 0 < x_i < 1,\ i = 1, 2, \ldots, n\}$. The corresponding structural response $f(\mathbf{x})$ can be divided into an orthogonal function combination
Table 3.1 Comparison of sensitivity methods [16]

- LSA, OAT method: The sensitivity of a single parameter is evaluated at a fixed point. It has the advantages of less calculation, easy operation, and no cross-interaction effect. The result depends on the selection of the fixed point.
- GSA, regression analysis methods (SRC, SRRC, t-value): SRC and t-value are suitable for linear models; SRRC is suitable for nonlinear and monotonically changing models. The result is relative to the regression function, and the residual is the standard contribution. If the parameters are not independent, the result is not accurate.
- GSA, screening-based methods (Morris): Calculation is performed only at discrete points. They are suitable for models with few significant parameters. They cannot quantitatively output the overall change.
- GSA, variance-based methods (FAST, Sobol): They adopt the variance decomposition and consider both the main effect and the cross effect. They need large sampling calculations. FAST is not suitable for discrete distribution analysis.
- GSA, surrogate-based methods (MARS, PCE, SVM): The calculation process is divided into two steps, namely the establishment of the surrogate model and the variance-based sensitivity analysis. They are suitable for the analysis of complex models and achieve higher calculation efficiency than the variance-based methods. The accuracy of the sensitivity analysis depends on the accuracy of the surrogate model.

Note: SRC standardized regression coefficient; SRRC standardized rank regression coefficient; FAST Fourier amplitude sensitivity test; MARS multivariate adaptive regression splines; PCE polynomial chaos expansion; SVM support vector machine.
of the single model parameter and the interactional model parameters [14]

$$f(\mathbf{x}) = f_0 + \sum_{i} f_i(x_i) + \sum_{1 \le i < j \le n} f_{ij}(x_i, x_j) + \cdots + f_{12 \cdots n}(x_1, x_2, \ldots, x_n) \quad (3.1)$$

The right side of Eq. (3.1) contains $2^n$ terms, which are calculated by the following integrations
$$f_0 = \int_0^1 f(\mathbf{x})\, d\mathbf{x} \quad (3.2)$$

$$f_i(x_i) = -f_0 + \int_0^1 f(\mathbf{x}) \prod_{k \ne i} d x_k \quad (3.3)$$

$$f_{ij}(x_i, x_j) = -f_0 - f_i(x_i) - f_j(x_j) + \int_0^1 f(\mathbf{x}) \prod_{k \ne i, j} d x_k \quad (3.4)$$

The other higher-order terms can be derived similarly. Except for the constant $f_0$, the other sub-terms satisfy

$$\int_0^1 f_{i_1 i_2 \cdots i_s}(x_{i_1}, x_{i_2}, \ldots, x_{i_s})\, d x_k = 0, \quad k = i_1, i_2, \ldots, i_s, \quad 1 \le i_1 < \cdots < i_s \le n \quad (3.5)$$

According to Eqs. (3.1) and (3.5), every two sub-terms are orthogonal, and the decomposition is unique. The Sobol sensitivity analysis method uses the ratio of the partial variance to the total variance to represent the influence of the model parameters and their interactions on the structural response. The total variance of the structural response $f(\mathbf{x})$ is calculated as

$$D = \int_0^1 f^2(\mathbf{x})\, d\mathbf{x} - f_0^2 \quad (3.6)$$

The partial variances of the sub-terms in Eq. (3.1) are

$$D_{i_1 i_2 \cdots i_s} = \int_0^1 f^2_{i_1 i_2 \cdots i_s}(x_{i_1}, x_{i_2}, \ldots, x_{i_s})\, d x_{i_1} d x_{i_2} \cdots d x_{i_s} \quad (3.7)$$

With application of the orthogonality of the sub-terms, the following equation can be derived from Eq. (3.1):

$$D = \sum_{i=1}^{n} D_i + \sum_{1 \le i < j \le n} D_{ij} + \cdots + D_{12 \cdots n} \quad (3.8)$$

Thus, the sensitivity index $S_{i_1 i_2 \cdots i_s}$ in Eq. (3.1) is given as
$$S_{i_1 i_2 \cdots i_s} = \frac{D_{i_1 i_2 \cdots i_s}}{D} \quad (3.9)$$

Because the sum of all sensitivity indexes is 1, the following equation can be derived:

$$\sum_{i=1}^{n} S_i + \sum_{i=1}^{n} \sum_{j=i+1}^{n} S_{ij} + \cdots + S_{12 \cdots n} = 1 \quad (3.10)$$

The total sensitivity index $S_i^T$ of the model parameter $x_i$ can be defined as

$$S_i^T = 1 - S_{-i} \quad (3.11)$$
where $S_{-i}$ is the sum of all the sensitivity indexes $S_{i_1 i_2 \cdots i_s}$ that do not involve the parameter $x_i$.

The Sobol sensitivity analysis method involves multiple integrals when solving the orthogonal terms and calculating the variances in Eqs. (3.2)–(3.7). The conventional approach adopts Monte Carlo sampling to evaluate these integrals. However, the accuracy and stability of the Monte Carlo integration depend on the number and uniformity of the samples, which leads to an intractable computational cost in the sensitivity analysis of complex engineering structures. Accurate sensitivity results are also difficult to derive, especially for high-dimensional parameter problems. If a relatively accurate surrogate model can be generated to replace the original system model, it will improve the efficiency and accuracy of the sensitivity analysis with direct integration. Among the many surrogate models, the polynomial model is the most preferable for direct integral Sobol sensitivity analysis due to its simple structure, explicit expression, and direct integrability. However, it cannot by itself select the significant terms, and the polynomial coefficients are not stable when the response contains measurement noise. For this reason, the structural selection technique based on the error reduction ratio is applied to establish the polynomial surrogate model [18]. It evaluates the significance of each term and chooses the effective terms to establish the optimal polynomial surrogate model, which further reduces the computational intensity of solving and evaluating the Sobol sensitivity indexes of each parameter. The details of establishing the optimal polynomial surrogate model based on the structural selection technique will be introduced in Chap. 6 of this book.

The flowchart of the Sobol global sensitivity analysis based on the optimal polynomial and direct integration is given in Fig. 3.1. The detailed steps are exposited as follows, and a minimal code sketch of the overall procedure is given after the list.

(1) The numerical model is established, and the type of the structural responses, the model parameters, and their value ranges are specified.
(2) The Latin hypercube experimental design method is adopted to generate the required samples, and numerical simulations are performed at every sample to calculate the structural responses.
36
3 Computational Inverse Techniques
Begin
Use the samples to establish the polynomial model
Determine the structural response and model parameters
Sample via Latin Hypercube method Orthogonalization Numerical simulations at every sample Solve the coefficients of the orthogonal items
Establish optimal polynomial model f ( x ) based on structural selection technique
Structural selection technique based ERR
No
Contribution radio threshold Yes Inverse orthogonal transformation
Optimal polynomial model
Use the optimal polynomial model to integrate Eqs.(3.2)-(3.4), and set up the orthogonal items as Eq.(3.1)
Use Eq.(3.1) to integrate Eqs.(3.6) and (3.7)ˈand calculate total variance and partial variances
Obtain the Sobol sensitivity values of model parameters via Eqs.(3.9) and (3.11)
End
Fig. 3.1 Flowchart of global sensitivity analysis based on direct integration
(3) Based on the model of the relation between the structural response and the model parameter, the structural selection technique based on the error reduction radio is applied to choose the significant items and build the optimal surrogate model. (4) The optimal polynomial model is used to integrate Eqs. (3.2) to (3.4) to yield Eq. (3.1) satisfying the orthogonal property. (5) Equation (3.1) is adopted to integrate Eqs. (3.6) and (3.7) directly, and calculate the structural total variance and partial variances of each sub-item. (6) According to Eqs. (3.9) and (3.11), the sensitivities of each sub-item and model parameters are quantified.
3.2 Sensitivity Analysis Methods
37
3.2.3 Numerical Examples Equation (3.12) is adopted to demonstrate the proficiency of the global sensitivity analysis method based on the optimal polynomials and the direct integration, so that f (x) = 0.810 − 0.116x1 + 0.121x2 + 0.152x3 + 0.065x12 − 0.025x1 x2 − 0.054x1 x3 + 0.013x22 − 0.013x2 x3 + 0.030x32
(3.12)
where x ∈ [0, 1]. Latin Hypercube method is adopted to generate 30 samples within the value range of parameters and the object function values at every sample are calculated. A certain degree of noise is added to the object function values and the order of the complete polynomials is set as 4. Through the selection technique based on error reduction ratio, the optimal polynomial surrogate model is established as follows f˜(x) = 0.810 − 0.090x1 + 0.122x2 + 0.149x3 + 0.044x12 − 0.038x1 x2 − 0.053x1 x3 + 0.010x22 + 0.005x2 x3 + 0.017x32 2 2 = 0.902 1 − 0.136x 1 + 0.053 + 0.010x 2 + 0.105x 2 − 0.056 + 0.044x f0
+ 0.017x32
f 1 (x 1 )
f 2 (x 2 )
+ 0.124x3 − 0.068 + 0.019x1 + 0.019x2 − 0.038x1 x2 − 0.010 f 12 (x 1 ,x 2 )
f 3 (x 3 )
+ 0.027x1 + 0.027x3 − 0.053x1 x3 − 0.013 + 0.005x2 x3 − 0.002x2 − 0.002x3 + 0.001 f 13 (x 1 ,x 3 )
f 23 (x 2 ,x 3 )
(3.13) Comparing Eq. (3.12) with Eq. (3.13), the structural type of the surrogate model is the same as that of the original function, and the coefficients are basically identical. This indicates that the structural selection technique effectively eliminates the redundant items in the complete polynomial, chooses the best polynomial items and has a good adaptability for noise. Meanwhile, through the direct integral operation shown in the Eqs. (3.2) to (3.4), the optimal polynomial can be transformed into the form of the orthogonal sub-polynomials. Continuing the direct integration operation in Eqs. (3.6) and (3.7), the total variance and the partial variances of each model parameter and the cross items can be directly calculated. The sensitivity indexes of the global sensitivity analysis method based on the direct integral and Sobol sensitivity analysis method according to Monte-Carlo sampling are listed in Table 3.2. Therein, Samplings 1 and 2 contain 5000 random samplings, and Sampling 3 and 4 contain 10,000 random samplings. From Table 3.1, for the Sobol sensitivity analysis based on the Monte-Carlo method, the sensitivities are not stable under different sampling sizes and batches. Especially for the high-order cross items, their sensitivity indexes have a big difference, thus, it is not convergent. On the other hand, the optimal polynomial based on the error ratio selection technology
38
3 Computational Inverse Techniques
Table 3.2 Comparison of the sensitivity analysis results Sensitivity index
Direct integral method
Sobol method based on Monte-Carlo
Original model
Optimal polynomial
Sampling 1
Sampling 2
Sampling 3
Sampling 4
S1
0.1917
0.2012
0.2406
0.1752
0.1594
0.1742
S2
0.2996
0.3147
0.3423
0.2583
0.1921
0.3022
S3
0.5017
0.4756
0.5299
0.4782
0.456
0.4515
S12
0.0012
0.0029
0.0574
0.0355
0.1461
0.0287
S13
0.0055
0.0056
0.0573
0.0507
0.0465
0.0430
S23
0.0003
4.2 ×
S123
0
0
10−5
0.0024
0.0312
0.1396
0.0072
0.0043
0.0291
0.1397
0.0068
is similar to the original function. Therefore, the sensitivity analysis results based on the direct integral has a higher accuracy.
3.2.4 Engineering Application: Global Sensitivity Analysis of Vehicle Roof Structure The main injury in cars tumbling accident is potentially caused by the roof deformation, which invades the occupant’s space to hurt the passengers. Thus, the roof strength should be enhanced to reduce the roof deformation in design of the body structure. There are many body structural parts. Each part contributes to the roof strength. Therefore, it is necessary to quantitatively analyze the sensitivity of each part, and to identify the main parts whose sensitivities are the strongest. A numerical model for the vehicle is established as shown in Fig. 3.2. This model consists of 492,384 elements and 491,956 nodes. The maximum bearing capacity of roof structure is an important index to evaluate the roof strength, so the maximum bearing forces is identified as the structure output response in the crash process that Fig. 3.2 Roof pressure numerical model of vehicle roof
3.2 Sensitivity Analysis Methods
39
a rigid wall presses the vehicle roof. The upper body parts, as shown in Fig. 3.3, are the analysis objects. The thicknesses of these parts are set as the model parameters. The value range of the thicknesses are plus or minus 50% to the initial design as shown in Table 3.3. The global sensitivity analysis method based on the direct integration and the optimal polynomial is adopted to evaluate the sensitivities of the component thicknesses to the maximum bearing force. In the nine value ranges of thicknesses, 30 samples are generated by Latin hypercube experimental design, and their corresponding maximum bearing forces at every sample are calculated through 30 times numerical simulation. The structural selection technique based on the error reduction ratio establishes the optimal polynomial model as follows,
9
4 6 8
5
3
7
2
1
Fig. 3.3 Upper structure of vehicle body
Table 3.3 Parameters and design ranges of the upper structure of vehicle body Part number
Part name
Initial thickness(mm)
Range value (mm)
1
Inner plate of A pillara
2.0
[1.0, 3.0]
pillara
2
Reinforcing plate of A
1.4
[0.7, 2.1]
3
Front cross beam of the cap
1.1
[0.6, 1.7]
4
Roof side rail
1.1
[0.6, 1.7]
5
Reinforcing plate of the roof side raila
1.2
[0.6, 1.8]
6
Inner plate of B pillara
1.2
[0.6, 1.8]
1.1
[0.6, 1.7]
pillara
7
Outside plate of A
8
Reinforcing plate of B pillara
1.5
[0.8, 2.4]
9
Aft cross beam
1.0
[0.5, 1.5]
Note a indicates symmetric part
40
3 Computational Inverse Techniques
f (x) = 25027.32 − 5960.52x5 + 2102.92x6 + 3946.78x8 − 4874.11x9 + 1654.19x1 x3 + 3018.51x2 x5 − 2359.05x2 x9 − 2031.41x4 x6 − 270.24x4 x7 + 1665.94x4 x9 + 947.79x5 x6 − 606.69x5 x7 + 1097.75x5 x8 + 2239.11x6 x7 − 1307.44x6 x8 + 832.32x6 x9 − 110.28x82 − 1728.46x8 x9 + 4852.16x92
(3.14)
where f (x) is the maximum bearing force of the vehicle roof and xi (i = 1, 2, . . . , 9) are the thicknesses of the nine components. Through the direct integration, Eq. (3.14) can be converted into the orthogonal items. Continuing the integration in Eqs. (3.6) and (3.7), the sensitivity analysis results of every thickness can be derived as shown in Fig. 3.4 and Table 3.4. According to the sensitivity results in Table 3.4, A-pillar (two parts), B-pillar (three parts), and the front cross beam contribute significantly to the maximum bearing force. Their first order sensitivity values are 0.2858, 0.2769, and 0.2150, respectively. They are the principal supports to the top pressure working condition, which is consistent with the practical observation. Sensitivity results quantify the contributions of each structural parameter to the vehicle roof pressure, and help to select the model parameters which contribute significantly to the roof strength. It should also be noted that the global sensitivity analysis method based on the direct integral only runs a limited number of numerical simulations when setting up the polynomial model, thus, the solving efficiency is high, and it is suitable for the analysis of complex engineering problems. 0.3
Sensitivity value
0.25 0.2 0.15 0.1 0.05 0
S1T
S2T S3T S4T S5T S6T Sensitivity index
S7T
S8T
S9T
Fig. 3.4 Global sensitivity results of the parameters of nine upper components
3.3 Regularization Methods for Ill-Posed Problem
41
Table 3.4 Sensitivity analysis results of the parameters of the upper structure Sensitivity index
Value
Sensitivity index
Value
Sensitivity index
Value
S1
0.2350
S13
0.0179
S67
0.0118
S2
0.0508
S25
0.0348
S68
0.0085
S3
0.2150
S29
0.0148
S69
0.0013
S4
0.0230
S46
0.0097
S89
0.0103
S5
0.0050
S47
0.0001
Others
0
S6
0.1152
S49
0.0045
S7
0.0534
S56
0.0025
S8
0.1083
S57
0.0009
S9
0.0711
S58
0.0060
3.3 Regularization Methods for Ill-Posed Problem When identifying the model parameters by physical experiments, the identified model parameters are expected to be subsistent and unique. Because there inevitably exists measurement noise in the response data, the stability of the solution is another concern. The existence, uniqueness, and stability of the solution to the model parameters have theoretical and practical significance for the high precision numerical modeling. When describing the problem and the specific condition, if a problem satisfies the following three conditions, i.e., the existence, uniqueness and stability of the solution, the problem is well-posed. If one of the above three conditions is not satisfied, the problem is identified as ill-posed. Most inverse problems of the model parameters are ill-posed problems. Therefore, revelation of the source of the ill-posedness to be reduced effectively is prerequisite to ensure the accuracy and stability of the identification of the model parameters.
3.3.1 Ill-Posedness Analysis If the established numerical model can reasonably represent the actual physical process and identify the model parameters with strong sensitivities to the physical measurement through the sensitivity analysis, the inverse solution in general exists in engineering problems. However, in some special cases, it is still difficult to ensure the existence of the inverse solution. For example, when there is large deviation or noise in the measured response, the corresponding inverse solution may not exist. When the measured response is an artificial expectation, there may be no inverse solution within the value ranges of the model parameters. In order to ensure the uniqueness of inverse solution, multi-variety sensing information can be fused, and the prior knowledge can be fully applied to enhance the constraints of the inverse solution. Experimental measured responses inevitably contain noise, and the slight
42
3 Computational Inverse Techniques
disturbance of the measured response may lead to large change of the inverse solution, which means that the inverse results don’t have the continuous dependence on the measured response. In order to analyze the reason for the instability, this section will discuss the ill-posedness and the regularization in the linear inverse problems. Assuming the dimensions of the experimental measured responses with noise and the parameters to be identified are n, the linear relationship can be described as yδ = Gx + er r
(3.15)
where yδ is the measured response with noise, er r denotes the noise, x represents the actual value of the model parameter. G : x → y is the linear operator in the matrix form. The singular value decomposition of system matrix G gives G = U Diag(σi )V T
(3.16)
where U = (u1 , u2 , . . . , un ), V = (v 1 , v 2 , . . . , v n ), ui and v i are the left and right singular value vectors of matrix G, respectively. So that
uiT u j = θi j , v iT v j = δi j Gv i = σi ui , G T ui = σi v i ,
(3.17)
where δi j is the Kronecker function implying that if i = j, δi j = 1, otherwise δi j = 0. If the inverse matrix of G exists, substituting Eqs. (3.16) and (3.17) into Eq. (3.15), the identified model parameters with the noise x δ is derived as
x δ = G −1 yδ = V Diag(σi−1 )U T U Diag(σi )V T x + er r =x+
n
σi−1 uiT · er r v i
(3.18)
i=1
According to Eq. (3.18), the instability of the identified solution x δ is mainly caused by two factors. One is the noise er r in the measured response, and another is the small singular value σi . Especially, when the system matrix is seriously illposed, the small singular will approach closely to zero and leads to large error in the inverse solution. In order to overcome this ill-posedness, on the one hand, the filtering method can be applied to reduce the noise in the measured response. On the other hand, regularization method can reduce the ill-posedness of the system matrix to deduce effectively the stably solving of the inverse problem.
3.3.2 Regularization Methods Regularization method is a generic method for solving the ill-posed problems. The basic idea is to adopt a bounded operator, which is similar to the original ill-posed
3.3 Regularization Methods for Ill-Posed Problem
43
problem, to transform the original problems into a well-posed problem. The solution of the transformed well-posed problem can effectively approximate the original solution. Therefore, the linchpin for the effectiveness of the regularization is how to construct a similar problem to get regular operator and the regularization solution, and how to control the similar degree to the original problem, namely, how to determine the appropriate regularization parameter. In all the regularization methods, the most representative is the Tikhonov regularization [5]. It is proposed based on the first type of operator equation to lay the theoretical basis for processing the inverse problem. A series of regularization methods were developed and advanced upon the Tikhonov method, such as Landweber iteration method [19], Backus-Gilbert method [20], Bubnov-Galerkin method [21], truncated singular value decomposition method [22], damping least squares method, and the ridge trace method [23]. Taking the linear inverse problem as an example, the regularization methods, and the selection of the best regularization parameter under the unknown noise levels will be discussed in the following context from three aspects, the variation principle, the spectral decomposition, and the iteration calculation. 1. Variation principle-based regularization method Tikhonov variational regularization constructs the regularization operator by introducing a stable functional. For an inverse problem, the measured response yδ meets the following condition δ y − y ≤ δ
(3.19)
The well-posed solution of problem in Eq. (3.15) should satisfy the following variation principle 2 x α,δ = arg min yδ − Gx + α(x) := M(x) x
(3.20)
where yδ − Gx is the residual modulus, α stands for the nonnegative regularization parameter, and (x) is the Tikhonov stability. The forms of (x) can be constructed from the norm, the information entropy, the singular value correction or the iteration to further derive the corresponding regularization methods. For example, if the norm x is selected as the (x), the regularized functional is then written as 2 M(x) = yδ − Gx + αx2 = [ yδ − Gx]T [ yδ − Gx] + α[x]T [x]
(3.21)
The regular approximate solution of the model parameters can be derived by minimizing the functional in Eq. (3.21), so that ∂ M(x) = −2G T [ yδ − Gx] + 2α I x = 0 ∂x
(3.22)
where I is unit matrix. Thus, the model parameter x α,δ can then be identified as
44
3 Computational Inverse Techniques
x α,δ = [G T G + α I]−1 G T yδ
(3.23)
Since it adds nonnegative item α on the diagonal item, the revised matrix [G T G + α I] reduces the ill-posedness in the original matrix [G T G] with a condition number of σ2 +α σ2 < max = cond[G T G], α > 0 cond[G T G + α I] = max 2 2 σmin + α σmin
(3.24)
In simple summary, when the system matrix G is ill-posed, the bounded operator [G T G + α I] can be adopted to replace the original ill-posed operator [G T G]. It can effectively reduce the ill-posedness of the inverse problem. It can be stated that Tikhonov regularization method increases the constraints to strengthen the wellposedness of the solution, and thus, to ensure the stability of the inverse solutions. The constraint conditions mainly consist of two types of constraints. One is the strong constraint, which requests the small residual modulus yδ − Gx to ensure the similar degree between the measured response and the calculated response. Another type is much broader to limit the norm x within a specified range to guarantee the stability of the result. 2. Spectral analysis-based regularization method Generalized operator-based inverse and spectrum analysis theory implement theoretical analysis for regularization method. Herein, regular filter is applied to further explain the variational regularization in Eq. (3.20). For the model parameter inverse problem, according to Eq. (3.18), the ill-posedness mainly comes from the small singular values of the system matrix. When σi approaches to zero, the σi−1 will tend to be infinity, which seriously amplifies the negative influence of the noise in yδ . Therefore, filtering function is introduced to weaken the noise influence. For the small singular value σi , its inverse σi−1 can be multiplied by adopting a filter function, i.e., regularization operator f (α, σi ), to ensue that f (α, σi )/σi tends to be zero when σi tends to be zero. Therefore, the stable approximate solution of the model parameter x α,δ can then be derived as follows x α,δ = V Diag( f (α, σi )σi−1 )U T yδ =
n
f (α, σi )σi−1 uiT yδ v i
(3.25)
i=1
Different filter functions f (α, σ ) corresponds to different regularization methods. If the filter function f (α, σ ) is chosen as f (α, σ ) =
σ2 α + σ2
(3.26)
it will formulate the famous Tikhonov regularization method. The corresponding regular solution is then
3.3 Regularization Methods for Ill-Posed Problem
x
α,δ
=
n σi uiT yδ i=1
σi2
+α
−1 v i = G T G + α I G T yδ
45
(3.27)
If the filtering function f (α, σ ) is identified to be f (α, σ ) =
1, σ 2 ≥ α 0, σ 2 < α
(3.28)
it will yield the truncated singular value decomposition (TSVD). The corresponding regular solution is
x α,δ =
σi−1 uiT yδ v i
(3.29)
σ 2 ≥α
The extended Tikhonov regular filter operator is f (α, σ ) =
σr , r ≥ 1, α > 0 α + σr
(3.30)
The exponential filter function as f (α, σ ) = 1 − e−σ/α
(3.31)
Particularly, when α = 0, the filtering function becomes f (α, σ ) = 1
(3.32)
which is the common least square estimation. Herein, an improved regular operator [24] is proposed. Define the filtering function f : (0, ∞) × (0, G] → R 1 as f (α, σ ) = 1 −
4 arctan(e−σ/α ) π
(3.33)
The function in Eq. (3.33) belongs to a regular operator. The improved regular operator curves under different regularization parameters are shown in Fig. 3.5. It can be seen that the filter operator can remain the large singular values, which helps to ensure the accuracy of the identified results. It is understood that the filter operator eliminates the small singular values to ensure the stability of the inverse results. The regularization parameters α is modulated to process different noisy levels to balance the accuracy and the stability of the inverse solution.
46
3 Computational Inverse Techniques
Fig. 3.5 Improved filter function under different regularization parameters
1 0.9 0.8
f (α,σ)
0.7 0.6 0.5 0.4
α = 1e-6 α = 1e-5 α = 1e-4 α = 1e-3 α = 1e-2
0.3 0.2 0.1 0 10-10
10-8
10-6
10-4
σ
10-2
10-0
102
3. Iteration regularization method Iteration regularization method constructs a vector sequence {x i } based on specific types of rules to regulate the limit vector x ∗ to fit into Eq. (3.15). Generally, the iteration regularization method can be divided into two types, namely, the traditional regularization iterations and the Krylov subspace iteration regularization methods. The traditional iteration regularization methods include Jacobi, Gauss-Seidel, SOR, and SSOR methods. These methods have a low convergence rate and have very limited application in the linear inverse problem. On the other hand, they are often adopted as the pretreatment of the Krylov subspace iteration regularization methods. Krylov subspace iteration regularization methods [25, 26] are iterative methods. These methods have several advantages including low calculation cost, less memory requirement, high convergence speed and good numerical stability. Contrary to the traditional regularization iterations, Krylov subspace iteration methods are an effective method for linear inverse problem, which include the Conjugate Gradient method (CG), the Conjugate Residual method (CR), the least-squares QR decomposition method (LSQR), and the Generalized Minimal Residual method (GMRES). For the inverse problem, iteration methods may lead to the half convergence phenomenon. In the early iterations, the approximate solution can be improved steadily to demonstrate a self-regularization effect. But, when the iteration step increases to a specific threshold, the approximate solution tends to be divergent. The reason is that low frequency part of the solution converges faster than the high frequency part does, and the structural responses always belong to the low frequency signals compared with the noise. Therefore, the model parameters inverse process based on iteration methods has the implicit self-regularization function, and the iteration step plays the role of regularization parameter. Thus, much reduced iteration steps will be sufficient to implement the stable inverse of the model parameters.
3.3 Regularization Methods for Ill-Posed Problem
47
3.3.3 Selection of Regularization Parameter When solving the model parameter of the ill-posed inverse problem based on the regularization method, parameter α can adjust the relative magnitude regularization of the residual norm yδ − Gx and the solution norm x in Eq. (3.21). From the viewpoint of the approximation accuracy, the regularization parameter α should be as small as possible, whereas from the viewpoint of the numerical stability, α should be as large as possible. Therefore, both the accuracy and the stability of the solution should be considered when choosing the best regularization parameter to achieve the balance between these two conflicting goals. There are many types of criteria to determine the regularization parameter. If the noisy level in the measured response is known, the Morozov deviation principle [27], Engl criterion [28], or the proposed optimal criterion [29] are preferable. However, in most engineering problems, it is difficult to specify the noisy level in the experimental response. In view of this, the generalized cross validation criterion (GCV) [30] and L-Curve criterion (L-Curve) [31, 32] are the potential candidates. 1. Generalized cross validation criterion GCV criterion stems from the best selecting model PRESS criterion of the statistical estimation theory, whereas it is more robust. The basic idea of GCV is that the model and the inverse solution corresponding to an appropriate regularization parameter should be able to predict any new data. GCV criterion engages each data point to determine the regularization parameter, and considers other data points to construct model and to predict new data points. The GCV function V (α) is shown as follows V (α) =
δ y − Gx α,δ 2 (tr[I − GG # ])2
=
(I − GG # ) yδ 2 (tr[I − GG # ])2
(3.34)
where G # = (G T G + α I)−1 G T , and ‘tr’ denotes the trace of matrix, which is the sum of diagonal elements. According to x α,δ = G # yδ = (G T G + α I)−1 G T yδ , Eq. (3.34) can be rewritten as follows V (α) =
(I − G(G T G + α I)−1 G T ) yδ 2 (tr[I − G(G T G + α I)−1 G T ])2
(3.35)
The minimum value of function V (α) corresponds to the best regularization parameter of the GCV criterion. In this minimization problem, it should be noted that the molecular of V (α) is actually the residual of the regularization solution, which is easy to be derived. But, when the size of matrix is large, the denominator demands a huge amount of calculation. 2. L-curve criterion
Since the norms of both the residual Gx α,δ − yδ and the solution x α,δ are functions of the regularization parameters, in the log-log coordinate system, if the
48
3 Computational Inverse Techniques
Fig. 3.6 L-curve sketch
norm of Gx α,δ − yδ is chosen as the x-coordinate and the norm of x α,δ is specified as the y-coordinate, different values of α will generate a curve as shown in Fig. 3.6. Because this curve generally takes an obvious L shape, the method applying the curve to determine the regularization parameter is called L-Curve method. It is understood that the corner point on the L-curve is expected to ensure the best balance between the norm of deviation and the norm of solution. Thus, the regularization parameter corresponding to this corner point is the optimal regularization parameter. Hansen and Leary [31] analyzed the characteristic of the L-curve theoretically based on the matrix singular value decomposition, and presented examples to verify the L-curve criterion. According to him, L-curve is equivalent to the deviation principle and the generalized cross validation criterion to specific extent, and has unique advantages. The key step of L-Curve method is to determine the corner. It was suggested by Hansen that the maximum curvature point of L-Curve in the loglog coordinates system was chosen as the corner [32]. After SVD decomposition, the norms of solution and residual can be described as 2 η = x α,δ 2 = i
f (α, σi ) δ 2 y , ui σi
2
2 ρ = Gx α,δ − yδ 2 = (1 − f (α, σi )) yδ , ui
(3.36) (3.37)
i
Through the logarithmic operation ηˆ = log η, ρˆ = log ρ
(3.38)
3.3 Regularization Methods for Ill-Posed Problem
49
and the L-curve is fitted by point ρ/2, ˆ η/2 ˆ . Applying ηˆ , ρˆ , ηˆ
and ρˆ
to represent the first and the second order derivative, respectively, the curvature of L-curve κ(α) is given by ρˆ ηˆ
− ρˆ
ηˆ
κ(α) = 2
2 3/2 2 ρˆ + ηˆ
(3.39)
The curvature of L-curve varies with different α as shown in Fig. 3.7. When the Tikhonov regularization method is adopted, the curvature function can be deduced as follows κ(α) = 2
ηρ α 2 η ρ + 2αηρ + α 4 ηη
3/2 η
α 2 η2 + ρ 2
(3.40)
where η is the first order derivative with respect to α, so that δ y , ui 4 2 η =− (1 − f (α, σi )) f (α, σi ) α i σi2
(3.41)
For the other regularization methods, the curvature equation of L-curve can be derived via the similar variable substitution. In addition, curve fitting method also can be adopted to calculate the maximum curvature point. Fig. 3.7 Curvature of the L-curve
50
3 Computational Inverse Techniques
3.3.4 Application of Regularization Method to Model Parameter Identification Dynamic load identification aims to specify the dynamic load applied on the structure using the measured response and system model. Due to the measurement error in the response and the ill-posedness of the system, regularization method is applied to implement the stabile identification of the dynamic load. For linear time invariant structure, unit impulse signal can be considered as unit signal, and the dynamic load in time domain is expressed as the superposition of these unit signal responses. Thus, the system response can be expressed as a convolution of Duhamel integration as follows t p(τ )g(t − τ )dτ
y(t) =
(3.42)
0
where y(t) is the structural response, which can be displacement, velocity, acceleration, strain, or stress, p(t) stands for the dynamic load to be identified, g(t) is the dynamic response when a unit impulse load is applied to the structure, called the Green kernel function response from load point to measurement point. The continuous dynamic load p(t) can be approximated by a series of rectangular pulse functions as shown in Fig. 3.8. t stands for the size of a discrete time step, m is the sample number, pi the load to be identified at t = i t. The convolution integration in Eq. (3.42) can be dispersed to a set of linear equations, which can be described in matrix form as follows ⎧ ⎫ ⎡ ⎫ ⎤⎧ g1 0 ⎪ ⎪ ⎪ p0 ⎪ ⎪ y1 ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎨ y2 ⎬ ⎢ g2 g1 ⎬ ⎥⎨ p1 ⎪ ⎢ ⎥ =⎢ .
t (3.43) ⎥ . . . .. ⎪ .. . . . ⎦⎪ .. ⎪ ⎪ ⎣ .. ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎩ ⎩ ⎭ ⎭ gm gm−1 g1 ym p m−1 or is simply written as y = Gp
(3.44)
where yi , gi are the structural response and the Green kernel function response at t = i t, respectively. The Green kernel function matrix G in Eq. (3.44) is usually ill-posed. Regularization method is then adopted to improve the ill-posedness. In order to verify the correctness and the stability of the regularization method, example of dynamic load identification of a plane structure is presented. The geometry size of the plane structure is 0.15 m × 0.06 m × 0.003 m, its material is aluminum with elastic modulus of 70 GPa, Poisson’s ratio of 0.33 and density of 2.8 × 103 kg/m3 . The damping is proportional damping with the coefficient to the stiffness matrix 9 × 10−4 . Boundary
3.3 Regularization Methods for Ill-Posed Problem Fig. 3.8 Dynamic load as the superposition of unit impulse function
51
p (t )
0
iΔt
t
conditions are that one end is fixed and the other end is free. Both directions of the dynamic concentrate load and the measured displacement response are perpendicular to the plate. In the plane structure shown in Fig. 3.9, arrow 1 and arrow 2 denote the exciting point and the measured point, respectively. The dynamic concentrate load function of the plane structure is p(t) =
q sin(2π t/td ), 0 ≤ t ≤ 4td 0, t < 0 and t > 4td
(3.45)
where td = 0.02 s and q = 10 N are the cycle and the amplitude of the sinusoidal load, respectively. Adding a specific level of random noise to the numerical simulation response to approximate the measured response, so as to identify the time profile of the dynamic load. The displacement response with noise can be expressed as follows
yδ = yc + l δ · std yc · r andn
(3.46)
where yc is the simulated displacement at the discrete time point, std( yc ) denotes the standard deviation of yc , l δ represents percentage level of noise, r andn stands for a set of random numbers with zero mean and one variance. The simulated displacement at point 2 and the simulated displacement with 10% noisy level are shown in Fig. 3.10. An unit impulse load is applied at the point 1 with sample cycle of t = 0.0005 s. The Green impulse kernel function response from point 1 to point 2 can be calculated through numerical simulation, which is shown in Fig. 3.11. The Green kernel function response is then applied to establish the Green kernel function matrix G. With regard to 10% noise in the response, Tikhonov regularization, the truncated singular value decomposition (TSVD) and the iteration LSQR method are applied to identify the dynamic load, respectively. The corresponding regularization parameter methods engage the GCV criterion and the L-curve criterion. The identified loads and the corresponding best regularization parameters are
52
3 Computational Inverse Techniques
Fig. 3.9 Finite element model of plane structure
Displacement response / m
Fig. 3.10 Time history of the displacement at point 2
shown in Table 3.4. The results based on the direct least squares method are also given in Table 3.4 for comparison. According to the results in Table 3.5, the identified dynamic loads through the three regularization methods and the corresponding regularization parameter selection methods are highly consistent with the actual dynamic load, whereas that from the direct least square method without regularization deviates from the actual load substantially. It indicates that when the response data contains noise and Green kernel function matrix is ill-posed, the above several regularization methods and regularization parameter selection methods are effective to rectify. They suppress the noise influence to identify load and reconstruct efficiently the dynamic load on the structure stably.
3.4 Computational Inverse Algorithms
53
Fig. 3.11 Green kernel function response from point 1 to point 2
3.4 Computational Inverse Algorithms After establishing the inverse model for the parameter identification based on various inverse criteria, such as least square criterion, the minimum mean square error criterion and the maximum likelihood criterion, it is still required to engage effective computational inverse algorithm to calculate the reliable model parameters. Many types of computational inverse algorithms have been developed from different views of perspective, such as the optimization, the roots of a nonlinear equations, system identification and optimal controlling. In general, the computational inverse algorithms are divided into two categories: the gradient iteration-based computational inverse algorithms and the intelligent evolutionary-based computational inverse algorithm [1]. Combining with the fast convergence speed of the gradient iterative algorithm and the global convergence of the intelligent evolutionary algorithm, a hybrid inverse algorithm is developed, which further improves the computational efficiency and convergence speed.
54
3 Computational Inverse Techniques
Table 3.5 Identified dynamic loads based on regularization methods [33] Regularization method
Tikhonov (GCV)
TSVD (L_curve)
LSQR (L_curve) Direct least square method (No regularization method)
Identified dynamic load
3.4 Computational Inverse Algorithms
55
3.4.1 Gradient Iteration-Based Computational Inverse Algorithm According to the change rule of the object function, gradient iteration-based computational inverse algorithm explores the direction at a suitable step length to decrease the object function, and repeatedly iterates and rectifies the inverse solution until the convergence condition is satisfied. This category of algorithms is comparatively mature, and mainly includes the steepest descent method, Conjugate Gradient method, Powell method, Newton method, the BFGS method, the Gauss-Newton method, the Levenbeg-Marquardt method and the trust region method, etc. [1, 19]. Many softwares provide a gradient iteration algorithm-based toolkit, including NAG Fortran library, the IMSL library, the Matlab optimization toolbox, the TAO optimization toolbox, the modeFrontiner optimization design software package and the Lindo/Lingo optimization software. Gradient iteration-based computational inverse algorithm has several common characteristics. First, the object function should be continuous and differentiable. Second, the numerical differential should be available to get the corresponding derivative. Third, the object function value of the next iteration must be smaller than that of the current iteration. On the other hand, the differences of various methods mainly lie in the different methods of the searching methodology. Particularly in the iteration process, if there is an inverse operation of a singular matrix or approximate singular matrix, regularization strategy is usually adopted to ensure the stable convergence. The convergences and stabilities of those iteration inverse algorithms depend on the initial values of the model parameter to some extent. Inadequate initial value may lead to slow convergence or local convergence, and thus the identified results are not the global optimal solution. Even though gradient iteration-based computational inverse algorithm requires limited iterations to yield high efficiency, the improficiencies, such as the dependence on the initial value and the inadequacy in the global convergence, refrain its general application for the complex engineering problems. In recent years, the homotopy method [34] for nonlinear equations has attracted much attention due to its advantages in the global convergence, the insensitivity to the initial value and the regularization. And it employs an effective gradient iterationbased computational inverse algorithm. The homotopy algorithm is also named as the homotopy continuation algorithm. At its earlier development stage, it is usually applied as a numerical tool to solve nonlinear equations. When the nonlinear equations are intricate to be solved directly, a system of simultaneous equations is explored to construct a new mapping relationship to get the solution of the original equation set from the solution of the equation group which is easy to obtain [35, 36]. For the inverse problem of model parameter, the objective function can be transformed into nonlinear equations to search the root, so that G(x) = g(x) − yδ = 0
(3.47)
56
3 Computational Inverse Techniques
where x is the model parameter to be identified, yδ stands for the measured response, g(x) is the forward model. Instead of solving directly Eq. (3.47), the homotopy algorithm introduces a homotopy parameter t and a known equation F(x) = 0 to construct a new homotopy map as follows H(x, t) = t G(x) + (1 − t)F(x) = 0
(3.48)
When t = 0, H(x, 0) = F(x) = 0, which implies that the solution of homotopy equation is the solution of the introduced equation. When t = 1, H(x, 1) = G(x) = 0, which indicates that the solution of homotopy equation is the solution to the original inverse equation. When 0 ≤ t ≤ 1, the solution of homotopy equation exists. Because when the parameter t changes from 0 to 1, the solution approaches gradually to the original inverse equation from the introduced easy equation, and the solution path is called the homotopy path. Usually different F(x) can construct different homotopy equations, and the popular homotopy functions include the fixed point homotopy, the Newton honotopy and the affine homotopy. The fixed point homotopy is expressed as
H(x, t) = t G(x) + (1 − t) x − x (0)
(3.49)
Newton honotopy has the form of
H(x, t) = t G(x) + (1 − t) G(x) − G x (0)
(3.50)
And affine homotopy is formulated as
H(x, t) = t G(x) + (1 − t)G x (0) x − x (0)
(3.51)
where x (0) is the initial point. In general, the method to solve the Eq. (3.48) is a path tracking algorithm consisting of initialization, Euler Newton correction and convergence verification.
forecast, As shown in Fig. 3.12, x (k) , t (k) is the kth iteration approximate point on the homo(k) (k) , t topy path (x(t), t), namely, H x = 0. Euler forecast chooses the tangent the forecast direction, and sets a specific step length vector at point x (k) , t (k) as
to derive the forecast point x (k+1) , t (k+1) . The Newton method is then adopted to
modulate the forecast point to converge to the exact point x (k+1) , t (k+1) on homo topy path, H x (k+1) , t (k+1) = 0. Thus, the solution starts from the initial
(0) namely point x , 0 , and repeats the Euler forecast and Newton correction until t = 1 to reach the solution to Eq. (3.47). For homotopy algorithm with the Euler forecast and the Newton correction, when the homotopy path curve is complex, along the tangent direction, the forecast point may deviate away from the homotopy path. It increases the computational cost of the Newton’s correction. For this reason, the curve forecast method is proposed based on the Euler forecast as shown in Fig. 3.12. This method
3.4 Computational Inverse Algorithms
57
Fig. 3.12 Sketch map of Euler/curve forecast and Newton correction
uses the points on the homotopy path for curve fitting, such as the radial basis function (RBF) fitting, and adopts the fitting curve to forecast the next point. Compared with Euler forecast, the curve forecast method predicts the next homotopy path point more accurately. And within the range of the specified error, the step length can be bigger. Therefore, the homotopy method based on the curve forecast and the Newton correction improves the accuracy and efficiency [37]. The homotopy method based on the curve forecast and the Newton correction includes a nested iteration. The outer layer is that the homotopy parameter t varies from 0 to 1 to constitute gradually the homotopy path, whereas the inner layer is that at specific homotopy parameter t, Newton correction modulates the value to draw upon the homotopy. In order to give a clearer illustration, a flowchart of the homotopy algorithm based on the curve forecast and the Newton correction is shown in Fig. 3.13. Details of the implementation procedure are explained as follows
(1) The value (x, t) = x (0) , 0 , the initial step t of the homotopy parameter t and the iteration step k = 0 are initialized. (2) The homotopy algorithm based on the Euler forecast and the Newton correction are applied to calculate k points on the homotopy path. (3) According to the known points, an approximate model, such as the radial basis function and support vector machines, are selected to fit homotopy paths, and an approximate homotopy curve x(t) is constructed. (4) The fitting curve x(t) and the step t are applied to predict the k + 1 iteration curve forecast point x (k+1) , t (k+1) .
58
3 Computational Inverse Techniques
Fig. 3.13 Flowchart of homotopy algorithm based on curve forecast and Newton correction
(5) Check whether the forecast point x (k+1) , t (k+1) satisfies the homotopy equa tion H x (k+1) , t (k+1) = 0. If yes, go to step (7) and increase the step length
t. Otherwise, go to step (6). (6) The Newton correction is adopted to evaluate the curve forecast point (k+1) x , t (k+1) . If the corrected result satisfies the convergence condition, go to step (7). Otherwise, the step length t is decreased before return to step (4). (k+1) = 1, go to step (8). (7) The actual point Rc on the homotopy path is derived. If t (k+1) (k+1) is put into the point set to fit the homotopy ,t Otherwise, the point x curve. Set t (k+1) = t (k) + t, k = k + 1 and return to step (3). (8) The homotopy result is outputted to implement the identification of model parameter.
3.4 Computational Inverse Algorithms
59
3.4.2 Intelligent Evolutionary-Based Computational Inverse Algorithm In order to overcome the improficiencies of the gradient iteration-based computational inverse algorithm, i.e., the derivative operation and the global divergence, a series of intelligent computational-based inverse algorithms are developed accordingly to the Monte-Carlo method, simulated annealing method, genetic algorithm, ant colony algorithm, particle swarm optimization algorithm and artificial neural network, etc. Intelligent evolutionary algorithms usually calculate the function values of the object function and have a good global search ability. Thus, they are suitable for model parameter identification of complex engineering problems. However, intelligence evolution algorithms essentially are random exploration algorithms. When the number of the model parameters is high, they demand highly computational amount. Therefore, a surrogate model and the adaptive update are often applied to substitute the time-consuming complex model. These methods are extensively applied as facilitated by commercial software packages and the open source programs. Herein, a brief introduction to all types of the methods are given as follows. (1) Monte-Carlo method based on the probability and the statistics theory. It generates random samples to search the inverse solution of the model parameter. It is a more feasible algorithm compared with the exhaustive method in view of that it has very limited restriction from the problem conditions and the dimension of the inverse model parameters. Thus, it is suitable for most of the inverse problems. On the other hand, random search of the Monte-Carlo method ensues immense repeatability and blindness, the colossal calculation amount and the low convergence speed refrains its general application to the complex engineering problems. However, the Monte-Carlo method can provide a relatively accurate inverse solution. So it is often adopted to verify the performance and the effectiveness of the other computational inverse algorithms. (2) Simulated annealing method is an improved heuristic Monte-Carlo method. It represents the model parameters as a molecule of melting object and the objective function is considered as the energy of the melting object. It gradually decreases the simulation temperature for the iterative identification to minimize the objective function. The simulated annealing method includes three functions and two criterions, i.e., the new state function, the new state accept function, the cooling function, the sampling stability criterion and the annealing end criterion. The cooling process has an important influence on the inverse solution. If the temperature goes down too quickly, it may miss the extreme point. On the other hand, slow cooling process may lead to low convergence efficiency. Thus, the cooling function should maintain a balance between the accuracy and efficiency. In order to search an optimal solution efficiently, the simulated annealing method usually sets a high initial temperature, a slow cooling rate, a low end temperature, and multi sampling under every temperature state. (3) Genetic algorithm (GA) is a random search method from the biological evolution law. GA starts from an initial population of the model parameter, and
60
3 Computational Inverse Techniques
produces close approximation solution at the next population according to the principle of survival of the fittest. In each generation of population, the individuals whose fitness functions are sufficient high will be retained. The genetic operators consisting of the crossover and mutation are adopted to derive the next generation population. Traditional GA generally has a big population. Therefore, for the complex engineering problems, even one forward calculation is time-consuming. In view of that, Krishmakumar [35] proposed the micro GA. This algorithm reduces the number of the population substantially to save the computational cost during each forward calculation and to further improve the computational efficiency. This method is extensively applied in machinery, aerospace, vehicle engineering and control field. Elaboration on this method will be given in the Chap. 9 of this book. (4) Particle swarm optimization algorithm is an intelligent evolutionary algorithm imitating birds swarm behavior, and it is simpler than GA. It does not engage the crossover and mutation operations as GA does. Instead, it focuses on the current optimal value to search the global optimal value. In the particle swarm optimization algorithm, every solution of the inverse problem is a particle in the search space, and every particle has a corresponding fitness value calculated by an optimization function. Each particle has a speed vector to define the moving direction and the distance, and traces the current optimal particle to search the global optimal solution. Only the optimal particle can pass information to the other particles, which is a one-way flow. The whole searching-updating process is to trace the current optimal solution process, and in most cases, all the particles may converge to the optimal solution efficiently. (5) Ant colony algorithm is a distributed intelligent search algorithm and stems from the ants foraging behavior to detect the shortest path. Ant colony algorithm mainly consists of the memory, the pheromones and clustering activity of ants. To be specific, the searched path will be recorded and eliminated from the next searching context. Thus, a taboo list as memory is established for the simulation process. The ants use pheromones to communicate with each other. And the ants work in cluster instead individually. When specific paths have more ants passing through to increase the pheromone intensity, the ant would choose these paths more probably next time. Since the other paths attract little ants, the pheromone will evaporate away. Ant colony algorithm has these three characteristics, i.e., distributed computation, information positive feedback and heuristic search. The linchpin for its successful application in model parameter identification is to search the reasonable expression of the inverse problem and the appropriate heuristic functions. (6) Artificial neural network is a nonlinear system consisting of many simple calculation cells (neuron), and has the intelligent functions, such as learning, memory and calculation. The neural model of neural network basic unit has three basic factors, i.e., connection weights, the summation element and the nonlinear transfer function. Particularly, the nonlinear transfer function of neurons and the connection weighting distribution of every neuron equip the neural network with a strong nonlinear mapping capability. A usual artificial neural network contains
3.4 Computational Inverse Algorithms
61
BP neural network, Hopfield neural network and RBF neural network, etc. The artificial neural network can be applied to the model parameter inverse through adopting a trained neural network to approximate the inverse solution, and then solve the model parameters by the observation dates or their expectations.
3.4.3 Hybrid Inverse Algorithm Gradient-based computational inverse methods have a poor global search ability and are much sensitive to the initial point. On the other hand, if the initial point is selected properly, it will lead to fast convergence with less computational cost. Intelligent computational-based heuristic inverse algorithm has a strong global search ability, and it does not use the gradient information of the objective function and is not influenced by the initial values. But the computational amount is large, the convergence speed is slow and the local search ability is not enough. In order to fully combine the advantages of the two methods, hybrid inverse algorithms were developed. Liu et al. [36] mixed GA and the nonlinear least square method to identify the mechanical properties of the composite material parameters. Alavi and Gandomi [38] mixed the artificial neural network and the simulated annealing method to identify the ground motion parameter. In short, hybrid inverse algorithms are equipped with the ability to search the global optimal solution, and can also improve the efficiency and accuracy of the local inverse calculation. For the inverse problem of the model parameter, in order to make full use of the advantages of the aforementioned two computational inverse methods, a hybrid inverse method combining curve forecast homotopy algorithm with genetic algorithm [37] is introduced. The hybrid inverse method is divided into two steps to realize the identification of model parameters. The first step is to identify the preliminary model parameter in the whole domain using GA, and the reference values which are relatively close to the real solution can be derived through setting the maximum number of iteration as the stop criterion. The second step selects the set of reference values as the initial point, and more accurate model parameters can be calculated by using the curve forecast homotopy algorithm. Through the combination of the two methods, the hybrid inverse algorithm can further improve the calculation efficiency and global convergence. In order to verify the validity of the hybrid inverse method, the modified Himmelblau function [39] is used as a test example. The function is, thus, converted into an optimization problem, such that min F(x1 , x2 ) =
4
( f i (x1 , x2 ))2
i=1
s.t.
− 6 ≤ x1 , x2 ≤ 6
(3.52)
62
3 Computational Inverse Techniques
1 0.9 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1 0
Parameter x1 Parameter x2
1.5
2
2.5
3
3.5
Homotopy parameter t
Homotopy parameter t
where f 1 (x1 , x2 ) = x12 + x2 − 11, f 2 (x1 , x2 ) = x1 + x22 − 7, f 3 (x1 , x2 ) = 0.316(x1 − 3), and f 4 (x1 , x2 ) = 0.316(x2 − 2). This function only has single optimal solution (3, 2) theoretically. To begin with, the derivatives in Eq. (3.52) with respect to x1 , x2 are derived, and the derivatives are artificially set to be zero. So the optimization problem is transformed into a system of nonlinear simultaneous equations, and then the hybrid inverse method is applied to solve the simultaneous equations. Each generation has five individual equations, and the termination criterion is that the maximum number of iterative step is 5. Among these results, four optimal individuals are selected as the preliminary results to be input as the initial value of homotopy inverse algorithm. The model parameters are then derived by the homotopy algorithm combining the curve forecast with the Newton correction. The solving procedure of the homotopy path is shown in Fig. 3.14, and the identified results and the call number of the forward problems are listed in Table 3.6. The results in Table 3.6 show that the optimal solution of the hybrid inverse algorithm is (3.0033, 1.9960), which is highly approximate to the theoretical solution (3, 2). Micro genetic algorithm calls the forward calculation 25 times, and homotopy method is 27. Thus, the hybrid algorithm calls a total of 52 times. On the other hand, 1 0.9 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1 0
4
Parameter x1 Parameter x2
-4
1 0.9 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1 0
Parameter x1 Parameter x2
-5 -4 -3
-2 -1
0
1
2
3
4
Solution of the equation set (c) Initial values (-4.268,4.382)
-3.5
-3
-2.5
-2
-1.5
Solution of the equation set (b) Initial values (-3.695,-1.859)
Homotopy parameter t
Homotopy parameter t
Solution of the equation set (a) Initial values (3.647,1.576)
5 6
1 0.9 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1 0
Parameter x1 Parameter x2
-3
-2
-1
0
1
2
3
4
Solution of the equation set (d) Initial values (4.024,-2.754)
Fig. 3.14 Homotopy path based on curve forecast and Newton correction
5
6
3.4 Computational Inverse Algorithms
63
Table 3.6 Hybrid identified results of test example No.
Identified result by u-GA
Identified result by curve forecast and Newton correction
Identified results (x1 , x2 )
Object function value
Identified results (x1 , x2 )
Object function value
Number of forward simulation
1
(3.647, 1.576)
15.76
(3.0033, 1.9960)
4.3344 × 10−4
8
2
(−3.695, − 1.859)
58.97
(−3.7635, − 3.266)
7.3568
7
3
(−4.268, 4.382)
80.32
(−2.7871, 3.1282)
3.4821
6
4
(4.024, − 2.754)
29.55
(3.5815, − 1.8208)
1.5022
6
the steepest descent method calls the forward calculation 215 times and the traditional GA calls 520 times to converge to the optimal solution. From the comparison of the results, hybrid inverse algorithm not only accurately derives the global optimal solution but also substantially improves the efficiency of inverse calculation.
3.5 Conclusions To address the sensitivity, ill-posedness, low computational efficiency and global divergence for the inverse problem of model parameter, the direct integral-based global sensitivity analysis method, regularization method for ill-posed problem and the practical hybrid inverse method are discussed in this chapter. The integral-based global sensitivity analysis method effectively improves the efficiency and stability of the sensitivity analysis and achieves the quantitative evaluation and rank of the key model parameters. The regularization method puts up a balance between the accuracy and the stability of the inverse solution, which ensures the reliable solution under measured noise and the ill-posedness of the system. The hybrid inverse method combines the advantages of the gradient algorithm and the intelligent evolutionary algorithm, which not only ensures the global convergence, but also improves the computational efficiency. These practical computational inverse techniques provide an efficient analysis tool for both the high-fidelity numerical modeling of the complex mechanical equipment and the high quality digital design.
Chapter 4
Computational Inverse for Modeling Parameters
4.1 Introduction

In the last chapter, the model parameters were classified according to the requirements of the specific numerical simulation and the type of the parameters themselves. On this basis, the basic calculation procedure of the inverse problem for the model parameters was specified, and several practical computational inverse techniques were discussed. The physical experimental test serves as the basis for model parameter identification. To delineate the procedure more clearly for readers to follow and implement, this chapter elaborates the computational inverse process with practical problems and the corresponding experimental tests. Two types of model parameter inverse problems are discussed, which derive the model characteristic parameters and the model environment parameters, respectively. For the characteristic parameter inverse problem, experimental tests are applied to identify the constitutive parameters of a metal and of a complex brittle material. For the environment parameter inverse problem, the measured responses are adopted to identify the dynamic load applied on a structure and the initial conditions of a vehicle collision. In practical engineering, the efficient identification of these model parameters not only verifies the engineering practicability of the computational inverse techniques, but also provides basic data for high-fidelity numerical simulation modeling. For the other two types of key model parameters, namely the model equivalent parameters and the model controlling parameters, the corresponding identification methods are similar to those for the model characteristic and environment parameters. Although their computational inverse processes have distinctive features, they are not explored in depth in this chapter.
4.2 Identification of Model Characteristic Parameters

Model characteristic parameters are the intrinsic parameters of the material or structure that must be specified in the modeling process of complex structures. They are expected to reflect the intrinsic behaviors of the system in the numerical simulation, and their accuracy directly affects the performance and applicability of the numerical simulation model. Computational inverse techniques provide an indirect approach to derive the model characteristic parameters [1–4]. In order to verify their effectiveness and practicability, in this section the material constitutive parameters of a stamping part and of a brittle material are identified through experimental tests.
4.2.1 Material Parameter Identification for Stamping Plate

In the stamping process of the test specimen, metal plastic deformation produces work-hardening, so the material properties after stamping change unevenly compared with those before stamping, and the resulting inhomogeneous material properties differ in different parts of the specimen. This difference is difficult to determine directly by experimental methods, whereas the computational inverse techniques provide a feasible approach to delineate it. The left part of the specimen in Fig. 4.1 has no deformation and its surface is bright. The right part, with friction marks left along the drawing direction, undergoes plastic deformation as the steel plate is drawn through the drawbead. The middle is the hump formed by the stamping mould, whose top is 10 mm above the steel plate surface. Considering the actual load condition of the specimen, work-hardening, and the Bauschinger effect, the specimen is divided into four areas as shown in Fig. 4.2, namely ➀ (AC), ➁ (CE), ➂ (EG), and ➃ (GI). Area ➀ is the left part of the specimen in Fig. 4.1. In the X direction, AC = GI = 25 mm and CE = EG = 15 mm. Area ➀ neither bends nor experiences plastic deformation, area ➁ sustains one bending deformation, area ➂ undergoes two opposite bending deformations, and area ➃ experiences three opposite bending deformations. Therefore, the regional material properties deviate from each other.
Fig. 4.1 Stamping specimen
Fig. 4.2 Area division of test specimen
For the uniaxial tensile experiment, measurement points are deployed at the boundaries and in the interior of each division to ensure that deformation information is obtained in every area. As shown in Fig. 4.2, there are eight measurement points in total on the central line of the plate, denoted as A, B, C, D, E, F, G, and H. A stepwise quasi-static load is applied to the specimen in the form of an end tensile displacement, and the displacement field is taken as the structural response. With the right end fixed and point A stretched to 2N mm (N = 1, 2, …, 10), the displacements of the measurement points are recorded; the results are listed in Table 4.1. The shape of the stretched specimen after the experiment is shown in Fig. 4.3. The deformations of the different areas differ remarkably, and the deformation mainly comes from areas ➀, ➁, and ➂; areas ➀ and ➁ demonstrate observable plastic deformations.

Table 4.1 Measured data of uniaxial tensile experiment (mm)

A(X)   B(X)     C(X)     D(X)    E(X)    F(X)    G(X)    H(X)
2      1.970    1.940    1.060   1.035   0.950   0.040   0.055
4      3.965    3.825    2.115   2.065   1.940   0.070   0.020
6      5.870    5.915    3.225   3.120   3.120   0.120   0.035
8      7.550    7.245    3.890   3.805   3.795   0.130   0.060
10     8.965    7.865    4.155   4.065   4.005   0.185   0.085
12     10.125   8.760    4.525   4.295   4.090   0.250   0.110
14     11.665   9.750    5.110   4.585   4.270   0.260   0.125
16     13.225   11.020   5.560   4.965   4.475   0.350   0.155
18     14.933   11.913   6.000   5.093   4.500   0.387   0.207
20     16.780   13.687   6.380   5.280   4.607   0.387   0.233
Fig. 4.3 Test specimen after uniaxial tensile experiment [5]
Fig. 4.4 Finite element model of test specimen
The main deformation of area ➂ is the flattening of the specimen, and almost no deformation is detected in area ➃. Segment BD shows an apparent necking phenomenon accompanied by a 45° oblique crack. From Fig. 4.3, the deformation of the specimen is not symmetric about the longitudinal center line MN, but it is symmetric about the transverse line through E. Thus, half of the specimen in the transverse direction is modelled numerically by the finite element method as shown in Fig. 4.4. All six degrees of freedom are fixed along the right end line of the model. On the partition line of the half specimen, the translational freedom in the transverse direction and the two remaining rotational freedoms are restrained. All degrees of freedom of line A, except the translational freedom in the stretching direction, are restrained. The same displacement load as in the experiment is applied at the left side of the model, and the corresponding calculated displacement responses at the measurement points can then be extracted. The original elastic modulus E of the specimen material is 208 GPa and the Poisson's ratio is 0.245. The power-law plastic hardening model is selected to formulate the material behavior. Thus, twelve material property parameters, namely the initial yield limit σ0, the hardening coefficient K, and the hardening index n of each of the four areas, should be identified. By comparing the displacement responses of the experimental measurement and the numerical simulation at the specified measurement points, the objective function is constructed and solved by the intergeneration projection genetic algorithm (IP-GA). The identified material property parameters are shown in Table 4.2, and the stress-strain curves fitted with the identified parameters for the different areas are shown in Fig. 4.5.

Area ➀ has no deformation, and its material property parameters remain similar to those before stamping; its initial yield stress is the lowest, and the slope of the plastic part of the curve is relatively high. Area ➁ undergoes bending deformation and work-hardening.

Table 4.2 Identified results of the material property parameters

K1        n1        σ01/MPa    K2        n2        σ02/MPa
745.604   0.28554   226.728    619.877   0.18699   362.546

K3        n3        σ03/MPa    K4        n4        σ04/MPa
690.915   0.12100   366.887    582.340   0.13266   389.722
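The identification itself amounts to minimizing the discrepancy between the measured displacements of Table 4.1 and the displacements predicted by the finite element model for a trial set of hardening parameters. The sketch below illustrates such an objective function in Python; the function simulate_displacements is a hypothetical wrapper around the finite element solver and is not part of the original text, and only the data of point B are used to keep the example short.

```python
import numpy as np

# End displacements (mm) and the corresponding measured displacements at
# point B taken from Table 4.1.
applied = np.array([2, 4, 6, 8, 10, 12, 14, 16, 18, 20], dtype=float)
measured_B = np.array([1.970, 3.965, 5.870, 7.550, 8.965,
                       10.125, 11.665, 13.225, 14.933, 16.780])

def simulate_displacements(params, applied):
    """Hypothetical wrapper around the half-specimen finite element model:
    params packs (sigma0_i, K_i, n_i) for the four areas, and the function
    returns the predicted displacement of point B at each load step.
    In practice this call launches the FE solver; here it is a placeholder."""
    raise NotImplementedError

def objective(params):
    # Least-squares discrepancy between measured and simulated displacements;
    # this is the quantity that IP-GA minimizes during the identification.
    predicted_B = simulate_displacements(params, applied)
    return float(np.sum((predicted_B - measured_B) ** 2))
```

A genetic algorithm such as IP-GA then evaluates this objective for each individual in the population and evolves the twelve parameters toward values such as those reported in Table 4.2.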
Fig. 4.5 Stress-strain curves of the test specimen (stress in MPa versus strain for areas ➀–➃)
Compared with the original material, the initial yield limit of area ➁ increases substantially, while the slope of the plastic part of its curve is reduced. Area ➂ experiences two opposite bending deformations; the material shows a hardening phenomenon, its plastic curve ascends slightly compared with that of area ➁, and its initial yield limit remains basically unchanged. Area ➃ experiences three bending deformations, each in the direction opposite to the previous one; compared with area ➂ the material shows a certain softening trend, and its material properties are similar to those in area ➁.

By sampling the flat area ➀ and area ➃ of the stamping specimen, standard material tensile specimens are produced, and their material characteristic parameters are derived directly through tensile experiments. The results are listed in Table 4.3. From Table 4.3, the identified material property parameters of areas ➀ and ➃ are close to their experimental counterparts, which manifests the validity of the material property inverse method based on region partition. In other words, the computational inverse technique derives the different material property parameters of multiple regions from a single experiment, which greatly reduces the experimental cost.

Table 4.3 Identified and measured material property parameters for areas ➀ and ➃

Parameter   Area ➀ experimental value   Area ➀ identified value   Relative error (%)
K           736.148                     745.604                   1.285
n           0.27605                     0.28554                   3.438
σ0/MPa      255.100                     226.728                   11.122

Parameter   Area ➃ experimental value   Area ➃ identified value   Relative error (%)
K           578.682                     582.340                   0.632
n           0.11772                     0.13266                   12.691
σ0/MPa      367.150                     389.722                   6.148
4.2.2 Dynamic Constitutive Parameter Identification for Concrete Material

In order to further demonstrate the inverse process of model parameter identification and its engineering practicability, this section takes concrete as the research object and again follows the computational inverse process shown in Fig. 2.1 of Chap. 2. Through model definition, forward modeling, sensitivity analysis, experimental testing, computational inverse, and validation, accurate dynamic constitutive parameters of concrete can be derived.

Concrete is a typical anisotropic and inhomogeneous multiphase brittle composite. Because the physical and mechanical properties and the deformation of the cement mortar and the aggregate are different and stochastic, the mechanical behavior of concrete is very complex, and its dynamic characteristics depend on the strain rate under different loading conditions. Much research on the dynamic mechanical properties of concrete has been reported, and dynamic constitutive models such as the HJC, RHT, and TCK models have been established. Each concrete constitutive model contains many parameters. Some of them can be determined directly by basic physical or mechanical experiments, while others can only be determined by fitting, which requires a large amount of experimental data from systematic experiments such as Hopkinson bar tests at different strain rates. For concrete, obtaining a large amount of effective experimental data is time-consuming and expensive. Thus, the computational inverse technique is applied to determine the constitutive parameters with a limited number of experiments.

1. HJC dynamic constitutive model

The HJC concrete constitutive model is a dynamic constitutive model that accounts for large strain, high strain rate, and high pressure effects. The model takes into account material damage, the strain rate effect, and the influence of hydrostatic pressure on the yield strength, and it mainly consists of an equivalent strength model, a damage model, and an equation of state. The equivalent strength model is expressed as [6]

$$\sigma^* = \left[A(1-D) + B\,(P^*)^N\right]\left[1 + C\ln\dot{\varepsilon}^*\right], \quad \sigma^* \le S_{\max} \qquad (4.1)$$
where σ* = σ/fc, P* = P/fc, and ε̇* = ε̇/ε̇0 are the normalized equivalent strength, the normalized hydrostatic pressure, and the dimensionless strain rate, respectively; fc is the quasi-static uniaxial compressive strength of the material; σ, P, and ε̇ are the actual equivalent stress, hydrostatic pressure, and strain rate of the material; ε̇0 is the reference strain rate; and D denotes the damage factor, constrained by 0 ≤ D ≤ 1.0. Parameter A is the material cohesion strength, B the pressure hardening coefficient, N the pressure hardening index, C the strain rate coefficient, and Smax the normalized maximum equivalent yield strength.
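As a minimal illustration of Eq. (4.1), the strength surface can be evaluated directly once the parameters are given. The following sketch is not taken from the original text; the loading state and the values of C and Smax are illustrative, while A, B, and N use the values identified later in this section.

```python
import math

def hjc_equivalent_strength(p_star, eps_rate_star, D, A, B, N, C, S_max):
    """Normalized equivalent strength of the HJC model, Eq. (4.1):
    sigma* = [A(1 - D) + B * (p*)**N] * [1 + C * ln(eps_rate*)], capped at S_max."""
    sigma_star = (A * (1.0 - D) + B * p_star ** N) * (1.0 + C * math.log(eps_rate_star))
    return min(sigma_star, S_max)

# Undamaged concrete (D = 0) at a normalized pressure of 0.5 and a dimensionless
# strain rate of 100; A, B, N from the identification results reported below.
print(hjc_equivalent_strength(p_star=0.5, eps_rate_star=100.0, D=0.0,
                              A=0.5098, B=1.4792, N=0.4057, C=0.007, S_max=7.0))
```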
The HJC dynamic constitutive model of concrete contains 19 parameters in total, which can be determined by different methods. Three of them can be deduced directly from simple physical or mechanical experiments, namely the initial density ρ0, the tensile strength T, and the compressive strength fc. Another six parameters can be determined indirectly by experiments or classical formulas, i.e., the damage parameter D2, the minimum fracture strain εf,min, the elastic bulk modulus Ke, the material crushing pressure Pcrush, the crushing volumetric strain μcrush, and the locking volumetric strain μlock. The remaining ten parameters, including the five strength parameters A, B, N, C, and Smax, the damage parameter D1, and the four pressure parameters Plock, K1, K2, and K3, should be determined from split Hopkinson pressure bar (SHPB) experiments. Because the concrete stress within the attainable strain rate range can only reach the transition phase in the SHPB experiment, the response correlates only weakly with the equation-of-state parameters of the compaction zone; therefore, the parameters Plock, K1, K2, and K3 can adopt the values given in Ref. [6]. In addition, the strain rate parameter C can be determined from the analysis of the dynamic increase factor of the concrete material. Hence, for the specified constitutive model of concrete, only the five parameters A, B, N, Smax, and D1 need to be identified from the SHPB experiments. In order to improve the efficiency and reliability of the identification, the parameter intervals are set as A ∈ [0.2, 1.5], B ∈ [0.5, 1.5], N ∈ [0.1, 1.0], Smax ∈ [5, 20], and D1 ∈ [0.035, 0.05] according to Ref. [6].

2. Experimental devices and response test

The dynamic load experiment on a φ90 mm × 60 mm concrete cylinder specimen is implemented with the φ100 mm SHPB experimental device shown in Fig. 4.6 [7]. In order to ensure the consistency of the basic properties of the concrete specimens and the repeatability of the experiments, all the concrete specimens are prepared
Fig. 4.6 SHPB experimental device and concrete specimen
from the same batch of materials before the experiments, and their surfaces are polished so that the roughness, flatness, and perpendicularity satisfy the experimental requirements. In order to eliminate the difference in stress and elastic deformation over the cross section of the pressure bars, namely the measurement error due to the bending effect, resistance strain gauges are attached at the middle of the input bar and the output bar to measure the responses, and additional strain gauges are attached at the upper and lower ends of the same diameter line. In order to ensure reliable and effective SHPB test results, each experiment is repeated three times at the same impact speed. Figures 4.9, 4.10 and 4.11 give the measured incident, reflected, and transmission wave responses at the strain rates of 74.4 s−1, 53.6 s−1, and 209.5 s−1, respectively.

3. Forward model

A three-dimensional finite element model consistent with the SHPB dynamic experiment is established. Because the input and output bars of the SHPB device and the geometry of the specimen are axisymmetric, a quarter of the specimen is modelled in the 3-D finite element model to save computational cost, as shown in Fig. 4.7. Eight-node hexahedral elements are selected to mesh the model. A linear elastic material model is adopted for the input and output bars, and the HJC dynamic constitutive model for the concrete. The HJC model parameters for the concrete material, except for the parameters A, B, N, Smax and D1, are listed in Table 4.4. In the numerical simulation of the forward problem, the experimentally measured incident wave is applied directly to the end of the input bar to simulate the loading process. The contact between the pressure bars and the specimen is modelled with an eroding surface contact algorithm, and the friction between the surfaces is ignored for simplicity.

Fig. 4.7 3-D finite element model for concrete specimen (input bar, specimen, and output bar)

Table 4.4 Parameters of the HJC constitutive model of concrete

ρ0 (kg/m³)   G (GPa)   fc (GPa)    T (GPa)    D2    Pcrush (GPa)
2.04         9.446     30.42e−3    3.42e−3    1.0   10.14e−3

μcrush    Plock (GPa)   μlock   K1 (GPa)   K2 (GPa)   K3 (GPa)
0.8e−3    0.8           0.312   85         −171       208
Table 4.5 Global sensitivity analysis of the dynamic constitutive parameters of concrete

Parameter     Sensitivity value
A             0.1760
B             0.4736
N             0.1643
Smax          7.23e−4
D1            0.0056
A and B       0.0465
A and N       0.0051
A and Smax    0.0055
A and D1      0.0054
B and N       0.0684
B and Smax    0.0029
B and D1      0.0095
N and Smax    0.0016
N and D1      0.0023
Smax and D1   4.18e−4
4. Sensitivity analysis

In the identification of the concrete dynamic constitutive parameters by the computational inverse technique, the parameters to be identified should have strong sensitivity to the response measured in the SHPB experiment. To study this, the sensitivities of the five unknown parameters are ranked by the direct integral-based global sensitivity analysis method presented in Chap. 3. The incident wave measured at the 74.4 s−1 strain rate is chosen as the loading wave, and 100 samples within the value ranges of the five parameters are created by the Latin hypercube experimental design method. From the reflected and transmission wave responses of each sample obtained by the forward model simulation, the area enclosed by the transmission wave response curve and the time axis is calculated. Based on the samples and the calculated areas, an optimal polynomial model is constructed, and the direct integral-based Sobol method is then adopted to evaluate the sensitivity of each parameter and of their interactions. The results are shown in Table 4.5.

According to the sensitivity analysis results in Table 4.5, the constitutive parameters A, B, and N have high sensitivities to the transmission wave response, with B the highest, whereas parameters Smax and D1 have relatively low sensitivities and affect the transmission wave response only slightly. For the higher-order sensitivity indices describing the parameter cross effects, the interactions of parameter B with parameters A and N have a substantial influence on the transmission wave response; in particular, the influence of the interaction between B and N is conspicuous, as can be seen from Eq. (4.1). The sensitivities of the other interactions are negligible. Therefore, the sensitivity analysis shows that the concrete constitutive parameters Smax and D1 have little effect on the transmission wave response. When the SHPB experiment is used to identify the concrete dynamic constitutive parameters, the inverse parameters can thus be reduced to the three parameters A, B, and N, while Smax and D1 may adopt the values published in Ref. [6].
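As a rough illustration of how such first-order Sobol indices can be estimated once a cheap surrogate of the transmission wave area is available, the following sketch uses the classical pick-and-freeze estimator. The surrogate function below is a stand-in invented for the example; in the text this role is played by the optimal polynomial model fitted to the 100 Latin hypercube samples of the forward SHPB simulation, and the book's direct integral-based variant evaluates the indices from the polynomial rather than by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(0)

def surrogate(x):
    # Stand-in for the fitted polynomial model of the transmission wave area;
    # the coefficients here are purely illustrative.
    A, B, N = x[..., 0], x[..., 1], x[..., 2]
    return A + 3.0 * B + 0.5 * N + 0.8 * B * N

lower = np.array([0.2, 0.5, 0.1])   # bounds of A, B, N from the text
upper = np.array([1.5, 1.5, 1.0])

def sample(n):
    return lower + (upper - lower) * rng.random((n, 3))

# First-order Sobol indices by the pick-and-freeze (Sobol/Saltelli) estimator.
n = 100_000
X, Xp = sample(n), sample(n)
y = surrogate(X)
var_y = y.var()
for i, name in enumerate(["A", "B", "N"]):
    Xi = Xp.copy()
    Xi[:, i] = X[:, i]            # keep the i-th input, resample the others
    yi = surrogate(Xi)
    S_i = (np.mean(y * yi) - np.mean(y) * np.mean(yi)) / var_y
    print(name, round(float(S_i), 3))
```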
5. Identification and verification of constitutive parameters

The concrete constitutive model parameters are identified using the reflected and transmission wave responses measured at the strain rate of 74.4 s−1. Within the sensitive interval of 1900–2600 μs of the reflected and transmission wave responses, 200 sampling points are taken uniformly. The average Euclidean distance Y(x) between the experimentally measured reflected and transmission wave responses and the corresponding numerical simulation responses is selected as the objective function for the identification of the concrete dynamic constitutive parameters:

$$Y(x) = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left[\sigma_i^m - \sigma_i^c(x)\right]^2} \qquad (4.2)$$

where x denotes the concrete constitutive parameters to be identified, σ_i^m the experimentally measured reflected and transmission wave responses, σ_i^c(x) the corresponding numerical simulation responses, and n the number of sampling points.

The hybrid inverse algorithm is adopted to identify the concrete constitutive parameters [8–10]. The IP-GA is first applied for a preliminary identification, with 5 individuals per generation and a maximum of 40 iteration steps; the inverse solutions corresponding to the five best individuals are taken as the initial inverse results. These five parameter sets are then used as the input of the second identification step, which employs the homotopy algorithm combined with curve prediction and Newton correction. The results are given in Table 4.6, and the homotopy path of the constitutive parameter identification is shown in Fig. 4.8. By the computational inverse technique, the concrete dynamic constitutive model parameters are identified as A = 0.5098, B = 1.4792, and N = 0.4057.

Table 4.6 Identified results of the dynamic constitutive parameters of concrete
No.   IP-GA identified results (A, B, N)   Objective function value   Homotopy identified results (A, B, N)   Objective function value   Number of forward simulations
1     (0.8475, 0.9855, 0.1565)             6.3625                     (0.7926, 1.1845, 0.1988)                2.4512                     32
2     (1.2150, 0.9855, 0.2200)             5.7832                     (0.6142, 1.2787, 0.2647)                1.4674                     26
3     (1.0872, 1.052, 0.6718)              9.4641                     (0.7624, 1.0625, 0.6022)                3.8625                     18
4     (0.9851, 1.098, 0.5659)              8.8452                     (0.6882, 1.2467, 0.5039)                2.6585                     20
5     (0.4957, 1.179, 0.3047)              4.5698                     (0.5098, 1.4792, 0.4057)                0.9842                     28
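The two-stage strategy summarized in Table 4.6 can be sketched in a few lines of Python. The forward SHPB simulation is replaced here by an invented closed-form response, plain random search stands in for IP-GA, and a bounded quasi-Newton solver stands in for the homotopy/Newton-correction step, so the sketch only mirrors the structure of the hybrid inverse algorithm, not its details.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)            # 200 sampling points, as in the text

def simulated_response(x):
    """Stand-in for the forward SHPB finite element simulation for
    constitutive parameters x = (A, B, N); illustrative closed form only."""
    A, B, N = x
    return A * np.sin(6.0 * t) + B * t ** N

x_true = np.array([0.51, 1.48, 0.41])
measured = simulated_response(x_true) + 0.01 * rng.standard_normal(t.size)

def Y(x):
    # Average Euclidean distance of Eq. (4.2).
    return float(np.sqrt(np.mean((measured - simulated_response(x)) ** 2)))

# Stage 1: coarse global search (IP-GA in the book; random search here).
lower, upper = np.array([0.2, 0.5, 0.1]), np.array([1.5, 1.5, 1.0])
candidates = lower + (upper - lower) * rng.random((200, 3))
x0 = min(candidates, key=Y)

# Stage 2: local refinement from the best coarse candidate
# (homotopy with Newton correction in the book; L-BFGS-B here).
res = minimize(Y, x0, bounds=list(zip(lower, upper)), method="L-BFGS-B")
print(res.x, res.fun)
```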
Fig. 4.8 Homotopy path of the constitutive parameters of concrete (homotopy parameter t versus constitutive parameters A, N, and B)
In the identification of the concrete constitutive parameters with the hybrid inverse algorithm, the preliminary IP-GA identification calls the forward simulation about 200 times, and the subsequent homotopy algorithm calls it another 124 times, so the overall process requires 324 forward simulations in total. In contrast, if only the IP-GA were used for the identification, about 1220 calls would be needed. This indicates that the hybrid inverse algorithm is considerably more efficient. The identified concrete constitutive parameters are then substituted into the forward problem, and the numerical simulation is repeated under the 74.4 s−1 strain rate condition; the results are shown in Fig. 4.9. The calculated response under this strain rate is consistent with the experimentally measured reflected and transmission wave responses, which verifies the effectiveness of the identified concrete constitutive model parameters.

Fig. 4.9 Calculated and measured responses under the strain rate of 74.4 s−1
In order to further corroborate the reliability of the identified results, the identified concrete constitutive parameters are used to calculate the forward problem under the working conditions of 53.6 s−1 and 209.5 s−1 strain rate, respectively. The transmission and reflected wave responses for these two working conditions are shown in Figs. 4.10 and 4.11. The calculated responses are consistent with the experimental measurements under both strain rates, so the validity of the identified concrete constitutive parameters is proved again. This also demonstrates that the computational inverse technique can quickly and stably deduce model parameters that are not straightforward to measure by traditional experimental methods.

Fig. 4.10 Calculated and measured responses under the strain rate of 53.6 s−1
Fig. 4.11 Calculated and measured responses under the strain rate of 209.5 s−1
4.3 Identification of Model Environment Parameters

Model environment parameters include structural loads, initial conditions, and boundary conditions. Appropriate environment parameters are of great practical significance for ensuring the reliability of the numerical model. However, due to limitations in technology and cost, environment parameters often cannot be measured directly in specific situations, whereas the structural response can be measured efficiently. Thus, using the measured response for model environment parameter identification is increasingly popular. In order to evaluate the feasibility of model environment parameter identification, the computational inverse method, combined with experimental tests, is applied to identify the dynamic load applied on a cylinder structure and the initial conditions of a vehicle crash.
4.3.1 Dynamic Load Identification for Cylinder Structure

The computational inverse method for dynamic load identification has been discussed in Chap. 3 [11–20]. In this section, the experimental test results of a cylinder structure are used to further evaluate the engineering practicability of the dynamic load identification methods. Figure 4.12 shows the cylinder structure, with an inner diameter of 156 mm, a wall thickness of 4 mm, and a length of 502 mm; its material is 20# steel. The cylinder is welded onto a square plate, and four bolts at the corners fix the square plate onto the iron platform.

Fig. 4.12 Experiment specimen for dynamic load identification
Fig. 4.13 First two vibration modes of cylinder structure (top views and 3D views at 210 Hz and 410 Hz)
According to the experimental modal analysis of the cylinder structure, the first two vibration modes are derived as shown in Fig. 4.13. The corresponding natural frequencies are 210 Hz and 410 Hz, and the respective modal damping ratios are 2.52% and 0.153%. An impact hammer is adopted to apply an impact load on the cylinder structure. The actual load is measured by a force sensor on the hammer, and its time history is shown in Fig. 4.14. The response at the measurement point is measured by an acceleration sensor; because it is a single-axis sensor, only the radial acceleration is measured in the experiment, and its time history is shown in Fig. 4.15. This measured acceleration response is adopted to identify the impact load applied on the cylinder. According to the available information, such as the geometrical size, material properties, and boundary conditions, a finite element model including the cylinder and the bottom plate is established in a cylindrical coordinate system. The first three vibration modes calculated by the finite element simulation are shown in Fig. 4.16, with the corresponding natural frequencies of 210.21 Hz, 272.40 Hz, and 411.59 Hz, respectively.
Fig. 4.14 Measured impact load
Fig. 4.15 Measured acceleration response
Fig. 4.16 Finite element model and first three vibration modes: (a) finite element model; (b) first vibration mode; (c) second vibration mode; (d) third vibration mode
Using the measured structural acceleration response, the Green kernel function of the acceleration calculated by the finite element model (Fig. 4.17), and the regularization method based on the truncated singular value decomposition (TSVD), the impact load applied on the cylinder structure is identified. The results are shown in Fig. 4.18, and the identified load and the corresponding errors at different time points are listed in Table 4.7. According to the identified results, the time history and the peak value of the impact load are acceptable, which demonstrates that the computational inverse method for dynamic load identification is effective.
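A minimal sketch of this deconvolution step is given below, assuming the discretized Green kernel matrix G has already been obtained from the finite element model; the synthetic kernel, load, and noise level used here are invented for illustration and do not reproduce the experiment.

```python
import numpy as np

def identify_load_tsvd(G, y, k):
    """Recover the load from the measured response y with the Green kernel
    matrix G, using truncated SVD regularization with the first k singular values."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    coeff = (U[:, :k].T @ y) / s[:k]
    return Vt[:k].T @ coeff

# Synthetic lower-triangular convolution kernel standing in for the FE Green
# kernel function of Fig. 4.17 (a decaying 210 Hz oscillation).
n, dt = 400, 1e-5
g = np.exp(-200.0 * dt * np.arange(n)) * np.sin(2 * np.pi * 210.0 * dt * np.arange(n))
G = dt * np.array([[g[i - j] if i >= j else 0.0 for j in range(n)] for i in range(n)])

f_true = 70.0 * np.exp(-((np.arange(n) - 80) ** 2) / 200.0)   # bell-shaped impact, ~70 N peak
y = G @ f_true + 1e-4 * np.random.default_rng(2).standard_normal(n)

f_hat = identify_load_tsvd(G, y, k=60)
print(float(np.abs(f_hat - f_true).max()))
```

The truncation level k plays the role of the regularization parameter: too large a k amplifies the measurement noise, while too small a k oversmooths the identified load.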
Fig. 4.17 Green kernel function response of acceleration
Fig. 4.18 Identified impact load
4.3.2 Vehicle Crash Condition Identification

In vehicle crash safety analysis and design, reliable environment parameters, such as the crash initial conditions, help to improve the effectiveness of the numerical simulation of the vehicle crash model and support the accurate analysis of the protective structure and the potential human body injury. They also have practical applications in reconstructing and evaluating actual traffic accidents. In this section, a typical, in-depth investigated side crash accident case is selected.
Table 4.7 Identified impact load and relative errors

Time point (s)   Actual load (N)   Identified load (N)   Relative error (%)
0.0002           34.72             37.05                 6.70
0.0003           48.95             54.58                 11.51
0.0004           59.81             63.22                 5.71
0.0005           68.41             72.22                 5.57
0.0006           70.28             62.65                 10.86
0.0007           64.76             61.65                 4.80
0.0008           55.90             52.56                 5.97
0.0009           46.00             46.34                 0.74
0.0010           35.67             36.25                 1.63
The computational inverse technique is implemented with the measured deformation data to identify the crash initial velocity, crash angle, and crash position [20–22]. The accident case comes from the Crash Injury Research and Engineering Network (CIREN) database [23] and represents a typical side collision: a 2000 Honda CRV with a total weight of 1452 kg hit a 2000 Mazda 626 with a total weight of 1299 kg on its left side. The collision deformation mainly occurs in the struck left front and left rear doors of the vehicle. Following the standard six-point measurement procedure, the vehicle deformations are measured as shown in Fig. 4.19. The measured deformations at the six collision points (C1–C6) on the left side of the vehicle are denoted as x_i^m (i = 1, 2, …, 6) and listed in Table 4.8. In the establishment of the vehicle collision finite element model, the non-deformed area of the 2000 Honda CRV is treated as a rigid body to reduce the computational intensity. The collision point deformations obtained from the numerical simulation are denoted as x_i^c (i = 1, 2, …, 6).

Fig. 4.19 Crash vehicle and six measurement points (From Guan F., et al. American Society of Mechanical Engineers, 567–573. 2009. With permission)
Table 4.8 Deformations at six measurement points

Measurement point   C1   C2    C3    C4    C5    C6
Deformation (mm)    0    470   700   510   320   0
Source Guan F., et al. American Society of Mechanical Engineers, 567–573. 2009. With permission
Fig. 4.20 Objective function E with respect to the crash velocity and crash angle: (a) surrogate model; (b) contour map (From Guan F., et al. American Society of Mechanical Engineers, 567–573. 2009. With permission)
Comparing the calculated deformations x_i^c with the measured deformations x_i^m, the objective function E of the collision initial conditions is defined as

$$E = 1 - \sum_{i=1}^{6}\left(\frac{x_i^m - x_i^c}{x_i^m}\right)^2 \qquad (4.3)$$
The samples in the value ranges of the collision velocity Vel, the collision angle PDOF, and the collision position POS are generated by the Latin hypercube method. Through collision numerical simulations at the sample points, a Kriging surrogate model relating the objective function E to the collision initial conditions is established [24]. Figures 4.20, 4.21 and 4.22 show the variation of the objective function E with respect to the crash velocity, crash angle, and crash position. The sequential quadratic programming method is adopted for the objective function optimization; the identified vehicle collision velocity is 50 km/h, the collision angle is 281°, and the collision position POS is 37 mm to the right of the B-pillar center. Inputting the identified initial collision conditions into the finite element numerical simulation model, the vehicle deformation at 120 ms after the collision is calculated as shown in Fig. 4.23. It can be seen from Fig. 4.23 that the maximum deformation occurs near the B-pillar, and the vehicle deformation from the numerical collision simulation conforms to the actual traffic accident shown in Fig. 4.19. Therefore, the reconstruction of the traffic accident using the identified environment parameters reproduces the actual accident well. The coincident results corroborate again that the computational inverse technique can fulfill key parameter identification effectively and has practical applicability for complex engineering problems.
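The surrogate-plus-SQP step can be sketched as follows. The Gaussian process regressor stands in for the Kriging surrogate model, SLSQP for the sequential quadratic programming step, and the objective below is an invented stand-in for E, since each true evaluation of Eq. (4.3) requires a full vehicle-crash finite element simulation; the variable bounds are read off the axes of Figs. 4.20–4.22.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(3)

# Design space: velocity Vel (km/h), angle PDOF (deg), position POS (mm).
lower = np.array([-60.0, 275.0, -100.0])
upper = np.array([-45.0, 285.0, 100.0])

def crash_objective(x):
    # Invented smooth stand-in for the objective E of Eq. (4.3).
    v, a, p = (x - lower) / (upper - lower)
    return 0.86 - 0.5 * (v - 0.67) ** 2 - 0.3 * (a - 0.6) ** 2 - 0.2 * (p - 0.7) ** 2

# Space-filling samples, then a Gaussian-process (Kriging-type) surrogate of E.
X = lower + (upper - lower) * rng.random((60, 3))
y = np.array([crash_objective(x) for x in X])
gp = GaussianProcessRegressor(kernel=RBF(length_scale=[1.0, 1.0, 1.0]),
                              normalize_y=True).fit(X, y)

# Sequential quadratic programming (SLSQP) maximizes the surrogate prediction of E.
res = minimize(lambda x: -gp.predict(x.reshape(1, -1))[0],
               x0=(lower + upper) / 2.0,
               bounds=list(zip(lower, upper)), method="SLSQP")
print(res.x, -res.fun)
```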
Fig. 4.21 Objective function E with respect to crash angle and crash position: (a) surrogate model; (b) contour map (From Guan F., et al. American Society of Mechanical Engineers, 567–573. 2009. With permission)

Fig. 4.22 Objective function E with respect to crash velocity and crash position: (a) surrogate model; (b) contour map (From Guan F., et al. American Society of Mechanical Engineers, 567–573. 2009. With permission)
4.4 Conclusions

Through practical engineering examples, the experimental verification and application of the model parameter identification method based on the computational inverse technique are further expounded in this chapter. In particular, for the model characteristic parameters and the model environment parameters, the computational inverse is implemented for the regional material properties of a stamping part, the dynamic constitutive parameters of a complex brittle material, the dynamic load on a cylinder structure, and the initial conditions of a vehicle collision.
Fig. 4.23 Vehicle crash simulation under the identified initial collision condition (From Guan F., et al. American Society of Mechanical Engineers, 567-573. 2009. With permission)
In the inverse process of the engineering model parameters, following the standard computational inverse steps, the type and the initial range of the parameters are first determined through the model definition. The parameters to be identified and the corresponding experimentally measured responses are then selected using an effective global sensitivity analysis. The regularization method is subsequently adopted to rectify the ill-posedness of the computational inverse problem, before a highly efficient inverse algorithm is applied for the model parameter identification. The engineering examples show that the computational inverse technique can derive the model parameters reliably with a limited number of physical experiments. It greatly reduces the number of physical experiments and provides an effective methodology for high-fidelity numerical simulation.
References

1. Liu, G. R., & Han, X. (2003). Computational inverse techniques in nondestructive evaluation. Florida: CRC Press.
2. Han, X., & Liu, G. R. (2003). Computational inverse technique for material characterization of functionally graded materials. AIAA Journal, 41(2), 288–295.
3. Han, X., Xu, D., & Liu, G. R. (2003). A computational inverse technique for material characterization of a functionally graded cylinder using a progressive neural network. Neurocomputing, 51, 341.
4. Liu, G. R., Han, X., & Lam, K. Y. (2001). Material characterization of FGM plates using elastic waves and an inverse procedure. Journal of Composite Materials, 35(11), 954–971.
5. Chao, L. (2012). An inverse method of material parameters about vehicle body panels based on region division. M.S. thesis. Changsha: Hunan University.
6. Holmquist, T. J., Johnson, G. R., & Cook, W. H. (1993). A computational constitutive model for concrete subjected to large strains, high strain rates and high pressures. In Proceedings of 14th International Symposium on Ballistics (pp. 591–600). Quebec, Canada.
7. Rui, C. (2014). The research on hybrid inverse method for material characteristic parameters identification and applications. Ph.D. thesis. Changsha: Hunan University.
8. Chen, R., Han, X., Liu, J., et al. (2011). A computational inverse technique to determine the dynamic constitutive model parameters of concrete. CMC-Computers Materials & Continua, 25(2), 135–157.
9. Rui, C., Jie, L., & Han, X. (2014). A multi-stage computational inverse technique for identification of the dynamic constitutive parameters of concrete. Explosion and Shock Waves, 34(3), 315–321.
10. Dequan, W., Han, X., & Dean, H. (2009). An inverse technique for identification of the dynamic constitutive parameters of a ceramic brittle material. Chinese Journal of Solid Mechanics, 30(3), 280–285.
11. Liu, J. (2011). Research on computational inverse technique in dynamic load identification. Ph.D. thesis. Changsha: Hunan University.
12. Han, X., Liu, J., Li, W., et al. (2009). A computational inverse technique for reconstruction of multisource loads in time domain. Chinese Journal of Theoretical and Applied Mechanics, 4(4), 595–602.
13. Wang, L. J., Han, X., & Liu, J. (2011). An improved iteration regularization method and application to reconstruction of dynamic loads on a plate. Journal of Computational and Applied Mathematics, 235, 4083–4094.
14. Chen, R., Liu, J., Zhang, Z., et al. (2014). A multi-source dynamic load identification method based on optimal output tracking control. Journal of Vibration Engineering, 27(3), 348–354.
15. Sun, X., Liu, J., & Ding, F. (2014). Identification method of dynamic loads for stochastic structures based on matrix perturbation theory. Journal of Mechanical Engineering, 50(13), 148–156.
16. Liu, J., Sun, X., Han, X., et al. (2014). A novel computational inverse technique for load identification using the shape function method of moving least square fitting. Computers & Structures, 144, 127–137.
17. Liu, J., Sun, X., Han, X., et al. (2015). Dynamic load identification for stochastic structures based on Gegenbauer polynomial approximation and regularization method. Mechanical Systems and Signal Processing, 56–57, 35–54.
18. Sun, X., Liu, J., Han, X., et al. (2014). A new improved regularization method for dynamic load identification. Inverse Problems in Science and Engineering, 22(7), 1062–1076.
19. Sun, X. (2014). Research on the techniques of dynamic load identification for stochastic structures. M.S. thesis. Changsha: Hunan University.
20. Guan, F. (2011). A study on material parameters identification and injury evaluation of biologic tissues under impact loading. Ph.D. thesis. Changsha: Hunan University.
21. Guan, F., Belwadi, A., Han, X., et al. (2009). Application of optimization methodology on vehicular crash reconstruction. In ASME 2009 International Mechanical Engineering Congress and Exposition (pp. 567–573). American Society of Mechanical Engineers.
22. Han, X., Liu, G. R., Li, G. Y., et al. (2005). Applications of computational inverse techniques to automotive engineering. In Proceedings of the 5th International Conference on Inverse Problems in Engineering: Theory and Practice.
23. Ryb, G. E., & Dischinger, P. C. (2008). Injury severity and outcome of overweight and obese patients after vehicular trauma: A crash injury research and engineering network (CIREN) study. Journal of Trauma and Acute Care Surgery, 64(2), 406–411.
24. Stein, M. L. (1999). Interpolation of spatial data: Some theory for kriging. Springer Science & Business Media.
Chapter 5
Introduction to Rapid Structural Analysis
5.1 Engineering Background and Significance

With the increasing complexity of practical equipment and the more stringent requirements for high-fidelity modeling, numerical simulation models become more and more detailed, which leads to intractable computational intensity. Taking the numerical simulation of the mechanical performance of a vehicle body structure as an example, reproducing the body structure features in detail requires small grid sizes to discretize the body structure and to capture the nonlinear characteristics of the materials, structures, and components. Such a simulation model of the entire vehicle body involves millions of elements, and a great amount of computing resource is required to complete the simulation. With the advent of high-performance computers and advanced simulation techniques, numerical simulation not only provides an effective computing and analysis tool for the performance evaluation of complex structures, but is also the premise and foundation of model updating, uncertainty analysis, and structural optimization in the design of advanced mechanical equipment. On the other hand, in the design process, such voluminous numerical simulation models have to be calculated repeatedly to analyze the influence of high-dimensional design variables on the structural performance under various working conditions. Therefore, the intractable computational intensity of complex structures hinders the effective application of numerical simulation in the development of practical equipment, and it is imperative to reduce the computational consumption so as to improve the efficiency of structural analysis and design [1, 2].

A feasible approach to improve the computational efficiency of numerical simulation is to employ high-performance supercomputers with parallel processors. In practical implementation, however, restrictions arise from the cost and from the practical ability of a specific industry to carry out such simulations; moreover, the capacity gained by upgrading computer hardware is limited, so the cost-effectiveness of hardware upgrading alone is often not justified.
Thus, under the existing conditions of computer hardware, to realize the computationally demanding simulation of large and complicated equipment, it is indispensable to develop rapid structural analysis methods (RSAM) that reduce the computational burden of equipment design, shorten the design cycle and cost, and improve the design efficiency of mechanical equipment.

Structural analysis numerically solves the governing equation, which often entails large-scale iterative operations for practical complex structures. If the dimension of the system and the number of iterative calculations can be reduced by combining transformations of the governing equation with highly efficient computational methods developed on the basis of approximation theory, the computational efficiency for engineering structures will be significantly improved. It should be noted that a structural analysis with increased computational efficiency may compromise the computational precision; in other words, computational efficiency and computational precision can rarely be maximized simultaneously. The currently prevailing numerical methods for improving the efficiency of structural analysis have been proposed from the viewpoint of simplifying the physical model and can generally be classified into two categories, namely the surrogate model-based RSAM and the model order reduction-based RSAM [3, 4]. For the surrogate model-based RSAM, the structural responses need to be calculated through the governing equation only a few times for different input parameters; an approximate model is then constructed mathematically from the input and output data, and the rapid calculation is realized by replacing the computationally intensive physical model with the approximate mathematical model. In general, the principle of the surrogate model-based RSAM is the adoption of mathematical interpolation or fitting. For the model order reduction-based RSAM, the original physical model is transformed into a reduced-scale model that retains the main components of the original model and reflects its main features; it provides an approximate solution for the large-scale structure that ensures computational efficiency.

Therefore, in this book, the aforementioned two classes of rapid structural analysis methods will be introduced and discussed. For the surrogate model-based RSAM, the optimal response surface based on the polynomial structure selection technique, the adaptive radial basis function, and the high-dimensional surrogate model will be addressed. For the model order reduction-based RSAM, the reduced basis methods for the rapid structural analysis of static and dynamic responses will be briefed. These methods effectively enhance the efficiency of structural analysis by improving the numerical calculation methods themselves, thereby sparing the high demand for costly upgrades of computer hardware. They are expected to provide an efficient analysis tool for the design of complex equipment.
5.2 Surrogate Model Methods

In the analysis of complex structures, a surrogate model builds a simple function from limited sample information to represent explicitly the structural input-output relationship.
Thus, for structural design based on the surrogate model, the calculation efficiency can be improved substantially. The critical research factors of the surrogate model are the arrangement of the samples used to construct the surrogate model, which is related to the field of design of experiments (DOE), and the selection of the approximate function for fitting and prediction, which is the main body of the surrogate model method. These two factors govern the efficiency and precision of the constructed surrogate model.

DOE is a technology for planning experiments economically and scientifically based on probability theory and mathematical statistics. For the construction of the various surrogate models, DOE is first adopted to sample the structural input-output information. Through DOE, not only is blind sampling prevented, but the number of sample calculations can also be reduced substantially; it can likewise reduce the influence of fitting errors and thus improve the approximation precision of the surrogate model. The available DOE methods can mainly be divided into two types, namely the boundary-based DOE and the space filling-based DOE [5, 6]. The boundary-based DOE focuses on the boundary of the design space: it first satisfies the requirement of boundary samples and then produces samples in the central area. This kind of method includes the full factorial design, fractional factorial design, central composite design, and Box-Behnken design. For the boundary-based DOE, the sample number and the calculation cost increase exponentially with the dimension of the design space, and errors may also arise due to the placement of the boundary samples. The space filling-based DOE fills the design space as uniformly as possible with limited samples, which relaxes the requirement on the sample boundaries. This kind of method includes the orthogonal design, D-optimal design, uniform design (UD) [7], and Latin hypercube design (LHD) [8]. Among these methods, UD and LHD are generally preferable. Based on the idea of the overall mean model, UD produces samples that are uniformly distributed over the whole design space. As a multi-dimensional stratified sampling method, LHD guarantees the uniformity of the projection onto each design variable as well as the uniformity in the design space under a specified criterion. These two DOE methods, with their superior uniformity, are applicable to high-dimensional computational models and can significantly reduce the number of experiments.

Different types of surrogate models can be constructed based on DOE and the sample information. The popular surrogate models include the response surface (RS) [9], the Kriging model [10], the radial basis function (RBF) [11], the support vector machine (SVM) [12], the neural network (NN) [13], and the multivariate adaptive regression splines (MARS) [14]. When applied to the design of complex structures, the surrogate model can improve the computational efficiency by several orders of magnitude. It is also beneficial for the implementation of parallel computing, structural global sensitivity analysis, and the processing of data with measurement or calculation errors.
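As a small illustration of the space-filling designs mentioned above, a basic Latin hypercube design on the unit hypercube can be generated in a few lines; this sketch reflects only the elementary stratified-sampling idea, not the optimized LHD variants used in practice.

```python
import numpy as np

def latin_hypercube(n_samples, n_vars, rng=None):
    """Basic Latin hypercube design on [0, 1]^n_vars: each variable's range is
    split into n_samples equal strata and exactly one point falls in each stratum."""
    rng = rng if rng is not None else np.random.default_rng(0)
    # One independent random permutation of the strata per variable.
    strata = np.array([rng.permutation(n_samples) for _ in range(n_vars)]).T
    return (strata + rng.random((n_samples, n_vars))) / n_samples

# Example: 10 samples of 3 design variables; scale them afterwards to the
# physical bounds of the problem at hand.
print(latin_hypercube(10, 3))
```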
According to the comparative studies reported in the literature, the various surrogate models have their respective advantages and disadvantages, and no individual model is superior to the others in approximation for all problems. Several popular surrogate models are briefly introduced as follows.
(1) The RS, also called the polynomial regression (PR) model, is a typical fitting method. It constructs a surrogate model by calculating the fitting coefficients of an approximate polynomial based on the least squares principle. Due to its simplicity, low computational complexity, and explicit relationship between the design variables and the objective, the RS has become a popular surrogate model in engineering. On the other hand, the RS is inadequate for strongly nonlinear fitting, the Runge phenomenon may appear for high-order polynomials, and the RS places a demand on the sample number: for the quadratic PR model, for instance, the minimum sample number is (n + 1)(n + 2)/2, where n is the number of design variables, so the required sample number grows rapidly as the number of design variables increases.

(2) The Kriging model is an interpolation method with a strong nonlinear representation ability for spatially distributed data; the Kriging estimate is the best linear unbiased interpolation. On the other hand, the maximum likelihood estimation of the Kriging parameters must be performed, and the associated likelihood function may be multimodal, which brings inefficiency and numerical instability to the construction of a Kriging model.

(3) The RBF model is an interpolation model constructed as a linear combination of radial basis functions. Compared with the RS and Kriging models, the RBF model has a relatively simple structure, better nonlinear fitting ability, and better numerical stability. Thus, the RBF model is popular in the design of practical engineering structures.

(4) The SVM improves the generalization ability based on the structural risk minimization principle and has advantages in processing nonlinear, high-dimensional, or small-sample problems. The construction of an SVM involves the choice of the kernel function and the determination of the kernel parameters and the penalty factor: the kernel parameters influence the sample distribution in the high-dimensional feature space, and the penalty factor adjusts the trade-off between the empirical risk and the confidence interval. The SVM has high fitting precision, but determining its parameters remains inefficient.

To improve the calculation efficiency for complex structures, the surrogate model plays an indispensable role in the optimization design of engineering structures. To combine structural optimization with the surrogate model, several strategies are generally available, as briefed below.

(1) Direct application strategy. It acquires specific samples by DOE and then constructs the global surrogate model once to replace the computationally intensive original model directly.

(2) Sequential sampling strategy. It identifies local re-sampling regions by model verification or by the optimization method, so as to gradually improve the approximation precision in the local region of interest.

(3) Guidance exploration strategy. It accelerates the optimization search toward the optimal point by using the surrogate model as guidance.

(4) Approximate strategy with variable design space. It transforms the design space into a series of trust regions and constructs a surrogate model in each subdomain.
each subdomain. The surrogate model is then gradually updated to search for the optimum.

In the subsequent sections of this book on multi-objective optimization design and uncertain optimization design, the aforementioned combination strategies will be further discussed for different problems. By using the surrogate model, the design efficiency can be improved, and the cycle and cost of equipment development are substantially reduced. On the other hand, it should be noted that the usual surrogate models approximate well for low-dimensional problems, whereas their fitting precision is not guaranteed for problems with high-dimensional design variables. A further constraint on general engineering applications of the surrogate model is that the fitting precision cannot be guaranteed over the whole design space. An illustrative code sketch of the direct application strategy is given below.
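To make the direct application strategy concrete, the following Python sketch samples a design space by Latin hypercube sampling, fits a quadratic RS once by least squares, and then optimizes on the cheap surrogate. The stand-in function expensive_model, the sample size, and all names are illustrative assumptions and not taken from this book.

```python
import numpy as np
from scipy.stats import qmc
from scipy.optimize import minimize

def expensive_model(x):
    # Stand-in for a costly finite element simulation (illustrative assumption).
    return (x[0] - 0.3) ** 2 + 2.0 * (x[1] - 0.7) ** 2 + 0.1 * np.sin(5 * x[0])

def quadratic_basis(x):
    # Full quadratic basis in two design variables: 1, x1, x2, x1^2, x1*x2, x2^2.
    x1, x2 = x
    return np.array([1.0, x1, x2, x1 * x1, x1 * x2, x2 * x2])

# (1) Design of experiments: Latin hypercube samples in [0, 1]^2.
sampler = qmc.LatinHypercube(d=2, seed=0)
X = sampler.random(n=15)                       # 15 > (n+1)(n+2)/2 = 6 samples for n = 2
y = np.array([expensive_model(x) for x in X])

# (2) Fit the quadratic response surface once by least squares (direct strategy).
A = np.array([quadratic_basis(x) for x in X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
surrogate = lambda x: quadratic_basis(x) @ coef

# (3) Optimize on the surrogate instead of the original expensive model.
res = minimize(surrogate, x0=np.array([0.5, 0.5]), bounds=[(0, 1), (0, 1)])
print("surrogate optimum:", res.x,
      "predicted:", res.fun,
      "true value there:", expensive_model(res.x))
```

In the sequential sampling and trust-region strategies, the same fit-and-optimize step would be repeated, with new samples added near the current optimum or within the updated trust region.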
5.3 Model Order Reduction Methods

The prevailing model order reduction methods include the static or dynamic model condensation method [15, 16], the Krylov subspace method [17, 18], the substructure method [19, 20], the reduced basis method (RBM) [21, 22], etc., which are briefly introduced as follows.

(1) The static model condensation method was explored early by Guyan [23] and Irons [24] to calculate structural static responses, and the effective dynamic model condensation method was then extended to quickly calculate structural dynamic responses. The principle of the model condensation method is that, through the division and selection of the principal and secondary degrees of freedom of a structure, the responses on a small number of the principal degrees of freedom are calculated; subsequently, the approximate responses of the entire structure can be quickly derived. As the model condensation method is implemented in the physical space, it has been widely applied in many fields, such as system eigenvalue analysis, fault diagnosis, dynamic modification, and parameter identification.

(2) The Krylov subspace method usually adopts an orthonormalized vector basis for model order reduction. The key of this kind of method lies in the appropriate selection of the subspace, its corresponding basis, and the measurement of the approximating precision. This kind of method mainly includes the Arnoldi algorithm, the Lanczos algorithm, and the PRIMA algorithm. As the Krylov subspace method has high stability and a minimal calculation demand, and can also retain a certain number of moments of the transfer function, it is suitable for the model order reduction of large-scale systems.

(3) The substructure method divides a structure into several substructures, and each substructure is further divided into a number of elements. Subsequently, the governing equation of each substructure is established with the appropriate constraints
on the interfaces of the substructures. As a result, by integrating the governing equations of the substructures, the governing equation of the whole structure can be deduced with a substantially reduced order compared to the system governing equation established in the original physical coordinates. Thus, the substructure method improves the calculation efficiency for complex engineering structures.

(4) The basic idea of the reduced basis method (RBM) was introduced by Nagy [25] to analyze the nonlinear behavior of truss structures using the finite element method, and was then further developed for structural analysis and application by Almroth et al. [26], Noor and Peters [27], and Noor et al. [28]. In recent years, Grepl and Patera [29], Veroy and Patera [30], and Liu et al. [31] further explored the RBM by dividing the calculation process into an offline stage and an online stage. Lei [32] developed the structural static RBM with a matrix storage technique for the rapid calculation of large-scale problems. Zhang et al. [33] presented a rapid analysis method for structural dynamic responses based on time integration and the least square principle.

The RBM mainly focuses on the rapid analysis of large-scale systems with varying system parameters regarding geometry, material, and so on. In the offline stage of the RBM, sampling is conducted in the parameter domain and the corresponding response vector at each parameter sample point is calculated. All the response vectors are adopted to construct a low-dimensional reduced basis space. The original large-scale system is then projected into the constructed reduced basis space through a suitable mapping relation to constitute an approximate reduced model. In the online stage of the RBM, the approximate solution of the original problem with new system parameter values can be rapidly derived from the reduced model. Since only low-order matrix operations are implemented, which avoids the large-scale calculation, the solving efficiency of the numerical simulation for complex engineering structures can be substantially improved. The response vectors used to build the low-dimensional reduced basis space are derived from the response space of the original system, so that the RBM inherits and reflects the physical properties of the original system. Thus, the reduced basis space is a reliable approximation of the response domain of the original problem. As a result, the reduced model, which retains the physical properties of the original system, not only has high computational efficiency but also guarantees computational stability and convergence for the rapid analysis of large-scale engineering problems. The offline/online split is illustrated by the sketch below.
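The offline/online structure of the RBM can be illustrated with a minimal Python/NumPy sketch. The parameterized system assemble_system below is a simple stand-in chosen only for illustration; a practical RBM additionally exploits an affine parameter dependence so that the reduced matrices can be assembled online without ever forming the full-order model, which this sketch omits for clarity.

```python
import numpy as np

def assemble_system(mu, n=200):
    # Parameterized stiffness matrix K(mu) and load f; a simple 1D stand-in
    # (the specific system is an illustrative assumption, not from the book).
    main = (2.0 + mu) * np.ones(n)
    off = -(1.0 + 0.5 * mu) * np.ones(n - 1)
    K = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    f = np.ones(n)
    return K, f

# ---- Offline stage: sample the parameter domain and collect full responses ----
mu_samples = np.linspace(0.1, 2.0, 8)
snapshots = np.column_stack([np.linalg.solve(*assemble_system(mu)) for mu in mu_samples])

# Orthonormalize the response vectors to obtain the reduced basis V (here via SVD).
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
V = U[:, s > 1e-10 * s[0]]            # keep only numerically significant directions

# ---- Online stage: rapid approximate solution for a new parameter value ----
mu_new = 0.73
K, f = assemble_system(mu_new)
Kr = V.T @ K @ V                      # low-order reduced matrices
fr = V.T @ f
u_rb = V @ np.linalg.solve(Kr, fr)    # approximate full response

u_ref = np.linalg.solve(K, f)
print("relative error:", np.linalg.norm(u_rb - u_ref) / np.linalg.norm(u_ref))
```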
References

1. Keyes, D., Colella, P., Dunning, Jr., T., et al. (2003). A science-based case for large-scale simulation. Washington, DC: Office of Science, U.S. Department of Energy (DOE).
2. Chen, D., Wang, L., & Chen, J. (2012). Large-scale simulation: Models, algorithms, and applications. CRC Press.
3. Wang, G. G., & Shan, S. (2007). Review of metamodeling techniques in support of engineering design optimization. Journal of Mechanical Design, 129(4), 370–380.
4. Antoulas, A. C., & Sorensen, D. C. (2001). Approximation of large-scale dynamical systems: An overview. Applied Mathematics and Computer Science, 11(5), 1093–1122.
5. Ilzarbe, L., Álvarez, M. J., Viles, E., et al. (2008). Practical applications of design of experiments in the field of engineering: A bibliographical review. Quality and Reliability Engineering International, 24(4), 417–428.
6. Montgomery, D. C. (2008). Design and analysis of experiments. Wiley.
7. Fang, K. T., Lin, D. K. J., Winker, P., et al. (2000). Uniform design: Theory and application. Technometrics, 42(3), 237–248.
8. Kenny, Q. Y., Li, W., & Sudjianto, A. (2000). Algorithmic construction of optimal symmetric Latin hypercube designs. Journal of Statistical Planning and Inference, 90(1), 145–159.
9. Myers, R. H., Montgomery, D. C., & Anderson-Cook, C. M. (2009). Response surface methodology: Process and product optimization using designed experiments. Wiley.
10. Kleijnen, J. P. C. (2009). Kriging metamodeling in simulation: A review. European Journal of Operational Research, 192(3), 707–716.
11. Buhmann, M. D. (2003). Radial basis functions: Theory and implementations. Cambridge University Press.
12. Schölkopf, B., & Smola, A. J. (2002). Learning with kernels: Support vector machines, regularization, optimization, and beyond. MIT Press.
13. Han, X., Xu, D., & Liu, G. R. (2003). A computational inverse technique for material characterization of a functionally graded cylinder using a progressive neural network. Neurocomputing, 51, 341–360.
14. Friedman, J. H. (1991). Multivariate adaptive regression splines. The Annals of Statistics, 1–67.
15. Schilders, W. H. A., Van der Vorst, H. A., & Rommes, J. (2008). Model order reduction: Theory, research aspects and applications. Berlin, Germany: Springer.
16. Besselink, B., Tabak, U., Lutowska, A., et al. (2013). A comparison of model reduction techniques from structural dynamics, numerical mathematics and systems and control. Journal of Sound and Vibration, 332(19), 4403–4422.
17. Grimme, E. J. (1997). Krylov projection methods for model reductions. Ph.D. Thesis, University of Illinois at Urbana-Champaign.
18. Antoulas, A. C. (2005). Approximation of large-scale dynamical systems. SIAM.
19. Klerk, D., Rixen, D. J., & Voormeeren, S. N. (2008). General framework for dynamic substructuring: History, review and classification of techniques. AIAA Journal, 46(5), 1169–1181.
20. Barbone, P. E., Givoli, D., & Patlashenko, I. (2003). Optimal modal reduction of vibrating substructures. International Journal for Numerical Methods in Engineering, 57(3), 341–369.
21. Veroy, K. (2003). Reduced-basis methods applied to problems in elasticity: Analysis and applications. Ph.D. Thesis, Massachusetts Institute of Technology.
22. Huynh, D. B. P. (2007). Reduced-basis approximation and application to fracture and inverse analysis. Ph.D. Thesis, National University of Singapore, Singapore.
23. Guyan, R. J. (1965). Reduction of stiffness and mass matrices. AIAA Journal, 3(2), 380–380.
24. Irons, B. M. (1965). Structural eigenvalue problems—Elimination of unwanted variables. AIAA Journal, 3(5), 961–962.
25. Nagy, D. A. (1977). Model representation of geometrically nonlinear behavior by the finite element method. Computers & Structures, 10, 683–688.
26. Almroth, B. O., Stern, P., & Brogan, F. A. (1978). Automatic choice of global shape functions in structural analysis. AIAA Journal, 16, 525–528.
27. Noor, A. K., & Peters, J. M. (1980). Reduced basis technique for nonlinear analysis of structures. AIAA Journal, 18, 455–462.
28. Noor, A. K., Peters, J. M., & Andersen, C. M. (1984). Mixed models and reduction techniques for large-rotation nonlinear problems. Computer Methods in Applied Mechanics and Engineering, 44, 67–89.
29. Grepl, M. A., & Patera, A. T. (2005). A posteriori error bounds for reduced-basis approximations of parameterized parabolic partial differential equations. ESAIM: Mathematical Modelling and Numerical Analysis, 39(1), 157–181.
30. Veroy, K., & Patera, A. T. (2005). Certified real-time solution of the parametrized steady incompressible Navier-Stokes equations: Rigorous reduced-basis a posteriori error bounds. International Journal for Numerical Methods in Fluids, 47, 773–788.
31. Liu, G. R., Lee, J. H., Patera, A. T., Yang, Z. L., & Lam, K. Y. (2005). Inverse identification of thermal parameters using reduced-basis method. Computer Methods in Applied Mechanics and Engineering, 194, 3090–3107.
32. Lei, F. (2009). A vehicle body design oriented rapid computational method of large-scale problems. Ph.D. Thesis, Hunan University, Changsha.
33. Zhang, Z., Han, X., & Jiang, C. (2011). A novel efficient method for real-time computation of parameterized dynamic equations with large-scale dimension. Acta Mechanica, 219(3–4), 337–356.
Chapter 6
Rapid Structural Analysis Based on Surrogate Models
6.1 Introduction

The efficiency and precision of the surrogate model are the linchpins that ensure the proficiency of the rapid structural analysis and the performance of the structural optimization design. This chapter presents three types of generally applied surrogate models, i.e., the polynomial RS, the RBF, and the high-dimensional surrogate model.

The traditional polynomial construction method based on least square fitting does not differentiate the terms according to their respective contributions, and thus the fitting results tend to be unstable due to the redundancy of the terms. As a remedy, a polynomial structural selection technique based on the error reduction ratio is proposed to evaluate the sensitivity of each term of the polynomial RS and to construct the optimal RS by eliminating the terms with insignificant influence.

An adaptive updating surrogate model based on the RBF will also be presented and discussed. The modeling samples and the testing samples are derived by the optimal Latin hypercube sampling and the inherited Latin hypercube sampling, respectively. When the precision of the responses approximated by the surrogate model at the testing sample points is unsatisfactory, the uniformly distributed testing samples are added into the sample set to reconstruct and update the surrogate model. Simultaneously, the shape parameters of the RBF are optimized to improve the updating efficiency and the precision of the surrogate model.

A surrogate model usually faces an inherent contradiction: a low-order model may be insufficiently accurate, while the computational cost of a high-order model is intractable. In order to improve the approximation efficiency of the high-dimensional surrogate model at the least expense of extra computational cost, both an improved high-dimensional surrogate model and an appropriate adjustment function will be explored.
6.2 Polynomial Response Surface Based on Structural Selection Technique

In constructing the polynomial RS, the linchpin is to appropriately determine the model type and the corresponding effective terms. The polynomial structural selection technique is an effective approach to address this issue. In this method, according to the error reduction ratio (ERR) [1], the polynomial model order and the effectiveness of each term are evaluated in constructing the polynomial RS to derive the optimal structural form.
6.2.1 Polynomial Structure Selection Based on Error Reduction Ratio

In practical engineering, the quadratic polynomial RS is generally applied with its cross terms neglected to simplify the surrogate model. Thus, the constructed polynomial RS does not practically represent the actual model. In the polynomial RS based on the structural selection technique, by contrast, the polynomial with the complete terms is constructed as the candidate, and the relevance of each individual term is evaluated. The general polynomial with the complete terms is expressed as follows

$$\tilde{f}(\mathbf{x}) = a_0 + a_1 x_1 + \cdots + a_n x_n + a_{n+1} x_1^2 + a_{n+2} x_1 x_2 + \cdots + a_{K-2} x_{n-1} x_n^{m-1} + a_{K-1} x_n^m = \sum_{i=0}^{K-1} a_i u_i \quad (6.1)$$

where $\tilde{f}(\mathbf{x})$ is the approximate structural response function, $\mathbf{x} = [x_1, x_2, \ldots, x_n]^T$ represents the n-dimensional design vector, $u_i$ denotes a term of the m-order complete candidate polynomial in the design variables x, $a_i$ stands for the undetermined coefficient of each term, and K is the total number of terms of the complete polynomial, $K = (n + m)!/(n!\,m!)$. Generally, the preset order m should be sufficiently high to ensure that the polynomial is capable of representing the actual model. By using an orthogonal set, the right side of Eq. (6.1) can be rewritten as follows

$$\tilde{f}(\mathbf{x}) = \sum_{i=0}^{K-1} a_i u_i = \sum_{i=0}^{K-1} h_i p_i \quad (6.2)$$
where $p_i$ is derived by the orthonormal transform of $u_i$, and $h_i$ stands for the corresponding orthogonal transform coefficient. The transformed terms are orthogonal to each other, so that
$$\frac{1}{L}\sum_{d=1}^{L} p_i(d)\, p_j(d) = 0, \quad \forall\, i \neq j \quad (6.3)$$
where L is the sample number and $p_i(d)$ expresses the value of the transformed term at the dth sample point. $p_i(d)$ can be deduced through the following Gram-Schmidt orthogonalization

$$p_i(d) = u_i(d) - \sum_{j=0}^{i-1} a_{ij}\, p_j(d) \quad (6.4)$$
where $u_i(d)$ is the value of $u_i$ at the dth sample point, $i = 1, 2, \ldots, K-1$, $d = 1, 2, \ldots, L$, and

$$a_{ij} = \frac{\sum_{d=1}^{L} u_i(d)\, p_j(d)}{\sum_{d=1}^{L} p_j^2(d)} \quad (6.5)$$
When the response calculated by the actual model at the dth sample point is represented as $f(d)$, the orthogonal transform coefficient $h_i$ in Eq. (6.2) can be derived by minimizing the mean square error (MSE) between the actual output response and the approximating response of Eq. (6.2). The MSE is expressed as

$$\mathrm{MSE} = \frac{1}{L}\sum_{d=1}^{L}\left( f(d) - \sum_{i=0}^{K-1} h_i\, p_i(d) \right)^2 \quad (6.6)$$
Based on the extremum condition of optimization, the partial derivative of Eq. (6.6) with respect to the coefficient $h_i$ is zero. With application of the orthogonality of $p_i(d)$, the coefficient $h_i$ can be derived as

$$h_i = \frac{\sum_{d=1}^{L} f(d)\, p_i(d)}{\sum_{d=1}^{L} p_i^2(d)} \quad (6.7)$$
Substituting Eq. (6.7) into Eq. (6.6) leads to the expanded form of Eq. (6.6), such that

$$\mathrm{MSE} = \frac{1}{L}\sum_{d=1}^{L} \left( f(d) \right)^2 - \frac{1}{L}\sum_{i=0}^{K-1} h_i^2 \sum_{d=1}^{L} p_i^2(d) \quad (6.8)$$

From Eq. (6.8), when the surrogate model does not contain any term, the maximum value of MSE is $\frac{1}{L}\sum_{d=1}^{L}(f(d))^2$, and the contribution of each orthogonal term to reducing the MSE is $\frac{1}{L} h_i^2 \sum_{d=1}^{L} p_i^2(d)$. Hence, the following error reduction ratio of each term is defined [2, 3].
$$\mathrm{ERR}_i = \frac{h_i^2 \sum_{d=1}^{L} p_i^2(d)}{\sum_{d=1}^{L} \left( f(d) \right)^2} \times 100 \quad (6.9)$$
$\mathrm{ERR}_i$ can serve as a criterion to evaluate the contribution of each orthogonal term in Eq. (6.2). The term with the maximal error reduction ratio is selected from all the candidate terms, and the orthogonalization and evaluation of the error reduction ratio are then re-implemented for the residual terms. When the maximal $\mathrm{ERR}_i$ is less than the specified threshold, all the residual candidate terms are considered insignificant and can be eliminated. Through the above process, the prime orthogonal terms and their coefficients in Eq. (6.2) can be deduced, and by the inverse orthogonal transformation, the prime terms and their coefficients $a_i$ in Eq. (6.1) can be derived. Therefore, the optimal structure of the polynomial RS and the corresponding coefficients are determined, which overcomes the unsatisfactory stability and precision of the traditional polynomial fitting based on the least square method. The presented polynomial RS method is thus particularly suitable for constructing high-dimensional and high-order polynomial response surfaces. A sketch of the selection procedure is given below.
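The selection procedure of Eqs. (6.4)-(6.9) can be summarized in the following Python sketch, in which the candidate terms are supplied as columns of a matrix U evaluated at the samples. The function name, the tolerance, and the implementation details are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def err_term_selection(U, f, err_tol=0.1):
    """Forward selection of polynomial terms by the error reduction ratio (ERR).

    U       : (L, K) matrix; column i holds candidate term u_i evaluated at the L samples
    f       : (L,) vector of actual responses f(d)
    err_tol : stop when the best remaining ERR (in %) falls below this threshold
    Returns the indices of the selected terms (a sketch of Eqs. (6.4)-(6.9))."""
    L, K = U.shape
    sum_f2 = np.sum(f ** 2)
    selected, P = [], []                 # chosen term indices and their orthogonalized columns
    remaining = list(range(K))
    while remaining:
        best_err, best_j, best_p = -1.0, None, None
        for j in remaining:
            p = U[:, j].copy()
            for q in P:                  # Gram-Schmidt against already selected terms, Eqs. (6.4)-(6.5)
                p -= (p @ q) / (q @ q) * q
            denom = p @ p
            if denom < 1e-12 * L:        # candidate is (nearly) linearly dependent, skip it
                continue
            h = (f @ p) / denom          # orthogonal transform coefficient, Eq. (6.7)
            err = 100.0 * h * h * denom / sum_f2   # error reduction ratio, Eq. (6.9)
            if err > best_err:
                best_err, best_j, best_p = err, j, p
        if best_j is None or best_err < err_tol:
            break                        # all residual candidate terms are insignificant
        selected.append(best_j)
        P.append(best_p)
        remaining.remove(best_j)
    return selected
```

With U assembled from the complete m-order candidate terms (for instance via itertools.combinations_with_replacement over the design variables), the returned column indices define the prime terms of the streamlined RS, whose coefficients then follow from the inverse orthogonal transformation.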
6.2.2 Numerical Example

In order to verify the effectiveness and approximating precision of the present polynomial RS method based on the structural selection technique, the following testing function is explored

$$f(\mathbf{x}) = \sum_{i=1}^{n} \varphi_i(x_i) \quad (6.10)$$
where $\varphi_i(x_i) = (|4x_i - 2| + a_i)/(1 + a_i)$, $a_1 = a_2 = 0$, $a_3 = a_4 = \cdots = a_8 = 3$, and $n = 8$. The input parameters are $x_i \in [2, 3]$, $i = 1, 2, \ldots, n$. The Latin hypercube sampling method is adopted to identify 40 sample points in the input parameter space, and the corresponding output responses are calculated through Eq. (6.10). The complete polynomial model with order m = 8 is first established to ensure that the polynomial RS can appropriately represent the actual model in Eq. (6.10). Each candidate term of the complete polynomial is evaluated by the polynomial structural selection technique based on the error reduction ratio. The streamlined optimal polynomial RS $\tilde{f}(\mathbf{x})$ can then be expressed as follows

$$\tilde{f}(\mathbf{x}) = 0.2633\, x_1^4 x_2^4 + 21.5257\, x_1 x_2 x_3 x_4 x_5 x_6 x_7 x_8 - 0.8626\, x_3^2 x_5^2 x_6^2 x_8^2 - 3.2714\, x_3 x_4^2 x_5 x_6 x_7^2 x_8 + 3.2322\, x_4^3 x_7^3 \quad (6.11)$$
Fig. 6.1 Comparison of responses between the original model and the optimal polynomial RS (output responses versus sample points)
In order to verify the accuracy of the constructed optimal polynomial RS based on the structural selection technique, twenty new samples are randomly generated in the input parameter space, and their responses are calculated through Eqs. (6.10) and (6.11), respectively. The results are compared in Fig. 6.1. As shown in Fig. 6.1, the deviation of the responses calculated by the optimal polynomial RS from those of the original model is minimal. The results indicate that the presented RS modeling method reproduces the original model accurately. The proposed method circumvents the difficulty of fitting the high-dimensional and high-order problem with the complete polynomial by the traditional least square method; instead, it selects five prime terms to construct the streamlined RS of Eq. (6.11), which also demonstrates a strong nonlinear fitting capability.
6.2.3 Engineering Application: Nonlinear Output Force Modeling for Hydro-Pneumatic Suspension

With the inert gas serving as the elastic medium and the hydraulic oil serving as the damping medium, the hydro-pneumatic suspension integrates the functions of the spring unit and the shock absorber, demonstrating nonlinear stiffness and damping characteristics. The physical model of the hydro-pneumatic suspension is shown in Fig. 6.2. The pavement roughness is transferred through the vehicle tires and axle into the vertical movement of the piston rod in the hydro-pneumatic suspension. When the piston rod moves upward, the gas in the gas chamber is compressed and energy is stored; when the piston rod moves downward, the gas and the stored energy are released. This is equivalent to the function of a spring. The hydraulic oil flows reciprocally between the oil chambers I and II through the one-way valve and the damping hole, which performs the function of a bi-directional damper.
Fig. 6.2 Illustration of hydro-pneumatic suspension (gas chamber, oil chambers I and II connected through a one-way valve and damping hole, sprung and unsprung masses, excitation x, output force F)
The nonlinear output force F of the hydro-pneumatic suspension is equivalent to the combination of the nonlinear elastic and damping forces under the joint excitations of the displacement x and the velocity $\dot{x}$. When the order is set as m = 4, the complete polynomial response surface with fifteen sub-terms for the output force F can be expressed as

$$F(x, \dot{x}) = a_0 + a_1 x + a_2 \dot{x} + a_3 x^2 + a_4 x\dot{x} + \cdots + a_{13} x\dot{x}^3 + a_{14} \dot{x}^4 \quad (6.12)$$
Table 6.1 Polynomial surrogate model of the output force of hydro-pneumatic suspension

Coefficient | Traditional least square method (before filtering) | Traditional least square method (after filtering) | Structural selection technique based on ERR (before filtering) | Structural selection technique based on ERR (after filtering)
a0  | −1.0453 × 10^5 | 1.2903 × 10^5  | −80802        | −81087
a1  | 3.0863 × 10^6  | 4.4339 × 10^6  | 2.339 × 10^6  | 2.3551 × 10^6
a2  | 12245          | 1.0439 × 10^5  | 2471          | 2738
a3  | 1.4936 × 10^8  | −1.7186 × 10^7 | 0             | 0
a4  | −2.2887 × 10^5 | 5.6944 × 10^6  | 0             | 0
a5  | 25976          | −55164         | 6302          | 6241
a6  | −1.8572 × 10^9 | −5.197 × 10^9  | 0             | 0
a7  | −2.424 × 10^7  | −2.5415 × 10^8 | 0             | 0
a8  | −2.8441 × 10^5 | −6.2214 × 10^5 | −61047        | −37053
a9  | −1180.8        | −27254         | 1552          | 1357
a10 | −2.381 × 10^11 | −1.2715 × 10^12 | 0            | 0
a11 | 3.1426 × 10^8  | −1.4236 × 10^10 | 0            | 0
a12 | −5.7299 × 10^7 | −1.3652 × 10^8 | 0             | 0
a13 | 68065          | −1.6027 × 10^6 | 0             | 0
a14 | −3626.3        | 623.25         | 0             | 0
When the time histories of the displacement, velocity, and output force with noise are obtained, the polynomial response surfaces based on the traditional least square method and the present structural selection technique are respectively constructed to approximate the nonlinear output force of the hydro-pneumatic suspension using the data before and after filtering. The results are compared in Table 6.1.

According to the compared results, the traditional least square method fails to evaluate and select effectively the prime terms of the polynomial surrogate model for the nonlinear output force of the hydro-pneumatic suspension. The derived coefficients are numerically unstable and distinctly different when using the data before and after filtering. In contrast, the polynomial structural selection technique based on the ERR can efficiently evaluate the contribution of each term to constitute the streamlined polynomial model with the simplest structure. The identified prime terms are consistent for the noisy data before and after filtering, and the corresponding coefficients are basically stable, which indicates that the polynomial response surface constructed by the structural selection technique has sufficient anti-noise capability. The output forces calculated at each time step through the optimal polynomial RS constructed from the filtered displacement and velocity are compared with the measured actual forces in Figs. 6.3 and 6.4.
Fig. 6.3 Output forces with respect to displacement (output force / N versus displacement / m; output force after filtering versus calculated output force)

Fig. 6.4 Output forces with respect to velocity (output force / N versus velocity)
It is found that the constructed polynomial surrogate model for the nonlinear force of the hydro-pneumatic suspension can accurately predict the output force with respect to the different displacements and velocities, which provides a reliable analysis model for the design of the nonlinear elasticity and damping properties of the hydro-pneumatic suspension.
6.3 Surrogate Model Based on Adaptive Radial Basis Function

The radial basis function (RBF) model is constructed through a linear superposition of RBF basis functions. The RBF takes a sample point as its center and the Euclidean distance between the predicted point and the sample point as its independent variable [4, 5]. Through the Euclidean distance, a multi-dimensional variable problem can be transformed into a one-dimensional problem. Generally, an arbitrary function can be approximated by the weighted sum of a series of basis functions with an appropriate mapping technique. In this sense, the RBF model fulfills the following nonlinear mapping from the input sample to the output response

$$\tilde{f}(\mathbf{x}) = \sum_{i=1}^{N_s} w_i\, \Phi(r_i) \quad (6.13)$$

where $N_s$ denotes the number of samples, $w_i$ expresses the weighting coefficient, $r_i = \lVert \mathbf{x} - \mathbf{x}^i \rVert$ represents the Euclidean distance between the predicted point $\mathbf{x}$ and the sample point $\mathbf{x}^i$, and $\Phi(r)$ stands for the RBF with its shape parameters. According to the interpolation condition $\tilde{f}(\mathbf{x}^j) = f(\mathbf{x}^j)$, $j = 1, 2, \ldots, N_s$, Eq. (6.13) can be rewritten as follows

$$\mathbf{f} = \boldsymbol{\Phi}\, \mathbf{w} \quad (6.14)$$

where $\boldsymbol{\Phi} = [\Phi_{ij}] = [\Phi(\lVert \mathbf{x}^i - \mathbf{x}^j \rVert)]$, $i, j = 1, 2, \ldots, N_s$, is the interpolation matrix of order $N_s \times N_s$, $\mathbf{w}$ represents the $N_s$-dimensional weighting coefficient vector, and $\mathbf{f}$ is the $N_s$-dimensional response vector of the samples. If the inverse of $\boldsymbol{\Phi}$ exists, the weighting coefficient vector $\mathbf{w}$ can be written as

$$\mathbf{w} = \boldsymbol{\Phi}^{-1} \mathbf{f} \quad (6.15)$$
Substituting the deduced weighting coefficients into Eq. (6.13), the RBF surrogate model is established. In order to verify the effectiveness of the constructed surrogate model, extra testing points are extracted to evaluate its precision. If the precision of the RBF model does not meet the requirement, the testing points are added into the sample set to reconstruct and update the surrogate model. Uniform generation of the samples and testing points, together with optimal shape parameters, helps to improve the updating efficiency and approximating precision of the surrogate model [6, 7]. A minimal code sketch of the basic RBF construction of Eqs. (6.13)-(6.15) is given below, and the specific procedure for constructing the surrogate model based on the adaptive radial basis function (ARBF) [8] is expounded in the following context.
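Equations (6.13)-(6.15) translate directly into a short Python sketch using the multiquadric basis of Table 6.2. The helper names, the default shape parameter values, and the tiny usage example are illustrative assumptions.

```python
import numpy as np
from scipy.spatial.distance import cdist

def mq_basis(r, R=1.0, q=0.5):
    # Multiquadric RBF (r^2 + R^2)^q with shape parameters R and q (cf. Table 6.2).
    return (r ** 2 + R ** 2) ** q

def fit_rbf(X, y, R=1.0, q=0.5):
    # Interpolation matrix Phi_ij = Phi(||x_i - x_j||), Eq. (6.14),
    # and weights w = Phi^{-1} f, Eq. (6.15) (solved rather than explicitly inverted).
    Phi = mq_basis(cdist(X, X), R, q)
    return np.linalg.solve(Phi, y)

def predict_rbf(X_new, X, w, R=1.0, q=0.5):
    # Eq. (6.13): weighted sum of basis functions centred at the sample points.
    return mq_basis(cdist(X_new, X), R, q) @ w

# Minimal usage example on an arbitrary test function.
X = np.random.default_rng(0).random((20, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2
w = fit_rbf(X, y)
print(predict_rbf(np.array([[0.4, 0.6]]), X, w))
```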
(1) Select the sample and testing points: the optimal Latin hypercube sampling method is adopted to identify the initial samples, and the inherited Latin hypercube sampling method is applied to generate the testing points for the updating process.

(2) Determine the optimal shape parameters: the shape parameters of the RBF and the precision index of the surrogate model are set as the optimization variables and objective, respectively, and the optimal shape parameters are then derived by an optimization algorithm.

(3) Update the surrogate model: the precision of the RBF model is evaluated; if the precision is unsatisfactory, the testing points are added into the sample set, the surrogate model is reconstructed, and new testing points are regenerated until the precision of the surrogate model meets the requirement.
6.3.1 Selection of Sample and Testing Points
The aforementioned construction of the RBF model usually produces the initial samples through the optimal Latin hypercube design (OLHD) method [9]. The OLHD extends the optimization criteria on the basis of the LHD; it not only inherits the projection uniformity of the LHD, but also ensures the distribution uniformity in the sample space. The optimization criterion is usually chosen as entropy, maximin distance, centered L2 discrepancy, etc. Taking ten sample points in a two-dimensional design variable space as an example, the sampling results by LHD and OLHD are compared in Fig. 6.5. In Fig. 6.5, for both sampling methods, each design variable is divided into ten grids to prevent repetition of samples, so the projection uniformity is ensured when each sample is retrieved from an individual grid of the respective design variable. However, the sample uniformity by LHD is not satisfactory, while the samples by OLHD are uniformly distributed in the design variable space.
Fig. 6.5 Sampling results by LHD and OLHD: (a) LHD; (b) OLHD
In the precision evaluation and update procedure of the RBF surrogate model, the inherited Latin hypercube design (ILHD) method [10] is adopted to generate the testing points. In order to guarantee the distribution uniformity of the combination of the newly generated testing points and the existing sample points, the maximin distance criterion is specified as the optimization objective and the simulated annealing algorithm is employed to solve the optimization problem. The sampling process based on the ILHD demonstrates two favorable features: the newly generated testing points are uniformly projected onto each design variable and have the maximum distance to the existing sample points, and the combination of the new testing points and the old sample points still maintains the distribution uniformity and the projection uniformity in the whole design variable space. The sampling results by OLHD and ILHD in a two-dimensional design variable space are shown in Fig. 6.6, with the existing samples in Fig. 6.6a and the newly identified testing points by ILHD in Fig. 6.6b. It can be seen that all the samples are located in different grids and are uniformly projected onto the design variables. From Fig. 6.6c, the distances between the new testing points and the old sample points are relatively large, and the distribution of the combined samples is uniform in the whole design variable space. According to the interpolation characteristics of the RBF model, the error of the predicted response is smaller when the predicted point lies closer to the sample points. Thus, in the ILHD, the maximum of the minimum distance between the testing points and the sample points is adopted as the sampling optimization criterion, so that the generated testing points can effectively estimate the precision of the constructed RBF model. In case of unsatisfactory precision of the RBF model, the testing points and the sample points are combined to reconstruct the updated RBF model, which adds samples in the regions with relatively large error to improve the precision and efficiency of the surrogate model.
Fig. 6.6 Sampling results by OLHD and ILHD. a OLHD (Eight samples). b ILHD (Twelve samples). c Combination (Twenty samples)
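The following sketch mimics the ILHD idea in a simplified way: a Latin hypercube candidate set for the new testing points is chosen so as to maximize the minimum distance to the existing samples. A random multi-start search replaces the simulated annealing algorithm mentioned above, and strict inheritance of the combined Latin hypercube grid is not enforced; both simplifications are assumptions made only for illustration.

```python
import numpy as np
from scipy.stats import qmc
from scipy.spatial.distance import cdist

def inherited_lhs(X_old, n_new, n_trials=200, seed=0):
    """Pick n_new testing points in [0, 1]^d that keep Latin-hypercube projection
    uniformity and maximize the minimum distance to the existing samples X_old
    (maximin criterion). Random multi-start search is an implementation assumption."""
    rng = np.random.default_rng(seed)
    d = X_old.shape[1]
    best_pts, best_score = None, -np.inf
    for _ in range(n_trials):
        cand = qmc.LatinHypercube(d=d, seed=int(rng.integers(1 << 30))).random(n_new)
        score = cdist(cand, X_old).min()   # smallest distance from any new point to the old set
        if score > best_score:
            best_score, best_pts = score, cand
    return best_pts
```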
Table 6.2 General radial basis functions

No. | Function name | RBF | Shape parameter
1 | Gaussian, GS | $e^{-\alpha r^2}$ | α
2 | Inverse multi-quadric, IMQ | $(r^2 + \beta^2)^{-1/2}$ | β
3 | Cubic | $(r^2 + \lambda)^{3}$ | λ
4 | Logistic | $1/(1 + e^{\eta r})$ | η
5 | Multi-quadric, MQ | $(r^2 + R^2)^{q}$ | R, q
6.3.2 Optimization of the Shape Parameters

The precision of the RBF surrogate model is governed by the distribution of the samples, the form of the RBF, and the corresponding shape parameters. The RBF is a monotonic function with respect to the Euclidean distance. Different types of RBFs are listed in Table 6.2, among which the Gaussian function and the MQ function are widely applied. Different shape parameters result in different influence regions of the RBF. The developed methods for determining the shape parameters mainly include empirical methods and cross-validation methods. The deficiency of the empirical method generally lies in the designer's potentially insufficient comprehension of the problem. For the cross-validation method, the shape parameters are identified in view of the predicted precision of the surrogate model; in most cases the proficiency of the surrogate model is ensured, but inefficacy may arise due to numerical instability and a too large condition number of the interpolation matrix.

In order to amend the aforementioned deficiencies of these two methods for the selection of the shape parameters, an optimization method combined with the ILHD sampling is proposed to exploit the limited samples fully and appropriately identify the shape parameters for the construction of the optimal surrogate model. In the optimum selection process of the shape parameters, the testing points are specified according to the samples updated in the last iteration. The optimization problem is established by taking the error of the response predicted at the testing points by the RBF model constructed from the samples as the objective to be minimized and the shape parameters as the optimization variables. An optimization method, such as the genetic algorithm, is then adopted to derive the optimal shape parameters. The corresponding error of the predicted response can be adopted to evaluate the precision of the current surrogate model; a minimal sketch of this step is given below.
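The shape-parameter optimization can be sketched as follows, using the determination coefficient at the testing points as the objective (this coefficient is introduced formally in the next subsection). SciPy's differential_evolution stands in for the genetic algorithm named in the text, and fit_rbf/predict_rbf refer to the illustrative helpers sketched earlier in this section; all of these choices are implementation assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution

def r_squared(y_true, y_pred):
    # Determination coefficient R^2 used as the precision index of the surrogate model.
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

def optimal_shape_parameters(X, y, X_test, y_test):
    """Search the MQ shape parameters (R, q) that maximize R^2 at the testing points."""
    def neg_precision(theta):
        R, q = theta
        try:
            w = fit_rbf(X, y, R, q)
        except np.linalg.LinAlgError:
            return 0.0                      # badly conditioned interpolation matrix: poor score
        return -r_squared(y_test, predict_rbf(X_test, X, w, R, q))
    # Bounds follow the ranges 0 <= R <= 2 and 0 < q <= 3 stated for the MQ function.
    res = differential_evolution(neg_precision, bounds=[(0.0, 2.0), (1e-3, 3.0)], seed=0)
    return res.x, -res.fun
```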
6.3.3 RBF Model Updating Procedure

The update procedure of the adaptive RBF is shown in Fig. 6.7, and the specific steps are described as follows.
Fig. 6.7 Flowchart of the construction of ARBF model [8]
(1) Initialization. Specify the precision requirement of the RBF model $R^{2*}$, the form of the RBF, the value ranges of the shape parameters, the number of samples for constructing the surrogate model $N_s$, and the number of testing points for evaluating the precision of the surrogate model $N_t$. The following determination coefficient $R^2$ is applied to represent the precision of the surrogate model

$$R^2 = 1 - \frac{\sum_{i=1}^{N_t}\left( f(\mathbf{x}^i) - \tilde{f}(\mathbf{x}^i) \right)^2}{\sum_{i=1}^{N_t}\left( f(\mathbf{x}^i) - \bar{f}(\mathbf{x}) \right)^2} \quad (6.16)$$

where $f(\mathbf{x}^i)$, $\tilde{f}(\mathbf{x}^i)$, and $\bar{f}(\mathbf{x})$ denote the actual response, the response approximated by the surrogate model, and the mean value of the actual responses, respectively. $0 \le R^2 \le 1$, and the precision of the surrogate model increases with the increase of $R^2$. The ranges of the shape parameters are different for the different forms of RBF. For instance, the shape parameters of the MQ function are in the ranges $0 \le R \le 2$ and $0 < q \le 3$, while the shape parameters of the other types of RBF satisfy $0 < \alpha, \beta, \lambda, \eta \le 1$.
(2) Selection of the sample points based on the OLHD: derive $N_s$ samples through the OLHD and calculate the responses of the actual model at the sample points. Since the complexity of the functional relationship between the response and the design variables is unpredictable, the number of initial sample points is tentatively kept as small as possible.

(3) Selection of the testing points based on the ILHD: extract $N_t$ additional testing points besides the $N_s$ sample points through the ILHD and calculate the responses of the actual model at the testing points. The maximin distance between the $N_s$ sample points and the newly generated $N_t$ testing points is ensured by the ILHD.

(4) Optimization of the shape parameters: construct the RBF surrogate model from the $N_s$ samples, calculate the determination coefficient $R^2$ at the $N_t$ testing points, and evaluate the precision of the current surrogate model. Taking the determination coefficient as the optimization objective and the shape parameters as the optimization variables, the optimal shape parameters are derived by an optimization method such as the genetic algorithm. In the optimization iterations, if the condition number of the interpolation matrix $\boldsymbol{\Phi}$ is very large, the determination coefficient $R^2$ for the corresponding shape parameters is directly set to a small value.

(5) Check the convergence condition: check whether the precision index $R^2$ of the RBF model constructed from the $N_s$ samples and the optimal shape parameters meets the precision requirement. If $R^2 < R^{2*}$, the precision of the RBF model is insufficient and it should be updated; as the combination of the sample points and the testing points still distributes uniformly in the design variable domain, update the samples by $N_s = N_s \cup N_t$ and then return to step (3). If $R^2 \ge R^{2*}$, the current RBF model is considered to satisfy the precision requirement and the iteration is terminated. A schematic implementation of this loop is sketched below.
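The whole updating loop of steps (1)-(5) can be assembled from the earlier sketches as follows. This is a schematic outline under stated assumptions (the SciPy Latin hypercube sampler stands in for the OLHD, and fit_rbf, predict_rbf, optimal_shape_parameters, and inherited_lhs are the illustrative helpers defined above), not the authors' implementation.

```python
import numpy as np
from scipy.stats import qmc

def build_arbf(model, d, Ns=12, Nt=12, r2_target=0.90, max_iter=10, seed=0):
    """Adaptive RBF construction following steps (1)-(5); `model` is the expensive
    response function and the design space is normalized to [0, 1]^d (assumptions)."""
    X = qmc.LatinHypercube(d=d, seed=seed).random(Ns)          # step (2): initial (OLHD-like) samples
    y = np.array([model(x) for x in X])
    for it in range(max_iter):
        Xt = inherited_lhs(X, Nt, seed=seed + it + 1)          # step (3): ILHD testing points
        yt = np.array([model(x) for x in Xt])
        (R, q), r2 = optimal_shape_parameters(X, y, Xt, yt)    # step (4): optimal shape parameters
        if r2 >= r2_target:                                    # step (5): convergence check
            break
        X, y = np.vstack([X, Xt]), np.concatenate([y, yt])     # merge testing points and iterate
    w = fit_rbf(X, y, R, q)
    return X, w, (R, q), r2
```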
6.3.4 Numerical Examples

In order to test the performance of the ARBF, the test functions listed in Table 6.3 will be studied. The ARBF will be applied to construct the surrogate models, and the variations of the samples and the RBF shape parameters will be analyzed during the iterative construction process. The precision requirements of the RBF surrogate models for the test functions are set as $R^{2*}$ = 90%. The numbers of initial samples and testing points for functions 1 and 2 are given as Ns = Nt = 7, and those for function 3 are Ns = Nt = 12. The type of RBF is the MQ function and the ranges of its shape parameters are 0 ≤ R ≤ 2 and 0