Virtual Design and Validation (Lecture Notes in Applied and Computational Mechanics, 93) 3030381552, 9783030381554



Table of contents:
Preface
Contents
Experiments and Virtual Design
Digital Volume Correlation of Laminographic and Tomographic Images: Results and Challenges
1 Introduction
2 Challenges
2.1 Material Microstructure
2.2 CT Imaging and Artifacts
2.3 Volume of Data/Duration of Acquisition
2.4 Detecting Features Invisible to the Eye
3 Recent Solutions
3.1 Filtering 3D Images
3.2 Global DVC
3.3 Reduced Bases and Integrated DVC
3.4 Regularized DVC
3.5 Projection-Based DVC
4 Conclusion
References
Manufacturing and Virtual Design to Tailor the Properties of Boron-Alloyed Steel Tubes
1 Introduction
2 Material
3 Technological Process
4 Phase Transformations During Heat-Treatment
4.1 Austenite Formation During Inductive Heating
4.2 Austenite Decomposition During Spray Cooling
5 Non-destructive Microstructure Characterization
6 Virtual Design of Tube Heat-Treatment
7 Conclusions
References
Mathematical Modelling and Analysis of Temperature Effects in MEMS
1 Introduction
2 The Full Model
2.1 The Evolution of the Membrane
2.2 The Electrostatic Potential
2.3 The Temperature
2.4 Transformation of the System
2.5 Examples for the Temperature Dependence
3 The Small Aspect Ratio Limit
3.1 The System
3.2 Solution to the Thermal and Electrostatic Problems
3.3 Well-Posedness of the Deflection Problem
References
Multi-fidelity Metamodels Nourished by Reduced Order Models
1 Introduction
2 Multi-fidelity Kriging in a Nutshell
3 Presentation of the Mechanical Problem and the LATIN Solver
3.1 Mechanical Problem
3.2 Chaboche's Elasto-Viscoplastic Behavior Law
3.3 LATIN-PGD Solver
4 Presentation of the Test-Cases
4.1 Test-Case 1: Plate with Inclusion
4.2 Test-Case 2: Damping Part
5 Implementation of the Coupling Strategy
5.1 Correlation Between Error on the Quantity of Interest and LATIN Error Indicator
5.2 Method Used for Test Campaigns
6 Results of the Test Campaign
6.1 Full Generation of the Metamodel—Test-Case 1
6.2 Full Generation of the Metamodel—Test-Case 2
6.3 EGO Method for Finding the Global Optimum—Test-Case 1
6.4 EGO Method for Finding the Global Optimum—Test-Case 2
7 Conclusion
References
Application of Enhanced Peridynamic Correspondence Formulation for Three-Dimensional Simulations at Large Strains
1 Introduction
2 General Framework
3 Correspondence Formulation
3.1 Meshfree Discretisation
3.2 Time Integration
3.3 Stability Problems
3.4 Enhanced Formulation
4 Numerical Examples
4.1 Tension of a Rod
4.2 Punch Test
4.3 Torsion of Rod
5 Conclusion
References
Isogeometric Multiscale Modeling with Galerkin and Collocation Methods
1 Introduction
2 Basis Functions
2.1 B-Splines
2.2 Non-uniform Rational B-Splines
3 Computational Homogenization
3.1 Macroscopic Equilibrium Problem
3.2 Microscopic Equilibrium Problem
4 Isogeometric Formulation
4.1 Isogeometric Galerkin Formulation
4.2 Isogeometric Collocation Formulation
5 Numerical Examples
5.1 Microscale Problem
6 Conclusion
References
Composites
Experimental and Numerical Investigations on the Combined Forming Behaviour of DX51 and Fibre Reinforced Thermoplastics Under Deep Drawing Conditions
1 Introduction
2 Methodology
2.1 Experimental Procedure
2.2 Forming Tests
2.3 Numerical Methods
3 Results and Discussion
3.1 Material Characterization and Modelling
3.2 Hemispherical Dome Tests
4 Summary and Outlook
References
The Representation of Fiber Misalignment Distributions in Numerical Modeling of Compressive Failure of Fiber Reinforced Polymers
1 Introduction
1.1 Previous Work
1.2 Current Work
2 Measurement Data Driven Model Generation
2.1 Fiber Misalignment Distribution Generation from Experimentally Characterized Spectral Density
2.2 Generated Distributions Examples
2.3 Mapping of Fiber Misalignment Distribution to Numerical Model
3 Numerical Example Results
4 Concluding Remarks
References
A Multiscale Projection Method for the Analysis of Fiber Microbuckling in Fiber Reinforced Composites
1 Introduction
2 Fine Scale Modeling
3 Transversely Isotropic Material Model for the Coarse Scale
4 Geometrically Nonlinear Cohesive Element
4.1 Weak Formulation and Kinematics
4.2 Finite Element Discretization
5 Multiscale Modeling
6 Numerical Examples
7 Conclusion
References
Topology Optimization of 1-3 Piezoelectric Composites
1 Introduction
2 Constitutive Relation of the 1-3 Piezocomposite
3 Materials Microstructure Topology Optimization by the Level Set Method
4 Optimization Algorithm
5 Results
6 Conclusion
References
Fracture and Fatigue
Treatment of Brittle Fracture in Solids with the Virtual Element Method
1 Introduction
2 Governing Equations of Brittle Fracture
2.1 Basic Equations of Elastic Body
2.2 Crack Propagation Based on Stress Intensity Factors
2.3 Phase-Field Approach for Brittle Crack Propagation
3 Formulation of the Virtual Element Method
3.1 Ansatz Functions for VEM
3.2 Residual and Stiffness Matrix of the Virtual Elements
4 Construction of the Crack Path
5 Numerical Examples
5.1 Crack Propagation Using Phase-Field Approach
5.2 Crack Propagation Using Stress Intensity Factors
6 Conclusion
References
A Semi-incremental Scheme for Cyclic Damage Computations
1 Introduction
1.1 Notation
2 An Overview of the LATIN-PGD Method
2.1 Local Stage
2.2 Global Stage
3 Variable Amplitude and Frequency Loading
3.1 Hybrid Search Direction Formulation
4 Optimality of the Generated ROB
4.1 Randomised Singular Value Decomposition (RSVD) Compression of PGD Bases
5 Numerical Results
5.1 Model Verification
5.2 Comparison Between Deterministic and Randomised SVD Schemes
5.3 Variable Amplitude and Frequency Loading
6 Conclusions
References
Robust Contact and Friction Model for the Fatigue Estimate of a Wire Rope in the Mooring Line of a Floating Offshore Wind Turbine
1 Introduction
2 Detailed Wire Rope Model
2.1 A New Contact and Friction Element for Wire Rope
2.2 Comparison to Analytical and Surface to Surface Solutions
2.3 Boundary Conditions
2.4 Wire Rope Properties
3 Application to a FOWT Model
3.1 Global Hydrodynamic FOWT Model
3.2 Global Model Results
3.3 Results of the Detailed Wire Rope Model
4 Conclusion
References
Micromechanically Motivated Model for Oxidation Ageing of Elastomers
1 Introduction
2 Current State of the Art
3 Network Degradation Dynamics
3.1 Compartment Model
3.2 Reduction Factor
4 Continuum Model
4.1 Primary Network Stress
4.2 Secondary Network Stress
4.3 Kirchhoff Stress and Material Tangent
5 Numerical Results
5.1 Stress Softening and Stiffening
5.2 Network Degradation and Permanent Set
5.3 Finite Element Example
6 Conclusion
References
Uncertainty Quantification
A Bayesian Approach for Uncertainty Quantification in Elliptic Cauchy Problem
1 Introduction
2 The Steklov-Poincaré Approach for the Cauchy Problem
2.1 Forward and Cauchy Problems in Linear Elasticity
2.2 The Steklov-Poincaré Method
2.3 Conjugate Gradient and Ritz Values Computation
3 Bayesian Inference for the Cauchy Problem
3.1 Bayes' Theory and Its Application in the Linear Gaussian Case
3.2 Application to the Cauchy Problem
4 Reduction by Ritz Modes
5 Numerical Example
6 Conclusion and Perspectives
References
On-the-Fly Bayesian Data Assimilation Using Transport Map Sampling and PGD Reduced Models
1 Introduction
2 Posterior Sampling in Bayesian Data Assimilation
2.1 Basics on Bayesian Inference
2.2 Transport Map Sampling
3 PGD Model Order Reduction in Bayesian Inference
3.1 Basics on PGD
3.2 Transport Map Sampling with PGD Models
4 Illustrative Example
4.1 Inference Problem
4.2 PGD Solution
4.3 Sequential Data Assimilation
4.4 Uncertainty Propagation on Outputs of Interest
5 Conclusions and Prospects
References
Stochastic Material Modeling for Fatigue Damage Analysis
1 Introduction
2 Deterministic Modelling of Fatigue Damage
2.1 Material Model
2.2 Numerical Approach
3 Stochastic Modelling of Fatigue Damage
3.1 From a Deterministic to a Stochastic Process
3.2 Numerical Properties of the Damage Process
3.3 Proposed Diffusion Random Process
3.4 Drift Term
3.5 Diffusion Term
3.6 Introduction of the Stochastic Damage Model in the Finite-Element Framework
4 Numerical Example
4.1 Influence of the Numerical Parameters on the Stochastic Results
4.2 Evolution of Stochastic Fatigue Damage
4.3 Virtual S-N Curves
5 Summary
References

Lecture Notes in Applied and Computational Mechanics 93

Peter Wriggers · Olivier Allix · Christian Weißenfels (Editors)

Virtual Design and Validation

Lecture Notes in Applied and Computational Mechanics Volume 93

Series Editors Peter Wriggers, Institut für Kontinuumsmechanik, Leibniz Universität Hannover, Hannover, Niedersachsen, Germany Peter Eberhard, Institute of Engineering and Computational Mechanics, University of Stuttgart, Stuttgart, Germany

This series aims to report new developments in applied and computational mechanics—quickly, informally and at a high level. This includes the fields of fluid, solid and structural mechanics, dynamics and control, and related disciplines. The applied methods can be of analytical, numerical and computational nature. The series scope includes monographs, professional books, selected contributions from specialized conferences or workshops, edited volumes, as well as outstanding advanced textbooks. Indexed by EI-Compendex, SCOPUS, Zentralblatt Math, Ulrich’s, Current Mathematical Publications, Mathematical Reviews and MetaPress.

More information about this series at http://www.springer.com/series/4623

Peter Wriggers · Olivier Allix · Christian Weißenfels (Editors)

Virtual Design and Validation



Editors Peter Wriggers Institut für Kontinuumsmechanik Leibniz Universität Hannover Hannover, Germany

Olivier Allix LMT-Cachan Cachan, France

Christian Weißenfels Institut für Kontinuumsmechanik Leibniz Universität Hannover Hannover, Germany

ISSN 1613-7736 ISSN 1860-0816 (electronic) Lecture Notes in Applied and Computational Mechanics ISBN 978-3-030-38155-4 ISBN 978-3-030-38156-1 (eBook) https://doi.org/10.1007/978-3-030-38156-1 © Springer Nature Switzerland AG 2020 This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Preface

Particularly in the aerospace and automotive industries, the need for weight-optimized structures is growing to ensure low-consumption operation. Simultaneously, the load-bearing capacity and safety requirements must be guaranteed. This requires new materials and innovative structures. The vision in engineering is a purely virtual development of new materials, components and assemblies on the computer. Compared to an experimentally based development, simulation-driven engineering enables an acceleration of the development process while reducing costs. Different scenarios can be checked using virtual computer simulations until the optimal material or structure is found. Changes on the micro- or nanoscale enable the virtual creation of new materials. The behavior of these materials can be tested directly on the computer under real conditions. Simulations can also be used to predict changes in the materials over a longer period of time. For the structural change towards a purely digital development process in industry, mathematical models are needed which describe the behavior of materials and structures under loading realistically, as well as suitable algorithms and solution methods which solve these equations accurately and efficiently. This book contains 17 articles that have emerged from selected works of the International Research Training Group IRTG 1627. A special focus of the German–French cooperation is the experimental characterization of materials and their numerical modeling, as well as the development of new computational methods for virtual design. The selected contributions are assigned to four thematic areas: experiments and virtual design, composites, fracture and fatigue, and uncertainty quantification. The first area relates to new experimental methods that can be used to characterize material behavior more accurately. Furthermore, a combined experimental and numerical approach is presented to optimize the properties of a structure. Besides the modeling of MEMS, new developments in the field of computational methods for virtual design are presented. The second topic is dedicated to experimental and numerical investigations of composites. A special focus is on the modeling of failure modes and the optimization of these materials.


New numerical schemes in the field of crack modeling and fatigue prediction are discussed in the third area. Fatigue here also includes wear due to frictional contact and the ageing of elastomers. The input parameters of a numerical simulation classically represent mean values of real observations; in practice, however, more or less pronounced deviations occur. To account for such parameter uncertainties in calculations, new and efficient approaches are presented in the last section. All contributions provide a good overview of state-of-the-art approaches and future developments in the field of virtual design.

Peter Wriggers, Hannover, Germany
Olivier Allix, Cachan, France
Christian Weißenfels, Hannover, Germany

Contents

Experiments and Virtual Design

Digital Volume Correlation of Laminographic and Tomographic Images: Results and Challenges
Amine Bouterf, Ante Buljac, François Hild, Clément Jailin, Jan Neggers and Stéphane Roux

Manufacturing and Virtual Design to Tailor the Properties of Boron-Alloyed Steel Tubes
Illia Hordych, Sebastian Herbst, Florian Nürnberger, Viacheslav Boiarkin, Olivier Hubert and Hans Jürgen Maier

Mathematical Modelling and Analysis of Temperature Effects in MEMS
Joachim Escher and Tim Würth

Multi-fidelity Metamodels Nourished by Reduced Order Models
S. Nachar, P.-A. Boucard, D. Néron, U. Nackenhorst and A. Fau

Application of Enhanced Peridynamic Correspondence Formulation for Three-Dimensional Simulations at Large Strains
P. Hartmann, C. Weißenfels and P. Wriggers

Isogeometric Multiscale Modeling with Galerkin and Collocation Methods
Milad Amin Ghaziani, Josef Kiendl and Laura De Lorenzis

Composites

Experimental and Numerical Investigations on the Combined Forming Behaviour of DX51 and Fibre Reinforced Thermoplastics Under Deep Drawing Conditions
Bernd-Arno Behrens, Alexander Chugreev and Hendrik Wester

The Representation of Fiber Misalignment Distributions in Numerical Modeling of Compressive Failure of Fiber Reinforced Polymers
N. Safdar, B. Daum, R. Rolfes and O. Allix

A Multiscale Projection Method for the Analysis of Fiber Microbuckling in Fiber Reinforced Composites
S. Hosseini, S. Löhnert, P. Wriggers and E. Baranger

Topology Optimization of 1-3 Piezoelectric Composites
Chuong Nguyen, Xiaoying Zhuang and Ludovic Chamoin

Fracture and Fatigue

Treatment of Brittle Fracture in Solids with the Virtual Element Method
A. Hussein, P. Wriggers, B. Hudobivnik, F. Aldakheel, P.-A. Guidault and O. Allix

A Semi-incremental Scheme for Cyclic Damage Computations
Shadi Alameddin, Amélie Fau, David Néron, Pierre Ladevèze and Udo Nackenhorst

Robust Contact and Friction Model for the Fatigue Estimate of a Wire Rope in the Mooring Line of a Floating Offshore Wind Turbine
F. Bussolati, P.-A. Guidault, M. L. E. Guiton, O. Allix and P. Wriggers

Micromechanically Motivated Model for Oxidation Ageing of Elastomers
Darcy Beurle, Markus André, Udo Nackenhorst and Rodrigue Desmorat

Uncertainty Quantification

A Bayesian Approach for Uncertainty Quantification in Elliptic Cauchy Problem
Renaud Ferrier, Mohamed Larbi Kadri, Pierre Gosselet and Hermann G. Matthies

On-the-Fly Bayesian Data Assimilation Using Transport Map Sampling and PGD Reduced Models
Paul-Baptiste Rubio, Ludovic Chamoin and François Louf

Stochastic Material Modeling for Fatigue Damage Analysis
W. Zhang, A. Fau, U. Nackenhorst and R. Desmorat

Experiments and Virtual Design

Digital Volume Correlation of Laminographic and Tomographic Images: Results and Challenges

Amine Bouterf, Ante Buljac, François Hild, Clément Jailin, Jan Neggers and Stéphane Roux

Abstract Although digital volume correlation (DVC) seems to be a simple extension of digital image correlation to 3D situations, new challenges arise. The first problem is that the actual microstructure of the material can hardly be varied overall to improve the contrast. The applicability of the technique may therefore seem limited to a restricted class of materials. Artifacts during image acquisition and/or reconstruction potentially have a more dramatic effect than noise on the calculated displacement field. Because the acquisition time is generally long, the time sampling is very low compared to the spatial sampling. Moreover, experiments have to be interrupted during acquisition to “freeze” out motions, thereby making time-dependent behavior out of reach. Handling large amounts of data is another challenge. To cope with these complications, specific developments must be designed, the most important of which are mechanics-based regularizations that constrain the desired field of motion to compensate for the adverse effects of noise, artifacts and/or poor texture. With such strategies, DVC offers an unprecedented wealth of information to analyze the mechanical behavior of a large class of materials.

1 Introduction

Digital volume correlation (DVC) allows for the measurement of displacement fields from 3D images acquired, for example, with X-ray tomography (X-CT) systems [1]. Similarly, imaging thin plates (as opposed to stick-like samples in tomography) is possible via synchrotron laminography [2]. DVC is a direct extension of digital image correlation (DIC), and therefore different strategies developed in 2D were generalized to 3D with similar weaknesses or advantages [3, 4].

A. Bouterf · A. Buljac · F. Hild (B) · C. Jailin · J. Neggers · S. Roux Université Paris-Saclay, ENS Paris-Saclay, CNRS, Laboratoire de mécanique et technologie, 61 avenue du Président Wilson, 94235 Cachan Cedex, France e-mail: [email protected] © Springer Nature Switzerland AG 2020 P. Wriggers et al. (eds.), Virtual Design and Validation, Lecture Notes in Applied and Computational Mechanics 93, https://doi.org/10.1007/978-3-030-38156-1_1


In the same way that DIC is gradually becoming routine in solid mechanics, it is likely that DVC will experience a similar development over the next decade, as tomographs become more and more common instruments [1, 5]. The invaluable information provided by tomography in the field of materials science has mainly been focused on 3D imaging of various microstructures. Using 3D image analysis techniques, access to the constitutive phases of a material, their morphology, and the statistical characterization of the number, size and shape of inclusions, pores and grains has been the subject of much attention, and this effort has been very rewarding [6–8]. A second very attractive use of computed tomography in the industrial context is shape metrology [9]. This application pushed tomography to become even more quantitative than simple imaging required. Since this is now also achieved by laboratory tomographs, it is very tempting to follow the deformation of a specimen during mechanical loading, so that in situ mechanical tests constitute an expanding field of investigation [10]. In all these areas, DVC is the preferred technique for measuring three-dimensional displacement fields and coupling experiments with modeling [11–13]. This chapter highlights the new challenges specifically related to DVC and the recently implemented developments to address these demands.

2 Challenges

2.1 Material Microstructure

The first generic difficulty is that the actual microstructure of the material can hardly be modified in bulk in order to improve its X-ray absorption contrast. Some attempts have been made (e.g., adding particles [14] or revealing the boundaries of grains [15]), but at the risk that the material behaves differently than without contrast enhancement. This is a major difference from 2D-DIC, in which a homogeneous or transparent material can always be painted with a speckle pattern, whose homogeneity, correlation length and contrast the experimenter can adjust more or less at will [3]. In synchrotron facilities, the use of phase contrast can reinforce the differences between the material phases and make them visible on radiographs [16–18]. Such severe conditions push DVC to deal with microstructures that can be very faint, with very few contrasting phases, possibly with a low contrast compared to reconstruction artifacts (Fig. 1c). Robustness when confronted with such sparse contrast is a major challenge that a priori limits the use of DVC to specific microstructures (e.g., biological tissues [19], foams [20] and cellular materials (Fig. 1a), or materials with a large proportion of inclusions [11, 21, 22], see Fig. 1b).


Fig. 1 3D rendering of three different textures. a Plaster with a volume fraction of void of 56% (1 voxel ↔ 48 µm). b Spheroidal graphite cast iron with a 13% volume fraction of graphite nodules (1 voxel ↔ 5.1 µm). c Aluminum alloy with 0.3–0.4% of secondary particles (1 voxel ↔ 0.7 µm)

2.2 CT Imaging and Artifacts

Tomographic imaging itself poses new challenges, namely that artifacts arising during image acquisition or reconstruction can affect the calculated displacement field. For example, in a laboratory tomographic scanner, a minute shift of the source may cause a change in magnification of the image, and therefore a uniform apparent expansion not to be confused with mechanical strains [23]. For this reason, many recent laboratory systems have thermal regulation that limits this bias. Yet, an experiment lasting several days remains quite difficult. Further, a defective pixel on the detector, a bad “flat field” normalization or, more generally, any deviation from the underlying assumptions (such as displacements occurring during the scan) usually induces “ring artifacts” [24]. The presence of reconstruction artifacts (such as rings) becomes very detrimental when the contrast due to the natural microstructure of the material is low. It is therefore very important to treat these cases in order to make DVC useful for a wider class of materials. In addition to progress on the reconstruction technique itself, one solution is to digitally filter the images before DVC analyses are run (e.g., ring artifacts can be significantly reduced [25]). This filtering can, however, also affect the true microstructure. Another challenge lies in the fact that the actual (reliable) microstructure after filtering may consist of a very faint texture (i.e., low gray level gradients or a very dilute support, see Fig. 1c). To solve this problem while preserving an acceptable spatial resolution, appropriate regularization strategies are needed (Sects. 3.3 and 3.4).

2.3 Volume of Data/Duration of Acquisition

Another limitation comes from the huge amount of information contained in 3D images. This challenge is first met at the acquisition stage: a complete scan may take up to an hour or more in some cases, a duration over which creep may be responsible for significant motion between the first and the last radiographs. This bias makes proper standard reconstructions difficult, if not impossible [26]. The limitation can be partly overcome at third generation synchrotron facilities, since very energetic beams allow for very short acquisition times (e.g., 45 s [27] and 4 s [28] in the case of thermomechanical tests). When high-speed cameras were utilized, full scans could be performed at a 20 Hz frequency. When standard reconstruction algorithms are used, the quality of the reconstructed volumes degrades in comparison with slower acquisitions, since these algorithms do not account for the minute motions induced by the vibrations of the rotation stage [29]. These observations call for reconstruction procedures that account for motions during acquisition [26]. Second, at the reconstruction stage, the cost of data processing favors fast techniques (such as the well-known Filtered Back-Projection algorithm [30]) over algebraic variants, which are more reliable [31] but computationally more demanding. DVC itself also involves a large amount of data, and efficient data processing becomes essential. When regularized or integrated strategies are considered (see below), this demand becomes even more severe. Effective GPU implementations can be a solution [32]. Other strategies may also be envisaged, which require a very significant reduction in the number of radiographs [33]. This last option is extremely appealing because it allows for much faster data acquisitions without having to spin the sample too fast. However, this approach complements missing projection data with a priori information, and its compatibility with DVC registration, where subvoxel resolution is aimed for, remains to be validated.

2.4 Detecting Features Invisible to the Eye

Another difficulty is that some features of interest for the mechanical analysis may not be visible because of insufficient spatial resolution, or because some parts of the texture (e.g., particles useful for DVC) exhibit the same gray levels as the desired feature (Fig. 1b). An example of this category is a crack opening that cannot be detected when its magnitude is not comparable to the size of the voxel (Fig. 2b). The ability of DVC to detect motions much smaller than the voxel size (i.e., of the order of 10^-1 voxel) is an advantage that was used to evaluate stress intensity factors (see Sect. 3.2). In addition, new techniques were specifically designed for analyzing crack openings and for locating the position of the crack front in 3D long before it could be seen on a single image [11].


Fig. 2 a 3D rendering of correlation residuals clearly locating the crack surface. b Thresholded gray levels in the vicinity of the crack surface. It is impossible to locate the crack surface because the nodules and the crack have similar gray levels (Fig. 1b)

3 Recent Solutions

3.1 Filtering 3D Images

The 3D images are reconstructed from radiographs acquired at different orientations of the sample with respect to the beam, so that artifacts may arise. This is especially true when the reconstruction method is known to be approximate (e.g., far from the median plane with fan-beam or cone-beam tomography, or in laminography (Fig. 3)). Moreover, phase contrast between the particles and the matrix leads to a spurious “aura” around each particle that fades away with distance. This pattern is fake, but it moves together with the particle (yet it does not rotate with it). Hence it helps DVC, but can only be trusted if local rotations are not sought. Filtering of the reconstructed images to erase the ring artifacts while preserving the microstructure can be performed as illustrated in Fig. 3b. As a result, the gray level histogram, which was already rather poor for DVC purposes, becomes even narrower (Fig. 4). Assuming that this thinning of the histogram is mainly due to the correction of artifacts, it shows that relying on the texture of the direct reconstructions may lead to false evaluations of the displacements, because the rings do not necessarily follow the kinematics of the material. For this particular case, the initial amount of inclusions in the material was estimated to be about 0.3–0.4 vol%. It should be noted that a direct attempt to use DVC on the filtered images does not converge: such a poor microstructural texture exacerbates the ill-posed nature of DVC. In a forthcoming section, it will be shown that DVC can be used on unfiltered images, because in this particular case the artifacts come from phase contrast and are therefore advected with the inclusions. Note, however, that the rings do not rotate with the microstructure of the material and are therefore fragile markers for DVC purposes.


Fig. 3 3D rendering of an aluminum alloy with 0.3–0.4 vol% of secondary particles (1 voxel ↔ 0.7 µm). a Raw reconstructed volume, and b filtered reconstruction

Fig. 4 Gray level histograms of aluminum alloy laminographies (Fig. 3). a Raw reconstructed volume, and b filtered reconstruction
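The chapter does not spell out which filter was applied; reference [25] compares several ring-correction methods. As a rough sketch only (the helper name and the numpy/scipy implementation are assumptions, not the authors' code), one common approach resamples each reconstructed slice in polar coordinates, where rings centered on the rotation axis become rows, and subtracts the angularly invariant component:

```python
import numpy as np
from scipy import ndimage

def suppress_rings(slice_2d, n_theta=720, size=31):
    """Attenuate ring artifacts in one reconstructed slice (a sketch).

    Rings are circles centered on the rotation axis, so they become
    horizontal stripes in polar coordinates. Estimating each stripe as
    the median along the angular direction and subtracting a per-radius
    offset removes the rings while (mostly) preserving the texture.
    """
    ny, nx = slice_2d.shape
    cy, cx = (ny - 1) / 2.0, (nx - 1) / 2.0
    r = np.arange(int(min(cy, cx)))
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    # Sample the slice on a polar grid (rows: radius, columns: angle)
    yy = cy + np.outer(r, np.sin(theta))
    xx = cx + np.outer(r, np.cos(theta))
    polar = ndimage.map_coordinates(slice_2d, [yy, xx], order=1)
    # A ring shows up as a sharp deviation of the per-radius median
    # from a smooth radial trend
    row_med = np.median(polar, axis=1)
    smooth = ndimage.median_filter(row_med, size=size)
    ring_profile = row_med - smooth          # per-radius ring intensity
    # Subtract the estimated ring intensity at the matching radius
    rad = np.hypot(np.arange(ny)[:, None] - cy, np.arange(nx)[None, :] - cx)
    correction = np.interp(rad, r, ring_profile, left=0.0, right=0.0)
    return slice_2d - correction
```

Applied slice by slice, such a correction removes an additive, microstructure-independent component; the narrowing of the histogram in Fig. 4b is consistent with suppressing exactly this kind of contribution.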

3.2 Global DVC

Parallel to the distinction between local and global DIC strategies [34, 35], the two propositions can simply be extended to DVC. In fact, from a theoretical point of view, spatial dimensionality does not play a key role in image registration. Local DVC consists in analyzing sub-volumes and determining their average displacements by registering each deformed sub-volume against the corresponding reference [19, 36]. These corrections may involve a linear displacement gradient, although only the average displacement is retained. Global DVC consists in seeking the “best” displacement field in a specified kinematic vector space defined over the whole region of interest. A common choice of this space is given by the shape functions of finite elements [20], thus providing a natural interface to numerical simulations [11], and where mature meshing techniques allow for an arbitrarily fine fit to the sample boundaries (and potentially the microstructure if wanted [12, 13]). The “best” displacement field is the argument that minimizes a functional quantifying image registration. The price to be paid for this formulation is that the unknowns (i.e., nodal displacements) are coupled, unlike the local approach where each sub-volume is treated independently. Thus, parallel processing is simple for the local approach, but requires skills from the field of computational mechanics for global DVC [32]. Among the different variants, a very generic one is based on finite element decompositions of the displacement field, with an image-driven mesh of C8 elements (8-node cubes with trilinear displacement shape functions). However, unstructured meshes based on the imaged microstructure were also implemented in such a global framework (e.g., 4-noded tetrahedra with linear displacement interpolations [12]).

The material studied below is light gypsum. Cylinders (17 mm in diameter) were extracted from industrial plasterboard plates. In situ experiments with a spherical indenter (6 mm in diameter) were performed. The specimen was imaged during unloading, and loaded at nine different levels [37]. After each step, the crosshead was held still for a 20-min dwell to allow relaxation to take place before acquiring a new scan. Figure 5 shows the reconstructed volumes of the sample observed by CT for the reference configuration (Fig. 5a) and the eighth load level (Fig. 5b). The compacted area is clearly visible in the upper part of the 3D rendering. It was masked in the following DVC analyses because gray level conservation was not satisfied there. First, a C8-DVC analysis was performed for an area close to the compacted region. The corresponding longitudinal displacement field is shown in Fig. 7a for a discretization with 6-voxel elements. Displacement fluctuations are clearly visible because the dynamic range is very small (i.e., less than 2 voxels), due to the fact that, apart from the compacted volume, the plaster remained essentially elastic. Many of these fluctuations were due to measurement uncertainties associated with very small element sizes. This first analysis shows that even with very small element sizes, global DVC provided a good first estimate of the displacement field. However, it was corrupted by measurement uncertainties, and strains cannot be evaluated faithfully on such a small scale. In the next section, an alternative route will be followed.

Fig. 5 3D renderings of the observed volume of interest. a Reference configuration, and b deformed configuration
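To make the minimization concrete, the following Gauss-Newton sketch (hypothetical function dvc_amplitudes; not the authors' code) registers two volumes over an arbitrary kinematic basis under gray level conservation, f(x) = g(x + u(x)). Stacking C8 shape functions into `basis` mimics C8-DVC on a structured mesh, while stacking simulation-derived fields anticipates the reduced bases of Sect. 3.3:

```python
import numpy as np
from scipy import ndimage

def dvc_amplitudes(f, g, basis, n_iter=30, tol=1e-6):
    """Gauss-Newton DVC over a given kinematic basis (a sketch).

    f, g  : reference and deformed 3D images (same shape).
    basis : array of shape (n, 3) + f.shape; basis[k] is the k-th
            displacement field (one 3-vector per voxel). FE shape
            functions or simulation-derived fields both fit this format.
    Minimizes sum_x [f(x) - g(x + u(x))]^2 with u = sum_k a_k basis[k].
    """
    grad_f = np.array(np.gradient(f))                 # (3,) + f.shape
    n = len(basis)
    # Sensitivities s_k(x) = grad f(x) . basis_k(x), flattened over voxels
    S = (grad_f[None] * basis).sum(axis=1).reshape(n, -1)
    M = S @ S.T                                       # Gauss-Newton matrix
    coords = np.indices(f.shape).astype(float)
    a = np.zeros(n)
    for _ in range(n_iter):
        u = np.tensordot(a, basis, axes=1)            # (3,) + f.shape
        g_warp = ndimage.map_coordinates(g, coords + u, order=1)
        res = (f - g_warp).ravel()                    # gray level residual
        da = np.linalg.solve(M, S @ res)
        a += da
        if np.linalg.norm(da) <= tol * (np.linalg.norm(a) + 1e-12):
            break
    return a, res.reshape(f.shape)
```

Because the Gauss-Newton matrix is built from the (frozen) reference-image gradient, it is assembled once and each iteration only costs one warping of the deformed volume; this frozen-gradient structure is what global DVC solvers typically exploit.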


Fig. 6 a Profiles of the K_I, K_II and K_III stress intensity factors determined experimentally by postprocessing X-DVC results, and numerically via X-FEM analyses. b 3D rendering of the longitudinal displacement field calculated by X-FEM in the cracked nodular graphite cast iron (Fig. 1b). The scale bar is expressed in micrometers. The boundary conditions used for the calculation correspond to the experimental displacements obtained by X-DVC (after [38])

The following analysis focuses on nodular graphite cast iron (Fig. 1b). A larger sample was first pre-cracked in fatigue. A smaller sample was cut from it and then tested in situ at different propagation stages [38]. After 45 kcycles, scans were acquired at maximum and minimum load. The displacement field was first measured via C8-DVC. Using the correlation residuals (as shown in Fig. 2a), an enriched kinematics was sought based on the same kinematic assumptions as in eXtended Finite Element Methods (X-FEM) [39, 40]; the approach is called eXtended Digital Volume Correlation (X-DVC) [11, 38, 41]. The crack front was determined by projecting the measured displacement field onto Williams' series, in particular by canceling out the amplitude associated with the first supersingular field. Another output of this procedure was the stress intensity factor profiles (Fig. 6a). The latter were compared with profiles predicted with the same cracked surface, the same front and the measured boundary conditions applied to the upper and lower faces of the considered volume (Fig. 6b). A very good agreement was observed between the experimentally determined and numerically predicted stress intensity profiles. Such a result shows the interest of combining advanced experimental and numerical tools to analyze 3D cracks.
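For orientation, the projection onto Williams' series can be illustrated in a 2D, mode-I, plane-strain setting (the actual X-DVC analysis uses the full 3D series, including supersingular terms to locate the front). The sketch below, with hypothetical helper extract_KI, fits K_I and rigid-body terms to near-tip displacement data by linear least squares:

```python
import numpy as np

def extract_KI(x, y, ux, uy, mu, nu):
    """Least-squares projection of measured near-tip displacements onto
    the leading mode-I Williams field plus 2D rigid-body terms.

    x, y   : crack-tip-centered coordinates (numpy arrays; crack along
             y = 0, x < 0).
    ux, uy : measured displacement components at those points.
    mu, nu : shear modulus and Poisson's ratio.
    """
    kappa = 3.0 - 4.0 * nu                 # plane-strain Kolosov constant
    r = np.hypot(x, y)
    th = np.arctan2(y, x)
    c = np.sqrt(r / (2.0 * np.pi)) / (2.0 * mu)
    # Leading (K_I) displacement field of the Williams expansion
    fx = c * np.cos(th / 2) * (kappa - 1 + 2 * np.sin(th / 2) ** 2)
    fy = c * np.sin(th / 2) * (kappa + 1 - 2 * np.cos(th / 2) ** 2)
    # Design matrix: [K_I field | x-translation | y-translation | rotation]
    zeros, ones = np.zeros_like(x), np.ones_like(x)
    A = np.block([[fx[:, None], ones[:, None], zeros[:, None], -y[:, None]],
                  [fy[:, None], zeros[:, None], ones[:, None],  x[:, None]]])
    b = np.concatenate([ux, uy])
    amp, *_ = np.linalg.lstsq(A, b, rcond=None)
    return amp[0], amp[1:]                 # K_I and rigid-body amplitudes
```

With supersingular terms added as extra columns of the design matrix, the same projection yields the tip-position correction that is used to locate the crack front.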

3.3 Reduced Bases and Integrated DVC

The analysis of displacements, both for local and global DVC, typically results from a compromise between spatial resolution and displacement uncertainty [32, 42]. To enhance the spatial resolution, smaller sub-volumes (local DVC) or elements (global DVC) are to be used to allow for local adjustments. However, this leads to an increase in the number of unknowns, and hence less available information for each one. Consequently, the measurement uncertainty increases, and so does the sensitivity to noise. This may be tolerable for displacement measurements, but soon becomes harmful to the determination of strain fields. One possibility to circumvent this problem is to design a reduced basis, i.e., to select far fewer degrees of freedom while remaining faithful to the actual displacement within the specimen. The advantage of this approach is that one may easily benefit from prior knowledge of the expected displacement field. However, this knowledge can rarely be expressed as analytical solutions, since they are very scarce in 3D situations. Conversely, a specific basis can be built numerically as soon as the “true” unknowns of the problem are parameterized using any finite-element code [12, 43, 44]. To exemplify this concept, let us revert to the plasterboard indentation example of the previous section. The specimen was expected to behave elastically away from the indentation. The “true” unknowns describe the loading along the boundary of the elastic domain. When a spherical harmonics decomposition of the unknown displacement on the surface of the crushed region was performed, a truncation to low order terms constituted a suitable reduced basis with fewer than 10 degrees of freedom [43]. This displacement basis was obtained from a finite element simulation of the whole sample, assumed to behave elastically. The DVC results with this reduced kinematic basis are shown in Fig. 7b. Many of the fluctuations were filtered out when compared to a standard C8-DVC result (Fig. 7a), yet the long wavelength features were still captured. Because the correlation residuals in both analyses were virtually identical, the DVC results with the reduced basis were deemed trustworthy.

Fig. 7 3D rendering of the longitudinal displacement field measured during an indentation test on plasterboard (Fig. 1a). The zone where the material was compacted was removed. a Standard DVC approach. b DVC approach with reduced basis. The scale bar is expressed in voxels (1 voxel ↔ 48 µm)

The previous analyses can be extended to cases in which the parameters of constitutive models become the unknowns of the registration problem (as were the boundary conditions in the above example). Consequently, the sought displacement fields are parameterized by these unknowns (and no longer by the nodal displacements) and can be computed via finite element analyses. Such fields then satisfy equilibrium, compatibility and the selected constitutive equation. Approaches that use such mechanically admissible solutions are referred to as integrated DVC [12]. Reduced bases are then constructed from the sensitivity fields, namely the spatiotemporal derivatives of the displacement field with respect to the sought material parameters. Non-intrusive procedures were devised in which the 4D sensitivity fields are obtained with existing (commercial or academic) finite element codes, thereby allowing for great versatility in meshing and in the incorporation of complex constitutive laws. Further, model order reduction techniques such as the Proper Orthogonal Decomposition [45] can be applied to these sensitivity fields [46]. Proper Generalized Decompositions (PGD [47–51]) are an alternative route that is worth investigating.

Given the fact that microtomography and laminography enable micrometer resolutions, frameworks combining in situ experiments and numerical simulations (Fig. 8) can be devised to validate calculations at the microscopic scale [13]. In such analyses, the region of interest in the reconstructed volume is analyzed by DVC to measure displacement fields. Finite element calculations, which took into account the details of the microstructure of cast iron (µFE [52]), were carried out using boundary conditions extracted from DVC calculations, a primary quantity for validating such simulations [53]. The correlation residuals of DVC and coupled DVC-µFE were then used for validating the measurements and numerical simulations. These independent results could then be compared to assess the predictive capacity of the selected models [13, 53].

Fig. 8 Schematic representation of procedures that can be used for validation and identification purposes of numerical simulations at the microscopic scale using 3D images (adapted from [44])


An identification procedure can be devised by extending the previous framework to determine (microscopic) constitutive parameters for the analysis of, say, ductile damage. It consists in the full integration of sensor (e.g., load cell) and imaging data into numerical procedures, for the purpose of reducing both the gray level residuals and the gap between measured and computed loads (Fig. 8). Such integrated approaches were applied at meso- and micro-scales [5, 12, 44]. For instance, the three mechanisms of ductile damage (i.e., nucleation, growth and coalescence) were analyzed using numerical simulations at the microscopic scale [44].
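Schematically, one identification iteration may look as follows. This is only a sketch (hypothetical run_fe and warp_residual callables standing in for the external FE code and the DVC warping), not the authors' implementation, and it keeps only the image term of the cost function for brevity:

```python
import numpy as np

def integrated_dvc_step(p, f, g, run_fe, warp_residual, dp=1e-3):
    """One Gauss-Newton step of an integrated-DVC identification (sketch).

    p             : current constitutive parameters (1D array).
    run_fe        : black-box FE solver, p -> voxel displacement field
                    of shape (3,) + f.shape (any external code).
    warp_residual : (f, g, u) -> gray level residual field f - g(x + u).
    Sensitivity fields du/dp_k are built non-intrusively by finite
    differences of the FE solution, then projected on the image gradient.
    """
    u0 = run_fe(p)
    res = warp_residual(f, g, u0).ravel()
    grad_f = np.array(np.gradient(f))              # (3,) + f.shape
    S = np.empty((len(p), f.size))
    for k in range(len(p)):
        pk = p.copy()
        pk[k] += dp
        du = (run_fe(pk) - u0) / dp                # sensitivity field
        S[k] = (grad_f * du).sum(axis=0).ravel()   # grad f . du/dp_k
    M = S @ S.T                                    # GN matrix over parameters
    return p + np.linalg.solve(M, S @ res)
```

The actual procedures [12, 44] also include the load residual and weigh both terms by their respective uncertainties; the sensitivity fields S are precisely the quantities to which POD or PGD compression can be applied.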

3.4 Regularized DVC

The previous strategy calls for rather strong prior knowledge (i.e., an elastic behavior was assumed over most of the domain of the indentation test). It is also possible to introduce such information in a tunable manner, allowing for a more flexible tool [32, 54, 55]. The spirit of mechanical regularization consists in adding to the traditional DVC objective functional, which is based on the quadratic difference between the deformed image corrected by a displacement field and the reference image, a second functional based on the “equilibrium gap” that penalizes deviations of the displacement field from the solution to a homogeneous elastic problem with known (or null) body forces [56]. The latter functional is the quadratic norm of a linear second-order differential operator acting on the displacement field, and hence the relative weights given to both functionals introduce a length scale ℓ_reg. Below ℓ_reg, the regularization functional dominates, whereas at large length scales (i.e., above ℓ_reg), the DVC functional is the largest. This combination acts as a low-pass filter for DVC, where the high frequency content is provided by elasticity. In some sense, the previous (i.e., integrated) approach corresponds to the limit of a very large weight given to the regularization, as the library of proposed displacement fields obeys elasticity exactly over the entire domain. When a smaller value of the cut-off wavelength is used, the effect of regularization can be compared to that of a coarse mesh of size ℓ_reg, with, however, much better smoothness properties. The same method can be extended to mechanical behavior more complex than linear elasticity, or to heterogeneous elastic properties if desired. However, even if homogeneous linear elasticity is used, it can simply be considered as a smoothing operator that makes the problem well-posed, rather than an accurate description of the mechanical behavior of the studied specimen.

The following analysis deals with an in situ tearing test on an aluminum alloy sheet observed via synchrotron laminography. Experimental conditions for a similar material are found in Refs. [57, 58]. The present case was deemed difficult, if not impossible, since the volume fraction of secondary particles was of the order of 0.3–0.4% (Figs. 3a and 4a); it became even more so after filtering (Figs. 3b and 4b). Although this condition may seem prohibitive for DVC, the mean distance between inclusions was of the order of 7 voxels. In the unfiltered case, standard global C8-DVC (with 16-voxel elements) was performed on a zone far from the notch where the crack initiated and subsequently propagated. The crack had not yet propagated even though localized bands were observed (Fig. 9a). In Fig. 9b, a regularized correlation approach (with ℓ_reg = 50 voxels) is used with the same mesh as previously; the considered displacements were thereby filtered. Without the present regularization, the registration was not possible. Although the texture was extremely poor, very consistent results were observed between the equivalent strain maps obtained by both approaches (Fig. 9). The filtering of the strain field is clearly apparent, illustrating the denoising effect of regularization for DVC. A possibly detrimental effect of this “filter” had to be evaluated on the quality of the registration, namely on the correlation residual, i.e., the difference between the reference and deformed images after correction by the measured displacement field. In this case, the quality of registration was excellent, so that the filtering was not considered detrimental to the measured displacement field.

Fig. 9 3D rendering of von Mises' equivalent strain measured during an opening test of a notched aluminum alloy sample (Fig. 1c). a Standard DVC, and b regularized DVC

The regularization strategy allows the DVC problem to be made well-posed even when discretized onto a very fine mesh. It was shown that the ultimate limit of a regular cubic mesh with elements reduced to a single voxel could be handled with such a strategy [54]. However, in that case, the number of unknowns becomes very large and the regularization kernel involves a more complex problem to solve, and hence regularized DVC becomes much more demanding in terms of computation time and memory management. To overcome this difficulty, a dedicated GPU implementation was set up, which could handle several million degrees of freedom within an acceptable time (i.e., less than 10 min [32]).
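The two competing functionals can be written compactly. As an illustration only (a sketch assuming homogeneous isotropic elasticity, with a finite-difference Navier operator standing in for the finite-element equilibrium-gap operator of [56]; the function name is hypothetical), the regularized objective reads:

```python
import numpy as np
from scipy import ndimage

def regularized_dvc_energy(f, g, u, w_reg, lam=1.0, mu=1.0):
    """Evaluate the two terms of mechanically regularized DVC (a sketch).

    u : trial displacement field of shape (3,) + f.shape.
    The equilibrium gap of homogeneous, isotropic elasticity is
    approximated by the Navier operator applied to u (zero body
    forces); w_reg sets the relative weight of the two terms and
    thus the crossover length scale ell_reg.
    """
    coords = np.indices(f.shape).astype(float)
    g_warp = ndimage.map_coordinates(g, coords + u, order=1)
    phi_dvc = np.sum((f - g_warp) ** 2)        # image registration term
    # Navier operator: mu * lap(u) + (lam + mu) * grad(div u)
    div_u = sum(np.gradient(u[c], axis=c) for c in range(3))
    grad_div = np.array(np.gradient(div_u))
    lap_u = np.array([ndimage.laplace(u[c]) for c in range(3)])
    eq_gap = mu * lap_u + (lam + mu) * grad_div  # residual body forces
    phi_reg = np.sum(eq_gap ** 2)              # equilibrium gap term
    return phi_dvc + w_reg * phi_reg, phi_dvc, phi_reg
```

In an actual solver the minimization is performed jointly over all nodal (or voxel) displacements, and w_reg is chosen so that both terms balance at the wavelength ℓ_reg, which is what gives the low-pass filtering behavior described above.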


Fig. 10 Crack opening in the sample shown in the center can be obtained from an initial tomography, two projections in the deformed state and an appropriate model schematized by the finite element mesh onto which the vertical component of the displacement field is rendered and color-coded

3.5 Projection-Based DVC

When considering the temporal changes of a specimen, its microstructure usually does not change much in time apart from its deformation. Hence, after a first reconstruction has been performed, the only remaining unknowns are the sample kinematics. The latter generally require much less data than the number of voxels [59], and even more so if a mechanical model is involved [60, 61]. Performing successive reconstructions is both costly and unnecessary. It is therefore natural to try to read the kinematics directly from the radiographs, and to reduce their number (or rather increase the time resolution). This procedure, referred to as projection-based DVC [59, 62], was shown to be effective, with gains of several orders of magnitude in the needed number of projections. Figure 10 shows the displacement field of a cracked nodular graphite cast iron sample obtained by considering only two radiographs of the deformed state. In this particular case, the unknowns were the 2 × 6 degrees of freedom used as boundary conditions of an elastic model in which the crack was explicitly accounted for. Further, when the reference scan is performed while the sample is unloaded (or very modestly preloaded), the number of radiographs needed for tracking the 3D space plus time changes of the test can be reduced from 500–1000 down to a single one per time step [63, 64]. With this procedure, the sample was continuously loaded, continuously rotated and regularly imaged via 2D X-ray projections. Consequently, the lower spatial sampling of projection-based DVC (i.e., down to one radiograph per load level) was compensated by an increased temporal sampling, leading to 4D analyses for which PGD turned out to be a very powerful tool to perform the projection-based registration solely over the relevant modes.
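The idea can be phrased as a radiograph residual. A much simplified, parallel-beam sketch (hypothetical helpers project and pdvc_residual; actual implementations use the real acquisition geometry and the reconstructed reference volume) reads:

```python
import numpy as np
from scipy import ndimage

def project(vol, angle_deg):
    """Parallel-beam projection of a 3D volume at a given rotation angle:
    rotate about axis 0 (taken as vertical), then integrate along the beam."""
    rot = ndimage.rotate(vol, angle_deg, axes=(1, 2), reshape=False, order=1)
    return rot.sum(axis=2)

def pdvc_residual(a, f, basis, angle_deg, measured_proj):
    """Projection-based DVC residual for kinematic amplitudes `a`:
    warp the reference volume f with u = sum_k a_k basis[k], project it,
    and compare with the single measured radiograph at that angle."""
    coords = np.indices(f.shape).astype(float)
    u = np.tensordot(a, basis, axes=1)                 # (3,) + f.shape
    # Approximate the deformed configuration by resampling f at x - u
    f_warp = ndimage.map_coordinates(f, coords - u, order=1)
    return measured_proj - project(f_warp, angle_deg)
```

The few amplitudes a_k can then be identified by nonlinear least squares over this residual (e.g., scipy.optimize.least_squares applied to the raveled output), which remains tractable precisely because only a handful of kinematic unknowns are left once the reference volume is known.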


4 Conclusion

The emergence and development of full-field measurements in the bulk of imaged materials has transformed experimental solid mechanics, where huge and ever increasing amounts of data are routinely collected during tests [46]. Processing this experimental information has led to specific and elaborate numerical developments integrating measurements and simulations into unified frameworks. Integrated DVC as described herein, namely the merging of DVC with numerical tools traditionally used in computational mechanics (e.g., finite element methods), constitutes an emblematic example. Efficient GPU implementations of the resulting algorithms are also a route that can be followed. It is expected that much more can be achieved regarding time and memory savings while keeping all the relevant information (including uncertainties). Some possibly more efficient reduction techniques are related to the use of, for instance, a modified POD with the metric associated with the covariance matrix for mode selection, or PGD [46]. The latter in particular will be the subject of future work as a very promising and efficient solution generator at the measurement and identification stages.

In spite of the challenges listed herein, DVC was made operational and reliable for a wider class of materials than usually considered. As demonstrated herein, these extensions require a critical consideration of the reconstructed images, possibly filtering to (at least partially) erase imaging artifacts, and regularization strategies to reduce noise in strain evaluations and to compensate for poor microstructural contrast. Such developments are essential for fully benefiting from lab-scale tomography equipment and for analyzing in situ mechanical tests to gain more qualitative and quantitative insight into the mechanical behavior of materials.

Acknowledgements Different parts of the above mentioned studies were funded by Agence Nationale de la Recherche under the grants ANR-10-EQPX-37 (MATMECA) and ANR-14-CE07-0034-02 (COMINSIDE), Saint Gobain, SAFRAN Aircraft Engines and SAFRAN Tech. It is a pleasure to acknowledge the support of BPI France within the DICCIT project, and ESRF for the MA1006, MI1149, MA1631, MA1932 and ME1366 experiments. Fruitful discussions with Profs. Olivier Allix, Marc Bernacki, Pierre-Olivier Bouchard, Jean-Yves Buffière, and Drs. Jérôme Adrien, Dominique Bernard, Xavier Brajer, René Gy, Lukas Helfen, Nathalie Limodin, Eric Maire, Thilo Morgeneyer, Estelle Parra, Julien Réthoré and Julien Schneider are acknowledged.

References

1. Bay, B. K. (2008). Methods and applications of digital volume correlation. Journal of Strain Analysis, 43, 745. 2. Helfen, L., Baumbach, T., Mikulik, P., Kiel, D., Pernot, P., Cloetens, P., et al. (2005). High-resolution three-dimensional imaging of flat objects by synchrotron-radiation computed laminography. Applied Physics Letters, 86(7), 071915.


3. Sutton, M. A., Orteu, J. J., & Schreier, H. (2009). Image correlation for shape, motion and deformation measurements: Basic concepts, theory and applications. New York, NY (USA): Springer. 4. Hild, F., & Roux, S. (2012). Digital image correlation. In P. Rastogi & E. Hack (Eds.), Optical methods for solid mechanics. A full-field approach (pp. 183–228). Weinheim (Germany): Wiley-VCH. 5. Buljac, A., Jailin, C., Mendoza, A., Taillandier-Thomas, T., Bouterf, A., Neggers, J., et al. (2018). Digital volume correlation: Review on progress and challenges. Experimental Mechanics, 58(5), 661–708. 6. Baruchel, J., Buffière, J. Y., Maire, E., Merle, P., & Peix, G. (Eds.). (2000). X-ray tomography in material sciences. Paris (France): Hermès Science. 7. Weitkamp, T., Tafforeau, P., Boller, E., Cloetens, P., Valade, J., Bernard, P., et al. (2010). Status and evolution of the ESRF beamline ID19. In ICXOM 2009 (AIP Conference Proceedings) (Vol. 1221, pp. 33–38). 8. Maire, E., & Withers, P. J. (2014). Quantitative X-ray tomography. International Materials Reviews, 59(1), 1–43. 9. Wikipedia Contributors. (2019). Industrial computed tomography. Wikipedia, The Free Encyclopedia (oldid 883448937). 10. Buffière, J. Y., Maire, E., Adrien, J., Masse, J. P., & Boller, E. (2010). In situ experiments with X ray tomography: An attractive tool for experimental mechanics. Experimental Mechanics, 50(3), 289–305. 11. Rannou, J., Limodin, N., Réthoré, J., Gravouil, A., Ludwig, W., Baïetto, M., et al. (2010). Three dimensional experimental and numerical multiscale analysis of a fatigue crack. Computer Methods in Applied Mechanics and Engineering, 199, 1307–1325. 12. Hild, F., Bouterf, A., Chamoin, L., Mathieu, F., Neggers, J., Pled, F., et al. (2016). Toward 4D mechanical correlation. Advanced Modeling and Simulation in Engineering Sciences, 3(1), 1–26. 13. Buljac, A., Shakoor, M., Bernacki, M., Bouchard, P. O., Morgeneyer, T. F., & Hild, F. (2017). Numerical validation framework for micromechanical simulations based on synchrotron 3D imaging. Computational Mechanics, 59(3), 419–441. 14. Bornert, M., Chaix, J. M., Doumalin, P., Dupré, J. C., Fournel, T., Jeulin, D., et al. (2004). Mesure tridimensionnelle de champs cinématiques par imagerie volumique pour l'analyse des matériaux et des structures. Instrumentation, Mesure, Métrologie, 4, 43–88. 15. Ludwig, W., Buffière, J. Y., Savelli, S., & Cloetens, P. (2003). Study of the interaction of a short fatigue crack with grain boundaries in a cast Al alloy using X-ray microtomography. Acta Materialia, 51(3), 585–598. 16. Buffière, J. Y., Maire, E., Cloetens, P., Lormand, G., & Fougères, R. (1999). Characterisation of internal damage in a MMCp using X-ray synchrotron phase contrast microtomography. Acta Materialia, 47(5), 1613–1625. 17. Helfen, L., Baumbach, T., Cloetens, P., & Baruchel, J. (2009). Phase contrast and holographic computed laminography. Applied Physics Letters, 94, 104103. 18. Altapova, V., Helfen, L., Myagotin, A., Hänschke, D., Moosmann, J., Gunneweg, J., et al. (2012). Phase contrast laminography based on Talbot interferometry. Optics Express, 20, 6496–6508. 19. Bay, B. K., Smith, T. S., Fyhrie, D. P., & Saad, M. (1999). Digital volume correlation: Three-dimensional strain mapping using X-ray tomography. Experimental Mechanics, 39, 217–226. 20. Roux, S., Hild, F., Viot, P., & Bernard, D. (2008). Three dimensional image correlation from X-ray computed tomography of solid foam. Composites Part A, 39(8), 1253–1265. 21.
Hall, S., Bornert, M., Desrues, J., Pannier, Y., Lenoir, N., Viggiani, C., et al. (2010). Discrete and continuum analysis of localized deformation in sand using X-ray micro CT and volumetric digital image correlation. Géotechnique, 60(5), 315–322. 22. Hild, F., Fanget, A., Adrien, J., Maire, E., & Roux, S. (2011). Three dimensional analysis of a tensile test on a propellant with digital volume correlation. Archives of Mechanics, 63(5–6), 1–20.


23. Limodin, N., Réthoré, J., Adrien, J., Buffière, J. Y., Hild, F., & Roux, S. (2011). Analysis and artifact correction for volume correlation measurements using tomographic images from a laboratory X-ray source. Experimental Mechanics, 51(6), 959–970. 24. Vidal, F. P., Letang, J. M., Peix, G., & Cloetens, P. (2005). Investigation of artifact sources in synchrotron microtomography via virtual X-ray imaging. Nuclear Instruments and Methods in Physics Research, 234, 333–348. 25. Prell, D., Kyriakou, Y., & Kalender, W. A. (2009). Comparison of ring artifact correction methods for flat-detector CT. Physics in Medicine and Biology, 54, 3881–3895. 26. Jailin, C., Buljac, A., Bouterf, A., Poncelet, M., Hild, F., & Roux, S. (2018). Self-calibration for lab-mCT using space-time regularized projection-based DVC and model reduction. Measurement Science and Technology, 29, 024003. 27. Dahdah, N., Limodin, N., El Bartali, A., Witz, J.-F., Seghir, R., Charkaluk, E., et al. (2016). Damage investigation in A319 aluminium alloy by X-ray tomography and digital volume correlation during in situ high-temperature fatigue tests. Strain, 52(4), 324–335. 28. Cai, B., Karagadde, S., Yuan, L., Marrow, T. J., Connolley, T., & Lee, P. D. (2014). In situ synchrotron tomographic quantification of granular and intragranular deformation during semisolid compression of an equiaxed dendritic Al-Cu alloy. Acta Materialia, 76, 371–380. 29. Maire, E., Le Bourlot, C., Adrien, J., Mortensen, A., & Mokso, R. (2016). 20-Hz X-ray tomography during an in situ tensile test. International Journal of Fracture, 200(1), 3–12. 30. Feldkamp, L. A., Davis, L. C., & Kress, J. W. (1984). Practical cone beam algorithm. The Journal of the Optical Society of America, 1, 612–619. 31. Herman, G. T. (2009). Fundamentals of computerized tomography: Image reconstruction from projections. London (UK): Springer. 32. Leclerc, H., Périé, J. N., Hild, F., & Roux, S. (2012). Digital volume correlation: What are the limits to the spatial resolution? Mechanics & Industry, 13, 361–371. 33. Herman, G. T., & Davidi, R. (2008). Image reconstruction from a small number of projections. Inverse Problems, 24, 045011. 34. Hild, F., & Roux, S. (2012). Comparison of local and global approaches to digital image correlation. Experimental Mechanics, 52(9), 1503–1519. 35. Sutton, M. A. (2013). Computer vision-based, noncontacting deformation measurements in mechanics: A generational transformation. Applied Mechanics Reviews, 65(AMR-13-1009), 050802. 36. Smith, T. S., Bay, B. K., & Rashid, M. M. (2002). Digital volume correlation including rotational degrees of freedom during minimization. Experimental Mechanics, 42(3), 272–278. 37. Bouterf, A., Adrien, J., Maire, E., Brajer, X., Hild, F., & Roux, S. (2017). Identification of the crushing behavior of brittle foam: From indentation to oedometric tests. Journal of the Mechanics and Physics of Solids, 98, 181–200. 38. Réthoré, J., Limodin, N., Buffière, J. Y., Hild, F., Ludwig, W., & Roux, S. (2011). Digital volume correlation analyses of synchrotron tomographic images. The Journal of Strain Analysis, 46, 683–695. 39. Black, T., & Belytschko, T. (1999). Elastic crack growth in finite elements with minimal remeshing. The International Journal for Numerical Methods in Engineering, 45, 601–620. 40. Moës, N., Dolbow, J., & Belytschko, T. (1999). A finite element method for crack growth without remeshing. The International Journal for Numerical Methods in Engineering, 46(1), 133–150. 41. Réthoré, J., Tinnes, J.-P., Roux, S., Buffière, J., & Hild, F. (2008). Extended three-dimensional digital image correlation. Comptes Rendus Mécanique, 336, 643–649. 42. Buljac, A., Taillandier-Thomas, T., Helfen, L., Morgeneyer, T., & Hild, F. (2018). Evaluation of measurement uncertainties of digital volume correlation applied to laminography data. The Journal of Strain Analysis, 53, 49–65. 43. Bouterf, A., Roux, S., Hild, F., Adrien, J., & Maire, E. (2014). Digital volume correlation applied to X-ray tomography images from spherical indentation tests on lightweight gypsum. Strain, 50(5), 444–453.

Digital Volume Correlation of Laminographic and Tomographic Images

19

44. Buljac, A., Trejo-Navas, V. -M., Shakoor, M., Bouterf, A., Neggers, J., Bernacki, M., et al. (2018). On the calibration of elastoplastic parameters at the microscale via X-ray microtomography and digital volume correlation for the simulation of ductile damage. European Journal of Mechanics, 72, 287–297. 45. Chatterjee, A. (2000). An introduction to the proper orthogonal decomposition. Current Science, 78(7), 808–817. 46. Neggers, J., Allix, O., Hild, F., & Roux, S. (2018). Big data in experimental mechanics and model order reduction: Today’s challenges and tomorrow’s opportunities. Archives of Computational Methods in Engineering, 25(1), 143–164. 47. Chinesta, F., Ammar, A., & Cueto, E. (2010). Recent advances and new challenges in the use of the proper generalized decomposition for solving multidimensional models. Archives of Computational Methods in Engineering, 17(4), 327–350. 48. Ladevèze, P., Passieux, J. -C., & Néron, D. (2010). The LATIN multiscale computational method and the proper generalized decomposition. Computer Methods in Applied Mechanics and Engineering, 199(21), 1287–1296. 49. Nouy, A. (2010). Proper generalized decompositions and separated representations for the numerical solution of high dimensional stochastic problems. Archives of Computational Methods in Engineering, 17(4), 403–434. 50. Ladevèze, P. (2014). PGD in linear and nonlinear computational solid mechanics. In Separated representations and PGD-based model reduction (pp. 91–152). Berlin: Springer. 51. Paillet, C., Néron, D., & Ladevèze, P. (2018). A door to model reduction in high-dimensional parameter space. Comptes Rendus Mécanique, 346(7), 524–531. 52. Shakoor, M., Bouchard, P. O., & Bernacki, M. (2017). An adaptive level-set method with enhanced volume conservation for simulations in multiphase domains. The International Journal for Numerical Methods in Engineering, 109(4), 555–576. 53. Shakoor, M., Buljac, A., Neggers, J., Hild, F., Morgeneyer, T. F., Helfen, L., et al. (2017). On the choice of boundary conditions for micromechanical simulations based on 3D imaging. International Journal of Solids and Structures, 112, 83–96. 54. Leclerc, H., Périé, J. N., Roux, S., & Hild, F. (2011). Voxel-scale digital volume correlation. Experimental Mechanics, 51(4), 479–490. 55. Taillandier-Thomas, T., Roux, S., Morgeneyer, T. F., & Hild, F. (2014). Localized strain field measurement on laminography data with mechanical regularization. Nuclear Instruments and Methods in Physics Research, 324, 70–79. 56. Claire, D., Hild, F., & Roux, S. (2002). Identification of damage fields using kinematic measurements. Comptes Rendus Mécanique, 330, 729–734. 57. Morgeneyer, T. F., Helfen, L., Sinclair, I., Proudhon, H., Xu, F., & Baumbach, T. (2011). Ductile crack initiation and propagation assessed via in situ synchrotron radiation computed laminography. Scripta Materialia, 65, 1010–1013. 58. Morgeneyer, T. F., Helfen, L., Mubarak, H., & Hild, F. (2013). 3D digital volume correlation of synchrotron radiation laminography images of ductile crack initiation: An initial feasibility study. Experimental Mechanics, 53(4), 543–556. 59. Leclerc, H., Roux, S., & Hild, F. (2015). Projection savings in CT-based digital volume correlation. Experimental Mechanics, 55(1), 275–287. 60. Taillandier-Thomas, T., Roux, S., & Hild, F. (2016). Soft route to 4D tomography. Physical Review Letters, 117(2), 025501. 61. Taillandier-Thomas, T., Jailin, C., Roux, S., Hild, F. (2016). Measurement of 3D displacement fields from few tomographic projections. 
In Proceedings of SPIE, Optics, Photonics and Digital Technologies for Imaging Applications IV (Vol. 9896L, p. 98960L). 62. Khalili, M. H., Brisard, S., Bornert, M., Aimedieu, P., Pereira, J. M., & Roux, J. N. (2017). Discrete digital projections correlation: A reconstruction-free method to quantify local kinematics in granular media by X-ray tomography. Experimental Mechanics, 57(6), 819–830.

20

A. Bouterf et al.

63. Jailin, C., Buljac, A., Bouterf, A., Hild, F., & Roux, S. (2018). Fast 4D tensile test monitored via X-CT: Single projection based digital volume correlation dedicated to slender samples. Journal of Strain Analysis, 53(7), 473–484. 64. Jailin, C., Buljac, A., Bouterf, A., Hild, F., & Roux, S. (2019). Fast four-dimensional tensile test monitored via X-ray computed tomography: Elastoplastic identification from radiographs. Journal of Strain Analysis, 54(1), 44–53.

Manufacturing and Virtual Design to Tailor the Properties of Boron-Alloyed Steel Tubes Illia Hordych, Sebastian Herbst, Florian Nürnberger, Viacheslav Boiarkin, Olivier Hubert and Hans Jürgen Maier

Abstract Application of products with properties locally adapted to specific loads and requirements has become widespread in recent decades. In the present study, an innovative approach to manufacture tubes from the boron-alloyed steel 22MnB5 with properties tailored in the longitudinal direction was developed. Owing to advanced heating and cooling strategies, a wide spectrum of possible steel phase compositions can be obtained in tubes manufactured in a conventional tube forming line. A heat-treatment station placed after the forming line is composed of an inductive heating and an adapted water-air spray cooling system. These short-action processes allow fast austenitizing and subsequent austenite decomposition within a few seconds. To describe the effect of high inductive heating rates on austenite formation, dilatometric investigations were performed for heating rates from 500 to 2500 K s−1. Complete austenitizing was observed over the whole range of investigated heating rates, and the austenitizing was described using a Johnson-Mehl-Avrami model. Furthermore, a series of experiments on heating and cooling with different cooling rates was carried out in the developed technology line. Complex microstructures were obtained for cooling in still air as well as with compressed air, while water-air cooling at different pressures resulted in quenched martensitic microstructures. Non-destructive testing of the mechanical properties and the phase composition was realized by means of magnetization measurements, and exponential models to predict the phase composition and hardness values from the magnetic properties were obtained. Subsequently, a simulation model allowing the virtual design of tubes was developed in the FE-software ANSYS on the basis of the experimental data. The model is suited to predict the microstructural and mechanical properties under consideration of the actual process parameters.

I. Hordych (B) · S. Herbst · F. Nürnberger · H. J. Maier Institut für Werkstoffkunde (Materials Science), Leibniz Universität Hannover, Hannover, Germany, e-mail: [email protected]
V. Boiarkin Department of Metal Forming, National Metallurgical Academy of Ukraine, Dnipro, Ukraine
O. Hubert Laboratoire de Mécanique et Technologie, Ecole Normale Supérieure Paris Saclay, Cachan, France


1 Introduction

In the pursuit of economic and ecological manufacturing processes, different products that can be adapted to local loadings and stresses have been developed. From this point of view, tailored components are attractive due to the possibility of their local adjustment to the application purpose. Different technological strategies can be employed to tailor the mechanical properties of products [1]; these can be divided into two main groups. In the first one, tailoring is achieved by combining materials with different properties: dissimilar materials are joined through metallurgical, form- or force-closed joints, depending on their individual characteristics [2, 3]. The second group comprises methods of tailoring properties within the same material through the adaptation of the geometry (e.g. rolling with alternating roll diameters) and/or the microstructure (e.g. selective heat-treatment) [4]. The breakthrough in, above all, automotive steels that came with the introduction of complex microstructures (dual-phase, complex-phase, TRIP- and TWIP-steels) attracted strong attention to advanced heat-treatments and to the possibility of tailoring steels through various time-temperature courses [5, 6]. In this respect, boron- and manganese-alloyed steels are characterized by their enhanced hardenability and the wide spectrum of phase compositions achievable by heat-treatments. In addition, they exhibit high mechanical strength and wear resistance in service [7].

The present investigation aims to develop and implement a technology line which allows the manufacturing of tailored tubes with locally adapted mechanical properties in the longitudinal direction by means of an advanced heat-treatment integrated in the production line. Being a continuous process, tube forming is advantageous for the integration of a local heat-treatment station to manufacture hollow profiles with a constant cross-section over the length. When using boron-alloyed steels, the significant delay of the ferrite-pearlite formation during cooling due to the boron addition technologically simplifies the attainment of different steel phases. Depending on the cooling rate, diffusive (ferrite-pearlite, bainite) as well as diffusionless (martensite) transformations can take place [8]. This implies that, by a selective heat-treatment and adapted heating/cooling strategies, different combinations of phase compositions can be obtained, such as relatively ductile ferrite-pearlitic or bainitic sections and hard martensitic sections. Such tailored tubes can:

• Be post-processed as semi-finished products: for the manufacturing of, e.g., T-shaped tubes by means of hydroforming, the sections to be processed can be kept purposely ductile, while the unaffected sections gain their final properties during the integrated heat-treatment.
• Find a direct application: for instance, ductile sections of frame rails in an automotive body can absorb kinetic energy during a crash and hence ensure a controlled deformation through predictable folding, whereas hard sections remain responsible for the crash resistance. In addition, the damping effect of ductile sections can enhance the vibration resistance of these products.


To reduce the experimental effort and costs, a simulation model was developed within this study that allows the optimal heating and cooling parameters to be predicted as a function of the required final properties of the tube, under consideration of the actual process parameters.

2 Material


The current work focuses on investigations performed with the boron-alloyed heat-treatable steel 1.5528 (22MnB5). It has been widely applied for press-hardening since the mid-1990s due to its remarkable properties during processing [9] (Fig. 1): an increased deformability in the hot state with an extended temperature range for hot forming as well as an enhanced hardenability during cooling. An addition of boron in quantities between 0.008 and 0.030 wt% facilitates the formation of a quenched microstructure through the delay of the diffusive decomposition of austenite. While the tensile strength in the as-delivered state is about 600 MPa, it reaches more than 1500 MPa in the quenched state. As determined by atomic emission spectroscopy, the chemical composition of the used steel charge is in accordance with the norm values [10]: C 0.23, Si 0.30, Mn 1.23, Cr 1.16, Ti 0.04, B 0.0031, balance Fe, in wt%. In the initial as-delivered state, it consists of ferrite (white areas in Fig. 1) and pearlite (black spots in Fig. 1). The metallographic preparation was identical for all investigations in the present study: the samples were embedded, ground and polished to a 1 µm diamond paste finish, and the surfaces were etched with 1% HNO3 for 2 s to contrast the individual phases.


Fig. 1 Evolution of the mechanical properties during hot-stamping (left) and initial microstructure of the steel 22MnB5 used within the present study in the as-delivered state (right)


3 Technological Process

The developed technology is based on a traditional tube forming line, which consists of three processing stages: sheet forming, welding and sizing (Fig. 2a). With 16 multi-roll stands (13 forming and 3 sizing), a metal sheet is incrementally closed into a circular profile, which is then welded and sized to improve the surface properties. At the end of the line, a number of profiling stands is positioned for the adjustment of the profile cross-section if necessary. The tube forming is realized using a laboratory-scale profiling machine for electrically welded pipes, a "Nagel Profiliertechnik 1001". With the given roll pass design, it allows the manufacturing of tubes with an outer diameter of 20 mm from a metal strip with a thickness between 0.5 and 0.8 mm (Fig. 2b). The available feed range from 0.01 to 250 mm s−1 is limited by the welding process. A plasma arc welding station "EWM Microplasma 50" integrated in the production line provides a maximal current of 50 A. This welding type uses a plasma gas to melt the preformed steel edges without any filler materials, which is advantageous for a continuous line since no flash forms in the welded zone. Preliminary investigations of the welding parameters resulted in sufficient welding seam properties at feed rates up to 50 mm s−1.

The described conventional tube forming line was upgraded with an advanced heat-treatment station (Fig. 2c). Inductive heating and subsequent water-air spray cooling are short-acting methods and thus advantageous for application in the line. The heating and cooling ensure high process rates to enable the required transformations


Fig. 2 Schematic of the tube forming line (a) with the given roll pass design (b) and the integrated heat-treatment station (c)


in a few seconds. Following the tube forming stands, an induction coil is installed in the production line to heat up the welded tube. An available medium frequency inductor “Eldec MFG 30” can generate a maximal power of 30 kW and thus provide a wide range of heating rates. Subsequently, an water-air spray cooling system is positioned. It consists of two independent water-air circles with four spray nozzles per circle. A flexible setup and control system of the heating/cooling conditions offers various strategies to produce complex microstructures and hence tailored properties [11].

4 Phase Transformations During Heat-Treatment

4.1 Austenite Formation During Inductive Heating

The formation of the different steel phases during cooling can only proceed from the austenitic state. Depending on the cooling rate, ferrite-pearlite, bainite or martensite can occur in the steel 22MnB5. Thus, the tube segments to be cooled should exhibit an austenitic microstructure. To be fully austenitized, 22MnB5 is usually exposed to furnace heating at 850–950 °C with a soaking time between 3 and 6 min [9, 12, 13]. However, one of the characteristic properties of continuous lines is the limited time for heating and soaking; conventional furnace heat-treatments can hardly be realized within a continuous process. Since austenite formation is governed by the diffusion of carbon atoms in the iron lattice, the steel tubes have to be significantly overheated to promote the diffusive processes and to compensate for the lack of time. A possible range of suitable heating rates can be determined on the basis of the technical properties and parameters of the line (Eq. 1): both the feed rate and the coil length determine the heating rate necessary to reach the required austenitizing temperature:

V_h = (T_A − T_R) · v_p / l_i, (1)

where V_h is the heating rate in K s−1, T_A the austenitizing temperature in °C, T_R the room temperature in °C, v_p the feed rate in mm s−1 and l_i the inductor length in mm (Fig. 3).

For the current study, an inductor coil with a length of 50 mm was used; the feed rate was limited by the plasma welding in the processing line. In order to accelerate the diffusion, the target austenitizing temperature was set to 1150 °C, which is above the temperature of homogeneous austenite at the highest heating rates reported in the literature [14]. Thus, the range of relevant rates was set between 500 and 2500 K s−1. Inductive heating has been described for heating rates up to 150 K s−1 in [14, 15]; this data does not cover the relevant heating rate range. Hence, a series of dilatometric investigations on the phase transformations during rapid austenitizing was performed in a previous work of the authors [16]. Steel samples were inductively heated at nominal rates of 500, 1200, 1800 and 2500 K s−1.


Fig. 3 Heating station with a given inductor coil and parameters determining the heating rate at a constant inductor power


The tests were carried out using a DIL 805A/D+T dilatometer from Bähr, with three samples per rate. Without soaking, the samples were immediately quenched with compressed nitrogen at 30 K s−1 in order to capture the formed austenite fraction at elevated temperature and to confirm its formation indirectly through the martensite fraction at room temperature (critical cooling rate of 22MnB5: 27 K s−1) [13]. The actual heating courses of one sample per heating rate are plotted exemplarily on a logarithmic scale in Fig. 4. Two temperature ranges with different heating rates are clearly distinguishable. They are separated by the Curie temperature, which marks the change in the magnetic properties from ferromagnetic to paramagnetic [17]. Depending on the steel grade and chemical composition, the Curie temperature lies


Fig. 4 Heating courses obtained during the dilatometric experiments, with the actual heating rates calculated below and above the Curie temperature in K s−1



Fig. 5 Dilatometer courses for the given nominal heating rates in K s−1 and indication of the transformation areas (F: ferrite, P: pearlite, A: austenite)

between 720 and 768 °C [18]. In fact, the target heating rates were valid only up to the Curie temperature; above it, a drop to values between 368 and 380 K s−1 was observed.

The obtained dilatometric curves for the inductive heating are shown in Fig. 5. For all tested heating rates, the courses appear similar; for better visibility, they are manually shifted along the Y-axis by −0.1%. Since the transformations took place above the Curie temperature, where the heating rates were quite similar, the transformation behavior was similar as well; nevertheless, increasing heating rates shifted the transformation temperatures to higher values. From the dilatometric curves, the Ac1 and Ac3 temperatures, which describe the start and the finish of the transformation, respectively, were derived. The transformation start temperatures vary between 816 and 832 °C, whereas the transformation finish temperatures lie in the range from 946 to 995 °C; in both cases, the temperatures increase with increasing heating rate. The determined parameters of the heating courses and the transformation kinetics are summarized in Table 1.

The dilatometric data served as the basis for the calculation of the ferrite-pearlite decomposition during the inductive heating. The method presented in [19] was implemented to describe the evolution of the austenite formation as a function of heating time and temperature. Herein, the thermal expansion ε_th during heating is considered as the sum of the thermal expansions of the present phases, weighted by their fractions f_A and f_F+P, and the strain induced by the transformation of ferrite-pearlite to austenite, ε_tr (Eq. 2):

ε_th = f_F+P · ε_th,F+P + f_A · ε_th,A + f_A · ε_tr, (2)

Under the assumption that only ferrite-pearlite and austenite are present in the system, the ferrite-pearlite fraction can be expressed through the austenite fraction as


Table 1 Heating and transformation parameters determined from the dilatometric investigations (T_C: Curie temperature)

Heating rate in K s−1                                  | Transformation temperatures in °C
Target | Real, below T_C | Real, above T_C | Average   | Ac1     | Ac3
500    | 499 ± 1         | 368 ± 1         | 440 ± 6   | 816 ± 6 | 946 ± 12
1200   | 1197 ± 2        | 368 ± 20        | 709 ± 12  | 828 ± 3 | 974 ± 15
1800   | 1783 ± 4        | 377 ± 17        | 854 ± 12  | 826 ± 4 | 971 ± 17
2500   | 2440 ± 13       | 380 ± 4         | 898 ± 26  | 832 ± 4 | 995 ± 10

f_F+P = 1 − f_A. Furthermore, the thermal expansion of each phase can be represented as the product of its thermal expansion coefficient and the material temperature. In this case, Eq. 2 can be rearranged into Eq. 3:

f_A(T, t) = (ε_th − α_th,F+P · T(t)) / ((α_th,A − α_th,F+P) · T(t) + ε_tr). (3)
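For illustration, Eq. 3 can be evaluated directly in code. The following minimal Python sketch uses the expansion coefficients and the transformation strain reported in the next paragraph; the strain is entered as a dimensionless value (e.g. 1.2% = 0.012):

```python
ALPHA_FP = 14.52e-6  # thermal expansion coefficient of ferrite-pearlite in 1/K
ALPHA_A = 23.35e-6   # thermal expansion coefficient of austenite in 1/K
EPS_TR = -0.0099     # transformation strain (virtual austenite expansion at 0 degC)

def austenite_fraction(eps_th, temp_c):
    """Austenite fraction f_A from the measured thermal strain eps_th at
    temperature temp_c (Eq. 3): a lever rule between the expansion lines
    of ferrite-pearlite and austenite."""
    return (eps_th - ALPHA_FP * temp_c) / ((ALPHA_A - ALPHA_FP) * temp_c + EPS_TR)

# Sanity check: on the pure ferrite-pearlite expansion line the fraction is zero.
print(austenite_fraction(ALPHA_FP * 700.0, 700.0))  # -> 0.0
```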

The thermal expansion coefficients were derived from the dilatometric curves as the linear slope of the corresponding phases in the non-transformation regions. The coefficient of ferrite-pearlite was (14.52 ± 0.17) × 10−6 K−1, whereas that of austenite amounts to (23.35 ± 1.11) × 10−6 K−1. Since austenite normally exhibits a larger thermal expansion than ferrite, these values are consistent with each other and with the literature [20]. The transformation strain is a virtual thermal expansion of austenite extrapolated to a temperature of 0 °C and equals −0.99 ± 0.11% for the current study. Application of Eq. 3 allowed the prediction of the austenite evolution at a particular heating temperature and time for the given heating rates (Fig. 6). The courses are vertically shifted along the Y-axis by 10% for better visibility. The dependence on the temperature confirmed the similar transformation range and the increase of the transformation temperatures. The austenite fraction over the heating time gives the absolute time required for austenitizing within the investigated range of heating rates. Furthermore, the dilatometric and calculated data for each sample were processed to describe the evolution of the austenite formation based on the semi-empirical Johnson-Mehl-Avrami (JMA) model [21, 22]. This model is usually applied to describe diffusional phase transformations in materials and assumes that each transformation process exhibits a sigmoidal course expressed by Eq. 4:

f_A = 1 − exp(−(k · t)^n), (4)


Fig. 6 Evolution of the austenite formation during inductive heating at the given heating rates as a function of a temperature (curves shifted along the Y-axis by 10%) and b time from the start of the heating process, according to the calculated dilatometric data and to the JMA-model

where t is the time counted from the beginning of the transformation in s, and k and n are coefficients depending on the material and on the temperature (isothermal transformation) or the heating/cooling rate (continuous transformation). The coefficients k and n are determined as follows:

• Through a mathematical transformation of the JMA-equation it becomes evident that the moment in time t = 1/k corresponds to an austenite fraction of 0.6321. Interpolating this point in the austenite-fraction-versus-heating-time plot identifies the corresponding time and hence the coefficient k.
• The JMA-equation expressed in the form of Eq. 5,


ln ln(1/(1 − f_A)) = n · ln(t) + n · ln(k), (5)

(5)

demonstrates that n equals the slope of the function ln ln(1/(1 − f_A)) over the transformation time plotted on a logarithmic scale.

The courses of the austenite fraction as a function of temperature and heating time according to the JMA-model are plotted exemplarily in Fig. 6. A good correspondence between the calculated dilatometric data and the JMA-model can be stated, with a slight deviation in the final transformation stages. The calculated coefficients k and n are presented in Table 2. They change only slightly; however, a tendency to decrease with increasing heating rate can be observed, with the exception of n for 2500 K s−1. The dilatometric investigations and the performed calculations were verified by optical microscopy through a microstructure analysis of the tested samples. All heated and subsequently quenched samples exhibit a martensitic microstructure, independent of the cooling rate. This reveals that full austenitizing was achieved during the inductive heating for each heating rate. An optical micrograph of the sample heated at a rate of 2500 K s−1 is depicted exemplarily in Fig. 7.

Table 2 Rate-dependent coefficients k and n in the JMA-equation for the given heating rates

Heating rate in K s−1 | 500   | 1200  | 1800  | 2500
k                     | 5.467 | 5.241 | 5.203 | 4.695
n                     | 2.025 | 1.903 | 1.885 | 2.336
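As a worked example, the sketch below evaluates Eq. 4 with the coefficients of Table 2. It also illustrates the property used above to determine k: at t = 1/k the fraction equals 1 − e−1 ≈ 0.6321 for any n.

```python
import math

# Coefficients k and n from Table 2, keyed by the nominal heating rate in K/s.
JMA_COEFFS = {500: (5.467, 2.025), 1200: (5.241, 1.903),
              1800: (5.203, 1.885), 2500: (4.695, 2.336)}

def jma_fraction(t_s, rate):
    """Austenite fraction after transformation time t_s in s (Eq. 4)."""
    k, n = JMA_COEFFS[rate]
    return 1.0 - math.exp(-((k * t_s) ** n))

print(round(jma_fraction(1.0 / 5.467, 500), 4))  # -> 0.6321
```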

Fig. 7 Exemplary micrograph of a dilatometric sample exhibiting a martensitic microstructure after heating at a rate of 2500 K s−1 and immediate quenching at a rate of 30 K s−1


The obtained dilatometric data revealed that heating above 1200 °C should ensure full austenitizing of the tube cross-section at high heating rates during in-line experiments in the tube forming line.

4.2 Austenite Decomposition During Spray Cooling


Depending on the application purpose, tailored products covering a wide range of mechanical properties can be of high interest. Using the advanced spray cooling, a wide spectrum of cooling rates can be realized, ranging from air cooling up to quenching under a high water-air pressure (Fig. 8). The pressure can be adjusted between 0 and 0.6 MPa independently for the air and water flows. The actual temperature-time course, which depends on the feed rate, is determined by the inductive heating (H) and two stages of active cooling in the water-air circuits (C1, C2). In addition, the tube is cooled passively in air during transport and after passing the second circuit (T1–T3). The distance between the cooling circuits can be flexibly adjusted and will be a subject of future studies; for the present investigations, the distances corresponding to T1 and T2 were chosen to be 150 mm. Four spray nozzles per circuit are positioned in the plane perpendicular to the tube feed, with a tangential rotation of 45° to each other (Fig. 2c). In the case of four nozzles and tubes with a diameter of 20 mm, a distance of 40 mm from the nozzle tip to the normal plane of the tube surface should ensure complete coverage of the whole cross-section with


Fig. 8 Possible heat-treatment strategies to obtain different phase compositions under consideration of the parameters of the heat-treatment station (H: heating; T1, T2, T3: air transportation stages; C1, C2: water-air spray cooling circuits)


the water-air mixture [23]. Depending on the cooling intensity, all steel phases can be expected, with the exception of retained austenite owing to the low carbon content. In order to investigate the austenite decomposition during spray cooling in the technology line, a series of experiments with various cooling parameters was performed. Preliminary investigations of the inductive heating revealed the inductor power needed to reach the austenitizing temperature: at a feed rate of 16.7 mm s−1, the required temperature range can be reached using an inductor power of 12 kW. The temperature was measured with a thermocouple attached to the internal tube surface. These heating parameters were used as the default for the investigations on heat-treatments with different cooling strategies. After the heating, tube segments with a length of at least 150 mm were cooled in air (0 MPa water, 0 MPa air), with compressed air (0 MPa water, 0.6 MPa air) and with a water-air mixture at pressures of 0.1, 0.3 and 0.6 MPa, respectively. For convenience, the tube samples are named after the corresponding cooling conditions as "XWYA", where "X" and "Y" denote the water and air pressure in MPa. The time-temperature courses of the two boundary cases, namely air cooling (0W0A) and water-air cooling at 0.6 MPa (0.6W0.6A), are depicted exemplarily in Fig. 9. Both heating sections exhibit a similar behavior, with a higher rate below the Curie temperature and a lower rate above it. The exact Curie temperature for a particular case could not be determined, since the heating courses do not have pronounced inflection points. The reason may be that in a continuous process, in addition to the inductive influence, the tube is heated by heat transfer from the already austenitized front sections. Preliminary experiments on static inductive heating confirmed


Fig. 9 Time-temperature courses of heat-treatments composed of inductive heating at 12 kW and spray cooling at the given cooling parameters in MPa, with a feed rate of 16.7 mm s−1
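The average rates annotated in Fig. 9, e.g. V_C(1200−200), can be extracted from a sampled time-temperature course. The following is a minimal sketch under the assumption of a monotone course segment; it is not the evaluation routine used by the authors:

```python
import numpy as np

def average_rate(time_s, temp_c, t_start, t_end):
    """Average rate of temperature change in K/s between two temperature
    levels of a monotone segment of a time-temperature course."""
    time_s, temp_c = np.asarray(time_s), np.asarray(temp_c)
    i = int(np.argmin(np.abs(temp_c - t_start)))  # sample closest to the start level
    j = int(np.argmin(np.abs(temp_c - t_end)))    # sample closest to the end level
    return abs(t_end - t_start) / abs(time_s[j] - time_s[i])

# Example with a synthetic linear cooling course from 1200 to 200 degC in 13.6 s:
t = np.linspace(0.0, 13.6, 500)
T = 1200.0 - (1000.0 / 13.6) * t
print(round(average_rate(t, T, 1200.0, 200.0), 1))  # -> 73.5 (cf. 73.7 K/s in Fig. 9)
```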


Fig. 10 Average hardness over the tube length of the given samples with respective optical micrographs

this presumption: without movement of the tube, the inflection point could be clearly distinguished. An average heating rate above 500 K s−1 allowed the maximal temperature above 1200 °C to be reached in both cases. Considering the dilatometric results, this ensures a complete formation of austenite in the heated cross-sections. The air cooling curve exhibits an exponential course with a low average cooling rate of 5 K s−1 between 1200 and 200 °C. In the time-temperature course of the water-air spray cooling at 0.6 MPa, the passive and active cooling stages can be clearly distinguished: the tube is cooled actively, with a drastic drop of temperature, when passing the circuits C1 and C2, and passively in the zones T1–T3. The average rate for the whole cooling is approximately 74 K s−1. Hence, the cooling of the other samples took place within the determined range between 5 K s−1 (in air) and 74 K s−1 (at the highest water-air pressure).

After the heat-treatments, the tube samples were prepared for Vickers hardness measurements with a load force of 294.2 N (HV30) according to DIN EN ISO 6507 [24]. The hardness was measured in the longitudinal direction with a default interval of about 10 mm. The surface of the tubes was ground, polished with fine sandpaper to remove the scale and degreased with acetone immediately before the measurement. The average hardness values with the respective standard deviations for each specimen are depicted in Fig. 10. An increased hardness can be noticed for all heat-treated samples in comparison to the initial state. Furthermore, higher cooling rates resulted


in correspondingly higher hardness values. The as-delivered state exhibits a hardness of 160 HV. Air cooling caused a rise to 239 HV, whereas compressed air provided a further increase to 352 HV. If water is added to the cooling mixture, the hardness rises drastically to above 550 HV and then increases only slightly: there is no difference in hardness between cooling at 0.1 and 0.3 MPa. The hardest microstructure is obtained by water-air cooling at a pressure of 0.6 MPa. Optical micrographs supplemented the hardness measurements. The non-treated microstructure is ferritic-pearlitic: a white matrix of ferrite contains dark spots of pearlite. The deformation due to the tube forming process appears not to influence the microstructure. In the air-cooled samples 0W0A and 0W0.6A, three phases could be detected: ferrite, upper and partially lower bainite. The micrographs of the samples cooled with the water-air mixture exhibit mostly martensitic microstructures, independent of the water-air pressure. Hence, the presence of water in the cooling mixture is essential for hardening, and even relatively low water pressures provide a hardened microstructure. Additionally, to check the homogeneity of the heat-treatment, and hence of the mechanical properties over the cross-section, the hardness was measured in the transverse direction on selected sections. In Fig. 11, the hardness is presented as a function of the rotation angle, where 0°/360° corresponds to the position of the weld seam. The course of the initial state stands out from the rest due to a hardness jump at the weld seam. The hardness at this point matches that of the hardened samples, which suggests martensite formation in the seam (Fig. 11b). Interestingly, the influence of welding is completely eliminated by the heat-treatments: the air-cooled state 0W0A exhibits a homogeneous course over the whole cross-section. The water-air-quenched samples feature less homogeneous cross-section properties with a certain scatter. However, the hardness remains above 500 HV, which implies a martensitic microstructure in the whole cross-section for the water-air cooling.


Fig. 11 a Average hardness along the tube cross-section as a function of the rotation angle in reference to the weld seam for the given samples; b optical micrograph of the welded zone in the initial state without heat-treatment (M: martensite, F: ferrite, P: pearlite)


5 Non-destructive Microstructure Characterization

Destructive methods of mechanical or microstructure analysis, e.g. hardness measurements, tensile testing or microscopy, usually require appropriate surface conditions and/or the preparation of standard samples, which is time- and cost-consuming. From this point of view, non-destructive analyses appear beneficial; these can be adapted for microstructure characterization as well. Hence, magnetic measurements of the magnetization and demagnetization behavior of the manufactured samples were performed. Under the influence of a magnetic field, ferromagnetic materials create a characteristic hysteresis loop that plots the generated magnetic flux density B, or in some cases the magnetization M, over the applied magnetic field H [25]. The hysteretic form is caused by the irreversible magnetic behavior of ferromagnetic materials [26]: once a material is magnetized with a certain magnetic field, a higher opposite magnetic field has to be applied in order to demagnetize it. To characterize the magnetic hysteresis, four quantities are commonly used: the energy losses, defined by the hysteresis area A; the coercive force H_C, i.e. the magnetic field strength at B = 0; the remanence B_r, i.e. the flux density at H = 0; and the saturation flux density B_S (Fig. 12). Here, the energy losses and the coercive force were analyzed in terms of their correlation with the mechanical properties. The steel phases, with the exception of austenite, are known for their ferromagnetic properties at room temperature [27, 28]. Ferrite exhibits so-called soft magnetic properties, defined by relatively low coercive forces and energy losses; martensite belongs to the hard magnetic materials with high values of these quantities [29]. Similar to the mechanical properties, the magnetic properties of bainite cover a range of values between ferrite and martensite, depending on the amount of carbide precipitations. Austenite is paramagnetic and hence does not create a hysteresis loop under a magnetic field [28].

Fig. 12 Typical magnetization hysteresis loops of the steel phases in the coordinate system of magnetic field and flux density, with definition of the characteristic magnetic values



Due to the low carbon content in 22MnB5, the presence of retained austenite at room temperature is rather unlikely [13].

The magnetic properties of the manufactured samples were determined using a Kepco power supply and an experimental magnetic device (Fig. 13). It consists of a primary coil (H-coil) with 91 turns to create a magnetic field by means of the alternating current generated by the power supply, and a secondary coil (B-coil) with 23 turns to pick up the induced electromotive force. To close the magnetic circuit and decrease the macroscopic demagnetizing field, the tube sample was assembled with a ferrite yoke of adapted shape. Measurements were carried out at intervals of 10 mm in the longitudinal direction of the tube. Ten magnetization cycles per measurement were performed for statistical evaluation, with a current frequency of 2 Hz. The values of the applied current and the generated voltage were recorded and transformed into values of the magnetic field and flux density according to Ampère's law and the Faraday–Lenz law, under consideration of the geometry of the setup and samples as well as the parameters of the H- and B-coils. The hysteresis loops for each measured spot served for the determination of the coercive force and the energy losses.

The magnetic properties of the characteristic samples, namely 0W0A, 0.6W0.6A and the initial state, are depicted exemplarily in Fig. 14. Their dependence on the heat-treatment and the phase composition is evident. The initial state, namely ferrite-pearlite, exhibits a slim hysteresis with a high level of magnetization. In contrast, the magnetization behaviors of 0W0A and 0.6W0.6A look more similar due to the macroscopically identical values of the maximal flux density; however, 0.6W0.6A exhibits a noticeably higher coercive force and hence higher total energy losses. The evaluation of all measured data reveals a general dependence of the magnetic properties on the cooling intensity. Figure 15 illustrates the average values of the energy losses and the coercive force for the applied cooling conditions, with the initial state as a reference. Both values increase continuously with the cooling intensity and reach their maxima at the water-air cooling with 0.1 MPa; then they exhibit a slight decrease. Three ranges representing the three general states (initial, air-cooled and water-air-cooled) can be clearly distinguished by the values of the magnetic properties.

Fig. 13 Setup of the magnetic device for the determination of the magnetic properties in the tube cross-sections
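The described conversion of the recorded signals into H and B can be sketched as follows. The coil turn numbers are those given in the text; the magnetic path length and the B-coil cross-section are placeholder values that, in the real setup, follow from the tube and yoke geometry:

```python
import numpy as np

N_H, N_B = 91, 23  # turns of the primary (H) and secondary (B) coils
L_MAG = 0.1        # effective magnetic path length in m (placeholder)
AREA = 1.0e-5      # cross-section enclosed by the B-coil in m^2 (placeholder)

def field_and_flux_density(current_a, voltage_v, dt_s):
    """Magnetic field H (Ampere's law) and flux density B (Faraday-Lenz law)
    from the sampled coil current and induced voltage."""
    h = N_H * np.asarray(current_a) / L_MAG                     # in A/m
    b = np.cumsum(np.asarray(voltage_v)) * dt_s / (N_B * AREA)  # in T
    return h, b
```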



Fig. 14 Hysteresis loops of exemplary cross-sections from the given samples


Fig. 15 Average values of the energy losses and the coercive force for the given samples, with standard deviations

For the water-air cooling, it is remarkable that the magnetic properties show a slightly inverse relation to the cooling intensity, in contrast to the hardness values (cf. Fig. 10) and to the literature [27]. The coercive force is often called "magnetic hardness", since it is influenced by the microstructure in a similar way [25]. Interactions between the magnetic constituents, such as magnetic walls or domains, and the microstructure, such as stress fields or dislocations, are responsible for this connection [26]. In these investigations, a more intense cooling seems to induce internal stresses in the microstructure that could lead to the decrease of the magnetic properties. With the aim of a non-destructive prediction of the phase composition, a correlation between the mechanical and magnetic properties was derived. In Fig. 16, the hardness is plotted as a function of the energy losses (Fig. 16a) and the coercive force (Fig. 16b) over the whole range of the measured data. The relations can be described by the exponential functions HV = 30.237 · e^(0.6011·A) (R² = 0.969) and HV = 64.353 · e^(0.5733·H_C) (R² = 0.946). The data of 0.6W0.6A deviate more pronouncedly from these fits, which might be explained by the increased internal



Fig. 16 Hardness of the given samples as a function of the energy losses A (a) and the coercive force H_C (b), with the respective exponential fit equations

stresses described above. This phenomenon will be studied more closely in future work. The obtained dependencies allow the prediction of the mechanical properties on the basis of the non-destructive analysis.
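A minimal sketch of the resulting non-destructive hardness prediction, using the fit coefficients from Fig. 16:

```python
import math

def hardness_from_losses(a_w_per_kg):
    """HV30 hardness from the energy losses A (fit of Fig. 16a, R^2 = 0.969)."""
    return 30.237 * math.exp(0.6011 * a_w_per_kg)

def hardness_from_coercive_force(hc_ka_per_m):
    """HV30 hardness from the coercive force H_C (fit of Fig. 16b, R^2 = 0.946)."""
    return 64.353 * math.exp(0.5733 * hc_ka_per_m)

# Example: energy losses of 5 W/kg predict a hardness of roughly 610 HV30.
print(round(hardness_from_losses(5.0)))  # -> 611
```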


6 Virtual Design of Tube Heat-Treatment

A numerical simulation of the technological process can substitute numerous experimental tasks and efforts. Thus, a suitable mathematical model was developed and implemented in the simulation software ANSYS Workbench. The model allows for a prediction of the mechanical properties of the finished product as well as an inverse determination of the technological input parameters of the heat-treatment from the intended mechanical properties (Fig. 17) [30]. For the simulation of the heat-treatment, an electromagnetic task (the energy losses during inductive heating) and a thermal task (the temperature evolution and the corresponding phase transformations during heating/cooling) have to be solved. In ANSYS, both tasks are treated separately by the Maxwell eddy-current and thermal solvers, respectively, with a direct coupling of the results via the so-called Response Surface Optimization tool and the feedback operator. The virtual feed and tube movement are realized by prescribed time intervals for the heating/cooling, defined by the given feed rates and the geometrical parameters of the heat-treatment station.

During inductive heating, the magnetic field created by the alternating current generates heating energy in a conductor, here the tube: the induced current flowing through the tube body causes ohmic losses that serve as a heat source. The distribution of the electric field, and hence of the heat flow over the volume of the conductor, depends on the physical properties of the material. The energy losses and the heat distribution induced by the inductive heating were included in the developed model by simulating the high-frequency alternating current. The geometrical and material parameters of the induction coil and the tube body were taken from the experimental setup (cf. Fig. 2). The physical properties of 22MnB5 were based on literature data [10, 31, 32] and modelled as temperature-dependent. In order to take the Curie temperature, and hence the change of the magnetic properties, into account, a finite number of calculation steps with prescribed intervals of heating time was specified within the steady-state task. The temperature values calculated at each processing step were implemented as boundary conditions for the subsequent step. Although a decreasing time step size increases the quality of the model, the number of time steps should be minimized due to the significant computation time and processing effort.


Fig. 17 Model allowing the inverse determination of the required heat-treatment parameters based on target mechanical properties


For the current simulation, 20 time steps of 0.1 s were used, and the simulation was carried out according to the experimental conditions and parameters of the inductive heating (cf. Sect. 4.2). The ohmic losses calculated in the electromagnetic solver were transferred into the thermal solver, and the transient thermal analysis of the phase transformations was performed. Automatic adaptive meshing was applied to determine the ohmic losses efficiently and to resolve the skin layer thickness, providing a higher accuracy of the calculations according to [33]. The ohmic losses averaged over the elements were defined as spatially distributed loads. This allowed the heat flow and the corresponding temperature distribution field to be determined (Fig. 18). According to the computed temperature field evolution and the transformation kinetics described by the JMA-equation (Fig. 6), the amount of formed austenite was calculated for each element (Fig. 18). Based on the computed austenite fraction, its decomposition into martensite during the subsequent cooling was considered. The cooling process was divided into two stages: passive air cooling during transportation and active spray cooling with a water-air mixture at a pressure of 0.1 MPa. According to the experimental data, this configuration ensures a complete transformation of austenite into martensite; thus, diffusive transformations were not considered. The heat transfer coefficients for these cooling conditions were modelled as temperature-dependent according to preliminary investigations. Based on the temperature evolution, the phase transformations during the cooling were described for each mesh element according to Wildau and Hougardy [34]:

f_M = 1 − exp[−γ · (M_s − T)^ζ], (6)

where f_M is the martensite fraction, T the temperature at a particular time in °C, M_s the martensite start temperature in °C, and γ and ζ material coefficients that depend on M_s. The martensite start temperature was defined considering the actual chemical composition of 22MnB5 [35].
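A sketch of Eq. 6 in code form. The coefficients γ and ζ depend on M_s [34], so the values used below are placeholders for illustration only; for ζ = 1 the expression reduces to the familiar Koistinen-Marburger form:

```python
import math

def martensite_fraction(temp_c, ms_c, gamma, zeta):
    """Martensite fraction formed on cooling below the martensite start
    temperature M_s (Eq. 6)."""
    if temp_c >= ms_c:
        return 0.0  # no martensite forms above M_s
    return 1.0 - math.exp(-gamma * (ms_c - temp_c) ** zeta)

# Placeholder coefficients: gamma = 0.011 1/K, zeta = 1, M_s = 400 degC.
print(round(martensite_fraction(100.0, 400.0, 0.011, 1.0), 3))  # -> 0.963
```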


Fig. 18 Simulation of the inductive heating for a feed rate of 16.7 mm s−1: determination of the ohmic losses in the Maxwell eddy-current solver and generation of the temperature field along with the calculation of the austenite fraction in the thermal-transient solver
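Conceptually, the solver coupling described above amounts to a staggered time-stepping loop. The following sketch is illustrative only; the solver functions are placeholders standing in for the ANSYS Maxwell eddy-current and thermal-transient tasks:

```python
DT_S, N_STEPS = 0.1, 20  # time step size in s and number of steps (from the text)

def run_heating_simulation(elements, temperature,
                           solve_eddy_current, solve_heat_transfer, jma_update):
    """One-way staggered loop: ohmic losses -> temperature -> austenite fraction."""
    austenite = {e: 0.0 for e in elements}
    for _ in range(N_STEPS):
        # Electromagnetic task with temperature-dependent material data
        # (captures the loss of ferromagnetism above the Curie temperature).
        losses = solve_eddy_current(elements, temperature)
        # Thermal task: element-averaged ohmic losses act as distributed heat sources.
        temperature = solve_heat_transfer(elements, temperature, losses, DT_S)
        # Transformation kinetics: JMA evaluation per element, f_A = f(T, t).
        for e in elements:
            austenite[e] = jma_update(austenite[e], temperature[e], DT_S)
    return temperature, austenite
```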


Fig. 19 Phase distribution along the tube length on the inner tube surface after the inductive heating (a), at the time point of 19.5 s during the cooling (b) and after the completed cooling (c) (F+P: ferrite-pearlite, A: austenite, M: martensite)

The phase distribution along the tube after the heating (a), during the cooling at a time point of 19.5 s, when martensite is partially formed (b), and after the complete cooling (c) is plotted in Fig. 19. At the beginning of the cooling, the heated part consists completely of austenite, the non-heated parts are ferritic-pearlitic and a transition zone of partial austenitizing is present. After the martensite start temperature is reached, martensite begins to form; in Fig. 19b, an increasing fraction of martensite is visible. A slightly asymmetric formation is a consequence of the non-homogeneous distribution of the ohmic losses. This becomes more evident during the transformation, when slightly different ohmic losses result in a visible difference in the phase transformation. Furthermore, the martensite fraction slightly decreases from the transition zone towards the center due to the temperature gradient: the transition zone cools faster owing to the heat flow towards the non-austenitized tube sections. Thus, at this point in time, the martensite fraction is higher close to the transition zone than in the center. As the temperature equalizes, the difference in the martensite fraction continuously decreases and is eliminated when the transformation is completed (Fig. 19c). The developed model reduces the experimental effort through the interconnection of the different model elements in ANSYS Workbench. It allows suitable process parameters to be determined for the designated properties of the finished product.

7 Conclusions

A technological process for the continuous manufacturing of tubes from the boron-alloyed steel 22MnB5 featuring tailored properties in the longitudinal direction was developed. A continuous tube forming line was upgraded with a heat-treatment station composed of a short-action inductive heating and a flexible water-air spray cooling system. To gain control over the transformation kinetics during the fast heating and cooling under consideration of the technological parameters, investigations on austenitizing and austenite decomposition were performed. The results can be summarized as follows:


• Dilatometric experiments on inductive heating at nominal rates from 500 to 2500 K s−1 resulted in a complete austenitizing of all investigated samples. Since the actual heating rates in the relevant two-phase region dropped to 368–380 K s−1 due to the change of the magnetic properties, the transformation kinetics of austenite formation appears similar for all experiments. However, the influence of the increasing heating rates, which matched the target rates up to the Curie temperature, is expressed by a shift of the transformation temperatures to higher values and an enlargement of the transformation temperature range. Austenite formation was described as a function of temperature and heating time by a JMA-model.
• The heat-treatments performed in the developed technological line revealed that a wide range of phase compositions and mechanical properties can be obtained by varying the cooling parameters. Cooling with still as well as compressed air resulted in a complex microstructure consisting of ferrite and lower and upper bainite, with hardness values of 239 HV and 352 HV, respectively. The austenitized samples exposed to the water-air cooling exhibited a quenched microstructure with an average hardness above 550 HV.
• The magnetic properties evaluated for the non-destructive characterization of the mechanical properties of the heat-treated tubes showed a clear dependence on the cooling conditions. Using exponential functions, the mechanical properties can be predicted from the values of the energy losses and the coercive force.
• ANSYS Workbench was applied to simulate the heat-treatment process and to predict the mechanical tube properties. To this end, the temperature field was computed based on the ohmic losses, which served as the heat source. Based on dilatometric measurements, the evolution of austenite formation was described by a JMA-model. Subsequently, the austenite decomposition, i.e. the transformation of the previously austenitized material sections into martensite, was computed.

Acknowledgements The present study is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), project number 134839507, within the scope of the graduate school IRTG 1627 "Virtual Materials and Structures and their Validation", subproject C5 "Virtual Design and Manufacturing of Tailored Tubes".

References

1. Merklein, M., Johannes, M., Lechner, M., et al. (2014). A review on tailored blanks – Production, applications and evaluation. Journal of Materials Processing Technology, 214, 151–164.
2. Zadpoor, A. A., Sinke, J., & Benedictus, R. (2007). Mechanics of tailor welded blanks: An overview. Key Engineering Materials, 344, 373–382.
3. Groche, P., Wohletz, S., Brenneis, M., et al. (2014). Joining by forming – A review on joint mechanisms, applications and future trends. Journal of Materials Processing Technology, 214(10), 1972–1994.


4. Kopp, R., Wiedner, C., & Meyer, A. (2005). Flexibly rolled sheet metal and its use in sheet-metal forming. Advances in Materials Research, 6(8), 81–92.
5. Vollertsen, F., & Lange, K. (1998). Enhancement of drawability by local heat treatment. CIRP Annals-Manufacturing Technology, 47, 181–184.
6. Geiger, M., Merklein, M., & Vogt, U. (2009). Aluminium tailored heat treated blanks. Production Engineering, 401–410.
7. Chang, Y., Wang, C. Y., Zhao, K. M., et al. (2016). An introduction to medium-Mn steel: Metallurgy, mechanical properties and warm stamping process. Materials and Design, 94, 424–432.
8. Naderi, M., Durrenberger, L., Molinari, A., et al. (2008). Constitutive relationships for 22MnB5 boron steel deformed isothermally at high temperatures. Materials Science and Engineering A, 478, 130–139.
9. Karbasian, H., & Tekkaya, A. E. (2010). A review on hot stamping. Journal of Materials Processing Technology, 210, 2103–2118.
10. Spittel, M., & Spittel, T. (2009). Steel symbol/number: 22MnB5/1.5528. In H. Warlimont (Ed.), Springer Materials – The Landolt-Boernstein Database. Berlin, Heidelberg: Springer. https://doi.org/10.1007/978-3-540-44760-3_146.
11. Herbst, S., Steinke, K. F., Maier, H. J., et al. (2016). Determination of heat transfer coefficients for complex spray cooling arrangements. International Journal of Microstructure and Materials Properties, 11, 229–246.
12. Merklein, M., & Lechler, J. (2008). Determination of material and process characteristics for hot stamping processes of quenchable ultra high strength steels with respect to a FE-based process design. In SAE World Congress: Innovations in Steel and Applications of Advanced High Strength Steels for Automobile Structures. https://doi.org/10.4271/2008-01-0853.
13. Naderi, M., Saeed-Akbari, A., & Bleck, W. (2008). The effects of non-isothermal deformation on martensitic transformation in 22MnB5 steel. Materials Science and Engineering A, 487, 445–455.
14. Guk, A., Kunke, A., Kräusel, V., et al. (2017). Influence of inductive heating on microstructure and material properties in roll forming processes. AIP Conference Proceedings, 1896, 800271–800276.
15. Tröster, T., & Niewel, J. (2014). Inductive heating of blanks and determination of corresponding process windows for press hardening. Report Project P 805/IGF-Nr. 16319 N.
16. Hordych, I., Bild, K., Boiarkin, V., et al. (2018). Phase transformations in a boron-alloyed steel at high heating rates. Procedia Manufacturing, 15, 1062–1070.
17. Haimbaugh, R. (2015). Practical induction heat treating (2nd ed.). ASM International. ISBN 9781627080897.
18. Kolleck, R., Veit, R., Merklein, M., et al. (2009). Investigation on induction heating for hot stamping of boron alloyed steels. CIRP Annals-Manufacturing Technology, 58, 275–278.
19. Miokovic, T., Schwarzer, J., Schulze, V., et al. (2004). Description of short time phase transformations during the heating of steels based on high-rate experimental data. Journal de Physique IV, 120, 591–598.
20. Kuepferle, J., Wilzer, J., Weber, S., et al. (2015). Thermo-physical properties of heat-treatable steels in the temperature range relevant for hot-stamping applications. Journal of Materials Science, 50, 2594–2604.
21. Johnson, W. A., & Mehl, R. F. (1939). Reaction kinetics in processes of nucleation and growth. Transactions of the Metallurgical Society of AIME, 135, 416–458.
22. Avrami, M. (1941). Kinetics of phase change. The Journal of Chemical Physics, 9, 177–184.
23. Nowak, M., Golovko, O., Nürnberger, F., et al. (2013). Water-air spray cooling of extruded profiles: Process integrated heat treatment of the alloy EN AW-6082. Journal of Materials Engineering and Performance, 22, 2580–2587.
24. EN ISO 6507-1. (2005). Metallic materials – Vickers hardness test – Part 1: Test method.
25. Morgner, W., Michel, F., Bezljudko, G., et al. (2015). Zerstörungsfreie Materialcharakterisierung mittels Koerzimetrie. Non-destructive Testing Journal, 144, 40–43.

44

I. Hordych et al.

26. Hubert, O., & Lazreg, S. (2017). Two phase modeling of the influence of plastic strain on the magnetic and magnetostrictive behaviors of ferromagnetic materials. Journal of Magnetism and Magnetic Materials, 424, 421–442. 27. Byeon, J. W., & Kwun, S. I. (2003). Magnetic evaluation of microstructures and strength of eutectoid steel. Materials Transactions, 44(10), 2184–2190. 28. Nahak, B. (2017). Material characterization using Barkhausen noise analysis technique—A review. Indian Journal of Science and Technology, 10(14), 1–10. 29. Saquet, O., Chicois, J., & Vincent, A. (1999). Barkhausen noise from plain carbon steels: Analysis of the influence of microstructure. Materials Science and Engineering A, 269, 73–82. 30. Hordych, I., Boiarkin, V., Rodman, D., et al. (2017). Manufacturing of tailored tubes with a process integrated heat treatment. AIP Conference Proceedings, 1896, 1900031–1900036. 31. Vibrans, T. (2016). Induktive Erwärmung von Formplatinen für die Warmumformung. Doctoral Thesis, Chemnitz. 32. Zedler, T., Nikanorov, A., & Nacke, B. (2008). Investigations of relative magnetic permeability as input data for numerical simulation of induction surface hardening. In Proceedings of International Scientific Colloquium Modelling for Electromagnetic Processing, pp. 119–126. 33. Larsen, P., & Horiuchi, T. (2013). Induction heating applications. ANSYS Application Brief. https://www.ansys.com/-/media/ansys/corporate/resourcelibrary/techbrief (downloaded on 25.10.18). 34. Wildau, M., & Hougardy, H. (1987). Zur Auswirkung der Ms-Temperatur auf Spannungen und Maßänderungen. Journal of Heat Treatment and Materials, 42, 261–268. 35. Hochholdinger, B. (2012). Simulation des Presshärteprozesses und Vorhersage der mechanischen Bauteisleigenschaften nach dem Härten. Doctoral thesis, Stuttgart.

Mathematical Modelling and Analysis of Temperature Effects in MEMS Joachim Escher and Tim Würth

Abstract Of concern is the mathematical analysis of models for Micro-Electro-Mechanical Systems (MEMS). These models consist of coupled partial differential equations with a moving boundary. Although MEMS devices are often operated in non-isothermal environments, temperature is usually neglected in the mathematical investigations. Therefore the focus of our modelling is to incorporate temperature and the related material properties. We derive a model which allows us to focus on different aspects of the underlying physics. Finally we analyse a simplified version of this model: the Small Aspect Ratio Limit.

1 Introduction

The technology of MEMS is concerned with microscopic devices that function by combining electrostatic with mechanical features. In his famous lecture from 1959, Richard Feynman discussed how very small machines could be used to solve a wide variety of problems. However, due to the difficult manufacturing, a rapid development only started in the 1980s, which in turn revolutionized numerous branches of industry [9]. We now give some examples of such devices; for more details we refer to [23]. Microsensors can be used to measure inertia, for example in airbags, or to measure pressure for medical applications. Microactuators create a displacement by converting an electric signal into a mechanical output. Micropumps are used as drug delivery systems in medicine [35] or for a controlled transfer of fluids in chemical engineering [1]. Several of these devices actually rely on thermal effects: for example, a thermal actuator creates motion by a thermal expansion that is due to resistive heating [37]. Moreover, a thermal-based microsensor can be used to monitor glucose and other metabolites [43]. Although other devices do not explicitly rely on temperature, they are nevertheless often operated in a non-isothermal environment. Possible examples are turbines [41], rocket engines [6] and satellites [25].

Fig. 1 Idealized MEMS device in 2d with membrane u

A key component (see Fig. 1) of several MEMS devices consists of an elastic membrane and a fixed ground plate. A voltage is applied to the system and the resulting Coulomb force causes the membrane to deflect towards the ground plate. This change in gap size in turn impacts the electrostatics, and thus the two effects are coupled to each other. In order to design and optimize MEMS devices one has to gain a precise understanding of these underlying dynamics. It is therefore no surprise that several branches of science have advanced in this direction. We now turn to the recent mathematical investigations of MEMS. A lot of mathematical research has focused on the idealized MEMS device with an electrically actuated membrane. Apart from the fact that this model is important for applications, there is also a purely mathematical reason for this interest. The device's mechanics can only be fully understood if the model takes into account the fact that the device's boundary changes over time. The arising partial differential equations therefore take the form of a moving boundary problem. These kinds of problems are of high interest in themselves due to their inherently nonlinear nature and their capability to describe natural phenomena accurately. Examples outside the world of MEMS devices include the Stefan problem, tumor growth models [4] and the Muskat problem [14]. The first mathematical analyses for MEMS devices use the additional assumption of a small aspect ratio (see for instance [8, 15, 16, 19, 21, 29]). In the very recent contribution [20] the authors consider an additional pressure term in the stationary case and examine positive solutions and singularities. Assuming a small aspect ratio makes it possible to decouple the electrostatic and mechanical effects and thus to consider a problem with a fixed boundary. The first results without this assumption were obtained in [12] and in [28] for the stationary case. Building on this breakthrough, several papers have been published that deal with different extensions of this model. For additional details we also refer to the surveys [11, 34] and the references therein. In [13] the authors drop the usual assumption of small deformations and therefore consider a quasilinear equation for the membrane's deflection. The techniques developed therein will help us handle a quasi-linearity which arises due to the temperature in our model.


The case of a general permittivity profile without assuming a small aspect ratio is investigated in [26, 27] for small deformations and in [10] for non-small deformations. Moreover, in [30, 31] bending effects are included into the model, and in [32] the domain of the ground plate is generalized from a simple interval to a convex and smooth 2d domain. We will use the approach of the latter paper to prove well-posedness for a 3d model. Lastly, in [33] a constrained model is developed which allows further insights into the touchdown phenomenon. A few theoretical contributions in mathematics have already been dedicated to the incorporation of temperature into models for MEMS. We want to emphasize that none of these authors considered the well-established moving boundary problem just described. However, due to the proximity to our work, we want to review this part of the literature as well. In [2] the authors consider the "V-shape" electro-thermal actuator. They show existence of a weak solution for their system and then carry out a numerical analysis. In [24] the authors derive a model for the thermoelastic behavior of a micro-beam resonator. Furthermore, they solve the resulting equation analytically and show good agreement with already available numerical data. To the best of our knowledge, no analytical work has been done that takes into account temperature effects on the dynamics of the above described idealized MEMS device. First steps in that direction were taken in [37, 4.4] and [37, 6.3.2]. However, both approaches made use of significant simplifications. In [37, 4.4] the Joule heating of a cylinder is considered. The authors neglect a possible deflection of the cylinder's top and therefore only couple equations for electrostatic and thermal effects. In [37, 6.3.2] the authors ignore electrostatics and instead focus on the thermoelastic behaviour. Additionally, space variations of the temperature and damping of the membrane are neglected. Furthermore, in both chapters the authors neither consider the question of well-posedness nor perform a numerical analysis. Instead they consider certain limits of the resulting equations and steady state solutions. It is therefore our intention to fill this gap by incorporating temperature into the well-established moving boundary problem for the idealized MEMS device.

2 The Full Model

Our MEMS device is a cylinder with radius a and height h, filled with a fluid. The fixed ground plate lies at \tilde{z} = -h. We will use a tilde on several variables because we will rescale our equations later for convenience. The elastic membrane is attached at \tilde{z} = 0. Denoting the membrane's displacement by \tilde{u}, the volume of the cylinder is given by

\tilde{V}_C = \pi a^2 h + \int_{\tilde{E}} \tilde{u}(\tilde{x}, \tilde{y}) \, d(\tilde{x}, \tilde{y}).    (1)


Fig. 2 Domain \Omega(\tilde{u}(\tilde{t})) for \tilde{u} = 1

The ground plate region and the region of the cylinder are denoted by

\tilde{E} := \{ (\tilde{x}, \tilde{y}) \,|\, \tilde{x}^2 + \tilde{y}^2 < a^2 \}

and

\Omega(\tilde{u}(\tilde{t})) := \{ (\tilde{x}, \tilde{y}, \tilde{z}) \in \tilde{E} \times (-h, \infty) : -h < \tilde{z} < \tilde{u}(\tilde{x}, \tilde{y}, \tilde{t}) \},

respectively. In the following we assume that (\tilde{x}, \tilde{y}, \tilde{z}) \in \Omega(\tilde{u}(\tilde{t})), \tilde{t} \in [0, \infty) holds if \tilde{x}, \tilde{y}, \tilde{z} and \tilde{t} are not specified otherwise.

2.1 The Evolution of the Membrane

We formulate the equations governing the membrane's deflection, which is modelled with a function \tilde{u}: \tilde{E} \times [0, \infty) \to \mathbb{R} [37, p. 192, p. 236]:

\rho_m d \, \frac{\partial^2 \tilde{u}}{\partial \tilde{t}^2} + a_d \frac{\partial \tilde{u}}{\partial \tilde{t}} - \mu(\tilde{T}) \tilde{\nabla}_\perp^2 \tilde{u} + D \tilde{\nabla}_\perp^4 \tilde{u} = -\frac{\epsilon_1(\tilde{T})}{2} |\tilde{\nabla} \tilde{\psi}|^2 + P(\tilde{T}, \tilde{V}_C).    (2)

We use the notation \tilde{\nabla}_\perp^2 := \partial_{\tilde{x}}^2 + \partial_{\tilde{y}}^2. Here d is the membrane's thickness, \rho_m the density of the membrane's material, a_d the damping constant, \mu(\tilde{T}) the tension in


Fig. 3 Domain \Omega(\tilde{u}(\tilde{t})) for a membrane with a negative and a positive deflection

) the permittivity. Furthermore P is the the membrane, D its flexural rigidity and 1 (T  the temperature. The right-hand side of the latter equation captures the pressure and T fact that the membrane’s movement is dependent on both the electrostatic potential and the temperature within the device. The voltage difference pulls the membrane to the ground plate whereas the pressure of the fluid, which increases with temperature, pushes the membrane upwards. At the edge of the cylinder the membrane is clamped and it is assumed to start in position u 0 . Therefore we have the boundary and initial conditions

and

  u ( x, y,  t) = 0, for ( x, y) ∈ ∂ E, t ∈ [0, ∞)  u ( x, y,  t) = ∂r 

(3)

 x, y), for ( x, y) ∈ E.  u ( x, y, 0) = u 0 (

(4)

2.2 The Electrostatic Potential

We state the governing equations for the electrostatic potential within the cylinder. A tilde on differential operators denotes derivatives with respect to variables with a tilde. \sigma(\tilde{T}): \mathbb{R} \to \mathbb{R} is the conductivity of the fluid in the cylinder, which depends on the temperature. We model the electrostatic potential with the equation

\tilde{\nabla} \cdot \big(\sigma(\tilde{T}) \tilde{\nabla} \tilde{\psi}\big)(\tilde{x}, \tilde{y}, \tilde{z}) = 0.    (5)


) as a scalar A derivation of the latter can be found in [37, p. 104]. By defining σ (T only depending on temperature and not as a tensor depending on the position inside the cylinder we assumed that the fluid is homogeneous and isotropic. The fixed plate at  z = −h is grounded and the potential in the amount of V is applied to the membrane. Also we assume that there is no current through the lateral boundary of the cylinder. Denoting the derivative in the outward radial direction by ∂r this imposes the boundary conditions [37, p. 104]:      ψ ( x, y, −h) = 0, ψ ( x, y,  u ( x, y)) = V, ∂r ψ = 0

r =a

.

(6)

2.3 The Temperature

Now we turn to the thermal problem. We derive a diffusion-advection equation for the temperature \tilde{T}, similar to [7, 39]. Here \rho_f is the density of the fluid and c is the specific heat capacity. We write \rho_f = m / \tilde{V}_C, where m is the mass of the whole fluid. This makes the density time-dependent, because \tilde{V}_C depends on the membrane's position. We do, however, assume that density variations in space are negligible. The thermal energy is then given by \rho_f c \tilde{T}. The continuity equation in differential form, which relates the change of the thermal energy to the heat flux j and to the source term R, is given by

\frac{\partial \rho_f c \tilde{T}}{\partial \tilde{t}} + \tilde{\nabla} \cdot j = R.    (7)

Our source term captures two effects: firstly, the electric energy which is converted to thermal energy [37, p. 105], and secondly, the pressure-volume work (or volume-change work) [37, p. 189]. Pressure-volume work is relevant in our case because if the cylinder's volume decreases, the heat energy inside the cylinder increases.¹ Consequently, the source term is of the form

R = \sigma(\tilde{T}) |\tilde{\nabla} \tilde{\psi}|^2 - P(\tilde{T}, \tilde{V}_C) \frac{d\tilde{V}_C}{d\tilde{t}}.    (8)

Here we decided not to include the heat generated by elastic energy. This is reasonable because this amount of heat is outweighed by Joule heating [37, p. 38]. We consider two different types of flux. The diffusive flux j_{diff}, which corresponds to heat conduction in our setting, can be approximated by Fourier's first law:

j_{diff} = -k \tilde{\nabla} \tilde{T}.    (9)

¹ Here we used the, in our case reasonable, assumption that the described process is reversible [37, p. 189]. Also, just like with the density, we assumed that space variations of the pressure-volume work are negligible.


The thermal conductivity k is assumed to be constant. The advective flux j_{adv} accounts for the bulk motion of the fluid in the cylinder in direction v:

j_{adv} = v \tilde{T}.    (10)

Now we plug the flux (j = j_{adv} + j_{diff}) and the source term into the continuity equation and assume c to be constant in order to obtain

c \frac{\partial \rho_f \tilde{T}}{\partial \tilde{t}} = k \tilde{\Delta} \tilde{T} - \tilde{\nabla} \cdot (v \tilde{T}) + \sigma(\tilde{T}) |\tilde{\nabla} \tilde{\psi}|^2 - P(\tilde{T}, \tilde{V}_C) \frac{d\tilde{V}_C}{d\tilde{t}}.    (11)

The velocity v in the advection term -\tilde{\nabla} \cdot (v \tilde{T}) is hard to handle if the device is turned or moved. This is why we want to neglect it, which means we consider the case in which conduction dominates advection. Fortunately, this approximation is very reasonable in our case: since we are dealing with a microsystem, we can assume a small Reynolds number Re [37], which in turn yields a small Rayleigh number² [17]. Furthermore, for a laminar flow (again a reasonable assumption in the regime of a small Reynolds number [37, p. 307]) the length of the boundary layer is approximately r/\sqrt{Re} [37, p. 305]. This makes the boundary layer's thickness larger than our system, which in turn results in slow velocities. Also, diffusion is relatively fast on small length scales [37, p. 83]. Therefore we reduce our initial equation to

c \frac{\partial \rho_f \tilde{T}}{\partial \tilde{t}} = k \tilde{\Delta} \tilde{T} + \sigma(\tilde{T}) |\tilde{\nabla} \tilde{\psi}|^2 - P(\tilde{T}, \tilde{V}_C) \frac{d\tilde{V}_C}{d\tilde{t}}.    (12)

In our setting this can be rewritten as

\frac{c m}{\tilde{V}_C} \frac{\partial \tilde{T}}{\partial \tilde{t}} = k \tilde{\Delta} \tilde{T} + \sigma(\tilde{T}) |\tilde{\nabla} \tilde{\psi}|^2 - c \tilde{T} m \frac{\partial (\tilde{V}_C^{-1})}{\partial \tilde{t}} - P(\tilde{T}, \tilde{V}_C) \frac{d\tilde{V}_C}{d\tilde{t}}.    (13)

We assume that the cylinder is at ambient temperature \tilde{T}_0 at time \tilde{t} = 0 and that the membrane is insulated. We allow for heat loss through the lateral sides of the cylinder's boundary. The heat loss is modelled by the Newton cooling condition with heat transfer coefficient h_{nc}. This gives us the initial and boundary conditions

\tilde{T}(\tilde{x}, \tilde{y}, \tilde{z}, 0) = \tilde{T}_0, \quad \frac{\partial \tilde{T}}{\partial n}\Big|_{\tilde{z} = \tilde{u}(\tilde{x}, \tilde{y}, \tilde{t})} = 0, \quad k \frac{\partial \tilde{T}}{\partial \tilde{r}}\Big|_{\tilde{r} = a} = -h_{nc} \big(\tilde{T} - \tilde{T}_0\big).    (14)

Here n denotes the unit normal at the membrane. We also assume that the cylinder is heated from below. This is modelled by a given function \tilde{T}_h: \tilde{E} \times [0, \infty) \to \mathbb{R}:

\tilde{T}(\tilde{x}, \tilde{y}, -h, \tilde{t}) = \tilde{T}_h(\tilde{x}, \tilde{y}, \tilde{t}).    (15)

² This number gives the ratio between the gravitational and viscous forces.


2.4 Transformation of the System

Now we introduce dimensionless variables

x := \frac{\tilde{x}}{a}, \quad y := \frac{\tilde{y}}{a}, \quad z := \frac{\tilde{z}}{h}, \quad t := \frac{\tilde{t}}{a_d a^2}, \quad u := \frac{\tilde{u}}{h}, \quad \psi := \frac{\tilde{\psi}}{V}, \quad T := \tilde{T}, \quad T_h(x, y, t) := \tilde{T}_h(\tilde{x}, \tilde{y}, \tilde{t}).

We further define the domains E := \{(x, y) \,|\, x^2 + y^2 < 1\} and \Omega(u(t)) := \{(x, y, z) \in E \times (-1, \infty) : -1 < z < u(x, y, t)\}. In the following we assume that (x, y, z) \in \Omega(u(t)) holds if not specified otherwise. We now use these variables and multiply the governing Eq. (5) for the potential \psi by h^2/V, the governing Eq. (13) for the temperature T by h^2/k, and the governing Eq. (2) for the membrane's displacement by a^2/h. Our equations in dimensionless form are

\frac{h^2}{a^2} \nabla_\perp \cdot \big(\sigma(T) \nabla_\perp \psi\big) + \frac{\partial}{\partial z}\Big(\sigma(T) \frac{\partial \psi}{\partial z}\Big) = 0,    (16)

\frac{c h^2 m}{a_d k a^2 V_C} \frac{\partial T}{\partial t} = \frac{h^2}{a^2} \nabla_\perp^2 T + \frac{\partial^2 T}{\partial z^2} + \sigma(T) \frac{V^2}{k} \Big( \frac{h^2}{a^2} |\nabla_\perp \psi|^2 + \Big(\frac{\partial \psi}{\partial z}\Big)^2 \Big) - \frac{c h^2 m}{a_d k a^2} \, T \, \frac{\partial (V_C^{-1})}{\partial t} - \frac{h^2}{k a_d a^2} P(T, V_C) \frac{dV_C}{dt}    (17)

and

\frac{\partial u}{\partial t} - \mu(T) \nabla_\perp^2 u + \frac{D}{a^2} \nabla_\perp^4 u = -\frac{\epsilon_1(T) V^2 a^2}{2 h^3} \Big( \frac{h^2}{a^2} |\nabla_\perp \psi|^2 + \Big(\frac{\partial \psi}{\partial z}\Big)^2 \Big) + \frac{a^2}{h} P(T, V_C).    (18)

We also simplified the equation for the membrane's displacement further by assuming that the thickness d of the membrane is small, so that we can drop the second time derivative. This is a standard simplification which is used for example in [9, 12, 18, 37]. This assumption is valid for micropumps and other MEMS devices


described in the mentioned references. The boundary and initial conditions now read

\psi(x, y, -1) = 0, \quad \psi(x, y, u(x, y, t)) = 1, \quad \frac{\partial \psi}{\partial r}\Big|_{r=1} = 0,    (19)

T(x, y, z, 0) = T_0, \quad \frac{\partial T}{\partial n}\Big|_{z = u(x, y, t)} = 0, \quad \frac{k}{a} \frac{\partial T}{\partial r}\Big|_{r=1} = -h_{nc} (T - T_0),    (20)

T(x, y, -1, t) = T_h(x, y, t),    (21)

u(x, y, 0) = u_0 \quad for (x, y) \in E,    (22)

u(x, y, t) = \partial_r u(x, y, t) = 0 \quad for (x, y) \in \partial E.    (23)

With the definitions of the aspect ratio \epsilon and the 'tuning parameters' \lambda_i,

\epsilon := \frac{h}{a}, \quad \lambda_1 := \frac{V^2 a^2}{2 h^3}, \quad \lambda_2 := \frac{a^2}{h}, \quad \lambda_3 := \frac{c m}{a_d k}, \quad \lambda_4 := \frac{V^2}{k}, \quad \lambda_5 := \frac{1}{k a_d}, \quad \lambda_6 := \frac{D}{a^2},

we now state our governing equations one last time, also replacing the terms containing V_C, which is given by (1), by terms containing u:

\epsilon^2 \big( \sigma(T) \partial_x^2 \psi + \sigma(T) \partial_y^2 \psi + \partial_x \sigma(T) \partial_x \psi + \partial_y \sigma(T) \partial_y \psi \big) + \sigma(T) \partial_z^2 \psi + \partial_z \sigma(T) \partial_z \psi = 0,    (24)

\frac{\lambda_3 \epsilon^2}{\pi a^2 h + \int_E u \, dx dy} \, \partial_t T = \epsilon^2 \nabla_\perp^2 T + \partial_z^2 T + \sigma(T) \lambda_4 \big[ \epsilon^2 |\nabla_\perp \psi|^2 + (\partial_z \psi)^2 \big] + \frac{\lambda_3 \epsilon^2 \, T \int_E \partial_t u \, dx dy}{\big(\pi a^2 h + \int_E u \, dx dy\big)^2} - \epsilon^2 \lambda_5 \, P(T, u) \int_E \partial_t u \, dx dy,    (25)

\partial_t u - \mu(T) \nabla_\perp^2 u + \lambda_6 \nabla_\perp^4 u = -\epsilon_1(T) \lambda_1 \big[ \epsilon^2 |\nabla_\perp \psi|^2 + (\partial_z \psi)^2 \big] + \lambda_2 P(T, u).    (26)

2.5 Examples for the Temperature Dependence

A variety of different materials are used in MEMS. The specific forms of the electric conductivity \sigma(T), the permittivity of the membrane \epsilon_1(T), the pressure P(T, u) and the shear modulus \mu(T) strongly depend on the material in question. Furthermore, these properties have to be validated in experiments. Therefore we do not choose specific forms for these four quantities and instead let them be arbitrary functions satisfying hypotheses appropriate for our mathematical analysis. Nevertheless, we want to give some examples of how they could look. For Sylgard 184 (a material used for micropumps; then \mu_0 = 0.0032 and \mu_1 = 0.373 [22, Table 4]) the shear modulus (second Lamé constant) \mu depends linearly on temperature:

\mu(T) = \mu_0 T + \mu_1.    (27)

The temperature dependence (for low temperatures) of the permittivity \epsilon_1(T) of the polymer PMMA can be approximated linearly as well [40]. For the pressure in a gas we can use the ideal gas law. With N the number of moles in the gas and R the ideal gas constant, the pressure P is given by

P = \frac{N R T}{V_C}.    (28)

For liquids there are other equations of state (EOS), for example the Peng-Robinson EOS (which can also be used for gases):

P = \frac{R T}{V_m - b} - \frac{a \alpha}{V_m^2 + 2 b V_m - b^2}.    (29)

For the interpretation of the real constants R, V_m, b, \alpha and a we refer the interested reader to [38]. The electric conductivity of a liquid can be approximated by an exponential function (see e.g. [5, Chaps. 4 and 5], [42]):

\sigma(T) = \sigma_\infty \exp\Big( -\frac{E_a}{k_B T} \Big),    (30)

where \sigma_\infty denotes the maximum electrical conductivity, k_B the Boltzmann constant and E_a the activation energy. Gases, however, have negligible electric conductivity.
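To make the role of these material laws concrete, the following sketch collects them as Python functions. This is purely illustrative: the Sylgard 184 values for \mu come from [22, Table 4], while all other numerical constants (N_moles, a, b, alpha, sigma_inf, E_a) are placeholder assumptions, not values from the text.

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant (J/K)

def mu(T, mu0=0.0032, mu1=0.373):
    """Shear modulus of Sylgard 184, linear in temperature, Eq. (27)."""
    return mu0 * T + mu1

def pressure_ideal_gas(T, V_C, N_moles=1e-6, R=8.314):
    """Ideal gas law, Eq. (28); N_moles is a placeholder value."""
    return N_moles * R * T / V_C

def pressure_peng_robinson(T, V_m, a=0.1, b=1e-5, alpha=1.0, R=8.314):
    """Peng-Robinson equation of state, Eq. (29); a, b, alpha are placeholders."""
    return R * T / (V_m - b) - a * alpha / (V_m**2 + 2 * b * V_m - b**2)

def sigma(T, sigma_inf=1.0, E_a=1e-20):
    """Arrhenius-type conductivity of a liquid, Eq. (30); constants are placeholders."""
    return sigma_inf * np.exp(-E_a / (K_B * T))
```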

3 The Small Aspect Ratio Limit

In this section we make the following simplifications: we let \epsilon \to 0, the dimension is reduced to two, we impose Dirichlet boundary conditions for both the temperature and the electrostatic potential, and we let \sigma(T) := \sigma be independent of the temperature.



3.1 The System

We define E := (-1, 1) (also denoted by I in the following) and \Omega(u(t)) := \{(x, z) \in E \times (-1, \infty) : -1 < z < u(x, t)\}. The electrostatic potential \psi is governed by

\sigma \partial_z^2 \psi(x, z) = 0, \quad (x, z) \in \Omega(u(t)), \ t > 0,    (31)

together with the boundary conditions

\psi(t, x, z) = \frac{1 + z}{1 + u(t, x)}, \quad (x, z) \in \partial\Omega(u(t)), \ t > 0.

The temperature T is governed by

\partial_z^2 T(x, z) = -\sigma \lambda_4 (\partial_z \psi)^2, \quad (x, z) \in \Omega(u(t)), \ t > 0,

with the boundary conditions

T(t, x, -1) = T_0, \quad T(t, x, u(x, t)) = T_1, \quad t > 0, \ x \in I,    (32)

for T_0 > T_1 > 0. For t > 0 and x \in I, the membrane's deflection u is governed by

\partial_t u(t, x) - \mu\big(T(x, u(t, x))\big) \partial_x^2 u(t, x) = -\frac{\lambda_1 \epsilon_1\big(T(x, u(t, x))\big)}{(1 + u(t, x))^2} + \lambda_2 P\big(T(x, u(t, x)), u(t, x)\big)    (33)

with the boundary and initial conditions

u(\pm 1, t) = 0, \ t > 0, \quad and \quad u(x, 0) = u_0(x) \ for \ x \in I.    (34)

We can now solve explicitly for both the electrostatic potential and the temperature.

3.2 Solution to the Thermal and Electrostatic Problems

The solution to the electrostatic problem may be written as

\psi(t, x, z) = \frac{1 + z}{1 + u(t, x)}, \quad t > 0, \ (x, z) \in \Omega(u(t)).


Inserting this expression into the temperature equation, the latter may be rewritten as

\partial_z^2 T(t, x, z) = -\frac{\sigma \lambda_4}{(1 + u(t, x))^2}, \quad t > 0, \ (x, z) \in \Omega(u(t)).

Since the right-hand side of this equation is independent of z, we can solve it explicitly by integrating twice. The solution to the temperature problem may be written as

T(t, x, z) = A z^2 + B z + C    (35)

with

A := A(u(t, x)) := -\frac{\sigma \lambda_4}{2 (1 + u(t, x))^2},

B := B(u(t, x)) := \frac{T_1 - T_0}{1 + u(t, x)} + \frac{\sigma \lambda_4 \, (u(t, x) - 1)}{2 (1 + u(t, x))^2}

and

C := C(u(t, x)) := \frac{T_1 - T_0}{1 + u(t, x)} + T_0 + \frac{\sigma \lambda_4 \, u(t, x)}{2 (1 + u(t, x))^2}.
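These closed forms can be machine-checked. The following sketch verifies symbolically (using sympy, a tooling assumption on our part, not part of the original analysis) that T = Az² + Bz + C satisfies the temperature equation and the boundary conditions of (32):

```python
import sympy as sp

z, u, sigma, lam4, T0, T1 = sp.symbols('z u sigma lambda4 T0 T1', positive=True)

A = -sigma * lam4 / (2 * (1 + u)**2)
B = (T1 - T0) / (1 + u) + sigma * lam4 * (u - 1) / (2 * (1 + u)**2)
C = (T1 - T0) / (1 + u) + T0 + sigma * lam4 * u / (2 * (1 + u)**2)

T = A * z**2 + B * z + C
psi = (1 + z) / (1 + u)  # explicit electrostatic solution

# ODE: T'' = -sigma * lambda4 * (d psi / dz)^2
assert sp.simplify(sp.diff(T, z, 2) + sigma * lam4 * sp.diff(psi, z)**2) == 0
# Boundary conditions: T(-1) = T0 and T(u) = T1
assert sp.simplify(T.subs(z, -1) - T0) == 0
assert sp.simplify(T.subs(z, u) - T1) == 0
```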

3.3 Well-Posedness of the Deflection Problem

For the reader's convenience we present the full system for this model again:

T_u(t, x, z) = A(u(t, x)) z^2 + B(u(t, x)) z + C(u(t, x)), \quad \psi_u(t, x, z) = \frac{1 + z}{1 + u(t, x)},    (36)

u_t - \mu(T_1) \partial_x^2 u = -\frac{\epsilon_1(T_1) \lambda_1}{(1 + u)^2} + \lambda_2 P(T_1, u),    (37)

u(\pm 1, \cdot) = 0, \quad u(\cdot, 0) = u_0.    (38)

Let p \in (1, \infty). We define the spaces

W^{2\alpha}_{p, B}(I) = \begin{cases} W^{2\alpha}_p(I), & 2\alpha \in [0, 1/p), \\ \{u \in W^{2\alpha}_p(I) \,;\, u(\pm 1) = 0\}, & 2\alpha \in (1/p, 2]. \end{cases}    (39)


Since T(t, x, u(x, t)) = T_1 for x \in I, t > 0, we can uncouple the temperature from the membrane's deflection and write P(u, T) = P(u), \epsilon_1(T) = \epsilon_1 and \mu(T) = \mu. Thus we can simplify the initial value problem for the membrane's equation:

u_t - \mu \partial_x^2 u = g(u), \quad t > 0, \ x \in I,
u(0, x) = u_0(x), \quad x \in I,    (40)
u(t, \pm 1) = 0, \quad t > 0,

with

g(u) := -\frac{\epsilon_1 \lambda_1}{(1 + u)^2} + \lambda_2 P(u).

We have the following result:

Theorem 4.1 Let p \in (1, \infty) and let u \mapsto P(u) be globally Lipschitz. For each u_0 \in W^2_{p, B}(I) with u_0 > -1 there is a positive existence time t_1 > 0 and a unique solution u(t, x) to (40) satisfying

u \in C\big([0, t_1), W^2_{p, B}(I)\big) \cap C^1\big((0, t_1), L_p(I)\big).

This theorem follows from a standard argument which we outline here. The operator

A u := \mu \partial_x^2 u, \quad u \in W^2_{p, B}(I),

is uniformly strongly elliptic (see e.g. [3, Example 4.3a]). Since we have simple Dirichlet boundary conditions, the boundary value problem (A, B) is normally elliptic ([3, Remark 4.3c]) and therefore

A \in \mathcal{H}\big(W^2_{p, B}(I), L_p(I)\big)

follows from [3, Theorem 4.1]. Denote by S(t) the semigroup generated by A. The exponential decay

\|S(t)\|_{\mathcal{L}(L_p(I))} \le M e^{-\nu t}, \quad M \ge 1, \ \nu > 0,

can be proven by applying [36, Chap. 4.4, Theorem 4.3]. Finally, the result follows, for example, from [3, Remark 12.2b].
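Although the theorem itself is purely analytical, the dynamics of (40) can be explored numerically. The following minimal method-of-lines sketch (explicit Euler in time, second-order finite differences in space) illustrates the equation; all parameter values are hypothetical and it is not part of the well-posedness argument:

```python
import numpy as np

def simulate_deflection(n=101, mu=1.0, lam1=1.0, lam2=0.0, eps1=1.0,
                        P=lambda u: 0.0, t_end=0.1, dt=1e-5):
    """Method-of-lines simulation of u_t - mu u_xx = g(u) on I = (-1, 1)
    with clamped values u(+-1) = 0 and initial condition u0 = 0 > -1."""
    x = np.linspace(-1.0, 1.0, n)
    h = x[1] - x[0]
    u = np.zeros(n)
    for _ in range(int(t_end / dt)):
        g = -lam1 * eps1 / (1.0 + u)**2 + lam2 * np.vectorize(P)(u)
        u_xx = np.zeros(n)
        u_xx[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2
        u = u + dt * (mu * u_xx + g)
        u[0] = u[-1] = 0.0           # Dirichlet boundary conditions
        if u.min() <= -1.0:          # touchdown: the model breaks down
            break
    return x, u
```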


References
1. Abiev, R. S. (2012). Modern state and perspectives of microtechnique application in chemical industry. Russian Journal of General Chemistry, 82(12), 2019–2024.
2. Ahn, J., Kuttler, K. L., & Shillor, M. (2017). Modeling, analysis and simulations of a dynamic thermoviscoelastic rod-beam system. Differential Equations and Dynamical Systems, 25(4), 527–552.
3. Amann, H. (1993). Nonhomogeneous linear and quasilinear elliptic and parabolic boundary value problems. In Function spaces, differential operators and nonlinear analysis (pp. 9–126). New York: Springer.
4. Bergner, M., Escher, J., & Lippoth, F. (2012). On the blow up scenario for a class of parabolic moving boundary problems. Nonlinear Analysis: Theory, Methods and Applications, 75(10), 3951–3963.
5. Bockris, J. O'M., & Reddy, A. K. (2000). Modern electrochemistry 2B: Electrodics in chemistry, engineering, biology and environmental science (Vol. 2). New York: Springer.
6. Brown, T. G. (2003). Harsh military environments and microelectromechanical (MEMS) devices. In SENSORS, 2003 IEEE (Vol. 2, pp. 753–760). IEEE.
7. Cohen, D., & Alexander, R. (1986). Chemical reactor theory and problems in diffusion. Physica D: Nonlinear Phenomena, 20(1), 122–141.
8. Esposito, P., Ghoussoub, N., et al. (2008). Uniqueness of solutions for an elliptic equation modeling MEMS. Methods and Applications of Analysis, 15(3), 341–354.
9. Esposito, P., Ghoussoub, N., & Guo, Y. (2010). Mathematical analysis of partial differential equations modeling electrostatic MEMS (Vol. 20). American Mathematical Society.
10. Escher, J., & Lienstromberg, C. (2016). A qualitative analysis of solutions to microelectromechanical systems with curvature and nonlinear permittivity profile. Communications in Partial Differential Equations, 41(1), 134–149.
11. Escher, J., & Lienstromberg, C. (2017). A survey on second-order free boundary value problems modelling MEMS with general permittivity profile. Discrete and Continuous Dynamical Systems - Series S, 10(4), 745–771.
12. Escher, J., Laurençot, P., & Walker, C. (2014). A parabolic free boundary problem modeling electrostatic MEMS. Archive for Rational Mechanics and Analysis, 211(2), 389–417.
13. Escher, J., Laurençot, P., & Walker, C. (2015). Dynamics of a free boundary problem with curvature modeling electrostatic MEMS. Transactions of the American Mathematical Society, 367(8), 5693–5719.
14. Escher, J., Matioc, B., & Walker, C. (2018). The domain of parabolicity for the Muskat problem. Indiana University Mathematics Journal, 67, 679–737.
15. Flores, G., Mercado, G., Pelesko, J. A., & Smyth, N. (2007). Analysis of the dynamics and touchdown in a model of electrostatic MEMS. SIAM Journal on Applied Mathematics, 67(2), 434–446.
16. Ghoussoub, N., & Guo, Y. (2008). On the partial differential equations of electrostatic MEMS devices II: Dynamic case. Nonlinear Differential Equations and Applications NoDEA, 15(1–2), 115–145.
17. Grossmann, S., & Lohse, D. (2002). Prandtl and Rayleigh number dependence of the Reynolds number in turbulent thermal convection. Physical Review E, 66(1), 016305.
18. Guo, Y., Pan, Z., & Ward, M. J. (2005). Touchdown and pull-in voltage behavior of a MEMS device with varying dielectric properties. SIAM Journal on Applied Mathematics, 66(1), 309–338.
19. Guo, Y. (2008). Global solutions of singular parabolic equations arising from electrostatic MEMS. Journal of Differential Equations, 245(3), 809–844.
20. Guo, Y., Zhang, Y., & Zhou, F. (2019). Singular behavior of an electrostatic–elastic membrane system with an external pressure. arXiv preprint arXiv:1902.03707.
21. Hui, K. M. (2011). The existence and dynamic properties of a parabolic nonlocal MEMS equation. Nonlinear Analysis: Theory, Methods and Applications, 74(1), 298–316.


22. Johnston, I. D., McCluskey, D. K., Tan, C. K. L., & Tracey, M. C. (2014). Mechanical characterization of bulk Sylgard 184 for microfluidics and microengineering. Journal of Micromechanics and Microengineering, 24(3), 035017.
23. Kaajakari, V. (2009). Practical MEMS: Design of microsystems, accelerometers, gyroscopes, RF MEMS, optical MEMS, and microfluidic systems. Small Gear Publishing.
24. Kakhki, E. K., Hosseini, S. M., & Tahani, M. (2016). An analytical solution for thermoelastic damping in a micro-beam based on generalized theory of thermoelasticity and modified couple stress theory. Applied Mathematical Modelling, 40(4), 3164–3174.
25. Kvell, U., Puusepp, M., Kaminski, F., Past, J. E., Palmer, K., Grönland, T., et al. (2014). Nanosatellite orbit control using MEMS cold gas thrusters. Proceedings of the Estonian Academy of Sciences, 63(2), 279.
26. Lienstromberg, C. (2015). A free boundary value problem modelling microelectromechanical systems with general permittivity. Nonlinear Analysis: Real World Applications, 25, 190–218.
27. Lienstromberg, C. (2016). On qualitative properties of solutions to microelectromechanical systems with general permittivity. Monatshefte für Mathematik, 179(4), 581–602.
28. Laurençot, P., & Walker, C. (2013). A stationary free boundary problem modeling electrostatic MEMS. Archive for Rational Mechanics and Analysis, 207(1), 139–158.
29. Laurençot, P., & Walker, C. (2014). A fourth-order model for MEMS with clamped boundary conditions. Proceedings of the London Mathematical Society, 109(6), 1435–1464.
30. Laurençot, P., & Walker, C. (2014). A free boundary problem modeling electrostatic MEMS: I. Linear bending effects. Mathematische Annalen, 360(1–2), 307–349.
31. Laurençot, P., & Walker, C. (2014). A free boundary problem modeling electrostatic MEMS: II. Nonlinear bending effects. Mathematical Models and Methods in Applied Sciences, 24(13), 2549–2568.
32. Laurençot, P., & Walker, C. (2016). On a three-dimensional free boundary problem modeling electrostatic MEMS. Interfaces and Free Boundaries, 18(3), 393–411.
33. Laurençot, P., & Walker, C. (2017). A constrained model for MEMS with varying dielectric properties. Journal of Elliptic and Parabolic Equations, 3(1–2), 15–51.
34. Laurençot, P., & Walker, C. (2017). Some singular equations modeling MEMS. Bulletin of the American Mathematical Society, 54(3), 437–479.
35. Nisar, A., Afzulpurkar, N., Mahaisavariya, B., & Tuantranont, A. (2008). MEMS-based micropumps in drug delivery and biomedical applications. Sensors and Actuators B: Chemical, 130(2), 917–942.
36. Pazy, A. (2012). Semigroups of linear operators and applications to partial differential equations (Vol. 44). New York: Springer.
37. Pelesko, J. A., & Bernstein, D. H. (2002). Modeling MEMS and NEMS. Boca Raton, FL: CRC Press.
38. Peng, D., & Robinson, D. B. (1976). A new two-constant equation of state. Industrial and Engineering Chemistry Fundamentals, 15(1), 59–64.
39. Singh, R. N. (2013). Advection diffusion equation models in near-surface geophysical and environmental sciences. The Journal of Indian Geophysical Union, 17, 117–127.
40. Švorčík, V., Králová, J., Rybka, V., Plešek, J., Červená, J., & Hnatowicz, V. (2001). Temperature dependence of the permittivity of polymer composites. Journal of Polymer Science Part B: Polymer Physics, 39(8), 831–834.
41. Tang, W., & Lee, A. (2001). Defense applications of MEMS. MRS Bulletin, 26(4), 318–319.
42. Vila, J., Ginés, P., Pico, J. M., Franjo, C., Jiménez, E., Varela, L. M., et al. (2006). Temperature dependence of the electrical conductivity in EMIM-based ionic liquids: Evidence of Vogel-Tamman-Fulcher behavior. Fluid Phase Equilibria, 242(2), 141–146.
43. Wang, L., Sipe, D., Xu, Y., & Lin, Q. (2008). A MEMS thermal biosensor for metabolic monitoring applications. Journal of Microelectromechanical Systems, 17(2), 318–327.

Multi-fidelity Metamodels Nourished by Reduced Order Models S. Nachar, P.-A. Boucard, D. Néron, U. Nackenhorst and A. Fau

Abstract Engineering simulation provides better designed products by allowing many options to be quickly explored and tested. In that context, the computational time is a strong issue because using high-fidelity direct resolution solvers is not always suitable. Metamodels are commonly considered to explore design options without computing every possible combination of parameters, but if the behavior is nonlinear, a large amount of data is required to build such a metamodel. A possibility is to use additional data sources to generate a multi-fidelity surrogate model by using model reduction. Model reduction techniques constitute one of the tools to bypass the limited calculation budget by seeking the solution of a problem on a reduced-order basis (ROB). The purpose of this study is to present an online method for generating a multi-fidelity metamodel nourished by quantities of interest computed from the bases generated on-the-fly within the LATIN-PGD framework for elasto-viscoplastic problems. Low-fidelity (LF) fields are obtained by stopping the solver before convergence, whereas high-fidelity (HF) information is obtained from converged solutions. In addition, the ability of the solver to reuse information from previously computed PGD bases is exploited.



1 Introduction

Digital simulation tools have enabled the emergence of new product development processes that are shorter and less costly than experimental campaigns. The issue of simulation time is nevertheless a major challenge, because the current desire is to keep the time allocated to product optimization reasonable while being able to take into account nonlinear, multiphysical models over as many configurations as possible. Under these conditions, the direct use of exact solvers is not acceptable for virtual chart generation or optimization, and metamodels are commonly employed to study the impact of different configurations with a reduced number of observations. Metamodels are statistical or functional models based on input-output data obtained through experimental measurements or numerical simulations. The inputs are parameters x chosen in a design space D and the outputs are scalar quantities of interest (QoI) calculated from the complete mechanical space-time fields. Each observed value, representing a point in the design space, requires the use of a mechanical solver and significant computational resources. Generating the metamodel of a quantity of interest requires a large number of observations. One possibility is to use multiple sources of different quality to generate a multi-fidelity metamodel. First, many low-fidelity (and therefore fast) calculations can be performed to obtain a first approximation of the response. With this information, the next points to be calculated are those that maximize the estimated information gain. These points, calculated in an approximate (low-fidelity) and accurate (high-fidelity) manner, make it possible to correct the approximate data calculated previously and to improve the metamodel, either over the entire design space by creating a virtual chart, or only in the areas where the global optimum is most likely to be found. Many developments exist on this subject [4, 6, 16]. Another way to speed up the calculation is to use model reduction to initialize and approximate the fields and therefore the quantities of interest. Typical model reduction methods are the Proper Orthogonal Decomposition (POD) and its extensions, the Reduced-Basis method [11] or the Proper Generalized Decomposition (PGD) [2, 9]. The main advantage of these techniques is that the computation time is directly correlated to the number of modes generated and therefore to the quality of the approximation. The aim of the present work is to couple multi-fidelity metamodel generation techniques with the generation of mechanical fields by a model reduction method. This coupling is carried out on an industrial case to quantify the savings in computation time. The study presented herein very briefly recalls the generation of a multi-fidelity metamodel, in particular the Evofusion method, and then presents the LATIN-PGD solver in the context of two elasto-viscoplastic problems. With these tools, four test campaigns have been launched to quantify the impact of coupling these methods, both for the complete approximation of the metamodel as part of the creation of a virtual chart, and for the determination of the optimum in the design space.


2 Multi-fidelity Kriging in a Nutshell

Let us consider a spatio-temporal mechanical problem defined by input parameters x belonging to a design space D. For each value x ∈ D, the solution of the problem allows the calculation of a given scalar quantity of interest (QoI) called Y. The corresponding virtual chart is the function x ∈ D → Y(x) ∈ R, but it is not directly affordable for computational cost reasons. This function is replaced by a metamodel x ∈ D → Ŷ(x) ∈ R, where Ŷ is constructed from a set of observations on n selected points x_i and the corresponding QoI values Y_i = Y(x_i). Here, the metamodel Ŷ is constructed using a Gaussian process regression, also called kriging. A possible sampling strategy for selecting the n samples x_i in D is Latin Hypercube Sampling (LHS) [12]. Kriging [13] is a statistical regression model that can be presented as an extension of linear regression, Ŷ(x) = f(x)^T β + ε(x), with x the input data, β the weight vector associated with the regression, and f the regression function chosen by the user. The model error ε is modelled by a Gaussian process with zero mean and covariance σ² r(x, x′; θ), with σ² the variance parameter, r a correlation function (or kernel function) chosen by the user and θ the hyperparameters associated with this function. These hyperparameters are also to be determined. These terms make it possible to form a Gaussian process conditioned by the observations:

\begin{pmatrix} \hat{Y}(x) \\ Y \end{pmatrix} \sim \mathcal{N}\left( \begin{pmatrix} f(x)^T \beta \\ F^T \beta \end{pmatrix}, \; \sigma^2 \begin{pmatrix} 1 & r(x)^T \\ r(x) & R \end{pmatrix} \right)    (1)

with F the regression matrix such that F_ij = f_i(x_j), R the correlation matrix between observed data such that R_ij = r(x_i, x_j; θ), and r(x) the correlation vector between observed and predicted data such that r_j(x) = r(x_j, x; θ). β and σ are determined by considering Ŷ as the best linear unbiased predictor. The resulting properties allow one to define the best Gaussian process with regard to the covariance function and the associated hyperparameters, which are obtained by maximizing the likelihood function or by cross-validation. The consideration of multi-fidelity data can be done in several ways: by defining a non-zero variance on the data, by defining the correlation between data [3, 6, 17], or by using corrections of the approximate data. In this work, the method for generating successive multi-fidelity metamodels is called Evofusion [5]; it is presented in Algorithm 1 and illustrated in Fig. 1. It is recalled that one of the main interests of kriging is to exploit the variance of the metamodel as an error estimator of the metamodel, or as an estimator of the gain in information that would result from an additional observation. With this type of estimator, it is possible to use the solver to enrich the metamodel with new points that maximize the gain of information, or maximize the probability of improving the optimum. Thus, in a second step, the enrichment strategy uses both low-fidelity (LF) and high-fidelity (HF) data.
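As an illustration of the conditioning in Eq. (1), the following minimal sketch implements an ordinary-kriging predictor with a Gaussian kernel in Python. It is a hypothetical helper, not the authors' implementation: the regression part is reduced to a constant trend, the hyperparameter theta is assumed fixed, and the predictive variance is simplified.

```python
import numpy as np

def gaussian_kernel(X1, X2, theta):
    """Gaussian (squared-exponential) correlation r(x, x'; theta)."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-theta * d2)

def fit_kriging(X, Y, theta=10.0, nugget=1e-10):
    """Condition a constant-trend Gaussian process on observations (X, Y)."""
    R = gaussian_kernel(X, X, theta) + nugget * np.eye(len(X))
    Rinv = np.linalg.inv(R)
    ones = np.ones(len(X))
    beta = ones @ Rinv @ Y / (ones @ Rinv @ ones)    # generalized least-squares trend
    sigma2 = (Y - beta) @ Rinv @ (Y - beta) / len(X)

    def predict(x):
        r = gaussian_kernel(np.atleast_2d(x), X, theta)[0]
        mean = beta + r @ Rinv @ (Y - beta)           # conditional mean of Eq. (1)
        var = sigma2 * max(1.0 - r @ Rinv @ r, 0.0)   # conditional variance (simplified)
        return mean, var

    return predict
```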


Algorithm 1 Evofusion algorithm
Require: Low-fidelity and high-fidelity results (X_LF, Y_LF), (X_HF, Y_HF)
1: Build a metamodel containing only low-fidelity data: (X_LF, Y_LF) → Ŷ_LF
2: Calculate the differences between the low-fidelity metamodel Ŷ_LF and the high-fidelity data Y_HF at the high-fidelity points X_HF: Y_corr = Y_HF − Ŷ_LF(X_HF)
3: Build an error metamodel: (X_HF, Y_corr) → Ŷ_corr
4: Correct the low-fidelity data with the error metamodel: Y_LF,corr = Y_LF + Ŷ_corr(X_LF)
5: Build the metamodel with the corrected low-fidelity data and the high-fidelity data: (X_LF, Y_LF,corr) ∪ (X_HF, Y_HF) → Ŷ
6: return Metamodel Ŷ built using the Evofusion approach
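A minimal Python sketch of Algorithm 1, assuming a generic `fit` callable such as the `fit_kriging` helper above that returns a predictor of the form `x -> (mean, variance)` (hypothetical names, for illustration only):

```python
import numpy as np

def evofusion(X_lf, Y_lf, X_hf, Y_hf, fit):
    """Evofusion: correct LF data with an error metamodel, then refit (Algorithm 1)."""
    mm_lf = fit(X_lf, Y_lf)                                    # step 1: LF metamodel
    Y_corr = Y_hf - np.array([mm_lf(x)[0] for x in X_hf])      # step 2: LF/HF differences
    mm_err = fit(X_hf, Y_corr)                                 # step 3: error metamodel
    Y_lf_corr = Y_lf + np.array([mm_err(x)[0] for x in X_lf])  # step 4: corrected LF data
    X = np.vstack([X_lf, X_hf])                                # step 5: fused data set
    Y = np.concatenate([Y_lf_corr, Y_hf])
    return fit(X, Y)
```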

Fig. 1 Illustration of the Evofusion approach: (a) observation data (points) with the difference between LF (dashed line) and HF (solid line) data (arrows); (b) creation of the metamodel using the Evofusion approach

Algorithm 2 Generation of a multi-fidelity metamodel
Require: D, Y
1: Drawing in D → X (4 points)
2: Low-fidelity resolution (error η_LF): X → Y_LF
3: while r ≥ r_tol do
4:   Creation of the metamodel Ŷ with the observations (X, Y_LF)
5:   Calculation of the error indicator r on D
6:   Determination of the point x* maximizing the expected information gain on D (MSE for instance)
7:   Low-fidelity resolution at x*
8:   Addition of (x*, Y_LF(x*)) to the observations
9:   if number of LF points > n then
10:     High-fidelity resolution: x* → Y_HF(x*)
11:     Addition of (x*, Y_HF(x*)) to the observations
12:   end if
13: end while
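The enrichment loop of Algorithm 2 can be sketched as follows. This is a schematic outline under assumed interfaces, not the authors' code: `draw`, `solve_lf`, `solve_hf` and `argmax_mse` are hypothetical helpers for the initial sampling, the low- and high-fidelity resolutions, and step 6 respectively.

```python
import numpy as np

def multifidelity_metamodel(draw, solve_lf, solve_hf, fit, argmax_mse, n, r_tol):
    """Adaptive multi-fidelity metamodel generation (schematic view of Algorithm 2)."""
    X = list(draw(4))                          # step 1: initial draw in D
    Y = [solve_lf(x) for x in X]               # step 2: LF resolutions (error eta_LF)
    n_lf = len(X)
    while True:
        mm = fit(np.array(X), np.array(Y))     # step 4: metamodel of the observations
        r, x_star = argmax_mse(mm)             # steps 5-6: indicator and enrichment point
        if r < r_tol:
            return mm
        X.append(x_star); Y.append(solve_lf(x_star))   # steps 7-8: add LF observation
        n_lf += 1
        if n_lf > n:                                   # steps 9-12: add HF observation
            X.append(x_star); Y.append(solve_hf(x_star))
```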

The choice of two parameters of Algorithm 2 strongly influences the metamodel generation time: the number n of points initially generated using only low-fidelity data, and the quality of the low-fidelity data, set by the level η_LF of the error indicator of the LATIN algorithm that will be introduced hereafter.


3 Presentation of the Mechanical Problem and the LATIN Solver

The LATIN algorithm is an iterative solver that produces at each iteration a complete space-time approximation of the sought fields. This approximation is sought in the form of a separated space-time decomposition. For a complete presentation, the interested reader can refer to [9].

3.1 Mechanical Problem

The mechanical problem used for the test campaigns is a standard continuum mechanics problem with a nonlinear behavior law. The reference problem is a quasi-static isothermal evolution problem defined on the time-space domain I × Ω under the small perturbation hypothesis. The structure is subjected to body forces f_d, surface forces F_d on the boundary ∂₂Ω, and prescribed displacements U_d on the boundary ∂₁Ω. The state of the structure is defined by the set of fields s = (ε̇_p, σ, A_k, V_k) with:
• ε = ∇_sym U the kinematically admissible strain field associated with the displacement field U, sum of an elastic part ε_e and an inelastic part ε_p;
• σ = K_e ε_e the stress field, K_e being the Hooke tensor;
• A_k and V_k respectively the primary and dual fields of the internal variables characterizing the behavior of the structure.

3.2 Chaboche's Elasto-Viscoplastic Behavior Law

The behavior law used here is derived from the unified elasto-viscoplastic model of Lemaître and Chaboche [10]. This law models the development of plastic zones that occur when extreme loads exceed the yield surface f, presented in Fig. 2 as an elastic domain in the space of the principal deviatoric stresses. The size and origin of this domain are controlled respectively by the isotropic hardening R and a single linear kinematic hardening X:

f = (\sigma - X)_{eq} - \sigma_0,    (2)

where J_2 = (\sigma - X)_{eq} designates the equivalent Von Mises stress and \sigma_0 = \sigma_y - R the size of the elastic domain. The internal fields p and \alpha are associated with R and X, respectively. Norton-Hoff's law controls the rate of plastic deformation:

\dot{p} = \left\langle \frac{f}{k} \right\rangle_+^N    (3)


Fig. 2 Kinematic hardening

with k, N two scalar coefficients and \langle \cdot \rangle_+ the Macaulay brackets. The state equations are

\sigma = K_e \varepsilon_e,    (4)
X = \frac{2}{3} C \alpha,    (5)
R = R_\infty \big(1 - e^{-bp}\big),    (6)

where C, R_\infty, b are scalar coefficients. A pseudo-dissipation potential F is defined:

F = f + \frac{3\gamma}{4C} \, X : X - \frac{2\gamma C}{3} \, \alpha : \alpha.    (7)

Thus, the evolution equations are

\frac{d}{dt} \begin{pmatrix} \varepsilon_p \\ -\alpha \\ -p \end{pmatrix} = \left\langle \frac{f}{k} \right\rangle_+^N \begin{pmatrix} \frac{3}{2} N \\[4pt] -\frac{3}{2} N + \frac{3\gamma}{2C} X \\[4pt] -1 \end{pmatrix}    (8)

with N the unit normal tensor

N = \frac{3}{2} \, \frac{\sigma_D - X}{(\sigma_D - X)_{eq}}, \quad (N)_{eq} = 1.    (9)
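For concreteness, a minimal numerical sketch of the local constitutive update at a single material point, combining the yield function (2), the Norton-Hoff law (3) and the normality rule (8). The helper functions are hypothetical (tensors are handled as 3×3 NumPy arrays, and the coefficients k, N, C, γ are assumed given); this is an illustration of the equations, not the ROMlab implementation:

```python
import numpy as np

def macaulay(x):
    """Macaulay brackets <x>_+ = max(x, 0)."""
    return max(x, 0.0)

def eq_stress(t):
    """Von Mises equivalent norm sqrt(3/2 t:t) of a deviatoric tensor."""
    return np.sqrt(1.5 * np.tensordot(t, t))

def chaboche_rates(sigma, X, R, sigma_y, k, N_exp, gamma, C):
    """Rates of (eps_p, alpha, p) from Eqs. (2), (3), (8) and (9)."""
    dev = sigma - np.trace(sigma) / 3.0 * np.eye(3)   # deviatoric stress sigma_D
    f = eq_stress(dev - X) - (sigma_y - R)            # yield function, Eq. (2)
    p_dot = macaulay(f / k) ** N_exp                  # Norton-Hoff law, Eq. (3)
    n = 1.5 * (dev - X) / eq_stress(dev - X)          # normal tensor, Eq. (9)
    eps_p_dot = p_dot * 1.5 * n                       # plastic strain rate, Eq. (8)
    alpha_dot = p_dot * (1.5 * n - 1.5 * gamma / C * X)
    return eps_p_dot, alpha_dot, p_dot
```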


3.3 LATIN-PGD Solver

The previous equations are divided into two sub-problems: the global problem A_d, where s respects the equilibrium and state equations, and the local problem Γ, where s respects the nonlinear, but local in time and space, evolution equations. The solution of the problem then lies at the intersection Γ ∩ A_d. It is searched for using the LATIN method, which involves a two-stage iteration scheme solving each set of equations alternately, using search directions between them, until convergence to Γ ∩ A_d is reached. This method has been implemented in particular in the context of elasto-viscoplastic behavior [14]. One particularity of the LATIN-PGD lies in the search for solutions of A_d in the form of a separated representation: the so-called Proper Generalized Decomposition (PGD). After a few iterations, an approximation of the solution is thus obtained and is represented using functions with separated variables. These functions provide a basis for representing the approximation of the solution. The equations allowing the implementation of the solver in elasto-viscoplasticity are presented in [1]. Moreover, the solver can start from existing data and not only from scratch. These data can be derived from previous calculations, such as an existing reduced basis (see Figs. 3 and 4). This strategy is called the multi-parametric strategy and is illustrated in Fig. 5. In the context of this work, the interest of this type of solver is twofold. At each iteration, complete fields are accessible, which allows an approximation of the solution and of the quantities of interest to be determined before convergence. Thus, both low-fidelity and high-fidelity data can be obtained from the same solver: low-fidelity data are based on a truncated basis, whereas high-fidelity data are based on the basis obtained at convergence of the algorithm. This simplifies the implementation of the problem and allows a stopping criterion via an error estimator on the quantities of interest to be easily implemented.

Fig. 3 Classical initialization (starting from scratch): σ_zz fields for the elastic initialization, iteration 6 (LATIN error indicators 10⁻¹ and 6·10⁻²) and iteration 12 (LATIN error indicator 10⁻⁴)

Fig. 4 Initialization using a previously computed PGD basis: σ_zz fields for the elastic initialization plus previous basis, and iteration 4 (LATIN error indicator 10⁻⁴)

Fig. 5 Schematic representation of the LATIN algorithm [15]: (a) classic strategy; (b) multi-parametric strategy

4 Presentation of the Test-Cases

Two academic examples are employed to investigate the advantages of combining multi-fidelity kriging and model reduction. These test-cases are performed using an in-house software called ROMlab. An atypical feature of this solver is its decomposition of fields and operators in the form of multidimensional matrices and its parallel implementation of Einstein summation, which accelerates the computation of mechanical fields on multithreaded architectures. The various simulations were carried out using the computing resources of the CentraleSupélec Mesocentre and of the École normale supérieure Paris-Saclay, with the support of the CNRS and of the Île-de-France region.


4.1 Test-Case 1: Plate with Inclusion

The first test-case is a quarter plate with an oblong inclusion, as shown in Fig. 6 and Table 1. This plate is subjected to symmetry conditions on three edges and to traction on the upper surface. The material properties chosen for the structure and the inclusion are those of a 316 steel at 600 K (isothermal), with the exception of the Young's modulus of the inclusion and the coefficient n_plate of Norton's law associated with the plate material. The two parameters of the model are therefore the ratio of Young's moduli, denoted by α, and n_plate, as detailed in Table 2.

Fig. 6 Plate with inclusion: prescribed displacement U_d on top, plate (material 1) and inclusion (material 2)

Table 1 First test-case characteristics
Objective function: Y(x) = max_{I×Ω} σ_Rankine
Parameters: x = (log_10(α), n) ∈ [−1, 1] × [3, 7]
Type of spatial elements: triangular, linear (8000 DOFs)
Time discretization: 41 time steps
Boundary conditions: symmetry conditions at the bottom and on two side faces; U = U_d sin(ωt) e_z on top


Table 2 Material coefficients (the symbols in bold are the design parameters)
Part | E (GPa) | σ_y (MPa) | n | K (MPa·s^{1/n})
Plate | 137.6 | 8 | n_plate | 150
Inclusion | α × 137.6 | 8 | 5 | 150

4.2 Test-Case 2: Damping Part

The second test-case is a generic damping part inspired from Safran Aircraft Engines. It is clamped on the upper and lateral surfaces (blue zone in Fig. 7) and is loaded on both sides of its nose (green zone and the opposite zone, not visible in Fig. 7). The loading on both sides has the same intensity, but the direction on each side is controlled by two angles per zone (Fig. 8). The design space therefore has four dimensions in this test-case, (θ1, φ1, θ2, φ2) (see Fig. 8 and Table 3).

Fig. 7 Turbine blade test-case

Fig. 8 Angular loading control on the visible green surface (1): F1(t) = F_max cos(ωt), with the direction given by the angles θ1 and φ1 in the (x, y, z) frame

Table 3 Characteristics of the damping part problem
Objective function: Y(x) = max_{I×Ω} σ_VonMises
Parameters: x = (θ1, φ1, θ2, φ2) ∈ [0, 90]⁴
Type of spatial elements: triangular, linear (30k DOFs)
Time discretization: 41 time steps
Boundary conditions: clamped on the blue side faces; F = F_max sin(ωt) e_i on each green side

5 Implementation of the Coupling Strategy

5.1 Correlation Between the Error on the Quantity of Interest and the LATIN Error Indicator

As explained earlier, the LATIN-PGD solver can reuse the previously generated basis when calculating a solution for a new value of the parameters in the design space. Thus, it is frequent that no new functions are added to the existing basis, even if the calculation is performed for a new set of parameters. This is due to the update step, which modifies the time functions but not the functions of the space variable. The existing "space modes" associated with one set of parameters are then

Fig. 9 Comparison between the LATIN error indicator and the error in the QoI: (a) maximum Von Mises stress, error estimator 1 − (σ_VM / σ_VM,exact); (b) maximum rate of plastic deformation, error estimator 1 − (ṗ / ṗ_exact). The curves give the LATIN error indicator below which 50%, 95% and 100% of all test-cases stay under a given QoI error

sufficient to obtain the solution associated with a new set of parameters. It is therefore not relevant to consider the number of modes as a stopping parameter for the low-fidelity solver, as in [8]; it is preferred to use the LATIN error indicator. This indicator is a distance between two elements s belonging successively to Γ and A_d. To verify the pertinence of using such an indicator, a test campaign was conducted to determine the interactions between an error indicator on the quantities of interest,

η_QoI = |f − f_exact| / |f_exact|,

and the solver error indicator η_LATIN, where f_exact is the reference value of the quantity of interest f at convergence. The study was conducted on the first test-case (Fig. 6) by performing an LHS draw of 15 × 15 points. At each iteration, the LATIN error indicator and two quantities of interest are calculated: the maximum of the equivalent Von Mises stress and the maximum rate of plastic deformation. Figure 9a, b gives the solver error indicator necessary for a given share of the calculated points (50%, 95% or 100%) to be below a certain error level on the quantity of interest. For example, to have all the points calculated with an error of less than 10% on the quantity of interest, the calculation can be stopped when the solver indicator reaches 10⁻². The observation is that η_LATIN ≈ 10⁻² seems to be a good stopping criterion. The following test campaigns will verify this value in the case of the complete generation of a metamodel.

5.2 Method Used for the Test Campaigns

The metamodel generation strategy corresponds to the one presented in Algorithm 2. Two situations will be tested. In the first study, a metamodel of the QoI over the whole design space is generated. The functional optimized to maximize the


expected information gain is the mean squared error (MSE). A reference metamodel has been calculated, and the generation of the subsequent ones is compared to it on a very fine grid. Generation is stopped when the concordance correlation r_ccc between the two exceeds 0.99:

r_{ccc} = \frac{2 \sigma_{ex,app}}{\sigma_{ex}^2 + \sigma_{app}^2 + (m_{app} - m_{ex})^2}    (10)

where σ_{ex,app} denotes the covariance between the exact and the approximated values, σ² the variances and m the associated means. The second study corresponds to the search for the global optimum of the quantity of interest in the design space. The EGO method [7] is used, maximizing the expected information gain through the Expected Improvement (EI):

EI(x) = \big(y_{min} - \hat{Y}(x)\big) \, \Phi\!\left( \frac{y_{min} - \hat{Y}(x)}{\sigma(x)} \right) + \sigma(x) \, \phi\!\left( \frac{y_{min} - \hat{Y}(x)}{\sigma(x)} \right)    (11)

with y_min the minimum of the observed responses, φ the probability density function and Φ the cumulative distribution function of the standard normal distribution. The chosen stopping criterion is max(EI) < 10⁻³, obtained 5 times consecutively. Different values of the parameters η_LF and n will be tested to determine which are the most efficient in terms of computation time. To also study the impact of the random initial draw on the metamodel generation, the configurations are tested with 20 different draws. Thus, the mean and the variance of the computation time can be extracted over all the draws.
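A minimal Python sketch of the two criteria (10) and (11), assuming a kriging predictor that returns the mean and variance at a point (a hypothetical interface, consistent with the sketches above):

```python
import numpy as np
from math import erf, exp, pi, sqrt

def norm_pdf(z):
    return exp(-0.5 * z * z) / sqrt(2.0 * pi)

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def expected_improvement(predict, x, y_min):
    """Expected Improvement, Eq. (11), from a predictor returning (mean, variance)."""
    mean, var = predict(x)
    s = sqrt(max(var, 1e-16))
    z = (y_min - mean) / s
    return (y_min - mean) * norm_cdf(z) + s * norm_pdf(z)

def concordance_cc(y_exact, y_approx):
    """Concordance correlation coefficient r_ccc, Eq. (10)."""
    y_exact, y_approx = np.asarray(y_exact), np.asarray(y_approx)
    cov = np.mean((y_exact - y_exact.mean()) * (y_approx - y_approx.mean()))
    return 2.0 * cov / (y_exact.var() + y_approx.var()
                        + (y_approx.mean() - y_exact.mean()) ** 2)
```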

6 Results of the Test Campaign

6.1 Full Generation of the Metamodel - Test-Case 1

The first campaign focuses on the complete generation of the metamodel for the plate-with-inclusion test-case. It is interesting to compare the values of the quantity of interest obtained for a solver stopping criterion η_LATIN = 10⁻² (Fig. 10a) and at convergence (Fig. 10b). It can be seen in these figures that the differences between the two responses are small. The test campaign gives the results presented in Table 4. The best strategy obtained is the one with a low-fidelity stopping criterion η_LATIN = 10⁻² and the calculation of 2 × 12 low-fidelity points. In this case, a gain of 5.2× from the use of multi-fidelity was obtained. The use of the multi-parametric solver strategy allowed an additional gain of 1.5× to be reached, leading to a gain of 7.8× between the best strategy and the case of a kriging with converged data from a conventional solver. Table 5 shows the normalized variance for each configuration presented.


Fig. 10 Quantity of interest max_{I×Ω} σ_Rankine expressed on the design space: (a) η_LATIN = 10⁻² (≈33 s per calculation); (b) convergence (≈143 s per calculation)

Table 4 Average computation times to generate the metamodel for test-case 1 (if only HF data are used: 24′)
Nb of LF points | η_LATIN = 10⁻³ | 10⁻² | 10⁻¹
4/dim | 27′ | 24′ | 1 h 09′
8/dim | 15′ | 8′57″ | 46′
12/dim | 13′ | 5′30″ | 45′

Table 5 Normalized variance of the computation times to generate a complete metamodel (bi-material test-case)
Nb of LF points | η_LATIN = 10⁻³ | 10⁻² | 10⁻¹
4/dim | 26% | 27% | 13%
8/dim | 30% | 40% | 16%
12/dim | 22% | 32% | 16%

There is a great disparity in computation times depending on the initial draw and more particularly on the low-fidelity data. Nevertheless, the time saving is such that it remains interesting to keep the choice of η_LATIN = 10⁻² and 12 low-fidelity points per dimension of the design space.


Table 6 Average computation times to generate the metamodel for test case 2 (if only HF data are used: 3 h 10)

Nb of LF points | LATIN LF criterion 10⁻³ | 10⁻² | 10⁻¹
4/dim  | 3 h 17 | 2 h 55 | 2 h 29
8/dim  | 1 h 45 | 1 h 35 | 2 h 06
12/dim | 1 h 14 | 34′ | 36′

6.2 Full Generation of the Metamodel—Test-Case 2

The second campaign concerns the complete generation of the metamodel for the test-case of the damping part. The test campaign gives the results presented in Table 6. The best strategy is the one with a low-fidelity stopping criterion η_LATIN = 10⁻² and the calculation of 4 × 12 low-fidelity points. In this case, a gain of 11.8× from the use of multi-fidelity was obtained. The use of the multi-parametric solver strategy provided a further gain of 3.2×, so that a gain of 37.8× can be considered between the best strategy and the case of a kriging with converged data from a conventional solver. We can therefore see that the complete generation of the metamodel by the coupling strategy allows gains of 7.8× on the bi-material test-case 1 and 37.8× on test-case 2 compared to a generation strategy with a single data source and a solver without a multi-parametric strategy. The best choice of parameters for the strategy is a low-fidelity error η_LATIN = 10⁻² and 12 low-fidelity points per dimension.

6.3 EGO Method for Finding the Global Optimum—Test-Case 1

The purpose of the next two sections is to find the global optimum using the EGO method. The use of the Evofusion method is problematic here because the corrected low-fidelity points are treated in the same way as the high-fidelity points, and therefore the variance associated with these points is considered to be zero. The following test campaign characterizes the test-case of the plate with inclusion, with a minimum located at $(\log_{10}(\alpha), n) = (-0.08, 5.1)$. The minimum is considered found when it is located in a square of dimensionless size 0.02 centered on the minimum (central rectangle in Fig. 11). This case is simple insofar as the gradients are consistent around the minimum. In addition, Fig. 10a shows that a resolution approximated with η_LATIN = 10⁻² already gives a good indication of the minimum area. The test campaign gives the results presented below. All 20 test-cases reach the global minimum.


Fig. 11 Area validated for the global minimum (contours of the quantity of interest over n and log₁₀(α))

Table 7 Average computation time to determine the optimum for test case 1 (if only HF data are used: 55′)

Nb of LF points | LATIN LF criterion 10⁻³ | 10⁻² | 10⁻¹
6/dim  | 29′ | 26′ | 28′
8/dim  | 29′ | 26′ | 27′
10/dim | 33′ | 21′ | 26′
12/dim | 41′ | 30′ | 29′

Table 8 Normalized variance of computation times to obtain the optimum for the bi-material test case

Nb of LF points | LATIN LF criterion 10⁻³ | 10⁻² | 10⁻¹
6/dim  | 21% | 36% | 20%
8/dim  | 14% | 27% | 20%
10/dim | 17% | 31% | 18%
12/dim | 9% | 12% | 14%

In terms of computation time, the best strategy obtained is for a low-fidelity stopping criterion η_LATIN = 10⁻² and the calculation of 2 × 10 or 2 × 12 low-fidelity points (see Tables 7 and 8). In the context of this choice, the gain brought by multi-fidelity is 15×, and coupling kriging with reduced order models offers a gain of 23×.


6.4 EGO Method for Finding the Global Optimum—Test-Case 2

The search for the global optimum in test-case 2 is much more complex. Here, the aim is to find the most unfavourable loading case in terms of the maximum von Mises stress, in a vast search space $(\theta_1, \phi_1, \theta_2, \phi_2) \in [0, 90]^4$. The maximum of the quantity of interest is 118 MPa; 10% of the design space has a value greater than 110 MPa, spread over multiple disconnected areas, and 1% has a value greater than 112 MPa. The global optimum objective is considered satisfied when the calculated maximum is greater than 113 MPa. The test campaign gives the results presented below. Table 9 shows the percentage of test-cases that converged to the global maximum area. In terms of computation time, the best strategy is the one with a low-fidelity stopping criterion η_LATIN = 10⁻¹ and the calculation of 4 × 6 low-fidelity points (see Tables 10 and 11). Within the framework of this choice, the use of the multi-parametric solver strategy allowed a gain of 2.5× to be obtained. The gain brought by multi-fidelity is 19×, and coupling kriging with reduced order models offers a gain of 47.5×. We can therefore see that the optimization of the metamodel by the coupling strategy and the EGO method allows even greater gains than in the case of generating a metamodel over the entire design space, with gains of 23× on the bi-material test-case 1 and 47.5× on test-case 2 compared to a generation strategy with a single data source and a solver without a multi-parametric strategy. The best choice of parameters for the strategy differs between the two cases. An adaptive strategy for the choice of the stopping criterion on low-fidelity data would be a solution to overcome this problem.

Table 9 Percentage of cases where the global optimum area was found

Nb of LF points | LATIN LF criterion 10⁻³ | 10⁻² | 10⁻¹
6/dim  | 56% | 83% | 75%
8/dim  | 58% | 84% | 74%
10/dim | 63% | 87% | 74%
12/dim | 59% | 87% | 77%

Table 10 Average computation time to determine the optimum for test case 2 (if only HF data are used: 1 h 01)

Nb of LF points | LATIN LF criterion 10⁻³ | 10⁻² | 10⁻¹
6/dim  | 27′40″ | 9′08″ | 3′14″
8/dim  | 36′02″ | 10′09″ | 3′23″
10/dim | 42′38″ | 10′27″ | 3′26″
12/dim | 43′22″ | 11′14″ | 3′38″


Table 11 Normalized variance of computation times to obtain the optimum for the test case of the damping part

Nb of LF points | LATIN LF criterion 10⁻³ | 10⁻² | 10⁻¹
6/dim  | 47% | 87% | 14%
8/dim  | 72% | 73% | 28%
10/dim | 106% | 78% | 30%
12/dim | 107% | 91% | 77%

7 Conclusion

This chapter presented advances in coupling multi-fidelity kriging and reduced order modeling for the analysis and optimization of mechanical parts. The Evofusion two-level kriging method was used to generate a metamodel of quantities of interest obtained from two sources of distinct quality. The remarkable point here is the use of low-fidelity and high-fidelity data from the same solver, which generates an approximation of the fields in PGD form, a model reduction method producing a separated-variable approximation of a field on-the-fly. Two test-cases were used, each with its own specificities. For the generation of an accurate metamodel over the entire design space, the coupling strategy allows gains of 7.8× on the bi-material test-case 1 and 37.8× on test-case 2 compared to a generation strategy with a single data source and a solver without a multi-parametric strategy. The best choice of parameters for the strategy corresponds to a low-fidelity error η_LATIN = 10⁻², corresponding to an error of about 5% on the quantities of interest, and 12 low-fidelity points per dimension of the design space. For the generation of a metamodel for optimization via the EGO method, the coupling strategy allows even greater gains, with 23× on the bi-material test-case 1 and 47.5× on test-case 2, which are very promising results for this type of approach.

References

1. Bhattacharyya, M., Fau, A., Nackenhorst, U., Néron, D., & Ladevèze, P. (2017). A LATIN-based model reduction approach for the simulation of cycling damage. Computational Mechanics, 1–19. https://doi.org/10.1007/s00466-017-1523-z.
2. Chinesta, F., Keunings, R., & Leygue, A. (2014). The proper generalized decomposition for advanced numerical simulations. In SpringerBriefs in applied sciences and technology. Cham: Springer International Publishing.
3. Courrier, N., Boucard, P. A., & Soulier, B. (2016). Variable-fidelity modeling of structural analysis of assemblies. Journal of Global Optimization, 64(3), 577–613. https://doi.org/10.1007/s10898-015-0345-9.
4. Forrester, A. I., Bressloff, N. W., & Keane, A. J. (2006). Optimization using surrogate models and partially converged computational fluid dynamics simulations. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 462(2071), 2177–2204. https://doi.org/10.1098/rspa.2006.1679.
5. Forrester, A. I. J., Keane, A. J., & Bressloff, N. W. (2006). Design and analysis of "noisy" computer experiments. AIAA Journal, 44(10), 2331–2339.
6. Han, Z., Zimmerman, R., & Görtz, S. (2012). Alternative cokriging method for variable fidelity surrogate modeling. AIAA Journal, 50(5), 1205–1210. https://doi.org/10.2514/1.J051243.
7. Jones, D. R. (2001). A taxonomy of global optimization methods based on response surfaces. Journal of Global Optimization, 21(4), 345–383. https://doi.org/10.1023/A:1012771025575.
8. Kramer, B., Marques, A. N., Peherstorfer, B., Villa, U., & Willcox, K. (2019). Multifidelity probability estimation via fusion of estimators. Journal of Computational Physics, 392, 385–402. http://www.sciencedirect.com/science/article/pii/S0021999119303249.
9. Ladevèze, P. (1999). Nonlinear computational structural mechanics: New approaches and non-incremental methods of calculation. In Mechanical engineering series. Springer.
10. Lemaitre, J., & Chaboche, J. L. (1994). Mechanics of solid materials. Cambridge University Press.
11. Maday, Y., & Ronquist, E. (2004). The reduced basis element method: Application to a thermal fin problem. SIAM Journal on Scientific Computing, 26(1), 240–258. https://doi.org/10.1137/S1064827502419932.
12. McKay, M. D., Beckman, R. J., & Conover, W. J. (2000). A comparison of three methods for selecting values of input variables in the analysis of output from a computer code. Technometrics, 42(1), 55–61.
13. Rasmussen, C. E., & Williams, C. K. (2004). Gaussian processes in machine learning. Lecture Notes in Computer Science, 3176, 63–71.
14. Relun, N., Néron, D., & Boucard, P. A. (2013). A model reduction technique based on the PGD for elastic-viscoplastic computational analysis. Computational Mechanics, 51(1), 83–92. https://doi.org/10.1007/s00466-012-0706-x.
15. Vitse, M. (2016). Model-order reduction for the parametric analysis of damage in reinforced concrete structures (Ph.D. thesis). Université Paris-Saclay.
16. Zimmerman, D. L., & Holland, D. M. (2005). Complementary co-kriging: Spatial prediction using data combined from several environmental monitoring networks. Environmetrics, 16, 219–234.
17. Zimmermann, R., & Han, Z. H. (2010). Simplified cross-correlation estimation for multi-fidelity surrogate cokriging models. Advances and Applications in Mathematical Sciences, 7(2), 181–201.

Application of Enhanced Peridynamic Correspondence Formulation for Three-Dimensional Simulations at Large Strains

P. Hartmann, C. Weißenfels and P. Wriggers

Abstract Within the theory of Peridynamics, standard continuum mechanical material models can be applied using the so-called correspondence formulation. However, the correspondence formulation is susceptible to instabilities in the resulting displacement field, which makes the method inapplicable for simulations at large strains. Hence, a suitable numerical approach to eliminate this drawback is required. Besides a general introduction to Peridynamics, different possibilities to prevent the arising instabilities are presented in this chapter. One such approach comes without the necessity of additional stabilisation parameters and is based on the subdivision of the non-local interaction region of interest. It is further denoted as the enhanced peridynamic correspondence formulation. In the numerical examples it is demonstrated that for this formulation the instabilities in the displacement field disappear for three-dimensional examples at large strains. In addition, previously unknown limitations of the enhanced peridynamic correspondence formulation are shown within the numerical examples. These are slight, non-physical deviations in the deformation field and, in the case of torsion-dominated problems, a non-physical representation of the stress field.

1 Introduction

Up to the present day, it is common practice to solve the local partial differential equations for the conservation of linear momentum in solid mechanics using the Finite Element Method (FEM). A distinct framework was introduced in [1], where, motivated by molecular dynamics, non-local pairwise interaction forces between particles are considered; this founded the Lagrangian-type framework of Peridynamics. In [2] Peridynamics is considered as an upscaling of molecular dynamics, and an implementation within a molecular dynamics code is shown in [3].

P. Hartmann (B) · C. Weißenfels · P. Wriggers Institute of Continuum Mechanics, Leibniz Universität Hannover, Appelstr. 11, 30167 Hannover, Germany e-mail: [email protected] © Springer Nature Switzerland AG 2020 P. Wriggers et al. (eds.), Virtual Design and Validation, Lecture Notes in Applied and Computational Mechanics 93, https://doi.org/10.1007/978-3-030-38156-1_5


The main benefit of Peridynamics is that the governing equations are integro-differential instead of partial differential equations. Thus, a meshfree discretisation is commonly applied, such that Peridynamics falls within the range of meshfree methods. The main advantage of meshfree discretisations is the increased flexibility of the underlying method, such that geometrical complexities can be handled without the problem of excessive mesh distortion degrading the solution quality. Since the model is free of spatial derivatives, it inherently offers the possibility to include fracture, as shown in [4–7], among others. A comprehensive overview of the peridynamic framework and its applications is found in [8] as well as in the books [9, 10]. A current problem in Peridynamics is the proper definition of material models at finite strains. One such example is the consideration of plasticity, which is, up to now, only possible under the assumption of small strains for a pure peridynamic material model, cf. [11]. To overcome this problem, the so-called correspondence formulation can be applied, allowing the use of arbitrary classical continuum mechanical models. Within this chapter an enhanced version of the correspondence formulation is considered and its applicability for three-dimensional simulations at large strains is investigated in terms of a simple large strain elasticity model.

2 General Framework

Before introducing the correspondence formulation, the general concept of Peridynamics is presented first. Due to the non-locality of the framework, particle interactions within a specific range have to be considered. In Fig. 1 the domain of influence, further denoted as the family $H$, of a master particle $I$ is depicted. It includes all neighbouring particles $J$ whose distance is less than or equal to the horizon size $\delta$. Thus, the horizon size defines the strength of the non-local interactions.

Fig. 1 Illustration of neighbouring particles (grey) of master particle I (red) within its family H (light green)


The larger the horizon, the more delocalised the model becomes. Since the original framework of Peridynamics is of the Lagrangian type, particle connectivities do not change during the computation, but the individual families deform. In general, deformations of bonds $\xi = X' - X$ are considered, which are defined as the vectors between an initial point $X$ and all points $X'$ within its family $H_X$. With respect to the bond deformation, a distinction is made between two groups, bond-based and state-based Peridynamics. In bond-based Peridynamics the force within a bond depends only on its own deformation, whereas in state-based models it depends on the collective deformation within the family. The advantage of bond-based Peridynamics lies in the easy computation of bond forces, which are always parallel to the associated current bond vector, but such models are generally restricted to Poisson's ratios of 0.25. An elastic bond-based model and its extensions to viscoelasticity as well as plasticity were initially shown by [12]. To overcome the constraints of bond-based models, state-based models are applied. As mentioned previously, the related equation of motion

$$\rho_0(X)\,\ddot{u}(X, t) = \int_{H_X} \left( T[X, t]\langle X' - X \rangle - T[X', t]\langle X - X' \rangle \right) \mathrm{d}V_{X'} + \rho_0(X)\,b(X, t) \qquad (1)$$

is an integro-differential instead of a partial differential equation, in which the integral term replaces the divergence of stresses of classical continuum mechanics and $\rho_0(X)$ represents the initial density. It contains the so-called force vector state $T[X, t]$, which must be determined by a constitutive model. In state-based Peridynamics a further distinction is made between ordinary and non-ordinary models. In ordinary models the individual bond forces within a family are parallel to the current bond vector, whereas in non-ordinary models the direction of the force vector can differ from the bond direction. In general, a state $A\langle \xi \rangle = A\langle X' - X \rangle$, respectively $\mathbf{A}\langle \xi \rangle = \mathbf{A}\langle X' - X \rangle$, is the generalisation of a second order tensor, mapping a bond vector $\xi \in H_X$ to a scalar or a vector. The related names of these mappings are scalar state and vector state. Thus, the force vector state maps bonds onto bond force densities. Other important states are the reference position vector state $X(X, t)\langle X' - X \rangle = X' - X = \xi$ and the deformation state $Y(X, t)\langle X' - X \rangle = y(X', t) - y(X, t) = x' - x = \zeta$. The latter returns the deformed bond vector and the former the original bond vector. Furthermore, the tensor product of two states is defined by

$$A * B = \int_{H_X} w\langle \xi \rangle\, A\langle \xi \rangle \otimes B\langle \xi \rangle\, \mathrm{d}V_{\xi}, \qquad (2)$$

involving the scalar state $w\langle \xi \rangle$, which represents a weighting influence function. By definition it has to be zero outside the family and it depends only on the magnitude of $\xi$. A special tensor is the so-called shape tensor $K$, resulting from the tensor product of the reference position state with itself,

$$K = X * X = \int_{H_X} w\langle \xi \rangle\, X\langle \xi \rangle \otimes X\langle \xi \rangle\, \mathrm{d}V_{\xi} = \int_{H_X} w\langle \xi \rangle\, \xi \otimes \xi\, \mathrm{d}V_{\xi}. \qquad (3)$$

Another state operation is the reduction of a vector state to a second order tensor, utilising the reference position state and the inverse shape tensor as follows:

$$\mathcal{R}\{A\} = (A * X)\,K^{-1}. \qquad (4)$$

Besides the tensor product, the peridynamic dot product between two states is defined by

$$A \bullet B = \int_{H_X} A\langle \xi \rangle \cdot B\langle \xi \rangle\, \mathrm{d}V_{\xi}. \qquad (5)$$

In Table 1 the analogy between state-based Peridynamics and classical continuum mechanics is depicted. The deformation state replaces the classical deformation gradient as deformation measure, and the force state replaces the first Piola-Kirchhoff stress, which is conjugated to the deformation gradient. Thus, the stress power within the peridynamic framework is computed by the peridynamic dot product of the force state and the rate of the deformation state. As mentioned before, the integro-differential equation (1) takes the place of the partial differential equation of classical continuum mechanics for the equation of motion. To preserve angular momentum in a peridynamic formulation, the cross product of the force state and the deformation state has to vanish within the family. Ordinary models automatically fulfil this requirement due to equal force and bond vector directions. In contrast, for each non-ordinary model the conservation of angular momentum has to be proven separately. Moreover, as proven in [12], the equation of motion (1) satisfies the balance of linear momentum independently of the definition of the force vector state. Thus, for an elastic model the energy is preserved. Now, similar to classical continuum mechanics, a peridynamic material model is defined by the differentiation of an associated free energy function with respect to the deformation measure. Hence, the force vector state is defined as the Fréchet derivative of the peridynamic scalar strain energy density function $\Psi$ with respect to the deformation vector state by

$$T[X, t]\langle X' - X \rangle = \Psi_{Y}(Y)\langle X' - X \rangle. \qquad (6)$$

Table 1 Similarities of state-based Peridynamics (PD) and classical continuum mechanics (CCM) with respect to crucial physical quantities, cf. [13]

Quantity | PD | CCM
Deformation measure | $Y$ | $F$
Conjugated force | $T$ | $P$
Stress power | $T \bullet \dot{Y}$ | $P : \dot{F}$
Linear momentum | $\rho_0 \ddot{u} = \int_{H_X} \left( T[X,t]\langle \xi \rangle - T[X',t]\langle -\xi \rangle \right) \mathrm{d}V_{X'}$ | $\rho_0 \ddot{u} = \mathrm{Div}\, P$
Angular momentum | $\int_{H_X} T\langle \xi \rangle \times Y\langle \xi \rangle\, \mathrm{d}V_{X'} = 0$ | $P F^{\mathsf{T}} = F P^{\mathsf{T}}$


3 Correspondence Formulation

In this work the correspondence formulation, belonging to the class of non-ordinary state-based models introduced in [12], is applied. It is based on considering the free energy functions of a peridynamic model and a related continuum mechanical model and postulating their correspondence. With respect to the respective kinematics the resulting stress powers are computed, cf. Table 1, and requiring their equality leads to the definition of the force vector state. A likewise derivation using the principle of virtual work, cf. [10], leads to the same result of

$$T\langle \xi \rangle = w\langle \xi \rangle\, P(\bar{F})\, K^{-1}\, \xi. \qquad (7)$$

In Eq. (7) the non-local deformation gradient $\bar{F}$ is applied for the computation of the first Piola-Kirchhoff stresses utilising a classical continuum mechanical model. The non-local deformation gradient is defined as the reduction of the deformation state. Thus, it yields

$$\bar{F} = \mathcal{R}\{Y\} = (Y * X)\,K^{-1}. \qquad (8)$$

It has to be mentioned that for a vanishing horizon, i.e. when the non-local model becomes a local model, the non-local deformation gradient converges to its local counterpart:

$$\lim_{\delta \to 0} \bar{F} = F. \qquad (9)$$

The major benefit of the correspondence formulation is that the constitutive behaviour is completely determined by the stress computation utilising the non-local deformation gradient. Therefore, an arbitrary material model of classical continuum mechanics is applicable within the correspondence formulation. Furthermore, objectivity and conservation of angular momentum have been proven in [12].

3.1 Meshfree Discretisation

In a next step the non-local equation of motion (1) is discretised in space for its numerical solution. As shown in [14–16] it is possible to solve the peridynamic equation of motion in a weak form similar to a classical Finite Element framework. However, it is desirable to perform the classical meshfree discretisation of Peridynamics, as described by [4, 17], among others, to retain the flexibility of the method. Thus, the strong form (1) is solved, leading to a collocation type approach:




$$\rho_{0I}\,\ddot{u}_I = \sum_{J \in H_{X_I}} \left( T_I\langle \xi_{IJ} \rangle - T_J\langle \xi_{JI} \rangle \right) V_J + \rho_{0I}\, b_I = \sum_{J \in H_{X_I}} \left( w\langle \xi_{IJ} \rangle\, P_I K_I^{-1} \xi_{IJ} - w\langle \xi_{JI} \rangle\, P_J K_J^{-1} \xi_{JI} \right) V_J + \rho_{0I}\, b_I. \qquad (10)$$

Within the discretised version, the discretised shape tensor and non-local deformation gradient

$$K_I = \sum_{J \in H_{X_I}} w\langle \xi_{IJ} \rangle\, \xi_{IJ} \otimes \xi_{IJ}\, V_J, \qquad \bar{F}_I = \left( \sum_{J \in H_{X_I}} w\langle \xi_{IJ} \rangle\, \zeta_{IJ} \otimes \xi_{IJ}\, V_J \right) K_I^{-1} \qquad (11)$$

are used. On the basis of the non-locality of the model and of the discretisation, two different convergences have to be distinguished. The first is the convergence to a local model for $\delta \to 0$; the second is the spatial convergence $\Delta x \to 0$, where $\Delta x$ is the particle distance. A generally improved one-point quadrature in two-dimensional space is shown by [17], where an analytically-based approach to determine the exact intersection area between the families and the containing integration points is presented. Furthermore, an extension to a numerically-based three-dimensional approach is considered in [18]. Besides an improved computation of intersection volumes, the convergence of bond-based peridynamic models with respect to the choice of influence function is examined there. Similar convergence studies for state-based Peridynamics are performed by [19].
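As a concrete illustration of the discretised quantities of Eq. (11), the following sketch (not taken from the original work) evaluates the shape tensor and the non-local deformation gradient for one master particle; the cubic influence function is an assumption:

```python
# Minimal sketch: discrete shape tensor K_I and non-local deformation
# gradient F_bar_I of Eq. (11) for master particle I. X and x hold the
# reference and current particle positions, V the particle volumes.
import numpy as np

def influence(xi, delta):
    # assumed cubic influence function, zero outside the horizon
    r = np.linalg.norm(xi)
    return (1.0 - r / delta) ** 3 if r <= delta else 0.0

def nonlocal_deformation_gradient(I, X, x, family, V, delta):
    K = np.zeros((3, 3))        # shape tensor, Eq. (3)
    B = np.zeros((3, 3))        # moment of the deformation state, Y * X
    for J in family:
        xi = X[J] - X[I]        # reference bond
        zeta = x[J] - x[I]      # deformed bond
        w = influence(xi, delta)
        K += w * np.outer(xi, xi) * V[J]
        B += w * np.outer(zeta, xi) * V[J]
    K_inv = np.linalg.inv(K)
    return B @ K_inv, K_inv     # F_bar_I, and K_I^{-1} reused in Eq. (7)
```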

3.1.1 Kinematic Boundaries

The discretised equation of motion (10) lacks the Kronecker-delta property, and kinematic boundary conditions are not directly applicable. The reason is that the initial particle positions are defined as centroids of volume elements, such that the outer particles do not coincide with the surface of the body. Therefore, similar to an approach introduced in the framework of SPH by [20], ghost layers of particles are used. For kinematic boundaries the number of layers is chosen proportional to the horizon size: for a horizon of size $\delta = m\Delta x$, $m$ layers are applied.

3.2 Time Integration

The meshfree discretised equation of motion (10) is a second order ordinary differential equation in time and has to be solved numerically. Therefore, a suitable time stepping scheme has to be applied. So far, the state of the art is to solve the problem within an explicit integration scheme. The choice of an explicit integration scheme is motivated by molecular dynamics simulations. In molecular dynamics it is the state of the art to apply a Verlet integration scheme, as first introduced in [21], or a predictor-corrector


algorithm, namely the Gear algorithm, as described in [22]. In the present framework the Velocity-Verlet scheme has been chosen, incorporating the position and velocity updates

$$x(t + \Delta t) = x(t) + v(t)\,\Delta t + \tfrac{1}{2}\, a(t)\,\Delta t^2, \qquad v(t + \Delta t) = v(t) + \frac{a(t) + a(t + \Delta t)}{2}\,\Delta t. \qquad (12)$$

Locally the scheme has a fourth order error in position and a second order error in velocity. At the global level, the errors of both position and velocity are of second order. As is commonly known, an explicit time integration is only conditionally stable. A detailed investigation of different stability criteria for classical peridynamic models is done by [23]. In this work the CFL criterion, cf. [24],

$$\Delta t_{crit} = \frac{h_l}{c_{ws}}, \qquad (13)$$

where $h_l$ is the characteristic length and $c_{ws}$ the wave speed, defined with the compression modulus $K$ by

$$c_{ws} = \sqrt{\frac{K}{\rho}}, \qquad (14)$$

is applied. Although not uniquely defined, referring to [10], the particle spacing is assumed to be the characteristic length.
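As a minimal illustration of Eqs. (12)-(14), one explicit step with a CFL-limited time increment could look as follows; the acceleration callback (the assembled right-hand side of Eq. (10) divided by the density) and the safety factor are assumptions of this sketch:

```python
# Minimal sketch: CFL time step of Eqs. (13)-(14) and one
# Velocity-Verlet update of Eq. (12).
import numpy as np

def critical_time_step(h, K_bulk, rho, safety=0.9):
    """Delta t = safety * h / c_ws with wave speed c_ws = sqrt(K/rho)."""
    return safety * h / np.sqrt(K_bulk / rho)

def velocity_verlet_step(x, v, a, dt, acceleration):
    x_new = x + v * dt + 0.5 * a * dt ** 2   # position update
    a_new = acceleration(x_new)              # new accelerations
    v_new = v + 0.5 * (a + a_new) * dt       # velocity update
    return x_new, v_new, a_new
```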

3.3 Stability Problems

Unfortunately, referring to [10, 13], among others, the derived meshfree correspondence framework is susceptible to instabilities similar to the appearance of zero-energy modes, especially in regions of high gradients. In [25] they are interpreted as zero-energy modes resulting from the rank deficiency due to nodal integration. As a stabilisation, additional spring-like forces are introduced in [26] to reduce the instabilities. However, this approach stiffens the system and requires a stabilisation parameter which has to be chosen a priori. Another approach, based on the assumption that the instabilities result from nodal integration, is formulated by [27] using a stabilisation of the deformation state over a stabilised deformation field. The proof that the instabilities do not arise from nodal integration is given by [15] and [13]. In [13] the peridynamic equation of motion is solved in a weak form with exact integration and compared to the results of an equivalent Finite Element computation. The numerical instabilities do not disappear within the simulations, so the reason for their appearance cannot be assigned to the meshless particle discretisation and has to arise from the constitutive correspondence formulation (7).


Overall, the weak point of the formulation is the definition of the constant non-local deformation gradient. It averages the deformations of the bonds within a family, so that only constant, averaged stress states are reproduced. Thus, the possibility of matter interpenetration is not eliminated within the formulation. Nevertheless, the main problem is that the peridynamic reduction to obtain the non-local deformation gradient is not bijective under the consideration of a spherical domain of influence. In [10] it is shown that for a spherical domain of influence the non-local deformation gradient $\bar{F}_I$ is independent of the current position of the associated particle $y_I$, i.e. the non-local deformation gradient $\bar{F}_I$ takes the same value for arbitrary $y_I$. To cope with this problem, different extensions and reformulations of the correspondence formulation have been developed. In [13] a stable formulation is derived by the definition of non-linear bond-strain measures

$$\varepsilon^{(m)}\langle \xi \rangle := \frac{1}{2m}\left( c\langle \xi \rangle^{m} - 1 \right) \quad \text{and} \quad c\langle \xi \rangle := \frac{Y\langle \xi \rangle \cdot Y\langle \xi \rangle}{\xi \cdot \xi}, \qquad (15)$$

similar to Seth-Hill strain measures in classical continuum mechanics, and the definition of a correspondence force state. It is proved that the numerical instabilities vanish for m

For p > 0 it is defined by the recursion

$$N_{i,p}(\xi) = \frac{\xi - \xi_i}{\xi_{i+p} - \xi_i}\, N_{i,p-1}(\xi) + \frac{\xi_{i+p+1} - \xi}{\xi_{i+p+1} - \xi_{i+1}}\, N_{i+1,p-1}(\xi). \qquad (2)$$

A B-Spline curve of degree p is then computed by the linear combination of the control points and the respective basis functions:

$$C(\xi) = \sum_{i=1}^{n} N_{i,p}(\xi)\, P_i. \qquad (3)$$

2.2 Non-uniform Rational B-Splines

A NURBS of degree p is the projection of a B-Spline in $\mathbb{R}^{d+1}$. A NURBS curve is a linear combination of univariate NURBS basis functions $R_{i,p}$ and control points $P_i$:

$$C(\xi) = \sum_{i=1}^{n} R_{i,p}(\xi)\, P_i \quad \text{with} \quad R_{i,p}(\xi) = \frac{w_i\, N_{i,p}(\xi)}{\sum_{j=1}^{n} w_j\, N_{j,p}(\xi)}, \qquad (4)$$

where $N_{i,p}$ are B-Spline basis functions with the corresponding weights $w_i$. As can be seen, a B-Spline is a special case of a NURBS where all weights are equal. Bivariate NURBS basis functions $R^{p,q}_{i,j}$ are defined by a tensor product of the one-dimensional B-Spline basis functions $N_{i,p}(\xi)$ and $M_{j,q}(\eta)$ in the two parametric directions $\xi$ and $\eta$ and the corresponding weights $w_{i,j}$ as

$$R^{p,q}_{i,j}(\xi, \eta) = \frac{N_{i,p}(\xi)\, M_{j,q}(\eta)\, w_{i,j}}{\sum_{\hat{i}=1}^{n} \sum_{\hat{j}=1}^{m} N_{\hat{i},p}(\xi)\, M_{\hat{j},q}(\eta)\, w_{\hat{i},\hat{j}}}. \qquad (5)$$

With the control points $P_{i,j}$, a NURBS surface of degree p, q can be expressed as

$$S(\xi, \eta) = \sum_{i=1}^{n} \sum_{j=1}^{m} P_{i,j}\, R^{p,q}_{i,j}(\xi, \eta). \qquad (6)$$
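For illustration, the recursion of Eq. (2) and the rational basis of Eq. (4) can be sketched as follows; this is not the authors' implementation, and the knot vector and weights in the usage example are invented:

```python
# Minimal sketch: Cox-de Boor recursion (Eq. (2)) and NURBS basis (Eq. (4)).
import numpy as np

def bspline_basis(i, p, xi, knots):
    """N_{i,p}(xi) on the knot vector `knots` (0-based index i)."""
    if p == 0:
        return 1.0 if knots[i] <= xi < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] > knots[i]:
        left = (xi - knots[i]) / (knots[i + p] - knots[i]) \
               * bspline_basis(i, p - 1, xi, knots)
    if knots[i + p + 1] > knots[i + 1]:
        right = (knots[i + p + 1] - xi) / (knots[i + p + 1] - knots[i + 1]) \
                * bspline_basis(i + 1, p - 1, xi, knots)
    return left + right

def nurbs_basis(p, xi, knots, weights):
    """All rational basis functions R_{i,p}(xi) of Eq. (4)."""
    n = len(weights)
    N = np.array([bspline_basis(i, p, xi, knots) for i in range(n)])
    wN = weights * N
    return wN / wN.sum()

# Usage example: quadratic basis on an open knot vector over [0, 1].
knots = np.array([0.0, 0.0, 0.0, 0.5, 1.0, 1.0, 1.0])
print(nurbs_basis(2, 0.25, knots, np.ones(4)))
```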

3 Computational Homogenization

In this section we review the theoretical background of modeling a material consisting of heterogeneous microstructures. The separation of scales allows us to view the problem as two coupled subproblems at the macro- and the micro-scale. At the microscale the appropriate boundary conditions are derived based on the Hill-Mandel condition [17, 18]. In this paper, we use periodic boundary conditions for the microscale BVP.

3.1 Macroscopic Equilibrium Problem

Let ${}^{M}B_0$ denote the reference macro-continuum domain. The initial macroscale configuration can take the form of a spatial configuration ${}^{M}B_t$ under a deformation gradient ${}^{M}F = {}^{M}\mathrm{Grad}\,{}^{M}\varphi$, where ${}^{M}\varphi$ is a one-to-one mapping between the reference and spatial configurations. At the macro level we assume that no mechanical body forces and inertia effects exist. Thus, the conservation of linear momentum in the undeformed configuration takes the form

$$\mathrm{Div}\,{}^{M}P = 0 \quad \text{in } {}^{M}B_0 \quad \text{subject to} \quad {}^{M}P\,{}^{M}N - {}^{M}T = 0 \quad \text{on } \partial{}^{M}B_0^{N}, \qquad (7)$$

where ${}^{M}P$ is the macroscopic first Piola-Kirchhoff stress tensor, ${}^{M}N$ is the outward unit normal and ${}^{M}T$ is the prescribed traction on the Neumann portion of the boundary $\partial{}^{M}B_0^{N}$.

3.2 Microscopic Equilibrium Problem

The microscale problem is defined on an RVE, consisting of at least two materials, representing the local heterogeneities of the corresponding integration point in the macro-structure.


Let $B_0$ and $B_t$ denote the reference and current configurations of the RVE, respectively. We assume that for each point with position vector $X$ in the material configuration $B_0$, there is only one counterpart in the spatial configuration $B_t$ with position vector $x$. The position vectors in the material and spatial configurations are related via a deformation mapping $\varphi$ such that $x = \varphi(X)$. The deformation gradient $F$ is defined by $F = \mathrm{Grad}\,\varphi$. The body forces and inertia effects in the RVE are neglected and the stress-divergence term becomes the dominant part in the linear momentum equation. Thus, the conservation of linear momentum in the material configuration is written as

$$\mathrm{Div}\, P = 0 \quad \text{in } B_0 \quad \text{subject to} \quad P\, N - T = 0 \quad \text{on } \partial B_0^{N}, \qquad (8)$$

where $P$ is the microscopic first Piola-Kirchhoff stress tensor, $N$ is the outward unit normal and $T$ is the prescribed traction on the Neumann portion of the boundary $\partial B_0^{N}$.

4 Isogeometric Formulation

In this section, we present the isogeometric Galerkin and collocation formulations for solving the equations introduced in Sect. 3. In the next subsections, we only illustrate the formulations for one scale, since the equilibrium equation takes the same form at both the micro- and macro-scale.

4.1 Isogeometric Galerkin Formulation

We start with the derivation of the weak form of the balance of linear momentum and then discretize it in space. The resulting nonlinear system of equations is linearized and solved using the Newton-Raphson scheme. The mechanical weak form is obtained by multiplying the local linear momentum balance equation in strong form by a vector-valued test function $\delta\varphi$, vanishing on the Dirichlet part of the boundary, and integrating over the material configuration:

$$\int_{B_0} \delta\varphi \cdot \mathrm{Div}\, P\, \mathrm{d}V = \int_{B_0} \left[ \mathrm{Div}(\delta\varphi \cdot P) - P : \mathrm{Grad}\,\delta\varphi \right] \mathrm{d}V \overset{!}{=} 0 \quad \forall\, \delta\varphi \in H_0^1, \qquad (9)$$

where

$$H_0^1 = \left\{ f = f(X) \,:\, f,\, \mathrm{Grad}\, f \in L^2,\; f = 0 \text{ on } \partial B_0^D \right\}$$

and $\partial B_0^D$ represents the Dirichlet part of the boundary. Using the divergence theorem, the equation above takes the form




$$\int_{B_0} P : \mathrm{Grad}\,\delta\varphi\, \mathrm{d}V - \int_{\partial B_0^N} \delta\varphi \cdot T\, \mathrm{d}A \overset{!}{=} 0 \quad \forall\, \delta\varphi \in H_0^1, \qquad (10)$$

where $T = P \cdot N$ is the prescribed traction force on the surface and $\partial B_0^N$ represents the Neumann part of the boundary. The reference domain $B_0$ and its boundary $\partial B_0$ are discretized into a set of bulk and surface elements through

$$B_0^h = \bigcup_{\beta=1}^{n_{Bel}} B_0^{\beta} \quad \text{and} \quad \partial B_0^h = \bigcup_{\beta=1}^{n_{Sel}} \partial B_0^{\beta}.$$

Next, using the isoparametric concept, the geometry of each bulk element is discretized as

$$X^h(\xi) = \sum_{i=1}^{n_{pe}} N_{i,p}(\xi)\, X^i, \qquad \varphi^h(\xi) = \sum_{i=1}^{n_{pe}} N_{i,p}(\xi)\, \varphi^i.$$

The abbreviation $n_{pe}$ denotes the number of control points per element and $N_{i,p}$ is the basis function of the bulk element at control point $i$. Then, by using this spatial discretization and introducing the assembly operator $\mathsf{A}$, the discretized weak form of the balance of linear momentum reads

$$R = \mathsf{A}_{\beta=1}^{n_{el}} R^{\beta} = \mathsf{A}_{\beta=1}^{n_{Bel}} \int_{B_0} P \cdot \mathrm{Grad}\, N\, \mathrm{d}V - \mathsf{A}_{\beta=1}^{n_{Sel}} \int_{\partial B_0} T \cdot N\, \mathrm{d}A. \qquad (11)$$

The nonlinear residual vector, which has a size equal to the number of degrees of freedom, can be stated as a function of the unknown global vector of spatial coordinates $d$ as

$$R = R(d) \overset{!}{=} 0. \qquad (12)$$

An iterative numerical method is needed to find the solution $d$ of Eq. (12). Here, the Newton-Raphson scheme is used and hence the linearized system of equations reads

$$R(d) + \frac{\partial R}{\partial d} \cdot \Delta d \overset{!}{=} 0 \quad \text{and} \quad d_{k+1} = d_k + \Delta d_k, \qquad (13)$$

where $k$, $\Delta d_k$ and $d_{k+1}$ denote the iteration step, the iterative increment vector and the corrected vector of spatial coordinates, respectively. Utilizing such a linearization scheme results in a mechanical tangent stiffness matrix

$$K := \frac{\partial R}{\partial d},$$

which is constructed from the element stiffness matrices in the form


$$K := \frac{\partial R}{\partial \varphi} := \mathsf{A}_{\beta=1}^{n_{el}} \int_{B_0} \mathrm{Grad}\, N \cdot \frac{\partial P}{\partial F} \cdot \mathrm{Grad}\, N\, \mathrm{d}V, \qquad (14)$$

where $\mathsf{A}$ is the assembly operator. As shown in Eqs. (11) and (14), in order to solve the nonlinear problem using the Newton-Raphson scheme, the first Piola-Kirchhoff stress tensor $P$ and the Piola stress tangent $\mathbb{A}$ are needed. This stress tangent with respect to the deformation gradient is denoted by $\mathbb{A} = \partial P / \partial F$ and is a fourth-order tensor. In the microscale problem these properties are calculated from the hyperelastic material model $\psi$ using standard tensor algebra. In the macroscale problem the required properties are derived based on the solution of the local BVP. The macroscopic first Piola-Kirchhoff stress tensor is computed based on the average stress theorem, and the tangent tensor is approximated by finite differences. The upscaling of these properties is explored in [19].
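A generic sketch of the Newton-Raphson iteration of Eqs. (12)-(13) is given below; residual_and_tangent stands for an assumed assembly callback returning R and K for the current coordinate vector, and is not part of the original code:

```python
# Minimal sketch: Newton-Raphson solution of R(d) = 0, cf. Eqs. (12)-(13).
import numpy as np

def newton_raphson(d, residual_and_tangent, tol=1e-8, max_iter=25):
    for k in range(max_iter):
        R, K = residual_and_tangent(d)
        if np.linalg.norm(R) < tol:
            return d, k
        delta_d = np.linalg.solve(K, -R)   # K . delta_d = -R
        d = d + delta_d                    # d_{k+1} = d_k + delta_d_k
    raise RuntimeError("Newton-Raphson did not converge")
```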

4.2 Isogeometric Collocation Formulation

In the IGA-C method we can perform the discretization and linearization procedures directly on the strong form of Eq. (8), which is enforced at a set of discrete collocation points. These points are located in the interior of the domain, on the edges and at the corners of the domain. For the inner collocation points, the residual for the Newton-Raphson iterative solution can be written as $R = \mathrm{Div}\, P$, which can be rephrased as

$$R = \mathrm{Div}\, P = \mathrm{tr}\,(\mathrm{Grad}\, P) = \mathrm{tr}\left( \frac{\partial P}{\partial F} : \frac{\partial F}{\partial X} \right) = \mathrm{tr}\,(\mathbb{A} : \mathrm{Grad}\, F), \qquad (15)$$

where $\mathbb{A}$ is the sensitivity of the first Piola-Kirchhoff stress with respect to the deformation gradient. Linearization of Eq. (15) leads to the consistent stiffness contribution

$$\Delta R = \mathrm{tr}\,(\Delta\mathbb{A} : \mathrm{Grad}\, F + \mathbb{A} : \mathrm{Grad}\,\Delta F) \qquad (16)$$
$$\phantom{\Delta R} = \mathrm{tr}\left( \frac{\partial \mathbb{A}}{\partial F} : \Delta F : \mathrm{Grad}\, F + \mathbb{A} : \mathrm{Grad}\,\Delta F \right) \qquad (17)$$
$$\phantom{\Delta R} = \mathrm{tr}\,(\mathbb{D} : \Delta F : \mathrm{Grad}\, F + \mathbb{A} : \mathrm{Grad}\,\Delta F), \qquad (18)$$

where $\Delta$ represents the linearized increment. Performing the linearization procedure on the strong form of the balance of linear momentum leads to a new sixth-order tensor $\mathbb{D}$ in Eq. (18). This tensor is defined as

$$\mathbb{D} = \frac{\partial \mathbb{A}}{\partial F} = \frac{\partial^2 P}{\partial F^2}. \qquad (19)$$


For collocation points located on edges within the Neumann boundary, the equation is

$$P\, N - T = 0, \qquad (20)$$

where $N$ and $T$ are the outward unit normal and the imposed traction, respectively. For collocation points located at corners where two Neumann boundaries meet, [20] showed that the appropriate equation takes the form

$$P\,(N' + N'') - (T' + T'') = 0, \qquad (21)$$

where $N'$ and $N''$ are the outward unit normals of the edges meeting at the corner, and $T'$ and $T''$ are the imposed tractions. Linearization of the Neumann boundary conditions follows the same procedure and is not reported here for the sake of brevity. For more details on the collocation formulation at finite strain, the reader is referred to [21]. As shown in Eq. (18), in order to solve the nonlinear problem using the Newton-Raphson scheme, the Piola stress tangents $\mathbb{A}$ and $\mathbb{D}$ are needed. In the microscale problem these properties are calculated from the hyperelastic material model $\psi$ using standard tensor algebra. In the macroscale problem the required properties are derived from the solution of the local BVP and approximated by finite differences.
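The following sketch shows how such finite-difference tangents can be obtained from any stress routine P(F); the St. Venant-Kirchhoff law used here is only an illustrative stand-in for the actual micro- or macro-scale response, and the step size is chosen loosely since nested differencing amplifies round-off:

```python
# Minimal sketch: tangents A = dP/dF and D = dA/dF (Eq. (19)) by central
# finite differences. The stress law below is an illustrative assumption.
import numpy as np

LAM, MU = 1.0, 1.0   # illustrative Lame parameters

def piola_stress(F):
    E = 0.5 * (F.T @ F - np.eye(2))                   # Green-Lagrange strain
    S = LAM * np.trace(E) * np.eye(2) + 2.0 * MU * E  # 2nd Piola-Kirchhoff
    return F @ S                                      # 1st Piola-Kirchhoff

def fd_tangent(fun, F, eps=1e-6):
    """Differentiate a tensor-valued fun(F) w.r.t. F by central differences."""
    base = fun(F)
    out = np.zeros(base.shape + F.shape)
    for k in range(F.shape[0]):
        for l in range(F.shape[1]):
            Fp, Fm = F.copy(), F.copy()
            Fp[k, l] += eps
            Fm[k, l] -= eps
            out[..., k, l] = (fun(Fp) - fun(Fm)) / (2.0 * eps)
    return out

F = np.array([[1.1, 0.05], [0.0, 0.95]])
A = fd_tangent(piola_stress, F)                             # fourth order
D = fd_tangent(lambda G: fd_tangent(piola_stress, G), F)    # sixth order
```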

5 Numerical Examples

In this section we investigate our isogeometric frameworks in two different numerical examples: one RVE problem and one multiscale problem. The accuracy and efficiency of the IGA-G and IGA-C methods are explored in both examples. We assign periodic boundary conditions for the microscale problem. The hyperelastic Neo-Hookean material model $\psi$ is defined as [22]

$$\psi(F) := \frac{1}{2}\,\lambda \ln^2 J + \frac{1}{2}\,\mu\,\left[ F : F - n_{dim} - 2 \ln J \right], \qquad (22)$$

where $\lambda$ and $\mu$ denote the Lamé parameters, $J$ is the determinant of the deformation gradient tensor and $n_{dim}$ represents the problem dimension.
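For illustration, Eq. (22) and the resulting Piola stress can be sketched as below. The closed form P = μ(F − F⁻ᵀ) + λ ln(J) F⁻ᵀ is obtained here by differentiating Eq. (22) and is stated as our own derivation, not quoted from the paper:

```python
# Minimal sketch: Neo-Hookean energy of Eq. (22) and its Piola stress,
# with a finite-difference consistency check of the derivative.
import numpy as np

def psi(F, lam, mu):
    J = np.linalg.det(F)
    return 0.5 * lam * np.log(J) ** 2 \
         + 0.5 * mu * (np.tensordot(F, F) - F.shape[0] - 2.0 * np.log(J))

def piola(F, lam, mu):
    J = np.linalg.det(F)
    FinvT = np.linalg.inv(F).T
    return mu * (F - FinvT) + lam * np.log(J) * FinvT

# Check P : I against a directional finite difference of psi (direction I).
F = np.array([[1.3, 0.0], [0.0, 1.0]])   # the 30% extension used below
dF = 1e-6 * np.eye(2)
fd = (psi(F + dF, 1.0, 1.0) - psi(F - dF, 1.0, 1.0)) / 2e-6
print(np.isclose(fd, np.tensordot(piola(F, 1.0, 1.0), np.eye(2))))
```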

5.1 Microscale Problem

In this part, we consider a composite consisting of a matrix material and fibers distributed uniformly inside the matrix. In this case the microstructure can be represented by the two-dimensional unit cell illustrated in Fig. 3. For the following study, the material parameters of the matrix are assumed to be a shear modulus


Table 1 Comparison of computational time for different polynomial degrees for IGA-G and IGA-C

Polynomial degree | DOF | IGA-G: #Integration points | IGA-G: Assembly time (s) | IGA-G: Solution time (s) | IGA-C: #Collocation points | IGA-C: Assembly time (s) | IGA-C: Solution time (s)
2 | 1150 | 3969 | 113.10 | 0.91 | 729 | 42.14 | 2.66
3 | 1456 | 7056 | 683.55 | 2.15 | 900 | 123.16 | 5.55
4 | 1798 | 11,025 | 2060.92 | 3.82 | 1089 | 356.15 | 5.80
5 | 2176 | 15,876 | 6026.21 | 5.21 | 1296 | 953.03 | 6.32

$\mu^{mat} = 8$ and Poisson's ratio $\nu^{mat} = 0.3$. The same Poisson's ratio is chosen for the fiber material, but the fiber-to-matrix shear modulus ratio is 0.1. In this problem, a macroscopic deformation gradient is applied on the boundary of the unit cell. In this specific example, we consider 30% extension as the macroscopic deformation, which gives

$${}^{M}F = \begin{bmatrix} 1.3 & 0 \\ 0 & 1 \end{bmatrix}.$$

However, the same analysis can be generalized to any macroscopic deformation gradient. We study the accuracy and performance of both the IGA-G and IGA-C methods for quadratic, cubic, quartic and quintic NURBS. The computational times for the stiffness assembly and solution procedures for each method are reported in Table 1. It can be seen that IGA-C needs less time for the assembly than IGA-G, while the solution procedure takes a similar time for both methods. This benefit of using IGA-C grows with increasing basis function degree, as a result of the smaller number of evaluation points in IGA-C compared to IGA-G. For example, for p = 2 the assembly time for IGA-G exceeds its IGA-C counterpart by a factor of 2.7, while for p = 5 this factor increases to 6.3. The dependency of the number of integration points on the polynomial degree in the Galerkin method leads to a remarkably higher computational time than for IGA-C, where the number of collocation points remains equal to the number of control points, independent of any increase in the degree of the basis functions.

Fig. 3 Two-dimensional unit cell (1.0 × 1.0) of a fiber composite and the associated IGA discretization


Fig. 4 Relative error versus number of degrees of freedom for polynomial degrees p = 2, 3, 4, 5

The convergence plots for the relative error of the upscaled Piola stress component $P_{xx}$ in the L2 norm with respect to the number of degrees of freedom (DOF) can be found in Fig. 4 for different polynomial degrees. We consider the solution of the most refined case for the quintic polynomial in the IGA-G method as the reference solution. The best results for each polynomial degree are obtained by the IGA-G method. However, the plots in Fig. 5, visualizing the relative errors in the L2 norm as functions of time, show that IGA-C reaches the same level of accuracy faster than IGA-G, particularly for higher polynomial degrees.

5.1.1 Multiscale Problem

In this problem, we use the geometry of a quarter annulus, defined as in Fig. 6 with Ri = 1 and Ro = 2, for the macroscopic domain. The macro-sample is fixed along the lower edge and is loaded by the force vector f = [50 0]^T along the left side. Solving the full two-scale problem involves satisfying the linear momentum balance not only at the microscale but also at the macroscale. Each evaluation point in the macro domain is linked to an RVE as described in Fig. 3, with the same material properties as in the previous subsection. We use four schemes for this problem:

1. IGA-G2: using IGA-G to solve the BVP at both scales
2. IGA-C2: using IGA-C to solve the BVP at both scales
3. IGA-C & IGA-G: using IGA-C to solve the macroscale BVP and IGA-G to solve the microscale BVP
4. IGA-G & IGA-C: using IGA-G to solve the macroscale BVP and IGA-C to solve the microscale BVP

Fig. 5 Convergence plots of relative error versus time for polynomial degrees p = 2, 3, 4, 5

Fig. 6 Macroscale sample

In this example, we have used quadratic, cubic, quartic and quintic polynomials at both scales. The displacement of point C in Fig. 6 is investigated. We consider the displacement solution of IGA-G2 for the most refined case with quintic polynomials at both scales as the reference solution. The total computational times for the macro- and micro-scale problems in IGA-G2, IGA-C2, IGA-C & IGA-G and IGA-G & IGA-C are reported in Tables 2, 3, 4 and 5, respectively. As in the previous numerical example, the computational time increases for all methods with increasing polynomial degree. IGA-C2 achieves the best computational times among all approaches due to the smaller number of evaluation points at both scales, although upscaling


the sixth-order tensor $\mathbb{D}$ with finite differences reduces the efficiency. Next is the IGA-C & IGA-G scheme, which has the same number of macroscopic evaluation points as IGA-C2 but uses IGA-G for the microscale BVP. Within the IGA-G & IGA-C framework, the advantage of using IGA-C is restricted to the microscale. Employing IGA-G at the macroscale leads to a higher number of microscopic BVPs compared to

Table 2 IGA-G2 computational time at macro- and micro-scale for different polynomial degrees

Polynomial degree p | No. of macro evaluation points | Time (macroscale) | Time (microscale)
2 | 196 | 574.81 s | 26 h 21 m
3 | 441 | 953.43 s | 45 h 01 m
4 | 1225 | 1892.54 s | 111 h 21 m
5 | 1764 | 2751.34 s | 142 h 42 m

Table 3 IGA-C2 computational time at macro- and micro-scale for different polynomial degrees

Polynomial degree p | No. of macro evaluation points | Time (macroscale) | Time (microscale)
2 | 81 | 351.26 s | 6 h 37 m
3 | 100 | 582.75 s | 8 h 54 m
4 | 121 | 811.68 s | 10 h 41 m
5 | 144 | 934.72 s | 11 h 56 m

Table 4 IGA-C & IGA-G computational time at macro- and micro-scale for different polynomial degrees

Polynomial degree p | No. of macro evaluation points | Time (macroscale) | Time (microscale)
2 | 81 | 345.11 s | 12 h 19 m
3 | 100 | 578.22 s | 14 h 07 m
4 | 121 | 818.70 s | 16 h 33 m
5 | 144 | 921.53 s | 17 h 41 m

Table 5 IGA-G & IGA-C computational time at macro- and micro-scale for different polynomial degrees

Polynomial degree p | No. of macro evaluation points | Time (macroscale) | Time (microscale)
2 | 196 | 587.53 s | 13 h 43 m
3 | 441 | 961.85 s | 28 h 51 m
4 | 1225 | 1883.81 s | 81 h 22 m
5 | 1764 | 2774.49 s | 109 h 56 m


Table 6 Relative error: IGA-G2, IGA-C2, IGA-C & IGA-G and IGA-G & IGA-C

Polynomial degree p | IGA-G2 | IGA-C2 | IGA-C & IGA-G | IGA-G & IGA-C
2 | 1.3E−4 | 7.2E−2 | 3.0E−2 | 3.4E−3
3 | 2.4E−5 | 4.3E−4 | 2.7E−4 | 9.1E−5
4 | 2.6E−6 | 3.1E−6 | 2.9E−6 | 2.7E−6
5 | — | 1.4E−7 | 1.1E−7 | 8.3E−8

(the IGA-G2 solution for p = 5 serves as the reference and is therefore omitted)

previous schemes. For example, for p = 2 we need 2.4 times more microscopic BVPs. The least efficient approach is IGA-G2, since it has the largest number of evaluation points at both scales. Also, as reported in Table 6 and by comparing the computational times in Tables 2, 3, 4 and 5, IGA-C2 reaches the same order of accuracy faster than IGA-G2 as we increase the polynomial degree of the basis functions. Based on the results of this numerical example, utilizing IGA-C with higher polynomial degrees at the macroscale yields better results both in accuracy and computational efficiency, with IGA-C2 giving the best results among all approaches.

6 Conclusion

In this paper we investigated the application of the isogeometric Galerkin method in the context of multiscale modeling and further proposed a new isogeometric collocation computational homogenization framework. The main idea of this framework is to benefit from the high accuracy of IGA and the computational efficiency of the collocation method. Numerical examples based on a microscale problem and a multiscale problem were explored to demonstrate the accuracy and computational efficiency of the proposed IGA-C2 framework. In both examples we measured the error values in the L2 norm. In the microscale example IGA-C obtained the same level of accuracy faster than IGA-G, notably for higher polynomial degrees. In the multiscale problem, four different schemes were explored: IGA-G2, IGA-C2, IGA-C & IGA-G and IGA-G & IGA-C. The approaches utilizing IGA-C for the macroscale problem (IGA-C2 and IGA-C & IGA-G) obtained better results in the sense of accuracy and computational efficiency than the approaches using IGA-G at the macroscale. This result is due to the smaller number of evaluation points at the global scale for IGA-C than for IGA-G. The best results in terms of accuracy and computational time among all approaches are obtained by IGA-C2, particularly for higher polynomial degrees.

Acknowledgements The authors appreciate the support of the German Research Foundation (DFG) within the International Research and Training Group IRTG 1627.


References

1. Cottrell, J., Hughes, T., & Bazilevs, Y. (2009). Isogeometric analysis: Toward integration of CAD and FEA. Wiley.
2. Schillinger, D., Hossain, S. J., & Hughes, T. J. R. (2014). Reduced Bézier element quadrature rules for quadratic and cubic splines in isogeometric analysis. Computer Methods in Applied Mechanics and Engineering, 277, 1–45.
3. Auricchio, F., Calabro, F., Hughes, T., Reali, A., & Sangalli, G. (2012). A simple algorithm for obtaining nearly optimal quadrature rules for NURBS-based isogeometric analysis. Computer Methods in Applied Mechanics and Engineering, 249–252, 15–27.
4. Hughes, T., Reali, A., & Sangalli, G. (2010). Efficient quadrature for NURBS-based isogeometric analysis. Computer Methods in Applied Mechanics and Engineering, 199(5–8), 301–313.
5. Adam, C., Hughes, T., Bouabdallah, S., Zarroug, M., & Maitournam, H. (2015). Selective and reduced numerical integrations for NURBS-based isogeometric analysis. Computer Methods in Applied Mechanics and Engineering, 284, 732–761.
6. Hiemstra, R., Calabro, F., Schillinger, D., & Hughes, T. (2017). Optimal and reduced quadrature rules for tensor product and hierarchically refined splines in isogeometric analysis. Computer Methods in Applied Mechanics and Engineering, 316, 966–1004.
7. Fahrendorf, F., De Lorenzis, L., & Gomez, H. (2018). Reduced integration at superconvergent points in isogeometric analysis. Computer Methods in Applied Mechanics and Engineering, 328, 390–410.
8. Auricchio, F., Beirão da Veiga, L., Hughes, T. J. R., Reali, A., & Sangalli, G. (2010). Isogeometric collocation methods. Mathematical Models and Methods in Applied Sciences, 20(11), 1075–1077.
9. Schillinger, D., Evans, J. A., Reali, A., Scott, M. A., & Hughes, T. (2013). Isogeometric collocation: Cost comparison with Galerkin methods and extension to adaptive hierarchical NURBS discretizations. Computer Methods in Applied Mechanics and Engineering, 267, 170–232.
10. Temizer, I. (2014). Multiscale thermomechanical contact: Computational homogenization with isogeometric analysis. International Journal for Numerical Methods in Engineering, 97, 582–607.
11. Alberdi, R., Zhang, G., & Khandelwal, K. (2018). A framework for implementation of RVE-based multiscale models in computational homogenization using isogeometric analysis. International Journal for Numerical Methods in Engineering, 114, 1018–1051.
12. Geers, M. G. D., Kouznetsova, V., & Brekelmans, W. A. M. (2010). Computational homogenization, multiscale modelling of plasticity and fracture by means of dislocation mechanics. CISM Courses and Lectures, 522, 327–394.
13. Feyel, F. D. R., & Chaboche, J.-L. (2000). FE2 multiscale approach for modelling the elastoviscoplastic behaviour of long fibre SiC/Ti composite materials. Computer Methods in Applied Mechanics and Engineering, 183, 309–330.
14. Solinc, U., & Korelc, J. (2015). A simple way to improved formulation of FE2 analysis. Computational Mechanics, 56(5), 905–915.
15. Somer, D. D., de Souza Neto, E. A., Dettmer, W. G., & Perić, D. (2009). A sub-stepping scheme for multi-scale analysis of solids. Computer Methods in Applied Mechanics and Engineering, 198(9–12), 1006–1016.
16. Otero, F., Martinez, X., Oller, S., & Salomon, O. (2015). An efficient multiscale method for non-linear analysis of composite structures. Composite Structures, 131, 707–719.
17. Hill, R. (1972). On constitutive macro-variables for heterogeneous solids at finite strain. Proceedings of the Royal Society of London. Series A, 326(1565), 131–147.
18. Mandel, J. (1972). Plasticité classique, viscoplasticité (CISM courses and lectures) (Vol. 97). New York: Springer.
19. Saeb, S., Javili, A., & Steinmann, P. (2016). Aspects of computational homogenization at finite deformations: A unifying review from Reuss' to Voigt's bound. In Applied Mechanics Reviews.


20. Auricchio, F., Beirão da Veiga, L., Hughes, T. J. R., Reali, A., & Sangalli, G. (2012). Isogeometric collocation for elastostatics and explicit dynamics. Computer Methods in Applied Mechanics and Engineering, 249–252, 2–14.
21. Kruse, R., Nguyen-Thanh, N., De Lorenzis, L., & Hughes, T. J. R. (2015). Isogeometric collocation for large deformation elasticity and frictional contact problems. Computer Methods in Applied Mechanics and Engineering, 296, 73–112.
22. Simo, J. C., & Pister, K. S. (1984). Remarks on rate constitutive equations for finite deformation. Computer Methods in Applied Mechanics and Engineering, 46, 201–215.

Composites

Experimental and Numerical Investigations on the Combined Forming Behaviour of DX51 and Fibre Reinforced Thermoplastics Under Deep Drawing Conditions

Bernd-Arno Behrens, Alexander Chugreev and Hendrik Wester

Abstract In order to achieve significant weight reductions, multi-material concepts are steadily gaining importance in the automotive and aviation industry. In this respect, a new hybrid construction approach is the combination of steel and fibre-reinforced thermoplastics (FRT) in a sandwich design. The use of FRT provides a high lightweight potential due to the combination of low density and high tensile strength. Using thermoplastics instead of thermoset matrices enables a reduction of process times and component costs and thus becomes affordable for large-scale application. The combined forming and joining of FRT and steel sheets requires elevated temperatures, which lead to a complex forming behaviour. Furthermore, the in-plane and out-of-plane material properties of the FRT, in particular the forming and failure behaviour, differ strongly from those of conventional metal materials like steel or aluminium. Therefore, new material characterisation techniques and investigation methods as well as numerical models are required. However, the temperature-dependent material behaviour of the steel component and the occurrence of material phenomena such as blue brittleness also need to be investigated and taken into account in the numerical simulation. This research deals with the experimental investigation and numerical modelling of the material behaviour under deep drawing conditions in order to realize an efficient one-shot forming process with the help of numerical simulation. The numerical analysis is realised with the commercial FE software Abaqus.

B.-A. Behrens · A. Chugreev · H. Wester (B) Institute of Forming Technology and Machines, Leibniz Universität Hannover, An der Universität 2, 30823 Garbsen, Germany e-mail: [email protected] B.-A. Behrens e-mail: [email protected] A. Chugreev e-mail: [email protected] © Springer Nature Switzerland AG 2020 P. Wriggers et al. (eds.), Virtual Design and Validation, Lecture Notes in Applied and Computational Mechanics 93, https://doi.org/10.1007/978-3-030-38156-1_7


1 Introduction

The automotive and aviation industry needs to achieve significant weight reductions in order to fulfil legal obligations concerning resource and energy efficiency. This leads to increasing requirements on the material properties, which can no longer be met by single materials. One way to deal with these challenges is the combination of various materials with different and partly contrary mechanical properties, like metals and composites. Metals are the most used construction materials due to their advantages such as high ductility, formability, joining ability as well as availability. Nevertheless, new composite materials like fibre-reinforced thermoplastics (FRT) are being used more frequently. They provide a high lightweight potential due to the combination of low density and high specific tensile strength. In contrast to metal materials, composites only offer high stiffness and strength in the fibre direction. Furthermore, they exhibit good electromagnetic shielding properties. The use of thermoplastics, like polyamide, as matrix material is gaining more importance, particularly in the automotive industry, since they can be reshaped under special thermal conditions. Meanwhile, pre-impregnated sheets with a thermoplastic matrix reinforced with woven fibres (FRT) are commercially available. This leads to significant cost reductions and makes FRT affordable for large-scale production. A process design to produce structural parts in a sandwich design is the forming of the metal material and the FRT in a combined deep drawing process. This enables the integration of the new process design concept into existing press lines. A schematic representation of the process chain for manufacturing hybrid parts in a one-shot process is shown in Fig. 1. In a first step the different layers of FRT and steel are stacked and heated up to the forming temperature, which is above the melting temperature of the thermoplastic matrix. The heating is carried out by means of conductive heating inside an oven. By using steel as cover layer, an inductive or conductive heating can also be achieved. Subsequently, the compound is transferred into the tool, formed and cooled down. The cooling takes place inside the tool to achieve the reconsolidation of the used FRT as well as the formation of the bonding between the steel and FRT layers.

Fig. 1 Schematic representation of the process chain for manufacturing hybrid parts (heating of the metal component and FRT in an oven, transfer by robot, forming and joining in the press) [1]


an early cooling, resulting in a decrease of formability of the FRT, the tool is heated by means of heating cartridges. To carry out the reconsolidation, the temperature has to be below the melting temperature. In this investigation, the sandwich core of the hybrid part is a PA6 matrix reinforced with woven glass fibres and the outer layers are made of DX51 D+Z steel. The core thickness was sFRT = 1.0 mm and the thickness of each outer layer was sst = 0.50 mm. The applied semi-finished compound and the stacking order are shown in Fig. 2. Due to the strong temperature dependency of the thermoplastic matrix and the superposition of fibre and matrix properties, the FRT exhibits a complex material behaviour [2, 3]. Above the melting temperature, the stiffness and strength of the thermoplastic are strongly reduced. Furthermore, the matrix material behaviour is superimposed by the mechanical properties of the fibres. This leads to a strongly anisotropic behaviour, where the highest strength is observed in the fibre direction. Thus, shear deformation is the main in-plane forming mode of woven fabrics. During the deformation, the shear angle between the fibres changes, and after reaching a specific angle the shear stress strongly increases due to shear locking [4]. The large angular rotation between the two fibre directions leads to a significant change of the anisotropic material behaviour during the forming process [5]. In case of a matrix solidification, the influence of the matrix on the overall forming behaviour increases significantly. The influence of the temperature evolution has been investigated in [6] by means of the draping behaviour of a double dome geometry. The authors concluded that the temperature evolution over the entire process, including the heating, transfer and forming phases, has a decisive impact on the result of the draping process and thus on its precise prediction. A literature review shows that there are mainly two different model approaches for the description of the deformation behaviour of woven fabrics, namely the kinematic approach [7, 8] and finite element based methods [9]. In kinematic approaches, the fabric is represented as a pin-jointed net and the deformation is described by depositing the flat semi-finished product on a 3D surface using a mapping algorithm. They are characterized by short computing times, but cannot describe the constitutive material behaviour.

Fig. 2 Stacking order of the semi-finished compound: steel layer DX51 D+Z (0.5 mm), FRT layer PA6 + woven glass fibres (1.0 mm), steel layer DX51 D+Z (0.5 mm)


Depending on the process design and component geometry, particularly when unsymmetrical configurations are used, the kinematic approaches fail in predicting correct shear angles [10]. In contrast, FE-based approaches take constitutive behaviour, like nonlinearities and rate dependency, as well as process boundary conditions into account. This allows detailed statements about the forming process, but requires higher computing capacities. So-called continuous approaches are suitable for the description of complex forming processes and have been the subject of research in the last decade. Here, the semi-finished fibre composite products are regarded as a continuum with homogeneous material properties. Thus, the different deformation mechanisms are homogenized in each element and can be described efficiently. The challenge in modelling lies in the consideration of the fibre reorientations caused by shear deformation. Since the fibre reorientation cannot be described by the element coordinate systems, non-orthogonal constitutive laws are required. A non-orthogonal material model for woven fabrics based on a homogenization method under consideration of structural and mechanical reinforcement properties was developed by Yu et al. [11]. A continuum model considering the non-orthogonal fibre coordinate system due to shear deformation was presented by Xue et al. [12]. The relationship between stresses and strains in the global coordinate system and the variable local fibre system was obtained by analysing stresses and strains in orthogonal and non-orthogonal coordinates as well as solid-state rotation matrices. The mechanical properties of the fibres are described by a nonlinear function, the behaviour under shear by means of a piecewise linear function. A similar non-orthogonal continuum approach for modelling the behaviour of a balanced fabric was developed by Peng et al. [13]. The determination of the actual fibre orientation is conducted based on the deformation gradient and a rigid-body rotation of the material frame. A comparison of experimental results and numerical simulation for the bias extension test as well as a double-curved geometry showed good agreement for the pure fabric with regard to component geometry and fibre shear angle [14]. Mesoscale models are another way of describing the behaviour of woven fabrics. Here, the material properties are not homogenized and both the fibres and the thermoplastic matrix are modelled individually. An FE model based on the tensile deformation energy and constructed from several periodic unit cells of the woven fabric was introduced by Boisse et al. [15]. Wang et al. developed a mesoscale model which takes tension, in-plane shear and out-of-plane bending into account and thus allows a prediction of wrinkle formation during the forming of fabric-reinforced thermoplastics [16, 17]. Through the direct modelling of fibres and thermoplastic, mesoscale models allow detailed statements about the forming and in particular the interaction of the components, but also require very high computing capacities and are therefore not suitable for the numerical simulation of forming processes with complex geometries. The employed steel, DX51, is a typical deep drawing steel which is commonly used for cold forming processes. However, the simultaneous processing of FRT and steel in a combined process requires elevated temperatures.
With increased temperatures, the material properties of steel change and, especially in the relevant range between 150 and 300 °C, a decrease in deformation capacity and an increase in strength can be observed for unalloyed and low-carbon steels. This is due to the so-called blue brittleness effect, which is caused by a thermally activated diffusion of carbon and nitrogen atoms to interstitial sites, where they impede the migration of dislocations and thus the deformation [18, 19]. Being a result of diffusion, this effect depends on temperature and strain rate. Typically used models like Johnson–Cook describe the changes of formability depending on strain, strain rate and temperature as well as the stress state; however, due to their formulation they can only depict an increase of fracture elongation with increasing temperature [20]. Therefore, the correlation between temperature, stress state and formability has to be determined for the considered process under consideration of the blue brittleness effect. Further effects that can be observed during the forming of steel sheets are the so-called Lüders strain and anisotropy. In the case of anisotropic behaviour, the sheet material flows preferentially from the sheet width or the sheet thickness, depending on the load [20]. This direction-dependent flow behaviour occurs particularly with cold-rolled sheets, where a preferred orientation of the grains is formed in the material during the rolling process. The Lüders strain phenomenon is characterised by a plateau in the stress–strain curve between the yield point and the beginning of hardening [21]. The occurrence of this phenomenon has been attributed by Cottrell et al. [22] to the pinning of dislocations by solute atoms in the crystal lattice. The occurrence of Lüders strain leads to the formation of slip bands in which the plastic deformation initially takes place and thus to an inhomogeneous strain distribution. The slip bands move over the specimen until a homogeneous plastic deformation is achieved. In the area of the Lüders strain, the stress remains almost constant [23]. The occurrence of Lüders strain is influenced by a variety of material factors and process variables such as grain size, carbon and nitrogen content, test temperature and previous thermomechanical treatments [24, 25]. Lüders strain is a diffusion-based process and thus also temperature and strain rate dependent. The main objective of this study is the development and parameterisation of advanced material models under consideration of the challenges discussed above, in order to design an efficient process concept for the combined forming and joining of FRT and DX51 by means of numerical simulation.

2 Methodology

The use of numerical simulation to describe the complex process for the combined production of hybrid components made of steel and FRT requires detailed material data as well as suitable material models which can consider the phenomena described above.


2.1 Experimental Procedure

2.1.1 Steel

The thermomechanical material parameters for the steel components were determined by tensile tests on the quenching and forming dilatometer (DIL805A/D+T) from TA Instruments at IFUM, in accordance with DIN EN ISO 6892-1 [26]. For the analysis of the local strain distributions, tests were also carried out using the optical strain measuring system ARAMIS from GOM. A schematic representation of the test setup is given in Fig. 3a. The gauge length of the used samples is 10 mm with a width of 2 mm. In order to take the increased forming temperatures into account, the tests were carried out in a temperature range from RT to 500 °C. During the tests, temperature, force and displacement were recorded in order to derive the flow curves. To ensure quasistatic conditions, the tests were initially carried out with a strain rate of 0.001 s−1. An exemplary test program is shown in Fig. 3b. At the beginning of the test, the sample is heated to the test temperature by induction. Once no further change in length of the sample is detected, the target temperature is maintained for a few seconds to ensure a homogeneous temperature distribution in the measuring range. Subsequently, the specimen is deformed until fracture. All samples were manufactured by means of waterjet cutting, which enables material separation without thermal influence and thus without changes in the material properties. To investigate possible anisotropy, samples were taken at three different angles to the rolling direction (RD 0°, RD 45°, RD 90°). The chemical composition of the DX51 steel is provided in Table 1.


Fig. 3 Schematic representation of the test setup (a), exemplary test program for a temperature of 300 °C (b)

Table 1 Chemical composition of DX51 (wt%)

C      Si     Mn     P      S      Ti
0.120  0.500  0.600  0.100  0.045  0.300


Fig. 4 Fibre orientation in uniaxial tension test (a); Tensile testing machine with integrated environmental chamber (b); Fibre orientation in bias extension test (c)

2.1.2 FRT

The pre-consolidated FRT used in this study consists of a thermoplastic PA6 matrix reinforced with glass fibres in a 2/2 twill design. The overall density of the material is 1.8 g cm−3. In order to investigate the complex, anisotropic material behaviour, different tests are needed. With respect to the strong temperature dependency, the tests have been performed in a wide temperature range between RT, as a reference, and 250 °C in an environmental chamber (Fig. 4b). The experimental tests have been performed on a uniaxial tensile test machine at IFUM, Hannover and at LMT, ENS Paris-Saclay. Initially, a test speed of 1 mm s−1 was chosen. For statistical coverage, up to five tests have been performed for each parameter combination. The material properties of the FRT in the 0°/90° directions are dominated by the fibres. They are analysed by means of uniaxial tensile tests. In these tests, one fibre direction is orientated parallel to the loading direction, while the angle between the second fibre direction and the force direction equals 90°, as shown in Fig. 4a. During the test there is no change in the angle between the fibres. To carry out the uniaxial characterization tests, the samples produced by water jet cutting are clamped in the tools, heated to the test temperature in the heating chamber and then deformed until failure. The height of the measuring area is 70 mm and the width 30 mm. For the later derivation of the mechanical properties, the force–displacement curves are recorded. For the numerical simulation of the forming behaviour of the investigated FRT, exact knowledge of the shear stress, which varies with the fibre shear angle, is of particular importance. Bias extension tests were initially performed to investigate the material behaviour under shear load. In order to achieve shear deformation, the samples are taken from the sheet and inserted into the tensile testing machine with an initial angle of ±45° between the fibres and the loading direction. The initial fibre orientation is shown in Fig. 4c.


Fig. 5 Different deformation zones in bias extension test (a), definition of shear angle (b)

If a tensile load is applied to the sample, the fibres start to shear. This reduces the angle between the fibres (2θ), which is initially 90°, and leads to an increase of the fibre shear angle γ. The fibre shear angle describes the change in fibre orientation and is defined as shown in Fig. 5b. As a result of the specimen geometry and the fixed clamping, three areas with different deformations are formed on the specimen, as illustrated in Fig. 5a. Due to the fixed clamping of the fibres, there is no change in the shear angle in zone C. Under the assumption that no slippage occurs, pure shear occurs in zone A. The shear angle in zone B corresponds to half of the shear angle in zone A [27]. These kinematic conditions need to be taken into account when calculating the shear stress–shear angle relationship. Analogous to the uniaxial tensile test, the same specimen geometry was used; the ratio of height to width is greater than 2 to enable pure shear [5]. A further approach to investigate the shear behaviour is the picture frame test. In this test, a trapezoidal-shaped sample is clamped in a frame. By applying a tensile load to the diagonally opposite corners, the frame moves from an initially square geometry into a lozenge. The sample within the frame undergoes pure shear with constant in-plane shear strain [28, 29]. In addition to the bias extension test, picture frame tests were carried out to expand the database. Both test methods were examined and compared, inter alia, in [4, 30] and led to comparable results. When investigating the shear properties of FRT, direct measurement techniques such as strain gauges cannot be used because they influence the deformation behaviour. Similarly, an evaluation based purely on theoretical calculations can lead to unreliable results, as it is based on assumptions about the deformation behaviour that are often generalized and inaccurate, especially for bias extension tests, where the deformation of the specimen is not uniform [31]. Therefore, an optical measurement system and Digital Image Correlation (DIC) are used in this study.
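For orientation, a minimal Python sketch of the idealized pin-jointed-net kinematics is given below; it relates the shear angle in zone A to the crosshead displacement for the specimen dimensions used here. Note that the study deliberately relies on DIC instead of such theoretical relations, precisely because the underlying assumptions can be inaccurate; the function name and the stated validity range are assumptions of this sketch.

```python
import numpy as np

def bias_extension_shear_angle(d, H=70.0, W=30.0):
    """Theoretical shear angle [rad] in the pure-shear zone A of a bias
    extension test, assuming ideal pin-jointed-net kinematics (inextensible
    fibres, no slippage). d: crosshead displacement [mm]; H, W: specimen
    height and width [mm]. Valid while the arccos argument stays <= 1."""
    D = H - W  # effective length of the sheared central region
    return np.pi / 2.0 - 2.0 * np.arccos((D + d) / (np.sqrt(2.0) * D))

# Example for the 70 mm x 30 mm specimen used in this study
print(np.degrees(bias_extension_shear_angle(10.0)))  # approx. 34 degrees
```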


Fig. 6 Experimental setup for hemispherical dome test

2.2 Forming Tests

For further analysis of the developed models under deep-drawing conditions, a model tool for a servo-hydraulic forming simulator was constructed. The tool design is illustrated in Fig. 6 and allows the investigation of the individual materials as well as the hybrid composite under deep drawing conditions. By means of the hemispherical punch design, the FRT material is specifically sheared. The use of a gas spring allows the pressure of the blank holder to be varied. In addition, the punch, die and blank holder can be specifically heated using heating cartridges.

2.3 Numerical Methods

For numerical modelling, the material is assumed to be a continuum. For this purpose, the properties are homogenized across the elements so that standard elements can be used in the FE simulation. The employed FE software Abaqus/Explicit works in the orthogonal Green–Naghdi (GN) frame, whereas the constitutive behaviour must be described in a non-orthogonal frame defined by the fibres. To calculate the current stress increments, the strains from the GN frame must be transformed into the fibre frame. The stresses are updated within the fibre frame and subsequently transformed back to the GN frame. An overview of the orientation of the GN frame and the fibre frame caused by fibre rotation is shown in Fig. 7. The calculation of the current fibre orientation is based on the work of [13], using the deformation gradient F and the rigid-body rotation matrix. First, the rotation matrix R is calculated from the deformation gradient F and the right stretch tensor U by polar decomposition. The rotation matrix R is used to update the GN frame, while the current fibre orientation is calculated using the deformation gradient F. Considering the geometric relationships, the transformation matrix Q used to calculate the strain increment in the non-orthogonal fibre frame can be defined as in Eq. 1.


Fig. 7 Orientation of GN-frame and fibre frame in the initial state and after deformation

$$Q = \begin{pmatrix} \cos\alpha & \sin\alpha \\ \cos(\alpha + \theta) & \sin(\alpha + \theta) \end{pmatrix} \qquad (1)$$
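As an illustration of this frame handling, the following Python sketch (a simplified 2D transcription, not the actual VUMAT code) computes R by polar decomposition of F, convects the fibre directions with F, and assembles the transformation matrix Q of Eq. 1 from the resulting angles α (between the rotated GN frame and the first fibre direction) and θ (between the two fibre directions). All names are illustrative.

```python
import numpy as np

def update_frames(F, f1_0, f2_0):
    """Update the GN frame and fibre directions for a 2x2 in-plane
    deformation gradient F; f1_0, f2_0 are the initial unit fibre vectors."""
    # Rotation R from the polar decomposition F = R @ U, here via SVD
    Us, _, Vt = np.linalg.svd(F)
    R = Us @ Vt
    # Fibre directions are convected with F and re-normalized
    f1 = F @ f1_0; f1 = f1 / np.linalg.norm(f1)
    f2 = F @ f2_0; f2 = f2 / np.linalg.norm(f2)
    # First basis vector of the rotated GN frame
    g1 = R @ np.array([1.0, 0.0])
    # alpha: angle from GN frame to 1st fibre; theta: angle between fibres
    alpha = np.arctan2(g1[0] * f1[1] - g1[1] * f1[0], g1 @ f1)
    theta = np.arccos(np.clip(f1 @ f2, -1.0, 1.0))
    # Transformation matrix of Eq. 1
    Q = np.array([[np.cos(alpha),         np.sin(alpha)],
                  [np.cos(alpha + theta), np.sin(alpha + theta)]])
    gamma = np.pi / 2.0 - theta  # fibre shear angle
    return R, Q, gamma

# Example: simple shear applied to an initially 0/90 degree fabric
F = np.array([[1.0, 0.3], [0.0, 1.0]])
R, Q, gamma = update_frames(F, np.array([1.0, 0.0]), np.array([0.0, 1.0]))
print(np.degrees(gamma))  # approx. 16.7 degrees
```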

According to the Hughes–Winget formulation, the stress increment within the fibre frame is calculated based on the updated constitutive tensor and the fibre strain increment. By multiplication with the transformation matrix, the stress tensor is subsequently transferred back to the orthonormal GN frame. The mathematical functions have been implemented in Abaqus/Explicit by means of a user-defined material routine VUMAT [32]. The nonlinear shear stress–shear angle behaviour is modelled by means of a combination of a linear function and a polynomial function of degree 5 (Eq. 2).

$$\tau_{12}(\gamma) = \frac{\gamma}{A} \; \ldots \qquad (2)$$

Here, τ12 describes the shear stress, γ the shear angle, and A, p1–p7 are temperature-dependent constants. The behaviour under tensile load in the fibre direction is described by means of a Ramberg–Osgood equation (Eq. 3), which can be used to describe the macroscopic nonlinear behaviour of FRT observed at higher strains. The stress σi is determined as defined in Eq. 3, whereby i denotes the two fibre directions 1 and 2, respectively.

$$\sigma_i(\varepsilon_i) = \frac{E\,\varepsilon_i}{\left[1 + \left(\frac{E\,\varepsilon_i}{\sigma_a}\right)^{n}\right]^{1/n}} \quad \text{with} \quad \sigma_a = \frac{E\,X_0\,\varepsilon_0}{\left[(E\,\varepsilon_0)^{n} - X_0^{\,n}\right]^{1/n}}, \quad i = 1, 2 \qquad (3)$$


Here, E describes the initial Young's modulus, ε0 the strain at failure onset and X0 the maximum strength. The shape parameter n is used to fit the material behaviour to the experimental data. Since both fibre directions show the same properties due to the symmetrical fabric structure, the same material parameters can be used for both fibre directions. In order to take the temperature dependency into account, the parameters were implemented in the routine as functions of temperature.
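A minimal Python sketch of the reconstructed Eq. 3 is given below; the parameter values are placeholders for illustration, not the constants identified in the study.

```python
import numpy as np

def ramberg_osgood_stress(eps, E, X0, eps0, n):
    """Fibre-direction stress according to the reconstructed Eq. 3.
    sigma_a is chosen such that sigma(eps0) = X0 (maximum strength);
    requires E * eps0 > X0."""
    sigma_a = E * X0 * eps0 / ((E * eps0) ** n - X0 ** n) ** (1.0 / n)
    return E * eps / (1.0 + (E * eps / sigma_a) ** n) ** (1.0 / n)

# Placeholder parameters for illustration only (not identified constants)
eps = np.linspace(0.0, 0.03, 7)
print(ramberg_osgood_stress(eps, E=20e3, X0=350.0, eps0=0.025, n=5.0))
```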

3 Results and Discussion

3.1 Material Characterization and Modelling


The simulation of deformation processes of metallic materials requires an exact and consistent description of the hardening behaviour as a function of process variables such as degree of deformation and temperature. The required flow curves are derived from the force–displacement data or stress–strain curves determined in the uniaxial tensile tests and integrated into the commercial FE software using analytical approaches. Technical stress–elongation curves for RT (a) and T = 300 °C (b) as well as for three extraction angles to the rolling direction RD (0°, 45°, 90°) are shown exemplarily in Fig. 8. The comparison of the curves for the different extraction angles shows only a minor influence of the rolling direction on the mechanical properties. For the further investigation of the deformation behaviour, the influence of the rolling direction is therefore neglected and isotropic material behaviour is assumed in the numerical simulation.
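As a reminder of the standard conversion underlying this step, a minimal Python sketch is given below; it turns a technical stress–elongation curve into a flow curve (true stress over plastic strain), valid only up to uniform elongation. The data points and the Young's modulus are placeholders, not measured values.

```python
import numpy as np

def flow_curve(eng_strain, eng_stress, E=210e3):
    """Convert technical stress [MPa] and elongation [-] into a flow curve
    (true stress over plastic strain), valid up to uniform elongation."""
    true_stress = eng_stress * (1.0 + eng_strain)   # force per current area
    true_strain = np.log(1.0 + eng_strain)          # logarithmic strain
    plastic_strain = true_strain - true_stress / E  # subtract elastic part
    mask = plastic_strain > 0.0                     # keep plastic branch only
    return plastic_strain[mask], true_stress[mask]

# Fictitious data points for illustration
eps = np.array([0.002, 0.05, 0.10, 0.15])
sig = np.array([280.0, 310.0, 330.0, 340.0])
print(flow_curve(eps, sig))
```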


Fig. 8 Technical stress—elongation curves for 3 rolling directions RD at RT (a) and 300 °C (b)


If not stated otherwise, the following results refer to the rolling direction RD = 90°. The comparison of the curves for RT and T = 300 °C shows a clear influence of temperature on the hardening and fracture behaviour. With increasing temperature, the maximum tensile strength rises from 325 MPa at RT to 375 MPa at 300 °C. At the higher temperature, there is a significantly stronger increase in stress with ongoing plastic deformation, and thus a stronger hardening of the material is achieved. The occurrence of the maximum tensile strength shifts towards lower strain values. At the same time, a decrease in elongation at fracture can be seen at increased temperature. The observed behaviour indicates a reduced formability, which is due to the effect of blue brittleness. A more detailed analysis of the effect is given in Fig. 9: Fig. 9a shows the determined length at fracture as a function of temperature and Fig. 9b the course of the tensile strength over the temperature. In particular, in the process-relevant temperature range between 150 and 300 °C, a decrease in the maximum strain at fracture with simultaneously increased strength can be observed. This decrease in the deformation potential resulting from blue brittleness must be taken into account in the process design. Furthermore, Fig. 8a shows a pronounced Lüders strain area at RT, in which the stress remains almost constant with increasing plastic strain. At a temperature of 300 °C, this effect no longer occurs, since the dislocation blockades dissolve as a result of increased diffusion at elevated temperatures (Fig. 8b). The evaluation of the tensile tests at different temperatures showed that with increasing temperature the Lüders strain decreases steadily and disappears at temperatures of 300 °C and above. This behaviour has to be considered in the subsequent modelling. The use of numerical simulation requires flow curves for the description of the plastic deformation behaviour, which have to be determined from the experimental stress–strain curves. However, due to the uniaxial stress state, only small uniform strains can be achieved in the tensile test.

Fig. 9 Length at fracture (a) and tensile strength over temperature (b)


Since significantly higher strains occur in the real forming process, the plastic flow behaviour must be extrapolated using suitable mathematical functions. For this purpose, various approaches such as Ghosh, Swift or Voce can be found in the literature [33]. Figure 10 shows an exemplary comparison of flow curves for DX51 calculated using different approaches as well as the experimentally determined flow curve at RT. It can be seen that, depending on the applied approach, clearly different flow curves result. For instance, the Voce and Hockett–Sherby approaches underestimate the yield stress at higher plastic strains, while the Ludwik approach overestimates it. The Ghosh approach, with a coefficient of determination of R² = 0.9944, shows the best agreement. This approach was therefore selected for the further investigations; it is defined in Eq. 4.

$$k_f = A\,(B + \varphi)^{C} - D \qquad (4)$$

Here, kf describes the yield stress, φ the degree of deformation, and A, B, C, D are material-specific constants. An overview of the material constants determined by fitting for the whole temperature range can be found in [3]. Since no strain hardening occurs in the area of the Lüders strain, this area was initially not considered in the parameterization of the extrapolation approaches. The increase in strength caused by blue brittleness and the influence of Lüders strain on the hardening behaviour must be taken into account for a detailed numerical simulation. However, the multiplicative or additive approaches often used in the field of sheet metal forming to consider the temperature influence, such as the Johnson–Cook, El Magd–Swift or Zerilli–Armstrong approaches, only allow the mapping of a monotonous strength decrease with increasing temperature.


Fig. 10 Comparison of different extrapolation approaches and the experimentally derived flow curve at RT


Fig. 11 Comparison of different approaches for the consideration of the temperature influence on the flow stress

This is also illustrated in Fig. 11, which shows, as an example, the temperature-dependent yield stress for a degree of deformation of φ = 0.5. In order to take the effects of temperature as well as Lüders strain into account, a user-defined hardening behaviour for the steel has been implemented into the FE system Abaqus 6.14 by means of the VUHARD subroutine [32]. Within this routine, the flow stress is calculated based on the Ghosh approach, parameterised by means of temperature-dependent experimental data. For a continuous description of the temperature influence, an interpolation scheme is used between the calculated base points. In order to take the delayed hardening due to Lüders strain into account, a modified strain value is calculated within the VUHARD routine: hardening of the material is only calculated when the plastic strain calculated by the FE solver exceeds the temperature-dependent value of the Lüders strain. The temperature dependency of the Lüders strain is defined by a piecewise linear function. A comparison between experimentally measured force–displacement curves of tensile tests and numerically calculated results using the VUHARD routine is given exemplarily in Fig. 12 for different test temperatures. The curves show a good qualitative agreement. Furthermore, a force–displacement curve for RT without consideration of Lüders strain has been computed; the comparison of this curve with the experimental data shows an overestimation of the material hardening. In addition, the curves show the influence of blue brittleness on the required forming forces. Starting from RT, a clear increase in the forming force can be seen. On the other hand, at a test temperature of 500 °C, the material is softened and the force drops below the level measured at RT.
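The following Python sketch mimics the described VUHARD logic under stated assumptions: Ghosh constants (Eq. 4) and the Lüders strain are tabulated at temperature base points, interpolated linearly in between, and hardening is delayed until the plastic strain exceeds the Lüders strain. All table entries are illustrative placeholders, not the constants identified in the study.

```python
import numpy as np

# Illustrative base points only (not the identified constants):
# temperature [degC] -> Ghosh constants (A, B, C, D) of Eq. 4, and Lueders strain [-]
GHOSH_TABLE = {20.0: (750.0, 0.01, 0.20, 100.0), 300.0: (850.0, 0.01, 0.18, 80.0)}
LUEDERS_TABLE = {20.0: 0.02, 300.0: 0.0}  # Lueders strain vanishes at 300 degC and above

def _interp(table, T, col=None):
    """Linear interpolation between tabulated base points over temperature."""
    Ts = sorted(table)
    ys = [table[t] if col is None else table[t][col] for t in Ts]
    return float(np.interp(T, Ts, ys))

def flow_stress(eps_p, T):
    """Flow stress k_f for plastic strain eps_p at temperature T [degC]."""
    A, B, C, D = (_interp(GHOSH_TABLE, T, j) for j in range(4))
    eps_L = _interp(LUEDERS_TABLE, T)
    # No hardening until the plastic strain exceeds the Lueders strain
    eps_eff = max(eps_p - eps_L, 0.0)
    return A * (B + eps_eff) ** C - D  # Ghosh approach, Eq. 4

print(flow_stress(0.10, 150.0))
```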


Fig. 12 Comparison of experimentally measured and numerically calculated force–displacement curves at RT, 250 and 500 °C

The simulation of the FRT forming process requires knowledge of the current, varying shear angle; furthermore, the strain in the fibres must be known. To investigate the nonlinear shear behaviour of the FRT, the shear stress as a function of the shear angle must be determined from the experimentally measured axial force profiles. The shear angle course can be calculated from the crosshead displacement using theoretical assumptions, or from the local displacements by digital image correlation, as done in this study. The material behaviour in the fibre direction is described by means of stress–strain relationships based on the experimental force–displacement curves recorded in uniaxial tensile tests. A comparison of experimentally obtained force–displacement curves for various temperatures and a test speed of 1 mm/s for uniaxial tensile tests with 0°/90° fibre direction (a) and bias extension tests with ±45° fibre direction (b) is illustrated in Fig. 13. Under shear load, the fibres start to rotate and the movement is influenced by friction between the fibre bundles and the matrix. After a specific locking angle is reached, further deformation leads to a sharp increase of the force as well as to out-of-plane wrinkling due to compaction of the fibre bundles. At temperatures above the melting temperature, which was determined by means of DSC measurements to be 220 °C, the forces decrease further. Here, the bias extension test showed early failure due to fibre pull-out. To supplement the shear stress–shear angle database, data from the picture frame test were added for these temperatures. The determination of the local shear angle distribution via the local displacements was carried out with the help of digital image correlation.



Fig. 13 Force–displacement curves obtained in uniaxial tensile test (a) and bias extension test (b) for various temperatures

For this purpose, the Corelli code developed by ENS Cachan was used and modified. Figure 14a shows, as an example, the calculated local shear angle distribution for the bias extension test at T = 210 °C shortly before sample failure. The different deformation zones described above occur, and in the middle of the sample an area with pure shear is formed. In order to determine the shear force from the experimentally measured axial force, the different deformation areas must be taken into account. The normalized shear force per unit length can be determined as defined in Eq. 5, taking into account the geometric properties of the bias extension test specimen [28]. Here, H and W are the specimen height of 70 mm and width of 30 mm.

$$F_{SH}(\gamma) = \frac{1}{(2H - 3W)\cos\gamma}\left[\left(\frac{H}{W} - 1\right) F_{ax}\left(\cos\frac{\gamma}{2} - \sin\frac{\gamma}{2}\right) - W\,F_{SH}\!\left(\frac{\gamma}{2}\right)\cos\frac{\gamma}{2}\right] \qquad (5)$$

Since the current value FSH(γ) depends on FSH(γ/2), an iterative calculation scheme was implemented in Matlab. The value FSH(γ/2) takes into account the amount of shear in regions B, where the shear angle is half of that in region A. The calculated shear stress–shear angle curves for various temperatures between RT and 250 °C are illustrated exemplarily in Fig. 14b.
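A sketch of this iterative scheme in Python (the study used Matlab) is given below: the recursion resolves the dependence of FSH(γ) on FSH(γ/2) down to small angles, where the shear force is taken as zero. The axial force Fax(γ) must be supplied as an interpolant of the measured data; the placeholder used here is fictitious.

```python
import numpy as np

def normalized_shear_force(gamma, F_ax, H=70.0, W=30.0, tol=1e-4):
    """Eq. 5: normalized shear force per unit length for the bias extension
    test. gamma: shear angle [rad]; F_ax: callable returning the measured
    axial force [N] as a function of gamma."""
    if gamma < tol:  # base of the recursion: negligible shear force
        return 0.0
    F_half = normalized_shear_force(gamma / 2.0, F_ax, H, W, tol)  # zone B term
    return (1.0 / ((2.0 * H - 3.0 * W) * np.cos(gamma))) * (
        (H / W - 1.0) * F_ax(gamma) * (np.cos(gamma / 2.0) - np.sin(gamma / 2.0))
        - W * F_half * np.cos(gamma / 2.0))

# Example with a fictitious axial-force interpolant (placeholder, in N)
F_ax = lambda g: 200.0 * g
print(normalized_shear_force(0.6, F_ax))  # N/mm
```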


With increasing temperature, the shear stress is significantly reduced and the curve shape changes. At high temperatures, the shear stress is negligible until the specific locking angle is reached, after which it increases sharply. Within the developed material model, the nonlinear dependence of shear stress and shear angle is described by means of Eq. 2, which was parameterized based on the experimental data. The material behaviour in the fibre direction is described by means of the Ramberg–Osgood equation (Eq. 3). The stress–strain curve determined in this way is illustrated exemplarily for RT in Fig. 15a and shows a good agreement with the experimental data. The comparison with the ideal elastic curve, represented by the dashed line, shows an overestimation of the stresses occurring at higher strains.

Fig. 14 Local shear angle distribution calculated by means of DIC at T = 210 °C (a) and shear stress–shear angle curves at various temperatures (b)

Fig. 15 Exemplary calculated stress–strain curve based on the Ramberg–Osgood equation and experimental data at RT (a) and stress–strain surface for the investigated temperature range (b)


A representation of the calculated stress–strain surface for the investigated temperature range is given in Fig. 15b. The comparison with the experimental data, represented by the blue stars, shows a good agreement. The verification of the implemented material model, especially regarding the correct mapping of fibre shear and reorientation, was performed by simulating the uniaxial tensile tests as well as the bias extension test. The discretization of the samples was performed by means of shell elements with reduced integration and an element edge length of 0.5 mm. To avoid numerical shear locking, the element edges were aligned with the initial fibre direction [34]. The meshing with correct element edge orientation was realized in Abaqus with a Python tool developed within this study. The size of the samples was chosen analogously to the experimental tests. The displacement was applied to the upper edge as in the experimental tests, and the temperature was defined as a predefined field. Since the experiments were carried out isothermally, heat transfer is neglected. Figure 16 shows, as an example, a comparison of experimental results and numerical simulation at T = 180 °C. The stress–strain curve determined using the Ramberg–Osgood equation from the experimental data corresponds well with the numerically calculated curve (Fig. 16a); the numerical results were evaluated for an element from the middle of the sample. The force–displacement curves presented in Fig. 16b also show a good agreement, and the reduced force increase at higher displacements is well represented.


Fig. 16 Comparison of experimental and numerical force–displacement curve (a) and stress–strain curve (b), exemplarily at T = 180 °C


Fig. 17 Comparison of the manually measured shear angle (a) with the local shear angle distribution calculated by means of DIC (b) and numerical simulation (c), exemplarily at T = 180 °C; comparison of the experimentally determined and numerically calculated shear angle–displacement curve (d)

Figure 17 shows a comparison of the manually measured shear angle (a), the local shear angle distribution determined by means of DIC (b) and the numerically calculated distribution (c) for the bias extension test at a temperature of T = 180 °C shortly before sample failure. The numerically calculated course of the shear angle over the applied axial displacement, illustrated in Fig. 17d, corresponds well with the values determined by DIC; the data shown in Fig. 17d were averaged over the marked ROI. A comparison of the numerically calculated force–displacement curves with the experimentally determined data was also carried out and showed a good agreement. Thus, the implemented model is able to accurately represent the strongly anisotropic material behaviour in dependence of the fibre reorientation and under consideration of the temperature influence.

3.2 Hemispherical Dome Tests

To investigate the forming behaviour of the steel sheet, rectangular blanks with a height and width of 200 mm were used. In the numerical simulation, the workpiece was modelled as elastic–plastic using the described VUHARD routine. The discretization was carried out with shell elements with reduced integration and an edge length of 2.0 mm. The tools were modelled as analytical rigid bodies. The blank holder force is applied using a load condition and the punch movement by means of a displacement boundary condition. Figure 18 compares exemplary experimental and numerical results for a low blank holder force of 5 bar (a) and a higher force of 10 bar (b) at RT. The tests were performed without the use of lubricants. Within the numerical simulation, the tangential contact behaviour was modelled by means of Coulomb friction with a friction coefficient of μ = 0.1, which is common for deep drawing processes. With a blank holder force of 5 bar, a drawing depth of 40 mm can be achieved. However, radial compressive stresses and the low blank holder force result in wrinkles in the flange area.


Fig. 18 Comparison of experiment and simulation of the hemispherical dome test with DX51 and a blank holder force of 5 bar (a) and 10 bar (b) at RT

The formation of wrinkles, the draw-in behaviour and the resulting part geometry are well reproduced by the numerical simulation. The formation of wrinkles can be suppressed by increasing the blank holder force up to 10 bar; however, the maximum drawing depth is then reduced to 32 mm, as at higher drawing depths component failure due to cracking occurs. Furthermore, a numerical analysis of the forming behaviour of the FRT was performed. Rectangular blanks with a length and width of 200 mm were assumed. Since the strength of the thermoplastic matrix at T = 250 °C is significantly lower than that of the steel sheet, the blank holder force was reduced to 0.2 bar. The mechanical behaviour of the FRT was described by the developed VUMAT material model. The discretization of the workpiece was carried out analogously to the steel sheet with shell elements with reduced integration and an edge length of 2.0 mm. Initially, isothermal conditions were assumed for the numerical simulation. Based on the literature, the friction coefficient was initially assumed to be μ = 0.25 [35, 36].


Fig. 19 Influence of initial fibre orientation ±45° (a) and 0°/90° (b) on the forming behaviour of FRT in hemispherical dome test


Fig. 20 Comparison of experiment and numerical simulation of the hemispherical dome test for the semi-finished sandwich compound

Figure 19 exemplarily shows the influence of different initial fibre orientations on the forming behaviour at T = 250 °C and a drawing depth of 40 mm. The contour plot shows the resulting shear angle as defined in Fig. 5. Since the forming behaviour is significantly influenced by the shearing of the fibres, the initial alignment has a significant influence on the process and in particular on the draw-in behaviour and the final component geometry; the resulting maximum shear angles are comparable. Since the fibres offer high strength and low plastic formability, the areas with fibres aligned orthogonally to the sheet sides are drawn in more strongly. For the sheet with an initial fibre orientation of ±45° (a) the corners are drawn in, and for the one with 0°/90° (b) the middle of the sides. This influences the final part geometry significantly and also points out the strongly anisotropic forming behaviour of the FRT. Between the drawn-in areas, the material is sheared, resulting in high shear angles of up to 0.66 rad (≈38°). In areas with high shearing, small wrinkles occur due to self-compaction of the fibres and the low transverse shear stiffness. Figure 20 shows an exemplary comparison between experiment and numerical simulation for the forming of a semi-finished sandwich compound consisting of DX51 cover layers and an FRT core. The mechanical behaviour was described by the user-defined routines VUHARD for the DX51 and VUMAT for the FRT, respectively. The temperature at the beginning of the experimental forming process was determined to be T = 220 °C; however, isothermal conditions were assumed in the numerical simulation. The blank holder force was increased to 20 bar for the forming of the sandwich composite. As a consequence of the lower formability of the steel at elevated temperatures due to blue brittleness, the drawing depth was reduced to 30 mm. After forming, cooling took place in the die under pressure to develop the adhesive bond between steel and FRT. The comparison of the final part geometry shows a good agreement between experiment and simulation (Fig. 20a). Although the blank holder force was increased, wrinkles occur in the steel sheets, as the soft FRT core layer impedes the application of the force. During the experimental tests, the upper cover plate failed due to cracking


even though the drawing depth was reduced. Since no damage model was implemented for the steel material, the component failure is not indicated in the numerical simulation. Figure 20b shows a comparison of the deformed FRT core layer; for this purpose, the adhesive connection between core and cover layers was separated manually. It can be seen that the material flow differs noticeably due to the significantly different properties. Due to the low plastic deformability of the fibres, the FRT is drawn in more strongly in the areas in which the fibres are oriented orthogonally to the side edges (see Fig. 19). The deformation behaviour of the FRT in the numerical simulation is in good qualitative agreement with the experiment. However, small deviations can be observed with respect to the draw-in behaviour. During the experimental tests, the material flow can be affected by insertion deviations of the workpiece. Furthermore, isothermal conditions and a constant friction coefficient were assumed in the numerical simulation, so the influence of non-uniform temperature distributions was not considered. Future work will therefore investigate the influence of heat conduction and of the friction properties in the contact zone between the FRT core and the steel cover plates during the forming process.

4 Summary and Outlook

Within this study, the single materials of a semi-finished hybrid compound consisting of DX51 and FRT were investigated with regard to a combined forming and joining process. Based on the experimental findings, appropriate material models were developed, parameterized and implemented into the FE tool Abaqus by means of user-defined routines. The complex and anisotropic material behaviour of the FRT is described by a continuum model approach considering fibre reorientation and temperature dependence. Furthermore, a model for the steel was presented which takes into account the effects of blue brittleness and Lüders strain on the hardening behaviour. The models were validated by comparison between experimental data and numerical results. In further work, the simulation model is to be extended with regard to temperature- and pressure-dependent contact modelling.

Acknowledgements This work is funded by the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG) through the International Research Training Group 1627 "Virtual Materials and Structures and their Validation".

References

1. Behrens, B. A., Vucetic, M., Neumann, A., Osiecki, T., & Grbic, N. (2015). Experimental test and FEA of a sheet metal forming process of composite material and steel foil in sandwich design using LS-DYNA. Key Engineering Materials, 651–653, 439–445.


2. Behrens, B.-A., Hübner, S., Grbic, N., Micke-Camuz, M., Werhane, T., & Neumann, A. N. (2017). Forming and joining of carbon-fiber-reinforced thermoplastics and sheet metal in one step. Procedia Engineering, 183, 227–232.
3. Wester, H., Chugreev, A., Moritz, J., & Behrens, B.-A. (2019). Experimental and numerical investigations on the material behaviour of fibre-reinforced plastics and steel for a multi-material compound production. In R. Schmitt & G. Schuh (Eds.), Advances in production research. Proceedings of the 8th Congress of the German Academic Association for Production Technology (WGP), Aachen, 19–20 Nov 2018. Cham, Switzerland: Springer.
4. Taha, I., Abdin, Y., & Ebeid, S. (2013). Comparison of picture frame and bias-extension tests for the characterization of shear behaviour in natural fibre woven fabrics. Fibers and Polymers, 14(2), 338–344.
5. Machado, M. (2015). Modelling and simulation of continuous fibre reinforced thermoplastic-matrix composites. Dissertation, Universität Linz, Linz.
6. Harrison, P., Gomes, R., & Curado-Correia, N. (2013). Press forming a 0/90 cross-ply advanced thermoplastic composite using the double-dome benchmark geometry. Composites Part A: Applied Science and Manufacturing, 54, 56–69.
7. Sharma, S., Potluri, P., & Porat, I. (1999). Moulding analysis of 3D woven composite preforms: Mapping algorithms. In Proceedings of the 12th International Conference on Composite Materials.
8. Hancock, S. G., & Potter, K. D. (2006). The use of kinematic drape modelling to inform the hand lay-up of complex composite components using woven reinforcements. Composites Part A: Applied Science and Manufacturing, 37(3), 413–422.
9. Gereke, T., Döbrich, O., Hübner, M., & Cherif, C. (2013). Experimental and computational composite textile reinforcement forming: A review. Composites Part A: Applied Science and Manufacturing, 46, 1–10.
10. Vanclooster, K., Lomov, S. V., & Verpoest, I. (2009). Experimental validation of forming simulations of fabric reinforced polymers using an unsymmetrical mould configuration. Composites Part A: Applied Science and Manufacturing, 40(4), 530–539.
11. Yu, W. R., Pourboghrat, F., Chung, K., Zampaloni, M., & Kang, T. J. (2002). Non-orthogonal constitutive equation for woven fabric reinforced thermoplastic composites. Composites Part A: Applied Science and Manufacturing, 33(8), 1095–1105.
12. Xue, P., Peng, X., & Cao, J. (2003). A non-orthogonal constitutive model for characterizing woven composites. Composites Part A: Applied Science and Manufacturing, 34(2), 183–193.
13. Peng, X. Q., & Cao, J. (2005). A continuum mechanics-based non-orthogonal constitutive model for woven composite fabrics. Composites Part A: Applied Science and Manufacturing, 36(6), 859–874.
14. Peng, X., & Rehman, Z. U. (2011). Textile composite double dome stamping simulation using a non-orthogonal constitutive model. Composites Science and Technology, 71(8), 1075–1081.
15. Boisse, P., Zouari, B., & Gasser, A. (2005). A mesoscopic approach for the simulation of woven fibre composite forming. Composites Science and Technology, 65(3–4), 429–436.
16. Wang, P., Hamila, N., & Boisse, P. (2013). Thermoforming simulation of multilayer composites with continuous fibres and thermoplastic matrix. Composites Part B: Engineering, 52, 127–136.
17. Wang, P., Hamila, N., Pineau, P., & Boisse, P. (2014). Thermomechanical analysis of thermoplastic composite prepregs using bias-extension test. Journal of Thermoplastic Composite Materials, 27(5), 679–698.
18. König, W., & Klocke, F. (2017). Fertigungsverfahren 4 (6. Auflage). Düsseldorf: VDI-Verlag; Berlin: Springer.
19. Lange, K. (Ed.). (1990). Blechbearbeitung (2nd ed.). Berlin: Springer.
20. Götze, T. (2015). Erweiterte Blechwerkstoffmodellierung im Rahmen eines gekoppelten Spritzgieß-Blechumformprozesses. Dissertation, Universität Hannover, Hannover.
21. Mazière, M., Luis, C., Marais, A., Forest, S., & Gaspérini, M. (2017). Experimental and numerical analysis of the Lüders phenomenon in simple shear. International Journal of Solids and Structures, 106–107, 305–314.
22. Cottrell, A. H., & Bilby, B. A. (1949). Dislocation theory of yielding and strain ageing of iron. Proceedings of the Physical Society, Section A, 62(1), 49–62.


23. Yoshida, F. (2000). A constitutive model of cyclic plasticity. International Journal of Plasticity, 16(3–4), 359–380.
24. van Rooyen, G. T. (1971). Basic factors which influence the Lüders strain during discontinuous yielding. Materials Science and Engineering, 7(1), 37–48.
25. Kyriakides, S., & Miller, J. E. (2000). On the propagation of Lüders bands in steel strips. Journal of Applied Mechanics, 67(4).
26. DIN EN ISO 6892-1:2017-02. Metallische Werkstoffe – Zugversuch – Teil 1: Prüfverfahren bei Raumtemperatur (ISO 6892-1:2016); Deutsche Fassung EN ISO 6892-1:2016.
27. Boisse, P., Hamila, N., Guzman-Maldonado, E., Madeo, A., Hivet, G., & dell'Isola, F. (2017). The bias-extension test for the analysis of in-plane shear properties of textile composite reinforcements and prepregs: A review. International Journal of Material Forming, 10(4), 473–492.
28. Cao, J., Akkerman, R., Boisse, P., Chen, J., Cheng, H. S., de Graaf, E. F., et al. (2008). Characterization of mechanical behavior of woven fabrics: Experimental methods and benchmark results. Composites Part A: Applied Science and Manufacturing, 39(6), 1037–1053.
29. Prodromou, A. G., & Chen, J. (1997). On the relationship between shear angle and wrinkling of textile composite preforms. Composites Part A: Applied Science and Manufacturing, 28(5), 491–503.
30. Lee, W., Padvoiskis, J., Cao, J., de Luycker, E., Boisse, P., Morestin, F., et al. (2008). Bias-extension of woven composite fabrics. International Journal of Material Forming, 1(S1), 895–898.
31. Pierce, R. S., Falzon, B. G., Thompson, M. C., & Boman, R. (2015). A low-cost digital image correlation technique for characterising the shear deformation of fabrics for draping studies. Strain, 51(3), 180–189.
32. SIMULIA Abaqus 6.14. (2014). Abaqus user subroutines reference guide.
33. Behrens, B.-A., Bonk, C., Grbic, N., & Vucetic, M. (2017). Numerical analysis of a deep drawing process with additional force transmission for an extension of the process limits. IOP Conference Series: Materials Science and Engineering, 179(1).
34. Yu, X., Cartwright, B., McGuckin, D., Ye, L., & Mai, Y.-W. (2006). Intra-ply shear locking in finite element analyses of woven fabric forming processes. Composites Part A: Applied Science and Manufacturing, 37(5), 790–803.
35. Khan, M. A., Mabrouki, T., Vidal-Sallé, E., & Boisse, P. (2010). Numerical and experimental analyses of woven composite reinforcement forming using a hypoelastic behaviour. Application to the double dome benchmark. Journal of Materials Processing Technology, 210(2), 378–388.
36. Badel, P., Gauthier, S., Vidal-Sallé, E., & Boisse, P. (2009). Rate constitutive equations for computational analyses of textile composite reinforcement mechanical behaviour during forming. Composites Part A: Applied Science and Manufacturing, 40(8), 997–1007.

The Representation of Fiber Misalignment Distributions in Numerical Modeling of Compressive Failure of Fiber Reinforced Polymers N. Safdar, B. Daum, R. Rolfes and O. Allix

Abstract This chapter introduces a methodology to systematically implement experimentally characterized, spatially varying fiber misalignment distributions into numerical models for failure surface analyses under in-plane loading conditions in the compressive domain. If the spectral density of the fiber misalignment has been characterized stochastically, by averaging over the measured data as an ensemble, the approach allows designers to enhance the efficient usage of Fiber Reinforced Polymers (FRPs) by utilizing the maximum capacity of the material with calculable reliability. In the present work, Fourier transform algorithms, generally used in signal processing theory, are employed to generate representative distributions of fiber misalignments. The generated distributions are then mapped onto a numerical model as fluctuations of the material orientations. Through Monte Carlo analyses, probability distributions of peak stresses are subsequently calculated. This information is then used to define a probabilistic failure surface.

1 Introduction

Fiber reinforced polymers (FRPs) are finding an ever increasing application in advanced structural components because of their exceptional material properties, such as high strength- and stiffness-to-weight ratios. Foremost fields of application for FRPs are mainly highly loaded components such as aircraft fuselage panels, fins and rudders, and wind turbine blades, among others [1, 2]. The main driving forces behind this shift in industry from metals towards composites are economic and environmental considerations. However, in spite of all the advantages offered by FRPs, their potential cannot be fully utilized at present due to a considerable spread in the material properties observed for FRPs. This in turn necessitates conservative designs. There are multiple causes for the variable nature of the material properties of this class of materials.

N. Safdar (B) · B. Daum · R. Rolfes Institute of Structural Analysis, Leibniz University Hannover, Hannover, Germany e-mail: [email protected] O. Allix ENS Paris-Saclay, Cachan, France © Springer Nature Switzerland AG 2020 P. Wriggers et al. (eds.), Virtual Design and Validation, Lecture Notes in Applied and Computational Mechanics 93, https://doi.org/10.1007/978-3-030-38156-1_8


Differences in chemical composition and flaws introduced through the manufacturing processes, in the form of fiber misalignments, voids, etc., are prominent sources of uncertainty among them [3–5]. Of all the directional strength properties, the strength under compression is susceptible to the highest spread [6–9]. Current standard design practice for composite-based engineering structures is to consider material properties as deterministic quantities; to ensure reliability, high factors of safety are used to guard against catastrophic failures. Hence, a quantification of the variation of the material strength characteristics is highly desirable for engineering design work, as it would allow a better utilization of the material potential. Uncertainty quantification in FRPs is, however, not a trivial task and has already received some attention by the research community [8, 10–12]. Further advances in the understanding of the variation of material characteristics of composites will eventually result in procedures to quantify the uncertainty of the mechanical response of structures based on such materials. Such a quantification can then be used to design more robust structures, with a well-defined level of reliability and with known failure probabilities. This applies not only to the initial design; the planning of maintenance can also benefit from knowing the weak points in the structure, based on spatial distributions of material properties known a priori. As a step towards these objectives, this chapter is concerned with the effect of the spatial distribution of fiber misalignments on the failure strength under compression and compression–shear loadings. The general procedure is discussed here only briefly; the reader is referred to [8, 13, 14] for further details. In particular, the present contribution is concerned with procedures and techniques to represent the fluctuation of misalignment as a spatial distribution in models used for numerical analyses.

1.1 Previous Work

Fiber misalignments (or imperfections) are small but unavoidable deviations of individual fibers (or a local patch of fibers) from their mean directions. Typically, they occur as a result of limitations in the manufacturing processes. Measurements have shown that the misalignments across a volume of FRP show a random but locally correlated spread, with minimum and maximum values controlled by the manufacturing process and the size of the component [4, 5, 15, 16]. Fiber misalignments have been identified, along with the matrix shear properties, as the most important characteristic for the compression strength of FRPs [17], and consequently limit the design. Experiments have shown that the compression strengths of FRPs are ca. 60% of their tensile strengths [12]. To include the effects of fiber misalignments on compression strength prediction in numerical models, a lot of work has been done over the past few decades with varying degrees of accuracy. In the following, a short overview of different approaches to tackle the problem is given.


There have been different approaches to predict the compressive failure of FRPs on analytical, numerical and experimental bases, both in terms of the mechanism of failure and in terms of failure strength prediction. Analytical approaches generally consider micromechanical aspects, i.e. mechanisms at ply level at a length scale of several fiber diameters. Evolving over time, analytical micromechanics considered microbuckling, i.e. small length scale fiber rotation of perfectly aligned fibers in an elastic matrix [18], microbuckling due to fiber misalignment [19] and nonlinear matrix shear stress τ12(γ) [20], and the effect of fiber bending stiffness on microbuckling [21], among others. The quintessence of the findings from analytical micromechanics can be compiled into (1). There, σc represents the compressive strength, τ12(γ) is the matrix shear stress as a function of the shear angle γ, φ0 is the linearized initial misalignment angle, and γ represents the linearized shear deformation angle.

σc = max_γ [ τ12(γ) / (φ0 + γ) ]    (1)
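As an illustration of how (1) can be evaluated in practice, the following sketch maximizes the ratio τ12(γ)/(φ0 + γ) numerically; the saturating shear law and all constants are assumptions chosen for demonstration, not values from this chapter.

```python
import numpy as np

# Assumed, illustrative nonlinear shear law tau_12(gamma); a saturating
# curve stands in for a measured matrix response.
G12, tau_y = 5.0e3, 70.0          # shear modulus and shear strength [MPa]

def tau12(gamma):
    return tau_y * np.tanh(G12 * gamma / tau_y)

phi0 = np.deg2rad(2.0)            # linearized initial misalignment angle
gamma = np.linspace(1e-5, 0.1, 10_000)
sigma_c = np.max(tau12(gamma) / (phi0 + gamma))   # Eq. (1)
print(f"compressive strength estimate: {sigma_c:.0f} MPa")
```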

In addition to analytical micromechanics, several numerical approaches at the micro scale have been put forward. These typically model fiber and matrix as separate constituents, adding different features of interest, such as the variation of the angle or amplitude of misalignment, the matrix shear behavior, etc. [22–26]. Some experimentally driven prediction strategies have also been proposed [6, 27, 28]. The theoretical assumptions for the prediction formulas in such data-driven approaches are taken from the micromechanical theories of well-founded analytical models. Although the aforementioned analytical, numerical and experimental approaches can give a detailed description of the failure mechanisms, the main limitation of all of them is the manner in which the most critical aspect, i.e. fiber misalignment, is considered. For instance, micro scale models consider the misalignment either as a single angle value, as a sinusoidal undulation with a constant amplitude in the transverse direction at a particular location (called an infinite band), or as a sinusoidal band with varying amplitudes across the transverse direction at a particular location (called a finite band). However, experimental observations of fiber misalignments have shown that although the misalignment is locally approximately sinusoidal in nature, it is spread over the whole spatial domain in a correlated manner with varying wavelengths and amplitudes [4, 5, 16, 29]. Hence, it is apparent that an accurate model needs to consider the misalignment distribution in some form if an accurate estimate of the effective strength at the length scale of a specimen or component is to be made. Following this line of thought, a number of numerical approaches were developed. Bednarcyk et al. [30] included the statistical distribution of misalignment in the material stiffness using a tensor of rotations, defined by probability density functions of the misalignment angles. Although this model considers the distribution of misalignment in the material description, it does not account for the spatial correlation. Allix et al. [31] modeled the distribution of misalignment in the spatial domain as a random variable with no correlation, while Sutcliffe [9] introduced correlation through artificial smoothing of the random angle parameter. Slaughter and Fleck [13], and later Liu et al. [8], modeled the spatial misalignment distribution following the procedure introduced


by Cebon and Newland [32, 33], in a Cosserat continuum based model of Fleck [21]. All the approaches considering the distribution of misalignment in the spatial domain show the promising prospect of an accurate prediction of the compressive failure probability distribution.

1.2 Current Work The approaches discussed thus far have focused on certain specific aspects of failure, such as the mechanism of failure, the kink band width, the effect of different nonlinearities, etc. The quantification of the variation in strength properties due to the variation in material characteristics using micromechanical and analytical models is lacking, and reliability statements for the failure strengths obtained with such predictive tools are absent. Consequently, the practical implications are rather limited for real-world engineering problems. Another aspect to consider is that micromechanical approaches are not suitable for engineering design, as the design process relies on ply-based homogenized descriptions. In this regard, homogenized modeling approaches are a viable option for research into failure strength probability and, consequently, for engineering applications. Although previous investigations based on homogenized models have produced strength probability distributions for uniaxial load cases, multiaxial load cases were not considered. For engineering simulations, not only axial loading but also combined load cases are of importance. Therefore, it is essential to predict the distribution of strength not only under axial compression but also under combined load cases such as compression-shear. In the current approach, the idea of correlated misalignment distribution generation proposed by Slaughter and Fleck [13] and Liu et al. [8] is detailed, with extensions to varying degrees of refinement and model sizes. The focus of this chapter is on modeling experimentally informed, realistic spatial distributions of fiber misalignments. Such a distribution can then be used in subsequent numerical analyses by mapping it to finite elements. The analyses on these numerical models are then performed not only under axial compression, but also under combined compression and in-plane shear loading, using a simple, commercially available material modeling description. The goal is to provide a methodology for the prediction of probabilistic failure surfaces for unidirectional FRPs, based on fiber misalignments gathered from measurements on physical specimens, so that the high factors of safety currently used in practice for FRP composites can be replaced by reliability based design practices.

2 Measurement Data Driven Model Generation As highlighted in the introduction, the nature of fiber misalignments in fiber reinforced composites is stochastic, with a certain spatial distribution and correlation length, rather than deterministically sinusoidal as assumed in most of the analytical and


numerical models tackling the problem of waviness modeling in such materials. It is thus imperative that a probabilistic study with the goal of finding the apparent compressive strength of the material include this naturally occurring material imperfection in the modeling framework. Herein it is detailed how such a procedure can be implemented in an accurate and efficient manner, preserving the inherent material characteristics of the misalignments. There have been different approaches to quantify the statistics of waviness, as discussed briefly in the introduction, such as a sinusoidal representation at a particular location in the numerical/analytical model with varying amplitudes and wavelengths, or a random spread of the misalignment angle over the spatial domain. The main drawback of these numerical and analytical approaches is the lack of representation of the spatial interaction between neighboring fibers and, hence, the omission of the inherently present correlation length of the fiber misalignment distribution. If the correlation property is ignored and misalignments are independently sampled at some arbitrary distance, e.g. as defined by the discretization used in the numerical model, a spurious dependence of the strength on the number of sampling points will result: more independent sampling points will generally lower the apparent strength. This is due to the circumstance that the apparent strength of a homogeneously loaded specimen is essentially controlled by its weakest, i.e. most misaligned, region. In actual specimens the misalignments of a pair of points are only independent if the points are separated by a certain distance, due to physical constraints like contact between fibers, etc. Hence, consideration of the correlation properties in the model generation is a prerequisite for a correct representation of the effect of specimen size on the apparent strength [34, 35].

2.1 Fiber Misalignment Distribution Generation from an Experimentally Characterized Spectral Density Basic Terminologies Representation of the fiber misalignments through the spectral representation method provides a procedure to define both the statistical distribution and the correlation properties of a random variable. The spectral representation method applies to stationary random fields. The term stationary implies that the spectral characteristics of the field are independent of the spatial interval in which they are measured/sampled, and hence can be regarded as a material characteristic. Assuming that the misalignment angle φ(y) (with y being the spatial coordinate) is a stationary random field, the spectral density S(ω) of the fiber misalignment angle φ(y) can be used to represent the random correlated field through the spectral representation method [8, 13]. Similar approaches have also been successfully employed in other engineering problems, such as surface generation for contact mechanics [32, 36] and sea surface generation for visual imagery or optical analysis [37, 38], among others.


Fourier analysis yields a representation of a function in terms of an infinite series of superposed harmonic functions of different wavelengths and amplitudes. The Fourier transform of the fiber misalignment angles is given by

Φ(ω) = (1/2π) ∫_{−∞}^{∞} φ(y) e^{−ιωy} dy    (2)

and the inverse transform by

φ(y) = ∫_{−∞}^{∞} Φ(ω) e^{ιωy} dω    (3)

For a fiber of infinite length, the autocorrelation function for fiber misalignments in the x-direction in the xy plane is defined as

R(τ) = ∫_{−∞}^{∞} φ(y) · φ(y + τ) dy    (4)

The autocorrelation function describes by how much the values of a function are correlated with themselves over a lag distance τ. Examples of autocorrelation functions for some standard functions are given in Fig. 1 for better understanding. In Fig. 1a–c, three basic functions are shown, i.e. a square wave, a sine wave, and random values from a normal distribution, respectively. In Fig. 1d–f, their corresponding autocorrelation functions are shown. The autocorrelations of infinite sine and square waves are infinite cosine and triangular waves with constant amplitudes. However, when the autocorrelation is calculated over a finite distance following (4), it slowly decays to zero. This reflects the fact that the function has zero values outside the given range (5π in these examples). For the random function, it can easily be seen that after a spike at zero lag the correlation becomes approximately zero over a very short lag distance. This implies that the points are independent of each other, which is true for random sampling. Additionally, it can be noted that the autocorrelation takes positive and negative values. Looking at the example of the sine wave, it is clear that any given point is perfectly correlated with itself. The correlation decreases with increasing lag distance, and when the original function changes sign the correlation becomes negative. The spectral density S(ω) is defined as the Fourier transform of the autocorrelation R(τ) of a function, mathematically

S(ω) = (1/2π) ∫_{−∞}^{∞} R(τ) e^{−ιωτ} dτ    (5)

Spectral density provides information about the amplitudes of the harmonic functions associated with different frequencies ω (or wavelengths λ) in that function. One can find different definitions of frequencies in the literature, in terms of either the angular frequency ω or the ordinary frequency f = ω/2π.
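The discrete counterpart of the autocorrelation used for Fig. 1 can be sketched as follows; the sine wave over 5π mirrors panels (b)/(e), and the normalization to R(0) = 1 is an implementation choice.

```python
import numpy as np

y = np.linspace(0.0, 5 * np.pi, 1000)
phi = np.sin(y)                          # example function, cf. Fig. 1b

# Discrete autocorrelation over all lags, normalized so that R(0) = 1
R = np.correlate(phi, phi, mode="full")
R = R[R.size // 2:] / R[R.size // 2]
# R oscillates like a cosine and decays towards zero with increasing
# lag, reflecting the finite window (5*pi), as in Fig. 1e.
```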

[Figure 1 here. Panels: (a) square wave function; (b) sine function; (c) normally distributed random values; (d) R(τ) of the square wave over 5π; (e) R(τ) of the sine wave over 5π; (f) R(τ) of the random function]

Fig. 1 Examples of some standard functions and their autocorrelation. The τ axis in the bottom-row plots represents the total number of discrete points over which the function has been sampled and corresponds to the lag distances

In the current implementation, the frequencies considered are angular, i.e. ω = 2π/λ: the number of waves contained in a length of 2π. The spectral density can also be related to the Fourier transform of the original function, Φ(ω), and its conjugate, Φ(ω)*, as [33]

S(ω) = (1/2π) Φ(ω)* Φ(ω)    (6)

This relationship is useful in discrete calculations, as will be explained under the next heading. The total area under the spectral density curve equals the mean square of the function, φ̄² (see (7)), and is a material characteristic quantity in the current representation [8, 13, 15].

φ̄² = ∫_{−∞}^{∞} S(ω) dω    (7)

Experimental Characterization of Spectral Density In experimental investigations, misalignment angle values are measured over a finite number of points on a specimen of finite volume. Hence, in practice, the spectral density can only be calculated approximately in a discrete manner. If the fibers are followed over a finite distance L and the misalignments are sampled a distance Δ apart from each other for a total of N times, one can write


φk = φ(yk)    (8)

where yk = kΔ (for k = 0, 1, …, N − 1) and Δ = L/N. The discrete Fourier transform of φk is given by

Φk = (1/N) Σ_{r=0}^{N−1} φr e^{−ι2πkr/N}    (9)

To avoid aliasing, which is an accuracy-eroding artifact [33], it is necessary to take the number of sampling points N > ωc L/π, where ωc is the maximum frequency component (minimum wavelength λmin) present in the fiber misalignment field φ(y). In discrete form, the spectral density can then be calculated by [33]

Sk = Sφ(ωk) = (NΔ/2π) |Φk|²    (10)

It is to be noted that the minimum wavelength that can be measured on a specimen depends on the distance Δ between measurement points, or alternatively on the number of measurement points N. For the spectral density of fiber misalignments to be a material characteristic, an ensemble-averaged value of the function has to be calculated by measuring misalignments experimentally in a statistical manner. Some experimental investigations to characterize the spectral density of fiber reinforced composites have been carried out in the literature. Although they somewhat lack a true representation of the spectral density curve because of an insufficient number of measurements for a statistical average, they still provide a useful basis for fitting a simple function to the curve obtained from measurements. Such a spectral density function makes it possible to go in the reverse direction, i.e. to generate correlated random distributions of fiber misalignments from the spectral density function. It is to be noted here that the exact form of the spectral density does not affect the prediction of the failure strengths much; essential is the mean square value of the spectral density curve, φ̄² [8]. An example of a measured spectral density curve from the literature is shown in Fig. 2.
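A minimal sketch of Eqs. (8)-(10) for a sampled misalignment trace could read as follows; the synthetic input stands in for measured angles, and the factor Δ in (10) follows the reconstruction above.

```python
import numpy as np

L, N = 5000.0, 1024                 # trace length [µm], sampling points
delta = L / N                       # sampling distance
rng = np.random.default_rng(0)
phi = rng.normal(0.0, 0.02, N)      # placeholder misalignment angles [rad]

Phi = np.fft.fft(phi) / N                        # Eq. (9), 1/N convention
S = N * delta / (2 * np.pi) * np.abs(Phi) ** 2   # Eq. (10)

# Discrete check of Eq. (7): sum(S)*dw equals the mean square of phi
dw = 2 * np.pi / L
assert np.isclose(np.sum(S) * dw, np.mean(phi ** 2))
```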

Fig. 2 Spectral density of fiber misalignments in the xy plane, calculated from the experimental results of Clarke et al. [15]


In Fig. 3, the different steps involved in the calculation of the spectral density from experiments, and the reverse process of generating fiber misalignment distributions, are clarified by showing the respective plots of an example calculation. To retrieve the discrete spectral density S(ωk) from a grid of measurement points φ(yk), one would follow the path Fig. 3c →e →d. Newland [33] showed that by following the path Fig. 3c →a, b →d, the spectral density function can also be obtained in an alternative manner. In this case, however, no direct information on the autocorrelation is extracted from the data.

Discrete Sampling of the Spectral Density Function Once a representative spectral density curve is obtained from experimental measurements and represented by a suitable function, the next step is the generation of spatial distributions of fiber misalignments. As the final goal is to obtain a probabilistic failure surface, a large number of representative spatial distributions have to be generated. This is where the problem gets tricky, as one needs to perform a discrete analysis on a continuous representation of the spectral density. The algorithm for this process is rather simple and is available in different sources in the literature with minor differences [8, 13, 32, 39]. The general steps on how to perform the inverse Fourier transform calculations on a discretely sampled set of spectral density values, to obtain real-valued functions with the desired characteristics, are readily available. Here only the main relations are listed for the sake of completeness. The discrete Fourier transform (DFT) of the intended fiber misalignment function, Φ(ω), is related to the discretely sampled spectral density as

Φk = √(2π Sk / (NΔ)) e^{ιθk}    (11)

where θk is the phase angle of the Fourier harmonics, sampled randomly from a uniform distribution over [0, 2π]. By random sampling of the phase angles θk, any number of different realizations can be generated with the material-dependent spectral quantities preserved. By performing the inverse Fourier transform on the Φ(ωk) from (11), the fiber misalignment distribution φ(yk) is obtained in discrete form as

φk = φ(yk) = Σ_{r=0}^{N−1} Φr e^{ι2πkr/N}    (12)

where k = 0, 1, …, N − 1. But the devil lies in the details of how the discrete sampling should take place for a continuous spectral density function to achieve its best possible representation. To understand the problem of discrete sampling in detail, and how changing the number of sample points and the length of the spatial domain affects the mean square value of the sampled spectral density function, we start with the 1D problem. The range of wavelengths which can be modeled depends on the model size and the number of sampling points N. N is equal to the mesh size of the subsequently generated numerical model.
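A sketch of the generation step, Eqs. (11)-(12), is given below. The Hermitian symmetry imposed on the random phases (the mirroring around N/2 discussed later) makes the inverse transform real valued; the spectral density function and its constants are passed in by the caller and are illustrative assumptions.

```python
import numpy as np

def generate_misalignment_1d(L, N, S_func, seed=None):
    """One realization of a correlated misalignment field phi(y_k)
    from a spectral density S(omega), following Eqs. (11)-(12)."""
    rng = np.random.default_rng(seed)
    delta = L / N
    omega = 2 * np.pi * np.fft.fftfreq(N, d=delta)
    S = S_func(np.abs(omega))
    S[0] = 0.0                          # suppress the infinite wavelength

    # Random phases, mirrored so that Phi[N-k] = conj(Phi[k])
    theta = rng.uniform(0.0, 2 * np.pi, N)
    theta[0] = 0.0
    if N % 2 == 0:
        theta[N // 2] = 0.0
    theta[N // 2 + 1:] = -theta[1:(N + 1) // 2][::-1]

    Phi = np.sqrt(2 * np.pi * S / (N * delta)) * np.exp(1j * theta)  # Eq. (11)
    phi = N * np.fft.ifft(Phi)          # Eq. (12), sum convention
    return phi.real                     # imaginary part is round-off only

# Exponential spectral density, cf. Eq. (14), in 1D; S0 is an assumed
# amplitude, omega_c matches the cut-off used later in the chapter.
S0, omega_c = 2.0e-3, 0.01256
phi = generate_misalignment_1d(
    5000.0, 1024, lambda w: S0 * np.exp(-(w / omega_c) ** 2), seed=1)
```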


Fig. 3 Representation of the discrete calculation/measurement of the fiber misalignment generation in 2D. a shows the real part of the Fourier transform of the fiber misalignment angles Φ_{k,l}, while b shows the imaginary part. c represents a discrete calculation/measurement of φ_{k,l}. d is the 2D representation of the exponential spectral density function. e represents the autocorrelation of φ_{k,l}. This schematic is for a specimen/model with square dimensions and an edge length of 4 mm, sampled every 80 µm. By definition, the calculation of the spectral density should follow the path c →e →d, but due to discrete measurements in experiments it follows the alternate path c →a, b →d [33]. Similarly, the generation of misalignment distributions from a given spectral density function follows the reverse path, i.e. d →a, b →c


The minimum frequency ωf, also called the fundamental frequency, corresponds to the maximum wavelength λmax which can be modeled for the particular spatial dimension L. The maximum frequency, also known as the Nyquist frequency ωN = Nπ/L, corresponds to the shortest possible wavelength with the selected number of sampling points N. The sampling distance Δω in the frequency domain between two neighboring sampled points of the spectral density curve is controlled by the model dimensions and is equal to ωf. There are additional physical constraints on the wavelengths being modeled which must be enforced considering the problem at hand. An example calculation of frequency values for a sample size of 5000 µm and different discretization values is shown in Table 1. The row with the number of sampling points N = 100 corresponds to the experimental characterization over the same length and number of sampling points done by Clarke et al. [15, 29]. Unidirectional FRPs have additional physical constraints which must be enforced, i.e. a shortest physically possible wavelength and the absence of an infinite wavelength. The latter constraint is achieved by setting the spectral density corresponding to ω = 0 equal to zero. While sampling in a discrete manner, this is done by simply putting the first value S(0, 0) of the array equal to 0. The former is enforced by defining a maximum cut-off frequency ωc, corresponding to a minimum wavelength λmin present in the material, above which the values of the spectral density are either exactly or approximately equal to 0, depending on the type of spectral density function

Table 1 Sample calculations of different frequency values for an edge length of 5000 µm and different numbers of sampling points (frequencies in µm⁻¹, wavelengths in µm). The row with N = 100 sampling points corresponds to the experiments of Clarke et al. [15, 29]

N | Fundamental frequency ωf = 2π/L | Nyquist frequency ωN = Nπ/L = Nωf/2 | Frequency interval Δω = (ωN − ωf)/(N/2 − 1) | Frequency range ωR = ωN − ωf | Minimum wavelength λmin = 2π/ωN
16 | 0.001256637 | 0.010053096 | 0.001256637 | 0.008796459 | 625
32 | 0.001256637 | 0.020106193 | 0.001256637 | 0.018849556 | 312.5
64 | 0.001256637 | 0.040212386 | 0.001256637 | 0.038955749 | 156.25
100 | 0.001256637 | 0.062831853 | 0.001256637 | 0.061575216 | 100
128 | 0.001256637 | 0.080424772 | 0.001256637 | 0.079168135 | 78.125
256 | 0.001256637 | 0.160849544 | 0.001256637 | 0.159592907 | 39.0625
512 | 0.001256637 | 0.321699088 | 0.001256637 | 0.320442451 | 19.53125
1024 | 0.001256637 | 0.643398175 | 0.001256637 | 0.642141538 | 9.765625
2048 | 0.001256637 | 1.286796351 | 0.001256637 | 1.285539714 | 4.8828125
4096 | 0.001256637 | 2.573592702 | 0.001256637 | 2.572336065 | 2.44140625
8192 | 0.001256637 | 5.147185404 | 0.001256637 | 5.145928767 | 1.220703125
16384 | 0.001256637 | 10.29437081 | 0.001256637 | 10.29311417 | 0.610351563
32768 | 0.001256637 | 20.58874161 | 0.001256637 | 20.58748498 | 0.305175781
65536 | 0.001256637 | 41.17748323 | 0.001256637 | 41.17622659 | 0.152587891
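The entries of Table 1 follow directly from the definitions in the column headers; a quick check for the same L = 5000 µm reads:

```python
import numpy as np

L = 5000.0                             # edge length [µm]
for N in (16, 100, 1024):
    omega_f = 2 * np.pi / L            # fundamental frequency
    omega_N = N * np.pi / L            # Nyquist frequency
    lam_min = 2 * np.pi / omega_N      # minimum wavelength
    print(N, omega_f, omega_N, omega_N - omega_f, lam_min)
```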


chosen. Because of the symmetric nature of the spectral density, which stems from the Fourier analysis involved, if no cut-off frequency ωc is considered, the whole range of values is sampled from 0 to N/2 − 1 and mirrored around N/2 in the discrete calculations, resulting in a minimum wavelength corresponding to the Nyquist frequency. Returning to the problem of realistic model generation for the fiber misalignment: in the current algorithm, following [8], two simple functions for the spectral density S(ω) are considered. The square function (13) represents an equal contribution to the spectral density of all wavelengths within the range of frequencies sampled. The exponential function (14) represents a distribution of spectral density content skewed towards shorter frequencies; the constant S0 is called the initial spectral density.

S(ωx, ωy) = S0  if |ωx| ≤ ωc and |ωy| ≤ ωc;  S(ωx, ωy) = 0  if |ωx| > ωc and |ωy| > ωc    (13)

S(ωx, ωy) = S0 e^{−((ωx/ωxc)² + (ωy/ωyc)²)}    (14)

If a cut-off is to be considered due to the physical constraints, it is quite simple to implement in the case of the exponential function. It is to be noted that for the exponential function the cut-off occurs at a frequency where the spectral density is almost, but not exactly, zero. Hence, one directly defines the value of the cut-off frequency corresponding to a minimum wavelength value, e.g. ωc = 0.01256 µm⁻¹ corresponds to λmin ≈ 500 µm. On the other hand, for the case of the square, i.e. constant, function, one has to define a cut-off number c in relation to the fundamental frequency ωf and the corresponding sampling distance Δω, up to which the values of the spectral density are sampled and above which they become zero. The simple relation between them is ωc = c · Δω = c · ωf. To check the quality of the acquired results, the following test can be performed. The test relates to a physical characteristic of the problem: the mean square value of the spectral density curve, φ̄², needs to be constant, since it represents a material characteristic, as has been argued earlier. One can simply calculate the discrete counterpart of φ̄² by summing up the values of the discretely sampled S(ωk), multiplying by Δω, and checking whether the result falls within the desired tolerance when compared to the same quantity calculated from the experimentally measured curve, considering numerical round-off errors.
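A sketch of this discrete 2D sampling and of the quality check could read as follows; S0 is an assumed amplitude, and the cut-off matches the exponential case discussed above.

```python
import numpy as np

L, N = 5000.0, 1024
delta = L / N
w = 2 * np.pi * np.fft.fftfreq(N, d=delta)
WX, WY = np.meshgrid(w, w, indexing="ij")

S0, wc = 2.0e-3, 0.01256
S = S0 * np.exp(-((WX / wc) ** 2 + (WY / wc) ** 2))   # Eq. (14)
S[0, 0] = 0.0                  # no infinite wavelength, cf. S(0, 0) = 0

# Quality check: discrete counterpart of the mean square, one factor
# of d_omega per spatial direction in 2D
dw = 2 * np.pi / L
print(f"mean square of sampled field: {np.sum(S) * dw**2:.3e}")
```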

2.2 Examples of Generated Distributions It is to be noted that for the figures in this subsection, Figs. 4a, b, 5a, b, 6a, b, and 7a, b, the seed of the random number generator is fixed in Matlab to allow direct comparisons and to isolate the effects of the deliberately varied parameters. In order to show the capabilities of the spectral representation method, a few representative calculations are shown with a brief discussion.

[Figure 4 here. Panels: (a) 1024 sample points; (b) 2048 sample points]

Fig. 4 Histograms of misalignment angles for 1D calculations, showing the sampling-point independence of the probability density functions of the generated misalignment angle values

[Figure 5 here. Panels: autocorrelation R(τ) versus lag τ for the two samplings of Fig. 4]

Fig. 5 Autocorrelation function of the 1D generated fiber misalignments of Fig. 4

In Fig. 4, histograms of generated fiber misalignment angles are shown for a 1D model using an exponential function of the form (14) with ωc = 0.01256 µm⁻¹, corresponding to a minimum wavelength of λmin ≈ 500 µm, for a length of 5000 µm. The calculations shown in Fig. 4 correspond to a 1D implementation, and can be used in a 1D representative model as done by Slaughter and Fleck [13]. In Fig. 4a, the number of sampling points is N = 1024, compared to N = 2048 in Fig. 4b. It can be seen that the mean square value φ̄² for both examples is exactly the same. The corresponding autocorrelation functions R(τ) of the generated misalignment angles φ are calculated using Matlab's built-in functions, and are plotted in Fig. 5. It is readily apparent from these plots that the correlation quickly decreases as the lag distance τ between corresponding values increases. Correspondingly, the correlation length can be estimated from these plots of the autocorrelation of the fiber misalignments. Since the generated wave pattern is a combination of sinusoidal waves with wavelengths ranging from λmin ≈ 500 µm to λmax ≈ 5000 µm and only a single


realization out of an ensemble is shown, the autocorrelation plotted in these figures from the corresponding fiber misalignment distribution values is still present even at considerable lag distances. If the autocorrelation of a statistical ensemble were plotted by averaging over a large number of calculations, it would not show correlation beyond the cut-off, as can be seen in the experimental curve shown in Fig. 2. Similarly, the results of a 2D implementation are also shown. Here the comparison is done for different model dimensions. For this purpose two models of square dimensions with edge lengths of 2500 and 5000 µm are chosen. The number of sampling points for each edge length is kept constant at N = 1024 to isolate the effect of changing the model dimension. The same form of exponential function (see (14)) for the spectral density as in the 1D example, with the same cut-off frequency ωc = 0.01256 µm⁻¹ and corresponding minimum wavelength λmin ≈ 500 µm for each spatial direction, is selected. The resulting contour plots of generated fiber misalignment angles for the 2D case are shown in Fig. 6. The mean square values of the spectral density function, given by the area under the spectral density curve, φ̄², agree for both realizations up to the 7th decimal. For different model dimensions, the mean square value is not exactly equal, as the larger model also samples smaller frequencies (larger wavelengths). For the purposes of the current numerical models this difference is quite negligible, as shown by the calculated mean square values, because the model dimensions are not varied by orders of magnitude. A visual plausibility test in comparing the generated distributions is the number of misaligned regions in each direction in the contour plots. By doubling the edge length in each direction, the number of misaligned regions has doubled, as expected. The correlation length does not depend on the model size but on the underlying physical constraints such as the manufacturing process. One can see that the correlation length is the same in both models; it is in fact controlled by the maximum frequency, i.e. the cut-off frequency ωc (minimum wavelength λmin).

Fig. 6 Contours of misalignment angles for the 2D case. Here the number of sampling points is kept constant for both cases, but the edge length has been doubled. It can be seen that the correlation length, which depends on the base function (14), is the same for both models. Additionally, the mean square of the sampled discrete spectral density is the same in both cases, i.e. φ̄² = 2.106e−04


Fig. 7 Autocorrelation function of the 2D generated fiber misalignments shown in Fig. 6. Since the number of sampling points is the same for each spatial direction, a lag of 2Δτ in Fig. 7a is equal to 1Δτ in Fig. 7b. Hence, the autocorrelation is the same for both models, as the base function is the same

This shows that the method is able to model the misalignment distribution in an accurate manner for different model dimensions. The corresponding autocorrelation functions R(τx, τy) of the generated misalignment angles φ are plotted in Fig. 7. One has to note that the lag distance τ in Fig. 7a is half the length of that in Fig. 7b, since the same number of sampling points is chosen for the different model dimensions. The autocorrelation for both models shows the same behavior and also confirms that the correlation lengths in the models are equal.

2.3 Mapping of the Fiber Misalignment Distribution to the Numerical Model Microbuckling failure involves material nonlinearities well before the peak load is reached. It is thus imperative to represent the material nonlinearity in an accurate manner. Additionally, due to the directional behavior of FRPs, anisotropy has to be taken into account. The anisotropic elasto-plastic response is modeled using a homogenized description. The elastic part of the material response is based on Voigt micromechanical theory, in which fiber and matrix are modeled as a single material. The elastic material properties are taken from Kaddour et al. [12]. The nonlinear plastic part of the material behavior is modeled using a modified form of Hill's plasticity formulation. Although this plasticity model was originally intended for metal plasticity, it can be effectively utilized in the current case by suppressing yielding in the fiber direction for in-plane loading conditions. The active part of the yield function can be represented in the form

2f(σ) = (σ22/Y)² − σ11σ22/Y² + (σ12/S)²    (15)


where Y and S are the transverse and in-plane shear yield strengths, respectively. Nonlinear isotropic hardening is specified based on the shear hardening curve from Vogler et al. [40], and is subsequently mapped by Hill's constants to the other stress states. It is to be noted that the currently proposed model generation approach is not limited to the material model considered herein. For similar classes of problems, more suitable material models can also be employed. However, for practical engineering applications the proposed material model is considered sufficient, considering both the accuracy and the availability of solution tools and material characterization data. In order to demonstrate the capabilities of the techniques discussed in Sect. 2.1, a simple finite element method (FEM) based Monte Carlo analysis is presented. The approach presented in this section follows Safdar et al. [14]. The finite element simulations are carried out using the commercial software Abaqus [41]. Two-dimensional plane stress elements with reduced integration are employed. The model is discretized with an element size of approximately 6 times the fiber diameter. The boundary and loading conditions are shown in Fig. 8. The left edge is constrained in the axial direction and the bottom left corner is constrained in the transverse direction to suppress rigid body motions. The right edge is coupled to a reference node using a kinematic coupling. Loads in the form of concentrated forces F11 and F12, producing axial compression and in-plane shear, are applied at the reference node. Different combinations of compressive and in-plane shear loads are investigated, see Fig. 8. Peak stresses are calculated as ratios of the corresponding maximum applied load to the initial cross-sectional area A, as σ̄11 = F11/A and σ̄12 = F12/A. A geometrically nonlinear implicit solution is carried out. As snap-back is expected in this type of failure, the arc-length method (Riks' algorithm) is used to track the global force-displacement response of the model. As the focus of the current study is only on the peak load carrying capacity, the analyses are terminated after the peak load is reached.

Fig. 8 Schematic representation of the 2D numerical model, representative of a unidirectional FRP laminate, shown along with the loading and boundary conditions. The left edge is constrained in the axial direction, and the bottom corner is constrained in the transverse direction to suppress rigid body motions. Force loads corresponding to the respective load case are applied at a reference node connected to the right edge of the model through kinematic couplings. The fiber direction is along the global X-axis


The generated distributions of fiber misalignment from Sect. 2.1 are mapped directly to the integration points of each finite element in the numerical model. In this way an easy and fairly simple to implement one-to-one correspondence between the discrete misalignment distribution and the finite element mesh is achieved. As elaborated in Sect. 1.1, the compressive failure predominantly depends on the fiber misalignments [17], which are in fact correlated random fields at this scale [16]. Hence, the need to perform a stochastic analysis is justified. Using the presented approach, it is possible to generate numerical models with faithful fiber misalignment distributions. These models can then be used to gather statistics of the probability distributions of failure loads and, consequently, a probabilistic definition of the failure surface in the compressive domain, in a cost-efficient manner and without having to test a large series of specimens experimentally.
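One way to realize this one-to-one mapping is to write the generated angle grid as an element-wise distribution for the FE input deck; the file layout and the row-major element numbering below are assumptions for illustration, not the chapter's actual implementation.

```python
import numpy as np

phi = np.load("phi_field.npy")   # hypothetical (ny, nx) grid of angles [rad]
ny, nx = phi.shape
with open("misalignment_dist.inc", "w") as f:
    # Hypothetical Abaqus-style element distribution of fiber angles
    f.write("*Distribution, name=FIBER_ANGLE, location=element, "
            "table=ANGLE_TABLE\n")
    f.write(", 0.\n")                         # default value line
    for j in range(ny):
        for i in range(nx):
            elem = j * nx + i + 1             # assumed element numbering
            f.write(f"{elem}, {np.degrees(phi[j, i]):.6f}\n")
```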

3 Numerical Example Results To highlight the advantages of adding imperfection distributions in a realistic manner to numerical models for FRPs, some selected results are shown in the following. After finalizing an appropriate mesh and representative volume element (RVE), the model is simulated under axial compression and compression-shear loads in a probabilistic manner. A fixed convergence criterion for the Monte Carlo simulation is used, with a 1% tolerance on the mean and 1.5% on the standard deviation of the peak load after every n realizations. The resulting distributions for each load case are plotted in Fig. 9. In Fig. 9a, which shows the probabilistic failure surface, the short black lines in each loading direction represent the spread of the resulting peak stress values. The mean of each line (yellow dot) and 1 standard deviation (blue line with green and red limit dots) are also highlighted on top. As the trend of the mean values is linear, a linear fitting equation for the mean of the failure surface is proposed (16). Similarly, in Fig. 9b the corresponding standard deviation values of each load case are plotted, along with a proposed quadratic fitting equation (17). It can readily be observed from Fig. 9b that the spread of the resulting peak stress values is maximum under axial compression, and almost vanishes under in-plane shear loading. This observation is in line with the micromechanical theory of microbuckling, where the compressive failure depends on the variation in fiber misalignments, but the strength under pure shear does not.

F_mean = c + f11 σ̄11 + f12 σ̄12    (16)

F_std = c + f11 σ̄11² + f22 σ̄11 + σ̄12    (17)
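The convergence criterion of the Monte Carlo campaign can be sketched as a simple running check; run_realization() is a hypothetical stand-in for one misalignment generation plus FE analysis, and the batch size is an assumption.

```python
import numpy as np

def monte_carlo(run_realization, n_batch=10, tol_mean=0.01, tol_std=0.015):
    # Start with two batches so that old/new statistics can be compared
    peaks = [run_realization() for _ in range(2 * n_batch)]
    while True:
        m_old, s_old = np.mean(peaks[:-n_batch]), np.std(peaks[:-n_batch])
        m_new, s_new = np.mean(peaks), np.std(peaks)
        if (abs(m_new - m_old) <= tol_mean * abs(m_new)
                and abs(s_new - s_old) <= tol_std * s_new):
            return np.asarray(peaks)          # statistics have stabilized
        peaks += [run_realization() for _ in range(n_batch)]
```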

The contour constituted by the mean of the failure surface is similar to the experimental observations in the σ̄11-σ̄12 stress plane by Vogler et al. [42]. It is symmetric about the σ̄11 axis; hence, only one side is plotted. Basu et al. [43] used an analytical model to predict a deterministic failure surface.


[Figure 9 here. Panels: (a) in-plane probabilistic failure surface, σ̄12 [MPa] versus σ̄11 [MPa]; (b) standard deviations]

Fig. 9 In-plane probabilistic failure surface and corresponding standard deviations in σ̄11-σ̄12 space as a result of Monte Carlo simulations on a 2D model containing correlated random misalignment distributions. The surface shows that the maximum variation in strength occurs under axial compression and fades as one moves away from pure axial compression in stress space

Table 2 Fitting parameters for the probabilistic failure surface

Function | f11 | f12 resp. f22 | c
Mean curve of the failure surface in stress space (16) | 1640 | 70 | 1
Corresponding standard deviation curve (17) | 1.945e−04 | 2.0813e−03 | −1.794

Their result shows an unsymmetrical surface, where the shear load has a pronounced effect on strength when it acts in the direction of the assumed sinusoidal misalignment. Since in reality the misalignment is distributed and unbiased, a symmetric form of the failure surface, on which positive and negative shear loading have an equal effect on the peak stress, is expected, as shown by the current results (Table 2).

4 Concluding Remarks In the current chapter, the effects of including experimentally characterized fiber misalignments in numerical models to predict in-plane probabilistic failure surfaces are presented. A detailed description of autocorrelation and spectral density functions is given, with some basic examples to aid understanding. Following the introduction of the concept, a detailed methodology for using an experimentally characterized spectral density to generate representative correlated random fiber misalignment distributions is presented. Problems with discrete sampling and changing model size are investigated, and it is shown that the methodology is independent of model size and sampling refinement, within certain physical and mathematical bounds.


Following this description, the numerical model and the material model are briefly discussed. The analyses are performed in a fully nonlinear material and geometrical setting. By performing simulations under compression and compression-shear loading, the resulting peak load carrying capacities of the model are used to define a probabilistic failure surface. Simple forms of fitting equations for the mean and the standard deviation of the failure surface are proposed. Using an updated form of the numerically calculated failure surface, design engineers can maximize the load carrying capacity of FRP-based structures with known reliability.

References
1. Soutis, C. (2005). Fibre reinforced composites in aircraft construction. Progress in Aerospace Sciences, 41(2), 143–151.
2. Boeing. 787 dreamliner by design: Advanced composites use. https://www.boeing.com/commercial/787/by-design/advanced-composite-use. Accessed June 21, 2019.
3. Liu, L., Zhang, B.-M., Wang, D.-F., & Wu, Z.-J. (2006). Effects of cure cycles on void content and mechanical properties of composite laminates. Composite Structures, 73(3), 303–309.
4. Yurgartis, S. W. (1987). Measurement of small angle fiber misalignments in continuous fiber composites. Composites Science and Technology, 30(4), 279–293.
5. Paluch, B. (1996). Analysis of geometric imperfections affecting the fibers in unidirectional composites. Journal of Composite Materials, 30(4), 454–485.
6. Jelf, P. M., & Fleck, N. A. (1992). Compression failure mechanisms in unidirectional composites. Journal of Composite Materials, 26(18), 2706–2726.
7. Wisnom, M. R. (1991). Relationship between strength variability and size effect in unidirectional carbon fibre/epoxy. Composites, 22(1), 47–52.
8. Liu, D., Fleck, N. A., & Sutcliffe, M. P. F. (2004). Compressive strength of fibre composites with random fibre waviness. Journal of the Mechanics and Physics of Solids, 52(7), 1481–1505.
9. Sutcliffe, M. P. F. (2013). Modelling the effect of size on compressive strength of fibre composites with random waviness. Composites Science and Technology, 88(Supplement C), 142–150.
10. Lekou, D. J., & Philippidis, T. P. (2008). Mechanical property variability in FRP laminates and its effect on failure prediction. Composites Part B: Engineering, 39(7), 1247–1256.
11. Whiteside, M. B., Pinho, S. T., & Robinson, P. (2012). Stochastic failure modelling of unidirectional composite ply failure. Reliability Engineering and System Safety, 108, 1–9.
12. Kaddour, A. S., & Hinton, M. J. (2012). Input data for test cases used in benchmarking triaxial failure theories of composites. Journal of Composite Materials, 46(19–20), 2295–2312.
13. Slaughter, W. S., & Fleck, N. A. (1994). Microbuckling of fiber composites with random initial fiber waviness. Journal of the Mechanics and Physics of Solids, 42(11), 1743–1766.
14. Safdar, N., Daum, B., & Rolfes, R. (2018). Stochastic compressive failure surface modelling for the unidirectional fibre reinforced composites under plane stress. In 6th European Conference on Computational Mechanics, Glasgow.
15. Clarke, A. R., Archenhold, G., Davidson, N. C., Slaughter, W. S., & Fleck, N. A. (1995). Determining the power spectral density of the waviness of unidirectional glass fibres in polymer composites. Applied Composite Materials, 2(4), 233–243.
16. Sutcliffe, M. P. F., Lemanski, S. L., & Scott, A. E. (2012). Measurement of fibre waviness in industrial composite components. Composites Science and Technology, 72(16), 2016–2023.
17. Budiansky, B., & Fleck, N. A. (1993). Compressive failure of fibre composites. Journal of the Mechanics and Physics of Solids, 41(1), 183–211.
18. Rosen, B. W. (1965). Mechanics of composite strengthening.
19. Argon, A. S. (1972). Fracture of composites. Treatise on Materials Science and Technology, 1, 79–114.


20. Budiansky, B. (1983). Micromechanics. Computers and Structures, 16(1), 3–12.
21. Fleck, N. A., & John, Y. S. (1995). Microbuckle initiation in fibre composites: A finite element study. Journal of the Mechanics and Physics of Solids, 43(12), 1887–1918.
22. Kyriakides, S., Arseculeratne, R., Perry, E. J., & Liechti, K. M. (1995). On the compressive failure of fiber reinforced composites. International Journal of Solids and Structures, 32(6), 689–738. Time Dependent Problems in Mechanics.
23. Yerramalli, C. S., & Waas, A. M. (2004). The effect of fiber diameter on the compressive strength of composites-A 3D finite element based study. Computer Modeling in Engineering and Sciences, 6, 1–16.
24. Prabhakar, P., & Waas, A. M. (2013). Micromechanical modeling to determine the compressive strength and failure mode interaction of multidirectional laminates. Composites Part A: Applied Science and Manufacturing, 50, 11–21.
25. Romanowicz, M. (2014). Initiation of kink bands from regions of higher misalignment in carbon fiber-reinforced polymers. Journal of Composite Materials, 48(19), 2387–2399.
26. Bishara, M., Rolfes, R., & Allix, O. (2017). Revealing complex aspects of compressive failure of polymer composites—Part I: Fiber kinking at microscale. Composite Structures, 169, 105–115. In Honor of Prof. Leissa.
27. Waas, A. M., & Schultheisz, C. R. (1996). Compressive failure of composites, Part II: Experimental studies. Progress in Aerospace Sciences, 32(1), 43–78.
28. Vogler, T. J., & Kyriakides, S. (1999). Inelastic behavior of an AS4/PEEK composite under combined transverse compression and shear. Part I: experiments. International Journal of Plasticity, 15(8), 783–806.
29. Clarke, A. R., Archenhold, G., & Davidson, N. C. (1995). A novel technique for determining the 3D spatial distribution of glass fibres in polymer composites. Composites Science and Technology, 55(1), 75–91.
30. Bednarcyk, B. A., Aboudi, J., & Arnold, S. M. (2014). The effect of general statistical fiber misalignment on predicted damage initiation in composites. Composites Part B: Engineering, 66, 97–108.
31. Allix, O., Feld, N., Baranger, E., Guimard, J.-M., & Cuong, H.-M. (2014). The compressive behaviour of composites including fiber kinking: modelling across the scales. Meccanica, 49(11), 2571–2586.
32. Cebon, D., & Newland, D. E. (1983). Artificial generation of road surface topography by the inverse FFT method. Vehicle System Dynamics, 12(1–3), 160–165.
33. Newland, D. E. (1984). An introduction to random vibrations and spectral analysis. Mineola, New York: Dover Publications Inc.
34. Bažant, Z. P., Kim, J.-J. H., Daniel, I. M., Becq-Giraudon, E., & Zi, G. (1999). Size effect on compression strength of fiber composites failing by kink band propagation. International Journal of Fracture, 95(1), 103–141.
35. Bažant, Z. P. (1999). Size effect on structural strength: A review. Archive of Applied Mechanics, 69(9), 703–725.
36. Jacobs, T. D. B., Junge, T., & Pastewka, L. (2017). Quantitative characterization of surface topography using spectral analysis. Surface Topography: Metrology and Properties, 5(1), 013001.
37. Elfouhaily, T., Chapron, B., Katsaros, K., & Vandemark, D. (1997). A unified directional spectrum for long and short wind-driven waves. Journal of Geophysical Research: Oceans, 102(C7), 15781–15796.
38. Kay, S., Hedley, J., Lavender, S., & Nimmo-Smith, A. (2011). Light transfer at the ocean surface modeled using high resolution sea surface realizations. Optics Express, 19(7), 6493–6504.
39. Mobley, C. (2016). Ocean optics web book.
40. Vogler, M., Rolfes, R., & Camanho, P. P. (2013). Modeling the inelastic deformation and fracture of polymer composites—Part I: Plasticity model. Mechanics of Materials, 59, 50–64.
41. ABAQUS/Standard User's Manual, Version 6.16. Simulia, 2016.
42. Vogler, T. J., Hsu, S.-Y., & Kyriakides, S. (2000). Composite failure under combined compression and shear. International Journal of Solids and Structures, 37(12), 1765–1791.
43. Basu, S., Waas, A. M., & Ambur, D. R. (2006). Compressive failure of fiber composites under multi-axial loading. Journal of the Mechanics and Physics of Solids, 54(3), 611–634.

A Multiscale Projection Method for the Analysis of Fiber Microbuckling in Fiber Reinforced Composites

S. Hosseini, S. Löhnert, P. Wriggers and E. Baranger

Abstract A multiscale approach called the Multiscale Projection Method is adapted for the analysis of fiber microbuckling (fiber kinking) in laminated composites. Based on this global/local multiscale scheme, in the parts of the 0 degree layers of the laminate where fiber microbuckling is expected to happen, a fine scale mesh, with the geometrical details and material properties of the fiber and matrix, is projected, and a concurrent multiscale solution is sought to capture the kink band formation. The delamination between the buckled 0 degree layer and its neighboring plies is simulated using geometrically nonlinear cohesive elements. The effectiveness of the proposed multiscale method is investigated through a numerical study of fiber microbuckling in a [90₂/0/90₂] composite laminate.

1 Introduction During the past decade, fiber reinforced composites have become more and more desirable in advanced structural applications, such as those in the aerospace, automotive, and renewable energy industries, and accordingly the analysis of their failure mechanisms has become highly important. Under compressive loads, failure mechanisms of fiber reinforced composites, such as matrix cracking, fiber kinking, fiber fracture, delamination and fiber/matrix debonding, can form a complex combined mode which can lead to the uncontrollable catastrophic failure of the structure.


For this reason, compressive loading conditions are more critical for fiber reinforced composites than tensile loadings. Fiber kinking (or micro-buckling) is one of the most dominant compressive failure modes in unidirectional composite laminates [1]; it can itself lead to the initiation and formation of other failure modes, such as delamination and matrix cracking, and considerably decrease the compressive strength of the laminate. This mode of failure happens when, due to the misalignment of the fibers, the matrix material fails under large shear stresses. Lacking lateral support, the fibers start buckling. With increasing load, the fibers rotate in a band of finite width within the degrading matrix. The rotation of the fibers increases the localized shear strains, which drives further shear degradation of the local matrix. The shear degradation of the matrix in turn drives further rotation of the fibers, in a positive feedback loop called kink band formation. The analysis of fiber kinking goes back to 1965, when Rosen [2] first suggested an analytical model for fiber micro-buckling based on the elastic buckling of the fibers, elastic shear behavior of the matrix, and the fiber volume fraction. However, experimental tests showed that this model highly overestimated the compressive strength of the structure, and even further modifications of this model [3–7], such as considering a nonlinear shear behavior for the matrix, did not significantly improve the predictions. Later refinements [8–10] showed that the two key factors driving the shear failure of the matrix, and subsequently the micro-buckling of the fibers, are the plastic behavior of the matrix and the existence of an initial misalignment in the fiber direction. More improvements were achieved by taking into account the bending resistance of the fibers [11–14], leading to a group of analytical models called 'bending theory' for fiber kinking. In these models the equilibrium equations of a unit cell containing the misaligned fiber and the surrounding matrix are solved under bending moments and compressive and shear stresses. This approach can make a reasonable prediction of the compressive strength of the composite structure. However, these models only study the 0 degree layer and are not able to take into account the in-situ effects of the surrounding layers. Some authors [15–19] tried to resolve this shortcoming by considering fiber kinking as a separate failure mode of the composite laminate and developing a failure criterion for its macroscale analysis. This failure model is capable of predicting the compressive strength of the laminated structure, but it is still unable to capture the sequence of events of kink band formation and its interaction with other failure modes. So far, none of the proposed methods has been able to trace the unstable post-peak behavior of kink band propagation. In order to obtain this behavior, the fiber and matrix must be modeled explicitly, and numerical solvers must be applied to solve the problem incrementally. This class of models is mainly based on finite element simulation of the composite's microstructure, where fiber kinking is modeled through the application of proper kinematics and material models to the fiber, the matrix and their interface.
Many researchers have tried to cover different aspects of the kinking phenomenon using finite element modeling of the microstructure, such as the effect of the matrix material model [20], the fiber volume fraction [21], the effect of fiber/matrix splitting [22, 23], and the interaction of fiber kinking with other failure modes such as delamination and matrix cracking [24, 25]. Although the micromechanical models allow good predictions of the sequence of events in


the kink band formation and of its interaction with other failure modes, these models are not sufficient for considering the effect of fiber kinking on the overall behavior of the composite structure. One possible solution to incorporate the micromechanical effects at the scale of the laminated structure is the definition of meso-scale models [26, 27]. In these models the micromechanics of the composite structure (fiber, matrix and interface) is homogenized through the definition of a proper constitutive behavior, which takes into account the different failure mechanisms as dissipative damage parameters. The macro-mechanics of the laminate is then built by the assembly of the homogenized cells, and the macroscale failure modes are subsequently captured at the interfaces of the cells. Although the meso-scale models provide effective macro-mechanical predictions, they are not suited for capturing the sequence of interactions between fiber kinking and other failure modes. The aim of the present work is to propose a global/local multiscale technique which is able to integrate a micromechanical finite element model, simulating the kink band initiation and propagation, into a macro-mechanical laminate model in order to study the coupled effects. The rest of the paper is structured as follows: Sects. 2 and 3 briefly outline the modeling strategy for the microscale (called fine scale, from this point forward) and macroscale (called coarse scale) analysis. The formulation of a geometrically nonlinear cohesive element, suited for the analysis of delamination and fiber/matrix debonding using finite deformation theory, is presented in Sect. 4. This element is applied to model the delamination between the 0 degree layer, where the kinking occurs, and its neighboring plies. The following section introduces the multiscale strategy to couple the fine and coarse scale solutions. Numerical examples will show the capability of the presented multiscale technique for the simulation of fiber kinking in a laminated composite.

2 Fine Scale Modeling On the fine scale level, the geometries of fiber and matrix are represented explicitly. The fibers are assumed to behave according to a finite strain elastic isotropic material of Neo-Hookean type with the following strain energy density function:

ψ(b) = (κ/8)(1 − det b)² + (μ/2)(det b^{−1/3} tr b − 3)    (1)

where b is the left Cauchy-Green strain tensor, and κ and μ are the bulk modulus and shear modulus, respectively. Since the main driving factor for the occurrence of fiber kinking is the plasticity of the matrix, an isotropic finite strain elastic-plastic material behavior is assigned to the matrix. The constitutive behavior follows a von Mises yield surface with linear isotropic hardening, such that the yield function has the following form:

f(σ, α) = |σ| − (σ_y0 + Hα)    (2)


where σ_y0 is the yield strength of the polymer matrix, H is the hardening modulus and α is the accumulated plastic strain. The finite strain formulation is based on the multiplicative decomposition of the deformation gradient into an elastic and a plastic part (Eq. (3)), the definition of an intermediate configuration, and the volumetric-deviatoric split of the strain energy density based on this configuration, as in [28]:

F = F^e · F^p    (3)

ψ(J^e, b̄^e) = (κ/2)(½(J^{e2} − 1) − ln J^e) + (μ/2)(tr b̄^e − 3)    (4)

with the first term being the volumetric and the second the deviatoric contribution,

where b̄^e is the elastic left Cauchy-Green tensor in the intermediate configuration, and J^e is the determinant of the elastic deformation gradient tensor F^e; κ and μ are the bulk modulus and shear modulus of the matrix. By applying the appropriate return mapping algorithm described in [28], the Cauchy stress and the algorithmic tangent stiffness tensor are computed at each integration point in every time step.
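For orientation, a small-strain radial-return step for the von Mises model of Eq. (2) is sketched below (with the usual √(2/3) normalization); the finite-strain algorithm of [28] generalizes this update to the intermediate configuration. All constants are placeholders.

```python
import numpy as np

mu, sig_y0, H = 1200.0, 40.0, 300.0      # placeholder material constants

def radial_return(eps_dev_trial, alpha):
    """Return-mapped deviatoric stress and updated hardening variable."""
    s_trial = 2 * mu * eps_dev_trial     # trial deviatoric stress
    norm = np.linalg.norm(s_trial)
    f = norm - np.sqrt(2 / 3) * (sig_y0 + H * alpha)
    if f <= 0.0:
        return s_trial, alpha            # elastic step
    dgamma = f / (2 * mu + 2 / 3 * H)    # consistency condition
    n = s_trial / norm                   # return direction
    return s_trial - 2 * mu * dgamma * n, alpha + np.sqrt(2 / 3) * dgamma
```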

3 Transversely Isotropic Material Model for the Coarse Scale Outside the multiscale domain, a homogenized transversely isotropic material model is considered for each composite ply, including the 0 degree layers. The transversely isotropic material model is based on the model developed by Vogler et al. [29], in which the strain energy density function is written based on the isotropic invariants of the strain tensor and the anisotropy direction tensor M0. This function is adapted for the present work based on the invariants of the Green-Lagrange strain tensor and reads:

ψ(E, M0) = ½λI1² + μT I2 + αI1I4 + 2(μL − μT)I5 + ½βI4²    (5)

in which I1 = trE I2 = tr(E)2 I4 = tr (E . M0 )   I5 = tr E2 . M0

(6)


The anisotropy tensor is defined by the dyadic product of the fiber direction (direction of anisotropy) a, as

M₀ = a ⊗ a ,   (7)

and the multiplication factors of the invariants can be obtained from the engineering constants of the composite material by

λ = E₂ (ν₂₃ + ν₃₁ ν₁₃) / D
α = E₂ [ν₁₃ (1 + ν₂₃ − ν₃₁) − ν₃₂] / D
β = E₁ (1 − ν₃₂ ν₂₃) / D + E₂ [1 − ν₁₂ (ν₂₁ + 2 (1 + ν₂₃))] / D − 4 μ_L
μ_L = μ₁₂ ,  μ_T = μ₂₃   (8)

with D = 1 − ν₃₂² − 2 ν₁₃ ν₃₁ − 2 ν₃₂ ν₃₁ ν₁₃ ,

where based on the symmetry of the elasticity tensor: ν12 ν21 = ; E1 E2

ν13 ν31 = ; E1 E3

ν23 ν32 = E2 E3

(9)

The second Piola-Kirchhoff stress tensor can now be obtained as the derivative of the strain energy density function with respect to the Green-Lagrange strain tensor:

S = ∂ψ(E, M₀)/∂E = λ I₁ 1 + 2 μ_T E + α (I₄ 1 + I₁ M₀) + 2 (μ_L − μ_T) (M₀ · E + E · M₀) + β I₄ M₀ ,   (10)

and the elastic material tangent reads

C = ∂S/∂E = λ 1 ⊗ 1 + 2 μ_T I + α (M₀ ⊗ 1 + 1 ⊗ M₀) + 2 (μ_L − μ_T) I_{M₀} + β M₀ ⊗ M₀ .   (11)
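A minimal sketch of Eqs. (6), (7) and (10) (illustrative only; the original implementation is in FEAP, and the variable names are ours):

```python
import numpy as np

def pk2_stress(E, a, lam, alpha, beta, mu_L, mu_T):
    """Second Piola-Kirchhoff stress of Eq. (10) for a Green-Lagrange strain E
    and unit fiber direction a; coefficients follow Eq. (8)."""
    M0 = np.outer(a, a)                          # anisotropy tensor, Eq. (7)
    I = np.eye(3)
    I1 = np.trace(E)                             # invariants, Eq. (6)
    I4 = np.trace(E @ M0)
    return (lam * I1 * I + 2.0 * mu_T * E
            + alpha * (I4 * I + I1 * M0)
            + 2.0 * (mu_L - mu_T) * (M0 @ E + E @ M0)
            + beta * I4 * M0)
```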

4 Geometrically Nonlinear Cohesive Element

In order to model the delamination between the 0 degree layer and the neighboring plies during the process of kink band formation, a mixed-mode geometrically nonlinear cohesive element is developed based on the work of Reinoso et al. [30]. The point of departure for the formulation is the definition of the mid-surface of the two separating surfaces of the cohesive element in the current configuration, and the transformation of the stress and material tangent from the initial global configuration to the local current configuration using a linear transformation matrix.


4.1 Weak Formulation and Kinematics

The contribution of the cohesive tractions to the weak formulation of equilibrium is of the following form:

δΠ_coh = ∫_{S₀} δΔᵀ τ dA ,   (12)

where δ(·) denotes the virtual variation, Δ is the displacement jump of the two cohesive surfaces and τ is the cohesive traction vector. The standard frame for defining the components of the cohesive displacement jump is located on the mid-plane of the nodal position vectors of the two separating surfaces of the cohesive element in the current configuration, such that

x̄ᵢ = Xᵢ + (1/2)(uᵢ⁺ + uᵢ⁻) .   (13)

A local coordinate system is defined at each Gauss point of the mid-plane, so that the displacement jumps in Eq. (12) can be obtained by rotating the displacement difference from the global to the local coordinate system. The local coordinate system in the current configuration is defined by the tangent vectors to the surface of the mid-plane, as illustrated in Fig. 1. The tangent vectors to the cohesive mid-plane surface, v_ξ and v_η, are defined by the spatial derivatives of the coordinates as follows:


Fig. 1 Deformation of the cohesive element in reference and current configurations

v_ξ = ∂x̄/∂ξ |_S = [ ∂x̄/∂ξ , ∂ȳ/∂ξ , ∂z̄/∂ξ ]ᵀ   (14)

v_η = ∂x̄/∂η |_S = [ ∂x̄/∂η , ∂ȳ/∂η , ∂z̄/∂η ]ᵀ   (15)

A unique normal to the surface can now be defined by

n = (v_ξ × v_η) / ‖v_ξ × v_η‖ ,   (16)

and the local coordinate system is completed from the normal and tangent vectors as

s = v_ξ / ‖v_ξ‖ ,  t = s × n .   (17)

A rotation matrix, as in Eq. (18), is built from the above three orthonormal vectors; it transforms the displacement difference (displacement jump) from the initial configuration into the current local configuration:

R = [ s  t  n ]ᵀ   (18)

It follows from Eq. (18) that R is not constant, but a function of the displacement field.
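A minimal sketch of the kinematics of Eqs. (13)-(18) at one Gauss point (the 4-node mid-surface layout and the shape-function derivative inputs are our assumptions, not the original element code):

```python
import numpy as np

def rotation_matrix(x_mid, dN_dxi, dN_deta):
    """x_mid: (4,3) mid-surface nodal coordinates from Eq. (13);
    dN_dxi, dN_deta: (4,) shape-function derivatives at the Gauss point."""
    v_xi  = dN_dxi  @ x_mid              # tangent vectors, Eqs. (14)-(15)
    v_eta = dN_deta @ x_mid
    n = np.cross(v_xi, v_eta)
    n /= np.linalg.norm(n)               # unit normal, Eq. (16)
    s = v_xi / np.linalg.norm(v_xi)      # first tangent, Eq. (17)
    t = np.cross(s, n)                   # second tangent, Eq. (17)
    return np.vstack([s, t, n])          # R = [s t n]^T, Eq. (18)
```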

4.2 Finite Element Discretization

The global displacement jump in the initial configuration is obtained from

Δᵢ = uᵢ⁺ − uᵢ⁻ = N_k u_{ki}⁺ − N_k u_{ki}⁻ = B_k u_{ki} ,   (19)

where B is the matrix of differences of shape functions for the pairs of nodes on the upper and lower cohesive surfaces. Applying the rotation matrix of Eq. (18) to the jump of Eq. (19) and substituting into Eq. (12) leads to

δΠ_coh = δuᵀ ∫_{S₀} ( R B + (∂R/∂u) B u )ᵀ τ dA .   (20)

The element’s internal load vector and the tangent stiffness matrix can now be obtained from Eq. (20) as: f eint =

T  ∂R RB + Bu τ dA , ∂u S0

(21)


and

K_e = ∂f_e^int/∂u = ∫_{S₀} Bᵀ Rᵀ (∂τ/∂Δ) R B dA
   + ∫_{S₀} [ 2 Bᵀ (∂Rᵀ/∂u) τ + uᵀ Bᵀ (∂Rᵀ/∂u) (∂τ/∂Δ) (∂R/∂u) B u
   + Bᵀ Rᵀ (∂τ/∂Δ) (∂R/∂u) B u + uᵀ Bᵀ (∂Rᵀ/∂u) (∂τ/∂Δ) R B ] dA .   (22)

The first integral in the above equation is known as the material part of the stiffness matrix, and the second integral is the so-called geometrical part of the stiffness. ∂τ/∂Δ is the material tangent stiffness of the cohesive element and should be obtained from a proper constitutive law. An exponential form of the mixed-mode constitutive behavior for the cohesive element is adopted in this work, based on the model proposed in [31].

5 Multiscale Modeling

The Multiscale Projection Method (MPM) [32, 33] is based on the projection of a fine scale problem onto the existing coarse scale domain in order to refine the solution in those regions of interest where a localized effect is expected to occur. As for other global-local techniques [34-39], the domain of the problem, Ω₀ (as in Fig. 2), is divided into two subdomains: the coarse scale part, Ω₀\Ω₁, where no local effects are investigated, and the multiscale part, Ω₁, where the coarse scale solution needs to be improved by fine scale effects.

Fig. 2 Domain division for multiscale analysis (coarse scale, multiscale and fine scale solutions)


Based on the above mentioned domain division, the displacement field in the multiscale domain, 1 , is defined as: u u1 = u0 + 

(23)

Here, u⁰ is the displacement field of the coarse scale domain and ũ is the fluctuation of the displacement field due to the fine scale features. In order to ensure the continuity of the displacement field along the boundaries of the multiscale domain, this domain should be chosen large enough that at its boundary ∂Ω₁ the assumption ũ = 0 is valid. In other words, the fine scale fluctuations vanish when reaching the boundaries of the multiscale domain, such that in the subdomain Ω₀\Ω₁ one has

ũ = 0 ,  u¹ = u⁰ .   (24)

The boundary value problem of the entire domain in the initial configuration can be written in the following form:

Div(P) + f₀ = 0 ,   (25)

where P is the first Piola-Kirchhoff stress tensor. On the boundaries one has

u⁰ = ḡ on ∂Ω₀ᵘ ,  P · N = t̄ on ∂Ω₀ᵗ .   (26)

Based on Eqs. (23) and (25) the weak form of the balance of linear momentum in the entire domain for a quasi-static problem can be written as

∫_{Ω₀} S(u¹) : δE⁰ dΩ − ∫_{Ω₀} f · δu⁰ dΩ − ∫_{∂Ω₀} t · δu⁰ d∂Ω = 0 ,   (27)

where S(u¹) is the second Piola-Kirchhoff stress tensor and δE⁰ is the variation of the Green-Lagrange strain tensor as a function of the variation of the coarse scale displacement field δu⁰. f and t are the applied external body force and external traction vector, respectively. Based on the definition of u¹ in Eq. (24), the following equivalence can be written in the zones outside the multiscale domain:

∫_{Ω₀\Ω₁} S(u¹) : δE⁰ dΩ = ∫_{Ω₀\Ω₁} S(u⁰) : δE⁰ dΩ .   (28)


Substituting this equivalence into the weak form of the problem gives

∫_{Ω₀\Ω₁} S(u⁰) : δE⁰ dΩ + ∫_{Ω₁} S(u¹) : δE⁰ dΩ − ∫_{Ω₀} f · δu⁰ dΩ − ∫_{∂Ω₀} t · δu⁰ d∂Ω = 0 .   (29)

Within this iterative framework, the values of u¹ in the multiscale domain can be determined by satisfying a second equilibrium condition for Ω₁, which reads

∫_{Ω₁} S(u¹) : δE¹ dΩ − ∫_{Ω₁} f · δu¹ dΩ = 0 ,   (30)

where the boundary condition is

u¹ = u⁰ on ∂Ω₁ .   (31)

This condition means that the coarse scale displacement solution is applied on the boundaries of the multiscale domain as an inhomogeneous boundary condition. The contribution of the elements of the multiscale domain to the global tangent stiffness matrix comes from the linearization of the second term of Eq. (29), which is written as

Δ ∫_{Ω₁} S(u¹) : δE⁰ dΩ = ∫_{Ω₁} δE⁰ : (∂S(u¹)/∂E(u¹)) : [ ΔE(u⁰) + ΔE(ũ) ] dΩ + ∫_{Ω₁} S(u¹) : δE⁰ dΩ ,   (32)

where ∂S(u¹)/∂E(u¹) is the material tangent of the fine scale model projected onto the coarse scale elements. As mentioned before, Eq. (32) is not solved as one system of equations for both coarse and fine scale degrees of freedom, but rather concurrently for each set of degrees of freedom. This means that at each iteration of the global solution the system of equations is solved only for the coarse scale degrees of freedom, taking into account the material tangent and stresses projected from the fine scale; after the solution of the fine scale system, the part related to the displacement fluctuations on the fine scale (i.e. E(ũ)) is added as a residual force to the coarse scale system at the next iteration. This leads to a slow rate of convergence of the coarse scale solution and the need for a higher number of iterations to obtain a converged solution. This iterative solution scheme is described in detail in Box 1.


Box 1 Multiscale iterative solution scheme

1. Initial condition: k = 0, u⁰_k = 0, u¹_k = 0, σ⁰_k = 0, σ¹_k = 0.
2. Assuming a homogenized transversely isotropic material for all composite layers, solve the first iteration of the coarse scale problem:
   K⁰(u⁰_k) Δu⁰_{k+1} = −f^int(u⁰_k) + f⁰_ext ,  with  K⁰_IJ = ∫_{Ω₀} (B⁰_I)ᵀ · C⁰ · B⁰_J dΩ .
3. Apply the resulting solution displacements on the boundaries of the fine scale mesh.
4. Solve the fine scale problem iteratively (using a Newton-Raphson or arc-length scheme) until convergence:
   K¹(u¹_i) Δu¹_{i+1} = −f^int(u¹_i) + f^ext(u¹_{k+1}) ,  i = 0, 1, …, n_convergence ,
   f^ext(u¹_{k+1}) = K¹ · û¹_{k+1} for all nodes on the boundary of the multiscale domain, with
   K¹_IJ = ∫_{Ω₁} (B¹_I)ᵀ · C¹ · B¹_J dΩ .
5. Project the converged values of the material tangent stiffness and Cauchy stress to the elements of the coarse scale mesh located in the multiscale domain.
6. Set k → k + 1 and go to step 2 with the updated values in the multiscale domain:
   K⁰_IJ = ∫_{Ω₀} (B⁰_I)ᵀ · C⁰₁ · B⁰_J dΩ .
7. If ‖Δu⁰_{k+1}‖ > tol, set k → k + 1 and go to step 2.
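A compact sketch of the staggered loop of Box 1 follows (the objects coarse and fine and all their methods are placeholders standing in for the corresponding FEAP routines, i.e. our assumptions, not the original implementation):

```python
def multiscale_solve(coarse, fine, tol, max_iter=100):
    """Staggered driver following Box 1; all solver objects are hypothetical."""
    u0 = coarse.zero_displacement()                       # step 1
    for k in range(max_iter):
        du0 = coarse.solve_increment(u0)                  # step 2: coarse problem
        u0 = u0 + du0
        fine.apply_boundary_displacements(u0)             # step 3
        fine.solve_until_convergence()                    # step 4: Newton / arc-length
        coarse.project_from_fine(fine.tangent_stiffness(),
                                 fine.cauchy_stress())    # step 5
        if coarse.norm(du0) <= tol:                       # step 7: convergence check
            break                                         # otherwise step 6: next k
    return u0
```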

Compared to other global-local multiscale techniques such as subgrid methods [36], the present method is capable of projecting a fine scale domain with different material constituents and arbitrary geometry onto the multiscale domain, while keeping the original coarse scale mesh unchanged. This is similar to the multiscale mesh superposition strategy proposed by Fish et al. [35]. However, those methods are only suitable for linear elastic problems, while the MPM can be applied to both materially and geometrically nonlinear problems. Other global-local multiscale methods that are capable of simulating material nonlinearities as well as arbitrary shapes and geometries of the fine scale domain, such as the "non-intrusive global-local method" proposed by Allix et al. [38], suffer from the limitation that the fine and coarse scale meshes must conform at the boundaries of the local domain, which necessitates the application of other forms of constraint conditions to couple the two scales in case of mesh non-conformity. The present method, on the other hand, is able to use any fine scale mesh with or without mesh conformity at the boundaries and is able to locate the fine scale mesh adaptively during the solution without a priori knowledge of the position of the local domain.


Fig. 3 Coarse and fine scale mesh for a complete multiscale domain

The only constraint is that the entire boundary of the fine scale mesh must coincide with the boundary of a patch of coarse scale elements. In other words, the boundary of the fine scale mesh cannot cut through coarse scale elements.

6 Numerical Examples

The proposed multiscale method is implemented in the finite element program FEAP for nonlinear material models of Neo-Hookean type, taking plasticity and damage into account. The first example aims to show the performance of the multiscale method compared to a complete fine scale solution. For this case the size of the whole domain is assumed to be the same as the local domain, as depicted in Fig. 3. The coarse scale domain is a block meshed with 1600 first order solid elements with an initially elastic material. The fine scale mesh, on the other hand, includes the details of the fiber and matrix geometries, including the fiber misalignment. Fibers are assumed to have a hyperelastic isotropic material behavior, while a finite strain elasto-plastic material of J₂ type with isotropic hardening is considered for the matrix. The material characteristics of fiber and matrix are listed in Table 1. The diameter of each carbon fiber is assumed to be d = 7.1 µm and a fiber volume fraction of 60% is considered for the micro-structure. The length of the specimen is l = 600 µm and its width w = 8.5 µm. One side is fixed in the x direction while the other side is subjected to a compressive load in the same direction. A sine-shaped waviness is applied to the fibers as initial misalignment, following

y₀ = λ (1 − cos(πx/l)) ,   (33)

with the local misaligned length of the fibers taken as l = 25 µm and λ/l = 0.01.
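For illustration, the imperfection of Eq. (33) can be sampled as follows (a sketch only; the sampling of x is arbitrary, while l and λ/l follow the text):

```python
import numpy as np

l_mis = 25.0                                    # local misaligned fiber length (µm)
lam = 0.01 * l_mis                              # waviness amplitude, lambda/l = 0.01
x = np.linspace(0.0, l_mis, 101)                # positions along the misaligned zone
y0 = lam * (1.0 - np.cos(np.pi * x / l_mis))    # initial fiber offset, Eq. (33)
```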

Table 1 Material characteristics of the fiber and matrix

Material         E (GPa)   ν      σ_y0 (MPa)   H_iso (MPa)
Carbon fiber     225.0     0.22   –            –
Polymer matrix   4.08      0.38   57.0         90.0


Fig. 4 Load-displacement behavior of the micromechanical kinking model using multiscale and single fine scale models

The load-displacement behavior of the structure is plotted in Fig. 4 and compared against the single scale analysis with the fine mesh. As the graph shows, the projection between the two scales works well and both the single scale and the multiscale analyses exhibit the same kinking behavior. However, the multiscale simulation shows a stiffer response and predicts the compressive strength of the structure with less than 10% error compared to the single scale solution. To explain this difference, it is worth noting that the coarse scale mesh is roughly 30 times coarser than the fine scale mesh and is therefore expected to respond more stiffly. In the linear part of the load-displacement behavior, where the out-of-plane displacements are not significant, the results match closely, which indicates that the stresses are calculated correctly by the multiscale model. Around and after the peak point, however, where the buckling mode appears and the out-of-plane displacements become considerable, the stiffer behavior of the coarse mesh is more noticeable. Moreover, since the fine scale mesh is highly irregular with respect to the coarse scale mesh, the weak discontinuity at the interface of elements with different materials (fiber and matrix) is not noticeable. For more regular fine scale meshes this difference will decrease.


Fig. 5 Right: schematic of the loading and boundary conditions of the [90₂/0/90₂] composite laminate test. Left: coarse and fine scale discretization

Table 2 Homogenized material parameters of the transversely isotropic layers

E₁ (MPa)   E₂ (MPa)   G₁₂ (MPa)   ν₁₂   ν₂₃
138000.0   8960.0     7100.0      0.3   0.3

The second example is a multiscale model for kinking in a composite laminate with [90₂/0/90₂] layup. The geometry and boundary conditions are shown in Fig. 5. All layers have the same height h = 95.02 µm, the length of the specimen is L = 600 µm and its width w = 6 µm. Half of the length of the 0 degree layer is assumed to have a fiber misalignment, which activates the buckling, so a fine scale mesh is projected onto this part to capture the fiber kinking. The laminate is discretized with a relatively coarse mesh using 2560 first-order solid elements. Each layer has a homogenized transversely isotropic material with the properties listed in Table 2. Cohesive elements are inserted between the 0 degree layer and its adjacent 90 degree plies to permit delamination of the layers. The material data of the cohesive element are reported in Table 3. In the part of the solution before initiation of the instability, a large compressive load is applied to the laminate in the 0 degree fiber direction in a single step. At this point of the solution the fine scale mesh with misaligned fibers is projected onto half of the 0 degree layer, and the fine scale solution is performed in many sub-steps. In the next part of the solution, when the load approaches the instability point, the coarse scale increments become smaller and consequently the number of sub-steps in the fine scale solution decreases. This multiscale scheme continues until the kink band is formed.

Table 3 Constitutive parameters of the cohesive element

G_Ic (N/mm)   G_IIc (N/mm)   σ_n⁰ (N/mm²)   τ⁰ (N/mm²)   β
0.969         1.719          90.0           100.0        1.75

Fig. 6 Shear stress distribution in the kink band zone

Figure 6 shows two kink bands formed in the 0 degree ply following its delamination from the neighboring layers. For better visualization of the details of the stress distribution in the fibers, a cross cut of half of the fine scale mesh is illustrated. Key steps of the kinking evolution are shown in the load-displacement graph of the laminate in Fig. 7. As these steps show, the unsymmetrical shear stress distribution in the multiscale zone, caused by the fiber misalignment, leads to shear failure of the cohesive elements and initiation of delamination at the right edge of the multiscale zone. This delamination in turn causes a shear stress concentration in its neighboring region which, with increasing load, triggers the first kink band. The evolution of the first kink band extends the delaminated area and consequently leads to the formation and evolution of the second kink band.


Fig. 7 Load-displacement behavior of the composite laminate including kinking in the 0 degree layer

The observed kink band angle is about 15° with a kink band width w ≈ 0.08 mm (for a fiber diameter d = 5.2 µm), which closely matches the data reported in [11], where the kink band width was measured to be about 10-15 times the fiber diameter for carbon fiber polymers.

7 Conclusion

The Multiscale Projection Method proposed in [32] is adapted in this paper to take into account the geometrical and material nonlinearity of the fiber kinking phenomenon. The domain of the laminated composite is divided into two parts: in the angle ply layers and in the parts of the 0 degree layer where no fiber kinking is expected, a transversely isotropic homogenized material is assigned to a coarse regular mesh, while in the parts of the 0 degree layer where fiber kinking is expected to occur a fine scale mesh is projected onto the original coarse mesh and a concurrent multiscale solution is performed. The fine scale mesh can include any type of material nonlinearity and is completely flexible with regard to the geometrical details and the level of mesh refinement, even at the boundaries. The results show that the method effectively models the fiber kinking instability, which to the best knowledge of the authors has not been possible with other available global/local multiscale techniques. The only drawback of the method is its slow rate of convergence, which is the focus of further improvement.


Acknowledgements The authors gratefully acknowledge the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG) for funding the International Research Training Group IRTG 1627.

References

1. Chaudhuri, R. A. (1991). Prediction of the compressive strength of thick-section advanced composite laminates. Journal of Composite Materials, 25(10), 1244–1276.
2. Rosen, B. W. (1965). Mechanics of composite strengthening. In Fibre composite materials (pp. 37–75). Metals Park, Ohio: American Society of Metals.
3. Sadowsky, M. A., Pu, S. L., & Hussain, M. A. (1967). Buckling of micro fibers. Journal of Applied Mechanics, 34, 1011–1016.
4. Steif, P. S. (1987). An exact two-dimensional approach to fiber micro-buckling. International Journal of Solids and Structures, 23, 1235–1246.
5. Chung, W. Y., & Testa, R. B. (1969). The elastic stability of fibers in a composite plate. Journal of Composite Materials, 3, 58–80.
6. Greszczuk, L. B. (1975). Microbuckling failure of circular fiber-reinforced composites. AIAA Journal, 13, 1311–1318.
7. Waas, A. M., Babcock, C. D., & Knauss, W. G. (1990). Mechanical model for elastic fiber micro-buckling. Journal of Applied Mechanics, 57(1), 138–149.
8. Argon, A. S. (1972). Fracture of composites. Treatise on Materials Science and Technology, 1, 79–114. New York: Academic Press.
9. Budiansky, B. (1983). Micromechanics. Composite Structures, 16, 3–12.
10. Budiansky, B., & Fleck, N. A. (1993). Compressive failure of fibre composites. Journal of the Mechanics and Physics of Solids, 41(1), 183–211.
11. Fleck, N. A., Deng, L., & Budiansky, B. (1995). Prediction of kink width in compressed fiber composites. Journal of Applied Mechanics, 62, 329–337.
12. Fleck, N. A. (1997). Advances in applied mechanics (Vol. 33, pp. 43–117). Amsterdam: Elsevier.
13. Davidson, P., & Waas, A. M. (2014). Mechanics of kinking in fiber-reinforced composites under compressive loading. Mathematics and Mechanics of Solids, 21(6), 667–684.
14. Pimenta, S., Gutkin, R., Pinho, S. T., & Robinson, P. (2009). A micromechanical model for kink-band formation, part II: Analytical modeling. Composites Science and Technology, 69(7–8), 956–964.
15. Davila, C. G., Jaunky, N., & Goswami, S. (2003, April 7–10). Failure criteria for FRP laminates in plane stress. In 44th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, Norfolk, Virginia. AIAA Paper No. 2003-1991.
16. Maimi, P., Camanho, P., Mayugo, J., & Dávila, C. (2007). A continuum damage model for composite laminates: Part I—Constitutive model. Mechanics of Materials, 39(10), 897–908.
17. Maimi, P., Camanho, P., Mayugo, J., & Dávila, C. G. (2006). A thermodynamically consistent damage model for advanced composites. Technical Report NASA/TM-2006-214282.
18. Pinho, S. T., Dávila, C. G., Camanho, P., Iannucci, L., & Robinson, P. (2005). Failure models and criteria for FRP under in-plane or three-dimensional stress states including shear nonlinearity. Technical Report NASA/TM-2005-213530.
19. Kabiri Ataabadi, A., Ziaei-Rad, S., & Hosseini-Toudeshky, H. (2011). An improved model for fiber kinking analysis of unidirectional laminated composites. Applied Composite Materials, 18, 175–196.
20. Feld, N., Allix, O., Baranger, E., & Guimard, J. M. (2011). A micromechanics-based mesomodel for unidirectional laminates in compression. In R. Rolfes & E. L. Jansen (Eds.), 3rd ECCOMAS thematic conference on the mechanical response of composites (pp. 61–68). Hannover, Germany.
21. Vogler, T. J., Hsu, S. Y., & Kyriakides, S. (2001). On the initiation and growth of kink bands in fiber composites. Part II: Analysis. International Journal of Solids and Structures, 38(15), 2653–2682.


22. Prabhakar, P., & Waas, A. M. (2013). Interaction between kinking and splitting in the compressive failure of unidirectional fiber reinforced laminated composites. Composite Structures, 98, 85–92.
23. Naya, F., Herráez, M., Lopes, C. S., González, C., Van der Veen, S., & Pons, F. (2017). Computational micromechanics of fiber kinking in unidirectional FRP under different environmental conditions. Composites Science and Technology, 144, 26–35.
24. Bishara, M., Rolfes, R., & Allix, O. (2017). Revealing complex aspects of compressive failure of polymer composites—Part I: Fiber kinking at microscale. Composite Structures, 169, 105–115.
25. Bishara, M., Vogler, M., & Rolfes, R. (2017). Revealing complex aspects of compressive failure of polymer composites—Part II: Failure interactions in multidirectional laminates and validation. Composite Structures, 169, 116–128.
26. Violeau, D., Ladevèze, P., & Lubineau, G. (2009). Micromodel based simulations for laminated composites. Composites Science and Technology, 69, 1364–1371.
27. Allix, O., Feld, N., Baranger, E., Guimard, J. M., & Ha-Minh, C. (2014). The compressive behaviour of composites including fiber kinking: Modelling across the scales. Meccanica, 49(11), 2571–2586.
28. Simo, J. C., & Hughes, T. J. R. (1998). Computational inelasticity (pp. 300–334). New York: Springer.
29. Vogler, M., Rolfes, R., & Camanho, P. P. (2013). Modeling the inelastic deformation and fracture of polymer composites—Part I: Plasticity model. Mechanics of Materials, 59, 50–64.
30. Reinoso, J., & Paggi, M. (2014). A consistent interface element formulation for geometrical and material nonlinearities. Computational Mechanics, 54, 1569–1581.
31. Balzani, C., & Wagner, W. (2008). An interface element for the simulation of delamination in unidirectional fiber-reinforced composite laminates. Engineering Fracture Mechanics, 75, 2597–2615.
32. Loehnert, S., & Belytschko, T. (2007). A multiscale projection method for macro/microcrack simulations. International Journal for Numerical Methods in Engineering, 71, 1466–1482.
33. Holl, M., Loehnert, S., & Wriggers, P. (2013). An adaptive multiscale method for crack propagation and crack coalescence. International Journal for Numerical Methods in Engineering, 93, 23–51.
34. Sun, C. T., & Mao, K. M. (1988). Elastic-plastic crack analysis using a global-local approach on a parallel computer. Computational Structural Mechanics and Fluid Dynamics, 30(1), 395–401.
35. Fish, J. (1992). The s-version of the finite element method. Computers and Structures, 43(3), 539–547.
36. Whitcomb, J. D. (1991). Iterative global/local finite element analysis. Computers and Structures, 40(4), 1027–1031.
37. Zohdi, T. I., Wriggers, P., & Huet, C. (2001). A method of substructuring large-scale computational micromechanical problems. Computer Methods in Applied Mechanics and Engineering, 190, 5639–5656.
38. Gendre, L., Allix, O., Gosselet, P., & Comte, F. (2009). Non-intrusive and exact global/local techniques for structural problems with local plasticity. Computational Mechanics, 44(2), 233–245.
39. Gendre, L., Allix, O., & Gosselet, P. (2011). A two-scale approximation of the Schur complement and its use for non-intrusive coupling. International Journal for Numerical Methods in Engineering, 87, 889–905.

Topology Optimization of 1-3 Piezoelectric Composites

Chuong Nguyen, Xiaoying Zhuang and Ludovic Chamoin

Abstract Optimal hydrophone performance of 1-3 piezoelectric composites is achieved through the design of material properties. The piezocomposite consists of piezoceramic rods immersed in a polymer matrix. We obtain the effective moduli of the piezocomposite by the differential effective medium theory, and the results depend explicitly on the volume fraction of the piezoceramic rods and the elastic properties of the matrix. Topology optimization with the level set method is used to optimize the elastic properties of the matrix phase. Numerical examples propose three-dimensional microstructures with negative Poisson's ratio for the polymer matrix that can enhance the hydrostatic charge coefficients of the piezocomposite.

1 Introduction

This work presents a design procedure for piezoelectric composite materials used in hydrophone applications, which typically operate at low frequencies with a wavelength much larger than the lateral dimensions of the composite plate. The considered class is 1-3 piezocomposites, which are constructed by embedding piezoceramic rods in a polymer matrix (see Fig. 1). The combination of a polymer matrix and piezoelectric ceramic rods enhances the flexibility of the overall structure and allows operation under high pressure conditions, since the polymer is low-density and flexible in contrast to piezoelectric ceramics, which are high-density and brittle.


Fig. 1 Class of the 1-3 piezoelectric composite

By assuming that the piezocomposite behaves as a homogeneous medium, many studies have proposed analytical solutions for calculating the effective moduli of 1-3 piezoelectric composites with simple geometry. The initial work was by Haun and Newnham [1, 2], where the effective piezoelectric coefficients d₃₃ʰ and d₃₁ʰ were determined from the stresses along the x₃ and x₁ axes, respectively. In [3, 4], Smith studied the hydrostatic response by assuming constant strain along the rods and constant stress in the x₁–x₂ plane, and concluded that electrical coupling in 1-3 piezocomposites can be maximized if the Poisson ratio of the passive polymer phase is negative. The polymer matrix facilitates the transmission of external forces onto the ceramic rods when the composite plate is subjected to an incident hydrostatic pressure field. Therefore, one way to improve the performance of such piezocomposites is to design microstructures of the matrix such that the stress arising from the external forces produces an optimal piezoelectric response. Sigmund [5] integrated topology optimization to design the unit cells of the matrix, and the homogenized elastic tensors of the optimized unit cells were used to calculate the effective piezoelectric properties. The homogenization approach for the 1-3 piezocomposite was developed by Gibiansky and Torquato [6]. In the work of Silva, Fonseca and Kikuchi [7, 8], a general homogenization framework for piezoelectric materials was developed using the finite element method, and an optimal distribution of material and void phases in the unit cell was found that enhances the performance of the piezocomposites. Both approaches used the SIMP (solid isotropic material with penalization) method, which is widely accepted in material and structural design [9, 10].


Recently, structural topology optimization with the level set function [11–13] has been considered as an alternative to the SIMP method due to the flexibility it offers in changing topology and shape. In this approach, the boundaries between solid and void domains are represented by the zero level set of the level set function, so the optimized geometry does not suffer from the checkerboard problem and can be used directly for 3D printing. We likewise aim to improve the performance of the 1-3 piezocomposite by designing microstructures of the matrix phase. This approach is similar to the work in [5], although we use topology optimization with the level set method instead of SIMP, so that the optimized geometries are smooth and better suited to manufacturing. The effective piezoelectric coefficients and optimal volume fractions of the piezoceramics are calculated by the DEM (differential effective medium) method [14]. This work is organized as follows: in Sect. 2, the calculation of the effective moduli of the 1-3 piezocomposite is reviewed. The parameterized version of the level set method in topology optimization and the sensitivity calculation are provided in Sect. 3. Next, Sect. 4 describes the workflow of the optimization process. Finally, Sect. 5 shows optimized results for 3D microstructures.

2 Constitutive Relation of the 1-3 Piezocomposite

The constitutive laws of piezoelectric media, which describe the linear electromechanical interaction between the mechanical stress σᵢⱼ, strain εᵢⱼ, electric field Eᵢ and electric displacement Dᵢ, are given (in stress-charge form) by [15, 16]

σᵢⱼ = c^E_ijkl ε_kl − e_kij E_k ,
Dᵢ = e_ijk ε_jk + ϵ^S_ik E_k ,   (1)

where c^E_ijkl is the elasticity tensor under short-circuit conditions, e_ijk the piezoelectric coupling tensor and ϵ^S_ik the clamped permittivity. Small mechanical deformations are assumed. The material tensors in (1) vary spatially in composite media and can be replaced by those of an equivalent homogeneous medium. For the composite structure consisting of piezoelectric ceramic rods immersed in a polymer matrix, as shown in Fig. 1, the following conditions are assumed:

1. The length scale of the microstructures of the matrix phase is much smaller than the rod size.
2. There is perfect bonding between the rods and the polymer matrix.
3. The passive phase is a transversely isotropic material with its axis aligned in the x₃ direction.
4. The distribution of the rods is random or hexagonal.

Avellaneda and Swart [14] introduced the effective moduli relevant for hydrostatic performance, which are calculated by

c₁₃ʰ = c₁₃ᵐ + f p (c₁₃ⁱ − c₁₃ᵐ)
e₃₁ʰ = e₃₁ᵐ + f p (e₃₁ⁱ − e₃₁ᵐ)
c₃₃ʰ = c₃₃ᵐ + f [ 1 + (p − 1) (c₁₃ⁱ − c₁₃ᵐ)² / ((κⁱ − κᵐ)(c₃₃ⁱ − c₃₃ᵐ)) ] (c₃₃ⁱ − c₃₃ᵐ)
e₃₃ʰ = e₃₃ᵐ + f [ 1 + (p − 1) (c₁₃ⁱ − c₁₃ᵐ)(e₃₁ⁱ − e₃₁ᵐ) / ((κⁱ − κᵐ)(e₃₃ⁱ − e₃₃ᵐ)) ] (e₃₃ⁱ − e₃₃ᵐ)
ϵ₃₃ʰ = ϵ₃₃ᵐ + f [ 1 + (p − 1) (e₃₁ⁱ − e₃₁ᵐ)² / ((κⁱ − κᵐ)(ϵ₃₃ⁱ − ϵ₃₃ᵐ)) ] (ϵ₃₃ⁱ − ϵ₃₃ᵐ) ,   (2)

where f is the volume fraction of the piezoceramics and p is a microscopic parameter defined as

p = (1/f) (κʰ − κᵐ) / (κⁱ − κᵐ) .   (3)

The superscripts m and i denote the material parameters of the (polymer) matrix phase and the inclusion (ceramic rods) phase, respectively. The transverse bulk and shear moduli of the materials are

κ = (1/2)(c₁₁ + c₁₂) ,  μ = (1/2)(c₁₁ − c₁₂) .   (4)

If a large stiffness contrast between the two phases holds, i.e., the ceramic rods are much stiffer than the polymer matrix (see also [14]), the differential effective medium (DEM) scheme [17–19] can predict the effective moduli of composites. When the volume fraction of one phase (ceramic rods) is incrementally added to the current configuration, the effective bulk and shear moduli of the composite plate are solutions of the system of ordinary differential equations

dκʰ/df = (1/(1 − f)) (κⁱ − κʰ) (κʰ + μʰ)/(κⁱ + μʰ)
dμʰ/df = (1/(1 − f)) (μⁱ − μʰ) 2μʰ(κʰ + μʰ)/[(κⁱ + μʰ)κʰ + 2μʰμⁱ] .   (5)

In the present work we focus on the design of hydrophone devices having optimal values of the effective hydrostatic charge coefficient, which is calculated by

dₕ = [1 1] · (C̃ʰ)⁻¹ ẽʰ ,   (6)

with

C̃ʰ = [ κʰ  c₁₃ʰ ; c₁₃ʰ  c₃₃ʰ ] ,  ẽʰ = [ e₃₁ʰ ; e₃₃ʰ ] .   (7)

A piezocomposite system with high values of dₕ is more efficient in the forward problem, i.e., under mechanical forces more electrical signal is produced. Another quantity that measures the material response is the figure of merit

dₕgₕ = (dₕ)² / ( ϵ₃₃ʰ + (ẽʰ)ᵀ · (C̃ʰ)⁻¹ · ẽʰ ) .   (8)

We use dh and dh gh as the criteria to analyze the performance characteristics of the piezocomposite.
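As a sketch of these measures (assuming the reconstructions of Eqs. (6)-(8) above; the function and variable names are ours), they can be evaluated from the effective moduli as follows:

```python
import numpy as np

def hydrophone_measures(kappa_h, c13_h, c33_h, e31_h, e33_h, eps33_h):
    """Hydrostatic charge coefficient dh, Eq. (6), and figure of merit
    dh*gh, Eq. (8), from the effective moduli of Eq. (2)."""
    C = np.array([[kappa_h, c13_h], [c13_h, c33_h]])   # C-tilde of Eq. (7)
    e = np.array([e31_h, e33_h])                       # e-tilde of Eq. (7)
    Cinv_e = np.linalg.solve(C, e)
    dh = np.ones(2) @ Cinv_e                           # Eq. (6)
    dhgh = dh ** 2 / (eps33_h + e @ Cinv_e)            # Eq. (8)
    return dh, dhgh
```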


3 Materials Microstructure Topology Optimization by the Level Set Method

The concept of representing a domain geometry by a level set function is formulated as

φ(x, τ) > 0 ,  x ∈ Ω (solid)
φ(x, τ) < 0 ,  x ∉ Ω (void)
φ(x, τ) = 0 ,  x ∈ ∂Ω (boundary) ,   (9)

in which the shape boundaries of the unit cell are represented by the zero value φ(x, τ) = 0, as illustrated in Fig. 2. The dynamic evolution of the level set function is governed by the Hamilton-Jacobi (H-J) equation

∂φ/∂τ − V_n |∇φ| = 0 ,   (10)

where

V_n = (dx/dτ) · ∇φ/|∇φ|   (11)

is the normal velocity component. In optimization problems, V_n is chosen such that the gradient of the objective function is negative to ensure that the objective function is minimized, e.g. using the steepest descent method, and new boundaries are obtained by solving (10) with a finite difference scheme [11, 20]. In this work, we utilize the parameterized version [21, 22] in which the level set function is approximated by a sum of radial basis functions R_I(x) and associated expansion coefficients α_I(τ):

φ(x, τ) = Σ_{I=1}^{N} R_I(x) α_I(τ) = R(x) · α(τ) .   (12)

Fig. 2 Level set function


The H-J equation (10) is then written as a system of ordinary differential equations

Rᵀ (dα/dτ) − V_n |(∇R)ᵀ α| = 0  or  V_n = Rᵀ α̇ / |(∇R)ᵀ α| .   (13)

The radial basis functions with C² continuity are given by Wendland [23]:

R_I(x) = (max{0, 1 − r_I})⁴ (4 r_I + 1) ,  with  r_I = ‖x − x_I‖ / r₀ .   (14)
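A small sketch of Eqs. (12) and (14) follows (the knot placement and the value of r₀ are illustrative; the text prescribes r₀ as 3-5 mesh sizes):

```python
import numpy as np

def wendland(x, xI, r0):
    """Compactly supported C2 radial basis function of Eq. (14)."""
    r = np.linalg.norm(x - xI) / r0
    return max(0.0, 1.0 - r) ** 4 * (4.0 * r + 1.0)

def level_set(x, knots, alpha, r0):
    """Parameterized level set of Eq. (12): sum of RBFs times coefficients."""
    return sum(a * wendland(x, xI, r0) for xI, a in zip(knots, alpha))
```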

The support radius r₀ is chosen to be 3-5 times the mesh size in order to have sufficient knots in the neighborhood of any node x_I. In the parameterized version, the expansion coefficients α_I are considered as design variables and are updated by a gradient-based mathematical programming optimizer; in the present work the method of moving asymptotes [24] is used. We aim at designing the elastic properties of the matrix material such that the effective performance coefficients mentioned in Sect. 2 are improved. For materials that have a periodic arrangement of microstructures in given directions, the material properties can be modeled as a homogeneous medium as long as the size of the microstructures is much smaller than the operating wavelength. The elastic tensor Cᵐ is obtained from homogenization theory, in which the two following conditions are satisfied: (1) periodic arrangement of unit cells, and (2) separated length scales between the macroscopic and unit cells. The components of the homogenized elastic tensor are averages of the mutual strain energy over the unit cell [9, 25, 26]:

C^m_ijkl = ∫_Y ( ε^{0,ij}_pq − ε_pq(χ^ij) ) C_pqrs ( ε^{0,kl}_rs − ε_rs(χ^kl) ) H(φ) dΩ ,   (15)

where the strain field ε(χ^kl) is the solution to

∫_Y ( ε^{0,ij}_pq − ε_pq(χ^ij) ) C_pqrs ε_rs(v^ij) H(φ) dΩ = 0 ,  ∀v ∈ V₀ʰ .   (16)

We solve the system (16) on a cubic unit cell with six independent test-strain cases ε^{0,ij} (ij = 11, 22, 33, 23, 13, 12), and periodic boundary conditions are applied on opposite faces. For the sensitivity calculation, a general Lagrange function for a component of the effective elastic tensor C^m_ijkl is formulated as

J(χ, v, φ) = g(χ, φ) + a(χ, v, φ) − l(v, φ) ,   (17)

with

g(χ, φ) = ∫_Y ( ε^{0,ij}_pq − ε_pq(χ^ij) ) C_pqrs ( ε^{0,kl}_rs − ε_rs(χ^kl) ) H(φ) dΩ
a(χ, v, φ) = ∫_Y ε_pq(χ^ij) C_pqrs H(φ) ε_rs(v) dΩ
l(v, φ) = ∫_Y ε^{0,ij}_pq C_pqrs H(φ) ε_rs(v) dΩ .   (18)


We use shape derivatives of the Lagrange function (following Choi et al. [27]) such that

J̇ = g(χ̇, φ) + g(χ, φ̇) + a(χ̇, v, φ) + a(χ, v̇, φ) + a(χ, v, φ̇) − l(v̇, φ) − l(v, φ̇) ,   (19)

with

g(χ̇, φ) = −2 ∫_Y ε_pq(χ̇^ij) C_pqrs H(φ(y)) ( ε^{0,kl}_rs − ε_rs(χ^kl) ) dΩ
g(χ, φ̇) = ∫_Y ( ε^{0,ij}_pq − ε_pq(χ^ij) ) C_pqrs ( ε^{0,kl}_rs − ε_rs(χ^kl) ) δ(φ(y)) |∇φ| V_n dΩ
a(χ, v, φ̇) = ∫_Y ε_pq(χ^ij) C_pqrs ε_rs(v) δ(φ) |∇φ| V_n dΩ
a(χ̇, v, φ) = ∫_Y ε_pq(χ̇^ij) C_pqrs ε_rs(v) H(φ(y)) dΩ
a(χ, v̇, φ) = ∫_Y ε_pq(χ^ij) C_pqrs ε_rs(v̇) H(φ(y)) dΩ
l(v, φ̇) = ∫_Y ε^{0,ij}_pq C_pqrs ε_rs(v) δ(φ) |∇φ| V_n dΩ
l(v̇, φ) = ∫_Y ε^{0,ij}_pq C_pqrs H(φ(y)) ε_rs(v̇) dΩ .   (20)

All terms containing v̇ can be eliminated since the following relation holds:

a(χ, v̇, φ) − l(v̇, φ) = 0 ,  ∀v̇ ∈ V₀ʰ .   (21)

The time derivative of the characteristic displacement χ̇ in (19) is eliminated by forming the following adjoint equation, which in general should be solved for v:

g(χ̇, φ) + a(χ̇, v, φ) = 0 ,   (22)

or

∫_Y ε_pq(χ̇^ij) C_pqrs H(φ(y)) ε_rs(v) dΩ = 2 ∫_Y ε_pq(χ̇^ij) C_pqrs H(φ(y)) ( ε^{0,kl}_rs − ε_rs(χ^kl) ) dΩ .   (23)

In fact Eq. (23) is a self-adjoint problem, and by properly choosing the function v such that

ε_rs(v) = 2 ( ε^{0,kl}_rs − ε_rs(χ^kl) ) ,   (24)

Equation (23) is satisfied. By comparing (19) and (23), the time derivative of the effective elasticity tensor is

∂C^m_ijkl/∂τ = − ∫_Y ( ε^{0,ij}_pq − ε_pq(χ^ij) ) C_pqrs ( ε^{0,kl}_rs − ε_rs(χ^kl) ) δ(φ) |∇φ| V_n dΩ .   (25)


It can be written in terms of the parameterized level set function (13) as

∂C^m_ijkl/∂τ = − Σ_{I=1}^{N} [ ∫_Y ( ε^{0,ij}_pq − ε_pq(χ^ij) ) C_pqrs δ(φ) ( ε^{0,kl}_rs − ε_rs(χ^kl) ) R_I(x) dΩ ] α̇_I .   (26)

Furthermore, the time derivative of the effective elasticity tensor can be obtained by applying the chain rule:

∂C^m_ijkl/∂τ = Σ_{I=1}^{N} (∂C^m_ijkl/∂α_I) (∂α_I/∂τ) .   (27)

By comparing (27) and (26), the sensitivity of the effective elasticity tensor with respect to the design variable α_I is

∂C^m_ijkl/∂α_I = − ∫_Y ( ε^{0,ij}_pq − ε_pq(χ^ij) ) C_pqrs ( ε^{0,kl}_rs − ε_rs(χ^kl) ) δ(φ) R_I(x) dΩ .   (28)

Similarly, the sensitivity of the volume constraint is given as

∂V/∂α_I = ∫_Y δ(φ) R_I(x) dΩ .   (29)

In the above, the Heaviside function is numerically regularized as

H(φ) = ξ ,  φ < −Δ
H(φ) = (3/4)( φ/Δ − φ³/(3Δ³) ) + 1/2 ,  −Δ ≤ φ ≤ Δ
H(φ) = 1 ,  φ > Δ ,   (30)

in which δ(φ) = H′(φ), Δ is the width of the numerical approximation, and ξ is a small positive number chosen to avoid numerical instability problems.
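A sketch of Eq. (30) and of its derivative, which supplies the δ(φ) used in the sensitivities (28)-(29) (Δ and ξ as in the text; the default value of ξ is our assumption):

```python
def heaviside(phi, Delta, xi=1e-3):
    """Regularized Heaviside of Eq. (30)."""
    if phi < -Delta:
        return xi
    if phi > Delta:
        return 1.0
    return 0.75 * (phi / Delta - phi ** 3 / (3.0 * Delta ** 3)) + 0.5

def delta(phi, Delta):
    """delta(phi) = H'(phi): derivative of the smooth branch, zero outside."""
    if abs(phi) > Delta:
        return 0.0
    return 0.75 * (1.0 - phi ** 2 / Delta ** 2) / Delta
```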

4 Optimization Algorithm

In this section, we outline the design procedure for microstructures of the passive phase (matrix) by topology optimization. The aim is to maximize the hydrostatic charge coefficient dₕ; the corresponding optimization problem is

maximize  dₕ
s.t.  a(u, v, φ) = l(v, φ)  (equilibrium equation)
      V(φ) = ∫_Y H(φ) dΩ ≤ v_f  (volume constraint)
      α_min ≤ α_i ≤ α_max  (i = 1, 2, …, N) .   (31)


The following steps describe the optimization procedure:

1. Set the initial configuration of the unit cell, generated with level set values.
2. Calculate the effective moduli C^m_ijkl of the matrix phase by the homogenization method (15). Compute the effective bulk and shear moduli κᵐ, μᵐ.
3. Solve the ODEs in (5) by a numerical integration scheme with step size Δf = 0.005 for f ∈ [0, 1]. The initial values of the bulk modulus κʰ = κᵐ and the shear modulus μʰ = μᵐ are taken at volume fraction f = 0.
4. Determine the microscopic parameter p from (3).
5. Evaluate the effective moduli (2) and the hydrostatic charge coefficient (6). Then determine the maximum value dₕ^max and the corresponding volume fraction f^max.
6. Perform the sensitivity analysis of dₕ and update the design variables α_I by MMA.
7. Repeat from step 2 until convergence.

The sensitivity calculation of the hydrostatic charge coefficient, ∂dₕ/∂α_I, requires the sensitivities ∂C^m_ijkl/∂α_I, which were derived in Sect. 3.
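A hedged sketch of steps 3 and 4 follows (the step size Δf = 0.005 is taken from the text, while the forward Euler scheme and all variable names are our assumptions); dₕ can then be evaluated from the moduli of Eq. (2) with a routine like the hydrophone_measures sketch of Sect. 2:

```python
import numpy as np

def dem_effective_bulk_shear(kappa_m, mu_m, kappa_i, mu_i, df=0.005):
    """Integrate the DEM ODEs of Eq. (5) from f = 0 towards f = 1 and
    compute the microscopic parameter p of Eq. (3) along the way."""
    f_values = np.arange(0.0, 1.0, df)
    kappa_h = np.empty_like(f_values)
    mu_h = np.empty_like(f_values)
    kh, mh = kappa_m, mu_m                       # initial values at f = 0
    for n, f in enumerate(f_values):
        kappa_h[n], mu_h[n] = kh, mh
        dk = (kappa_i - kh) * (kh + mh) / (kappa_i + mh) / (1.0 - f)
        dm = (mu_i - mh) * 2.0 * mh * (kh + mh) \
             / ((kappa_i + mh) * kh + 2.0 * mh * mu_i) / (1.0 - f)
        kh, mh = kh + dk * df, mh + dm * df      # explicit Euler update
    p = (kappa_h[1:] - kappa_m) / (f_values[1:] * (kappa_i - kappa_m))  # Eq. (3)
    return f_values, kappa_h, mu_h, p
```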

5 Results

The hydrostatic charge coefficients predicted by the DEM scheme are plotted in Fig. 3 for the 1-3 piezocomposite consisting of PZT5A rods embedded in a Stycast matrix. The material properties are listed in Table 1. For the hydrostatic application, the induced dielectric displacement is given by

D₃ = dₕ T ,   (32)


Fig. 3 Effective hydrostatic charge coefficient of the 1-3 piezocomposite (a). Enhancement of dh due to Poisson’s ratio (b)

Table 1 Material parameters

Parameter            PZT5A     Stycast
c₁₁ᴱ (10⁹ N/m²)      120.34    12.3
c₁₂ᴱ (10⁹ N/m²)      75.17     5.2
c₁₃ᴱ (10⁹ N/m²)      75.09     5.2
c₃₃ᴱ (10⁹ N/m²)      110.86    12.3
e₃₁ (C/m²)           −5.35     0
e₃₃ (C/m²)           15.7835   0
ϵ₃₃ˢ/ϵ₀              826.6     4

where T is the hydrostatic pressure magnitude. Therefore, a large value of dₕ is desired to maximize the efficiency of hydrophone devices. If the ceramic rods are poled in the x₃-direction, the hydrostatic charge coefficient dₕ is also given by

dₕ = 2 d₃₁ + d₃₃ .   (33)

Since the values of d₃₁ have the opposite sign to d₃₃, their impact on dₕ is small compared to d₃₃, as shown in Fig. 3a. We next assume that the polymer matrix is isotropic with Young's modulus E = 9.22 × 10⁹ N/m² and plot the evolution of dₕ for different values of Poisson's ratio νᵐ ∈ [−0.5, 0.4] in Fig. 3b. The hypothetical effects of Poisson's ratio are significant. If the polymer matrix has a negative Poisson's ratio, a compressive stress in the x₁–x₂ plane is converted into a vertical compressive stress bearing on the ceramic rods, which generates positive values of d₃₁. The enhancement of dₕ by negative Poisson's ratio was predicted by Smith [3]. The optimization algorithm of Sect. 4 is applied to find material microstructures of the matrix phase. We use a cubic domain discretized with 50³ trilinear elements. The geometries are represented by level set functions and the knots coincide with the finite element nodes. The solid parts of the unit cells consist of Stycast, and the void domains are filled with a pseudo material that is 10⁶ times softer to avoid singularity. The calculation of the effective moduli in Sect. 2 requires the polymer matrix to be transversely isotropic. An additional constraint for the isotropy in the x₁–x₂ plane is therefore added to the optimization problem (31):

[ (c₁₁ᵐ − c₂₂ᵐ)² + (c₃₁ᵐ − c₃₂ᵐ)² ] / (κᵐ)² ≤ tol .   (34)

Optimized microstructures are shown in Table 2 with their corresponding elastic properties. The elastic properties show negative Poisson's ratio in the planes x₃–x₁ and x₃–x₂, as predicted. As the unit cells are cubic domains, we only obtain microstructures that meet the transverse isotropy constraint within an acceptable tolerance. Since the size of the microstructures is smaller than the rod diameter, the transverse isotropy condition in the matrix phase is preserved and the calculation method (2) for the effective moduli of the 1-3 piezocomposite is applicable.


Table 2 Designs of microstructures of the polymer matrix

Figure 4 plots the enhancement of dₕ for the 3D microstructures. Positive values of d₃₁ occur when the volume fraction f ≤ 0.2, and the improvements are observed with larger magnitudes of dₕ at a volume fraction f ≈ 0.1. The figures of merit dₕgₕ can also be improved with the designs. This is in contrast to the conventional material with positive Poisson's ratio (Stycast, Fig. 3a).

Fig. 4 Effective hydrostatic charge and electrical voltage coefficients of the 1-3 piezocomposite. Top and bottom with the designs (a) and (b) in Table 2, respectively

6 Conclusion

We used topology optimization with the level set method to optimize microstructures of the polymer matrix such that the performance of the 1-3 piezoelectric composite is enhanced. The optimized microstructures exhibit negative Poisson's ratio, which


allows the electric charge coefficient d₃₁ to be positive compared to the conventional polymer material, and an enhancement is obtained in the effective hydrostatic charge coefficient dₕ. The unit cell configuration was represented by the level set function, which makes it possible to distinguish the boundaries between void and solid domains. This way, no checkerboard problem occurs and the prototype can be printed directly with a 3D printer. We used unit cells defined in a cubic domain, and an additional isotropy constraint in the x₁–x₂ plane was added to the optimization problem. These constraints can influence the optimized results due to the limitation of the design space. Unit cells with hexagonal cross-section could be an alternative approach in future work.

References

1. Haun, M. J., Moses, P., Gururaja, T. R., Schulze, W. A., & Newnham, R. E. (1983). Transversely reinforced 1-3 and 1-3-0 piezoelectric composites. Ferroelectrics, 49(1), 259–264.
2. Haun, M. J., & Newnham, R. E. (1986). An experimental and theoretical study of 1-3 and 1-3-0 piezoelectric PZT-polymer composites for hydrophone applications. Ferroelectrics, 68(1), 123–139.


3. Smith, W. A. (1991). Optimizing electromechanical coupling in piezocomposites using polymers with negative Poisson's ratio. In IEEE 1991 Ultrasonics Symposium (Vol. 1, pp. 661–666).
4. Smith, W. A. (1993). Modeling 1-3 composite piezoelectrics: Hydrostatic response. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, 40(1), 41–49.
5. Sigmund, O., Torquato, S., & Aksay, I. A. (1998). On the design of 1-3 piezocomposites using topology optimization. Journal of Materials Research, 13(4), 1038–1048.
6. Gibiansky, L. V., & Torquato, S. (1997). Optimal design of 1-3 composite piezoelectrics. Structural Optimization, 13(1), 23–28.
7. Silva, E. C. N., Fonseca, J. S. O., & Kikuchi, N. (1998). Optimal design of periodic piezocomposites. Computer Methods in Applied Mechanics and Engineering, 159(1), 49–77.
8. Silva, E. C. N., Fonseca, J. S. O., de Espinosa, F. M., Crumm, A. T., Brady, G. A., Halloran, J. W., et al. (1999). Design of piezocomposite materials and piezoelectric transducers using topology optimization—Part I. Archives of Computational Methods in Engineering, 6(2), 117–182.
9. Sigmund, O. (1994). Materials with prescribed constitutive parameters: An inverse homogenization problem. International Journal of Solids and Structures, 31(17), 2313–2329.
10. Sigmund, O. (1997). On the design of compliant mechanisms using topology optimization. Mechanics of Structures and Machines, 25(4), 493–524.
11. Osher, S., & Fedkiw, R. (2003). Level set methods and dynamic implicit surfaces (Vol. 153). Springer.
12. Allaire, G., Jouve, F., & Toader, A. M. (2002). A level-set method for shape optimization. Comptes Rendus Mathematique, 334(12), 1125–1130.
13. Wang, M. Y., Wang, X., & Guo, D. (2003). A level set method for structural topology optimization. Computer Methods in Applied Mechanics and Engineering, 192(1–2), 227–246.
14. Avellaneda, M., & Swart, P. J. (1998). Calculating the performance of 1-3 piezoelectric composites for hydrophone applications: An effective medium approach. The Journal of the Acoustical Society of America, 103(3), 1449–1467.
15. Mason, W. P., & Baerwald, H. (1951). Piezoelectric crystals and their applications to ultrasonics. Physics Today, 4(5), 23–24.
16. Lerch, R. (1990). Simulation of piezoelectric devices by two- and three-dimensional finite elements. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, 37(3), 233–247.
17. McLaughlin, R. (1977). A study of the differential scheme for composite materials. International Journal of Engineering Science, 15(4), 237–244.
18. Dunn, M. L., & Taya, M. (1993). Micromechanics predictions of the effective electroelastic moduli of piezoelectric composites. International Journal of Solids and Structures, 30(2), 161–175.
19. Norris, A. N., Callegari, A. J., & Sheng, P. (1985). A generalized differential effective medium theory. Journal of the Mechanics and Physics of Solids, 33(6), 525–543.
20. Allaire, G., Jouve, F., & Toader, A. (2004). Structural optimization using sensitivity analysis and a level-set method. Journal of Computational Physics, 194(1), 363–393.
21. Belytschko, T., Xiao, S. P., & Parimi, C. (2003). Topology optimization with implicit functions and regularization. International Journal for Numerical Methods in Engineering, 57(8), 1177–1196.
22. Wang, M. Y., & Wang, X. (2004). "Color" level sets: A multi-phase method for structural topology optimization with multiple materials. Computer Methods in Applied Mechanics and Engineering, 193(6–8), 469–496.
23. Wendland, H. (1995). Piecewise polynomial, positive definite and compactly supported radial functions of minimal degree. Advances in Computational Mathematics, 4(1), 389–396.
24. Svanberg, K. (1987). The method of moving asymptotes—A new method for structural optimization. International Journal for Numerical Methods in Engineering, 24(2), 359–373.
25. Guedes, J., & Kikuchi, N. (1990). Preprocessing and postprocessing for materials based on the homogenization method with adaptive finite element methods. Computer Methods in Applied Mechanics and Engineering, 83(2), 143–198.


26. Hassani, B., & Hinton, E. (1998). Homogenization and structural topology optimization: Theory, practice and software. Springer Science & Business Media.
27. Choi, K. K., & Kim, N. H. (2006). Structural sensitivity analysis and optimization. Mechanical Engineering Series. New York: Springer.

Fracture and Fatigue

Treatment of Brittle Fracture in Solids with the Virtual Element Method

A. Hussein, P. Wriggers, B. Hudobivnik, F. Aldakheel, P.-A. Guidault and O. Allix

Abstract Computational mechanics has many applications in engineering, and its range of application has widened considerably in recent decades. New developments continue to be made, among them a new discretization scheme: the virtual element method (VEM). Despite being under development for only a few years, the application range of VEM in engineering already includes formulations for linear and nonlinear material response. In this contribution the focus is on fracture mechanics. In particular, the treatment of crack propagation is discussed, where VEM has some advantages. The performance of the formulation is underlined by means of representative examples.

1 Introduction

The finite element method (FEM) is a well established tool for solving a wide range of boundary value problems in science and engineering, see e.g. [1, 2]. In recent years, however, different methods such as the isogeometric analysis outlined in [3] and the virtual element method proposed in [4] have been introduced as tools that bring new features to the numerical solution of problems in solid mechanics. The virtual element method is a generalization of the finite element method, inspired by modern mimetic finite difference schemes rooted in the pioneering work in [5]. It has proven to be a competitive discretization scheme for meshes with irregularly shaped elements that can even become non-convex. Furthermore, VEM offers flexibility with regard to mesh generation and choice of element shapes, e.g. the use of very general polygonal and polyhedral meshes. In this regard, a stabilization procedure is required in the virtual element method, as described


in [6] for linear Poisson problems. So far, applications of virtual elements have been devoted to linear elastic deformations in [7], contact problems in [8], discrete fracture network simulations in [9–12], isotropic damage in [13] and phase field modeling of fracture in [14]. In this contribution, we examine the efficiency of the VEM for predicting the failure mechanisms in solids due to crack initiation and propagation. The fracture of solids can be numerically modelled by discontinuous approaches, where the displacement field is allowed to have a jump across the crack surface. Several methods can be used in this regard, such as (i) the meshless (element-free Galerkin) method [15, 16], (ii) the boundary element method [17, 18] and (iii) the extended finite element method (XFEM) [19, 20]. Modelling fracture with these methods is based on the calculation of the stress intensity factors (SIFs), which are fundamental quantities in fracture mechanics measuring the strength of the stress singularity in the vicinity of a crack tip. In the literature, several methods have been proposed to calculate stress intensity factors. Among these are the displacement extrapolation method [21], the crack opening displacement (COD) [22], the virtual crack extension [23], the J-integral [24, 25] and the interaction integral method [26]. In [27, 28] special element types were developed, based on the 8-node isoparametric formulation (quarter-point elements), to improve the accuracy of the singular field. With such elements, the r^(−1/2) singularity can be captured when the mid-side nodes are moved to the quarter position near the crack tip. A hybrid crack element was introduced in [29, 30], which leads to a direct and accurate computation of the SIFs as well as of the coefficients of the higher order terms in the elastic asymptotic crack tip field. The above mentioned approaches treat the fracture of solids as a discontinuous interface. Alternatively, the modeling of crack formation can be achieved in a convenient way by continuum phase-field approaches to fracture, which are based on the regularization of sharp crack discontinuities. Phase-field modeling of fracture has attracted considerable attention in recent years due to its capability of capturing complex crack patterns in various problems in solid mechanics. Many efforts have focused on the regularized modeling of Griffith-type brittle fracture in elastic solids. In this regard, Miehe et al. [31] proposed a phase-field approach to fracture with a local irreversibility constraint on the crack phase-field. It incorporates regularized crack surface density functions as central constitutive objects, motivated in a descriptive format based on geometric considerations. Recent works on brittle fracture have been devoted to the dynamic case in [32], cohesive fracture in [33], the multiplicative decomposition of the deformation gradient into compressive-tensile parts in [34], different choices of degradation functions in [35, 36], a new fast hybrid formulation in [37], a possible formula for the length scale estimation in [38], the description of hydraulic fracturing in [39, 40], fatigue effects in brittle materials in [41], crack penetration or deflection at an interface in [42] and the material point method in [43]. The paper is organized as follows: Sect. 2 outlines the governing equations for the brittle fracture problem.
Section 3 is devoted to the presentation and construction of the virtual element method. Section 4 discusses the modelling techniques for crack propagation when using arbitrary virtual elements. Section 5 presents examples


of crack propagation that demonstrate the modelling capabilities of the proposed approach. Finally, Sect. 6 provides a summary and some concluding remarks. For comparison purposes, results of the standard finite element method (FEM) are also presented.

2 Governing Equations of Brittle Fracture

For the treatment of brittle fracture, we consider a linear elastic structure, whose mathematical description is summarized in Sect. 2.1. The theoretical formulation of crack growth using the stress intensity factors and the phase-field approach is presented in Sects. 2.2 and 2.3. The equations are written in such a way that an automatic derivation can be performed with the software tool AceGen.

2.1 Basic Equations of the Elastic Body

Consider an elastic body Ω ⊂ R² bounded by ∂Ω. As shown in Fig. 1, the boundary ∂Ω is subdivided into a Neumann part ∂Ω_t, a Dirichlet part ∂Ω_u and a discontinuity interface Γ_c (a crack line in 2-D, a crack surface in 3-D) such that ∂Ω = ∂Ω_t ∪ ∂Ω_u ∪ Γ_c. The discontinuity Γ_c is composed of the upper Γ_c⁺ and the lower Γ_c⁻ crack faces. The equilibrium equation of the solid body Ω reads

\nabla \cdot \boldsymbol{\sigma} + \boldsymbol{f} = \boldsymbol{0} \quad \text{in } \Omega, \qquad (1)

where ∇ is the gradient operator, σ the Cauchy stress tensor and f the body force vector per unit volume. The crack face Γc is assumed to be traction-free, thus

Fig. 1 Definition of solid with a crack and boundary conditions


\boldsymbol{\sigma} \cdot \boldsymbol{N}_c^+ = \boldsymbol{0} \quad \text{on } \Gamma_c^+, \qquad (2)

\boldsymbol{\sigma} \cdot \boldsymbol{N}_c^- = \boldsymbol{0} \quad \text{on } \Gamma_c^-, \qquad (3)

where N_c⁺ and N_c⁻ are the outward unit normal vectors defined on Γ_c⁺ and Γ_c⁻, respectively. The Neumann and Dirichlet boundary conditions on ∂Ω_t and ∂Ω_u are given by

\boldsymbol{\sigma} \cdot \boldsymbol{N} = \bar{\boldsymbol{t}} \quad \text{on } \partial\Omega_t, \qquad (4)

\boldsymbol{u} = \bar{\boldsymbol{u}} \quad \text{on } \partial\Omega_u, \qquad (5)

where N is the outward unit normal vector of the boundary, u the displacement vector, ū the prescribed displacement on ∂Ω_u and t̄ the surface traction on ∂Ω_t. We consider that the body undergoes small deformations; hence the strain ε is described by

\boldsymbol{\varepsilon} = \frac{1}{2}\left(\nabla\boldsymbol{u} + \nabla^T\boldsymbol{u}\right). \qquad (6)

For a homogeneous isotropic linear elastic material the strain energy density function ψ can be expressed as

\psi(\boldsymbol{\varepsilon}) = \frac{\lambda}{2}\,\mathrm{tr}^2(\boldsymbol{\varepsilon}) + \mu\,\mathrm{tr}(\boldsymbol{\varepsilon}^2), \qquad (7)

where λ and μ are the Lamé constants. The Cauchy stress tensor σ is obtained from the strain energy ψ in (7) by

\boldsymbol{\sigma} = \frac{\partial\psi}{\partial\boldsymbol{\varepsilon}} = \lambda\,\mathrm{tr}(\boldsymbol{\varepsilon})\,\boldsymbol{I} + 2\mu\,\boldsymbol{\varepsilon}, \qquad (8)

where I is the second order unit tensor. The variational formulation of the equilibrium equation (1) can be obtained from various variational principles: the principle of virtual work, the Hu-Washizu principle, or the principle of stationary elastic potential. In this work, the last option is chosen. With this principle, the potential of the problem can be defined in terms of the strain energy density function ψ and the external load Π_ex as

\Pi(\boldsymbol{u}) = \int_\Omega \psi(\boldsymbol{\varepsilon}(\boldsymbol{u}))\, dV - \underbrace{\left( \int_\Omega \boldsymbol{f} \cdot \boldsymbol{u}\, dV + \int_{\partial\Omega_t} \bar{\boldsymbol{t}} \cdot \boldsymbol{u}\, dA \right)}_{\Pi_{ex}}. \qquad (9)


2.2 Crack Propagation Based on Stress Intensity Factors

In this work, the interaction integral (I-integral) is used to determine the stress intensity factors in the case of mixed-mode loading [26]. This integral is based on the classical J-integral. For a homogeneous isotropic linear elastic material the J-integral

J = \int_\Gamma \left( \psi^s \delta_{1j} - \sigma_{ij} \frac{\partial u_i}{\partial x_1} \right) N_j \, d\Gamma \qquad (10)

is path independent, where ψ^s is the strain energy density given by

\psi^s = \frac{1}{2}\,\sigma_{ij}\,\varepsilon_{ij}, \qquad (11)

and u_i is the displacement field, N_j the outward normal vector to the integration contour Γ, δ_1j the Kronecker delta; σ_ij and ε_ij denote the stress and strain tensors, respectively. For simplicity, tensor and vector components are used in this section. For a linear elastic material, Rice [44] demonstrated that the J-integral of a contour around the crack tip equals the energy release rate g_c. The relationship between these quantities (J, g_c) and the stress intensity factors can be written as

J = g_c = \frac{K_I^2 + K_{II}^2}{E'}, \qquad (12)

where E' is given in terms of the Young's modulus E and the Poisson's ratio ν as

E' = \begin{cases} E, & \text{plane stress} \\ E/(1-\nu^2), & \text{plane strain.} \end{cases} \qquad (13)

In a mixed-mode fracture problem, it is difficult to use the classical form of the J-integral to calculate the individual stress intensity factor of each fracture mode separately. To overcome this difficulty, an interaction integral (I-integral) based on two independent equilibrium states of a cracked body was introduced [45]. The first state represents the actual fields (u_i, ε_ij, σ_ij) computed numerically, and the second state defines the auxiliary fields (u_i^(aux) = K_I^(aux) g_i^I + K_II^(aux) g_i^II, σ_ij^(aux) = K_I^(aux) f_ij^I + K_II^(aux) f_ij^II, ε_ij^(aux)) in terms of Williams' functions (g_i^(I,II), f_ij^(I,II)) [46]. By superimposing these states, the I-integral of the new equilibrium state can be defined as

I = \int_\Gamma \left( \psi^{(aux)} \delta_{1j} - \sigma_{ij} \frac{\partial u_i^{(aux)}}{\partial x_1} - \sigma_{ij}^{(aux)} \frac{\partial u_i}{\partial x_1} \right) N_j \, d\Gamma, \qquad (14)


with the interaction strain energy density ψ^(aux) = σ_ij ε_ij^(aux) = σ_ij^(aux) ε_ij. Following the work of Moës et al. [47], the interaction integral of the actual and auxiliary states can be expressed in terms of the stress intensity factors as

I = \frac{2}{E'} \left( K_I K_I^{(aux)} + K_{II} K_{II}^{(aux)} \right). \qquad (15)

By setting K_I^(aux) = 1 and K_II^(aux) = 0 in (15), the stress intensity factor K_I for mode I is obtained as

K_I = \frac{E'}{2}\, I, \quad \text{with } u_i^{(aux)} = K_I^{(aux)} g_i^I \text{ and } \sigma_{ij}^{(aux)} = K_I^{(aux)} f_{ij}^I. \qquad (16)

Similarly, the same approach yields the stress intensity factor K_II for mode II:

K_{II} = \frac{E'}{2}\, I, \quad \text{with } u_i^{(aux)} = K_{II}^{(aux)} g_i^{II} \text{ and } \sigma_{ij}^{(aux)} = K_{II}^{(aux)} f_{ij}^{II}. \qquad (17)

Due to the mixed-mode loading, an equivalent stress intensity factor K_aq(K_I, K_II) was introduced to govern crack initiation [48]. When K_aq exceeds a critical material parameter (typically the critical fracture toughness K_Ic),

K_{aq}(K_I, K_{II}) \geq K_{Ic}, \qquad (18)

the crack begins to propagate, as outlined in [48]. Once a crack has been initiated, the direction of growth can be predicted using different propagation criteria. In this paper, the maximum circumferential stress criterion (MCSC) is considered. The circumferential stress σ_θθ and the shear stress τ_rθ in the neighbourhood of the crack tip can be written as

\sigma_{\theta\theta} = \frac{1}{\sqrt{2\pi r}} \left[ \frac{K_I}{4} \left( 3\cos\frac{\theta}{2} + \cos\frac{3\theta}{2} \right) + \frac{K_{II}}{4} \left( -3\sin\frac{\theta}{2} - 3\sin\frac{3\theta}{2} \right) \right], \qquad (19)

\tau_{r\theta} = \frac{1}{2\sqrt{2\pi r}} \cos\frac{\theta}{2} \left[ K_I \sin\theta + K_{II} (3\cos\theta - 1) \right]. \qquad (20)

Now it is assumed that the crack propagates from the crack tip in the direction of the maximum circumferential stress. For this purpose, the shear stress in (20) is set to zero; after some rearranging we obtain the propagation angle θ_c,l as a function of the stress intensity factors

\theta_{c,l} = -2 \arctan\left[ \frac{2 K_{II}}{K_I \left( 1 + \sqrt{1 + 8\,(K_{II}/K_I)^2} \right)} \right]. \qquad (21)
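As a minimal illustration of the MCSC relations (18)-(21), the following Python sketch evaluates the propagation angle from given stress intensity factors. The function name and the sample values are ours, not from the chapter, and K_I ≥ 0 is assumed:

```python
import math

def propagation_angle(K_I, K_II):
    """Crack propagation angle (rad) from the maximum circumferential
    stress criterion, Eq. (21); K_I >= 0 is assumed."""
    if K_II == 0.0:
        return 0.0                       # pure mode I: straight-ahead growth
    if K_I == 0.0:                       # pure mode II limit of Eq. (21)
        return -math.copysign(2.0 * math.atan(1.0 / math.sqrt(2.0)), K_II)
    r = K_II / K_I
    return -2.0 * math.atan(2.0 * r / (1.0 + math.sqrt(1.0 + 8.0 * r**2)))

print(math.degrees(propagation_angle(1.0, 0.5)))  # approx. -40.2 deg
print(math.degrees(propagation_angle(0.0, 1.0)))  # approx. -70.5 deg (mode II)
```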


2.3 Phase-Field Approach for Brittle Crack Propagation

Now we consider the situation where the response of the fracturing solid at material points x ∈ Ω and time t is described by the displacement field u(x, t) and the crack phase-field d(x, t), with d(x, t) = 0 and d(x, t) = 1 for the unbroken and the fully broken state of the material, respectively. For the phase-field problem, the sharp crack topology Γ_c → Γ_l is regularized by the crack surface functional, as outlined in Miehe et al. [31],

\Gamma_l(d) = \int_\Omega \gamma_l(d, \nabla d)\, dV \quad \text{with} \quad \gamma_l(d, \nabla d) = \frac{1}{2l}\, d^2 + \frac{l}{2}\, |\nabla d|^2, \qquad (22)

based on the crack surface density function γ_l and the fracture length scale l that governs the width of the diffuse crack, as plotted in Fig. 2. In a purely geometric approach to phase-field fracture, the regularized crack phase-field d is obtained from the minimization principle

d = \mathrm{Arg}\left\{ \inf_d \Gamma_l(d) \right\} \quad \text{with} \quad d = 1 \text{ on } \Gamma_c \subset \Omega, \qquad (23)

yielding the Euler equation d - l²Δd = 0 in Ω along with the Neumann boundary condition ∇d · N = 0 on ∂Ω. Additionally, a monotonic growth ḋ ≥ 0 of the fracture phase-field has to be ensured (Miehe et al. [31]). The total specific potential of the phase-field problem can be written as the sum of the elastic and fracture specific potentials:

W(\boldsymbol{\varepsilon}, d, \nabla d) = W_{elas}(\boldsymbol{\varepsilon}, d) + W_{frac}(d, \nabla d). \qquad (24)

Fig. 2 Solid with a regularized crack and boundary conditions
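To make the role of the length scale l in (22)-(23) concrete, the short sketch below solves the one-dimensional Euler equation d - l²d'' = 0 with d = 1 at the crack and recovers the exponential profile d(x) ≈ exp(-x/l); the discretization and the parameter values are our own choices:

```python
import numpy as np

# 1-D regularized crack profile: solve d - l^2 d'' = 0 on (0, L) with
# d(0) = 1 (crack) and d(L) = 0, by central finite differences.
# The exact profile is d(x) = exp(-x / l) for L >> l, cf. Eqs. (22)-(23).
l, L, n = 0.1, 1.0, 200                      # length scale, domain, intervals
x = np.linspace(0.0, L, n + 1)
h = x[1] - x[0]
A = np.zeros((n + 1, n + 1))
for i in range(1, n):
    A[i, i - 1] = -l**2 / h**2
    A[i, i]     = 1.0 + 2.0 * l**2 / h**2
    A[i, i + 1] = -l**2 / h**2
A[0, 0] = A[n, n] = 1.0                      # Dirichlet rows
b = np.zeros(n + 1); b[0] = 1.0
d = np.linalg.solve(A, b)
print(np.max(np.abs(d - np.exp(-x / l))))    # small discretization error (~1e-4)
```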


The specific fracture energy can be defined as the product of the surface density function γ_l of (22) and Griffith's critical energy release rate g_c, yielding

W_{frac}(d, \nabla d) = g_c\, \gamma_l(d, \nabla d). \qquad (25)

The original phase-field formulation proposed in [49] consists of the whole elastic energy contribution multiplied by a degradation function g(d),

W_{elas}(\boldsymbol{\varepsilon}, d) = g(d)\, \psi(\boldsymbol{\varepsilon}), \qquad (26)

where g(d) = (1 - d)² models the degradation of the stored elastic energy of the solid due to fracture. It interpolates between the unbroken response at d = 0 and the fully broken state at d = 1, satisfying the constraints g(0) = 1, g(1) = 0, g'(d) ≤ 0 and g'(1) = 0. In order to enforce crack evolution only in tension, the stored elastic energy of the solid is additively decomposed into a positive part ψ⁺ due to tension and a negative part ψ⁻ due to compression, as outlined in the work of Miehe et al. [31]:

W_{elas}(\boldsymbol{\varepsilon}, d) = g(d)\, \psi^+ + \psi^- \quad \text{with} \quad \psi^\pm = \frac{\lambda}{2} \langle \mathrm{tr}[\boldsymbol{\varepsilon}] \rangle_\pm^2 + \mu\, \mathrm{tr}\!\left[ (\boldsymbol{\varepsilon}^\pm)^2 \right], \qquad (27)

in terms of the bracket operators ⟨·⟩_± = (· ± |·|)/2, the positive strain tensor ε⁺ = Σ_{a=1}^{3} ⟨ε_a⟩_+ N_a ⊗ N_a and the negative strain tensor ε⁻ = ε - ε⁺, which depend on the principal strains {ε_a}_{a=1..3} and the principal strain directions {N_a}_{a=1..3}.
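A compact numpy sketch of the spectral split (27), written here for 2-D strain tensors and with lam and mu denoting the Lamé constants, may read:

```python
import numpy as np

def split_energy(eps, lam, mu):
    """Tension/compression split of the elastic energy, Eq. (27):
    psi+/- = lam/2 <tr eps>^2_{+/-} + mu tr((eps+/-)^2), built from the
    principal strains/directions of the symmetric strain tensor eps."""
    w, V = np.linalg.eigh(eps)                       # principal strains, directions
    pos = lambda v: 0.5 * (v + abs(v))               # Macaulay bracket <.>+
    neg = lambda v: 0.5 * (v - abs(v))               # Macaulay bracket <.>-
    eps_p = V @ np.diag([pos(e) for e in w]) @ V.T   # eps+ = sum <eps_a>+ Na x Na
    eps_m = eps - eps_p                              # eps- = eps - eps+
    tr = np.trace(eps)
    psi_p = 0.5 * lam * pos(tr)**2 + mu * np.trace(eps_p @ eps_p)
    psi_m = 0.5 * lam * neg(tr)**2 + mu * np.trace(eps_m @ eps_m)
    return psi_p, psi_m

eps = np.array([[1e-3, 2e-4], [2e-4, -5e-4]])
print(split_energy(eps, lam=121.15e3, mu=80.77e3))   # MPa-type units, toy values
```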

The variation of the potential follows as

\delta W = \frac{\partial (W_{elas} + W_{frac})}{\partial d}\, \delta d + \frac{\partial W_{frac}}{\partial \nabla d} \cdot \nabla \delta d + \frac{\partial W_{elas}}{\partial \boldsymbol{\varepsilon}} \cdot \delta \boldsymbol{\varepsilon}, \qquad (28)

where the derivative ∂W_elas/∂d defines the crack driving force H. To ensure crack irreversibility in the sense that cracks can only grow (i.e. ḋ ≥ 0), H is redefined as

\mathcal{H} = \max_{s \in [0,t]} \frac{\partial W_{elas}}{\partial d} \geq 0, \quad \text{where from (26)} \quad \frac{\partial W_{elas}}{\partial d} = \psi^+. \qquad (29)

Following the recent works of Miehe et al. [50], the fracture energy can be redefined as

W_{frac}(d, \nabla d, \mathcal{H}) = g_c\, \gamma_l(d, \nabla d) + \frac{\eta}{2\Delta t} (d - d_n)^2 + g(d)\, \mathcal{H}, \qquad (30)

where Δt := t - t_n > 0 denotes the time step and η ≥ 0 is a material parameter that characterizes the viscosity of the crack propagation. The variables introduced above characterize the brittle failure response of a solid, based on the two global primary fields:


\text{Global primary fields:} \quad \mathbf{U} := \{ \boldsymbol{u}, d \}, \qquad (31)

the displacement field u and the crack phase-field d. The subsequent constitutive approach to the phase-field modelling of brittle fracture focuses on the set

\text{Constitutive state variables:} \quad \mathfrak{C} := \{ \boldsymbol{\varepsilon}, d, \nabla d, \mathcal{H} \}, \qquad (32)

reflecting a combination of elasticity with first-order gradient damage modelling. The development of a virtual element that handles phase-field brittle fracture in elastic solids can start from a pseudo potential density functional instead of the weak form. This has advantages when the code is automatically generated with the software tool AceGen, see Korelc and Wriggers [51]. The pseudo potential density functional depends on the elastic and fracture parts and keeps the crack driving force constant during the first variation. It can be written as

W(\mathfrak{C}) = g(d)\,\psi^+ + \psi^- + g_c\,\gamma_l(d, \nabla d) + \frac{\eta}{2\Delta t}(d - d_n)^2 + (1 - d)^2\, \mathcal{H}. \qquad (33)
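For illustration, the pseudo potential density (33), together with the crack surface density γ_l of (22), can be evaluated pointwise as sketched below; all numerical values are placeholders of ours:

```python
import numpy as np

# Direct evaluation of the pseudo potential density (33) at a material point,
# with gamma_l from Eq. (22). psi_p/psi_m would come from the split (27) and
# H from the history update (29); here they are illustrative numbers.
def pseudo_potential(d, grad_d, d_n, psi_p, psi_m, H, gc, l, eta, dt):
    g = (1.0 - d) ** 2                                    # degradation function
    gamma_l = d**2 / (2.0 * l) + 0.5 * l * np.dot(grad_d, grad_d)
    return (g * psi_p + psi_m + gc * gamma_l
            + eta / (2.0 * dt) * (d - d_n) ** 2 + g * H)

print(pseudo_potential(d=0.3, grad_d=np.array([0.1, 0.0]), d_n=0.25,
                       psi_p=1.5, psi_m=0.2, H=1.5, gc=2.7, l=0.05,
                       eta=1e-5, dt=1e-2))
```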

The total pseudo potential then reads

\Pi(\mathbf{U}) = \int_\Omega W(\mathfrak{C})\, dV - \Pi_{ext}(\boldsymbol{u}), \qquad (34)

with the external load Π_ext(u) defined in (9). H and g(d) have to be kept constant during the derivation of the residual in order to obtain the correct weak form of the problem.

3 Formulation of the Virtual Element Method

Following the work of Brezzi et al. [5], the main idea of the virtual element method is a Galerkin projection of the unknowns onto a specific ansatz space. The domain Ω is partitioned into non-overlapping polygonal elements, which need not be convex and can have arbitrary shapes with different numbers of nodes, as plotted in Fig. 3, which shows a bird-like element with vertices x_I. Here a low-order approach is adopted, see Wriggers et al. [52] and Wriggers and Hudobivnik [53], using linear ansatz functions where nodes are placed only at the vertices of the polygonal elements. Furthermore, the restrictions of the element shape functions to the element boundaries are linear functions.


Fig. 3 Polynomial basis function for the virtual element ansatz with vertices x I

3.1 Ansatz Functions for VEM

The virtual element method relies on a split of the ansatz space into a part U_Π representing the projected field defined in (31) and a remainder:

\mathbf{U}^h = \mathbf{U}_\Pi^h + (\mathbf{U}^h - \mathbf{U}_\Pi^h) \quad \text{with} \quad \mathbf{U}_\Pi^h := \{ \boldsymbol{u}_\Pi^h, d_\Pi^h \}. \qquad (35)

The projection U_Π^h is defined at element level by a linear ansatz N_Π as

\mathbf{U}_\Pi^h = \begin{bmatrix} u_{\Pi x} \\ u_{\Pi y} \\ d_\Pi \end{bmatrix} = \mathbf{a} \cdot \mathbf{N}_\Pi = \begin{bmatrix} a_1 & a_4 & a_7 \\ a_2 & a_5 & a_8 \\ a_3 & a_6 & a_9 \end{bmatrix} \begin{bmatrix} 1 \\ x \\ y \end{bmatrix}, \qquad (36)

with the unknowns a. The projection U_Π^h is now defined such that it satisfies

\int_{\Omega_e} \nabla \mathbf{U}_\Pi^h \, dV \overset{!}{=} \int_{\Omega_e} \mathrm{Grad}\, \mathbf{U}^h \, dV, \qquad (37)

which yields, since the linear ansatz (36) implies a constant ∇U_Π^h,

\left. \nabla \mathbf{U}_\Pi^h \right|_e \overset{!}{=} \frac{1}{\Omega_e} \int_{\Omega_e} \mathrm{Grad}\, \mathbf{U}^h \, dV = \frac{1}{\Omega_e} \int_{\partial\Omega_e} \mathbf{U}^h \otimes \mathbf{N} \, dA, \qquad (38)


Fig. 4 Virtual element with n V nodes and local boundary segment of the bird-like polygonal element

where N is the outward normal at the boundary ∂Ω_e of the domain Ω_e of a virtual element e, see Fig. 4. The label |_e denotes element quantities that are constant within an element e. A direct computation of the projected gradient with the linear ansatz (36) yields the simple matrix form

\left. \nabla \mathbf{U}_\Pi^h \right|_e = \begin{bmatrix} \nabla u_{\Pi x} \\ \nabla u_{\Pi y} \\ \nabla d_\Pi \end{bmatrix} = \begin{bmatrix} a_4 & a_7 \\ a_5 & a_8 \\ a_6 & a_9 \end{bmatrix}. \qquad (39)

The boundary integral in (38) has to be evaluated. To this end, a linear ansatz for the primary fields along the element edges is introduced as

(\mathbf{U}^h)_k = (1 - \xi_k)\, \mathbf{U}_1 + \xi_k\, \mathbf{U}_2 = M_{k1} \mathbf{U}_1 + M_{k2} \mathbf{U}_2 \quad \text{with} \quad \xi_k = \frac{x_k}{L_k}, \qquad (40)

for a boundary segment k of the virtual element. The local nodes 1-2 are chosen in counter-clockwise order, see Fig. 4. In (40), M_k1 is the ansatz function along segment k related to node 1, ξ_k is the local dimensionless coordinate and U_1 is the nodal value at that node; the ansatz function M_k2 is defined in the same way. From (38) to (40), the unknowns a_4 to a_9 can be computed from the normal vectors of the boundary segments and the nodal values as

\begin{bmatrix} a_4 & a_7 \\ a_5 & a_8 \\ a_6 & a_9 \end{bmatrix} = \frac{1}{\Omega_e} \int_{\partial\Omega_e} \mathbf{U}^h \otimes \mathbf{N}\, dA = \frac{1}{\Omega_e} \sum_{k=1}^{n_V} \int_{\partial\Omega_k} \begin{bmatrix} u_x(\mathbf{x}) N_x & u_x(\mathbf{x}) N_y \\ u_y(\mathbf{x}) N_x & u_y(\mathbf{x}) N_y \\ d(\mathbf{x}) N_x & d(\mathbf{x}) N_y \end{bmatrix} dA, \qquad (41)

where we have used N = {N_x, N_y}^T and U = {u_x, u_y, d}^T; furthermore, n_V is the number of element vertices, which for first-order VEM coincides with the number of segments (edges)


of the element. Note that the normal vector N changes from segment to segment. In the 2D case it can be computed for a segment k as

\mathbf{N}_k = \begin{bmatrix} N_x \\ N_y \end{bmatrix}_k = \frac{1}{L_k} \begin{bmatrix} y_1 - y_2 \\ x_2 - x_1 \end{bmatrix}_k, \qquad (42)

with x = {x_i, y_i}_{i=1,2} the local coordinates of the two vertices of segment k. For the ansatz functions (40), the integral in (41) can be evaluated exactly by the trapezoidal or Gauss-Lobatto rule. By selecting the vertices as the Gauss-Lobatto points, it is sufficient to know only the nodal values

\mathbf{U}_e = \{ \mathbf{U}_1, \mathbf{U}_2, \ldots, \mathbf{U}_{n_V} \}, \qquad (43)

at the n_V vertices in Fig. 4. Since the ansatz functions in (40) fulfil the property M_I(x_J) = δ_IJ, the actual form of the functions M does not enter the evaluation of the boundary integrals, which makes the evaluation extremely simple. Finally, by comparing (39) and (41), the unknowns a_4 to a_9 are obtained by inspection; for further details see e.g. Wriggers et al. [52]. The projection in (38) does not determine the ansatz U_Π^h in (36) completely and has to be supplemented by a further condition to obtain the constants a_1, a_2 and a_3. For this purpose we adopt the condition that the sums of the nodal values of U^h and of its projection U_Π^h are equal. This yields for each element Ω_e

\frac{1}{n_V} \sum_{I=1}^{n_V} \mathbf{U}_\Pi^h(\mathbf{x}_I) = \frac{1}{n_V} \sum_{I=1}^{n_V} \mathbf{U}^h(\mathbf{x}_I), \qquad (44)

where x_I are the coordinates of nodal point I and the sum runs over all boundary nodes. Substituting (36) and (40) in (44) yields the three unknowns a_1, a_2 and a_3 as

\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \frac{1}{n_V} \sum_{I=1}^{n_V} \left( \mathbf{U}_I - \nabla\mathbf{U}_\Pi|_I \cdot \mathbf{x}_I \right) = \frac{1}{n_V} \sum_{I=1}^{n_V} \begin{bmatrix} u_{xI} - u_{x,x}\, x_I - u_{x,y}\, y_I \\ u_{yI} - u_{y,x}\, x_I - u_{y,y}\, y_I \\ d_I - d_{,x}\, x_I - d_{,y}\, y_I \end{bmatrix}. \qquad (45)

Thus, the ansatz function U_Π^h of the virtual element is completely defined.
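The projection (38)-(42) is easy to reproduce for a single scalar field on one polygon. The following self-contained numpy sketch (function name and test polygon are ours) integrates the linear edge ansatz with the trapezoidal rule; note that, for the counter-clockwise vertex ordering used here, the outward normal of a segment is (y2 - y1, x1 - x2)/L_k, i.e. the sign convention depends on the local node numbering:

```python
import numpy as np

def projected_gradient(xy, u):
    """Constant VEM gradient of a scalar nodal field on a polygon:
    grad(u)|e = (1/|Omega_e|) * boundary integral of u N, cf. Eqs. (38), (41),
    with a linear edge ansatz integrated exactly by the trapezoidal rule.
    xy: (nV, 2) vertices in counter-clockwise order, u: (nV,) nodal values."""
    nV = len(u)
    area, g = 0.0, np.zeros(2)
    for k in range(nV):
        x1, y1 = xy[k]
        x2, y2 = xy[(k + 1) % nV]
        Lk = np.hypot(x2 - x1, y2 - y1)
        N = np.array([y2 - y1, -(x2 - x1)]) / Lk   # outward normal (CCW ordering)
        g += 0.5 * (u[k] + u[(k + 1) % nV]) * N * Lk   # trapezoid: int u N ds
        area += 0.5 * (x1 * y2 - x2 * y1)              # shoelace contribution
    return g / area

# Linear field u = 2x + 3y on an irregular, even non-convex polygon:
xy = np.array([[0, 0], [2, 0], [2, 1], [1, 0.5], [0, 1]], float)
u = 2 * xy[:, 0] + 3 * xy[:, 1]
print(projected_gradient(xy, u))   # -> [2. 3.] exactly
```

The printed gradient equals (2, 3) exactly, reflecting the linear consistency of the projection even on a non-convex element.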

3.2 Residual and Stiffness Matrix of the Virtual Elements

A virtual element based only on the projection U_Π^h of the displacement and fracture phase-field would lead to a rank-deficient element once the number of vertices is greater than 3. Thus the formulation has to be stabilized, like the classical one-point integrated elements developed by Flanagan and Belytschko [54], Belytschko and Bindeman [55], Reese et al. [56], Reese and Wriggers [57], Mueller-Hoeppe et al.


[58], Korelc et al. [59] and Krysl [60]. In the following we develop a virtual element for phase-field modeling of brittle fracture in isotropic elastic solids. To this end, the potential density functional defined in (34) can be rewritten by exploiting the split (35). Summing up all element contributions of the n_e virtual elements, we have

\Pi(\mathbf{U}) = \overset{n_e}{\underset{e=1}{\mathbf{A}}}\, \Pi(\mathbf{U}_e) \quad \text{with} \quad \Pi(\mathbf{U}_e) = \Pi_c(\mathbf{U}_\Pi^h)\big|_e + \Pi_{stab}(\mathbf{U}^h - \mathbf{U}_\Pi^h)\big|_e, \qquad (46)

based on a constant part Π_c and an associated stabilization term Π_stab. Herein, the crack driving force H is a local history variable evaluated only once at the element level and used in both parts of the potential density functional. The first part in (46)₂ can be computed as

\Pi_c(\mathbf{U}_\Pi^h)\big|_e = \int_{\Omega_e} W(\mathfrak{C}_\Pi^h)\, dV - \int_{\Omega_e} \boldsymbol{f} \cdot \boldsymbol{u}_\Pi^h \, dV - \int_{\partial\Omega_t} \bar{\boldsymbol{t}} \cdot \boldsymbol{u}_\Pi^h \, dA, \qquad (47)

with the projected constitutive state variables C_Π^h = {ε_Π^h, d_Π^h, ∇d_Π^h, H}. The projected strain tensor can be computed from the projected displacement as

\boldsymbol{\varepsilon}_\Pi^h = \frac{1}{2} \left( \nabla \boldsymbol{u}_\Pi^h + \nabla^T \boldsymbol{u}_\Pi^h \right). \qquad (48)

The primary fields U_Π^h are linear functions and their gradient ∇U_Π^h is constant over the area of the virtual element Ω_e. As a consequence, the potential W is integrated by evaluating the function at the element centroid x_c, as shown in Fig. 4, and multiplying it by the domain size Ω_e, analogous to the standard Gauss integration scheme in FEM:

\int_{\Omega_e} W(\mathfrak{C}_\Pi^h)\, dV = W(\mathfrak{C}_\Pi^h)\big|_c\, \Omega_e, \qquad (49)

where the label |_c refers to quantities evaluated at the element centroid x_c. Next, the stabilization potential has to be derived for the coupled problem based on the potential (34). Following the recent work of Wriggers et al. [52], we introduce a non-linear stabilization procedure of the form

\Pi_{stab}(\mathbf{U}^h - \mathbf{U}_\Pi^h)\big|_e = \widehat{\Pi}(\mathbf{U}^h)\big|_e - \widehat{\Pi}(\mathbf{U}_\Pi^h)\big|_e. \qquad (50)

For the stabilization density function Ŵ we propose a function similar to the original density function (34), scaled by a constant value β: Ŵ = βW. In (50), the stabilization with respect to the projected primary fields Π̂(U_Π^h)|_e can then be calculated as in (49), yielding

\widehat{\Pi}(\mathbf{U}_\Pi^h)\big|_e = \beta\, W(\mathfrak{C}_\Pi^h)\big|_c\, \Omega_e. \qquad (51)


The potential Π̂(U^h)|_e is computed by applying a standard FEM procedure, i.e. by first discretizing the virtual element domain Ω_e into an internal triangular mesh consisting of n_T = n_E - 2 triangles, as plotted in Fig. 5 for the bird-like polygonal element. The integral over Ω_e is then transformed into a sum of integrals over the triangles. By using a linear ansatz for the primary fields U, an approximation of the constitutive variables C can be computed within each triangle Ω_m^i of the inscribed mesh, see Wriggers et al. [52]. This gives

\widehat{\Pi}(\mathbf{U}^h)\big|_e = \int_{\Omega_e} \widehat{W}(\mathfrak{C}^h)\, dV = \beta \int_{\Omega_e} W(\mathfrak{C}^h)\, dV = \beta \sum_{i=1}^{n_T} \Omega_e^i\, W(\mathfrak{C}^h)\big|_c^i, \qquad (52)

where W(C^h)|_c^i is the potential density function evaluated at the triangle centroid x_c^i, and Ω_e^i is the area of the i-th triangle of element e, as plotted in Fig. 5. To compute the stabilization parameter β, a connection to a bending problem was imposed with regard to the bulk energy, as outlined in Wriggers and Hudobivnik [53]. As the element size Ω_e tends to 0, the difference between the potential of the projected values Π̂(U_Π^h) and the true value Π̂(U^h) also approaches 0; the stabilization thus disappears in the limit. Due to the finer mesh requirements of the fracture phase-field problem compared with [53], the choice of the factor β is less critical, since it only matters for coarse meshes. In this regard we propose a constant value for β.

C_{r_m} z_i = C_{r_m} p_i + Σ_j C_{r_m} p_j β_{i-1,j} (and similarly for S_{r,n}^{-1} C_{r_m} z_i). C_{r_m} Z and S_{r,n}^{-1} C_{r_m} Z are then transformed in the same way as Z in order to build the required operators. The vectors that compose MV can also be built during the iterations of the conjugate gradient, on the same model as V, by writing r̂_i = (-1)^i r_i γ_i^{-1/2} and MV = R̂U in Algorithm 1. This leads to

\begin{cases} K = C_{\tilde{x}}^0 \left( C_{\tilde{x}}^0 + C_{\tilde{y}} \right)^{-1} \\ \tilde{x} = \tilde{x}^0 + K \left( \tilde{y} - \tilde{x}^0 \right) \\ C_{\tilde{x}} = C_{\tilde{x}}^0 - K\, C_{\tilde{x}}^0 \end{cases} \qquad (19)
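A minimal sketch of the reduced update (19), followed by the propagation C_um ≈ V C_x̃ V^T discussed below, is given here with synthetic sizes and covariances; in the method, V holds the truncated Ritz vectors and the covariance operators are assembled during the conjugate gradient iterations:

```python
import numpy as np

# Reduced linear Gaussian (Kalman-type) update of Eq. (19), on a truncated
# basis V. All operators have the size of the truncated basis, so the update
# itself is inexpensive. The basis and covariances below are synthetic.
rng = np.random.default_rng(0)
n_dof, n_modes = 200, 8
V = np.linalg.qr(rng.normal(size=(n_dof, n_modes)))[0]   # orthonormal "Ritz" basis
C_x0 = np.diag(rng.uniform(0.5, 2.0, n_modes))           # reduced a priori covariance
C_y  = np.diag(rng.uniform(0.1, 0.5, n_modes))           # reduced RHS covariance
x0   = np.zeros(n_modes)                                 # reduced a priori mean
y    = rng.normal(size=n_modes)                          # reduced right-hand side

K   = C_x0 @ np.linalg.inv(C_x0 + C_y)                   # Kalman gain, Eq. (19)
x   = x0 + K @ (y - x0)                                  # updated reduced mean
C_x = C_x0 - K @ C_x0                                    # updated reduced covariance

u_m  = V @ x                                             # full-size mean
C_um = V @ C_x @ V.T                                     # propagated covariance
print(np.diag(C_x))                                      # reduced posterior variances
```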

All the operators involved in these relations have the same size as the truncated Ritz basis; consequently, all the operations have a very small cost. Finally, one propagates the uncertainties on x̃ to u_m thanks to the approximation C_um ≈ V C_x̃ V^T (at this stage, a part of the uncertainty is neglected because of the truncation of the Ritz basis) and u_m = V x̃. For the determination of the force f_m, one can define it as f_m^d = S_{m,d} u_m - b_d or f_m^n = S_{m,n} u_m - b_n (they are supposed to be equal if u_m equals the reference solution). If b_n is supposed to be noiseless, the second expression is probably better, and thus C_fm = S_{m,n} V C_x̃ V^T S_{m,n}. As previously, the determination of S_{m,n} V requires solving as many direct problems as there are vectors in the truncated Ritz basis; again, this matrix can be assembled during the conjugate gradient iterations by storing the vectors S_{m,n} p_i and performing a few operations on them. However, the application of S_{m,n} can be interpreted in this context as a derivation operation, which is well known to be ill-conditioned. If one considers that the solution u_m is noisy, the flux f_m is then likely to be even noisier. For that reason, its determination should preferably be done with another regularization


procedure, which is beyond the scope of this article. For that reason, only the determination of u_m is studied in the numerical part. The procedure is summed up in Algorithm 2.

Algorithm 2: Bayesian resolution on the Ritz basis

Build the Ritz basis V and the associated Ritz values Θ by means of the conjugate gradient;
Choose the optimal number of Ritz modes;
C_{y_d} = V^T C_{r_m}^T C_{\hat{u}_r} C_{r_m} V;
C_{y_n} = V^T C_{r_m}^T S_{r,n}^{-1} C_{\hat{f}_r} S_{r,n}^{-T} C_{r_m} V;
C_{\tilde{y}} = C_{y_d} + C_{y_n}    // covariance of the reduced right-hand side
\tilde{y} = V^T C_{r_m}^T \hat{u}_r + V^T C_{r_m}^T S_{r,n}^{-1} \hat{f}_r    // mean of the reduced right-hand side
C_{\tilde{x}}^0 = V^T M C_{u_m}^0 M V    // covariance of the reduced a priori solution
\tilde{x}^0 = V^T M u_m^0    // mean of the reduced a priori solution
K = C_{\tilde{x}}^0 ( C_{\tilde{x}}^0 + C_{\tilde{y}} )^{-1}    // Kalman gain
\tilde{x} = \tilde{x}^0 + K ( \tilde{y} - \tilde{x}^0 )    // update of the reduced mean
C_{\tilde{x}} = C_{\tilde{x}}^0 - K C_{\tilde{x}}^0    // update of the reduced variance
C_{u_m} ≈ V C_{\tilde{x}} V^T;  C_{f_m} = S_{m,n} V C_{\tilde{x}} V^T S_{m,n};
u_m = V \tilde{x};  f_m = S_{m,n} u_m - b_n;

The only computationally expensive step is the computation of the Ritz values and vectors, which is equivalent to the cost of the deterministic resolution by the Steklov-Poincaré method. All the other operations are either performed on small operators or doable at very small extra cost during the iterations of the conjugate gradient algorithm.

Remark As for all Bayesian methods, the results obtained are very dependent on the choice of the a priori probability density function. If this function is not available, one can imagine a variant of the method where u_m is determined by the Steklov-Poincaré method and the covariance is obtained by a propagation of the uncertainties, C_x̃ = Θ^{-1} C_ỹ Θ^{-1}, which corresponds to Formula (19) in the case of equiprobable a priori information on R^n (with n the dimension of x̃).

Remark The truncation of the Ritz basis is a sine qua non condition to make the computation of the uncertainty on the solution affordable in cases with a high number of unknowns. However, this comes at the cost of a simplification of the problem, and thus a loss of accuracy. Numerically, it was observed that the uncertainty increases with the number of Ritz modes, but tends to stagnate because the contribution of the highest-order Ritz modes is erased by the regularizing effect of the Bayesian approach. The choice of the relevant number of modes (which does not match the one given by the discrete Picard condition [9, 11]) and the evaluation of the resulting imprecision are important questions that are not investigated in this study.


5 Numerical Example

We study a hollow sphere with an internal inhomogeneous pressure p = a x² + b, with a = 1 N mm⁻⁴ and b = 10 N mm⁻². The Cauchy problem consists in determining the displacement and pressure on the internal boundary from the observation of the displacement on the outer, Neumann-free boundary (Fig. 2). The interior radius is R₁ = 8 mm and the exterior radius is R₂ = 12 mm. We consider an uncorrelated Gaussian noise of amplitude 20% on the data û_r. The isotropic linear elasticity coefficients are the Young modulus E = 210,000 MPa and the Poisson ratio ν = 0.3. We use a tetrahedral mesh whose edge length is approximately l = 1 mm. The total number of degrees of freedom is 15,156; 6357 of them are on Γ_r and 3042 on Γ_m, the latter corresponding to the number of unknowns of the inverse problem. We illustrate in Fig. 3 the fact that the truncated Ritz decomposition is relatively independent of the mesh size. Three meshes are studied: the first one has a size l = 1.5 mm, which results in a total of 5814 degrees of freedom, 3033 of which are on Γ_r and 1503 on Γ_m. The second discretization is the one used for the uncertainty quantification. The third mesh has a size l = 0.75 mm, which results in a total of 33,612 degrees of freedom, 12,585 of which are on Γ_r and 5088 on Γ_m. One can observe that the first Ritz values Θ, the terms of the projection of the right-hand side V^T(b_d - b_n) and the terms of the projection of the solution x̃ are quite similar for the different mesh sizes. The next step consists in choosing the number of Ritz modes used for the resolution. In [9], a criterion similar to the discrete Picard condition [11] was proposed; it leads to using about 50 modes. However, it was numerically observed that this number of modes leads to errors on the solution that are much higher than the uncertainty estimation.

Fig. 2 Meshed domain and solution of the direct problem


Fig. 3 Ritz values, projection of the right-hand side and of the solution on the Ritz basis for different mesh size l

Indeed, it was observed that around the optimal value of 50 modes the error has a quite uniform amplitude, but the more modes there are, the lower the error due to neglected modes and the higher the error due to the noise on the data. Consequently, in order to quantify the error with a reduced model, it is better that the error due to the neglected modes be small with respect to the error due to the noise. For that reason, the chosen number of modes needs to be as large as possible. It was chosen to use 77 modes since, at this point, the growth of the components of x̃ becomes more regular, which suggests that it is governed by the white noise (see the vertical bar in Fig. 3b).


The chosen a priori probability has a null mean value and all the degrees of freedom are uncorrelated. The variance is determined by the amplitude of the solution of the problem by the deterministic Steklov-Poincaré method projected on the first 50 Ritz modes (in conformity with the discrete Picard condition). Algorithm 2 is used to compute the mean u_m and its covariance matrix C_um. From this covariance matrix, one extracts the diagonal terms, which correspond to the uncertainty on u_m. In Fig. 4, the variance on the component u_z is compared to the error on this component. It can be noticed that the variance indeed gives a relevant idea of the uncertainty level. In order to achieve a more quantitative comparison, both are plotted on the meridian of the sphere of equation x = 0 in Fig. 5. The same procedure has been applied to a thicker domain with R₁ = 5 mm, R₂ = 15 mm and l = 2 mm. This results in much fewer degrees of freedom on Γ_m (348), without significantly modifying the Ritz spectrum. The number of modes used for the Bayesian inversion remains the same (77), and the noise is decreased to 5%. The maps of the error and estimated standard deviation are presented in Fig. 6, and their comparison on the meridian is displayed in Fig. 7. In this last figure, the errors due to two realizations of the random noise are also plotted. Note that the computed

Fig. 4 Error and standard deviation on u_z (noise = 20%): (a) computed standard deviation, (b) error on the Bayesian identification

Fig. 5 Error and standard deviation on u z (noise = 20%) on the meridian


Fig. 6 Error and standard deviation on u z (thicker sphere, noise = 5%)

Fig. 7 Error and standard deviation on u z (thicker sphere, noise = 5%) on the meridian

standard deviation is not strictly the same for these two realizations of the random noise (although they are very close), because this field is computed via the Ritz modes, which slightly depend on the right-hand side, which is itself affected by the noise.

6 Conclusion and Perspectives

In this article, a simple Bayesian approach has been applied to solve the Cauchy problem on an elliptic PDE. As we seek to identify a continuous field, for which the number of degrees of freedom can be arbitrarily large, a Ritz decomposition of the involved operator has been used to work in a space that is independent of the discretization.


The chosen approach has shown good results concerning the estimation of the uncertainties on the solution, provided two requirements are satisfied. First, the noise level should be sufficiently high to be the dominant source of error in the resolution of the discretized system. On some geometries, the Cauchy problem is indeed so ill-posed that the numerical truncation noise (whose impact is much harder to quantify) is the main source of error on the solution. The second requirement is that the number of Ritz modes should be sufficient to ensure that the contribution of the neglected part of the spectrum to the error on the solution is small. The determination of the minimal number of modes meeting this requirement is still an open question. Moreover, nothing prevents applying the proposed procedure to other inverse problems solved with a conjugate gradient algorithm. This study has been conducted in the linear Gaussian framework, which greatly facilitates the development of CPU-efficient Bayesian methods. In nonlinear or non-Gaussian cases, this work could be extended thanks to the sampling-free Bayesian methods presented in [19, 20], which use a polynomial chaos expansion to represent non-Gaussian probabilities.

Acknowledgements This work has been accomplished thanks to the ViVaCE IRTG 1627 program.

References

1. Alessandrini, G., Rondi, L., Rosset, E., & Vessella, S. (2009). The stability for the Cauchy problem for elliptic equations. Inverse Problems, 25(12), 123004.
2. Andrieux, S., Baranger, T. N., & Ben Abda, A. (2006). Solving Cauchy problems by minimizing an energy-like functional. Inverse Problems, 22(1), 115.
3. Ben Belgacem, F. (2007). Why is the Cauchy problem severely ill-posed? Inverse Problems, 23(2), 823-836.
4. Ben Belgacem, F., & El Fekih, H. (2005). On Cauchy's problem: I. A variational Steklov-Poincaré theory. Inverse Problems, 21(6), 1915.
5. Bourgeois, L. (2005). A mixed formulation of quasi-reversibility to solve the Cauchy problem for Laplace's equation. Inverse Problems, 21(3), 1087.
6. Chaabane, S., Jaoua, M., & Leblond, J. (2003). Parameter identification for Laplace equation and approximation in Hardy classes. Journal of Inverse and Ill-Posed Problems, 11(1), 33-57.
7. Cimetiere, A., Delvare, F., Jaoua, M., & Pons, F. (2001). Solution of the Cauchy problem using iterated Tikhonov regularization. Inverse Problems, 17(3), 553-570.
8. Faverjon, B., Puig, B., & Baranger, T. (2017). Identification of boundary conditions by solving Cauchy problem in linear elasticity with material uncertainties. Computers & Mathematics with Applications, 73(3), 494-504.
9. Ferrier, R., Kadri, M. L., & Gosselet, P. (2018). The Steklov-Poincaré technique for data completion: Preconditioning and filtering. International Journal for Numerical Methods in Engineering, 116(4), 270-286.
10. Hadamard, J. (1923). Lectures on Cauchy's problem in linear partial differential equations (Vol. 37). New Haven: Yale University Press.
11. Hansen, P. C. (1990). The discrete Picard condition for discrete ill-posed problems. BIT Numerical Mathematics, 30(4), 658-672.


12. Jin, B., & Zou, J. (2008). A Bayesian inference approach to the ill-posed Cauchy problem of steady-state heat conduction. International Journal for Numerical Methods in Engineering, 76(4), 521-544.
13. Kadri, M. L., Ben Abdallah, J., & Baranger, T. N. (2011). Identification of internal cracks in a three-dimensional solid body via Steklov-Poincaré approaches. Comptes Rendus Mécanique, 339(10), 674-681.
14. Kozlov, V. A., & Maz'ya, V. G. (1989). Iterative procedures for solving ill-posed boundary value problems that preserve the differential equations. Algebra i Analiz, 1(5), 144-170.
15. Kozlov, V. A., Maz'ya, V. G., & Fomin, A. (1991). An iterative method for solving the Cauchy problem for elliptic equations. Zhurnal Vychislitel'noi Matematiki i Matematicheskoi Fiziki, 31(1), 64-74.
16. Liu, C.-S., et al. (2008). A highly accurate MCTM for inverse Cauchy problems of Laplace equation in arbitrary plane domains. CMES: Computer Modeling in Engineering & Sciences, 35(2), 91-111.
17. Marin, L., Hào, D. N., & Lesnic, D. (2002). Conjugate gradient-boundary element method for the Cauchy problem in elasticity. Quarterly Journal of Mechanics and Applied Mathematics, 55(2), 227-247.
18. Matthies, H. G. (2017). Uncertainty quantification and Bayesian inversion. In Encyclopedia of computational mechanics (2nd ed., pp. 1-51).
19. Matthies, H. G., Zander, E., Rosić, B. V., Litvinenko, A., & Pajonk, O. (2016). Inverse problems in a Bayesian setting. In Computational methods for solids and fluids (pp. 245-286). Berlin: Springer.
20. Rosić, B. V., Litvinenko, A., Pajonk, O., & Matthies, H. G. (2011). Direct Bayesian update of polynomial chaos representations. Journal of Computational Physics.
21. Zemzemi, N. (2016). A domain decomposition approach in the electrocardiography inverse problem. In Domain decomposition methods in science and engineering XXII (pp. 641-647). Berlin: Springer.

On-the-Fly Bayesian Data Assimilation Using Transport Map Sampling and PGD Reduced Models

Paul-Baptiste Rubio, Ludovic Chamoin and François Louf

Abstract The motivation of this research work is to address real-time sequential data assimilation and inference of model parameters within a full Bayesian formulation. For that purpose, we couple two advanced numerical approaches. First, the Transport Map sampling is used as an alternative to classical Markov Chain approaches in order to facilitate the sampling of posterior densities resulting from Bayesian inference. It builds a deterministic mapping, obtained from a minimization problem, between the posterior probability measure of interest and a simple reference measure. Second, the Proper Generalized Decomposition (PGD) is implemented in order to reduce the computational effort for the evaluation of the multi-parametric numerical model in the online phase, and therefore for uncertainty quantification on outputs of interest of the model. The PGD also speeds up the minimization algorithm of the Transport Map method, as derivatives with respect to model parameters can then be computed in a straightforward manner. The performance of the approach is illustrated on a fusion welding application.

P.-B. Rubio · L. Chamoin · F. Louf, LMT, ENS Paris-Saclay, 61 Avenue du Président Wilson, 94235 Cachan, France

1 Introduction

Numerical models are nowadays intensively used to study the behavior and predict the evolution of physical systems. In this framework, and in order to ensure relevant numerical simulations, model updating from experimental observations appears as a mandatory task. Data assimilation can be performed either in a single shot from a full set of observations, or sequentially from data obtained on-the-fly. We focus here on this latter dynamical data assimilation approach, which constitutes a key concept


for Dynamic Data Driven Application Systems (DDDAS), with continuous real-time interaction between in situ experimental data and simulation models [5]. It is well known that the determination of model parameters from indirect and noisy observations usually leads to ill-posed inverse problems. To circumvent this issue, Bayesian inference provides a natural regularization procedure by assigning a probability density to the set of parameter variations according to the uncertainty sources, such as measurement noise or modeling error [9, 15, 16]. The parameters to be inferred are thus considered as random variables, and the result of the inference is a posterior probability density. This density is defined by the Bayes formula as the product of a prior density (not taking measurement information into account) and a likelihood function that indicates the probability that the model output coincides with the measurement for a given parameter value. The posterior density merely provides global information on the parametric space, and additional exploration methods, involving the calculation of large-dimension integrals, are required to estimate quantities of interest such as means, variances, or first-order marginals. For this purpose, Monte-Carlo integration is usually employed; it generates samples using indirect methods such as the well-known Markov Chain Monte-Carlo (MCMC) [7, 9] or Sequential Monte-Carlo [1] methods. Nevertheless, such a procedure is a multi-query one, as it involves evaluating the posterior density a very large number of times, making its numerical cost incompatible with real-time applications [11]. In order to recover fast computations, some assumptions can be made in the Bayesian formulation (Gaussian distributions, ...), leading to simplified procedures (such as Kalman filters). However, these assumptions may affect the robustness and decrease the accuracy of the data assimilation results. The goal of the Ph.D. work is to propose a new formulation of Bayesian inference, compatible with the fast computations and real-time dynamical model updating envisioned in DDDAS applications, without resorting to simplifying assumptions on the stochastic properties. To reach this objective, we propose to couple Bayesian inference with two advanced numerical tools: Transport Map sampling [6] and PGD model reduction [3]. Transport Map sampling aims at building a deterministic map between the posterior probability measure to be sampled and a simple reference probability measure (e.g. a standard normal distribution) [10, 14]. This map thus makes it possible to sample the posterior density from the straightforward sampling of the reference density. The construction of the map is based on a polynomial representation and results in a deterministic minimization problem. The approach comes with well-defined sampling error estimates and convenient convergence criteria. Furthermore, the natural composition of transport maps is particularly suited to sequential data assimilation. PGD model reduction is employed as a complementary technique to decrease the computation time. It describes the multi-parametric model solution by means of a modal representation with separated variables and explicit dependency on the model parameters [3, 4]. This PGD representation, defined in an offline stage, is then used in the online stage at two levels of the data assimilation procedure. First, the PGD solution can be easily evaluated for any parameter set, so that the computation of the posterior density is straightforward (it is usually explicit with respect to the parameters), and uncertainty quantification on outputs of interest can be conducted at low cost in a post-processing phase. Second, the PGD representation yields explicit gradient and Hessian information that is beneficially used in the transport map computation. We also mention preliminary works [2, 12] in which the PGD was used in conjunction with Bayesian inference. The resulting procedure, coupling the Bayesian inference framework with Transport Map sampling and PGD model reduction, leads to a very efficient numerical tool for sequential data assimilation in a stochastic framework [13]. Its performance is illustrated here on a specific numerical example in the context of a welding control process, even though other applications were considered along the Ph.D. work. The paper is structured as follows: Bayesian data assimilation and posterior sampling using the Transport Map framework are presented in Sect. 2; the additional benefit of using PGD model order reduction is explained in Sect. 3; numerical results are reported in Sect. 4; eventually, conclusions and prospects are drawn in Sect. 5.

On-the-Fly Bayesian Data Assimilation Using Transport Map …

311

First, the PGD solution can be easily evaluated for any parameter set, so that the computation of the posterior density is straightforward (it is usually explicit with respect to parameters), and uncertainty quantification on outputs of interest can be conducted at low cost in a post-processing phase. Second, the PGD representation yields explicit gradient and Hessian information that is beneficially used in the transport maps computation. We mention here some preliminary works [2, 12] in which PGD was used in conjunction with Bayesian inference. The resulting procedure, coupling the Bayesian inference framework with Transport Map sampling and PGD model reduction, leads to a very efficient numerical tool to address sequential data assimilation in a stochastic framework [13]. Its performance is here illustrated on a specific numerical example in the context of a welding control process, even though other applications were considered along the Ph.D. work. The paper is structured as follows: Bayesian data assimilation and posterior sampling using the Transport Map framework are presented in Sect. 2; the additional benefit of using PGD model order reduction is explained in Sect. 3; numerical results are reported in Sect. 4; eventually, conclusions and prospects are drawn in Sect. 5.

2 Posterior Sampling in Bayesian Data Assimilation 2.1 Basics on Bayesian Inference The purpose of Bayesian inference is to characterize the Probability Density Function (PDF) π(p|dobs ) of some model parameters p ∈ P by means of indirect noisy measurements dobs . In this context, the Bayesian formulation of the inverse problem reads [9]: 1 (1) π(p|dobs ) = π(dobs |p).π0 (p) C  where C = π(dobs |p).π(p)dp is a normalization constant. The result of the inference, that is the posterior density π(p|dobs ), gives the probability distribution of the parameters of interest p knowing the observations dobs . This density is proportional to the product between the prior density π0 (p), that is related to the a priori knowledge on the parameters before the assimilation of data dobs , and the so-called likelihood function π(dobs |p). This latter function corresponds to the probability for the model M to predict observations dobs given values of the parameters p. In the classical case where an additive measurement noise with density πmeas is considered, the likelihood function is the distance between the observation and the model output weighted by the measurement noise: π(dobs |p) = πmeas (dobs − M(p))

(2)

312

P.-B. Rubio et al.

In the case of sequential assimilation of measurements diobs at time points ti , i ∈ {1, ..., Nt }, the Bayesian formulation is given by considering the prior at time ti as the posterior at time ti−1 : ⎛ π(p|d1obs , ..., diobs ) ∝ ⎝

i 

⎞ ⎠ πt j (dobs j |p) .π0 (p)

(3)

j=1

For a given set of measurements dobs j , and considering the same additive measurement noise as previously, the likelihood function at time t j reads:  obs  πt j (dobs j |p) = πmeas d j − M p, t j

(4)

In this formulation, no assumption is made on the probability densities (prior, measurement noise) or on the linearity of the model. From the expression of π(p|dobs ) (or π(p|d1obs , ..., diobs )), quantities of interest such as means, variances, or first order marginals may be computed. These quantities are based on large dimension integrals, and classical Monte-Carlo integrationbased techniques such as Markov Chain Monte-Carlo (MCMC) require in practice to sample the posterior density a large number of times. This multi-query procedure is much time consuming and incompatible with fast computations; we thus deal with an alternative approach in the following section.

2.2 Transport Map Sampling 2.2.1

General Idea

The principle of the Transport Map strategy is to build a deterministic mapping M between a reference probability measure νρ and a target measure νπ . The purpose is to find the change of variables such that:



gdνπ =

g ◦ Mdνρ

(5)

In this framework, it is possible to transport samples drawn according to the reference density in order to become samples drawn according to the target density (Fig. 1). For the considered inference problems, the target density corresponds to the posterior density π(p|dobs ) derived from the Bayesian formulation, while a standard normal Gaussian density may be chosen as the reference density. In this context, a pioneering work dealing with optimal transports was developed in [17]. More recently, this work was adapted to Bayesian inference [14] with effective computation tools (see http://transportmaps.mit.edu).

On-the-Fly Bayesian Data Assimilation Using Transport Map …

313

Fig. 1 Illustration of the transport map principle for sampling a target density

From the reference density ρ, the purpose is to build the map M : Rd → Rd such that: (6) νπ ≈ M νρ = ρ ◦ M −1 |det∇ M −1 | where  denotes the push forward operator. To quantify the difference between the two distributions νπ and M νρ , the Kullback-Leibler (K-L) divergence D K L is used: D K L (M νρ ||νπ ) = Eρ ln

νρ



M−1 νπ

 log(ρ(p)) − log([π ◦ M](p)) − log(| det ∇ M(p)|) ρ(p)dp

= P

2.2.2

(7)

Computation of the Maps

Maps M are searched among Knothe-Rosenblatt rearrangements (i.e lower triangular and monotonic maps). This particular choice of structure is motivated by the properties of unique minimizer of (7), optimality regarding the weighted quadratic cost and computational feasibility [10, 14]. The maps M are therefore parameterized as:

314

P.-B. Rubio et al.



M 1 (ac1 , ae1 , p1 ) ⎢ M 2 (ac2 , ae2 , p1 , p2 ) ⎢ M(p) = ⎢ . ⎣ ..

⎤ ⎥ ⎥ ⎥ ⎦

(8)

M d (acd , aed , p1 , p2 , ..., pd )

p with M k (ack , aek , p) = c (p)ack + 0 k (e ( p1 , ..., pk−1 , θ )aek )2 dθ . Functions c and e are Hermite polynomials with coefficients ac et ae . Eventually, with this parameterization, the map M is found by minimizing the K-L divergence (7). By using a N quadrature rule (ωi , pi )i=1 for ρ (Monte-Carlo, Gauss) the associated minimization problem reads: min

N 

ac1,...,d ,ae1,...,d i=1

  ωi −log(π˜ ◦ M(ac1,...,d , ae1,...,d , pi ) − log(| det ∇ M(ac1,...,d , ae1,...,d , pi ))|)

(9)

where π¯ is the non-normalized version of the target density. This minimization problem is fully deterministic and can be solved using classical algorithms (such as BFGS) with computation of derivatives (gradient, Hessian) of density π¯ (p). Once the map M is found it can be used for sampling purposes by transporting samples N drawn from ρ to samples drawn from π . Similarly, Gaussian quadrature (ωi , pi )i=1 N for ρ can be transported to quadrature (ωi , M(pi ))i=1 for π . Once a map M is computed, the quality of the approximation M νρ of the measure νπ can be estimated by the convergence criteria σ (variance diagnostic) defined in [14] as: 1 νρ σ = Varρ ln −1 (10) 2 M νπ The numerical cost for computing this criterion is very low as the integration is performed using the reference density and with the same quadrature rule as the one used in the computation of the K-L divergence. Thus, an adaptive strategy regarding the order of the map can be used to derive an automatic algorithm.

2.2.3

Sequential Data Assimilation with Transport Maps

In the case of sequential inference, the Transport Map method exploits the Markov structure of the posterior density (3). Indeed, instead of being fully computed, the map between the reference density ρ and the posterior density at time ti is obtained by composition of low-order maps: (M1 ◦ .... ◦ Mi ) ρ(p) = (Mi ) ρ(p) ≈ π(p|d1obs , ..., diobs )

(11)

The map M1 represents the coupling between the density ρ(p) and the first posterior density π(p|d1obs ) ∝ πt1 (d1obs |p).π(p). Then, each map Mi , i ∈ {2, ..., Nt } is computed between ρ and the density πi∗ defined as:

On-the-Fly Bayesian Data Assimilation Using Transport Map …

315

Fig. 2 Schematic principle of sequential inference with transport maps

πi∗ (p) = πti (diobs |Mi−1 (p)).ρ(p)

(12)

The schematic principle of the sequential maps computation is presented in Fig. 2. For the assimilation of the first measurement d1obs , a linear map L is computed (Laplace approximation). This map is applied to build an intermediate density that is closer to the reference density with approximatively a zero mean and an identity covariance matrix. This step acts as a normalization of the parametric space. Then, the map M1 is computed between the reference density and this intermediate density. As maps are parametrized by Hermite polynomials, convergence is improved by the linear transformation step (the target density becomes closer to the standard normal density). For the sequential assimilation of the other measurements diobs , maps are computed between the reference density and the target densities πi∗ (see 12) which are the posterior densities affected by the inverse transformation of maps already computed for all previous assimilation steps.

316

P.-B. Rubio et al.

3 PGD Model Order Reduction in Bayesian Inference 3.1 Basics on PGD Due to the increasing number of high-dimensional approximation problems, which naturally arise in many situations such as stochastic analysis and uncertainty quantification, model reduction techniques have been the object of a growing interest in research and industry. Tensor methods are among the most prominent tools for the numerical solution of such problems; in many practical applications, the approximation of high-dimensional solutions is made computationally tractable by using low-rank tensor formats. In particular, an appealing technique based on low-rank canonical format and referred to as Proper Generalized Decomposition (PGD) was introduced and successfully used in many applications of Computational Mechanics [3, 4]. Contrary to POD, the PGD approximation does not require any knowledge on the solution, and operates in an iterative strategy in which basis functions (or modes) are computed on the fly, by solving eigenvalue problems. In the formulation (3), the posterior density can be explicitly expressed as a function of the parameters p if the model is also explicit regarding the parameters. However, in complex engineering applications, the models are derived from the numerical solution of Partial Differential Equations (PDEs). Those PDEs usually depend on parameters p, time t, and space x. Thus, a direct multi-parametric solution is not available for real-time applications. Hence the use of reduced order models is necessary. In the classical PGD framework, the reduced model is built directly from the weak formulation of the considered PDE. Then, the global solution u m at order m of the PDE in terms of time, parameters and space is computed in a separated form [3]: m d   u m (x, t, p) = k (x)λk (t) αik ( pi ) (13) k=1

i=1

The computation of the PGD model can be performed in an offline phase and it can then be evaluated for all time, parameters and space by evaluation of products and sums of one dimensional functions (modes).

3.2 Transport Map Sampling with PGD Models Once the PGD approximation u m (x, t, p) is built, an explicit formulation of the nonnormalized posterior density can be derived. Owing to the observation operator O, we extract the output dm (p, t) = O (u m (x, t, p)) from the field u m (x, t, p). In this case, the non-normalized posterior density π¯ reads:

On-the-Fly Bayesian Data Assimilation Using Transport Map …

317

i     .π(p) π¯ p|d1obs , ..., diobs = πmeas dobs j − dm p, t j

(14)

j=1

This leads to cost-effective evaluations of the non-normalized density required by sampling methods (for example MCMC [2]). Moreover, PGD is highly beneficial in the sampling procedure using transport maps. Indeed, the Transport Map method is based on the minimization of the functional (9) which depends on π¯ . With the PGD formulation, partial derivatives of the model with respect to parameters p (required by the computation of the derivatives of the functional (9)) can be easily computed as: m d   ∂ n α jk ∂ n um (x, t, p) = (x)λ (t) ( p ) αik ( pi ) k k j ∂ p nj ∂ p nj k=1 i=1

(15)

i = j

and stored in the offline phase. The parameter modes α jk being most often finite element functions, the derivations are performed on one-dimensional shape functions. Thanks to the separated representation of the PGD, cross-derivatives are computed by combination of univariate mode derivatives. As a result, the problem (9) can be effectively solved by means of minimization algorithms using gradient or Hessian information which speeds up transport maps computations.

4 Illustrative Example In order to illustrate the use and performance of the PGD-Transport Map strategy, the problem which is considered here is a welding control example introduced in [8] and represented in Fig. 3. Two metal plates are welded by a heat source whose center is moving along the geometry. The problem unknown is the non-dimensional temperature T in the space domain denoted , which is equal to 0 when the temperature is equal to the room temperature and 1 when the temperature is equal to the melting temperature of the material. On the boundary  D (see Fig. 3), the temperature is supposed to be equal to the room temperature (T = 0). The other boundaries are assumed insulated. To solve the problem, the coordinates system is moving at the same speed as the heat source. Thus, a convective term is added to the heat equation: ∂T + v(Pe).gradT − κT = s(σ ) ∂t

(16)

c The advection velocity reads v = [Pe; 0] where Pe = v.L is the Peclet number, L c κ being the characteristic length of the problem and κ the thermal diffusivity of the

318

P.-B. Rubio et al.

Fig. 3 Welding control example

material. The volume heat input s is defined by the following Gaussian repartition in the space domain: s(x, y; σ ) =

  u (x − xc )2 + (y − yc )2 exp − 2π σ 2 2σ 2

(17)

where coordinates (xc , yc ) represent the location of the heat source center. By integration of (16) over , the weak formulation in space is of the form: a(T, T ∗ ) = l(T ∗ ) ∀T ∗ with: 

 ∂T + v.gradT .T ∗ + κ.gradT.gradT ∗ d a(T, T ∗ ) = ∂t

 l(T ∗ ) = s.T ∗ d 

4.1 Inference Problem The parameters of interest are σ and Pe, which are respectively related to the spatial spreading and speed of the heat source (see 16). They are assumed to be constant over the time domain. Temperatures T1 and T2 are measured on two points and T3 is the output of interest assumed to be unreachable by direct measurement (see Fig. 3). From the measurements, the purpose is to assess if the welding depth is sufficient (i.e: T3 ≥ 1). For each time point ti , i ∈ {1, ..., Nt }, data T1obs and T2obs are assimilated to refine the knowledge on the parameters. We also denote τi , i ∈ {1, ..., Nτ }, the time points related to the discretization of the physical time used for the numerical solution. In this example, we assume that discretization time points coincide with assimilation time points. The prior density on the parameters (σ, Pe) is supposed to be the product of two independent Gaussian densities with means (μσ = 0.4, μ Pe = −60) and variances

On-the-Fly Bayesian Data Assimilation Using Transport Map …

319

2 (σσ2 = 0.003, σ Pe = 7). Thus, at time point ti , i ∈ {1, ..., Nt } the posterior density (PDF) of having the parameters knowing the measurements reads: i      obs, j obs, j obs,1:i obs,1:i π σ, Pe|T1 = , T2 πt j T1 , T2 |σ, Pe .π (σ, Pe)

(18)

j=1

In the considered inference context, the output of interest is the temperature T3 (see Fig. 3) which is inaccessible by direct measurement. The knowledge of this temperature gives information about the welding depth and consequently on the welding quality in the Region of Interest (RoI). We consider the Peclet number Pe and the standard deviation of the Gaussian heat source input σ as unknown parameters of the model. From successive measurements of temperatures T1 and T2 we want to predict the temperature T3 in order to know if the welding quality will be sufficient.

4.2 PGD Solution

The multi-parametric PGD solution of the problem is detailed in [12]. At each point of interest it reads:

T_k(x_k, y_k, t, σ, Pe) = Σ_{n=1}^{m} Λ_n(x_k, y_k) λ_n(t) α_n^1(σ) α_n^2(Pe),  k = 1, 2, 3    (19)

and is computed from the full weak formulation of the problem and an iterative method. The first four PGD modes are represented in Fig. 4 (spatial modes), Fig. 5 (parameter modes), and Fig. 6 (time modes).

4.3 Sequential Data Assimilation

Measurements are simulated using the PGD model with the reference parameter values (σ = 0.4, Pe = −60). Then, independent random normal noise is added, with zero mean and standard deviations σ1^meas = 0.01925 and σ2^meas = 0.01245. Figure 7 shows the model output for each time step together with the perturbed output, which provides the measurements used in this example.

The PGD-Transport Map strategy is then applied to this example. The solution of the heat equation (16) is used in its PGD form, and derivatives of this solution with respect to the parameters to be inferred are computed in order to derive the transport maps (i.e. the successive maps M1, ..., M_Nt) effectively. Table 1 reports the computation time required to compute the transport maps at each assimilation step.

Fig. 4 First four spatial modes of the PGD solution ((a)–(d): spatial modes 1 to 4)

Fig. 5 First four parametric modes of the PGD solution ((a) modes on σ; (b) modes on Pe)

The computation time shown also takes into account the transport of 20,000 samples drawn according to the reference density, to obtain as many samples according to the posterior density for post-processing purposes. We compare the computation time for the different derivative orders provided to the minimization algorithm. With order 0, the minimization problem (9) is solved using a BFGS algorithm where the gradient is computed numerically. With order 1, the minimization is also performed using a BFGS algorithm, but with the gradient given explicitly with respect to the PGD mode derivatives. With order 2, a conjugate gradient algorithm is used with an explicit formulation of both gradient and Hessian. The stopping criterion is a tolerance of 10⁻³ on the variance diagnostic (10); the complexity of the maps (order of the Hermite polynomials) is increased until this tolerance is fulfilled. The first assimilation step is the most expensive because of the higher complexity of the transformation between the reference density and the first posterior density (a 4th-order map is required to fulfill the variance diagnostic criterion).
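To make the map structure concrete, here is a hedged sketch of a lower-triangular (Knothe–Rosenblatt) map with probabilists' Hermite polynomial parameterization, of the kind discussed above. The coefficients are arbitrary stand-ins, not values obtained from minimizing (9).

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

# Toy triangular map M(z1, z2) = (M1(z1), M2(z1, z2)) pushing 2D standard
# normal reference samples to posterior-like samples; coefficients invented.

def triangular_map(z, c1, c2_in_z1, c2_in_z2):
    z1, z2 = z[:, 0], z[:, 1]
    m1 = hermeval(z1, c1)                                  # depends on z1 only
    m2 = hermeval(z1, c2_in_z1) + hermeval(z2, c2_in_z2)   # triangular in z2
    return np.column_stack([m1, m2])

rng = np.random.default_rng(7)
z = rng.standard_normal((20_000, 2))            # reference measure samples
x = triangular_map(z, c1=[0.4, 0.01],           # sigma-like marginal (toy)
                   c2_in_z1=[-60.0, 0.5],       # coupling with z1
                   c2_in_z2=[0.0, 2.0])         # monotone in z2
print(x.mean(axis=0), x.std(axis=0))            # posterior-like statistics
```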

Fig. 6 First four time modes of the PGD solution (modes 1–4 versus time steps)

Fig. 7 Measurements simulated with the numerical model ((a) output T1; (b) output T2)

The other transformations, computed at the subsequent time steps, are much less expensive (less than 1 second each), as they are built between intermediate posteriors which differ only slightly at each step and can thus be easily represented by a linear (i.e. first-order) transformation. The speed-up for the first iteration is about 5.5 between zeroth-order and first-order information. Between first-order and second-order information, the speed-up is about 1.34. For the other time steps, the speed-up is very small as the computed map is very simple. We observe that using gradient and Hessian information to solve the minimization problem related to the computation of the transport maps leads to low computation times. As a comparison, the generation of 20,000 samples with a classical MCMC

Table 1 Computation costs of the transport maps depending on the derivative order information given to the minimization algorithm

Derivative order information:                          0        1        2
Number of iterations for step 1:                       107      33       10
Computation time for step 1 (s):                       33.85    6.18     4.60
Average number of iterations for steps {2, ..., 45}:   4.2      4.16     4.13
Average computation time for steps {2, ..., 45} (s):   1.24     0.92     0.90

method would lead to a computation time of about 5 s. This time is close to the time required to compute the first transport map, but much higher than the time needed to compute the other maps. Furthermore, the 20,000 samples generated by the transport maps are independent, which is not the case for the MCMC method. Indeed, in the chain generated by the MCMC method, the Integrated AutoCorrelation Time (IACT) is about 12; this means that the number of samples to generate in order to obtain 20,000 independent samples would be 20,000 × IACT, which takes about 60 s to compute. This consideration shows the benefit of using transport maps. Moreover, the use of the PGD allows for a large speed-up compared to the classical MCMC method.

In Fig. 8, two computation costs over the time steps, using both gradient and Hessian information (order 2 information), are shown: Fig. 8a shows the computation time to build each map Mi, i ∈ {1, ..., Nt}, while Fig. 8b shows the cost in terms of model evaluations to compute each map. A level-10 Gauss-Hermite quadrature is used. From the second step to the final step, we observe that the computation time slowly increases (Fig. 8a) while the evaluation cost slowly decreases (Fig. 8b). This is because the cost of evaluating the composition of maps grows with the number of steps. One way to circumvent this issue would be to perform a regression on the map composition.

In the computation costs presented in Table 1 and Fig. 8, the successive transports of samples for Monte-Carlo integration are included. First, 20,000 samples are drawn according to the 2D standard normal distribution (reference measure); then successive transports of those samples are computed through the successive maps. A Kernel Density Estimation on each coordinate separately gives an estimation of the posterior marginals. Figures 9 and 10 represent the marginals at each time step for the parameters σ and Pe, respectively. The x-axis represents the time steps and the y-axis the parameter values; the color map indicates the probability density function values. Over the time steps, we observe that the marginals become thinner, with higher PDF values, giving more confidence in the parameter estimation. We also observe that the parameter σ is less sensitive than the parameter Pe with regard to the inference process.
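The IACT-based comparison above can be reproduced with a short self-contained sketch; the chain here is a synthetic AR(1) process tuned to an IACT near 12, not the chapter's actual MCMC output.

```python
import numpy as np

# Hedged sketch: estimate the integrated autocorrelation time (IACT) of a
# chain and the number of draws needed for 20,000 independent samples.

def iact(chain, max_lag=500):
    """IACT = 1 + 2 * sum of normalized autocorrelations (truncated)."""
    x = chain - chain.mean()
    var = np.dot(x, x) / len(x)
    tau = 1.0
    for lag in range(1, max_lag):
        rho = np.dot(x[:-lag], x[lag:]) / (len(x) * var)
        if rho <= 0.0:                 # truncate once correlation dies out
            break
        tau += 2.0 * rho
    return tau

rng = np.random.default_rng(1)
phi = 0.846                            # AR(1) with IACT (1+phi)/(1-phi) ~ 12
z = np.empty(200_000)
z[0] = 0.0
eps = rng.standard_normal(z.size)
for i in range(1, z.size):
    z[i] = phi * z[i - 1] + eps[i]
tau = iact(z)
print(f"IACT ~ {tau:.1f}; ~{int(20_000 * tau)} draws for 20,000 independent samples")
```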

Fig. 8 Costs of the transport map computations made with Hessian information for each assimilation time step ((a) computation time for each time step; (b) number of iterations of the minimization algorithm for each time step)

Fig. 9 Marginals on σ computed with 20,000 samples and kernel density estimation for each assimilation time step

After 45 assimilation time steps, the algorithm gives a maximum a posteriori estimate [0.394, −60.193] and a posterior mean estimate [0.392, −59.949]. These values are very close to the reference values [0.40, −60] used to simulate the measurements.

Fig. 10 Marginals on Pe computed with 20,000 samples and kernel density estimation for each assimilation time step

4.4 Uncertainty Propagation on Outputs of Interest

In addition to the mean and maximum a posteriori estimates of the model parameters, another post-processing application is the prediction of the temperature T3 in the region of interest (see Fig. 3). Once the parameters σ and Pe have been inferred in a probabilistic way, it is indeed valuable to propagate uncertainties a posteriori in order to know their impact on the output of interest T3 during the process, and to assess the welding depth and consequently the welding quality. For this purpose, and in a first approach, samples of the last posterior density obtained from the Transport Map sampling are used after the assimilation of the Nt measurements. As the PGD model gives the temperature field globally in time and in the parameters over the whole space domain, the output T3 can easily be computed for all parameter samples at each physical time point τi, i ∈ {1, ..., Nτ}. For a given physical time point τi, the set of outputs T3 is used to build a kernel density estimation of the PDF π(T3 | σ, Pe, τi) of the temperature T3 given the uncertainties on the parameters σ and Pe.

Figure 11 represents the uncertainty map on the output T3 as a function of physical time. The x-axis represents time, so that each vertical slice represents the PDF π(T3 | σ, Pe, τi); the y-axis represents temperature values and the color map gives PDF values. The dashed line represents the evolution of the temperature T3 with no uncertainty on the parameters (i.e. the output computed from the PGD with the reference values (σ = 0.4, Pe = −60)). This computation can be used to determine at which time the plates are properly welded (i.e. T3 ≥ 1) and with which confidence. With this knowledge, a stochastic computation of the structural strength can be obtained.

Uncertainty propagation can also be performed in real time for the purpose of temperature prediction in the region of interest. Knowing the uncertainties on the parameters, the purpose is to predict, at each assimilation time step, the evolution of the temperature T3 for the next physical time steps. Similarly to the previous example, this can be done thanks to the PGD model, as the temperature field is known globally over the time domain. At each time step, samples from the posterior density are available from the Transport Map sampling. Evaluating the PGD output on these parameter samples, at the spatial coordinates of the temperature T3 and for a given time step, provides output samples. A Kernel Density Estimation on these samples gives the PDF of the output T3 given the uncertainties on the inferred parameters.
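A hedged sketch of this a posteriori propagation follows: posterior samples (here drawn from a stand-in Gaussian rather than from the transport maps) are pushed through a hypothetical surrogate `pgd_T3`, and a kernel density estimate of π(T3 | σ, Pe, τ) is built at each physical time point.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Illustration only: `posterior_samples` stands in for transport-map output,
# and `pgd_T3` is an invented response surface, NOT the chapter's PGD model.

rng = np.random.default_rng(2)
posterior_samples = rng.multivariate_normal(
    mean=[0.392, -59.949], cov=[[1e-4, 0.0], [0.0, 0.05]], size=20_000)

def pgd_T3(tau, sigma, pe):
    return 0.9 + 0.2 * np.tanh(tau) * (0.4 / sigma) * (-60.0 / pe)

for tau in np.linspace(0.1, 5.0, 45)[:3]:
    t3 = pgd_T3(tau, posterior_samples[:, 0], posterior_samples[:, 1])
    pdf = gaussian_kde(t3)                      # KDE of pi(T3 | sigma, Pe, tau)
    grid = np.linspace(t3.min(), t3.max(), 200)
    print(f"tau = {tau:.2f}, most probable T3 ~ {grid[np.argmax(pdf(grid))]:.3f}")
```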

Fig. 11 Inference of temperature T3 computed a posteriori after the last assimilation step

This computation is performed after each assimilation time point ti and for all the physical time points τj following the considered assimilation time point ti. Results are reported in Fig. 12. Figure 12a shows the prediction result with uncertainty propagation after the first assimilation point t1 for all the physical points τi, i > 1. To that end, samples are drawn according to the first posterior π(σ, Pe | T1^{obs,1}, T2^{obs,1}) ∝ π_{t1}(T1^{obs,1}, T2^{obs,1} | σ, Pe) · π(σ, Pe). The slice [τ0, τ1] represents the guess on the temperature T3 given the uncertainties on the parameters (σ, Pe) after the first assimilation point t1. For τi > τ1, the graph represents the prediction of the output T3 considering the current knowledge of the parameter uncertainty (i.e. with the assimilation of the first set of measurements T1^{obs,1} and T2^{obs,1} alone). The dashed line represents the evolution of the temperature T3 with the true parameter values (σ = 0.4, Pe = −60). The other graphs (Fig. 12b–d) show the refinement of the prediction as the knowledge of the parameter uncertainty improves. The current measurement assimilation step is indicated by the vertical cursor. On the right of the cursor τ = ti, the graphs represent the prediction of the temperature T3 after the assimilation of the measurements T1^{obs,1:i} and T2^{obs,1:i}. On the left of the cursor, each slice [tj−1, tj] (j ≤ i) represents the prediction made at the assimilation time point tj (the predictions of the temperature T3 for physical time points prior to the assimilation time point ti are not updated).

Figure 13 shows the convergence of the prediction of the temperature T3 at the steady state τ45 with respect to the assimilation steps. We observe that, as expected, more confidence is given to the output of interest (evaluated at the final time) along the real-time data assimilation process.

This assimilation procedure, performed in situ and in real time, can be used in the context of welding control. If the temperature prediction in the region of interest is not satisfactory in terms of uncertainties, a control procedure on the input parameters (for example the heat source intensity) can be conducted.

Fig. 12 Prediction of the output T3 for all time steps after the considered assimilation step ((a) assimilation step t1; (b) t15; (c) t30; (d) t45)

Fig. 13 Prediction of temperature T3 at physical time step τ45 after each assimilation time step ti, i ∈ {1, ..., 45}

5 Conclusions and Prospects

In this work we presented a powerful data assimilation procedure within the general framework of Bayesian inference. In order to perform fast computations and sequential data assimilation, the Bayesian approach was coupled with PGD model reduction and Transport Map sampling. Computed in an offline phase, the PGD technique provides an analytical formulation of the posterior density in the online phase. Then, the computation of transport maps enables fast sampling from the posterior. The latter technique is particularly suited for sequential inference, as it exploits the Markov structure of the posterior to build low-order maps. Owing to PGD models, large-scale multi-parameter engineering problems can be addressed effectively, and information on derivatives can be added in a straightforward manner to speed up the computation of the transport maps. Finally, the global time-space definition of the PGD models allows one to post-process the posterior density and to predict quantities of interest with uncertainty propagation included.

The proposed approach appears to be a suitable tool for real-time applications, and it is an appealing candidate for data assimilation in the general stochastic Bayesian context, as the sampling method relies on deterministic computations alone (the results of which do not vary with the random number generator seed) and has a clear convergence criterion. Moreover, a trade-off between speed-up and quality can be obtained using order adaptivity of the maps, the error being known through the variance diagnostic criterion.

References

1. Arulampalam, M. S., Maskell, S., Gordon, N., & Clapp, T. (2002). A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Transactions on Signal Processing, 50(2), 174–188.
2. Berger, J., Orlande, H. R. B., & Mendes, N. (2017). Proper generalized decomposition model reduction in the Bayesian framework for solving inverse heat transfer problems. Inverse Problems in Science and Engineering, 25(2), 260–278.
3. Chinesta, F., Keunings, R., & Leygue, A. (2014). The proper generalized decomposition for advanced numerical simulations: A primer. SpringerBriefs in Applied Sciences and Technology.
4. Chinesta, F., Ladevèze, P., & Cueto, E. (2011). A short review on model order reduction based on proper generalized decomposition. Archives of Computational Methods in Engineering, 18(4), 395–404.
5. Darema, F. (2004). Dynamic data driven applications systems: A new paradigm for application simulations and measurements. In Computational Science—ICCS (pp. 662–669).
6. El Moselhy, T. A., & Marzouk, Y. (2012). Bayesian inference with optimal maps. Journal of Computational Physics, 231(23), 7815–7850.
7. Gamerman, D., & Lopes, H. F. (2006). Markov Chain Monte Carlo: Stochastic simulation for Bayesian inference. Boca Raton: CRC Press.
8. Grepl, M. A. (2005). Reduced-basis approximation and a posteriori error estimation for parabolic partial differential equations (Ph.D. thesis). Massachusetts Institute of Technology.
9. Kaipio, J., & Somersalo, E. (2004). Statistical and computational inverse problems. New York: Springer.
10. Marzouk, Y., Moselhy, T., Parno, M., & Spantini, A. (2016). Sampling via measure transport: An introduction. In Handbook of Uncertainty Quantification (pp. 1–41).
11. Robert, C. P., & Casella, G. (2004). Monte Carlo statistical methods. Springer Texts in Statistics. New York: Springer.
12. Rubio, P. B., Louf, F., & Chamoin, L. (2018). Fast model updating coupling Bayesian inference and PGD model reduction. Computational Mechanics, 62(6), 1485–1509.
13. Rubio, P. B., Louf, F., & Chamoin, L. (2019). Transport Map sampling with PGD model reduction for fast dynamical Bayesian data assimilation. International Journal for Numerical Methods in Engineering, 120(4), 447–472.
14. Spantini, A., Bigoni, D., & Marzouk, Y. (2018). Inference via low-dimensional couplings. Journal of Machine Learning Research, 19, 1–71.
15. Stuart, A. M. (2010). Inverse problems: A Bayesian perspective. Acta Numerica, 19, 451–559.
16. Tarantola, A. (2005). Inverse problem theory and methods for model parameter estimation. Society for Industrial and Applied Mathematics.
17. Villani, C. (2008). Optimal transport: Old and new. Berlin: Springer.

Stochastic Material Modeling for Fatigue Damage Analysis

W. Zhang, A. Fau, U. Nackenhorst and R. Desmorat

Abstract Experimental observation of the evolution of structures under fatigue loading has shown largely scattered results in the literature. To represent these uncertainties, a stochastic damage model based on a random process is proposed. The kinetic continuum damage model is compared with some experimental data and other modelling approaches. The approach is investigated for two-dimensional structures computed using the finite element method. With this modeling approach, probabilistic fatigue damage information can be provided at any time instant and at any point of the structure.

1 Introduction

Due to uncertain mechanical material properties and external influences, engineering structures exhibit strongly random fatigue behaviour. This phenomenon can be observed on S-N curves as a scattering of the results [1], which may be due, for instance, to material uncertainties or external factors. For engineering applications, strongly scattered fatigue data and limited knowledge about high-cycle fatigue complicate the prediction of the lifetime of a structure in a deterministic framework. Therefore, reliability assessments based on stochastic approaches have been proposed more recently [2].

To quantify the loss of bearing capacity during fatigue loading, different approaches have been developed historically, such as empirical methods based on S-N curves, which require a large amount of experimental data [3, 4], post-processing of plasticity computations by energetic methods [5], or continuum damage mechanics (CDM) [6]. In CDM approaches, the loss of capacity is described by an internal variable defined at any point of the structure. This method, although being

computationally expensive, can simulate the evolution of the structure by time increments, for any loading case and any coupling with other physics [7]. Considering a kinetic damage model, i.e. a description of the evolution by time increments rather than cycle increments, the problem can become so expensive numerically that the computations turn unfeasible for very large numbers of loading cycles. Different strategies have been proposed in the literature to reduce the computational cost, using model order reduction [8, 9], or by not computing all the cycles explicitly but only some characteristic cycles, such as jump cycles [10, 11] or two-time-scale approaches [12, 13].

Compared to contributions in the deterministic framework, the number of works exploring stochastic damage computations remains relatively low, as this perspective faces two main challenges: the lack of experimental data on random damage growth, which may be an obstacle to the validation of proposed stochastic models, and the high computational cost. Therefore, efficient and robust numerical approaches including random contributions to simulate this phenomenon are in high demand.

Considering a stochastic differential equation (SDE), i.e. a differential equation in which at least one term is a stochastic process, the solution is also a stochastic process. SDEs have been used to model a large variety of phenomena, such as physical systems, stock prices, etc. [14]. Wiener processes require their own calculus rules; two main variants exist, the Itô stochastic calculus [15] and the Stratonovich stochastic calculus [16]. Most algorithms used for ordinary differential equations are not optimal for SDEs. Dedicated methods such as the Euler-Maruyama, Milstein and stochastic Runge-Kutta methods are more appropriate [17].

In the framework of CDM, some simulations including stochastic processes have been proposed. In [18, 19], not only the initial damage was assumed random, but also the growth rate of the process, by adding a white noise to the deterministic evolution directly in terms of the free energy. In [20] it was proposed to evaluate the damage evolution as the solution of a Langevin equation, i.e. without investigating the underlying physics, the damage evolution being supposed slow in comparison with the rapid changes of the microscopic quantities which cause it. A general framework was suggested in [21], where a stochastic variant of the Lemaitre model is exposed and the Itô and Stratonovich interpretations of the underlying stochastic differential equation are discussed; however, no numerical results were estimated therein. Other works have studied random damage in the context of micro-mechanical models. For instance, considering the micro-structure as a set of fiber bundles, the damage growth has been simulated as random from the uncertain behaviour of fiber cracks [22, 23]. In the context of crack propagation, a stochastic analysis of fatigue damage based on linear fracture mechanics was proposed in [24].

The novelty herein is to propose numerical simulations based on the finite element method (FEM) in which the damage evolution is represented not by a deterministic law but by a random process. In other words, instead of representing uncertainties through some random parameters, the usage of a random process is investigated. Therefore, the damage evolution rate turns out to be an SDE with drift and diffusion.
The drift term, which describes the mean of the stochastic process, is identical to the deterministic damage evolution rate. The stochastic diffusion determines the importance of the damage fluctuations. The challenges are to describe the damage evolution as a random process which accurately represents the experimental information, and to provide a numerical tool able to handle this random process and to incorporate it into a finite element computation while maintaining a reasonable computational cost. First numerical tests, implemented for two-dimensional cases based on a quasi-brittle damage law [25], are presented and discussed with respect to the modelling hypotheses and the numerical approach.

In Sect. 2, a brief introduction to the theoretical background of the deterministic quasi-brittle damage law is given. In Sect. 3, the stochastic description of the damage law as a random process is detailed, and the numerical scheme to solve it is introduced. Some numerical results are shown and analysed in Sect. 4.

2 Deterministic Modelling of Fatigue Damage

Materials such as concrete, ceramics or glass do not exhibit a preponderant plasticity behaviour. Therefore, they are generally modelled as brittle or quasi-brittle materials [25]. Here a quasi-brittle damage model is considered. Following the CDM theory [6], the effective stress σ_eff at any point of the structure can be expressed from the stress tensor σ and the damage variable D as

σ_eff = σ_D / (1 − D) + (⟨σ_H⟩ / (1 − D) − ⟨−σ_H⟩) I    (1)

with I the identity tensor, σ_H the hydrostatic stress and σ_D the deviatoric stress, such that σ = σ_H I + σ_D. The Macaulay brackets ⟨•⟩ denote the positive part of the quantity •, i.e. ⟨•⟩ = max(•, 0). In the case of isotropic damage, the damage is defined as a scalar variable.

2.1 Material Model

For a quasi-brittle material, plastic behaviour is neglected and the free energy function ψ reads

ρψ = G̃ ε_D : ε_D + (K/2) ((1 − D) ⟨tr ε⟩² + ⟨−tr ε⟩²)    (2)

where the linear strain tensor ε is decomposed into the hydrostatic strain ε_H = (1/3) tr ε and the deviatoric strain ε_D = ε − ε_H I. The bulk and shear moduli are denoted K and G, respectively, and G̃ = G (1 − D)^φ with φ a parameter of small value. From the free energy function, the stress tensor is defined as

σ = ρ ∂ψ/∂ε = 2G̃ ε_D + K ((1 − D) ⟨tr ε⟩ − ⟨−tr ε⟩) I    (3)

and the strain energy release rate Y as

Y = −ρ ∂ψ/∂D = G φ (1 − D)^{φ−1} ε_D : ε_D + (K/2) ⟨tr ε⟩²    (4)

The damage evolution law, i.e. the time derivative Ḋ of the damage variable D, is defined from the thermodynamic force Y as proposed in [25, 26]:

Ḋ = ∂D/∂t = ⟨(Y − Y_D)/S⟩^s ⟨Ẏ⟩    (5)

The damage evolution is parametrised by S and s, which are two material parameters. The threshold Y_D beyond which the damage increases is defined in terms of the energy density release rate. The damage increases as defined by Eq. (5) until it reaches the critical value D_c, for which macro-cracks develop. The interest of describing the damage evolution per time increment is that any kind of loading may be accurately simulated. However, compared with models based on a description of the damage evolution per cycle, the required computational effort is significantly larger.

2.2 Numerical Approach

The structure of interest is defined by its initial geometry B, and the problem is solved over the time domain T = [T_0, T_f], with T_0 and T_f the initial and final time instants, respectively. As the damage evolution is herein defined per time increment, the problem is solved consecutively for every time step. Using a Newton-Raphson scheme, the equilibrium is solved and the internal variables are updated in a staggered scheme. Because a quasi-brittle damage model is assumed, there is no increment of the plastic strain, and the consistent tangent operator D is defined [27] as

D = 2G̃ P + K ((1 − D) ⟨tr ε⟩/|tr ε| + ⟨−tr ε⟩/|tr ε|) I ⊗ I    (6)
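To make the constitutive update concrete, here is a hedged sketch of the damage law (5) integrated at a single material point for a given history of the energy release rate. The material constants are the SFRSCC0.5 values from Table 1; the cyclic Y history itself is a toy input, not data from the chapter.

```python
import numpy as np

# Hedged sketch: explicit integration of dD = <(Y - Y_D)/S>^s <dY>, Eq. (5).

S, s, Y_D, D_c = 1.51e3, 12.05, 4.5, 0.3          # Pa-based units (Table 1)

def macaulay(x):
    return np.maximum(x, 0.0)

def integrate_damage(Y):
    """Integrate the damage law over a sampled Y(t) history."""
    D = 0.0
    for i in range(1, len(Y)):
        dY = Y[i] - Y[i - 1]
        D += float(macaulay((Y[i] - Y_D) / S) ** s * macaulay(dY))
        if D >= D_c:                               # macro-crack initiation
            return D_c, i
    return D, len(Y) - 1

t = np.linspace(0.0, 1.0, 4001)
Y = 900.0 * (1.0 + np.sin(40.0 * np.pi * t)) / 2.0  # 20 toy loading cycles (Pa)
D, step = integrate_damage(Y)
print(f"D = {D:.4f} reached at step {step}")
```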

Fig. 1 Schematic representation of the jump cycle algorithm

To speed up the computation for cases involving large numbers of cycles, a jump-cycle process is used [28]. The idea is to avoid the computation of some cycles, as illustrated in Fig. 1. After computing cycle j, the evolution of damage during that cycle, denoted ΔD_j, is known. From that information, the number of cycles N_j^c which are skipped for the j-th jump is evaluated as

N_j^c = ΔD_max / (κ · max_{x_G} (ΔD(x_G)/ΔN)_j)    (7)

where κ is the jumping factor and max_{x_G} (ΔD(x_G)/ΔN)_j is the maximal increment of damage over all the Gauss points x_G during the final cycle before the j-th jump. Then, the evolution of the damage during the N_j^c jumped cycles is extrapolated to establish the initial value of the damage D_{j+1} for the next cycle j + 1:

D_{j+1}(x_G) = D_j(x_G) + (N_j^c + 1) × (ΔD(x_G)/ΔN)_j    (8)

The cycle j + 1 is then computed per time increment to evaluate the damage evolution ΔD_{j+1}, and so on until reaching the critical damage value or the maximum number of loading cycles. Thus, this process benefits from the accuracy of the time-increment simulation while allowing the damage evolution to be simulated over a number of cycles N that may be very large.

334

W. Zhang et al.

3.1 From a Deterministic to a Stochastic Process Instead of a damage process D : D × T → [0, 1] defined on the spatial domain D and temporal domain T by a deterministic evolution equation as Eq. (5), here a stochastic damage process D˜ is proposed D˜ : D × T × → S ∈ [0, 1]

(9)

with a probability space and S a space equipped with a measure. Thus, the damage evolution is described as a random phenomenon, by exploiting mathematical tools originally designed to describe random systems such as Brownian motion, or electronic systems [29]. Deterministic and random processes to represent the damage evolution over number of cycles N are illustrated in Fig. 2, it can be seen that the stochastic process can express different sample functions or sample paths. Fixing ω, ˜ ω0 ) is a realisation or a sample path of the stochastic process, i.e. a function of D(t, ˜ 0 , ω) is a random time depending on the parameter ω. On another side, fixing t, D(t variable depending on the parameter t0 .

3.2 Numerical Properties of the Damage Process Damage process is here assumed as a sample-continuous process, which means that every sample path is a continuous function [14]. The damage process is memoryless i.e. it satisfies the Markov postulate [29]. The conditional probability distribution of future state of the process depends only on the present state and not on the past events, which can be written as ∀ B ⊂ S

Fig. 2 Deterministic and random processes to describe damage evolution

Stochastic Material Modeling for Fatigue Damage Analysis

335

    P D˜ (tn+1 ) ∈ B| D˜ (tn ) , . . . , D˜ (t1 ) = P D˜ (tn+1 ) ∈ B| D˜ (tn ) ,

(10)

where P denotes the probability operator. This nice property allows to escape from the numerical challenge of intractability. To reach these conditions of Markov assumption and continuity, it is proposed to consider a diffusion process. It is assumed that the material continuity is kept by the stochastic modification of damage evolution, which allows all the theories defined under CDM assumptions to remain usable. A supplementary restriction is imposed to ensure the monotonic increasing property of the damage evolution as healing effects are not of interest here.

3.3 Proposed Diffusion Random Process Instead of considering a deterioration process based only on jump process such as a Gamma process representing the damage evolution per cycle for instance, the goal is to base the representation on a kinetic CDM model, i.e. to simulation damage evolution per time increment. Considering stochastic integral, the random damage may be defined as ˜ D(x, t, ω) = D0 (x) +



t

˜ h(x, s, D(x, s)) ds +

0



t

˜ g(x, s, D(x, s)) dWs , (11)

0

or as the solution of the following SDE ˜ ˜ ˜ d D(x, t, ω) = h(x, t, D(x, t)) dt + g(x, t, D(x, t)) dWt ,

(12)

where h and g are the drift term and diffusion terms, respectively. dWt denotes a Wiener process which is detailed afterwards. The initial damage field is denoted as D0 (x) and is considered as deterministic information.

3.4 Drift Term ˜ The drift h(x, t, D(t)) of the stochastic process is defined as the tendency term   1  ˜ ˜ E D (x, t + , ω) − D˜ (x, t, ω) | D˜ (x, t, ω) , (13) h(x, t, D(t)) = lim t→0 t

336

W. Zhang et al.

where E denotes the expected value operator [14]. It represents the evolution of the average value of the stochastic process, which would be identified as mean behavior in a traditional deterministic framework. Therefore, it depends on time and on the value of the random process at the current instant   Y (x, t, D(t)) − Y D s  ˙ ∂ D(x, t) ˜ = · Y . h(x, t, D(t)) = ∂t S

(14)

3.5 Diffusion Term The diffusion term, which represents the random fluctuation, is defined as   2 1 E D˜ (x, t + , ω) − D˜ (x, t, ω) | D˜ (x, t, ω) , t→0 t (15) and represents the instantaneous variance of the process [14]. As proposed in [21], the diffusion term can be assumed proportional to the drift term as ˜ g(x, t, D(t)) = lim

˜ ˜ g p (x, t, D(t)) = h(x, t, D(t)).

(16)

The damage process, as well as the random damage rate process, is not a stationary process [14]. However, it is herein proposed to assume a strongly stationary random process as noise, i.e. the probability distribution of the noise does not change when shifted in time. The random noise Ws is here considered as a Wiener process, the mean of which is denoted as μWs and auto-covariance Cov(Ws (t1 , ω), Ws (t2 , ω)) = |t2 − t1 |σξ2 , with ξ(t, ω) a random process which provides at every instant a Gaussian random number of standard deviation σξ . This random process follows Itô integration rule [15], because Riemann integration rules can no more be applied in the stochastic framework. The infinitesimal discretisation of the Wiener process reads √ dWt = ξ(t, ω) t.

(17)

The decrease of damage is prohibited by rejecting the non-acceptable realisations. In details, for the proportional diffusion case, a time step size dependent standard √ t deviation σξ = 3 is proposed to prevent at best martingale perturbation, by avoiding too many samples with damage decrease and so rejection. The evolution of the probability density function of the damage increment is illustrated by Fig. 3. It can be seen that both the mean and standard deviation of the damage increments are depending on the time, as the mean rises with time while the distribution becomes more flattened.

Stochastic Material Modeling for Fatigue Damage Analysis

337

Fig. 3 Example of probability density of the damage increment within a loading interval

3.6 Introduction of the Stochastic Damage Model in the Finite-Element Framework The numerical algorithm including random damage process is detailed in algorithm 1. The stochastic damage process is computed for N p samples. For each of them, from the initial fields of the internal variables, the mechanical problem is solved for the whole structure and the evolution and state equations are evaluated as detailed in Sect. 2.2. The iterative process is continued until the error criterion given by

r

err = ext (18)

f

reaches the stopping value tol, where the residual r reads r = f int − f ext ,

(19)

with f ext and f int the external and internal forces. To save effort due to the size of the temporal problem, a jump-cycle procedure is used [28]. For each sample path, the number of cycles which are skipped N cj for the j-th cycle is evaluated as previously introduced by Eq. (7). Finally, the outcome is analysed, for instance, as cumulative distribution functions FD(x ˜ a ,ta ,ω) . The probability of failure P f at t is defined as the probability that the maximum damage value in the structure has reached the critical damage value, i.e.  ˜ P f (t) = P max D(x, t, ω)  Dc . 

x∈D

(20)

338

W. Zhang et al.

Algorithm 1 Newton-Raphson scheme for stochastic damage computation using FE Input: Ft+1 external load at t + 1, ξt+1 random noise, material states at t and material parameters Output: D˜ t+1 1: initial guess for t + 1      (k)   2: while rt+1  Ft+1  ≤ tol do 3: k = k + 1 compute residual

(k=0)

ut+1

(k=0)

= ut , Kt+1

(k)

(k)

= Kt

(k)

rt+1 = Kt+1 ut+1 − Ft+1     ext ft+1

int (k) ft+1

4:

compute displacement increment (k) (k) (k) δut+1 = rt+1 /Kt+1

5:

update displacement

(k)

(k−1)

(k)

ut+1 = ut+1 + δut+1 6:

compute strain

(k)

(k)

εt+1 = But+1 7:

compute damage driven force (k)

Yt+1 = 8:

1 (k) (k) : C : εt+1 ε 2 t+1

compute increment of damage driven force (k)

(k)

δYt+1 = Yt+1 − Yt 9:

10:

compute stochastic damage increment using Eq. (12)   (k) (k) (k) δ D˜ t+1 Yt+1 , δYt+1 update tangent operator using Eq. (6)   (k) Dkt+1 δ D˜ t+1

11:

update global stiffness matrix (k)

(k+1)

Kt+1 = Kt+1 12:

goto line 3



Dkt+1



Stochastic Material Modeling for Fatigue Damage Analysis

339
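The following is a heavily simplified, single-degree-of-freedom illustration of the staggered structure of Algorithm 1, with a toy bar element in place of the FE fields; constitutive constants are taken from Table 1 where available, the rest are invented, and the noise term is a loose stand-in for Eq. (12) with g = h.

```python
import numpy as np

# Hedged 1-DOF illustration of Algorithm 1's staggered Newton loop.

def staggered_step(u, D, Y_old, F_ext, rng,
                   E0=42e9, A=0.01, L=1.0, S=1.51e3, s=12.05, tol=1e-8):
    for _ in range(50):                          # Newton loop, lines 2-12
        K = (1.0 - D) * E0 * A / L               # damaged stiffness
        r = K * u - F_ext                        # residual, line 3
        if abs(r) <= tol * max(abs(F_ext), 1.0):
            break
        u -= r / K                               # lines 4-5
        eps = u / L                              # strain, line 6
        Y = 0.5 * E0 * eps ** 2                  # driving force, line 7
        dY, Y_old = Y - Y_old, Y                 # line 8
        h = max((Y - 4.5) / S, 0.0) ** s * max(dY, 0.0)   # drift, Eq. (14)
        xi = rng.normal(0.0, 1.0 / 3.0)
        dD = max(h * (1.0 + xi), 0.0)            # stochastic increment, line 9
        D = min(D + dD, 0.3)                     # cap at D_c, lines 10-11
    return u, D, Y_old

rng = np.random.default_rng(6)
u = D = Y_old = 0.0
for F in np.linspace(0.0, 1.0e6, 100):           # quasi-static load ramp
    u, D, Y_old = staggered_step(u, D, Y_old, F, rng)
print(f"u = {u:.3e} m, D = {D:.3f}")
```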

4 Numerical Example

In order to verify the applicability of the stochastic damage evolution law for structural fatigue analysis, a two-dimensional bending test is simulated with the FEM associated with the jump-cycle algorithm previously introduced.

The structure is a beam of length L = 0.5 m and cross-sectional area A = 0.1 m × 0.1 m, as illustrated in Fig. 4. The beam is subjected to a four-point bending test, which is simplified as a two-dimensional plane-stress problem with a thickness of t = 0.1 m. The geometry is initially discretised using 320 uniform four-node quadrilateral elements, corresponding to an element size ℓ_ele = 12.5 mm. The two supporting points are located at the bottom, separated by the distance L_s = 0.45 m. The nodal displacement at the left supporting point is fixed in both directions (U_x = U_y = 0), whereas the other supporting point has only its vertical displacement fixed, namely U_y = 0. Two concentrated loads F are applied on the top, with a distance L_i = 0.1 m between them. Both have the same amplitude and frequency, such that the system remains in quasi-static conditions. The maximum load amplitude is F_max = −11.5 kN in the y direction and the load ratio is R = F_min/F_max = 0.1. Two types of load, saw-tooth and sinusoidal, each having initially 100 discretisation steps per cycle, are considered here; see Fig. 5.

Fig. 4 Geometry of the bending test

Fig. 5 Saw-tooth and sinusoidal loads

Table 1 Material parameters for a reinforced concrete with 0.5% steel-fiber (SFRSCC0.5), calibrated from [30]

E        ν      S          s        Y_D      D_c
42 GPa   0.2    1.51 kPa   12.05    4.5 Pa   0.3

Fig. 6 Damage distribution after 10⁵ cycles

The material is a reinforced concrete with 0.5% steel fiber (SFRSCC0.5), modelled here as a homogeneous material, the properties of which, summarised in Table 1, have been calibrated from a four-point fatigue bending test [30]. An example of the damage distribution after 10⁵ cycles is given in Fig. 6.

4.1 Influence of the Numerical Parameters on the Stochastic Results

Let us consider a proportional diffusion with Gaussian random noise ξ ∼ G(0, (√Δt/3)²) to investigate the effect of the numerical parameters on the stochastic results.

4.1.1 Effect of the Jump Cycle Procedure

For deterministic applications, the numerical results show an acceptable estimation using a jumping factor κ, as introduced in Eq. (7), with a minimal value of 100. Considering the stochastic damage process, the effect of the jumping factor κ on the estimation of the mean and the standard deviation of the fatigue life is shown in Fig. 7. The results have been averaged over 1000 sample paths. The influence of κ is much more drastic on the standard deviation than on the mean value of the fatigue life. To guarantee an error of less than 1%, a value of κ = 100 appears acceptable with this damage model.

4.1.2 Effect of the Number of Sample Paths

The influence of the number of samples N_p on the accuracy of the estimation of the mean value and the standard deviation of the fatigue life is shown in Fig. 8.


Fig. 7 Influence of κ on fatigue life estimation

Fig. 8 Influence of the number of realizations on fatigue life estimation

It can be observed that the estimation of the mean converges quite quickly with respect to the number of sample paths. However, an accurate estimation of the standard deviation requires many more realizations. The value N_p = 200 appears to be a good compromise for an acceptable estimation with the stochastic damage model.

4.2 Evolution of Stochastic Fatigue Damage

Considering the set of parameters ℓ_ele = 12.5 mm, Δt = 2.5 × 10⁻⁴, κ = 100 and N_p = 200, the random evolution of damage under the four-point bending test is explored. As expected, the element at the middle of the bottom of the beam is the element of most interest, as it is critical for failure under four-point bending.


Fig. 9 Damage evolution including statistical information

Fig. 10 Statistics on fatigue life

In Fig. 9, the random damage evolution in terms of mean and confidence interval, averaged over 1000 sample paths, is shown. It can be seen that the confidence interval increases with the number of cycles. The statistics of the fatigue life, denoted N_f, are represented in Fig. 10, considering the time elapsed before reaching the critical damage value D_c = 0.3 as the structure lifetime. The fatigue life histogram, shown in Fig. 10a, is similar to a normal distribution. For a given number of cycles, the probability of failure P_f can be approximated from the CDF of N_f, as illustrated in Fig. 10b.

Under a given load level, and for increasing values of D, the evolution of the standard deviation S with respect to the mean number of cycles E[N] is illustrated in Fig. 11. A sub-linear dependence is observed, which is typical of the Wiener process.
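As a hedged sketch of this post-processing, the empirical CDF of the fatigue lives directly yields the probability of failure P_f(N) = P(N_f ≤ N); the lives below are synthetic, not the chapter's data.

```python
import numpy as np

# Hedged sketch: estimate P_f from a sample of simulated fatigue lives.

rng = np.random.default_rng(5)
n_f = rng.normal(1.0e5, 1.5e4, size=200)        # toy fatigue lives, 200 paths

def prob_failure(n_cycles, lives):
    """Empirical P_f at n_cycles from the sample of fatigue lives."""
    return np.count_nonzero(lives <= n_cycles) / lives.size

for n in (7e4, 1e5, 1.3e5):
    print(f"P_f({n:.0e}) ~ {prob_failure(n, n_f):.2f}")
```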


Fig. 11 Evolution of standard deviation S(N ) versus mean E(N ), for a given load level

Table 2 Expectation and standard deviation of fatigue life for five different load levels, using the model proposed by Oh [31] with parameters obtained from [30]

Ls        0.9     0.85     0.8      0.75      0.7
E(N_f)    2742    10,873   46,869   222,010   1,170,810
S(N_f)    2042    8099     34,912   165,372   872,122

4.3 Virtual S-N Curves

The goal here is to investigate the ability of the previously proposed random model to accurately reproduce the scattering of the empirical results in terms of S-N curves. It has been established in the literature [31] that the distribution of the fatigue life N_f observed in S-N curves can be correctly described by a Weibull distribution. The statistical experimental information E(N_f) and S(N_f) obtained from the S-N curves by Goel and Singh [30] is detailed in Table 2. Five load levels Ls = f_M/f_r, with f_M and f_r the maximum fatigue stress and the static flexural strength respectively, have been investigated. The visualization of these results in Fig. 12 shows clearly that the standard deviation increases drastically with the mean number of surviving cycles. As outlined by the authors, the experimental observations are largely scattered, which is usual in experimental fatigue results for fiber-reinforced concrete.

In order to reproduce these empirical fatigue results, the same five load levels are simulated using the suggested stochastic model. The mean virtual S-N curve is plotted in Fig. 13. It can be observed that the stochastic damage model using the normal noise ξ₁ ∼ G(0, (√Δt/3)²) leads to a mean estimation of the fatigue lifetime which matches closely the empirical fatigue data provided by Goel and Singh [30].

Fig. 12 Empirical fatigue information from [30]

Fig. 13 Virtual S-N curves using different Gaussian random noises

However, when investigating the standard deviation of the simulated fatigue life S(N_f), as plotted in Fig. 14, it appears that the simulated variance considering the Gaussian noise ξ₁ is much smaller than the empirical variance. By increasing the variance of the Gaussian noise from ξ₁ ∼ G(0, (√Δt/3)²) to ξ₂ ∼ G(0, (10√Δt/3)²), it can be seen in Fig. 14 that the standard deviation of the fatigue life gets closer to the empirical results. However, this change of variance perturbs the mean estimation, as visualized in Fig. 13: the simulated mean lifetime considering ξ₂ is no longer in accordance with the empirical mean results. Indeed, since the stochastic damage evolution has to be monotonically increasing at every time step, any sample leading to a decrease of the damage is refused by the algorithm. Therefore, by increasing the variance of the random noise, a significant number of Gaussian noise samples leads to damage increments filtered out by the algorithm, which perturbs the mean of the stochastic results due to the non-zero mean of the resulting numerical random noise. Thus, using Gaussian random noise while refusing any local decrease of the damage, the fatigue mean can be reproduced, but not the large scattering of the S-N results.

Fig. 14 Mean versus standard deviation of fatigue life using Gaussian random noise with different intensities, in comparison with the reference

Fig. 15 Mean versus standard deviation of fatigue life using Weibull random noise with different intensities, in comparison with the reference

On the contrary, after shifting the Weibull random noise to zero-mean variables, it delivers better results. As demonstrated by Fig. 15, increasing the intensity of the randomness allows the fatigue uncertainty to be enlarged while keeping the fatigue mean. The simulation study also shows that the variance of the Weibull random noise cannot be increased without limit: since the mean of the Weibull random noise is requested to be √Δt ≈ 0.0158, it is difficult to generate accurate Weibull random numbers with a large difference between the mean and the desired variance.


5 Summary

In this article, a framework for stochastic damage evolution based on a stochastic differential equation has been proposed. The uncertain damage evolution is formalised as a stochastic process under three assumptions. First, the mean of the stochastic process, which is controlled by the drift, is identical to the deterministic damage evolution. Second, the diffusion, which governs the randomness of the stochastic process, is correlated to the drift term. Third, the stochastic process representing the random damage evolution must be consistent with all the thermodynamic constraints. The applicability of this approach is verified on a quasi-brittle material using a four-point flexural fatigue test. Combined with the jump-cycle algorithm, the high-cycle fatigue damage evolution can be obtained by finite element analysis meeting the demands of both efficiency and accuracy. A statistical analysis of a large number of sample paths of the stochastic fatigue damage evolution delivers a probabilistic description of material failure. With calibrated material parameters, the mean fatigue life under different load levels can be reproduced in accordance with the empirical data.

Due to the lack of experimental samples, some difficulties in calibrating the random noise remain. This causes a considerable bias in terms of fatigue uncertainties between the simulation and the reference. Since the stochastic damage process is restricted by the principles of continuum damage mechanics, Gaussian random noise with a very large variance faces difficulties in reproducing the S-N diagram. Other random distributions, e.g. Weibull or log-normal, are applicable to solve this issue; however, alternatives to Gaussian noise may not be mathematically consistent [14, 21].

Acknowledgements Financial support from the DFG through the international research training group on "Virtual Materials and Structures and their Validation" (IRTG 1627) and from the French-German University through the doctoral college "Sophisticated Numerical and Testing Approaches" is acknowledged.

References

1. Schijve, J. (2001). Fatigue of structures and materials. Kluwer Academic Publishers.
2. Nakagawa, T. (2011). Stochastic processes with applications to reliability theory. Berlin: Springer.
3. Pavlou, D. G. (2018). The theory of S-N fatigue damage envelope: Generalization of linear, double-linear, and non-linear fatigue damage models. International Journal of Fatigue.
4. Ortega, J. J., Ruiz, G., Yu, R. C., Afanador-García, N., Tarifa, M., Poveda, E., et al. (2018). Number of tests and corresponding error in concrete fatigue. International Journal of Fatigue, 116, 210–219.
5. Desmorat, R. (2002). Fast estimation of localized plasticity and damage by energetic methods. International Journal of Solids and Structures, 39(12), 3289–3310.
6. Lemaitre, J. (1996). A course on damage mechanics. Berlin: Springer.
7. Desmorat, R., Kane, A., Seyedi, M., & Sermage, J. P. (2007). Two scale damage model and related numerical issues for thermo-mechanical high cycle fatigue. European Journal of Mechanics A/Solids, 26, 909–935.
8. Bhattacharyya, M., Fau, A., Nackenhorst, U., Néron, D., & Ladevèze, P. (2018). A LATIN-based model reduction approach for the simulation of cycling damage. Computational Mechanics, 62(4), 725–743.
9. Bhattacharyya, M., Fau, A., Nackenhorst, U., Néron, D., & Ladevèze, P. (2018). A model reduction technique in space and time for fatigue simulation. In Multiscale modeling of heterogeneous structures (pp. 183–203). Springer International Publishing.
10. Lemaitre, J., & Doghri, I. (1994). Damage 90: A post-processor for crack initiation. Computer Methods in Applied Mechanics and Engineering, 115, 197–232.
11. Van Paepegem, W., Degrieck, J., & De Baets, P. (2001). Finite element approach for modelling fatigue damage in fibre-reinforced composite materials. Composites: Part B, 32, 575–588.
12. Bhattacharyya, M., Fau, A., Nackenhorst, U., Néron, D., & Ladevèze, P. (2018). A multi-temporal scale model reduction approach for the computation of fatigue damage. Computer Methods in Applied Mechanics and Engineering, 340, 630–656.
13. Bhattacharyya, M., Fau, A., Desmorat, R., Alameddin, S., Néron, D., Ladevèze, P., & Nackenhorst, U. A kinetic two-scale damage model for high-cycle fatigue simulation using multi-temporal LATIN framework. European Journal of Mechanics A/Solids (in press).
14. Sobczyk, K. (1991). Stochastic differential equations. Berlin: Springer.
15. Itô, K. (1951). Multiple Wiener integral. Journal of the Mathematical Society of Japan, 3(1), 157–169.
16. Stratonovich, R. L. (1966). A new representation for stochastic integrals and equations. SIAM Journal on Control, 4(2), 362–371.
17. Kloeden, P. E., & Platen, E. (1992). Numerical solution of stochastic differential equations. Berlin: Springer.
18. Bhattacharya, B. (1997). A damage mechanics based approach to structural deterioration and reliability.
19. Bhattacharya, B., & Ellingwood, B. (1998). Continuum damage mechanics-based model of stochastic damage growth. Journal of Engineering Mechanics, 124(9), 1000–1009.
20. Silberschmidt, V. V. (1998). Dynamics of stochastic damage evolution. International Journal of Damage Mechanics, 7(1), 84–98.
21. Woo, C. W., & Li, D. L. (1992). A general stochastic dynamic model of continuum damage mechanics. International Journal of Solids and Structures, 29(23), 2921–2932.
22. Kandarpa, S., Kirkner, D. J., & Spencer, B. F. (1996). Stochastic damage model for brittle materials subjected to monotonic loading. Journal of Engineering Mechanics, 122(8), 788–795.
23. Li, J., & Ren, X. (2009). Stochastic damage model for concrete based on energy equivalent strain. International Journal of Solids and Structures, 4(2), 362–371.
24. Sobczyk, K., & Spencer, B. F. (1992). Random fatigue—From data to theory. Cambridge: Academic Press.
25. Lemaitre, J., & Desmorat, R. (2005). Engineering damage mechanics: Ductile, creep, fatigue and brittle failures. Berlin: Springer.
26. Lemaitre, J. (1987). Continuum damage mechanics theory and application, chapter Formulation and identification of damage kinetic constitutive equations (pp. 37–89). Vienna: Springer.
27. de Souza Neto, E. A., Perić, D., & Owen, D. R. J. (2008). Computational methods for plasticity. Wiley.
28. Benallal, A., Billardon, R., & Doghri, I. (1988). An integration algorithm and the corresponding consistent tangent operator for fully coupled elastoplastic and damage equations. International Journal for Numerical Methods in Biomechanical Engineering, 4(6), 731–740.
29. Van Kampen, N. G. (1992). Stochastic processes in physics and chemistry. Elsevier Science.
30. Goel, S., & Singh, S. P. (2014). Fatigue performance of plain and steel fibre reinforced self compacting concrete using S-N relationship. Engineering Structures, 74, 65–73.
31. Oh, B. H. (1986). Fatigue analysis of plain concrete in flexure. Journal of Structural Engineering, 112(2), 273–288.